Abstract
The previous chapters focused on a system’s ability to reject perturbations and to remain stable with respect to uncertainties. This chapter focuses on a system’s ability to track external changes in the environment or changes in the system’s desired setpoint.
The previous chapters on regulation and stabilization ignored the reference input, r. In those cases, we focused on a system’s ability to reject perturbations and to remain stable with respect to uncertainties. However, a system’s performance often depends strongly on its ability to track external environmental or reference signals.
To study tracking of a reference input, let us return to the basic feedback loop structure in Fig. 2.1c, shown again in Fig. 8.1. Good tracking performance means minimizing the error, \(e=r-y\), the difference between the reference input and the system output.
Typically, we can reduce tracking error by increasing the control signal, u, which increases the speed at which the system changes its output to be closer to the input. However, in a real system, a larger control signal requires more energy. Thus, we must consider the tradeoff between minimizing the error and reducing the cost of control.
I previously introduced, in Eq. 5.1, a cost function that combines the control and error signals,
\[
\mathcal{J}=\int_0^{T}\left(u(t)^2+\rho^2\,e(t)^2\right)\mathrm{d}t,
\tag{8.1}
\]
in which u(t) and e(t) are functions of time, and \(\rho \) is a weighting for the relative importance of the error signal relative to the control signal.
I noted in Eq. 5.2 that the square of the \(\mathcal {H}_2\) norm is equal to the energy of a signal, for example,
\[
\Vert e\Vert_2^2=\int_0^{\infty}e(t)^2\,\mathrm{d}t.
\]
In this chapter, we consider reference signals that change over time. A system typically cannot track a changing reference perfectly, so the error does not go to zero and the total energy of the error signal is infinite. For infinite-energy signals, the \(\mathcal {H}_2\) norm is infinite and therefore uninformative. Instead, we may consider the average of the squared signal per unit time, which is the power, or we may analyze the error over a finite time period, as in Eq. 8.1.
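To make the distinction between energy and power concrete, a unit-amplitude sine wave has infinite energy over an infinite horizon but finite power of 1/2. A minimal numerical sketch (Python here, for illustration):

```python
import math

def signal_power(f, T, n=100_000):
    """Average of f(t)^2 per unit time over [0, T] (trapezoidal rule)."""
    dt = T / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(i * dt) ** 2
    return total * dt / T

# A unit-amplitude sine has infinite energy over an infinite horizon,
# but its average power converges to 1/2.
p = signal_power(math.sin, 2 * math.pi * 100)
print(round(p, 4))  # close to 0.5
```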
To analyze particular problems, we begin by expressing the transfer function for the error from Eq. 3.5 as
\[
E(s)=\frac{R(s)}{1+C(s)P(s)}.
\]
We may write the transfer function for the control signal as
\[
U(s)=C(s)E(s)=\frac{C(s)R(s)}{1+C(s)P(s)}.
\]
These equations express the key tradeoff between the error signal and the control signal. A controller, C, that outputs a large control signal reduces the error, E, and increases the control signal, U. The following example illustrates this tradeoff and the potential consequences for instability.
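A small numerical sketch of this tradeoff, using an illustrative first-order process \(P(s)=1/(s+1)\) and proportional controllers \(C=k\) (these choices are hypothetical, not the system analyzed below):

```python
# Frequency-response gains |E/R| and |U/R| for the basic feedback loop,
# using an illustrative process P(s) = 1/(s+1) and proportional
# controllers C(s) = k. Larger k shrinks the error gain but grows the
# control-signal gain at a given frequency.

def gains(k, omega):
    s = 1j * omega
    P = 1 / (s + 1)
    L = k * P                      # open-loop transfer function C*P
    E = 1 / (1 + L)                # error per unit reference input
    U = k * E                      # control signal per unit reference input
    return abs(E), abs(U)

for k in (1, 10, 100):
    e_gain, u_gain = gains(k, omega=1.0)
    print(f"k={k:3d}  |E|={e_gain:.3f}  |U|={u_gain:.2f}")
```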
1 Varying Input Frequencies
To analyze the cost over a particular time period, as in Eq. 8.1, we must express the transfer functions as differential equations that describe change over time. We can use the basic relation between transfer functions in Eq. 2.5 and differential equations in Eq. 2.6.
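For example, a first-order transfer function \(Y(s)/U(s)=1/(s+1)\) corresponds to the differential equation \(\dot{y}+y=u\), which we can integrate directly (an illustrative sketch, not the chapter's process):

```python
import math

# The transfer function Y(s)/U(s) = 1/(s+1) corresponds to the
# differential equation dy/dt + y = u. Forward-Euler integration of the
# step response; the analytic answer is y(t) = 1 - exp(-t).

def step_response(t_end, dt=1e-3):
    y, t = 0.0, 0.0
    while t < t_end:
        u = 1.0                    # unit step input
        y += dt * (u - y)          # dy/dt = u - y
        t += dt
    return y

y3 = step_response(3.0)
print(round(y3, 3))                # analytic value: 1 - exp(-3) ≈ 0.950
```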
In this example, I use the process in Eq. 4.1 that I analyzed in earlier chapters,
\[
P(s)=\frac{1}{(s+0.1)(s+10)}.
\]
I use the controller
Our goal is to find a controller of this form that minimizes the cost function in Eq. 8.1.
I use a reference signal that is the sum of three sine waves with frequencies \(\omega _i=\left( \psi ^{-1}, 1,\psi \right) \). I weight each frequency by \(\kappa _i=\left( 1,1,0.2\right) \), such that the high frequency may be considered a rapid, relatively low-amplitude disturbance. Thus,
\[
r(t)=\sum_{i=1}^{3}\kappa_i\sin(\omega_i t),
\]
in which each of the three terms in the sum expresses a sine wave with frequency \(\omega _i\). Here, I use \(\psi =10\).
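The reference signal can be written down directly; a minimal sketch (Python, for illustration):

```python
import math

# The reference signal described above: a sum of three sine waves with
# frequencies (1/psi, 1, psi) and weights (1, 1, 0.2), with psi = 10.
psi = 10.0
omegas = (1 / psi, 1.0, psi)
kappas = (1.0, 1.0, 0.2)

def r(t):
    return sum(k * math.sin(w * t) for k, w in zip(kappas, omegas))

# At t = 0 the signal is zero; the high-frequency term contributes at
# most 0.2 to the amplitude at any time.
print(round(r(0.0), 6), round(r(1.0), 4))
```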
Often, low-frequency signals represent true changes in the external environment. By contrast, high-frequency inputs represent noise or signals that change too rapidly to track effectively. Thus, we may wish to optimize the system with respect to low-frequency inputs and to ignore high-frequency inputs.
We can accomplish frequency weighting by using a filtered error signal in the cost function, \(E_W(s)=R(s)W(s)-Y(s)\), for a weighting function W that passes low frequencies and reduces the gain of high frequencies. The weighted error signal as a function of time is \(e_w(t)\).
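To illustrate the idea, consider a hypothetical first-order low-pass weighting \(W(s)=1/(1+s/\omega_c)\) with corner frequency \(\omega_c\) placed between the tracked frequencies and the noise frequency. This is not the chapter's particular \(W\), just a sketch of how such a filter discounts high frequencies:

```python
# Gain of a hypothetical first-order low-pass weighting
# W(s) = 1/(1 + s/w_c), with corner frequency w_c = 2 chosen between
# the tracked frequencies (0.1, 1) and the noise frequency (10).

def W_gain(omega, w_c=2.0):
    return abs(1 / (1 + 1j * omega / w_c))

for omega in (0.1, 1.0, 10.0):
    print(f"omega={omega:5.1f}  |W|={W_gain(omega):.3f}")
```

The two low frequencies pass nearly unchanged, while the gain at the high frequency is strongly reduced.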
In our example, the function
will reduce the relative weighting of the high-frequency input at frequency \(\psi \). I use the filtered error signal, \(e_w\), for the cost function in Eq. 8.1, yielding
\[
\mathcal{J}=\int_0^{T}\left(u(t)^2+\rho^2\,e_w(t)^2\right)\mathrm{d}t.
\tag{8.5}
\]
The gold curve in Fig. 8.2 shows the environmental reference signal, r, for the associated transfer function, R(s). The blue curve shows the filtered reference signal, \(r_w\), for the filtered system, R(s)W(s). The filtered curve removes the high-frequency noise of the reference signal and closely matches the fluctuations from the two lower frequency sine wave inputs.
Figure 8.3 illustrates the tradeoff between the tracking performance and the cost of the control signal energy to drive the system. The cost function in Eq. 8.5 describes the tradeoff between tracking, measured by the squared error between the filtered reference signal and the system output, \(e_w^2\), and the control signal energy, \(u^2\).
The parameter \(\rho \) sets the relative balance between these opposing costs. A higher \(\rho \) value favors closer tracking and smaller error because a high value of \(\rho \) puts less weight on the cost of the control signal. With a lower cost for control, the controller can output a stronger signal to drive the system toward a closer match with the target reference signal.
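A static toy version makes the role of \(\rho \) explicit. For a memoryless process \(y=pu\) with error \(e=r-pu\) and cost \(\mathcal{J}=u^2+\rho^2e^2\), setting \(d\mathcal{J}/du=0\) gives \(u^*=\rho^2pr/(1+\rho^2p^2)\): higher \(\rho \) drives the error down at the price of a larger control signal. This is an illustrative calculation, not the chapter's dynamic optimization:

```python
# Static toy version of the rho tradeoff: output y = p*u, error
# e = r - y, cost J = u^2 + rho^2 * e^2. Setting dJ/du = 0 gives
# u* = rho^2 * p * r / (1 + rho^2 * p^2).

def optimum(p, r, rho):
    u = rho**2 * p * r / (1 + rho**2 * p**2)
    e = r - p * u
    return u, e

for rho in (1, 10, 100):
    u, e = optimum(p=1.0, r=1.0, rho=rho)
    print(f"rho={rho:3d}  u={u:.4f}  e={e:.4f}")
```

As \(\rho \rightarrow \infty \), the optimum approaches perfect tracking, \(u^*\rightarrow r/p\); as \(\rho \rightarrow 0\), the controller shuts off.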
2 Stability Margins
Minimizing a quadratic cost function or an \(\mathcal {H}_2\) norm may lead to a poor stability margin. For example, close tracking of a reference signal may require a large control signal from the controller. Such high gain feedback creates rapidly responding system dynamics, which can be sensitive to uncertainties.
In Fig. 8.3, the stability margins for the three rows associated with \(\rho =(1,10,100)\) are \(b_{P,C}=(0.285,0.023,0.038)\). A robust stability margin typically requires a value greater than approximately 1/3, or perhaps 1/4.
In this case, the system associated with \(\rho =1\) has a reasonable stability margin, whereas the systems associated with higher \(\rho \) have very poor stability margins. The poor stability margins suggest that those systems could easily be destabilized by perturbations of the underlying process or controller dynamics.
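For SISO loops, the margin can be estimated by a frequency sweep. The sketch below assumes the SISO form \(b_{P,C}=\inf_\omega |1+C(j\omega)P(j\omega)|/\sqrt{(1+|P(j\omega)|^2)(1+|C(j\omega)|^2)}\) and the process \(P(s)=1/((s+0.1)(s+10))\); the proportional controllers \(C=k\) are illustrative, not the optimized controllers of Fig. 8.3:

```python
import math

# Generalized stability margin for a SISO loop, swept numerically over
# frequency, using the assumed SISO formula
#   b = inf_w |1 + C(jw)P(jw)| / sqrt((1 + |P|^2) * (1 + |C|^2)).
# Higher controller gain k lowers the margin.

def margin(C, n=2000):
    b = float("inf")
    for i in range(n):
        w = 10 ** (-3 + 6 * i / (n - 1))      # 1e-3 ... 1e3 rad/s
        s = 1j * w
        P = 1 / ((s + 0.1) * (s + 10))
        Cs = C(s)
        num = abs(1 + Cs * P)
        den = math.sqrt((1 + abs(P) ** 2) * (1 + abs(Cs) ** 2))
        b = min(b, num / den)
    return b

for k in (1, 10, 100):
    print(f"k={k:3d}  b={margin(lambda s, k=k: k):.3f}")
```

The sweep shows the qualitative pattern described above: stronger feedback gain trades tracking performance against robustness.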
We could minimize the cost function subject to a constraint on the lower bound of the stability margin. However, numerical minimization for that problem can be challenging. See the supplemental Mathematica code for an example.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2018 The Author(s)
Frank, S.A. (2018). Tracking. In: Control Theory Tutorial. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-91707-8_8
Print ISBN: 978-3-319-91706-1
Online ISBN: 978-3-319-91707-8