Abstract
Earlier chapters focused on the inputs to a system and the outputs produced by that system. This chapter considers the internal dynamics of the system. For example, in the regulation of an organism’s body temperature, we could model performance and cost in terms of the system’s body temperature output. Alternatively, the internal dynamics may include the burning of stored energy, the rise and fall of various signaling molecules, the dilation of blood vessels, and so on.
A transfer function corresponds to a time-invariant, linear system of ordinary differential equations. In an earlier chapter, I showed the general form of a transfer function in Eq. 2.5 and the underlying differential equations in Eq. 2.6.
For example, the transfer function \(P(s)=1/(s+a)\) with input u and output y corresponds to the differential equation \(\dot{x}=-ax+u\), with output \(y=x\). Here, x is the internal state of the process. Models that work directly with internal states are called state-space models.
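The equivalence between the transfer function and its state-space form can be checked numerically. The chapter's supplemental code is in Mathematica; the following minimal Python sketch (scipy is my substitution, and \(a=1\) is an arbitrary choice) compares the step responses of \(P(s)=1/(s+a)\) and of \(\dot{x}=-ax+u\), \(y=x\).

```python
import numpy as np
from scipy import signal

a = 1.0

# Transfer function P(s) = 1/(s + a)
tf = signal.TransferFunction([1.0], [1.0, a])

# Equivalent state-space model: xdot = -a x + u, output y = x
ss = signal.StateSpace([[-a]], [[1.0]], [[1.0]], [[0.0]])

# Step responses of the two representations should match
t = np.linspace(0.0, 5.0, 200)
_, y_tf = signal.step(tf, T=t)
_, y_ss = signal.step(ss, T=t)

print(np.max(np.abs(y_tf - y_ss)))  # essentially zero
```

The two responses agree to numerical precision, because the state-space model is just the transfer function written out in terms of its internal state.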
Transfer functions provide significant conceptual and analytical advantages. For example, the multiplication of transfer functions and the simple rules for creating feedback loops allow easy creation of complex process cascades. With regard to system response, a Bode plot summarizes many aspects in a simple, visual way.
However, it often makes sense to analyze the underlying states directly. Consider, for example, the regulation of an organism’s body temperature. We could model performance and cost in terms of body temperature. Alternatively, the underlying states may include the burning of stored energy, the rise and fall of various signaling molecules, the dilation of blood vessels, and so on.
Direct analysis of those internal states provides advantages. The individual states may have associated costs, which we could study directly in our cost function. We could consider the regulatory control of the individual states rather than temperature because temperature is an aggregate outcome of the underlying states. For example, each state could be regulated through feedback, in which the feedback into one state may depend on the values of all of the states, allowing more refined control of costs and performance.
When we use a state-space analysis, we do not have to give up all of the tools of frequency analysis that we developed for transfer functions. For example, we can consider the response of a system to different input frequencies.
State-space models can also describe time-varying, nonlinear dynamics. The response of a nonlinear system will change with its underlying state, whereas transfer function systems have a constant frequency response.
1 Regulation Example
In the prior chapter on regulation, I analyzed the process in Eq. 6.8 as

$$P(s)=\frac{1}{s^2+\alpha s+\beta},$$

with \(\alpha =0.1\) and \(\beta =1\). This process has a resonance peak near \(\omega =1\). The state-space model for this process is

$$\dot{x}_1=x_2$$
$$\dot{x}_2=-\beta x_1-\alpha x_2+u$$
$$y=x_1,$$

in which the dynamics are equivalent to a second-order differential equation, \(\ddot{x}+\alpha \dot{x}+\beta x=u\), with \(y=x\).
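The resonance peak near \(\omega =1\) can be confirmed from the state-space matrices. A short Python sketch (scipy is my substitution for the chapter's Mathematica code) computes the frequency response of the model with \(\alpha =0.1\), \(\beta =1\).

```python
import numpy as np
from scipy import signal

alpha, beta = 0.1, 1.0

# State-space form of xddot + alpha*xdot + beta*x = u, output y = x
A = [[0.0, 1.0], [-beta, -alpha]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

# Magnitude of the frequency response over a band around omega = 1
w = np.linspace(0.5, 1.5, 2001)
_, H = signal.freqresp(signal.StateSpace(A, B, C, D), w=w)
gain = np.abs(H)

w_peak = w[np.argmax(gain)]
print(w_peak, gain.max())  # peak close to omega = 1, gain about 10
```

The small damping term \(\alpha =0.1\) places the peak just below \(\omega =1\) with a gain of roughly \(1/\alpha =10\), matching the resonance described in the text.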
For a state-space regulation problem, the design seeks to keep the states close to their equilibrium values. We can use equilibrium values of zero without loss of generality. When the states are perturbed away from their equilibrium, we adjust the input control signal, u, to drive the states back to their equilibrium.
The cost function combines the distance from equilibrium with regard to the state vector, \(\mathbf {x}\), and the energy required for the control signal, \(\mathbf {u}\). Distances and energies are squared deviations from zero, which we can write in a general way in vector notation as

$$\mathcal {J}=\int _0^T\left (\mathbf {x}^{\mathsf {T}}\mathbf {Q}\mathbf {x}+\mathbf {u}^{\mathsf {T}}\mathbf {R}\mathbf {u}\right )\mathrm {d}t,$$

in which \(\mathbf {Q}\) and \(\mathbf {R}\) are matrices that give the cost weightings for components of the state vector, \(\mathbf {x}=x_1,x_2,\dots \), and components of the input vector, \(\mathbf {u}=u_1,u_2,\dots \), respectively. In the example here, there is only one input. However, state-space models easily extend to handle multiple inputs.
For the regulation problem in Fig. 9.1, the goal is to find the feedback gains for the states, given in the matrix \(\mathbf {K}\), that minimize the cost function. The full specification of the problem requires the state equation matrices for use in Eq. 2.6, which we have from Eq. 9.2 as

$$\mathbf {A}=\begin{pmatrix}0&1\\-\beta &-\alpha \end{pmatrix},\qquad \mathbf {B}=\begin{pmatrix}0\\1\end{pmatrix},\qquad \mathbf {C}=\begin{pmatrix}1&0\end{pmatrix},$$

and the cost matrices, \(\mathbf {R}\) and \(\mathbf {Q}\). In this case, we have a single input, so the cost matrix for inputs, \(\mathbf {R}\), can be set to one, yielding an input cost term, \(u^2\).
For the state costs, we could ignore the second state, \(x_2\), leaving only \(x_1=y\), so that the state cost would be proportional to the squared output, \(y^2=e^2\). Here, y is equivalent to the error, \(e=y-r\), because the reference input is \(r=0\). A cost based on \(u^2\) and \(e^2\) matches the earlier cost function in Eq. 8.1.
In this case, I weight the costs for each state equally by letting \(\mathbf {Q}=\rho ^2\mathbf {I}_2\), in which \(\mathbf {I}_n\) is the identity matrix of dimension n, and \(\rho \) is the cost weighting for states relative to inputs. With those definitions, the cost becomes

$$\mathcal {J}=\int _0^T\left [\rho ^2\left (x_1^2+x_2^2\right )+u^2\right ]\mathrm {d}t,$$

in which \(x_1^2+x_2^2\) measures the distance of the state vector from the target equilibrium of zero.
We obtain the gain matrix for state feedback models, \(\mathbf {K}\), by solving a matrix Riccati equation. Introductory texts on control theory derive the Riccati equation. For our purposes, we can simply use a software package, such as Mathematica, to obtain the solution for particular problems. See the supplemental software code for an example.
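The chapter's supplemental code uses Mathematica; as an illustration, the Riccati solution can also be obtained in Python via scipy (my substitution). The sketch below solves the continuous-time algebraic Riccati equation for the matrices above with \(\rho =1\) and recovers the optimal gain \(\mathbf {K}=\mathbf {R}^{-1}\mathbf {B}^{\mathsf {T}}\mathbf {P}\).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

alpha, beta = 0.1, 1.0
rho = 1.0

A = np.array([[0.0, 1.0], [-beta, -alpha]])
B = np.array([[0.0], [1.0]])
Q = rho**2 * np.eye(2)   # state cost weighting, Q = rho^2 I
R = np.array([[1.0]])    # input cost weighting

# Solve A'P + P A - P B R^{-1} B' P + Q = 0 for P
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback gain: u = -K x
K = np.linalg.solve(R, B.T @ P)
print(K)

# Closed-loop dynamics A - B K must be stable
print(np.linalg.eigvals(A - B @ K).real)  # all negative
```

The eigenvalues of \(\mathbf {A}-\mathbf {B}\mathbf {K}\) all have negative real parts, so the optimal feedback drives perturbed states back toward the zero equilibrium.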
Figure 9.2 shows the response of the state feedback system in Fig. 9.1 with the Riccati solution for the feedback gain values, \(\mathbf {K}\). Within each panel, the different curves show different values of \(\rho \), the ratio of the state costs for \(\mathbf {x}\) relative to the input costs for \(\mathbf {u}\). In the figure, the blue curves show \(\rho =1/4\), which penalizes the input costs four times more than the state costs. In that case, the control inputs tend to be costly and weaker, allowing the state values to be larger.
At the other extreme, the green curves show \(\rho =4\). That value penalizes states more heavily and allows greater control input values. The larger input controls drive the states back toward zero much more quickly. The figure caption provides details about each panel.
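The effect of \(\rho \) can be checked directly. The following Python sketch (scipy is my substitution for the chapter's code; the initial perturbation and the sampling time are arbitrary choices) compares how far the state has decayed after a fixed time under the optimal feedback for \(\rho =1/4\) versus \(\rho =4\).

```python
import numpy as np
from scipy.linalg import solve_continuous_are, expm

alpha, beta = 0.1, 1.0
A = np.array([[0.0, 1.0], [-beta, -alpha]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])
x0 = np.array([1.0, 0.0])  # initial perturbation away from equilibrium

def settled_norm(rho, t=5.0):
    """Norm of the closed-loop state at time t, starting from x0."""
    P = solve_continuous_are(A, B, rho**2 * np.eye(2), R)
    K = np.linalg.solve(R, B.T @ P)
    # x(t) = exp((A - B K) t) x0 for the linear closed loop
    return np.linalg.norm(expm((A - B @ K) * t) @ x0)

print(settled_norm(0.25), settled_norm(4.0))
```

With \(\rho =4\), state deviations are penalized heavily relative to inputs, so the stronger control drives the state back to zero much faster than with \(\rho =1/4\), consistent with the green versus blue curves in Fig. 9.2.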
In this example, the underlying equations for the dynamics do not vary with time. Time-invariant dynamics correspond to constant values in the state matrices, \(\mathbf {A}\), \(\mathbf {B}\), and \(\mathbf {C}\). A time-invariant system typically leads to constant values in the optimal gain matrix, \(\mathbf {K}\), obtained by solving the Riccati equation.
The Riccati solution also works when those coefficient matrices have time-varying values, leading to time-varying control inputs in the optimal gain matrix, \(\mathbf {K}\). The general approach can also be extended to nonlinear systems. However, the Riccati equation is not sufficient to solve nonlinear problems.
Methods that minimize quadratic costs or \(\mathcal {H}_2\) norms can produce systems with poor stability margins. To obtain guaranteed stability margins, one can minimize costs subject to a constraint on the minimum stability margin.
2 Tracking Example
Consider the tracking example from the previous chapter. That example began with the process in Eq. 4.1 as

$$P(s)=\frac{1}{(s+a)(s+b)}=\frac{1}{s^2+\alpha s+\beta },$$

with \(\alpha =a+b=10.1\) and \(\beta =ab=1\). The state-space model is given in Eq. 9.2, expressed in matrix form in Eq. 9.4. The state-space model describes the process output over time, y(t), which we abbreviate as y.
Here, I describe a state-space design of tracking control for this process. For this example, I use the tracking reference signal in Eq. 8.3, ignoring high-frequency noise \((\kappa _2=0)\). The reference signal is the sum of low-frequency \((\omega _0=0.1)\) and mid-frequency \((\omega _1=1)\) sine waves. The transfer function for the reference signal is

$$R(s)=\frac{\omega _0}{s^2+\omega _0^2}+\frac{\omega _1}{s^2+\omega _1^2}.$$

In state-space form, the reference signal, r(t), is generated by a pair of undamped oscillators, one for each frequency: each sine wave \(\sin \omega t\) satisfies \(\ddot{z}+\omega ^2z=0\), so four additional states, two per frequency, produce r as the sum of the two oscillator outputs.
We can transform a tracking problem into a regulator problem and then use the methods from the previous section (Anderson and Moore 1989). In the regulator problem, we minimized a combination of the squared inputs and states. For a tracking problem, we use the error, \(e=y-r\), instead of the state values, and express the cost as

$$\mathcal {J}=\int _0^T\left (\rho ^2e^2+u^2\right )\mathrm {d}t.$$
We can combine the state-space expressions for y and r into a single state-space model with output e. That combined model allows us to apply the regulator theory to solve the tracking problem with state feedback.
The combined model for the tracking problem is

$$\dot{\mathbf {x}}_t=\mathbf {A_t}\mathbf {x}_t+\mathbf {B_t}u,\qquad e=\mathbf {C_t}\mathbf {x}_t,$$

in which \(\mathbf {A_t}\) combines the process and reference state matrices in block-diagonal form, \(\mathbf {B_t}\) feeds the control input into the process states only, and the output determined by \(\mathbf {C_t}\) is \(e=y-r\) (Anderson and Moore 1989). In this form, we can apply the regulator theory to find the optimal state feedback matrix, \(\mathbf {K}\), that minimizes the costs, \(\mathcal {J}\), in Eq. 9.5. Figure 9.3 presents an example and mentions some technical issues in the caption.
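A Python sketch of the combined model follows (scipy is my substitution for the chapter's Mathematica code). Because the undamped reference oscillators sit on the stability boundary and are not controllable, the infinite-horizon Riccati equation has no stabilizing solution for them; as an assumption for this sketch, I add a small damping `eps` to the reference oscillators so that a stabilizing solution exists. This is one of the kinds of technical issues the figure caption alludes to.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

alpha, beta = 10.1, 1.0   # process coefficients: a + b and a*b
w0, w1 = 0.1, 1.0         # reference signal frequencies
eps = 1e-2                # assumed small damping on the reference
                          # oscillators, added so the Riccati equation
                          # has a stabilizing solution
rho = 2.0                 # error cost weighting relative to input cost

# Process states: x1, x2 with xdot1 = x2, xdot2 = -beta x1 - alpha x2 + u
Ap = np.array([[0.0, 1.0], [-beta, -alpha]])

def osc(w):
    """Lightly damped oscillator generating a sine of frequency w."""
    return np.array([[0.0, 1.0], [-w**2, -2.0 * eps]])

# Combined model: process plus two reference oscillators, six states
At = block_diag(Ap, osc(w0), osc(w1))
Bt = np.array([[0.0], [1.0], [0.0], [0.0], [0.0], [0.0]])  # input enters process only
Ct = np.array([[1.0, 0.0, -1.0, 0.0, -1.0, 0.0]])          # e = y - r

# Cost: rho^2 e^2 + u^2, so Q = rho^2 Ct' Ct and R = 1
Q = rho**2 * (Ct.T @ Ct)
R = np.array([[1.0]])

P = solve_continuous_are(At, Bt, Q, R)
K = np.linalg.solve(R, Bt.T @ P)   # optimal feedback over all six states
print(K.round(3))
print(np.linalg.eigvals(At - Bt @ K).real)  # all negative
```

The optimal input depends on the reference states as well as the process states, which is how the regulator formulation produces tracking: feedback on the error's internal generators anticipates the reference signal rather than merely reacting to it.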
The example illustrates two key points. First, as the relative cost weighting of the inputs declines, the system applies stronger feedback inputs and improves tracking performance.
Second, the state equations for the intrinsic process, P(s), in Eq. 9.4 provide input only into the second state of the process, as can be seen in the equation for \(\dot{x}_2\) in Eq. 9.2. When we allow a second input into the intrinsic process, P(s), by allowing feedback directly into both \(\dot{x}_1\) and \(\dot{x}_2\), we obtain much better tracking performance, as shown in Fig. 9.3.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2018 The Author(s)
Frank, S.A. (2018). State Feedback. In: Control Theory Tutorial. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-91707-8_9
Print ISBN: 978-3-319-91706-1
Online ISBN: 978-3-319-91707-8