Abstract
In this chapter, an introduction to the basics of continuous-time feedback systems is given. For more detailed treatments, the reader is referred to textbooks such as [1–5]. A simple amplitude control loop serves as an example in the following sections. The concepts presented here, however, may also be applied to more advanced control loops (cf. [6, 7]). The RF control loops are often called low-level RF (LLRF) systems to distinguish them from the high-power parts.
7.1 Basics of Continuous-Time Feedback Systems
Since many discrete feedback systems may be treated as quasicontinuous if the sampling time is small enough, discrete-time systems are not covered in the following. The analysis of discrete-time systems is, however, possible in an analogous way to continuous-time systems with the \(\mathcal{Z}\)-transform instead of the Laplace transform [8]. Most feedback analysis and design methods may then be used for discrete systems in a very similar way.
7.1.1 Linear Time-Invariant Systems
The systems under consideration are assumed to be linear and time-invariant (they are so-called LTI systems). Assume a general dynamic system
that maps the input signal x(t) to the output signal y(t). If the system is time-invariant, a time shift at the input will lead to the shifted output
In case of a linear system, a linear combination of two input signals x1(t) and x2(t) will lead to the same linear combination of their corresponding outputs \(y_{1}(t) =\varphi \{ x_{1}(t)\}\) and \(y_{2}(t) =\varphi \{ x_{2}(t)\}\), i.e.,
holds for arbitrary constants a1 and a2.
A consequence of properties (7.1) and (7.2) is that the output of LTI systems can be calculated in the Laplace domain as
where the transfer function H(s) corresponds to the impulse response h(t) of the system as defined in Sect. 2.3, and X(s) is the Laplace transform of the system input x(t). This fact is of particular importance, because it enables the analysis and design of feedback systems in the Laplace domain. For a demonstration of the fact that Eq. (7.3) holds for any LTI system, we follow [9] and approximate the input signal x(t) by the step function
where \(\tau _{\nu } =\nu \Delta \tau\) are discrete sampling times with distance \(\Delta \tau\) and \(\Theta (t)\) is the Heaviside step function. It is assumed that x(t) is zero for t < 0, as introduced in Sect. 2.2, for all functions for which the one-sided Laplace transform is used. The step response of the system, i.e., the output for \(x(t) = \Theta (t)\), will be denoted by \(y_{\Theta }(t)\) in the following. For the input xstep(t), the LTI properties then lead to the output response
The continuous output response y(t) is obtained for the limit \(\Delta \tau \rightarrow 0\):Footnote 1
If the derivative \(\dot{y}_{\Theta }(t)\) is denoted by the function h(t), this is a convolution integral, and Eq. (2.27),
holds. The choice of h(t) is indeed not coincidental, because a comparison with Sect. 2.3 shows that due to \(\dot{\Theta }(t) =\delta (t)\), this is the already defined impulse response, and the relation
holds for t > 0, i.e., the impulse response h(t) is the derivative of the step response with respect to time. Conversely, it can easily be shown that systems defined by Eq. (7.3) are linear, because in the Laplace domain, the output Y (s) results from a simple multiplication of the input X(s) and the transfer function [10]. In addition, they are time-invariant, because the shifted input
leads to the output
In summary, we can conclude that the definition of LTI systems by the properties (7.1) and (7.2) is equivalent to Definition (7.3).
In many cases, the transfer function H(s) has the form
with real coefficients bν and aν and nonzero coefficients bm ≠ 0 and an ≠ 0. This is a rational transfer function, and the system (7.3) is then represented in the time domain by the linear ODE
with constant coefficients. A transfer function (7.4) is called proper if m ≤ n and strictly proper if m < n. In the latter case, H(s) tends to zero as | s | → ∞.
It is sometimes more convenient to use the zero-pole-gain representation
The zeros zν are those values for which H(s) becomes zero, whereas the poles pν ≠ 0 are singularities of H(s). In case a pole and a zero are exactly equal, they cancel and do not influence the input–output behavior of the system. The gain can also be expressed as \(K = b_{m}/a_{n}\).
As will be discussed in the following, the system represented by H(s) is called stable if all poles have a negative real part, i.e., Re{pν} < 0 and N = 0. In this case, all poles lie in the open left half of the complex s-plane, which is referred to as the OLHP. The abbreviations ORHP (open right half-plane), LHP (left half-plane), and RHP (right half-plane) follow accordingly. If at least one pole has a positive real part, the system is unstable.
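This pole-based stability test is easy to carry out numerically. The following sketch uses a hypothetical rational transfer function (not one from this chapter) and checks the condition Re{pν} < 0 by computing the roots of the denominator polynomial:

```python
import numpy as np

# Hypothetical transfer function for illustration:
#   H(s) = (s + 2) / (s^3 + 4 s^2 + 6 s + 4)
num = [1.0, 2.0]
den = [1.0, 4.0, 6.0, 4.0]

zeros = np.roots(num)   # values where H(s) becomes zero
poles = np.roots(den)   # singularities of H(s); here -2 and the pair -1 +/- 1j

# The system is stable iff every pole lies in the OLHP, i.e., Re{p} < 0.
is_stable = bool(np.all(poles.real < 0))
print(poles, is_stable)
```

If a pole–zero pair coincided exactly, it would have to be canceled first, since it does not influence the input–output behavior.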
7.1.2 State-Space Representation
The higher-order ODE (7.5) can be rewritten as a system of ODEs of first order. Consider the transfer function H(s) with input U and output Y, as shown in Fig. 7.1. The input variable U(s) corresponds to X(s) in the previous section. The notation is changed here to be consistent with the standard notation in the control system literature. Without loss of generality, it is assumed that H(s) has the form (7.4) but with an = 1, i.e., the coefficients of H(s) are normalized by an ≠ 0. By splitting H(s) in two blocks with its denominator and numerator, a new variable X(s) may be defined as shown in Fig. 7.1.
In the time domain, the following ODEs can be derived from this block diagram:
Defining the states (see also Sect. 2.8.1)
leads to the system of equations
With the definition of the state vector
the matrix representation
is obtained, which is called the controllable canonical form and is a special case of a state-space representation. Different choices of the states (7.7) lead to different representations, but these have the general form
with the state vector \(\vec{x}\) of dimension n, the input vector \(\vec{u}\) of dimension p, the output vector \(\vec{y}\) of dimension q, the n × n system matrix A, the n × p input matrix B, and the q × n output matrix C. All matrices are assumed to have constant and real elements. A feedthrough matrix for a direct influence of \(\vec{u}\) on \(\vec{y}\) can be avoided in most practical cases. The Laplace transform of these equations yieldsFootnote 2
or
where I denotes the n × n identity matrix. The Laplace transform for the output \(\vec{y}\) leads to
In case of a system with a single input and a single output (SISO system), the transfer function H(s) is obtained as
where C is a row vector and B a column vector (p = 1, q = 1).
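The SISO formula \(H(s) = C(sI - A)^{-1}B\) can be verified numerically against the rational form. The system below is a hypothetical second-order example written in controllable canonical form (the coefficients are not taken from the chapter):

```python
import numpy as np

# Controllable canonical form of the hypothetical SISO system
#   H(s) = (3 s + 2) / (s^2 + 5 s + 6),  i.e. a1 = 5, a0 = 6, b1 = 3, b0 = 2.
A = np.array([[0.0, 1.0],
              [-6.0, -5.0]])      # last row carries -a0, -a1
B = np.array([[0.0],
              [1.0]])
C = np.array([[2.0, 3.0]])        # b0, b1

def H_statespace(s):
    """Evaluate H(s) = C (sI - A)^{-1} B for the SISO case (p = q = 1)."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B))[0, 0]

def H_rational(s):
    return (3.0 * s + 2.0) / (s ** 2 + 5.0 * s + 6.0)

for s0 in [1.0 + 2.0j, -0.5 + 0.1j]:
    print(H_statespace(s0), H_rational(s0))   # the two evaluations coincide
```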
7.1.3 Linearization of Nonlinear Systems
Every practical system contains nonlinearities. Examples are nonlinear friction and constraints on the input that lead to saturation. Fortunately, in many cases, the considered nonlinear system behaves similarly to a linear system in the vicinity of its operating point. Consider a nonlinear system described by
with the analytic vector function \(\vec{v}\). Suppose that \(\vec{x} =\vec{ x}_{\mathrm{F}}\) and \(\vec{u} =\vec{ u}_{\mathrm{F}}\) constitute a constant equilibrium point, i.e.,
With the use of the Jacobian matrix
the Taylor series expansion around the equilibrium can be written as
where \(\Big\vert _{\mathrm{F}}\) denotes the value at the equilibrium and \(\vec{v}_{\mathrm{ho}}\) are higher-order terms. For small deviations
from equilibrium, the higher-order terms may be neglected, and the linear system
with
can be used as a linearization of the nonlinear system.
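The Jacobians A and B can also be approximated numerically by finite differences when an analytic expansion is inconvenient. The sketch below linearizes a hypothetical damped pendulum with a torque input (this example is not from the chapter) about its lower equilibrium:

```python
import numpy as np

# Hypothetical nonlinear system (damped pendulum with torque input u):
#   x1' = x2,   x2' = -sin(x1) - 0.1 x2 + u
def v(x, u):
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])

def jacobians(v, xF, uF, eps=1e-6):
    """Central finite-difference Jacobians A = dv/dx, B = dv/du at (xF, uF)."""
    n, p = len(xF), len(uF)
    A = np.zeros((n, n))
    B = np.zeros((n, p))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (v(xF + dx, uF) - v(xF - dx, uF)) / (2 * eps)
    for i in range(p):
        du = np.zeros(p); du[i] = eps
        B[:, i] = (v(xF, uF + du) - v(xF, uF - du)) / (2 * eps)
    return A, B

xF = np.array([0.0, 0.0])   # equilibrium: v(xF, uF) = 0
uF = np.array([0.0])
A, B = jacobians(v, xF, uF)
print(A)   # close to the analytic Jacobian [[0, 1], [-1, -0.1]]
print(B)   # close to [[0], [1]]
```

For small deviations from (xF, uF), the higher-order terms are negligible, and this (A, B) pair defines the linearized model.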
7.1.4 Dynamic Response of LTI Systems
The output of an LTI system depends on its transfer function H(s) and on the input signal u(t). In the following, the response of a general LTI system with respect to important test signals is discussed. This prepares the definition of stability. It is assumed that the poles and zeros of H(s) are all distinct, apart from N poles at s = 0. In most cases, this is a valid assumption. The calculations for the case with poles or zeros of higher multiplicity are similar but more intricate. Because the coefficients in Eq. (7.4) are real, nonreal poles p or zeros z are always accompanied by their complex conjugate counterparts p∗ and z∗. The complex conjugate operator commutes with every holomorphic function f(x) on its domain of definition if f(x) is real for real x. Thus in this case, f∗(x) equals f(x∗). In particular, this applies to every polynomial and rational function with real coefficients.
According to Eq. (7.6), the considered transfer function can be written as
where the zr, ν and pr, ν are the nonzero real zeros and poles, zc, ν and pc, ν are the nonzero complex zeros and poles, and K is the real gain. The total polynomial degree equals \(m = m_{1} + 2m_{2}\) for the numerator and \(n = N + n_{1} + 2n_{2}\) for the denominator. For a proper transfer function, n ≥ m holds.
7.1.4.1 Impulse Response
The impulse response is of practical interest for the study of pulse-shaped disturbances that may act on the feedback loop. In addition, this case is equivalent to the response of the state-space representation with zero input and certain nonzero initial conditions \(\vec{x}(t = 0)\neq 0\).
According to Eq. (7.3), the excitation of the system with the Dirac function
yields
in the Laplace domain. To calculate the response in the time domain, the partial fraction decomposition
is used. Here one assumes that the transfer function H(s) is proper, i.e., n ≥ m. The constants Kr, ν can be calculated as follows. Multiplying Eqs. (7.9) and (7.10) by (s − pr, i) for a specific i = 1, …, n1 and setting s = pr, i leads to
The constants Kr, ν are always real, because in the denominator, the expression
is real, and the same applies to the numerator. A similar calculation yields the constants
and using the above-mentioned commutability property of the complex conjugate operator leads to
The constant K0, N is obtained by multiplying by sN; it reads
For the remaining constants K0, i, a system of N linear equations is obtained by evaluating Eqs. (7.9) and (7.10) at N points s = si that are different from the zeros and poles of the system. The constant K0, 0 is zero for strictly proper transfer functions H(s), i.e., for n > m.
The transformation of Eq. (7.10) into the time domain
yields the impulse response
as Table A.4 shows (\(\Theta (t)\) is omitted for the sake of simplicity). The elements of the last sum can be rewritten as
and the term of this expression in square brackets is equal to
where | K | and \(\measuredangle K\) are the amplitude and phase of the complex number K, respectively. Altogether, the impulse response for t ≥ 0 is
and it tends to zero as t → ∞ if all poles have negative real parts.
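The partial fraction decomposition underlying this derivation can be computed numerically. The sketch below decomposes a hypothetical strictly proper transfer function with distinct real poles and reconstructs its impulse response as a sum of exponential modes:

```python
import numpy as np
from scipy.signal import residue

# Partial fractions of the hypothetical strictly proper system
#   H(s) = 1 / ((s + 1)(s + 2)) = 1/(s + 1) - 1/(s + 2)
num = [1.0]
den = np.polymul([1.0, 1.0], [1.0, 2.0])   # (s + 1)(s + 2) = s^2 + 3 s + 2
K, p, k = residue(num, den)                # residues K_i, poles p_i, direct term
print(K, p)                                # residues +/-1 at the poles -1, -2

# The impulse response for t >= 0 is the sum of the modes K_i * exp(p_i * t):
t = np.linspace(0.0, 5.0, 501)
h = sum(Ki * np.exp(pi * t) for Ki, pi in zip(K, p)).real
# h(t) = exp(-t) - exp(-2 t); it decays since all poles lie in the OLHP
print(h[-1] < 1e-2)
```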
7.1.4.2 Step Response
The response \(y(t) = y_{\Theta }(t)\) to a step command
can be calculated in an analogous way with \(U(s) = 1/s\). An alternative is the use of the convolution integral
With
the integration of (7.11) for t ≥ 0 yields
This calculation shows that the step response \(y_{\Theta }\) will approach a finite value for large times t if and only if the conditions
are satisfied, i.e., all poles have negative real parts. Because the limit \(\lim _{t\rightarrow \infty }y_{\Theta }(t)\) is then finite, the final value theorem can be applied:
The initial value theorem leads to
For this reason, systems with n = m are also said to have direct feedthrough. In contrast, strictly proper transfer functions with n > m have a continuous output response at t = 0.
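Both limit statements can be checked by simulation. The sketch below uses a hypothetical stable, strictly proper second-order system: its step response starts continuously at zero (no direct feedthrough) and settles at the value H(0) predicted by the final value theorem:

```python
import numpy as np
from scipy.signal import TransferFunction, step

# Hypothetical stable system H(s) = 4 / (s^2 + 3 s + 4); poles at -1.5 +/- j*sqrt(7)/2.
num = [4.0]
den = [1.0, 3.0, 4.0]
sys = TransferFunction(num, den)

# Final value theorem: lim_{t->inf} y_step(t) = lim_{s->0} s H(s) (1/s) = H(0).
H0 = num[-1] / den[-1]          # H(0) = 4/4 = 1

t, y = step(sys, T=np.linspace(0.0, 20.0, 2001))
print(y[0])    # 0: strictly proper (n > m), so the output starts continuously
print(y[-1])   # close to H(0) = 1 after the transients have decayed
```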
7.1.4.3 Frequency Response
An important test signal is the harmonic excitation
The amplitude of the test signal may be chosen arbitrarily because of the linearity property (7.2). If it is assumed that none of the poles of H(s) is equal to ± j ω, the decomposition of the output response in the Laplace domain can be written as
where Ytrans has the same structure as the expression in Eq. (7.10), but with K0, 0 = 0. A multiplication by (s − j ω) and the evaluation at s = j ω leads to
In the time domain, the output response reads
If the transfer function H(s) has only poles with negative real parts, the transient response ytrans will tend to zero, and y(t) tends to a constant oscillation. The amplitude and phase of this oscillation with respect to the excitation u(t) is determined by H(j ω), i.e., the value of the transfer function at s = j ω. Because of the linearity property, this also applies to any shifted or scaled sinusoidal excitation. For this reason, the function H(j ω) depending on the frequency ω is called the frequency response of the system H(s) and is obtained by introducing s = j ω into H(s).
There are two main reasons why the frequency response is important for feedback systems. First, H(j ω) can easily be measured by exciting the system with different frequencies ω, even if the transfer function H(s) of the physical system is not known. Second, H(j ω) can be used for the stability analysis of the closed feedback loop with the Nyquist criterion (see Sect. 7.4.2).
So far, it has been assumed that j ω is not a pole of H(s). Without further calculation, it can be reasoned that if j ω is a pole, H(s) has a singularity at s = j ω, and excitation with frequencies close to ω will lead to very large amplitudes. If the chosen frequency is exactly ω, this will result in a perfect resonance, and the oscillation at the output will grow without bound, although the input is a bounded signal.
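That the steady-state output of a stable system is a sinusoid with amplitude |H(jω)| and phase ∠H(jω) can be confirmed by simulation. The first-order system and frequency below are hypothetical illustration values:

```python
import numpy as np
from scipy.signal import TransferFunction, lsim

# Hypothetical stable system H(s) = 10 / (s + 10), driven by u(t) = sin(w t).
num, den = [10.0], [1.0, 10.0]
w = 5.0

# Frequency response: evaluate H at s = j w.
Hjw = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
amp, phase = np.abs(Hjw), np.angle(Hjw)

# Simulate long enough for the transient (pole at -10) to die out.
t = np.linspace(0.0, 10.0, 20001)
u = np.sin(w * t)
_, y, _ = lsim(TransferFunction(num, den), U=u, T=t)

# In steady state, y(t) should match amp * sin(w t + phase).
y_ss = amp * np.sin(w * t + phase)
err = np.max(np.abs(y[-2000:] - y_ss[-2000:]))
print(amp, phase, err)   # err is small once the transient has decayed
```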
7.1.4.4 General Input Function
In the previous sections, the Laplace transform was used to calculate specific output responses for SISO systems. In case of general input functions, multiple-input and multiple-output (MIMO) systems, or initial values, it is often more convenient to consider the state-space representation. In Sect. 2.8.6, it was shown that autonomous linear systems of differential equations
have the solution (2.99),
where etA is the matrix exponential function. In the presence of an input vector \(\vec{u}(t)\), the system is no longer autonomous in general. The input may be a control effort or a disturbance such as a noise signal. In Sect. 7.1.2, the Laplace domain solution of a system with inputs was given by Eq. (7.8) as
Comparing this with the solution \(\vec{r}(t)\) of the autonomous system, it is apparent that
must hold, i.e., we have found the Laplace transform of the matrix exponential function. Transforming \(\vec{X}(s)\) into the time domain thus leads to
It can be shown that the matrix exponential function has the following properties, similar to those of an ordinary exponential function (cf. [11]):
-
series representation: \(e^{\mathit{tA}} = I +\sum _{ \nu =1}^{\infty }A^{\nu }\frac{t^{\nu }} {\nu !}\)
-
inverse: \(\left (e^{\mathit{tA}}\right )^{-1} = e^{-\mathit{tA}}\)
-
multiplication: \(e^{t_{2}A}e^{t_{1}A} = e^{(t_{2}+t_{1})A}\)
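These three properties can be verified numerically for a concrete system matrix. The matrix A below is a hypothetical example; the series is truncated after sufficiently many terms:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical system matrix for checking the matrix-exponential properties.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t1, t2 = 0.4, 1.1

def expm_series(A, t, terms=40):
    """Truncated series I + sum_{nu=1}^{terms-1} (tA)^nu / nu!"""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for nu in range(1, terms):
        term = term @ (t * A) / nu
        E = E + term
    return E

E1 = expm(t1 * A)
ok_series = np.allclose(E1, expm_series(A, t1))                  # series representation
ok_inverse = np.allclose(np.linalg.inv(E1), expm(-t1 * A))       # inverse
ok_product = np.allclose(expm(t2 * A) @ E1, expm((t1 + t2) * A)) # multiplication
print(ok_series, ok_inverse, ok_product)
```

Note that the multiplication rule relies on t1 A and t2 A commuting; for two different, non-commuting matrices, \(e^{A}e^{B}\neq e^{A+B}\) in general.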
7.1.5 Stability
In Sects. 2.8.6 and 2.8.10, it was shown that a linear autonomous system is asymptotically stable if and only if all eigenvalues of the system matrix A have negative real parts, i.e., are situated in the OLHP. Equivalently, the same holds for the roots of the characteristic equation. Asymptotic stability for autonomous systems implies that a trajectory that starts at some initial value will tend to a fixed point.
For a system with nonzero inputs \(\vec{u}(t)\), this definition may not be sufficient. The input can be a persistent disturbance with a certain amplitude that prevents the system from approaching the fixed point. For a feedback system, it is, however, necessary that the states \(\vec{x}(t)\) or the output \(\vec{y}(t)\) remain bounded. This motivates the following definition:
Definition 7.1.
A dynamical system
with input \(\vec{u}(t)\), states \(\vec{x}(t)\), and output \(\vec{y}(t)\) is assumed to be in equilibrium for t = t0 with arbitrary real t0, i.e., \(\vec{x}(t_{0}) =\vec{ x}_{\mathrm{F}}\), where \(\vec{x}_{\mathrm{F}}\) is a fixed point. This fixed point is said to be bounded-input bounded-output (BIBO) stable if for every finite c1 with \(\|\vec{u}(t)\| < c_{1}\) for t ≥ t0, there exists a finite c2 such that \(\|\vec{y}(t)\| \leq c_{2}\) for t ≥ t0.
(See, e.g., Ludyk [11, Definition 3.37, p. 159].)
The step response (7.13) shows that \(y_{\Theta }(t)\) is bounded if all poles of H(s) have negative real parts. Because of Eq. (7.12), this is also true if
holds, i.e., if the impulse response h(t) is absolutely integrable.
In general, the following theorem holds.
Theorem 7.2.
An LTI SISO system is BIBO stable if and only if the following (equivalent) conditions are satisfied:
-
The transfer function H(s) has only poles with negative real parts.
-
The impulse response h(t) is absolutely integrable.
(See, e.g., Ludyk [11, Theorems 3.39 and 3.40, p. 160].)
In addition, there is a close relationship between BIBO and asymptotic stability. The transfer function H(s) can be written as
where adj(A) denotes the adjugate matrixFootnote 3 of A. Thus, the poles of H(s) are obtained by calculating the roots of the characteristic equation
and these are identical to the eigenvalues of A. However, due to pole–zero cancellations, the poles are, in general, a subset of the eigenvalues of A, i.e., not every eigenvalue is a pole of H(s). If A has only eigenvalues with negative real parts, the system is asymptotically stable, and this always implies that the poles have negative real parts. This consideration leads to the following theorem:
Theorem 7.3.
An LTI system that is asymptotically stable is also BIBO stable, but a BIBO stable system is not always asymptotically stable.
(See, e.g., Ludyk [11, Theorem 3.41, p. 160].)
7.2 Standard Closed Loop
The block diagram in Fig. 7.2 is called the standard feedback loop. It has one input and one output and is thus also called a single-input single-output (SISO) system.
The feedback system can be described by the following equations:
Solving these equations for the output Y (s) leads to
with the reference to output transfer function
and the disturbance to output transfer function
A unity feedback system has Hm(s) = 1, and in this case, the disturbance to output transfer function
is also called the sensitivity function, and the reference to output transfer function
is the complementary sensitivity function. Note that \(H_{\mathrm{dy}}(s) + H_{\mathrm{ry}}(s) = 1\).
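The identity \(H_{\mathrm{dy}}(s) + H_{\mathrm{ry}}(s) = 1\) for unity feedback follows directly from the definitions and can be checked numerically. In the sketch below, the open-loop transfer function is a hypothetical placeholder, not a model from this chapter:

```python
# Unity-feedback loop: sensitivity S and complementary sensitivity T sum to 1.
# The open loop H_open(s) = Hc(s) Hp(s) is a hypothetical example:
#   Hc = 2 (pure gain), Hp = 1 / (s (s + 1)).
def H_open(s):
    return 2.0 / (s * (s + 1.0))

def S(s):   # sensitivity function H_dy(s) = 1 / (1 + H_open(s))
    return 1.0 / (1.0 + H_open(s))

def T(s):   # complementary sensitivity H_ry(s) = H_open(s) / (1 + H_open(s))
    return H_open(s) / (1.0 + H_open(s))

for s0 in [1.0 + 1.0j, 0.3 - 2.0j, 5.0]:
    print(S(s0) + T(s0))   # equals 1 at every test point
```

The identity means that disturbance rejection (small |S|) and reference tracking (|T| near 1) cannot be shaped independently at the same frequency.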
Usually, the process transfer function Hp(s) has to be determined in a separate modeling step before the analysis or the design of the feedback loop. The modeling can be based on analytical equations if the underlying physical principles are well known. If this is not the case, measurements may be used for a system identification. In both cases, modeling assumptions have to be made to limit the complexity of the system. Often, nonlinearities in the feedback loop are linearized, and high-frequency dynamics are omitted.
7.3 Example: Amplitude Feedback
As a realistic example of a feedback loop, the amplitude feedback control of a ferrite-loaded cavity will be considered. The feedback is needed to hold the amplitude \(\hat{V }_{\mathrm{gap}}\) of the RF voltage close to a given reference value \(\hat{V }_{\mathrm{gap,ref}}\). In our example, the cavity feedback loop behaves highly nonlinearly with respect to the RF frequency fRF and the reference amplitude \(\hat{V }_{\mathrm{ref}}\). In the following, the operating point
will be considered. A model of the feedback loop was obtained in [12] based on measurements, and the corresponding block diagram is shown in Fig. 7.3. In the following, only amplitudes of RF signals are used, not the RF signals themselves.
The feedback loop consists of the following subcomponents:
-
The cavity is driven by the anode current with the amplitude \(\hat{I}_{\mathrm{a}}\). The amplitude of the resulting gap voltage \(\hat{V }_{\mathrm{gap}}\) acts approximately as a first-order system (PT1) with respect to \(\hat{I}_{\mathrm{a}}\) (see also Appendix A.12.1). The “gain”Footnote 4 is equal to \(R_{\mathrm{p}} \approx 2700\,\mathrm{\Omega }\), and the time constant is \(T_{\mathrm{cav}} \approx 4\,\upmu \mathrm{s}\). The set points are \(\hat{V }_{\mathrm{gap}} = 2\,\mathrm{kV}\) and \(\hat{I}_{\mathrm{a}} = 0.75\,\mathrm{A}\).
-
A capacitive divider is used to scale down the voltage of one-half of the gap by a factor of 1000. This has no significant influence on the time constants in the loop. With respect to the total gap voltage, the scaling is \(K_{\mathrm{cd}} = 1/2000\).
-
An amplitude detector with time constant \(T_{\mathrm{det}} = 5\,\upmu\ \mathrm{s}\) is used to obtain the amplitude \(\hat{V }_{\mathrm{gap,det}}\). This amplitude is then compared to the reference \(\hat{V }_{\mathrm{ref}}\). The set points are \(\hat{V }_{\mathrm{gap,det}} = 1\,\mathrm{V}\) and \(\hat{V }_{\mathrm{ref}} = 1.04\,\mathrm{V}\).
-
The parameters of the controller are Kc = 14.9, \(T_{\mathrm{c1}} = 17.2\,\upmu \mathrm{s}\), and \(T_{\mathrm{c2}} = 487.2\,\upmu \mathrm{s}\). A saturation block sat follows that limits the control output to \(\pm 7.23\,\mathrm{V}\). The offset voltage is \(\hat{V }_{\mathrm{c,off}} = 0.2\,\mathrm{V}\). In the feedforward loop, the gain is Kff = 0.6. According to these values, the set point of the control effort is \(\hat{V }_{\mathrm{c}} = 1.02\,\mathrm{V}\).
-
The (amplitude) modulator produces a sinusoidal signal modulated with \(\hat{V }_{\mathrm{c}}\). The sinusoidal signal with initial amplitude \(0.316\,\mathrm{V}\) (0 dBm) is attenuated by \(12.2\,\mathrm{dB}\); this corresponds to a factor of 0.245 for the voltage amplitude. Altogether, the modulator can be modeled as a gain Kmod = 0.316 ⋅ 0.245. Hence, the set point of the driving voltage is \(\hat{V }_{\mathrm{dr}} = 79\,\mathrm{mV}\).
-
The gains of the driver and tetrode amplifiers depend on the RF frequency and the amplitude of the gap voltage. For the chosen setting, we have \(G_{\mathrm{Vgain}} \approx 27\,\mathrm{S}\) and \(K_{\mathrm{Vgain}} \approx 0.35\).
Signal time delays with a magnitude of about \(1\,\upmu\ \mathrm{s}\) are neglected in the following. However, they would be important for larger feedback gains.
The given set-point values were obtained by choosing \(\hat{V }_{\mathrm{gap}} = 2\,\mathrm{kV}\). Because the stationary gain of the cavity transfer function is Rp, the necessary anode current amplitude equals \(\hat{I}_{\mathrm{a}} =\hat{ V }_{\mathrm{gap}}/R_{\mathrm{p}}\). All other set-point values in the feedback loop follow accordingly. This results in a reference \(\hat{V }_{\mathrm{ref}}\) that is slightly higher than \(\hat{V }_{\mathrm{gap,det}}\) and thus in a stationary control error \(\hat{V }_{\mathrm{e}} = 40\,\mathrm{mV}\). This steady-state error could be avoided by introducing an integral controller in the loop. However, it is also possible to adjust the reference in such a way that the desired value \(\hat{V }_{\mathrm{gap}}\) is reached, as has been done in this case.
The system is nonlinear due to the saturation function. This function and the offset values \(\hat{V }_{\mathrm{ref}}\) and \(\hat{V }_{\mathrm{c,off}}\) can be neglected if only small deviations with respect to the set point are considered. This leads to the linearized feedback loop in standard notation, as shown in Fig. 7.4 with amplitude error \(\Delta \hat{V }_{\mathrm{gap}} =\hat{ V }_{\mathrm{gap}} -\hat{ V }_{\mathrm{gap,ref}}\) and reference \(\Delta \hat{V }_{\mathrm{ref}} = 0\). Similarly, all other values are defined relative to their set-point values, e.g., the relative control effort is \(\Delta \hat{V }_{\mathrm{c}} =\hat{ V }_{\mathrm{c}} - 1.02\,\mathrm{V}\).
A calculation of the reference to output transfer function according to (7.15) yields
A zero-pole-gain representation of this transfer function can be obtained by a numerical calculation of the poles and zeros. The gain is equal to the ratio of the factors of the highest order in s in the numerator and denominator. For the amplitude loop, these orders are s2 and s3, respectively, and the gain is
The resulting zero-pole-gain representation is
with zeros
and poles
Thus, the closed-loop system is BIBO stable. The pole p1 is closest to the imaginary axis and dominates the dynamics of the feedback. The dominating pole corresponds to a closed-loop bandwidth and a time constant of
The absolute values of the remaining poles are larger by an order of magnitude. They are thus negligible for a first rough evaluation of the closed-loop dynamics.
7.4 Analysis and Stability
The closed-loop transfer function
can be obtained from the given open-loop transfer function using only basic manipulations. The calculation of the poles pi from the characteristic equation
is a more complex task, and numerical computations are necessary for higher-order systems in general. For a stability analysis, one may, however, not be interested in the exact values of the poles, but only in the decision whether all poles have negative real parts. There are several stability criteria that can be applied without solving the characteristic equation directly. The Hurwitz and Nyquist criteria will be presented in the next sections.
7.4.1 Routh–Hurwitz Stability Criterion
The Routh–Hurwitz criterion is a necessary and sufficient condition for the roots of the polynomial
to have only negative real parts, in which case the polynomial is then called a Hurwitz polynomial. The criterion is of particular interest if the coefficients ai contain undetermined parameters. An example of such a parameter is the controller gain in the feedback loop. With the Routh–Hurwitz criterion, inequalities in these parameters can then be obtained for the closed loop to be stable.
A first necessary condition is given by the following theorem:
Theorem 7.4.
If the polynomial (7.17) is Hurwitz, then it has only positive coefficients ai > 0, \(i = 0,1,\ldots,n - 1\).
(See, e.g., Ludyk [11, Theorem 3.43, p. 161].)
This enables a first simple test whether a polynomial can be Hurwitz. If any of the coefficients is missing, i.e., ai = 0, or any ai is negative, there will be roots with nonnegative real part, and the polynomial is not Hurwitz.
A necessary and sufficient condition is presented by the Hurwitz criterion. It uses the ν × ν Hurwitz determinants
where the coefficients ai in the matrix with an index i < 0 are set to zero. As an example, the first three determinants for a polynomial with degree n ≥ 5 are
The Hurwitz criterion is given by the following theorem:
Theorem 7.5.
The polynomial (7.17) is Hurwitz if and only if the Hurwitz determinants Hν defined by (7.18) are positive for ν = 1,…,n.
(See, e.g., Gantmacher [13].)
A simplified version of this theorem needs only half the determinants:
Theorem 7.6.
Suppose that all the coefficients of the polynomial (7.17) are positive. For odd n, the polynomial is Hurwitz if and only if the Hurwitz determinants H2, H4, …, Hn−1 are positive. For even n, the polynomial is Hurwitz if and only if the Hurwitz determinants H3, H5, …, Hn−1 are positive.
(See, e.g., Gantmacher [13].)
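The Hurwitz determinants can be evaluated numerically as the leading principal minors of the Hurwitz matrix. The following sketch uses the common Hurwitz-matrix construction and two illustrative polynomials (neither taken from this chapter), one Hurwitz and one with an ORHP root pair:

```python
import numpy as np

def hurwitz_determinants(coeffs):
    """Leading principal minors H_1, ..., H_n of the Hurwitz matrix.

    coeffs = [a_n, a_{n-1}, ..., a_0] of p(s) = a_n s^n + ... + a_0.
    """
    n = len(coeffs) - 1
    a = coeffs[::-1]                       # a[k] is the coefficient of s^k
    def coef(k):
        return a[k] if 0 <= k <= n else 0.0
    M = np.array([[coef(n - 2 * j + i) for j in range(1, n + 1)]
                  for i in range(1, n + 1)], dtype=float)
    return [np.linalg.det(M[:k, :k]) for k in range(1, n + 1)]

# p(s) = (s + 1)(s + 2)(s + 3) = s^3 + 6 s^2 + 11 s + 6 is Hurwitz ...
dets = hurwitz_determinants([1.0, 6.0, 11.0, 6.0])
print(dets, all(d > 0 for d in dets))

# ... whereas s^3 + s^2 + s + 2 has positive coefficients but an ORHP root pair,
# so the necessary condition of Theorem 7.4 alone is not sufficient:
print(all(d > 0 for d in hurwitz_determinants([1.0, 1.0, 1.0, 2.0])))
```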
Consider as an example the amplitude feedback introduced in Sect. 7.3. The denominator of the closed-loop transfer function reads
with
In the following, the physical units of these coefficients will be ignored to avoid confusion with the Laplace variable s. Since all coefficients are positive, this polynomial with n = 3 is Hurwitz, because
Now assume that the feedback gain Kc in the loop of Fig. 7.4 is a free parameter. As a consequence, the coefficients a0 and a1 become parameter-dependent:
The Hurwitz criterion now leads to the conditions
Thus, the feedback loop is stable for Kc > −1.01. Due to the stability of the open-loop system, the closed-loop system obviously remains stable even if the feedback gain is slightly negative. A positive feedback gain Kc, however, is the typical case for the amplitude control. Figure 7.5 shows the closed-loop poles in the complex s-plane as a function of the positive gain Kc > 0. This type of diagram is also referred to as the root locus. For Kc = 0, the closed-loop poles are equal to the open-loop poles
that are obtained from the open-loop transfer function (cf. Fig. 7.4)
For increasing Kc, the closed-loop pole p1 moves to the left toward the open-loop zero
of Hopen(s), whereas the poles p2 and p3 approach each other, and for a certain Kc between 0 and 14.9, a complex conjugate pole pair arises. The root locus indicates that the closed loop remains stable even as Kc → ∞, because all three branches of the root locus remain in the OLHP. Since the branches of the root locus are the positions of the closed-loop poles,Footnote 5 the closed loop is stable. This is in agreement with the result of the Hurwitz criterion.
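The root-locus idea is easy to reproduce numerically: the closed-loop poles are the roots of \(a(s) + K_{c}\,b(s)\), where b(s) and a(s) are the numerator and denominator of the open loop. The transfer function below is a hypothetical stand-in with the same structure as the amplitude loop (three OLHP poles, one OLHP zero), not the actual model:

```python
import numpy as np

# Hypothetical open loop H_open(s) = Kc (s + 3) / ((s + 1)(s + 2)(s + 10)).
b = np.array([1.0, 3.0])              # numerator polynomial
a = np.poly([-1.0, -2.0, -10.0])      # denominator: s^3 + 13 s^2 + 32 s + 20
b_padded = np.concatenate((np.zeros(len(a) - len(b)), b))

def closed_loop_poles(Kc):
    # 1 + Kc b(s)/a(s) = 0  <=>  a(s) + Kc b(s) = 0
    return np.roots(a + Kc * b_padded)

for Kc in [0.0, 1.0, 10.0, 100.0, 1000.0]:
    p = closed_loop_poles(Kc)
    print(Kc, np.all(p.real < 0))     # all branches stay in the OLHP
```

For Kc = 0, the roots are exactly the open-loop poles; as Kc grows, one branch approaches the open-loop zero, while the other two form a complex conjugate pair, mirroring the behavior described above.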
Please note that for a practical implementation, very large feedback gains Kc would not be recommendable for several reasons:
-
For sufficiently large gains, the complex pair p2, 3 dominates the dynamics of the loop, resulting in an unacceptable oscillatory behavior.
-
Large gains may increase disturbances, especially the measured noise.
-
The feedback of the real system may become unstable for very large gains due to unmodeled high-frequency dynamics and delays.
7.4.2 Bode Plots and Nyquist Criterion
The Hurwitz stability criterion is based on the characteristic equation, i.e., on the denominator polynomial of the closed-loop transfer function. The Bode plots and the Nyquist criterion are approaches that are different in the sense that they rely on the open-loop transfer function
of the standard feedback loop; cf. Fig. 7.2. Consider as an example the system
This system is assumed to have a real zero z1 ≠ 0, a real pole p1 ≠ 0, a complex pole pair p2 and p2∗, and N poles at s = 0. The frequency response of Hopen(s) is given by
The complex pole pair can also be written as
In a Bode diagram, the amplitude and phase of Hopen are plotted versus the frequency ω > 0. A logarithmic scale is used, which has the advantage that the multiplication of two transfer functions is equivalent to the sum of their Bode diagrams. The amplitude of Hopen in decibels (dB) is calculated as
In our example, using the properties of the logarithmic function leads to
This expression is the sum of five components. The first is the constant
The second function is due to the zero and can be approximated by two asymptotes:
The N-fold integrator leads to
For the pole p1, the result is similar to the case of zero z1, but with opposite signs:
Finally, the pole pair has the following asymptotes:
The phase of Hopen is given by
The phases \(\measuredangle H_{i}(j\omega )\) can be approximated by asymptotes in a similar way as shown for the amplitudes. For example, the zero leads to the phase
Figure 7.6 shows the Bode plots of the transfer functions Hi(j ω) with their asymptotes for a system with N = 1, positive gain K, and with the zero and poles in the OLHP, i.e., a stable system. The following observations can be made:
-
The gain H1(j ω) = K leads to an amplitude shift of the open-loop transfer function Hopen.
-
The zero z1 in the OLHP raises the amplitude and phase; cf. H2(j ω). At the frequency ω = | z1 |, the amplitude is close to \(3\,\mathrm{dB}\), and the phase equals π∕4. For large frequencies, the amplitude increases with \(20\,\mathrm{dB}\) per (frequency) decade, and the phase approaches π∕2.
-
The amplitude of the integrator H3(j ω) tends to infinity for small frequencies. This fact enables steady-state accuracy for the closed loop with regard to stepwise disturbances. However, the phase of \(-\pi /2\) may lead to stability problems in some cases. This can be shown with the Nyquist stability criterion, which will be presented below.
-
The pole p1 has the opposite effect to that of the zero z1. For large frequencies, the amplitude slope is \(-20\,\mathrm{dB}\) per decade, and the phase approaches \(-\pi /2\).
-
In terms of its asymptotes, the complex pole pair acts like a double pole with the break point ω = | p2 | . However, for frequencies close to | p2 | , a resonance may occur. This means that | H5(j ω) | may become considerably larger than 1. The frequency at which the maximum of | H5(j ω) | occurs can be calculated analytically, and it reads
$$\omega_{\mathrm{res}} = \sqrt{\mathrm{Im}\{p_{2}\}^{2} - \mathrm{Re}\{p_{2}\}^{2}} \approx 1.94,$$i.e., | Im{p2} | > | Re{p2} | is a necessary condition for a resonance. Disturbances or input signals with frequencies close to ωres will be amplified significantly in the open loop. A resonance in the open loop may be one reason why feedback is necessary. Feedback can provide additional damping, so that the resonance is not present in the closed-loop frequency response.
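The resonance condition can be checked numerically straight from a pole location. The pole value below is an assumed example for illustration, not the chapter's actual pole pair:

```python
import math

def resonance_frequency(p):
    """Resonant frequency of the complex pole pair p, conj(p):
    w_res = sqrt(Im(p)^2 - Re(p)^2). It is real (i.e., a resonant peak
    exists) only if |Im(p)| > |Re(p)|."""
    val = p.imag ** 2 - p.real ** 2
    if val <= 0:
        return None                  # no resonance
    return math.sqrt(val)

# Assumed example pole: p2 = -1 + 2j -> w_res = sqrt(3)
w_res = resonance_frequency(complex(-1.0, 2.0))
```

A heavily damped pair such as −2 ± j has |Re| > |Im| and therefore no resonant peak.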
For the Bode plot of the system with the transfer function Hopen, the Bode plots of the subsystems Hi have to be combined. As already shown, this simply corresponds to the sum of the amplitude and phase plots due to the use of a logarithmic scale. This also applies to the asymptotes. To sketch the asymptotes of the Bode plot of Hopen, one may therefore proceed as follows. First, the break points are calculated as the absolute values of the zeros and poles, i.e., ω = | zi(0) | and ω = | pi(0) | . The argument 0 for both zi and pi emphasizes that the open-loop zeros and poles are used. Next, one begins with the asymptote of the N-fold integrator H3. This asymptote is a line with slope \(-20N\,\mathrm{dB}\) per decade (of the frequency ω) that crosses the point with amplitude 20log10(K) at \(\omega = 1\,\mathrm{s^{-1}}\). For N = 0, the Bode plot begins with a horizontal asymptote. One then proceeds to higher frequencies, changing the slope of the asymptote at every break point: by \(-20\,\mathrm{dB}\) per decade for a single pole, by \(20\,\mathrm{dB}\) per decade for a single zero, and by the corresponding multiple of these slopes for multiple poles or zeros. For the phase plot, one begins with a horizontal asymptote at \(-N \frac{\pi } {2}\). At the break points, the asymptote changes stepwise: by \(-\frac{\pi }{2}\) for a single pole, by \(\frac{\pi }{2}\) for a zero, and by the corresponding multiple of \(\frac{\pi }{2}\) for multiple poles or zeros. For the amplitude feedback, this procedure leads to the asymptotes shown in Fig. 7.7 for Kc = 1. The exact Bode plot is shown as a solid black curve. The static open-loop gain equals
for Kc = 1. At ω = | p1(0) | , the first pole leads to a negative slope of \(-20\,\mathrm{dB}\) per decade. Next, the zero z1(0) raises the slope to zero, before the two remaining poles finally lead to a slope or cutoff rate of \(-40\,\mathrm{dB}\) per decade. The phase begins at zero and drops to
for large frequencies.
The frequency at which the amplitude has dropped by \(3\,\mathrm{dB}\) is called the cutoff frequency. It is denoted by \(\omega _{\mathrm{c}} = 2004\,\mathrm{s^{-1}}\) in Fig. 7.7 and is also called the bandwidth of the open-loop transfer function [1].
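The cutoff frequency can also be located numerically from the pole–zero data of a transfer function. The sketch below uses an assumed first-order example (not the chapter's amplitude loop) and bisects the magnitude response on a logarithmic frequency grid:

```python
import cmath, math

def freq_resp(omega, K, zeros, poles):
    """Evaluate H(j*omega) for H(s) = K * prod(s - z_i) / prod(s - p_i)."""
    s = 1j * omega
    num = K
    for z in zeros:
        num *= (s - z)
    den = 1
    for p in poles:
        den *= (s - p)
    return num / den

def cutoff_frequency(K, zeros, poles, w_lo=1e-3, w_hi=1e6):
    """Bisect (on a log scale) for the -3 dB point relative to |H(j0)|."""
    target = abs(freq_resp(w_lo, K, zeros, poles)) / math.sqrt(2.0)
    for _ in range(200):
        w_mid = math.sqrt(w_lo * w_hi)
        if abs(freq_resp(w_mid, K, zeros, poles)) > target:
            w_lo = w_mid
        else:
            w_hi = w_mid
    return w_mid

# First-order check: H(s) = 1000/(s + 1000) has its -3 dB point at 1000 rad/s
wc = cutoff_frequency(1000.0, [], [-1000.0])
```

The bisection assumes a monotonically decreasing magnitude beyond the passband, which holds for this low-pass example.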
Because the Bode plot contains all information about the open loop, there is a unique correspondence between this diagram and the transfer function Hopen(s). If the open loop is stable, the Bode plot can be obtained by measuring the frequency response Hopen(j ω). An equivalent diagram that is very useful for determining the stability of the closed loop is the Nyquist plot. It is obtained by plotting the curve
in the complex plane for \(\omega \in \mathbb{R}\). The Nyquist plot of the amplitude feedback example is shown in Fig. 7.8.
Due to
the part of the Nyquist plot for negative frequencies ω is always axially symmetric to the part for positive frequencies. For this reason, the Nyquist plot is usually analyzed only for positive frequencies. From the discussion of the Bode plot, it is already known that the Nyquist plot begins at Hopen(j0) = 0.99 and approaches the origin for large ω. Also, the phase approaches −π, as can be observed from the closeup view in Fig. 7.8. The vector
points from \(-1 + j0\) to the Nyquist plot, as shown in Fig. 7.8. Its behavior is essential for the stability of the closed loop. If we follow this vector from ω = 0 to ω → ∞, we can define the change of its argument as
The general Nyquist stability criterion can now be used to determine the stability of the closed loop:
Theorem 7.7.
The closed loop is asymptotically stable if and only if the continuous change of the argument as defined in Eq. (7.21) is equal to $$\Delta \varphi _{\mathrm{Nyquist}} = n_{\mathrm{unstable}}\,\pi + n_{\mathrm{critical}}\,\frac{\pi }{2},$$
where nunstable is the number of (unstable) open-loop poles in the ORHP, and ncritical is the number of open-loop poles on the imaginary axis.
(See, e.g., Unbehauen [14, p. 156].)
Only the continuous change in the argument is considered. If, for example, the Nyquist plot consists of several branches due to open-loop poles on the imaginary axis, then \(\Delta \varphi _{\mathrm{Nyquist}}\) can be determined for each branch separately, and the total change is the sum of these results.
Since the amplitude feedback system in our example contains only stable open-loop poles, a necessary and sufficient condition for stability is
as is the case for Kc = 1 in Fig. 7.8. Changing the gain Kc will only scale the Nyquist plot, as shown in Fig. 7.9. For positive gains Kc > 0, the closed loop will always be stable, because \(\Delta \varphi _{\mathrm{Nyquist}} = 0\). In the case of negative Kc, the Nyquist plot is also rotated by 180∘, and the critical point \(-1 + j0\) is crossed for
and the change in the argument is \(\Delta \varphi _{\mathrm{Nyquist}} = +\pi\). Thus, the closed loop is unstable for Kc < −1.01, a result already obtained with the Hurwitz criterion.
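The continuous change of the argument required by Theorem 7.7 can be approximated numerically by sweeping ω and unwrapping the phase of 1 + Hopen(j ω). The triple-pole test systems below are assumptions chosen for illustration, not the amplitude feedback loop itself:

```python
import cmath, math

def argument_change(H, w_start=1e-4, w_stop=1e4, n=20000):
    """Continuous change of arg(1 + H(j*omega)) as omega sweeps w_start..w_stop.

    The phase is unwrapped step by step (log-spaced grid), giving the
    continuous argument change used in the Nyquist criterion."""
    total = 0.0
    prev = cmath.phase(1 + H(1j * w_start))
    for k in range(1, n + 1):
        w = w_start * (w_stop / w_start) ** (k / n)
        cur = cmath.phase(1 + H(1j * w))
        d = cur - prev
        if d > math.pi:          # unwrap jumps across the branch cut
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return total

# Assumed stable example: Hopen(s) = 4/(s+1)^3 -> change ~ 0 (closed loop stable)
H_stable = lambda s: 4.0 / (s + 1) ** 3
# Assumed unstable example: Hopen(s) = 20/(s+1)^3 -> change of -2*pi
H_unstable = lambda s: 20.0 / (s + 1) ** 3
```

Since both test systems have all open-loop poles in the OLHP, the criterion reduces to requiring a zero argument change, which only the low-gain case satisfies.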
7.4.3 Time Delay
If the feedback loop contains a considerable time delay Td, this can be taken into account in the Laplace transform of the open loop Hopen(s). If, for example, the measurement of the output y(t) is delayed, this leads to
Due to the shift theorem of the Laplace transform, every open loop with a single delay can be expressed by
The consequence of the exponential function is that the characteristic equation of the closed loop is no longer an algebraic equation, but a transcendental one. The number of poles becomes infinite, and the stability analysis is thus more involved. Fortunately, the Nyquist criterion can still be applied [15]. For the frequency response,
holds, i.e., the delay leads to a faster decrease of the phase, but does not affect the amplitude. Figure 7.10 shows the Nyquist plot of the amplitude feedback with the nominal feedback gain Kc = 14.9 and an additional time delay of \(T_{\mathrm{d}} = 5\,\upmu\mathrm{s}\). This time delay is a worst-case scenario for signal transit times due to a distance of about \(100\,\mathrm{m}\) between the cavity and the LLRF unit [12]. The closeup shows that the closed loop is still stable, but not for arbitrary Kc > 0. The Nyquist plot crosses the horizontal axis at −0.237. Increasing the gain Kc by a factor of
will therefore lead to a crossing of the critical point \(-1 + j0\) and to instability. This factor is called the amplitude margin; it is a measure of the variations in the amplitude of the process transfer function that can be tolerated. The larger the amplitude margin, the more robust the feedback is against such variations. In addition, Fig. 7.10 shows that the Nyquist plot crosses the unit circle at an angle of about −83∘. The frequency of this crossing is \(\omega = 34.2 \cdot 10^{3}\,\mathrm{s^{-1}}\). The phase margin
is defined as the distance to the critical point in terms of the phase, i.e., the tolerable variation in the phase of the process transfer function. A simple estimate (see note 6) shows that an additional time delay of \(T_{\mathrm{d}} = 50\,\upmu\mathrm{s}\) would lead to a phase decrease of
i.e., the feedback will remain stable for time delays up to this order of magnitude.
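These margin estimates are easy to reproduce numerically. The sketch below uses the values quoted in the text (horizontal-axis crossing at −0.237, unit-circle crossing at ω = 34.2 · 10³ s⁻¹) together with the fact that a pure delay subtracts ωTd from the phase:

```python
import math

def phase_lag_from_delay(omega, T_d):
    """Phase decrease (rad) that a pure delay exp(-s*T_d) adds at omega.

    A delay leaves |H_open(j*omega)| unchanged and subtracts omega*T_d
    from the phase."""
    return omega * T_d

# Values quoted in the text for the amplitude loop:
omega_cross = 34.2e3                       # unit-circle crossing, 1/s
pm_loss_deg = math.degrees(phase_lag_from_delay(omega_cross, 50e-6))

amplitude_margin = 1.0 / 0.237             # tolerable gain increase factor
am_db = 20.0 * math.log10(amplitude_margin)
```

The 50 μs delay costs about 98° of phase, i.e., roughly the entire phase margin of 180° − 83° = 97°, which is why delays of this order of magnitude mark the stability limit.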
7.4.4 Steady-State Accuracy
The standard closed loop in Fig. 7.2 on p. 341 is said to have no steady-state error if
is guaranteed, i.e., if the measured value converges to the reference value. From Fig. 7.2, the following expression for the steady-state error can be obtained:
In the following, it is assumed that all transfer functions in this expression are stable, i.e., have only poles in the OLHP. In this case, we can use the final-value theorem for Laplace transforms (cf. Sect. 2.2). Without disturbances, this leads to
It is now particularly important which type of reference signal yr(t) is assumed. For a step function, we have \(Y _{\mathrm{r}}(s) = K/s\) and (see note 7)
This shows that an integrator (1∕s) in the feedback loop—in the controller, the process, or the measurement transfer function—is sufficient for a vanishing steady-state error. For other reference signals, this may not be sufficient. For example, a ramp signal (1∕s2) requires at least two integrators in the transfer functions of the feedback loop. However, too many integrators may lead to stability problems, because each integrator lowers the phase of the open-loop transfer function by \(-\pi /2\).
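The effect of the static open-loop gain on the steady-state error follows directly from the final-value theorem; the gain values below are illustrative assumptions:

```python
def steady_state_error(step_size, H_open_at_0):
    """Final-value theorem for a step reference of height step_size:
    e(inf) = step_size / (1 + H_open(0))."""
    return step_size / (1.0 + H_open_at_0)

# Without an integrator, the error stays finite (assumed gain of 4):
e_no_int = steady_state_error(1.0, 4.0)          # 20 % of the step height
# One integrator drives H_open(0) -> infinity, so the error vanishes:
e_int = steady_state_error(1.0, float("inf"))
```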
If significant disturbances are present, it is usually necessary that the integrator be contained in the controller, as can be seen from the other transfer functions in Eq. (7.22). Assuming that the process and measurement transfer functions have no integrator, Hp(0) and Hm(0) are finite, and an integral controller will lead to xe(∞) = 0 for stepwise disturbances.
7.5 Feedback Design
7.5.1 Tradeoff Between Performance and Robustness
The transfer function Hp(s) in Fig. 7.2 on p. 341 usually describes the physical behavior of the real process only approximately. Reasons for model errors can be nonlinearities, dependence on time or operating conditions, and unmodeled high-frequency dynamics. In many cases, the model errors may be described by parameter variations in the numerator and denominator of the transfer function Hp(s). These variations will lead to a change in performance of the closed-loop control. To estimate this effect, the sensitivity function
is defined as the relative change of the closed-loop transfer function Hry(s) with respect to variations of the process transfer function Hp(s). With Eq. (7.15), this leads to
and finally to the sensitivity function
This is exactly the disturbance-to-output transfer function Hdy(s) (cf. Eq. (7.16)) that was derived from Fig. 7.2. It is apparent that a sufficiently large feedback gain | Hc | will lead to both a small sensitivity | Hs | and good disturbance rejection. However, a large feedback gain decreases the amplitude margin AM in many cases and may lead to instability. This shows that a tradeoff between performance and robustness specifications is usually necessary. Note that for the open-loop system, Hc = 0 and the sensitivity equals 1. For the closed-loop system, | Hs | also approaches 1 for large frequencies, because in most practical cases, | HpHcHm | tends to zero.
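A minimal numerical sketch of the sensitivity function, assuming a simple first-order loop transfer function (the gain and corner frequency are assumptions, not the chapter's model):

```python
import math

def sensitivity(L):
    """S(s) = 1/(1 + L(s)): relative sensitivity of the closed loop and, at
    the same time, the disturbance-to-output transfer function."""
    return lambda s: 1.0 / (1.0 + L(s))

# Assumed loop: static gain 14.75, corner frequency 2000 rad/s
L = lambda s: 14.75 / (1.0 + s / 2000.0)
S = sensitivity(L)

s0 = abs(S(0))              # low frequency: strong disturbance rejection
s_hi = abs(S(1j * 1e7))     # high frequency: |S| -> 1 (no rejection)
s0_db = 20.0 * math.log10(s0)
```

With this assumed static loop gain, S(0) ≈ 6.3 %, illustrating how a finite loop gain leaves a residual DC sensitivity.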
For our amplitude feedback example, the sensitivity function is equal to
with
Its amplitude | Hs(j ω) | is shown in Fig. 7.11. In contrast to | Hopen,delay(j ω) | , the amplitude of the sensitivity function depends on the time delay.
The sensitivity shows that the amplitude feedback rejects disturbances or noise with frequency components up to about \(10\,\mathrm{kHz}\). The closed loop is also less sensitive with respect to model variations than the open loop in this frequency range. However, the sensitivity is not zero for ω → 0. This implies that the closed loop does not reject DC offsets completely and may thus have a steady-state error. This can be shown as follows. From the standard feedback loop, the control error can be calculated as
If we assume that the closed loop is stable and the reference signal is equal to a unit step, i.e., \(Y _{\mathrm{r}} = 1/s\), then the final value of the control error is given by
Thus, the value of the sensitivity function for ω = 0 is equal to the relative steady-state error of the closed-loop system. For the amplitude feedback loop, a value of 6.4%, or \(-23.9\,\mathrm{dB}\), is obtained. This steady-state error will also be apparent in the simulation results in the next section.
7.5.2 Design Goals and Specifications
The main design goals of feedback are stability, a fast dynamic response, disturbance rejection, a small tracking error, and robustness against parameter variations. In addition, the control effort should comply with the physical limitations of the process. There exist several parameters to describe these specifications quantitatively. In the time domain, the response to a step disturbance or reference signal is often considered, and the following quantities are used to describe the dynamic response:
-
Rise time: transit time from 10% to 90% of the final value, i.e., of the output step size.
-
Percentage of overshoot.
-
Settling time: time after which the output stays inside a ± 5% or ± 2% interval around the final value.
-
Steady-state error between the reference signal and the output.
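The time-domain quantities above can be extracted from any sampled step response. The sketch below applies them to the analytic step response of a second-order lag with damping ratio 0.5, an assumed example with a well-known overshoot of about 16 %:

```python
import math

def step_metrics(y, t, y_final, band=0.05):
    """Rise time (10 % -> 90 %), overshoot in percent, and settling time
    (last exit from the +/- band around y_final) of a sampled step response."""
    t10 = next(ti for ti, yi in zip(t, y) if yi >= 0.1 * y_final)
    t90 = next(ti for ti, yi in zip(t, y) if yi >= 0.9 * y_final)
    overshoot = 100.0 * (max(y) - y_final) / y_final
    t_settle = 0.0
    for ti, yi in zip(t, y):
        if abs(yi - y_final) > band * y_final:
            t_settle = ti
    return t90 - t10, overshoot, t_settle

# Analytic step response of G(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2):
zeta, wn, dt = 0.5, 1.0, 1e-3
wd = wn * math.sqrt(1.0 - zeta ** 2)
t = [k * dt for k in range(40000)]
y = [1.0 - math.exp(-zeta * wn * ti)
     * (math.cos(wd * ti) + zeta / math.sqrt(1.0 - zeta ** 2) * math.sin(wd * ti))
     for ti in t]
rise, os_pct, t_settle = step_metrics(y, t, 1.0)
```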
The performance of the amplitude feedback example is shown in Fig. 7.12. The curve \(\hat{V }_{\mathrm{gap,det}}\) is obtained from a simulation model from [12], which is in good agreement with measurements. The reference signal \(\hat{V }_{\mathrm{ref}}\) is initially raised from zero to \(1\,\mathrm{V}\). Due to a prefilter with a time constant of \(43\,\upmu\ \mathrm{s}\), the reference signal is raised not stepwise, but smoothly. The simulation model includes not only the amplitude feedback, but also a resonance frequency feedback to ensure that the cavity is in resonance. At the beginning of the simulation, the resonance frequency feedback has to settle and has a strong coupling with \(\hat{V }_{\mathrm{gap}}\). At \(t \approx 3\,\mathrm{ms}\), both feedback loops have reached their equilibrium.
The amplitude feedback is excited at \(t = 3.5\,\mathrm{ms}\) with a stepwise disturbance of the measurement \(\hat{V }_{\mathrm{gap,det}}\). The dynamic response of the simulation model is compared to the response of the linear closed loop Hry(s) with Td = 0 (Fig. 7.12, bottom left). This shows that the transfer function Hry(s) describes the behavior very well for small deviations from equilibrium. From the simulation results, a rise time of \(73\,\upmu\ \mathrm{s}\), a 5% settling time of \(103\,\upmu\ \mathrm{s}\), and a steady-state error of 6. 4% are obtained.
At \(t = 4.5\,\mathrm{ms}\), the cavity is detuned, so that the gap voltage drops by about \(0.5\,\mathrm{kV}\). This time, the simulation model shows a different behavior due to the interaction of the resonance frequency feedback with the amplitude feedback. This demonstrates that nested control loops are dynamically coupled in general. If the coupling is strong, it is necessary to take this into account during the analysis and design of the feedback. Nested control loops can be described by MIMO or multivariable control systems [16].
In addition to the mentioned parameters, there also exist specifications in the frequency domain:
-
Resonant peak: the maximum of the closed-loop frequency response | Hry(j ω) | indicates relative stability and is recommended to be between 1.1 and 1.5 [1].
-
Bandwidth: the frequency at which | Hry(j ω) | has decreased by \(3\,\mathrm{dB}\) with respect to the zero-frequency value.
-
Cutoff rate: the slope of | Hry | at high frequencies.
-
Amplitude margin and phase margin (cf. Sect. 7.4.3): an AM larger than \(6\,\mathrm{dB}\) and a PM between 30∘ and 60∘ are regarded as a good tradeoff between robustness and performance [1].
In our example, the bandwidth of Hry(j ω) equals \(30.3 \cdot 10^{3}\,\mathrm{s^{-1}}\) (which corresponds to \(\Delta f = 4831\;\mathrm{Hz}\)), and the cutoff rate is \(-20\,\mathrm{dB/decade}\).
7.5.3 PID Control
A general proper PID control algorithm is given by
it is a combination of a proportional, an integral, and a derivative controller. The transfer function can also be written as
it has two zeros and two poles. A pure derivative is obtained for TD = 0. However, this leads to an improper transfer function. In the time domain, the controller is described by the differential equation
In steady state, the control error xe must be zero due to the integration.
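A discrete-time sketch of such a proper PID law, with the derivative term filtered by the time constant TD. The class names and the simple Euler discretization are choices of this sketch, not the chapter's implementation:

```python
class PID:
    """Discrete PID with a first-order filter (time constant TD) on the
    derivative term, so that the overall controller stays proper."""

    def __init__(self, KP, KI, KD, TD, dt):
        self.KP, self.KI, self.KD, self.TD, self.dt = KP, KI, KD, TD, dt
        self.integral = 0.0
        self.d_state = 0.0       # filtered derivative state
        self.prev_e = 0.0

    def update(self, e):
        self.integral += self.dt * e
        # first-order low-pass applied to the raw difference quotient
        raw_d = (e - self.prev_e) / self.dt
        a = self.dt / (self.TD + self.dt)
        self.d_state += a * (raw_d - self.d_state)
        self.prev_e = e
        return self.KP * e + self.KI * self.integral + self.KD * self.d_state
```

With KD = 0 the controller reduces to a PI law: for a constant error, only the integral term grows from sample to sample.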
The controller of the amplitude feedback example is of PDT1 type. This can be shown as follows. A general PDT1 controller can be written as
With
we obtain the amplitude controller that is shown in Fig. 7.4.
To design a general PID controller, it is necessary to determine the four degrees of freedom KP, KD, KI, and TD so that the specifications are met. If the open-loop system is stable, the two zeros of Hc(s) may be used to compensate open-loop poles. The time constant TD should not be chosen too small, because that would amplify high-frequency noise.
Several so-called tuning rules exist for the design of PI and PID controllers [5]. A simple tuning rule is described in [16] that is based on the approximation of the process transfer function with a first-order model
with the gain K, the time constant T, and a time delay Td. For a PI controller, the tuning rule is (cf. [16, p. 57])
with a single tuning parameter Ttune. A small value of this parameter will lead to fast output performance, whereas a large value implies a high robustness and smaller values of the input. A typical tradeoff is the choice Ttune = Td.
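A sketch of this tuning rule in the SIMC form of Skogestad and Postlethwaite; the exact constants should be checked against [16], and the process parameters below are purely illustrative:

```python
def simc_pi(K, T, T_d, T_tune):
    """PI tuning for a process K*exp(-T_d*s)/(T*s + 1) (assumed SIMC form:
    KP = T/(K*(T_tune + T_d)), TI = min(T, 4*(T_tune + T_d)))."""
    KP = T / (K * (T_tune + T_d))
    TI = min(T, 4.0 * (T_tune + T_d))
    KI = KP / TI
    return KP, KI

# Illustrative numbers with the typical tradeoff T_tune = T_d:
KP, KI = simc_pi(K=2.0, T=1e-3, T_d=5e-6, T_tune=5e-6)
```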
This tuning rule can be applied to the amplitude feedback loop example. From Fig. 7.4, the open-loop transfer function
is obtained. For this type of transfer function, the following first-order approximation may be used; cf. [16, p. 58]:
With Ttune = Td as the choice of the tuning parameter, the coefficients of the resulting PI controller are
The settling time of the linear amplitude feedback with this controller is \(16.4\,\upmu\mathrm{s}\) for a 5% interval around the set point. This is considerably faster than with the PDT1 controller. Furthermore, the PI controller leads to zero steady-state error. Note, however, that for the design in this section, we have neglected any interaction of the amplitude loop with the resonance frequency feedback loop.
For the practical implementation of a PID controller, some issues should be taken into account. If the process is stable, it is often sufficient to use a PI controller. Derivative action, i.e., KD ≠ 0, will lead to an increased sensitivity with respect to measurement noise. If the reference signal yr(t) contains steps and a derivative action is needed, it is usually better to use the measured output ym(t) as input of the derivative part of the controller instead of the control error xe(t); cf. [16, p. 56] and [5, p. 317]. One challenge for the integral action is the so-called integrator windup [5], a nonlinear effect.
We can illustrate this effect by means of Fig. 7.3. We assume that the controller has integral action and generates a value that exceeds the constraints of the subsequent saturation function. In this case, the output of the feedback will be a constant value as long as the saturation function is active. This may be interpreted as a feedback loop that is no longer closed, because the output of the controller does not depend on the control error. The integral controller will, however, continue to integrate the control error, and this may result in a poor overall feedback performance. Measures that prevent windup are known as antiwindup.
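One standard antiwindup measure is back-calculation: while the saturation is active, the integrator is driven back by the difference between the saturated and the raw controller output. A minimal PI sketch; the tracking time constant Tt and all numeric values are assumptions:

```python
def saturate(u, u_min, u_max):
    return max(u_min, min(u_max, u))

class PIAntiWindup:
    """PI controller with back-calculation antiwindup: the integrator input
    is corrected by (u_sat - u_raw)/Tt, so it stops winding up while the
    actuator is saturated."""

    def __init__(self, KP, KI, u_min, u_max, dt, Tt=0.01):
        self.KP, self.KI = KP, KI
        self.u_min, self.u_max, self.dt, self.Tt = u_min, u_max, dt, Tt
        self.integral = 0.0

    def update(self, e):
        u_raw = self.KP * e + self.KI * self.integral
        u = saturate(u_raw, self.u_min, self.u_max)
        # back-calculation term bleeds the integrator during saturation
        self.integral += self.dt * (e + (u - u_raw) / self.Tt)
        return u
```

For a persistent error of 1 with output limits ±1, the plain integrator would grow without bound, whereas the back-calculation keeps it at a small equilibrium value.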
7.5.4 Stability Issues for Nonlinear Systems
As described in Sect. 7.1.3, almost every practical feedback system is, in fact, a nonlinear system
where \(\vec{y}_{\mathrm{m}}\) is the output vector with the measured quantities of the process. A common approach is to calculate the linearization
of the system for a certain equilibrium and to use it for the analysis or design of a linear controller so that the closed-loop behavior is stable. This approach has also been chosen in the previous sections. An important question that now arises is whether the linear controller will also be able to stabilize the nonlinear system. The stability theory of Lyapunov that was described in Sect. 2.8.5 is useful to obtain some conclusions concerning this question. In order to use the theory of Lyapunov, it is necessary to analyze the feedback loop in the time domain, because the frequency domain approach is in general not applicable to nonlinear systems.
Consider first a very general linear controller in state-space representation
where \(\vec{x}_{\mathrm{e}} = \Delta \vec{y}_{\mathrm{r}} - \Delta \vec{y}_{\mathrm{m}}\) denotes the vector with measured control errors, \(\vec{u}_{\mathrm{c}}\) is the actuator value that can be used as input to the process (i.e., \(\Delta \vec{u} =\vec{ u}_{\mathrm{c}}\)), and \(\vec{x}_{\mathrm{c}}\) contains the internal states of the controller. This type of controller is also known as a dynamic output feedback, because the controller has a dynamic structure and it uses the output vector \(\vec{y}_{\mathrm{m}}\) as the only information about the process. This type of controller also contains the PID controller as a special case: rewriting the transfer function (7.23) as the sum of a constant and a remaining polynomial leads to
Using the results of Sect. 7.1.2 and taking the additional direct feedthrough into account leads to the following state-space representation of the controller:
This is a dynamic output feedback. Note that the case of a pure derivative controller (TD = 0) is not included in this representation (see note 8). Due to Eq. (7.26), the transfer function of the controller can be obtained by
Connecting the controller (7.26) with system (7.25) (i.e., by \(\Delta \vec{u} =\vec{ u}_{\mathrm{c}}\)) leads directly to the following dynamics of the closed loop:
We assume that the controller is designed properly, so that the closed-loop dynamics are stable. According to the results of Sect. 7.1.5, this is the case if Acl has only eigenvalues with negative real parts.
After the controller design, the controller will be connected to the real nonlinear process. One possible choice for the input of the nonlinear system (7.24) is then
where \(\vec{u}_{\mathrm{F}}\) is a feedforward value that equals the input value at the equilibrium point \(\vec{x} =\vec{ x}_{\mathrm{F}}\). In other words,
is assumed, and the controller has only to correct deviations from the equilibrium. The control error is now given by
These choices of the closed-loop connection lead to the following dynamics:
A linearization around \(\vec{x} =\vec{ x}_{\mathrm{F}}\), \(\vec{u} =\vec{ u}_{\mathrm{F}}\), \(\vec{x}_{\mathrm{c}} = 0\), and \(\vec{y}_{\mathrm{r}} =\vec{ v}_{2}(\vec{x}_{\mathrm{F}})\) leads to the same linear dynamics as Eq. (7.27). This is reasonable, because it means that the same result is obtained either by linearizing the nonlinear closed-loop dynamics or by using the linearization (7.25) of the open-loop system (7.24) to obtain the linear closed-loop model (7.27).
We already assumed that Eq. (7.27) is stable, and we can now use Theorem 2.18. For \(\Delta \vec{y}_{\mathrm{r}} = 0\) and the previous assumption of a strictly stable matrix Acl (the real parts of all eigenvalues are negative), the theorem can be applied to Eq. (7.28), and the consequence is a stable equilibrium of the nonlinear setup. This is an important motivation for using linear control design in many cases, even for systems that are practically nonlinear.
Note, however, that the linear system (7.27) is asymptotically stable in the global sense, i.e., for arbitrary initial values, whereas in general, the asymptotic stability of the nonlinear system (7.28) is given only in a local neighborhood around the equilibrium. This neighborhood, also called a region of attraction, may be so small that from a practical point of view, the equilibrium is in fact unstable. The size of the region of attraction can be estimated using Lyapunov functions as defined in Sect. 2.8.5.
A nonzero reference value \(\Delta \vec{y}_{\mathrm{r}}\neq 0\) acts as an excitation. As long as it is not too large, the closed loop will be stable.
If further disturbances act on the system (7.24) or the model is inaccurate, this may lead to a steady-state error. In most cases, an integral controller will help to avoid such an error. A pure integral controller can be written as
Therefore, Ac = 0, Bc = 1, Cc = KI, and Dc = 0. The closed-loop dynamics for a SISO system are then
and from the bottom row, we have the equilibrium
and the steady-state error will therefore tend to zero for stepwise reference signals.
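For a first-order SISO plant with a pure integral controller, the closed-loop matrix Acl and its eigenvalues can be written down explicitly. The plant parameters below are assumed for illustration:

```python
import cmath

def closed_loop_eigs(a, b, c, KI):
    """Eigenvalues of the closed loop formed by dx/dt = a*x + b*u, y = c*x
    and the integral controller dxc/dt = yr - y, u = KI*xc.

    A_cl = [[a, b*KI], [-c, 0]] has the characteristic polynomial
    s^2 - a*s + b*KI*c."""
    disc = cmath.sqrt(a * a - 4.0 * b * KI * c)
    return (a + disc) / 2.0, (a - disc) / 2.0

# Assumed stable plant (a = -2, b = c = 1) with moderate integral gain:
e1, e2 = closed_loop_eigs(-2.0, 1.0, 1.0, 2.0)
stable = e1.real < 0 and e2.real < 0
```

Here both eigenvalues are −1 ± j, so the integral feedback preserves stability while removing the steady-state error for stepwise references.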
Notes
- 1.
The assumption is made that the step response \(y_{\Theta }(t)\) is continuous at t = 0, continuously differentiable for t > 0, and zero for t < 0. However, the proof is also possible if \(y_{\Theta }(t)\) is piecewise analytic for t > 0 and zero for t < 0 [10].
- 2.
In the following, we write \(\vec{x}(0)\) instead of \(\vec{x}(0+)\) because we assume that the value at t = 0 is defined by the limit t → 0 for positive values of t.
- 3.
The cofactor matrix of A is a matrix that consists of the (i, k) minors of A multiplied by the factor \((-1)^{i+k}\). The adjugate matrix of A is the transpose of the cofactor matrix of A.
- 4.
Due to the output impedance of the tetrode, this value is about one-half the pure cavity impedance specified in Table 4.1 on p. 198.
- 5.
The root locus is usually obtained by a numerical calculation of the closed-loop poles for different values of the gain.
- 6.
Because the amplitude does not depend on the time delay, the crossing of the unit circle always occurs at the same frequency.
- 7.
Note that \(1 + H_{\mathrm{open}}(0) = 0\) is impossible, since that would imply that s = 0 would be a pole, and this has been excluded by considering stable transfer functions.
- 8.
This is, however, not a serious limitation, since a pure derivative is both undesirable in the presence of noise and not realizable on physical hardware.
References
W.S. Levine (ed.), The Control Handbook: Control System Fundamentals, 2nd edn. (CRC Press/Taylor & Francis Group, West Palm Beach/London, 2011)
J.W. Polderman, J.C. Willems, Introduction to the Mathematical Theory of Systems and Control (Springer, New York, 1998)
J. Doyle, B. Francis, A. Tannenbaum, Feedback Control Theory (Macmillan, New York, 1990)
E.D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems. Textbooks in Applied Mathematics, Number 6, 2nd edn. (Springer, New York, 1998)
K.J. Aström, R.M. Murray, Feedback Systems: An Introduction for Scientists and Engineers (Princeton University Press, Princeton, 2012)
P. Baudrenghien, Low-level RF, in CAS - CERN Accelerator School: RF for Accelerators, Ebeltoft, 8–17 June 2010, pp. 341–367
D. Lens, Modeling and control of longitudinal single-bunch oscillations in heavy-ion synchrotrons. Fortschrittberichte VDI, Reihe 8, Mess-, Steuerungs- und Regelungstechnik; Nr. 1209; Dissertation, Technische Universität Darmstadt, 2012
K.J. Aström, B. Wittenmark, Computer-Controlled Systems (Prentice Hall, Englewood, 1997)
O. Föllinger, Laplace- und Fourier-Transformation (Hüthig, Heidelberg, 1990)
O. Föllinger, Regelungstechnik: Einführung in die Methoden und ihre Anwendung (Hüthig Buch, Heidelberg, 1990)
G. Ludyk, Theoretische Regelungstechnik 1 (Springer, Berlin, 1995)
U. Hartel, Modellierung des Regelungs- und Steuerungssystems einer Beschleunigungseinheit für Synchrotrons. Diplomarbeit, Technische Universität Darmstadt, Darmstadt, 2011
F.R. Gantmacher, Matrizentheorie (Deutscher Verlag der Wissenschaften, Berlin, 1986)
H. Unbehauen, Regelungstechnik I, 15. Auflage (Vieweg+Teubner Verlag, Wiesbaden, 2008)
O. Föllinger, Zur Stabilität von Totzeitsystemen, Regelungstechnik, S. 145–149 (1967)
S. Skogestad, I. Postlethwaite, Multivariable Feedback Control: Analysis and Design (Wiley, London, 2005)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits any noncommercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if you modified the licensed material. You do not have permission under this license to share adapted material derived from this chapter or parts of it.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2015 The Author(s)
Klingbeil, H., Laier, U., Lens, D. (2015). Closed-Loop Control. In: Theoretical Foundations of Synchrotron and Storage Ring RF Systems. Particle Acceleration and Detection. Springer, Cham. https://doi.org/10.1007/978-3-319-07188-6_7. Print ISBN 978-3-319-07187-9; Online ISBN 978-3-319-07188-6.