Abstract
Linear time-periodic systems arise whenever a nonlinear system is linearized about a periodic trajectory. Examples include anisotropic rotor-bearing systems and parametrically excited systems. The structure of the solution to linear time-periodic systems is known due to Floquet’s Theorem. We use this information to derive a new norm which yields two-sided bounds on the solution, and in this norm vibrations of the solution are suppressed. The obtained results generalize known results for linear time-invariant systems. Since Floquet’s Theorem is non-constructive, the applicability of the aforementioned results suffers in general from an unknown Floquet normal form. Hence, we discuss trigonometric splines and spectral methods that are both equipped with rigorous bounds on the solution. The methodology differs systematically between the two methods. While in the first method the solution is approximated by trigonometric splines and the upper bound depends on the approximation quality, in the second method the linear time-periodic system itself is approximated and its solution is represented as an infinite series. Depending on the smoothness of the time-periodic system, we formulate two upper bounds which incorporate the approximation error of the linear time-periodic system and the truncation error of the series representation. Rigorous bounds on the solution are necessary whenever reliable results are needed; hence they can support the analysis, and, e.g., stability or robustness of the solution may be proven or falsified. The theoretical results, including the trigonometric spline bounds and the spectral bounds, are illustrated and compared by means of three examples that include an anisotropic rotor-bearing system and a parametrically excited cantilever beam.
1 Introduction
To analyze the vibration behavior of a system completely, one has to consider all its components individually. For large-scale systems such a detailed analysis is often not feasible; hence, all system components are combined into a single quantity, e.g., the Euclidean norm or any other norm. This simplification is a rough measure of the vibration behavior of the system and therefore does not show its exact behavior. In this paper, we derive bounds on the norm of the solution of linear time-periodic systems. We investigate various norms, and with the respective bounds on the solution, the vibration behavior of the system and its transient analysis can be supported and, e.g., stability and robustness can be analyzed.
Linear time-invariant systems arise in many fields of application, e.g., via linearization of vibrational systems [33], and have been an active area of research. Their solution is given by the matrix exponential, and it can be evaluated numerically by methods for ordinary differential equations, e.g., Runge–Kutta methods [12], or by computing the matrix exponential directly [13, 21]. Two-sided bounds for the solution of linear time-invariant systems have been investigated in a series of papers [17, 18]. A time-varying system in general does not possess a closed-form solution (unless the system matrix commutes for any two times). Hence, the theory derived in [17, 18] cannot easily be extended to a general linear time-varying system on an infinite time horizon. In this paper, we investigate linear time-periodic systems and generalize the theory of bounds to their solutions, using the solution structure given by Floquet’s theory [8]. In general, Floquet’s normal form is non-constructive, hence it must be approximated by numerical methods, e.g., as in [30, 31]. As long as the approximation is not exact, it involves an error, and the bounds on the solution of the approximated system then may not be valid w.r.t. the solution of the original linear time-periodic system. In [29], the stability of a linear time-periodic system is analyzed by an approximation approach with quadratic polynomials. We generalize this idea in three different ways. Firstly, we use trigonometric splines [27, 28], which can be seen as a natural choice for time-periodic systems, since they mimic the time-periodicity. Here, we show bounds on the solution for quadratic trigonometric splines. In principle, bounds can be derived for higher orders as well, as long as the method converges, see e.g., [25]; however, spline approximations of order 4 or larger are divergent [20].
Secondly, we do not limit ourselves to quadratic polynomials but use a general framework such that the polynomial approximation can be performed with any desired degree by Chebyshev projections. In [30, 31], numerical methods based on Chebyshev projections have already been considered to solve linear time-periodic systems. We generalize the integration [30] and differentiation [31] schemes by a general framework. Here, we do not approximate the solution but the time-varying system matrix by Chebyshev polynomials [5]. We use results from approximation theory [36] in order to obtain bounds for the approximated system. The solution of the approximated system is entire, and it has an infinite series representation. Hence, it can be truncated, and a bound on the truncation error is derived. Within this approximation framework we show that the truncated solution of the approximated system converges to the original solution of the linear time-periodic system, which is an important extension of the work in [30, 31]. Thirdly and most importantly, the trigonometric splines and the Chebyshev approximation framework yield rigorous bounds on the solution of a linear time-periodic system, i.e., we do not only approximate the solution by the aforementioned methods but obtain bounds on the solution as well. These bounds essentially behave like the approximated solutions, i.e., they converge to the original solution at the same rate. Transient analysis of the original linear time-periodic system can be supported by stability and robustness analysis of the aforementioned bounds due to their rigorousness. The ideas and bounds for trigonometric splines and Chebyshev projections can also be applied to general time-varying systems over a finite time interval.
The paper is structured as follows. In Sect. 3, rigorous bounds are obtained due to the structure of the solution. In Sect. 3.1 we summarize results for linear time-invariant systems [18, 19]. Two-sided bounds on the solution, obtained with the differential calculus of norms, e.g., in [15–17], are shown. In Sect. 3.2 we generalize the results from time-invariant to time-periodic systems. Here, the matrix logarithm of the monodromy matrix w.r.t. the length of the period takes the role of the time-invariant coefficient matrix. A newly defined time-dependent norm yields two-sided bounds as well as properties such as decoupling, vibration suppression and monotonicity of the solution. This is a generalization of the time-invariant case in [19]. We use and explain two methods to solve the linear time-periodic system. The first one is described in Sect. 4, where we approximate the solution of the system by trigonometric splines following ideas in [20, 24, 25] and then establish bounds on the quality of the approximation. The second method is the so-called spectral method [11, 26], which is explained in the setting of polynomial approximation of linear ordinary differential equations [10] in Sect. 5. We derive an upper bound based on the approximation quality and show its convergence to the solution of the linear time-periodic system. We conclude our theory on rigorous bounds for time-periodic systems with some remarks about convergence and computational complexity and show its effectiveness in Sect. 6 on various examples, which include an anisotropic rotor-bearing system and a parametrically excited cantilever beam.
2 Preliminaries
A linear time-periodic system is a set of linear ordinary differential equations (ODEs) with time-periodic coefficients of period T and a given initial condition,

$$\begin{aligned} \dot{x}(t)=A(t)x(t), \quad A(t+T)=A(t), \quad x(0)=x_0, \end{aligned}$$ (1)

where \(x :{\mathbb {R}}\rightarrow {\mathbb {R}}^{n}\) and \(A:{\mathbb {R}} \rightarrow {\mathbb {R}}^{n \times n}\).
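For concreteness, a classical system of this form (not among the examples treated later in the paper) is the undamped Mathieu equation; assuming the standard first-order rewriting with \(x=[q,\dot{q}]^T\),

$$\begin{aligned} \ddot{q}(t) + \left( \delta + \varepsilon \cos (t)\right) q(t) = 0 \quad \Longleftrightarrow \quad \dot{x}(t) = \begin{bmatrix} 0 &{} 1 \\ -(\delta +\varepsilon \cos (t)) &{} 0 \end{bmatrix} x(t), \end{aligned}$$

with \(A(t+2\pi )=A(t)\), i.e., \(T=2\pi \).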
Throughout this paper, we denote with \({\mathcal {C}}(X,Y)\) the space of continuous functions and with \({\mathcal {C}}^k(X,Y)\) the space of k-times continuously differentiable functions that map the domain \(X \subseteq {\mathbb {R}}\) to its range \(Y\subseteq {\mathbb {R}}^{n\times n}\).
2.1 Existence and uniqueness of a solution
First of all, we pose the question whether a solution to (1) exists and, if so, whether it is unique. We therefore cite a global existence and uniqueness result from [6] in the context of linear systems. Here, the periodicity of the system matrix can be omitted.
Proposition 1
Let \(A\in {\mathcal {C}}({\mathbb {R}},{\mathbb {R}}^{n\times n})\). Then there exists a unique solution x(t) of (1).
2.2 Floquet’s Theorem
The most fundamental result in the setting of linear time-periodic systems is Floquet’s Theorem [8]. Originally, it was given for a scalar ordinary differential equation of order \(m>1\). Here we follow the presentation for a linear system of ordinary differential equations, e.g., as given in [22].
Proposition 2
(Floquet’s Theorem 1883) Let \(\varPhi (t)\) be a principal fundamental matrix of (1). Then

$$\begin{aligned} \varPhi (t+T)=\varPhi (t)C \quad \text{ for } \text{ all } t \in {\mathbb {R}}, \end{aligned}$$ (2)

where \(C=\varPhi (T)\) is a constant nonsingular matrix which is known as the monodromy matrix. In addition, for a matrix L such that

$$\begin{aligned} e^{LT}=C, \end{aligned}$$ (3)

there is a periodic matrix function \(t \mapsto Z(t)\) such that

$$\begin{aligned} \varPhi (t)=Z(t)e^{Lt} \quad \text{ for } \text{ all } t \in {\mathbb {R}}. \end{aligned}$$ (4)

Equation (4) is called Floquet normal form, since the structure of the solution to (1) is given by Floquet’s Theorem as

$$\begin{aligned} x(t)=Z(t)e^{Lt}x_0, \end{aligned}$$
where \(L,\, Z(t) \in {\mathbb {C}}^{n\times n}\) and \(Z(t)=Z(t+T)\) are nonsingular \(\forall t \in {\mathbb {R}}\). The eigenvalues of the matrix L, also known as Floquet exponents, determine the asymptotic behavior of the system. The real parts of the Floquet exponents are called Lyapunov exponents. The zero solution is asymptotically stable if all Lyapunov exponents are negative. It is stable if all Lyapunov exponents are non-positive and, whenever a Lyapunov exponent vanishes, the geometric and algebraic multiplicities of the corresponding eigenvalue coincide. Otherwise, the zero solution is unstable.
The proof of Floquet’s Theorem is non-constructive, hence one needs other methods and/or bounds to approximate the solution. Nevertheless, determining the fundamental solution (4) on the interval [0, T] is sufficient due to the semigroup property given in Eq. (2).
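In practice, C and L are therefore computed numerically. The following minimal sketch (assuming SciPy is available; the Mathieu-type matrix `A` and its parameters are an illustrative choice, not taken from the paper) integrates \(\dot{\varPhi }=A(t)\varPhi \) over one period column by column to obtain \(C=\varPhi (T)\) and then takes \(L=\frac{1}{T}\log C\):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import logm, expm

T = 2 * np.pi                       # period
delta, eps = 1.2, 0.3               # illustrative Mathieu-type parameters (assumed)

def A(t):
    # periodic system matrix, A(t + T) = A(t)
    return np.array([[0.0, 1.0],
                     [-(delta + eps * np.cos(t)), 0.0]])

def monodromy(A, T, n):
    # integrate Phi' = A(t) Phi over one period, one column of Phi at a time
    C = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, T), e,
                        rtol=1e-10, atol=1e-12)
        C[:, j] = sol.y[:, -1]
    return C

C = monodromy(A, T, 2)              # monodromy matrix C = Phi(T)
L = logm(C) / T                     # one choice of L with expm(L*T) = C
lyapunov_exponents = np.linalg.eigvals(L).real
```

Note that `logm` returns one particular matrix logarithm; L is not unique, but its eigenvalue real parts (the Lyapunov exponents) determine stability as stated above.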
3 Bounds for time-dependent norm
We generalize results obtained for constant linear systems in [18] and [19] to time-periodic systems. First, we recall the obtained results in order to base our generalization on them. We consider the general case when the constant coefficient matrix is non-diagonalizable. The results for a diagonalizable matrix are stated in [18] and [19]. Basically, the difference for a diagonalizable matrix is, that the algebraic and geometric multiplicity of each eigenvalue coincide. Hence, each Jordan block has size one.
Our generalization is based on Floquet’s Theorem, which yields the so-called Floquet–Lyapunov coordinate transformation \(z(t)=Z^{-1}(t)x(t)=e^{Lt}x_0\) such that the original problem (1) is transformed into a linear system with constant coefficients,

$$\begin{aligned} \dot{z}(t)=Lz(t), \quad z(0)=x_0. \end{aligned}$$ (5)

The solution of the transformed system (5) is \(z(t)=e^{Lt}x_0\).
3.1 Time-invariant setting
For \(u \in {\mathbb {C}}^{n}\) and \(A\in {\mathbb {C}}^{n\times n}\), let \(u^*\) and \(A^*\) denote the conjugate transpose of u and A, respectively, in the following. Let \(v_k^{(i)}\) for \(k=1,\ldots ,m_i\) be the chain of right principal vectors, i.e.,

$$\begin{aligned} (L^*-\lambda _i I)v_k^{(i)} = v_{k-1}^{(i)}, \quad k=1,\ldots ,m_i, \end{aligned}$$

and \(v_0^{(i)}=0\) for \(i=1,\ldots ,r\), corresponding to an eigenvalue \(\lambda _i\) of \(L^*\). Let r be the number of Jordan blocks and \(m_i\) the algebraic multiplicity of the eigenvalue \(\lambda _i\). Then define the following matrices:
The matrices \(R_i\) are eigenmatrices of the matrix eigenvalue problem \(R_iL+L^*R_i = 2 \hbox {Re}({\lambda _i}) R_i\). Here, L replaces the time-invariant system matrix in [18]. We recall the following results, given in Propositions 3 and 4 and Lemma 1 from [18], for a time-invariant system given in Eq. (5) and a possibly non-diagonalizable system matrix L.
Proposition 3
For \(k=1,\ldots ,m_i,\, i=1,\ldots ,r\), \(R_i^{(k,k)}\) and \(R_i\) are positive semi-definite and R is positive definite.
Hence, \(\Vert \cdot \Vert _R\) is a norm defined by \(\Vert v\Vert _R^2 = (Rv,v),\, v \in {\mathbb {C}}^n\) and \(\Vert \cdot \Vert _{R_i}\) is a semi-norm defined by \(\Vert v\Vert _{R_i}^2 = (R_iv,v),\, v\in {\mathbb {C}}^n\). In general, \(\Vert \cdot \Vert _{R_i}\) does not fulfill the definiteness property. Furthermore, the square of the semi-norm \(\Vert \cdot \Vert _{R_i}^2\) has a decoupling and filter effect shown by the next proposition [18].
Proposition 4
Let z(t) be the solution to the IVP (5) and
for \(k=1,\ldots , m_i, \, i=1,\ldots ,r\). Then
and
The polynomials \(p_{x_0,k-1}^{(i)}(t)\) in Eq. (6) are due to the Jordan blocks, hence to the non-diagonalizability of the matrix L; i.e., if the matrix L is diagonalizable, then all polynomials in (6) are constant.
Lemma 1
Let
\(\psi ^{(i)}(t)=[\psi _1^{(i)},\ldots ,\psi _{k}^{(i)},\ldots \psi _{m_i}^{(i)}]^T\) for \(i=1,\ldots ,r\) and \(k=1,\ldots ,m_i\) and \(\psi (t) = [\psi ^{(1)}(t)^T,\ldots ,\psi ^{(i)}(t)^{T},\ldots ,\psi ^{(r)}(t)^T]^T\). Then
Lemma 1 shows the connection to the Euclidean norm of the function \(\psi \). By the equivalence of norms in finite-dimensional vector spaces, a two-sided bound \(c\Vert \psi (t)\Vert _p\le \Vert x(t)\Vert _R \le C\Vert \psi (t)\Vert _p\) with \(1\le p \le \infty \) can be derived. For \(p=2\), the constants c, C can be chosen as unity by Lemma 1.
3.2 Time-periodic setting
In the following we denote by \(B^{-*}\) the inverse of the conjugate transpose of B, i.e., \(B^{-*}=(B^*)^{-1}=(B^{-1})^*\). First, we show that the matrix \(\tilde{R}(t)\) is Hermitian, positive definite and bounded for any \(t\in {\mathbb {R}}\) under suitable assumptions on R. For the definition of a more general time-dependent norm, see [32].
Lemma 2
Let R be Hermitian and positive definite and \(\tilde{R}(t)=Z^{-*}(t)RZ^{-1}(t)\), where Z(t) is defined by Floquet’s normal form (4). Then
1. \(\tilde{R}(t)\) is positive definite for all \(t \in {\mathbb {R}}\),
2. \(\tilde{R}(t)\) is Hermitian for all \(t\in {\mathbb {R}}\),
3. \(\tilde{R}(t)\) is T-periodic, i.e., \(\tilde{R}(t)=\tilde{R}(T+t)\) for all \(t\in {\mathbb {R}}\), and
4. \(\tilde{R}(t)\) is bounded, i.e., there exist \(c,C>0\) such that \(c\le \Vert \tilde{R}(t)\Vert \le C\) for all \(t\in {\mathbb {R}}\).
Proof
1. Choose u and t arbitrary but fixed and let \(\tilde{u}=Z^{-1}(t)u\). Then

$$\begin{aligned} u^*\tilde{R}(t)u = u^*Z^{-*}(t)RZ^{-1}(t)u=\tilde{u}^*R\tilde{u} \ge 0 \end{aligned}$$

for all \(\tilde{u} \in {\mathbb {C}}^{n}\), since R is positive definite. Now,

$$\begin{aligned} \tilde{u}^*R\tilde{u}=0 \Leftrightarrow \tilde{u}=0 \Leftrightarrow \tilde{u}=Z^{-1}(t)u=0 \Leftrightarrow u=0, \end{aligned}$$

since Z(t) has full rank and is invertible for all t.
2. \(\tilde{R}(t)\) is Hermitian, since R is Hermitian.
3. \(\tilde{R}(t)\) is T-periodic, since Z(t) is T-periodic.
4. \(Z^{-1}(t)=e^{Lt}\varPhi ^{-1}(t)\) and \(Z^{-*}(t)=\varPhi ^{-*}(t)e^{L^*t}\) are continuous and periodic with periodicity T. Note that, since \(\varPhi (t)\) is a fundamental matrix, \(\varPhi ^{-1}(t) = \varPhi (-t)\) holds [22]. Hence, \(\tilde{R}(t)\) and \(p: t \mapsto \Vert \tilde{R}(t) \Vert \) are continuous and periodic as well. Due to the extreme value theorem [9], p attains its minimum c and maximum C at some \(t_c \in \left[ 0,T \right] \) and \(t_C \in \left[ 0,T \right] \), respectively. Since p is periodic, it can be bounded globally: \(c \le \Vert \tilde{R}(t)\Vert \le C\). Since \(\tilde{R}(t)\) has full rank for all \(t\in {\mathbb {R}}\), \(\tilde{R}(t_c)\) has full rank and hence \(\tilde{R}(t_c) \ne 0\); therefore \(c>0\), i.e.
$$\begin{aligned} \exists c,C>0: c \le \Vert \tilde{R}(t)\Vert \le C \quad \forall t \in {\mathbb {R}}. \end{aligned}$$
\(\square \)
Let \(\Vert \cdot \Vert _R\) be a global norm. Following, e.g., [32], we define the local (time-dependent) norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) as

$$\begin{aligned} \Vert u\Vert _{\tilde{R}(t)}^2 = u^*\tilde{R}(t)u, \quad u \in {\mathbb {C}}^n. \end{aligned}$$

By Lemma 2, \(\Vert \cdot \Vert _{\tilde{R}(t)}\) is well defined and fulfills the axioms of a norm. Furthermore,

$$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = x(t)^*Z^{-*}(t)RZ^{-1}(t)x(t) = \Vert z(t) \Vert _{R}^2 \end{aligned}$$ (10)

holds. In the following we generalize results from the previous Sect. 3.1 to the norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\).
Theorem 1
(Decoupling and filter effect of the norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\)) Let L be a complex matrix such that it fulfills (3) and z(t) be the solution to the IVP (5). Then
where \(p_{x_0,k-1}^{(i)}(t)\) for \(k=1,\ldots , m_i\) and \(i=1,\ldots ,r\) are defined in (6).
Proof
The relation \(\Vert x(t) \Vert _{\tilde{R}(t)}^2 = \Vert z(t) \Vert _{R}^2\) is given by (10) and \(\Vert z(t) \Vert _{R}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t \hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}\) is given by Proposition 4. \(\square \)
By Proposition 4, a decoupling and filter effect of the semi-norms \(\Vert \cdot \Vert _{R_i^{(k,k)}}^2\) for \(k=1,\ldots ,m_i\) and \(i=1,\ldots ,r\) is shown, which carries over to the norm \(\Vert \cdot \Vert _R^2\) by Proposition 4 and to \(\Vert \cdot \Vert _{\tilde{R}(t)}^2\) by Theorem 1. Decoupling and filtering are meant in the sense that we obtain a system of decoupled differential equations in which only the real parts of the eigenvalues are passed and the imaginary parts are suppressed. In this sense of decoupling and filtering, the semi-norms suppress vibration, as stated in Corollary 1.
Corollary 1
(Vibration-suppression property of \(\Vert x(t) \Vert _{\tilde{R}(t)}\))
1. If L is diagonalizable, then

$$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = \sum _{i=1}^{n} \left\| x_0\right\| _{R_i}^2 e^{2t\hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$

2. If L is non-diagonalizable, then

$$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t\hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$
If the spectral abscissa \(\nu [L]=\max _{i=1,\ldots ,r} \hbox {Re}\lambda _i \) is negative, i.e., \(\nu [L]<0\), and \( d = \max _{i=1,\ldots ,r}\max _{k=1,\ldots ,m_i} {\text {degree}}(p_{x_0,k-1}^{(i)}(t)),\) then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) essentially behaves like \(t^d e^{-t}\), i.e., there exists \(t_1>0\) such that \(\Vert x(t) \Vert _{\tilde{R}(t)}\searrow 0\) (monotonic decrease) for \(t\ge t_1\) as \(t \rightarrow \infty \). If the matrix L is diagonalizable and the spectral abscissa is nonzero, then one can conclude a monotonic behavior in \(\Vert \cdot \Vert _{\tilde{R}(t)}\), since no Jordan block occurs.
Corollary 1 does not state that the vibrations of the linear time-periodic system (1) itself are suppressed, but that they are suppressed in the \(\tilde{R}(t)\)-norm of its solution, due to the decoupling and filtering effect of the norm. We would like to mention the following two cases of monotonic behavior:
1. If the spectral abscissa \(\nu [L]=\max _{i=1,\ldots ,n} \hbox {Re}\lambda _i <0\) for a diagonalizable matrix L, then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) tends monotonically to zero, i.e., \(\Vert x(t) \Vert _{\tilde{R}(t)} \searrow 0\) as \(t \rightarrow \infty \).
2. If all eigenvalues have positive real part, i.e., \(\hbox {Re}\lambda _i >0\) for \(i=1,\ldots ,r\), then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) tends monotonically to infinity, i.e., \(\Vert x(t) \Vert _{\tilde{R}(t)} \nearrow \infty \) as \(t \rightarrow \infty \). In general, if a mechanical system is vibrating with an increasing amplitude, the system will eventually collapse.
The monotonic behavior of \(\Vert x(t) \Vert _{\tilde{R}(t)}\) can be used to derive upper bounds on the amplitude of \(\Vert x(t)\Vert _\infty \).
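The decoupling stated in Corollary 1 can be checked numerically in the diagonalizable case. The sketch below assumes, as in the diagonalizable setting of [18], that the eigenmatrices are the rank-one matrices \(R_i=v_iv_i^*\) built from eigenvectors \(v_i\) of \(L^*\) (these satisfy the eigenmatrix relation \(R_iL+L^*R_i=2\hbox {Re}(\lambda _i)R_i\) stated above); the matrix L and its eigenvalues are an arbitrary illustrative choice, not from the paper:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 3
# diagonalizable L with eigenvalues in the open left half-plane (illustrative)
lam = np.array([-0.5 + 2.0j, -0.5 - 2.0j, -1.0])
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L = V @ np.diag(lam) @ np.linalg.inv(V)

W = np.linalg.inv(V).conj().T       # columns: eigenvectors of L^* (conj. eigenvalues)
Rs = [np.outer(W[:, i], W[:, i].conj()) for i in range(n)]
R = sum(Rs)                         # Hermitian positive definite

x0 = rng.standard_normal(n)

def normR2(t):
    # squared R-norm of z(t) = expm(L t) x0, i.e. the tilde-R(t)-norm of x(t)
    z = expm(L * t) @ x0
    return (z.conj() @ R @ z).real

def decoupled(t):
    # Corollary 1, diagonalizable case: only Re(lambda_i) is passed
    return sum(abs(W[:, i].conj() @ x0) ** 2 * np.exp(2 * t * lam[i].real)
               for i in range(n))

ts = np.linspace(0.0, 5.0, 50)
vals = [normR2(t) for t in ts]      # monotonically decreasing, no oscillation
```

Although x(t) itself oscillates (the eigenvalues have imaginary parts), `vals` decays monotonically, illustrating the vibration-suppression property of the norm.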
4 Trigonometric spline bound
In [20], the authors introduced a method of spline approximation in order to solve ODEs. This idea was further developed by many other researchers, see e.g., [23, 24] and [25], who used trigonometric B-splines of second and third order to solve a nonlinear ODE. We use a modified approach in order to apply it to a linear system of ODEs and further equip the computation with rigorous bounds [4]. The unknown quantities are the coefficients of the trigonometric splines. While in the nonlinear approach one has to solve a series of nonlinear systems, this simplifies here to a series of structured linear systems. Hence, the computational complexity decreases and an effective speed-up is achieved. For further details on trigonometric splines we refer the interested reader to [27] and [28].
First, we need some mathematical basics. Let \(({\mathbb {R}}^n,\Vert \cdot \Vert _\infty )\) be a normed vector space and \({\mathcal {L}}^\infty ([0,T],{\mathbb {R}}^n)\) be the space of measurable and essentially bounded functions from [0, T] to \({\mathbb {R}}^n\). For a function \(x\in {\mathcal {L}}^\infty ([0,T],{\mathbb {R}}^n)\), its essential supremum serves as an appropriate norm:

$$\begin{aligned} \Vert x \Vert _{{\mathcal {L}}^\infty } = \mathop {\hbox {ess sup}}\limits _{t \in [0,T]} \Vert x(t)\Vert _\infty . \end{aligned}$$

As a reminder, \(\Vert x(t)\Vert _\infty \) denotes the maximum norm of a vector, i.e., its maximal absolute component,

$$\begin{aligned} \Vert v\Vert _\infty = \max _{1\le i \le n} |v_i|, \quad v \in {\mathbb {R}}^n. \end{aligned}$$ (12)
Here, the idea is that the solution x(t) to (1) is approximated by splines. Due to the periodicity of our initial problem (1), trigonometric splines are chosen which mimic the behavior of the periodic system matrix A(t). In order to perform a spline interpolation, we need a node sequence and for the sake of simplicity we choose \(r+1\) equidistant nodes \(\varOmega _r=\left\{ t_0,\ldots ,t_r\right\} \) in the interval [0, T] with \(t_0=0\) and \(t_r=T\), i.e., \(t_i=ih\) for \(i=0,1,\ldots ,r\) with \(h=\frac{T}{r}\). The restriction of the quadratic trigonometric splines to any subinterval \([t_i, t_{i+1}]\) is a linear combination of \(\left\{ 1,\cos (t),\sin (t)\right\} \). Trigonometric B-splines \(S_i(t)\) are defined by
with \(\theta = \frac{1}{\sin (h) \sin \left( \frac{h}{2}\right) }\), see [23, 24, 28].
A trigonometric B-spline \(S_i(t)\) is shown in Fig. 1. As can be seen in Fig. 1, for any inner subinterval \([t_i,t_{i+1}]\) with \(0<i<r\), the spline \(S_i(t)\) is fully described. For the intervals \([t_0,t_1]\) and \([t_{r-1},t_r]\), artificial intervals \([t_{-1},t_0]\) and \([t_r,t_{r+1}]\) have to be included in the definition of \(S_i(t)\) such that the restriction to the respective subinterval is still a linear combination of the functions \(1,\cos (t)\) and \(\sin (t)\). If we denote by \(S_2(\varOmega _r)\) the space of quadratic trigonometric splines on [0, T] w.r.t. the nodes \(\varOmega _r\), then \(S_2(\varOmega _r) = \mathop {\mathrm {span}} \left\{ S_i\right\} _{i=-1}^r\). Hence, every quadratic trigonometric spline can be expressed in the form \(\sum _{i=-1}^r{\alpha _i S_i(t)}\). The summation index i runs from \(-1\) to r; it does not count the nodes, but the intervals \([t_i,t_{i+1}]\) for \(i=-1,\ldots ,r\), including the aforementioned artificial intervals. In our case, the coefficients \(\alpha _i\) are unknown and have to be determined.
Now we describe in more detail how to compute the coefficients \(\alpha _i\). First, let us generalize the quadratic trigonometric B-spline approximation to a vector \(s(t)=[s_1(t),\ldots ,s_n(t)]^T\) such that each component \(s_j(t)\) approximates \(x_j(t)\) for \(j=1,\ldots ,n\), i.e.
where the unknown coefficients of the trigonometric B-splines are given by the coefficient vectors \(\alpha ^{(i)}\in {\mathbb {R}}^n\) for \(i=-1,0,\ldots ,r\). By demanding that the spline s fulfills the ODE (1), i.e., \(\dot{s}(t_i)=A(t_i)s(t_i)\) at the nodes \(t_i\) for \(i=0,\ldots ,r\), one obtains a sequence of \(r+1\) linear systems

$$\begin{aligned} A^{(i)}\alpha ^{(i)}=b^{(i)} \end{aligned}$$
for the coefficient vector \(\alpha ^{(i)}\). It is a sequence since the coefficient matrix \(A^{(i)}\) and the right-hand side \(b^{(i)}\) change w.r.t. the i-th node \(t_i\)
where \(I_n\) is the n-dimensional identity matrix and the initial condition \(s(t_0)=x_0\) yields \(\alpha ^{(-1)} = \cos {\left( \frac{h}{2}\right) }x_0 - \sin {\left( \frac{h}{2}\right) }A(t_0)x_0\).
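The collocation scheme above can be sketched numerically. The following minimal scalar (\(n=1\)) sketch uses our own conventions, not the paper's: we take the standard quadratic trigonometric B-spline formula from the literature with a left-knot indexing of our choosing, drop the normalization factor \(\theta \) (it is absorbed into the coefficients), and assemble the initial condition and the \(r+1\) collocation conditions into one global linear system instead of the sequential small systems described in the text:

```python
import numpy as np

def tB(t, tk, h):
    # quadratic trigonometric B-spline (unnormalized), support [tk, tk + 3h]
    u = t - tk
    if 0.0 <= u < h:
        return np.sin(u / 2) ** 2
    if h <= u < 2 * h:
        return (np.sin(u / 2) * np.sin((2 * h - u) / 2)
                + np.sin((3 * h - u) / 2) * np.sin((u - h) / 2))
    if 2 * h <= u <= 3 * h:
        return np.sin((3 * h - u) / 2) ** 2
    return 0.0

def dtB(t, tk, h):
    # derivative of tB, piecewise (product-to-sum identities applied)
    u = t - tk
    if 0.0 <= u < h:
        return 0.5 * np.sin(u)
    if h <= u < 2 * h:
        return 0.5 * (np.sin(h - u) + np.sin(2 * h - u))
    if 2 * h <= u <= 3 * h:
        return -0.5 * np.sin(3 * h - u)
    return 0.0

def spline_collocation(a, x0, T, r):
    # scalar x' = a(t) x, x(0) = x0: impose s(0) = x0 and
    # s'(t_i) = a(t_i) s(t_i) at the r + 1 equidistant nodes
    h = T / r
    nodes = h * np.arange(r + 1)
    lefts = h * np.arange(-2, r)            # r + 2 basis functions
    M = np.zeros((r + 2, r + 2))
    rhs = np.zeros(r + 2)
    M[0] = [tB(0.0, tk, h) for tk in lefts]
    rhs[0] = x0                             # initial condition
    for i, ti in enumerate(nodes):          # collocation conditions
        M[i + 1] = [dtB(ti, tk, h) - a(ti) * tB(ti, tk, h) for tk in lefts]
    alpha = np.linalg.solve(M, rhs)
    return lambda t: sum(c * tB(t, tk, h) for c, tk in zip(alpha, lefts))

# example: x' = cos(t) x on [0, 2*pi], exact solution exp(sin t)
s = spline_collocation(np.cos, 1.0, 2 * np.pi, 256)
```

The observed error decreases quadratically in h, consistent with Proposition 5 below.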
Nikolis has investigated this procedure for nonlinear systems [23], where one solves not a sequence of linear systems but a sequence of nonlinear systems by an iterative method such as Newton’s method. In fact, trigonometric splines are L-splines [28]. Here, L corresponds to a certain linear differential operator, which in our case is \(L_3 x:=x'''+x'\), where x is the solution of (1). The convergence result for nonlinear systems carries over to the linear case and is stated in Proposition 5.
Proposition 5
(Nikolis [23]) For \(A\in {\mathcal {C}}^2([0,T],{\mathbb {R}}^{n \times n})\), the quadratic trigonometric spline converges quadratically to the solution, more precisely \(\Vert x-s \Vert _{\infty } = {\mathcal O}(\Vert L_3 x \Vert _\infty r^{-2})\).
The following rigorous upper bound is based on Proposition 5, see [4].
Theorem 2
Let \(A\in {\mathcal {C}}^2([0,T],{\mathbb {R}}^{n\times n})\). Then, \(L_3x\in {\mathcal {L}}^\infty ([0,T],{\mathbb {R}}^n)\) and
where
for \(t \in (t_i,t_{i+1}]\) and h is sufficiently small, i.e. \(L|\tan {\left( \frac{h}{2}\right) }|<1\) for L being the Lipschitz constant of the ODE (1), and \(L_3 x=x'''+x'\).
Since the proof of Theorem 2 is lengthy, it is given in the Appendix. By Proposition 5 and Theorem 2, respectively, the spline converges to the solution and the upper bound converges to the norm of the solution as \(h\rightarrow 0\).
5 Spectral bound by Chebyshev projections
The key idea is to replace the system (1) by an approximation. We use the spectral method [11, 34] in the setting of polynomial approximation of linear ordinary differential equations [3, 10]. The solution of the approximated system is entire and hence the truncation error of the approximated solution can be given. Here, we approximate the system matrix by Chebyshev polynomials [5] and use results from approximation theory [36] in order to derive rigorous bounds on the original solution x(t). As preliminaries, we need some results from approximation theory; here we focus on Chebyshev polynomials, which were introduced in [5], and Chebyshev projections. We follow the presentation of Chebyshev projections based on [36]. Any approximation can be used to replace the original system, but our focus is on Chebyshev polynomials, since they minimize the maximal error, a property we also strive for in the derived bounds. In Sect. 5.2 we explain the general idea of the spectral method and how we use the results from approximation theory in order to derive bounds. The bound depends heavily on how well the original system is approximated.
5.1 Chebyshev polynomials and projections
Chebyshev polynomials of the first kind can be defined by the three-term recurrence relation

$$\begin{aligned} T_{k+1}(t)=2tT_k(t)-T_{k-1}(t), \quad k = 1,2,3, \ldots , \end{aligned}$$

where \(T_0(t)=1\) and \(T_1(t)=t\).
Chebyshev polynomials are orthogonal over the interval \([-1,1]\):

$$\begin{aligned} \int _{-1}^{1}{T_j(t)T_k(t)\omega (t)\,\mathrm{d}t} = {\left\{ \begin{array}{ll} 0, &{} j \ne k,\\ \pi , &{} j = k = 0,\\ \frac{\pi }{2}, &{} j = k \ge 1, \end{array}\right. } \end{aligned}$$ (15)

with the weight function \(\omega (t)=\frac{1}{\sqrt{1-t^2}}\). In the following, we state results only for the interval \([-1,1]\); they can be generalized to any interval, since by an affine time transformation the Chebyshev polynomials can be mapped to an arbitrary interval. A Lipschitz continuous f has a unique representation as a Chebyshev series [36],

$$\begin{aligned} f(t) = \sum _{k=0}^{\infty }{c_k T_k(t)}, \end{aligned}$$ (16)

which is absolutely and uniformly convergent. The coefficients \(c_k\) are given by the orthogonality relationship (15),

$$\begin{aligned} c_k = \frac{2}{\pi }\int _{-1}^{1}{\frac{f(t)T_k(t)}{\sqrt{1-t^2}}\,\mathrm{d}t}, \quad k \ge 1, \qquad c_0 = \frac{1}{\pi }\int _{-1}^{1}{\frac{f(t)}{\sqrt{1-t^2}}\,\mathrm{d}t}. \end{aligned}$$

The m-truncated Chebyshev series is defined as

$$\begin{aligned} (P_m f)(t) = \sum _{k=0}^{m}{c_k T_k(t)}. \end{aligned}$$ (17)
For \(m\in {\mathbb {N}}\), let \({\mathcal {P}}_m\) be the space of polynomials of degree at most m. Clearly, the Chebyshev polynomials \(T_k\), \(k=0,1,\ldots ,m\), are a basis of \({\mathcal {P}}_m\). Let \({\mathcal {C}}\) be the space of continuous functions. Then \(P_m:{\mathcal {C}}\rightarrow {\mathcal {P}}_m\) defined by (17) is a linear operator and it is also called Chebyshev projection since \(P_m p = p\) for any \(p \in {\mathcal {P}}_m\) and \(P_m T_k = 0\) for \(k>m\). We recall the following two propositions given in [35, 36] which are essential for the derivation of our spectral bounds.
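The projection (17) is easy to compute numerically. In the following minimal sketch (the helper names are ours), the coefficients \(c_k\) are approximated by Gauss–Chebyshev quadrature, which is highly accurate for smooth f:

```python
import numpy as np

def cheb_projection_coeffs(f, m, N=200):
    # approximate c_0, ..., c_m of (17) by N-point Gauss-Chebyshev quadrature:
    # substituting t = cos(theta) turns the weighted integral into a cosine sum
    theta = (np.arange(N) + 0.5) * np.pi / N
    fx = f(np.cos(theta))
    c = np.array([2.0 / N * np.sum(fx * np.cos(k * theta)) for k in range(m + 1)])
    c[0] /= 2.0                     # c_0 carries the factor 1/pi instead of 2/pi
    return c

def cheb_eval(c, t):
    # evaluate sum_k c_k T_k(t) using T_k(cos theta) = cos(k theta)
    theta = np.arccos(t)
    return sum(ck * np.cos(k * theta) for k, ck in enumerate(c))

f = np.exp                          # entire function: rapid coefficient decay
c = cheb_projection_coeffs(f, 10)
ts = np.linspace(-1.0, 1.0, 501)
err = np.max(np.abs(f(ts) - cheb_eval(c, ts)))
```

For this entire f the truncation error at m = 10 is already near machine precision, in line with Proposition 7 below.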
Proposition 6
If f and its derivatives through \(f^{(\nu -1)}\) are absolutely continuous on \([-1,1]\) and if the \(\nu \)-th derivative \(f^{(\nu )}\) is of bounded variation V for some \(\nu \ge 1\), then for any \(m>\nu \), the Chebyshev projection satisfies

$$\begin{aligned} \Vert f - P_m f\Vert _\infty \le \frac{2V}{\pi \nu (m-\nu )^{\nu }}. \end{aligned}$$
For \(\rho >1\), let the Bernstein ellipse \({\mathcal {E}}_\rho \) be defined as

$$\begin{aligned} {\mathcal {E}}_\rho = \left\{ \tfrac{1}{2}\left( z+z^{-1}\right) : z \in {\mathbb {C}},\ 1\le |z| < \rho \right\} . \end{aligned}$$

Since \(\frac{1}{2}\left( \rho e^{i\theta }+\rho ^{-1} e^{-i\theta }\right) = \frac{\rho +\rho ^{-1}}{2}\cos (\theta )+\frac{\rho -\rho ^{-1}}{2}i \sin (\theta )\) for \(-\pi \le \theta \le \pi \), the boundary of the Bernstein ellipse \(\partial {\mathcal {E}}_\rho \) can be written in parametric form as \(\partial {\mathcal {E}}_\rho =\left\{ z \in {\mathbb {C}}:\frac{\hbox {Re}(z)^2}{a_\rho ^2}+\frac{\hbox {Im}(z)^2}{b_\rho ^2}=1 \right\} \), where its semi-axes are \(a_\rho = \frac{\rho +\rho ^{-1}}{2}\) and \(b_\rho = \frac{\rho -\rho ^{-1}}{2}\) with foci at \(\pm 1\). Figure 2 shows Bernstein ellipses in the complex plane for \(\rho =1.1,1.2,\ldots ,1.5\) as in [36].
Proposition 7
If f is analytic in \([-1,1]\) and analytically continuable to the open Bernstein ellipse \({\mathcal {E}}_\rho \), where it satisfies \(|f(t)|\le M\) for some M, then for each \(m\ge 0\) its Chebyshev projection satisfies

$$\begin{aligned} \Vert f - P_m f\Vert _\infty \le \frac{2M\rho ^{-m}}{\rho -1}. \end{aligned}$$
5.2 Spectral method and spectral bound
We now return to our original problem of a linear time-periodic system (1), but instead of solving it directly, we first approximate it by the following system,

$$\begin{aligned} \dot{y}(t)=(P_m A)(t)y(t), \quad y(0)=x_0, \end{aligned}$$ (18)
where \((P_{m} A)\) denotes the component-wise Chebyshev projection of A, see (17). If \((P_{m} A)(t_1)\) commutes with \((P_{m} A)(t_2)\) for all times \(t_1\) and \(t_2\), then the solution to the approximated system (18) is given by \(y(t) = \exp \left( \int _0^t{(P_m A)(\tau ) \mathrm{d}\tau } \right) x_0\), and y(t) is entire since polynomials and their exponentials are entire functions. But in general the commutativity of \((P_m A)(t)\) is a rather strong assumption. Hence, we cite a more general result, which, e.g., is given in [6].
Proposition 8
Suppose \({\mathcal {A}}:{\mathbb {R}} \rightarrow {\mathbb {R}}^{n\times n}\) is analytic at \(\tau \in {\mathbb {R}}\), where \(\varrho \) is its radius of convergence, and u(t) is the unique solution to the ODE

$$\begin{aligned} \dot{u}(t)={\mathcal {A}}(t)u(t) \end{aligned}$$
with \(u(0)=u_0\). Then u is also analytic at \(\tau \in {\mathbb {R}}\) with the same convergence radius \(\varrho \).
As a corollary, it follows that the solution y(t) is entire, since the function \((P_m A)(t)\) is a polynomial, which by definition is entire. If the approximation is exact, i.e., \(a_{ij}(t)\) is a polynomial of degree at most m for \(1\le i,j \le n\), then x(t) and y(t) coincide. In order to prove rigorous upper bounds on x(t), we use Propositions 6 and 7 to bound the difference between the original function A and its Chebyshev projection. These bounds depend on the smoothness of the system matrix A. Furthermore, define \(\gamma \) for a matrix function \(A:{\mathbb {R}} \rightarrow {\mathbb {R}}^{n\times n}\) as its maximal absolute entry on [0, T], i.e.,

$$\begin{aligned} \gamma = \max _{1\le i,j\le n}\ \max _{t \in [0,T]} |a_{ij}(t)|. \end{aligned}$$
Let \(\hbox {AC}\) denote the set of absolutely continuous functions and \(\hbox {AC}^{k}\) the set of k-times differentiable functions such that \(f^{(j)}\in \hbox {AC}\) for \(0\le j \le k\).
Theorem 3
If \(a_{ij} \in \hbox {AC}^{k-1}([0,T])\) and the k-th derivative \(a_{ij}^{(k)}\) is of bounded variation V for all \(i,j=1,\ldots ,n\), then for any \(m>k>0\):

$$\begin{aligned} \left| \Vert x(t)\Vert _\infty - \Vert y(t)\Vert _\infty \right| \le \frac{2nV e^{t n\gamma }}{\pi k(m-k)^{k}} \int _{0}^t \Vert y(s)\Vert _\infty \,\mathrm{d}s. \end{aligned}$$
Theorem 4
If \(a_{ij}\) is analytic in [0, T] and analytically continuable to the open Bernstein ellipse \({\mathcal {E}}_\rho \), where it satisfies \(|a_{ij}(t)| \le M\) for all \(i,j=1,\ldots ,n\) for some M, then for any \(m\ge 0\):

$$\begin{aligned} \left| \Vert x(t)\Vert _\infty - \Vert y(t)\Vert _\infty \right| \le \frac{2nM\rho ^{-m} e^{t n\gamma }}{\rho -1} \int _{0}^t \Vert y(s)\Vert _\infty \,\mathrm{d}s. \end{aligned}$$
The proofs of Theorems 3 and 4 can be combined, but for this, Gronwall's lemma is needed. Here, we use the integral version due to R. Bellman [2], which is given, e.g., in [38].
Lemma 3
(Gronwall’s lemma) Let \(g: [a,b] \mapsto {\mathbb {R}}\) and \(\beta : [a,b] \mapsto {\mathbb {R}}\) be continuous, \(\alpha : [a,b] \mapsto {\mathbb {R}}\) be integrable on [a, b] and \(\beta (t)\ge 0\). Assume g(t) satisfies
Then
Furthermore, if \(\alpha \) is non-decreasing and \(\beta >0\) is constant, then
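Written out in the standard integral form (a reconstruction consistent with how the lemma is applied in the proof below, where \(a=0\)), the hypothesis and the two conclusions read:

```latex
% hypothesis:
g(t) \le \alpha(t) + \int_a^t \beta(s)\, g(s)\, \mathrm{d}s , \qquad t \in [a,b],
% first conclusion:
g(t) \le \alpha(t) + \int_a^t \alpha(s)\,\beta(s)\,
        \exp\!\left( \int_s^t \beta(r)\,\mathrm{d}r \right) \mathrm{d}s ,
% and, if \alpha is non-decreasing and \beta > 0 is constant:
g(t) \le \alpha(t)\, e^{\beta (t-a)} .
```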
Now we return to the proof of Theorems 3 and 4.
Proof
x(t) and y(t) fulfill the integral formulation of the ODE
Taking the maximum norm \(\Vert \cdot \Vert _\infty \) (12), which is a compatible matrix norm, on both sides and using the triangle inequality yields
The case of \(\gamma =0\), i.e., \(A \equiv 0\) and \(x=const\), is trivial. Otherwise, define \(\beta \) in Gronwall’s lemma as \(\beta :=n\gamma >0\), hence
1.
If the assumptions of Theorem 3 are fulfilled, then
$$\begin{aligned} \Vert A(s)-(P_m A)(s)\Vert _\infty= & {} \max \limits _{1 \le i \le n} \sum _{j=1}^n \underbrace{| a_{ij}(s)-(P_m a_{ij})(s) |}_{\le \frac{2V}{\pi k(m- k)^k}}\le \frac{2nV}{\pi k(m- k)^k}. \end{aligned}$$Therefore,
$$\begin{aligned} \Vert x(t)-y(t)\Vert _\infty\le & {} \beta \int _{0}^t \Vert x(s)-y(s)\Vert _\infty \mathrm{d}s + \frac{2nV}{\pi k(m- k)^k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s \end{aligned}$$and applying Gronwall’s lemma with
$$\begin{aligned} g(t)= & {} \Vert x(t)-y(t)\Vert _\infty , \\ \alpha (t)= & {} \frac{2nV}{\pi k(m- k)^ k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s \end{aligned}$$and \(\beta ={\text {const}}>0\) yields
$$\begin{aligned} \Vert x(t)-y(t)\Vert _\infty \le \frac{2nV e^{t n\gamma } }{\pi k(m- k)^ k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s. \end{aligned}$$(23)With the reverse triangle inequality the theorem follows.
2.
If the assumptions of Theorem 4 are fulfilled, then
$$\begin{aligned} \Vert A(s)-(P_m A)(s)\Vert _\infty = \max \limits _{1 \le i \le n} \sum _{j=1}^n \underbrace{| a_{ij}(s)-(P_m a_{ij})(s) |}_{\le \frac{2M\rho ^{-m}}{\rho -1}}\le \frac{2nM\rho ^{-m}}{\rho -1}. \end{aligned}$$The remaining proof is analogous to the previous case.\(\square \)
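The projection error bound just used can be illustrated numerically. A Python sketch under illustrative assumptions: \(f(t)=1/(2-t)\), analytic on \([-1,1]\) with a pole at \(t=2\), together with the admissible choices \(\rho =3\) and \(M=3\) (the maximum of |f| on \({\mathcal {E}}_3\) is attained at the real semi-axis point 5/3); `cheb_coeffs` is a hypothetical helper:

```python
import math

def cheb_coeffs(f, m, nq=4000):
    # Chebyshev projection coefficients via c_k = (2/pi) ∫_0^π f(cos θ) cos(kθ) dθ
    cs = []
    for k in range(m + 1):
        s = sum((0.5 if j in (0, nq) else 1.0)
                * f(math.cos(math.pi * j / nq)) * math.cos(k * math.pi * j / nq)
                for j in range(nq + 1))
        c = 2.0 * s / nq
        cs.append(c / 2 if k == 0 else c)
    return cs

f = lambda t: 1.0 / (2.0 - t)           # analytic on [-1,1], pole at t = 2
m = 10
cs = cheb_coeffs(f, m)

def proj(t):
    th = math.acos(t)
    return sum(c * math.cos(k * th) for k, c in enumerate(cs))

err = max(abs(f(t) - proj(t)) for t in [math.cos(math.pi * j / 500) for j in range(501)])
rho, M = 3.0, 3.0                       # Bernstein ellipse parameter and |f| <= M on E_rho
bound = 2 * M * rho ** (-m) / (rho - 1) # the projection bound (n = 1)
print(err <= bound)                     # the measured sup-error respects the bound
```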
The ODE system (18) has to be solved nevertheless, but we know that the solution y is entire due to Proposition 8. Hence, by Proposition 7, \(\Vert y(t)-(P_m y)(t)\Vert _\infty \le \frac{2M\rho ^{-m}}{\rho -1}\), where y satisfies \(|y_i(t)|\le M\) in the Bernstein ellipse \({\mathcal {E}}_\rho \) for some M and \(i=1,\ldots , n\). The Chebyshev projections of A and y do not necessarily have the same degree, hence in the following we distinguish them by their subscripts: the index A refers to the matrix function A and the index y to the solution of (18). For a higher-order Chebyshev projection, one expects a sharper upper bound. This convergence result is established by the following inequality, which is due to Eq. (23) in the proof of Theorems 3 and 4. For a matrix function A satisfying the assumptions of Theorem 3, we obtain
And for an analytic matrix function A, we obtain
Since \(\int _0^t\Vert y(s)\Vert _\infty \mathrm{d}s\) is bounded, the right-hand sides of (24) and (25) tend to zero as \(m_A,m_y\rightarrow \infty \). Hence, the approximated solution \(P_{m_y} y\) converges to the original solution x as the approximation levels \(m_A\) and \(m_y\) increase, i.e., \(P_{m_y} y\rightarrow x\) as \(m_A,m_y\rightarrow \infty \). In the first case, the rate of convergence is of order k, while for an analytic matrix function A one obtains geometric convergence.
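The two regimes, algebraic decay of order k versus geometric decay, can be observed directly in the Chebyshev coefficients. A Python sketch with the illustrative functions \(|t|^3\) (finitely smooth) and \(1/(2-t)\) (analytic); `cheb_coeffs` is a hypothetical helper:

```python
import math

def cheb_coeffs(f, m, nq=8000):
    # Chebyshev coefficients c_k = (2/pi) ∫_0^π f(cos θ) cos(kθ) dθ (trapezoidal rule)
    out = []
    for k in range(m + 1):
        s = sum((0.5 if j in (0, nq) else 1.0)
                * f(math.cos(math.pi * j / nq)) * math.cos(k * math.pi * j / nq)
                for j in range(nq + 1))
        c = 2.0 * s / nq
        out.append(c / 2 if k == 0 else c)
    return out

alg = cheb_coeffs(lambda t: abs(t) ** 3, 16)   # finitely smooth: |c_k| ~ C k^{-4}
geo = cheb_coeffs(lambda t: 1 / (2 - t), 16)   # analytic: |c_k| ~ C rho^{-k}, rho = 2 + sqrt(3)

r_alg = abs(alg[8]) / abs(alg[16])   # ≈ 2^4 = 16 asymptotically (algebraic decay)
r_geo = abs(geo[8]) / abs(geo[16])   # ≈ rho^8 ≈ 3.7e4 (geometric decay)
print(r_alg, r_geo)
```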
With the reverse triangle inequality, we obtain the rigorous bounds under the assumptions of Theorem 3 on the matrix function A
where \(\epsilon (t) = \frac{2M_y\rho _y^{-m_y}}{\rho _y-1}\left( 1+\frac{2nVe^{n\gamma t}}{\pi k(m_A-k)^{k}}t\right) \). And for the case of an analytic matrix function A (as in Theorem 4)
where \(\delta (t) = \frac{2M_y\rho _y^{-m_y}}{\rho _y-1}\left( 1+\frac{2M_An\rho _A^{-m_A} e^{n\gamma t}}{\rho _A-1}t\right) \). The rigorous upper bounds (26) and (27) tend to the norm of the solution \(\Vert x(t)\Vert _\infty \) as \(m_A,m_y\rightarrow \infty \) since \(P_{m_y}y\rightarrow x\) as \(m_A,m_y\rightarrow \infty \) by (24) and (25).
If the matrix function A is analytic, one does not need to replace the original system by (18) since even for the original system the solution is analytic by Proposition 8. But for the sake of completeness we also derived bounds in this case and the bounds are very tight for moderate \(m_A\) as shown in Sect. 6.
Similar results can be obtained for interpolation instead of Chebyshev projection. In this context, the main question concerns the interpolation points. If Chebyshev points are chosen, then the Chebyshev interpolant satisfies Propositions 6 and 7 with an additional factor 2, see e.g., [36]. Hence, one can obtain results such as Theorems 3 and 4 with the same additional factor.
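A Python sketch of this interpolation variant, using the barycentric formula in Chebyshev points of the second kind and the same illustrative assumptions as before (\(f(t)=1/(2-t)\), \(\rho =3\), \(M=3\)); this is an illustration, not the paper's implementation:

```python
import math

def cheb_interp(f, m):
    # barycentric interpolation in Chebyshev points of the second kind
    ts = [math.cos(math.pi * j / m) for j in range(m + 1)]
    fs = [f(t) for t in ts]
    w = [((-1) ** j) * (0.5 if j in (0, m) else 1.0) for j in range(m + 1)]
    def p(t):
        num = den = 0.0
        for tj, fj, wj in zip(ts, fs, w):
            if t == tj:                 # exactly at a node: return the data value
                return fj
            q = wj / (t - tj)
            num += q * fj
            den += q
        return num / den
    return p

f = lambda t: 1.0 / (2.0 - t)
m = 10
p = cheb_interp(f, m)
err = max(abs(f(t) - p(t)) for t in [-1 + k / 1000 for k in range(2001)])
rho, M = 3.0, 3.0
bound = 2 * (2 * M * rho ** (-m) / (rho - 1))  # projection bound with the extra factor 2
print(err <= bound)
```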
6 Overview and numerical results
First, we discuss the convergence of trigonometric splines and of the spectral method depending on the smoothness of A, as indicated by Theorem 2 and Propositions 6 and 7. In Table 1, the convergence rates for the trigonometric spline bound defined in Theorem 2 and the spectral bounds defined in Eqs. (26) and (27) are given for various function classes; they are visualized in Figs. 10 and 11. The computational complexity for the trigonometric spline bound is dominated by computing the spline solution. Trigonometric splines with compact support, i.e., trigonometric B-splines, are chosen due to the local influence of each spline. For general splines, a linear system of dimension \(n(r+1)\times n(r+1)\) has to be solved, while for B-splines, \(r+1\) systems of dimension \(n\times n\) have to be solved. Hence, the computational complexity for trigonometric B-splines is \({\mathcal O}(n^3(r+1))\). For the spectral bound, each element of the system matrix A has to be approximated, which can be done by the fast Fourier transform (FFT) in \({\mathcal O}((m+1)\log (m+1))\). The convergence of the trigonometric spline bound is local, i.e., a trigonometric spline \(S_i\), visualized in Fig. 1, converges to the solution on its support \({\mathrm {supp}}(S_i)=\left\{ t \in [0,T]: S_i(t)\ne 0\right\} = [t_{i-1},t_{i+2}]\). The spectral bound converges to the solution globally, i.e., on the whole interval [0, T]. The rigorous bounds are illustrated for three examples, all of which can be described by a time-periodic system of the form (1). An overview of the settings is given in Table 2. In the following, the parameters r and \(m_A\) of the trigonometric spline and the spectral bound, respectively, are chosen such that firstly, a visible difference between the solution and its respective upper bounds can be seen, and secondly, an effect of the parameters can be noticed.
If the order of the Chebyshev projection \(m_A\) is increased slightly in Figs. 3, 5 and 7, the spectral bound can no longer be distinguished from the original solution. This observation does not hold for the trigonometric spline bound since its convergence is slower, see Table 1 and Fig. 10 compared to Fig. 11b. But for a larger number of nodes r, the trigonometric spline bound tends to the solution by Proposition 5, compare Figs. 3, 5 and 7. Computation of global extrema is not an easy task due to the possibly large number of local minima and maxima of the objective function [14, 37]. The constants \(L,\Vert L_3 x\Vert _\infty \) and \(\gamma \) are determined by the fminsearch routine in MATLAB.Footnote 1 Since in general only a local minimum is found by fminsearch, we combined it with a Global Search strategy of the Global Optimization Toolbox in MATLAB. The computed values for \(L,\Vert L_3 x\Vert _\infty \) and \(\gamma \) are given in Table 3. They are used in the figures mentioned above and also appear in the convergence rates of the methods in Table 1. Note that the parameters \(\rho _A\) and \(M_A\) with respect to the spectral bound are not unique; in particular, any Bernstein ellipse can be chosen since the function is entire. Here, we chose \(\rho _A\) with respect to the decay of the Chebyshev coefficients \(|c_k|\) given by (16), but for the sake of simplicity the derivation is omitted; for the appropriate examples, \(\rho _A\) is given in Table 3. \(M_A\) is determined by the strategy mentioned above, i.e., by a combination of fminsearch and Global Search.
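The multi-start idea can be sketched in Python: coarse sampling selects candidate subintervals and a golden-section search refines each. This is a simple stdlib analogue of the fminsearch/Global Search combination, applied here to a hypothetical scalar entry \(a(t)=|\sin (2\pi t)|^3\); it is an illustration only, not the MATLAB code used in the paper:

```python
import math

def gamma_bound(entries, T, starts=64, iters=60):
    # multi-start estimate of γ = max over t in [0,T] of max_{ij} |a_ij(t)|:
    # coarse sampling picks candidate intervals, golden-section search refines each
    g = (math.sqrt(5) - 1) / 2
    best = 0.0
    for a in entries:
        f = lambda t: -abs(a(t))            # minimize -|a| = maximize |a|
        for s in range(starts):
            lo, hi = s * T / starts, (s + 1) * T / starts
            x1, x2 = hi - g * (hi - lo), lo + g * (hi - lo)
            f1, f2 = f(x1), f(x2)
            for _ in range(iters):
                if f1 < f2:                 # minimum lies in [lo, x2]
                    hi, x2, f2 = x2, x1, f1
                    x1 = hi - g * (hi - lo); f1 = f(x1)
                else:                       # minimum lies in [x1, hi]
                    lo, x1, f1 = x1, x2, f2
                    x2 = lo + g * (hi - lo); f2 = f(x2)
            best = max(best, -min(f1, f2))
    return best

# entries of a hypothetical 1x1 periodic system matrix; max of |sin(2πt)|^3 is 1
entries = [lambda t: abs(math.sin(2 * math.pi * t)) ** 3]
print(gamma_bound(entries, 1.0))            # ≈ 1.0
```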
The first example is a one-dimensional IVP \(\dot{x}(t)=|\sin (2\pi t)|^3x(t)\) with initial condition \(x(0)=1\). The right-hand side function \(A(t)=|\sin (2\pi t)|^3\) is thrice differentiable with absolutely continuous derivatives, i.e., \(A\in \hbox {AC}^{3}([0,T])\). We use this example because we can compare our results to the analytical solution, which is
The results of the trigonometric spline bound and the spectral bound are shown in Fig. 3. For better approximation levels, the trigonometric spline and spectral bound are closer to the original solution \(\Vert x(t)\Vert _\infty \) as indicated by the convergence results. The convergence rates are quadratic and cubic as shown in Table 1.
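The comparison can be reproduced in a few lines: in the scalar case, \(x(t)=\exp (\int _0^t A(s)\,\mathrm{d}s)\), and \(\int _0^1 |\sin (2\pi s)|^3\,\mathrm{d}s = \frac{4}{3\pi }\approx 0.4244\). A Python sketch comparing a classical Runge-Kutta solution with this quadrature-based closed form (illustrative helper routines, not the paper's code):

```python
import math

A = lambda t: abs(math.sin(2 * math.pi * t)) ** 3

def rk4(T, steps=4000):
    # classical 4th-order Runge-Kutta for xdot = A(t) x, x(0) = 1
    x, h = 1.0, T / steps
    for i in range(steps):
        t = i * h
        k1 = A(t) * x
        k2 = A(t + h / 2) * (x + h / 2 * k1)
        k3 = A(t + h / 2) * (x + h / 2 * k2)
        k4 = A(t + h) * (x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def exact(T, n=4000):
    # scalar case: x(T) = exp( int_0^T A(s) ds ), integral by Simpson's rule
    h = T / n
    s = A(0) + A(T) + sum((4 if j % 2 else 2) * A(j * h) for j in range(1, n))
    return math.exp(s * h / 3)

print(rk4(1.0), exact(1.0))   # both ≈ exp(4/(3π)) ≈ 1.5287
```

The exponent \(4/(3\pi )=0.424413\ldots \) matches the value of \(\nu [L]\) reported for this example.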
Figure 4 shows the solution of the first example in the Euclidean norm and the weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\). For the one-dimensional example, the Euclidean norm and the maximum norm coincide with the absolute value, i.e., \(|\cdot |=\Vert \cdot \Vert _2=\Vert \cdot \Vert _\infty \). Furthermore, the weighted R-norm is a scaling, but since the single eigenvector is normalized, \(|\cdot |=\Vert \cdot \Vert _2=\Vert \cdot \Vert _\infty =\Vert \cdot \Vert _R\) holds. The weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillations and since the spectral abscissa is positive, \(\nu [L]=0.424413181578411 >0\), a monotonic increase can be observed, cf. Corollary 1.
As the second example, we chose a Jeffcott rotor on an anisotropic shaft supported by anisotropic bearings [1]. It can be modeled as a linear time-periodic system (1) of dimension \(n=4\), where A(t) is entire. The same parameter values are chosen as in [1]. The system is asymptotically stable since the maximal Lyapunov exponent is \(\nu [L]=-0.002000131812440<0\). The results are illustrated in Fig. 5. The trigonometric spline bound for \(r=40,000\) is so oscillatory that some components of its graph in Fig. 5 can no longer be distinguished. Nevertheless, the upper bound is valid. If one can further assume smoothness of the solution, interpolating the valleys of the oscillations would give a smoother upper bound.
Figure 6 shows the solution of the Jeffcott rotor over time in the interval \([0,10\pi ]\) in various norms, the Euclidean norm, the maximum norm, the weighted time-invariant R-norm and the weighted time-dependent \(\tilde{R}(t)\)-norm. The weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillations and since the matrix L is diagonalizable and the spectral abscissa is negative, \(\nu [L]<0\), a monotonic decrease can be observed, cf. Corollary 1.
The third example is an axially parametrically excited cantilever beam [7]. The planar beam model is composed of m finite elements. We chose the same parameter values as in [7]. The assembly of the mass, damping and stiffness matrices from \(m=4\) finite elements is described in detail in [7]; this results in a periodic system matrix of dimension \(n=16\). The parametric excitation frequency \(\nu \) is chosen as the first-order parametric combination resonance \(\nu =|\varOmega _1 - \varOmega _2|=138.44\). Furthermore, we introduce a coordinate transformation W. Hence, the system (1) is not only given by the original system matrix A(t), but also by the coordinate transformation W, i.e., the system is given by
The coordinate transformation W is a diagonal matrix and it is computed by the balance routine in MATLAB for A(t) at \(t=0\) in order to decrease the constant \(\gamma \) in (20). Of course, any \(t\in [0,T]\) could be chosen to determine a coordinate transformation, but our initial choice was sufficient to reduce \(\gamma \) by two orders of magnitude to \(\gamma =32\). The system is asymptotically stable since the maximal Lyapunov exponent is \(\nu [L]=-2.546655954908259\times 10^{-6}<0\).
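The balancing idea can be sketched as a diagonal similarity with powers of two, a crude stdlib analogue of MATLAB's balance routine; the matrix below is a hypothetical badly scaled example, not the beam model:

```python
def balance(A, iters=10):
    # diagonal similarity B = D A D^{-1} with powers of 2, reducing the largest
    # absolute entry (and hence the constant γ in the bounds)
    n = len(A)
    d = [1.0] * n
    B = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):
            r = sum(abs(B[i][j]) for j in range(n) if j != i)  # off-diagonal row sum
            c = sum(abs(B[j][i]) for j in range(n) if j != i)  # off-diagonal column sum
            if r == 0 or c == 0:
                continue
            f = 1.0
            while r >= 2 * c:            # scale until row/column sums are comparable
                r /= 2; c *= 2; f /= 2
            while c >= 2 * r:
                r *= 2; c /= 2; f *= 2
            if f != 1.0:
                d[i] *= f
                for j in range(n):       # apply the similarity update for index i
                    B[i][j] *= f
                    B[j][i] /= f
    return B, d

A = [[0.0, 1.0], [-1e4, -2.0]]           # badly scaled companion-type matrix
B, d = balance(A)
gamma_before = max(abs(x) for row in A for x in row)
gamma_after = max(abs(x) for row in B for x in row)
print(gamma_before, gamma_after)         # 10000.0 128.0: γ drops by about two orders
```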
Figure 7 shows the solution of the parametrically excited cantilever beam in the interval \([0,\frac{2\pi }{\nu }]\) in the maximum norm together with its trigonometric spline upper bound and spectral upper bound for \(r=20,25\) and \(m_A=41,42\), respectively. From this figure, an asymptotic behavior cannot be concluded, hence we plotted Figs. 8 and 9. While Fig. 8 shows an oscillatory behavior of the solution of the cantilever beam over time in the interval \([0,\frac{10^4\pi }{\nu }]\) in the Euclidean norm and the maximum norm, Fig. 9 shows the solution in the weighted time-invariant R-norm and the weighted time-dependent \(\tilde{R}(t)\)-norm. Firstly, the weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillation of Fig. 8, and by Corollary 1 it is proven that the solution decreases monotonically since the matrix L is diagonalizable and its spectral abscissa is negative, \(\nu [L]<0\). Hence, the solution is asymptotically stable, i.e., in any norm the solution decays to zero as \(t\rightarrow \infty \). Even with a larger time horizon this effect is not visible in Fig. 8, but due to the vibration suppression it may easily be seen in Fig. 9. Secondly, the matrix \(\tilde{R}(t):=Z(t)^{-*}RZ^{-1}(t)\) for this particular example is almost constant for all times. Surprisingly, the matrices \(\tilde{R}(t)\) and R almost coincide and hence, so do the curves \(\Vert x(t)\Vert _R\) and \(\Vert x(t)\Vert _{\tilde{R}(t)}\) in Fig. 9 (Figs. 10, 11).
7 Conclusions
Linear time-periodic systems arise in many fields of application, e.g., in parametrically excited systems and anisotropic rotor-bearing systems. In general, they are obtained by linearizing a nonlinear system about a periodic trajectory. Complete knowledge of the system's components is necessary to understand its transient behavior, which may not be feasible for very complex and large-scale systems. Hence, understanding system characteristics such as stability and robustness may be sufficient. The solution structure for a linear time-periodic system is known (Floquet's Theorem 2). Nevertheless, in general, it has to be approximated since it cannot be given in closed form. Important physical properties such as stability and robustness can be lost due to the (numerical) approximations. In order to guarantee such properties for the original solution and not only for the approximation, one can derive analytic results on the solution, or the approximation error has to be incorporated in the analysis. This is the key idea of this paper: bounds that solely depend on the solution structure, or bounds that incorporate the approximation error. Firstly, we were able to generalize results from the linear time-invariant [17, 18] to the time-periodic setting and derive a time-varying norm that captures important properties such as decoupling, filtering and monotonicity. Secondly, we used two different methodologies where the approximation error is incorporated in the upper bound. In the first one, an approximated solution is obtained by time discretization and a quadratic trigonometric spline approximation. The upper bound depends on the discretization grid of the quadratic trigonometric spline solution and converges quadratically to the original solution. The derived upper bound extends work on the solution of ODEs by trigonometric splines [23–25].
In the second case, we used a general framework: the linear time-periodic system is approximated by Chebyshev projections [36]. Here, we generalized results from [30, 31] w.r.t. convergence and convergence rates, and most importantly, we could incorporate the two approximation errors of the Chebyshev projections into the rigorous upper bound. While the first approximation error is due to the polynomial approximation of the linear time-periodic system, the second error is due to solving the approximated system. The polynomial approximation of the linear time-periodic system yields properties of the solution such that it can be represented by an infinite series; truncation of this series yields the second error. A series representation of the solution is not necessarily possible for the original system.
In summary, the bounds converge to the original solution of the linear time-periodic system as the number of splines or the degree of the Chebyshev projections is increased. For a smooth time-periodic system, the spectral bound is in general superior to the trigonometric spline bound due to its faster convergence. In all cases, the upper bounds converge to the norm of the solution if and only if the approximation converges to the solution. The computational complexity and convergence rate for the trigonometric spline bound and the spectral bound are stated. The applicability of all bounds and stability analysis of linear time-periodic systems is demonstrated by means of various examples, which include a Jeffcott rotor and a parametrically excited cantilever beam.
Notes
MATLAB, The MathWorks, R2014a, 8.3.0.532.
References
Allen, M.S.: Frequency-domain identification of linear time-periodic systems using LTI techniques. J. Comput. Nonlinear Dyn. 4, 041,004.1–041,004.6 (2009)
Bellman, R.: The stability of solutions of linear differential equations. Duke Math. J. 10(4), 643–647 (1943). doi:10.1215/S0012-7094-43-01059-2
Benner, P., Denißen, J.: Spectral bounds on the solution of linear time-periodic systems. Proc. Appl. Math. Mech. 14(1), 863–864 (2014). doi:10.1002/pamm.201410412
Benner, P., Denißen, J., Kohaupt, L.: Bounds on the solution of linear time-periodic systems. Proc. Appl. Math. Mech. 13(1), 447–448 (2013). doi:10.1002/pamm.201310217
Chebyshev, P.L.: Théorie des mécanismes connus sous le nom de parallélogrammes. Mémoires des Savants étrangers présentés à l’Académie de Saint-Pétersbourg 7, 539–568 (1854)
Coddington, A., Carlson, R.: Linear Ordinary Differential Equations. SIAM, Philadelphia (1997)
Dohnal, F., Ecker, H., Springer, H.: Enhanced damping of a Cantilever beam by axial parametric excitation. Arch. Appl. Mech. 78(12), 935–947 (2008). doi:10.1007/s00419-008-0202-0
Floquet, G.: Sur les équations différentielles linéaires à coefficients périodiques. Annales Scientifiques de l’École Normale Supérieure 12(2), 47–88 (1883). doi:10.1016/j.ansens.2007.09.002
Forster, O.: Analysis 1. Vieweg+Teubner Verlag, Berlin (2011). doi:10.1007/978-3-8348-8139-7
Funaro, D.: Polynomial approximation of differential equations. Lecture notes in physics. Springer, Berlin (1992)
Gottlieb, D., Orszag, S.A.: Numerical Analysis of Spectral Methods: Theory and Applications. CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM (1977)
Hairer, E., Wanner, G.: Solving Ordinary Differential Equations. II. Stiff and Differential-Algebraic Problems. Springer Series in Computational Mathematics. Springer, Berlin (2010)
Higham, N.J.: The scaling and squaring method for the matrix exponential revisited. SIAM Rev. 51(4), 747–764 (2009). doi:10.1137/090768539
Horst, R., Tuy, H.: Global Optimization. Springer, Berlin (1996). doi:10.1007/978-3-662-03199-5
Kohaupt, L.: Differential calculus for some p-norms of the fundamental matrix with applications. J. Comput. Appl. Math. 135(1), 1–22 (2001). doi:10.1016/S0377-0427(00)00559-8
Kohaupt, L.: Differential calculus for p-norms of complex-valued vector functions with applications. J. Comput. Appl. Math. 145(2), 425–457 (2002). doi:10.1016/S0377-0427(01)00594-5
Kohaupt, L.: Computation of optimal two-sided bounds for the asymptotic behavior of free linear dynamical systems with application of the differential calculus of norms. J. Comput. Math. Optim. 2, 127–173 (2006)
Kohaupt, L.: Solution of the matrix eigenvalue problem \({V}{A}^*+{A}{V}=\mu {V}\) with applications to the study of free linear dynamical systems. J. Comput. Appl. Math. 213(1), 142–165 (2008). doi:10.1016/j.cam.2007.01.001
Kohaupt, L.: On the vibration-suppression property and monotonicity behavior of a special weighted norm for dynamical systems. Appl. Math. Comput. 222, 307–330 (2013). doi:10.1016/j.amc.2013.06.091
Loscalzo, F.R., Talbot, T.D.: Spline function approximations for solutions of ordinary differential equations. Bull. Am. Math. Soc. 73, 438–442 (1967). doi:10.1090/S0002-9904-1967-11778-6
Moler, C., Van Loan, C.: Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Rev. 45(1), 3–49 (2003)
Müller, P.C., Schiehlen, W.: Lineare Schwingungen. Akademische Verlagsgesellschaft Wiesbaden, Wiesbaden (1976)
Nikolis, A.: Trigonometrische splines und ihre anwendung zur numerischen behandlung von integralgleichungen. Ph.D. thesis, Ludwig-Maximilians-Universität München (1993)
Nikolis, A.: Numerical solutions of ordinary differential equations with quadratic trigonometric splines. Appl. Math. E-Notes 4, 142–149 (2004)
Nikolis, A., Seimenis, I.: Solving dynamical systems with cubic trigonometric splines. Appl. Math. E-Notes 5, 116–123 (2005)
Orszag, S.A.: Numerical methods for the simulation of turbulence. Phys. Fluids 12(Supp. II), 250–257 (1969)
Schoenberg, I.: On trigonometric spline interpolation. Indiana Univ. Math. J. 13, 795–825 (1964)
Schumaker, L.L.: Spline Functions: Basic Theory. Wiley, Hoboken (1981)
Sinha, S.C., Chou, C.C., Denman, H.H.: Stability analysis of systems with periodic coefficients: an approximate approach. J. Sound Vib. 64, 515–527 (1979). doi:10.1016/0022-460X(79)90801-0
Sinha, S.C., Wu, D.H.: An efficient computational scheme for the analysis of periodic systems. J. Sound Vib. 151, 91–117 (1991). doi:10.1016/0022-460X(91)90654-3
Sinha, S., Butcher, E.: Solution and stability of a set of p-th order linear differential equations with periodic coefficients via Chebyshev polynomials. Math. Probl. Eng. 2, 165–190 (1996). doi:10.1155/S1024123X96000294
Söderlind, G., Mattheij, R.M.M.: Stability and asymptotic estimates in nonautonomous linear differential systems. SIAM J. Math. Anal. 16(1), 69–92 (1985). doi:10.1137/0516005
Tisseur, F., Meerbergen, K.: The quadratic eigenvalue problem. SIAM Rev. 43(2), 235–286 (2001). doi:10.1137/S0036144500381988
Trefethen, L.N.: Spectral Methods in MatLab. SIAM, Philadelphia (2000)
Trefethen, L.N.: Is Gauss quadrature better than Clenshaw–Curtis? SIAM Rev. 50(1), 67–87 (2008). doi:10.1137/060659831
Trefethen, L.N.: Approximation Theory and Approximation Practice. SIAM, Philadelphia (2013)
Ugray, Z., Lasdon, L., Plummer, J., Glover, F., Kelly, J., Mart, R.: Scatter search and local NLP solvers: a multistart framework for global optimization. INFORMS J. Comput. 19(3), 328–340 (2007). doi:10.1287/ijoc.1060.0175
Walter, W.: Differential- und Integral-Ungleichungen. Springer Tracts in Natural Philosophy, vol. 2. Springer, Berlin (1970)
Acknowledgments
Open access funding provided by Max Planck Society (Max Planck Institute for Dynamics of Complex Technical Systems).
Appendix
Here, we return to the proof of Theorem 2, which we omitted in Sect. 4 due to its length. The idea is given by Nikolis in [23, 24], with details about trigonometric splines in [28]. We extended the proof with rigorous upper bounds; in particular, the upper bounds on the errors at the nodes \(t_i\) in (31), (32) and (33) are newly derived. In the second part of the proof, the general upper bound (35) for any \(t\in [0,T]\) and the recursive upper bounds on the errors which are used to derive (35) are new.
Proof
Since \(A\in {\mathcal {C}}^2([0,T],{\mathbb {R}}^{n\times n})\) and \(x\in {\mathcal {C}}^3([0,T],{\mathbb {R}}^{n})\), \(L_3x\in \mathcal L ^\infty ([0,T],{\mathbb {R}}^n)\) is obvious. We split the remaining proof into two parts. First, we prove an upper bound on the error \(e(t)=x(t)-s(t)\in {\mathbb {R}}^n\) at the nodes \(t=t_i\) between the solution x(t) and its spline approximation s(t). Second, we derive an upper bound on the error for any \(t\in [0,T]\). For the linear differential operator \(L_3=\frac{\mathrm{d}}{\mathrm{d} t} + \frac{\mathrm{d}^3}{\mathrm{d} t^3}\), its null space is
Any set of three functions spanning \(N_{L_3}\) forms a fundamental system for \(L_3\). As mentioned above, the \(L_3\)-spline has the fundamental system \(N_{L_3} = \left\{ 1,\cos (t), \sin (t)\right\} \). The associated Green's function for \(L_3\) is \(G(t,\xi )=1-\cos (t-\xi )\) for \(t\ge \xi \).
L-splines fulfill an extended Taylor formula [28], which in the case of \(L_3\) for \(t \in [t_i,t_{i+1}]\) is
\(u_x(t)\) is the unique element in \(N_{L_3}\) such that \(u_x(t_i)=x(t_i)\), \(\dot{u}_x(t_i)=\dot{x}(t_i)\) and \(\ddot{u}_x(t_i)=\ddot{x}(t_i)\) [28]. The derivative of the extended Taylor formula for \(t \in [t_i,t_{i+1}]\) is
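The extended Taylor formula can be verified numerically for a concrete function. A Python sketch with the test function \(x(t)=e^{t}\) at \(t_i=0\) (purely illustrative, not part of the proof) and the Green's function \(G(t,\xi )=1-\cos (t-\xi )\), the kernel consistent with the mean value step later in the proof:

```python
import math

# extended Taylor formula for L3 = d/dt + d^3/dt^3 on [t_i, t_{i+1}], checked
# for x(t) = e^t with t_i = 0
G = lambda t, xi: 1.0 - math.cos(t - xi)   # Green's function of L3 for t >= xi
L3x = lambda t: 2.0 * math.exp(t)          # x' + x''' for x = e^t

def taylor_rhs(t, n=2000):
    # u_x(t) = 2 - cos t + sin t matches x, x', x'' of e^t at t = 0
    u = 2.0 - math.cos(t) + math.sin(t)
    h = t / n                               # trapezoidal rule for the remainder term
    s = 0.5 * (G(t, 0.0) * L3x(0.0) + G(t, t) * L3x(t))
    s += sum(G(t, j * h) * L3x(j * h) for j in range(1, n))
    return u + s * h

for t in (0.3, 0.7, 1.2):
    print(abs(taylor_rhs(t) - math.exp(t)))   # ≈ 0 up to quadrature error
```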
1.
We want to bound the error \(\Vert e(t_i)\Vert _\infty =\Vert x(t_i)-s(t_i)\Vert _\infty \). Therefore, we bound the error for \(t=t_1\) first and then derive a recursive formula for the i-th error. We can use the extended Taylor formula (28) since trigonometric splines are L-splines,
$$\begin{aligned} x(t_1) = x(t_0)+\ddot{x}(t_0)+\dot{x}(t_0) \sin (h)-\ddot{x}(t_0) \cos (h) +\int _{t_0}^{t_1}{G(t_1,\xi )L_3x(\xi )\mathrm{d} \xi }. \end{aligned}$$The spline s fulfills the extended Taylor formula as well, but since \(L_3s(t)=0\), it holds
$$\begin{aligned} s(t_1) = s(t_0)+\ddot{s}(t_0)+\dot{s}(t_0) \sin (h)-\ddot{s}(t_0) \cos (h). \end{aligned}$$Hence,
$$\begin{aligned} e(t_1)= & {} (\ddot{x}(t_0)-\ddot{s}(t_0)) - (\ddot{x}(t_0)-\ddot{s}(t_0))\cos (h) - \int _{t_0}^{t_1}{G(t_1,\xi )L_3x(\xi )\mathrm{d} \xi } \\= & {} 2(\ddot{x}(t_0)-\ddot{s}(t_0))\sin ^2\left( \frac{h}{2}\right) - \int _{t_0}^{t_1}{G(t_1,\xi )L_3x(\xi )\mathrm{d} \xi }. \end{aligned}$$For the derivatives \(\dot{x}\) and \(\dot{s}\) we can apply (29)
$$\begin{aligned} \dot{x}(t_1)= & {} \dot{x}(t_0)\cos (h)+\ddot{x}(t_0)\sin (h)+\int _{t_0}^{t_1}{\sin (t_1-\xi )L_3 x(\xi )\mathrm{d}\xi }, \\ \dot{s}(t_1)= & {} \dot{s}(t_0)\cos (h)+\ddot{s}(t_0)\sin (h) \end{aligned}$$and subtraction yields
$$\begin{aligned} \ddot{x}(t_0)-\ddot{s}(t_0) = \frac{\dot{x}(t_1)-\dot{s}(t_1)}{\sin (h)}+\int _{t_0}^{t_1}{\frac{\sin (t_1-\xi )}{\sin (h)}L_3 x(\xi )\mathrm{d}\xi }. \end{aligned}$$(30)Hence,
$$\begin{aligned} \Vert e(t_1)\Vert _\infty = \left\| 2(\ddot{x}(t_0)-\ddot{s}(t_0))\sin ^2\left( \frac{h}{2}\right) - \int _{t_0}^{t_1}{G(t_1,\xi )L_3x(\xi )\mathrm{d} \xi }\right\| _\infty \end{aligned}$$and substituting (30) yields
$$\begin{aligned} \Vert e(t_1)\Vert _\infty\le & {} \left\| \left( \dot{x}(t_1)-\dot{s}(t_1)\right) \tan \left( \frac{h}{2}\right) \right\| _\infty \\&+\left\| \int _{t_0}^{t_1}{\left[ \tan \left( \frac{h}{2}\right) \sin (t_1-\xi )-G(t_1,\xi )\right] L_3 x(\xi )\mathrm{d}\xi }\right\| _\infty \\\le & {} L\Vert e(t_1)\Vert _\infty \left| \tan \left( \frac{h}{2}\right) \right| +\left\| L_3 x\right\| _{\infty } \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| \end{aligned}$$where L is the Lipschitz constant of the ODE (1), i.e., the ODE fulfills the Lipschitz condition \(\left\| \dot{x}(t)-\dot{s}(t)\right\| _\infty = \left\| A(t) (x(t)-s(t)) \right\| _\infty \le L \left\| x(t)-s(t) \right\| _\infty \), since \(A\in \mathcal {C}([0,T],{\mathbb {R}}^{n \times n})\) is bounded by periodicity, \(\left\| A\right\| _\infty \le L\). For \(L \left| \tan {\left( \frac{h}{2}\right) }\right| <1\), it follows
$$\begin{aligned} \Vert e(t_1)\Vert _\infty \le \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) }\right| }. \end{aligned}$$(31)The right-hand side of (31) tends to zero, in particular \(\frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) }\right| } \rightarrow 0\) as \(h \rightarrow 0\). With the same analysis, the i-th discrete error can be bounded by
$$\begin{aligned} \Vert e(t_i)\Vert _\infty= & {} \Vert x(t_i)-s(t_i)\Vert _\infty \nonumber \\\le & {} \Vert e(t_{i-1})\Vert _\infty \frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| } + \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| }. \end{aligned}$$(32)The bound of the error at the i-th node consists of the error at the previous node \(\Vert e(t_{i-1})\Vert _\infty \) with the factor \(\frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| } \) and a cubic order term \({\mathcal O}(\left\| L_3 x \right\| _{\infty } h^3)\). Additionally, we obtain an explicit upper bound for the i-th discrete error by recursively expanding the series:
$$\begin{aligned} \Vert e(t_i)\Vert _\infty\le & {} \Vert e(t_{i-1})\Vert _\infty \frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| } + \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| } \\\le & {} \Vert e(t_{i-2})\Vert _\infty \left( \frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| }\right) ^2 \\&+ \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| } \left[ 1+ \frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| } \right] \\\le & {} \Vert e(t_{1})\Vert _\infty \left( \frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| }\right) ^{i-1} \\&+ \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| } \sum _{j=0}^{i-2}{\left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| } \right) ^j} \end{aligned}$$and with (31), it follows
$$\begin{aligned} \Vert e(t_i)\Vert _\infty \le \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| } \sum _{j=0}^{i-1}{\left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| } \right) ^j}. \end{aligned}$$Since \(\frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| }\ne 1\), the \((i-1)\)-st partial sum of the (finite) geometric series can be simplified to
$$\begin{aligned} \sum _{j=0}^{i-1}{\left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| } \right) ^j} = \frac{\left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| }\right) ^i-1}{\frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| }-1} = \frac{\left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| }\right) ^i-1}{L\frac{|\sin (h)|+\left| \tan {\left( \frac{h}{2}\right) }\right| }{1-L\left| \tan {\left( \frac{h}{2}\right) }\right| }} \end{aligned}$$and hence,
$$\begin{aligned} \Vert e(t_i)\Vert _\infty \le \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{L|\sin (h)|+L\left| \tan {\left( \frac{h}{2}\right) }\right| } \left[ \left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| }\right) ^i-1\right] . \end{aligned}$$(33)The right-hand side of (33) tends to zero as the number of nodes r tends to infinity, i.e., the error \( \Vert e(t_i)\Vert _\infty \) for any \(i=0,\ldots ,r\) tends to zero as well for \(r\rightarrow \infty \) (Proposition 5).
2.
Now we want to bound the error \(e(t)=x(t)-s(t)\) for any \(t\in [0,T]\). Therefore, let \(t\in [0,T]\) be fixed and choose i such that \(t \in (t_i,t_{i+1}]\) and apply the extended Taylor formula (28) to the solution and the spline:
$$\begin{aligned} x(t)= & {} x(t_i) +\dot{x}(t_i)\sin (t-t_i)+\ddot{x}(t_i)(1-\cos (t-t_i))+\int _{t_i}^t{G(t,\xi )L_3 x(\xi ) \mathrm{d}\xi },\\ s(t)= & {} s(t_i) +\dot{s}(t_i)\sin (t-t_i)+\ddot{s}(t_i)(1-\cos (t-t_i)). \end{aligned}$$The mean value theorem for integrals yields: \(\exists \gamma _i \in \left( t_i,t\right) \) such that
$$\begin{aligned} x(t)= & {} x(t_i) +\dot{x}(t_i)\sin (t-t_i)+\ddot{x}(t_i)(1-\cos (t-t_i))\nonumber \\&+\,L_3 x(\gamma _i)\left( t-t_i-\sin (t-t_i)\right) . \end{aligned}$$Then, for the error, it follows
$$\begin{aligned} e(t)= & {} e(t_i)+\dot{e}(t_i)\sin (t-t_i)+\ddot{e}(t_i)(1-\cos (t-t_i))\nonumber \\&+\,L_3 x(\gamma _i)\left( t-t_i-\sin (t-t_i)\right) . \end{aligned}$$(34)Differentiation leads to
$$\begin{aligned} \dot{e}(t) =\dot{x}(t)-\dot{s}(t) = \dot{e}(t_i)\cos (t-t_i)+\ddot{e}(t_i)\sin (t-t_i)+L_3 x(\gamma _i)\left( 1-\cos (t-t_i)\right) \end{aligned}$$and evaluation at \(t=t_{i+1}\)
$$\begin{aligned}&\dot{e}(t_{i+1})= \dot{e}(t_i)\cos (h)+\ddot{e}(t_i)\sin (h)+L_3 x(\gamma _i)\left( 1-\cos (h)\right) \\&\Leftrightarrow \ddot{e}(t_i) = -\dot{e}(t_{i}) \frac{\cos (h)}{\sin (h)} + \frac{\dot{e}(t_{i+1})}{\sin (h)} - L_3 x(\gamma _i) \frac{1-\cos (h)}{\sin (h)}. \end{aligned}$$The spline s and the solution x fulfill the ODE (1) at the time-points \(t_i\) for \(i=0,1,\ldots ,r\), and as mentioned above, both are Lipschitz-continuous, hence
$$\begin{aligned} \Vert \dot{e}(t_i)\Vert _\infty= & {} \Vert \dot{x}(t_{i})-\dot{s}(t_{i})\Vert _\infty = \Vert A(t_i)\left( x(t_i)-s(t_i)\right) \Vert _\infty \\\le & {} \Vert A(t_i)\Vert _\infty \Vert x(t_{i})-s(t_{i})\Vert _\infty \le L \Vert e(t_i)\Vert _\infty . \end{aligned}$$Hence,
$$\begin{aligned} \Vert \ddot{e}(t_i)\Vert _\infty \le L\Vert e(t_{i})\Vert _\infty \left| \frac{\cos (h)}{\sin (h)}\right| + \frac{L\Vert e(t_{i+1})\Vert _\infty }{|\sin (h)|} + \left\| L_3 x\right\| _{\infty } \left| \frac{1-\cos (h)}{\sin (h)}\right| \end{aligned}$$and Eq. (34) implies
$$\begin{aligned}&\Vert x(t)\Vert _\infty -\Vert s(t)\Vert _\infty \le L\Vert e(t_{i+1})\Vert _\infty \left| \frac{1-\cos (t-t_i)}{\sin (h)}\right| \\&\quad +\, \Vert e(t_i)\Vert _\infty \left( 1+L|\sin (t-t_i)|+L\left| \cot (h)(1-\cos (t-t_i))\right| \right) \\&\quad +\, \left\| L_3 x\right\| _{\infty } \left| \frac{(1-\cos (h))(1-\cos (t-t_i))}{\sin (h)}\right| + \left\| L_3 x\right\| _{\infty } |t-t_i-\sin (t-t_i)|. \end{aligned}$$Using the recursive bound of the error on \(\Vert e(t_{i+1})\Vert _\infty \) in inequality (32), i.e.
$$\begin{aligned} \Vert e(t_{i+1})\Vert _\infty \le \Vert e(t_{i})\Vert _\infty \frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| } + \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| }, \end{aligned}$$and the upper bound for the error on \(\Vert e(t_{i})\Vert _\infty \) in inequality (33), i.e.
$$\begin{aligned} \Vert e(t_i)\Vert _\infty \le \left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| } \sum _{j=0}^{i-1}{\left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| } \right) ^j}, \end{aligned}$$yields
$$\begin{aligned}&\Vert x(t)\Vert _\infty -\Vert s(t)\Vert _\infty \le \left\| L_3 x\right\| _{\infty }L \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| } \left| \frac{1-\cos (t-t_i)}{\sin (h)}\right| \nonumber \\&\quad +\left\| L_3 x \right\| _{\infty } \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{L|\sin (h)|+L\left| \tan {\left( \frac{h}{2}\right) }\right| } \left[ \left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| }\right) ^i-1\right] \nonumber \\&\quad \quad \bigg (1+L|\sin (t-t_i)|+L\left| \cot (h)(1-\cos (t-t_i))\right| \nonumber \\&\qquad \quad + L\frac{1+L\left| \sin {(h)} \right| }{1-L\left| \tan {\left( \frac{h}{2}\right) } \right| }\left| \frac{1-\cos (t-t_i)}{\sin (h)}\right| \bigg )\nonumber \\&\quad + \left\| L_3 x\right\| _{\infty } \left( \left| \frac{(1-\cos (h))(1-\cos (t-t_i))}{\sin (h)}\right| + |t-t_i-\sin (t-t_i)| \right) .\quad \quad \quad \end{aligned}$$(35)
\(\square \)
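As a consistency check on (35), note that in the limit \(t\rightarrow t_i\) every term containing \(t-t_i\) vanishes and the factor multiplying the nodal bound reduces to one, so (35) collapses to the nodal bound (33). The sketch below evaluates the right-hand side of (35) and verifies this; the values of h, L and \(\Vert L_3 x\Vert _\infty\) are illustrative choices of ours:

```python
import math

def rhs_35(t, t_i, i, h, L, M):
    """Evaluate the right-hand side of (35) for t in (t_i, t_i + h].

    L bounds ||A(t)||_inf and M stands for ||L_3 x||_inf; both are
    illustrative inputs here, not values from the paper.
    """
    s, c, tn = math.sin(h), math.cos(h), math.tan(h / 2)
    rho = (1 + L * abs(s)) / (1 - L * abs(tn))  # per-step amplification factor
    d = t - t_i
    # nodal bound (33) on ||e(t_i)||_inf
    nodal = M * abs(2 * tn - h) / (L * (abs(s) + abs(tn))) * (rho**i - 1)
    term1 = M * L * abs(2 * tn - h) / (1 - L * abs(tn)) * abs((1 - math.cos(d)) / s)
    mult = (1 + L * abs(math.sin(d))
            + L * abs((c / s) * (1 - math.cos(d)))
            + L * rho * abs((1 - math.cos(d)) / s))
    term3 = M * (abs((1 - c) * (1 - math.cos(d)) / s) + abs(d - math.sin(d)))
    return term1 + nodal * mult + term3

L, M, h, t_i, i = 0.5, 1.0, 0.1, 1.0, 10
rho = (1 + L * math.sin(h)) / (1 - L * math.tan(h / 2))
nodal_33 = (M * abs(2 * math.tan(h / 2) - h)
            / (L * (math.sin(h) + math.tan(h / 2))) * (rho**i - 1))

# at t = t_i the bound (35) collapses to the nodal bound (33) ...
assert math.isclose(rhs_35(t_i, t_i, i, h, L, M), nodal_33)
# ... and it grows as t moves into the subinterval
assert rhs_35(t_i + h, t_i, i, h, L, M) > nodal_33
```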
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Benner, P., Denißen, J. & Kohaupt, L. Trigonometric spline and spectral bounds for the solution of linear time-periodic systems. J. Appl. Math. Comput. 54, 127–157 (2017). https://doi.org/10.1007/s12190-016-1001-3