1 Introduction

In a separable Hilbert space X consider the nonlinear control system

$$\begin{aligned} \left\{ \begin{array}{ll} u'(t)+Au(t)+p(t)Bu(t)=0,&{} t>0\\ u(0)=u_0. \end{array}\right. \end{aligned}$$
(1.1)

where \(A:D(A)\subset X\rightarrow X\) is a linear self-adjoint operator on X such that \(A\ge -\sigma I\), with \(\sigma \ge 0\), B belongs to \({\mathcal {L}}(X)\), the space of all bounded linear operators on X, and p(t) is a scalar function representing a bilinear control. We suppose that the spectrum of A consists of a sequence of real numbers \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) which can be ordered, without loss of generality, as \(-\sigma \le \lambda _k\le \lambda _{k+1}\rightarrow \infty \) as \(k\rightarrow \infty \). We denote by \(\{\varphi _k\}_{k\in {\mathbb {N}}^*}\) the corresponding eigenfunctions, \(A\varphi _k=\lambda _k\varphi _k,\) with \(\left\Vert \varphi _k \right\Vert =1\), \(\forall \,k\in {\mathbb {N}}^*\).

In the recent paper [1], we studied the stabilizability of (1.1) to the jth eigensolution of the free equation (\(p\equiv 0\)), \(\psi _j(t)=e^{-\lambda _j t}\varphi _j\), for every \(j\in {\mathbb {N}}^*\). For this purpose, we introduced the notion of j-null controllability in time \(T>0\) for the pair \(\{A,B\}\): denoting by \(y(\cdot ;y_0,p)\) the solution of the linear system

$$\begin{aligned} \left\{ \begin{array}{ll} y'(t)+Ay(t)+p(t)B\varphi _j=0,&{}t\in [0,T]\\ \\ y(0)=y_0, \end{array}\right. \end{aligned}$$

we say that \(\{A,B\}\) is j-null controllable in time \(T>0\) if for any initial condition \(y_0\in X\) there exists a control \(p\in L^2(0,T)\) such that

$$\begin{aligned} y(T;y_0,p)=0 \quad \text{ and }\quad \left\Vert p \right\Vert _{L^2(0,T)}\le N_T\left\Vert y_0 \right\Vert , \end{aligned}$$

where \(N_T\) is a positive constant depending only on T. Then, the control cost is given by

$$\begin{aligned} N(T)=\sup _{\left\Vert y_0 \right\Vert =1}\inf \left\{ \left\Vert p \right\Vert _{L^2(0,T)}\,:\, y(T;y_0,p)=0\right\} .\end{aligned}$$

In [1, Theorem 3.7] we have shown that, if \(\{A,B\}\) is j-null controllable, then (1.1) is locally superexponentially stabilizable to \(\psi _j\): for all \(u_0\) in some neighborhood of \(\varphi _j\) there exists a control \(p\in L^2_{loc}([0,+\infty ))\) such that the corresponding solution u of (1.1) satisfies

$$\begin{aligned} \left\Vert u(t)-\psi _j(t) \right\Vert \le Me^{-e^{\omega t}},\qquad \forall \,t\ge 0 \end{aligned}$$
(1.2)

for suitable constants \(\omega ,M>0\) independent of \(u_0\). Notice that such a result holds under the sole assumption of j-null controllability for the pair \(\{A,B\}\). In particular, no assumptions are required on the behavior of the control cost.

Moreover, in [1, Theorem 3.8] we gave sufficient conditions to ensure the j-null controllability of \(\{A,B\}\): a gap condition for the eigenvalues of A and a rank condition on B.

In this paper, we address the related, more delicate, issue of the exact controllability of (1.1) to the eigensolutions \(\psi _j\) via bilinear controls. The main differences between the results of this paper and [1, Theorem 3.7] can be summarized as follows:

  • in addition to assuming the pair \(\{A,B\}\) to be j-null controllable, we further require that the control cost \(N(\cdot )\) satisfies \(N(\tau )\le e^{\nu /\tau }\) for any \(0<\tau \le T_0\), with \(\nu ,T_0>0\),

  • under the above stronger assumptions, we prove not only local exact controllability in any time, but also global exact controllability in large time for a wide set of initial data.

The following result ensures local exact controllability for problem (1.1) assuming a precise behavior of the control cost for small time. In the last section of this paper, we show that such a behavior of the control cost is typical of parabolic problems in one space dimension.

Theorem 1.1

Let \(A:D(A)\subset X\rightarrow X\) be a densely defined linear operator such that

$$\begin{aligned} \begin{array}{ll} (a) &{} A \text{ is } \text{ self-adjoint },\\ (b) &{}\exists \,\sigma \ge 0\,:\,\langle Ax,x\rangle \ge -\sigma \left\Vert x \right\Vert ^2,\,\, \forall \, x\in D(A),\\ (c) &{}\exists \,\lambda >-\sigma \text{ such } \text{ that } (\lambda I+A)^{-1}:X\rightarrow X \text{ is } \text{ compact }, \end{array} \end{aligned}$$
(1.3)

and let \(B:X\rightarrow X\) be a bounded linear operator. Assume that \(\{A,B\}\) is j-null controllable in any time \(T>0\) for some \(j\in {\mathbb {N}}^*\) and suppose that

$$\begin{aligned} N(\tau )\le e^{\nu /\tau },\quad \forall \,0<\tau \le T_0, \end{aligned}$$
(1.4)

for some constants \(\nu =\nu (j),T_0>0\).

Then, for any \(T>0\), there exists a constant \(R_{T}>0\) such that, for any \(u_0\in B_{R_{T}}(\varphi _j)\), there exists a control \(p\in L^2(0,T)\) such that the solution u of (1.1) satisfies \(u(T)=\psi _j(T)\). Moreover, the following estimate holds:

$$\begin{aligned} \left\Vert p \right\Vert _{L^2(0,T)}\le \frac{e^{-\pi ^2\Gamma _0/T}}{e^{2\pi ^2\Gamma _0/(3T)}-1}, \end{aligned}$$
(1.5)

where \(\Gamma _0\) and \(R_T\) can be computed as follows

$$\begin{aligned}&\Gamma _0:=2\nu +\max \left\{ \ln (D),0\right\} , \end{aligned}$$
(1.6)
$$\begin{aligned}&R_T:=e^{-6\Gamma _0/T_1}, \end{aligned}$$
(1.7)

with

$$\begin{aligned} T_1:=\min \left\{ \frac{6}{\pi ^2}T,1,T_0\right\} , \end{aligned}$$
(1.8)
$$\begin{aligned} D:=2\left\Vert B \right\Vert e^{2\sigma +(3\left\Vert B \right\Vert )/2+1/2}\max \left\{ 1,\left\Vert B \right\Vert \right\} . \end{aligned}$$
(1.9)
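The quantities in (1.5)–(1.9) are fully explicit, so they can be evaluated numerically. The sketch below transcribes the formulas above; every parameter value (\(T\), \(T_0\), \(\nu \), \(\left\Vert B \right\Vert \), \(\sigma \)) is hypothetical and chosen only for illustration.

```python
import math

# Illustrative parameter values (not taken from any concrete example).
T, T0, nu = 1.0, 1.0, 0.5        # target time, cost horizon, cost exponent
normB, sigma = 1.0, 0.0          # ||B|| and the constant in (1.3)(b)

T1 = min(6 * T / math.pi**2, 1.0, T0)                                      # (1.8)
D = 2 * normB * math.exp(2 * sigma + 1.5 * normB + 0.5) * max(1.0, normB)  # (1.9)
Gamma0 = 2 * nu + max(math.log(D), 0.0)                                    # (1.6)
RT = math.exp(-6 * Gamma0 / T1)                                            # (1.7)
# the bound (1.5) on the norm of the control
p_bound = math.exp(-math.pi**2 * Gamma0 / T) / (
    math.exp(2 * math.pi**2 * Gamma0 / (3 * T)) - 1.0)
```

Note that \(R_T\) shrinks rapidly as \(\Gamma _0/T_1\) grows, which reflects the local nature of the result.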

The main idea of the proof consists in applying the stability estimates of [1] on a suitable sequence of time intervals of decreasing length \(T_j\), such that \(\sum _{j=1}^\infty T_j<\infty \). Such a sequence, which can be constructed only thanks to (1.4), has to be carefully chosen in order to fit the error estimates that we take from [1]. We point out that our method is fully constructive, being based on an algorithm that allows one to compute all relevant constants. In particular, we make no use of inverse mapping theorems. Indeed, our strategy relies on the resolution of a moment problem to define a suitable control which steers the solution of our problem to the desired eigensolution. For this purpose, we use an estimate of the biorthogonal family to the family of exponentials that was established in [15].

In [1], we gave sufficient conditions for j-null controllability. However, the hypotheses of [1, Theorem 3.8] do not guarantee the validity of condition (1.4) for the control cost. In the result that follows, we provide sufficient conditions for N(T) to satisfy (1.4). It would be interesting to understand if (1.4) is also necessary for the local exact controllability of (1.1).

Theorem 1.2

Let \(A:D(A)\subset X\rightarrow X\) be such that (1.3) holds and suppose that there exists a constant \(\alpha >0\) for which the eigenvalues of A fulfill the gap condition

$$\begin{aligned} \sqrt{\lambda _{k+1}-\lambda _1}-\sqrt{\lambda _k-\lambda _1}\ge \alpha ,\quad \forall \, k\in {\mathbb {N}}^*. \end{aligned}$$
(1.10)

Let \(j\in {\mathbb {N}}^*\) be fixed and let \(B: X\rightarrow X\) be a bounded linear operator such that there exist \(b,q>0\) for which

$$\begin{aligned} \begin{array}{l} \langle B\varphi _j,\varphi _j\rangle \ne 0\quad \text{ and }\quad \left| \lambda _k-\lambda _j\right| ^q|\langle B\varphi _j,\varphi _k\rangle |\ge b,\quad \forall \,k\ne j. \end{array} \end{aligned}$$
(1.11)

Then, the pair \(\{A,B\}\) is j-null controllable in any time \(T>0\), and the control cost N(T) satisfies (1.4) with

$$\begin{aligned} T_0:=\min \left\{ 1,1/\alpha ^2\right\} , \end{aligned}$$
(1.12)

and \(\nu =\nu _j=\nu _j(M,b,q,\alpha )\), where

$$\begin{aligned}&2\nu _j=M+ \frac{M^2}{4} + (2q+3) e + \max \left\{ \ln \left( \dfrac{3M}{|\langle B\varphi _j,\varphi _j\rangle |^2}\right) ,\right. \nonumber \\&\quad \left. \ln \left( \dfrac{3MC_q}{b^2}\right) , \ln \left( \dfrac{3M C_{q,\alpha } }{b^2}\right) ,0\right\} \end{aligned}$$
(1.13)

and

$$\begin{aligned} M:= & {} C\left( 1+\frac{1}{\alpha ^2}\right) +2|\lambda _1|, \end{aligned}$$
(1.14)
$$\begin{aligned} C_q= & {} 2\left( \frac{2q}{e}\right) ^{2q},\quad C_{q,\alpha }=\frac{2\Gamma (2q+1)}{\alpha \sqrt{\lambda _2-\lambda _1}}. \end{aligned}$$
(1.15)

Here \(\Gamma (\cdot )\) is the Gamma function and C is a positive constant independent of T and \(\alpha \).
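As for Theorem 1.1, the cost exponent is computable once the parameters are known. The sketch below evaluates (1.12)–(1.15) for purely illustrative values of \(\alpha \), b, q, C, \(\langle B\varphi _j,\varphi _j\rangle \) and the first two eigenvalues.

```python
import math

# All values below are hypothetical, chosen only to exercise the formulas.
alpha, b, q = 1.0, 0.5, 1.0
lam1, lam2 = 1.0, 4.0
C = 1.0                                     # the abstract constant of (1.14)
Bjj = 0.8                                   # <B phi_j, phi_j>

M = C * (1 + 1 / alpha**2) + 2 * abs(lam1)                       # (1.14)
Cq = 2 * (2 * q / math.e) ** (2 * q)                             # (1.15)
Cqa = 2 * math.gamma(2 * q + 1) / (alpha * math.sqrt(lam2 - lam1))
two_nu = (M + M**2 / 4 + (2 * q + 3) * math.e
          + max(math.log(3 * M / Bjj**2),
                math.log(3 * M * Cq / b**2),
                math.log(3 * M * Cqa / b**2), 0.0))              # (1.13)
nu_j = two_nu / 2
T0 = min(1.0, 1 / alpha**2)                                      # (1.12)
```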

Observe that assumption (1.11) is stronger than [1, hypothesis (16)]. Nevertheless, it is satisfied by all the examples of parabolic problems that we presented in [1].

From Theorems 1.1 and 1.2 we deduce the following Corollary.

Corollary 1.3

Let \(A:D(A)\subset X\rightarrow X\) be such that (1.3) holds and suppose that there exists a constant \(\alpha >0\) for which (1.10) is satisfied. Let \(B: X\rightarrow X\) be a bounded linear operator that verifies (1.11) for some \(b,q>0\). Then, problem (1.1) is locally controllable to the jth eigensolution \(\psi _j\) in any time \(T>0\).

Furthermore, from Theorem 1.1 we deduce two semi-global controllability results in the case of an accretive operator A. In the first one, Theorem 1.4 below, we prove that all initial states lying in a suitable strip can be steered in finite time to the first eigensolution \(\psi _1\) (see Fig. 1). Moreover, we give a uniform estimate for the controllability time depending on the size of the projection of the initial datum \(u_0\) on \(\varphi _1^\perp \).

Theorem 1.4

Let \(A:D(A)\subset X\rightarrow X\) be a densely defined linear operator such that (1.3) holds with \(\sigma =0\) and let \(B:X\rightarrow X\) be a bounded linear operator. Let \(\{A,B\}\) be a 1-null controllable pair which satisfies (1.4). Then, there exists a constant \(r_1>0\) such that for any \(R>0\) there exists \(T_{R}>0\) such that for all \(u_0\in X\) with

$$\begin{aligned} \left| \langle u_0,\varphi _1\rangle -1\right| < r_1,\qquad \left\Vert u_0-\langle u_0,\varphi _1\rangle \varphi _1 \right\Vert \le R, \end{aligned}$$
(1.16)

problem (1.1) is exactly controllable to the first eigensolution \(\psi _1(t)=e^{-\lambda _1 t}\varphi _1\) in time \(T_{R}\).

Fig. 1

The colored region represents the set of initial conditions that can be steered to the first eigensolution in time \(T_R\)

Our second semi-global result, Theorem 1.5 below, ensures the exact controllability of all initial states \(u_0\in X\setminus \varphi _1^\perp \) to the evolution of their orthogonal projection along the first eigensolution. Such a function is defined by

$$\begin{aligned} \phi _1(t)=\langle u_0,\varphi _1\rangle \psi _1(t), \quad \forall \, t \ge 0, \end{aligned}$$
(1.17)

where \(\psi _1\) is the first eigensolution. Notice that if \(\langle u_0,\varphi _1\rangle >0\) and \(\lambda _1>0\), \(\phi _1(\cdot )\) can be interpreted as a time-shift of \(\psi _1\):

$$\begin{aligned} \phi _1(t)=\psi _1(t-t_1)=e^{-\lambda _1(t-t_1)}\varphi _1 \end{aligned}$$

with \(t_1:=\frac{1}{\lambda _1}\log \langle u_0,\varphi _1\rangle \).
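Indeed, the identity can be checked directly from (1.17) and the definition of \(t_1\):

$$\begin{aligned} \psi _1(t-t_1)=e^{-\lambda _1(t-t_1)}\varphi _1=e^{\lambda _1 t_1}\,e^{-\lambda _1 t}\varphi _1=e^{\log \langle u_0,\varphi _1\rangle }\,\psi _1(t)=\langle u_0,\varphi _1\rangle \psi _1(t)=\phi _1(t). \end{aligned}$$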

Theorem 1.5

Let \(A:D(A)\subset X\rightarrow X\) be a densely defined linear operator such that (1.3) holds with \(\sigma =0\) and let \(B:X\rightarrow X\) be a bounded linear operator. Let \(\{A,B\}\) be a 1-null controllable pair which satisfies (1.4). Then, for any \(R>0\) there exists \(T_R>0\) such that for all \(u_0\in X\) with

$$\begin{aligned} \left\Vert u_0-\langle u_0,\varphi _1\rangle \varphi _1 \right\Vert \le R |\langle u_0,\varphi _1\rangle |, \end{aligned}$$
(1.18)

system (1.1) is exactly controllable to \(\phi _1\), defined in (1.17), in time \(T_R\).

Notice that, denoting by \(\theta \) the angle between the half-lines \(\mathbb {R}_+\varphi _1\) and \(\mathbb {R}_+ u_0\), condition (1.18) is equivalent to

$$\begin{aligned} |\tan \theta |\le R, \end{aligned}$$

which defines a closed cone, say \(Q_R\), with vertex at 0 and axis equal to \(\mathbb {R}\varphi _1\) (see Fig. 2). Therefore, Theorem 1.5 ensures a uniform controllability time for all initial conditions lying in \(Q_R\). We observe that, since R is an arbitrary positive constant, every initial condition \(u_0\in X\setminus \varphi _1^\perp \) can be steered to the evolution of its projection along the first eigensolution. Indeed, for any \(u_0\in X\setminus \varphi _1^\perp \), we define

$$\begin{aligned} R_0:=\left\Vert \frac{u_0}{\langle u_0,\varphi _1\rangle }-\varphi _1 \right\Vert . \end{aligned}$$

Then, for any \(R\ge R_0\) condition (1.18) is fulfilled:

$$\begin{aligned} \frac{1}{|\langle u_0,\varphi _1\rangle |}\left\Vert u_0-\langle u_0,\varphi _1\rangle \varphi _1 \right\Vert =R_0\le R. \end{aligned}$$
Fig. 2

For any fixed \(R>0\), the set of initial conditions exactly controllable in time \(T_R\) to their projection along the first eigensolution is indicated by the colored cone \(Q_R\)

The proof of Theorems 1.4 and 1.5 uses the strict accretivity of A in every direction \(\varphi _j\) with \(j> 1\). By letting the equation evolve freely under the action of the semigroup generated by \(-A\), the trajectory enters a neighbourhood of the first eigenstate, so that we can apply our local controllability result stated in Theorem 1.1.

Our approach allows for some extensions which cover more general control systems. For instance, the operator B can be assumed to be unbounded, provided that \(D(A^{1/2})\hookrightarrow D(B)\) and \(||B\varphi ||\le C\left( ||A^{1/2}\varphi ||+||\varphi ||\right) \), thus including the important example of the one-dimensional Fokker-Planck equation. Moreover, we can also treat more general control costs satisfying

$$\begin{aligned} N(\tau )\le e^{\nu /\tau ^\alpha },\quad 0<\tau \le T_0 \end{aligned}$$

for some \(\alpha >0\), instead of (1.4). However, both extensions require a substantial amount of additional work. This is why we prefer to keep them for a forthcoming paper.

Finally, we would like to recall part of the vast literature on bilinear control of evolution equations, referring the reader to the references in [1] for more details. A seminal paper in this field is certainly the one by Ball et al. [3], which establishes that system (1.1) is not controllable along any reference trajectory. More precisely, denoting by \(u(t;u_0,p)\) the unique solution of (1.1), the attainable set from \(u_0\), defined by

$$\begin{aligned} S(u_0)=\{ u(t;u_0,p);t\ge 0, p\in L^r_{loc}([0,+\infty ),\mathbb {R}),r>1\} \end{aligned}$$

is shown in [3] to have a dense complement, and so it cannot be a neighbourhood of the reference trajectory. Notice that, since we control our bilinear problem exactly to the reference trajectory, the negative result of [3] does not represent an obstacle for this kind of controllability.

As for positive results, we would like to mention Beauchard [5], on bilinear control of the wave equation, and Beauchard and Laurent [7] on bilinear control of the Schrödinger equation (see also [4] for a first result on this topic). The results obtained in these papers rely on linearization around the ground state, the use of the inverse mapping theorem, and a regularizing effect which takes place in both problems. The latter property allows the authors to work in spaces where the operator B turns out to be unbounded. Local controllability is proved for any positive time for the Schrödinger equation and for a sufficiently large (in fact optimal) time for the wave equation. Both papers require the condition

$$\begin{aligned} \langle B\varphi _1,\varphi _k\rangle \ne 0,\quad \forall \, k \ge 1 \end{aligned}$$
(1.19)

to be satisfied, together with a suitable asymptotic behavior with respect to the eigenvalues. Notice that the structure of the second order operator and the fact that the space dimension equals one allow the authors of [5, 7] to apply Ingham’s theory [19], which requires a gap condition on the eigenvalues. We further observe that, even though the genericity of assumption (1.19) is proved in both papers [5, 7], only a few explicit examples of operators B of multiplication type are available in the literature. We refer to [2], where, to our knowledge for the first time, a general constructive method and an algorithm are established for building potentials which satisfy the infinite non-vanishing conditions (1.19) and, moreover, the asymptotic condition (1.11).

If (1.19) is violated, it was first shown by Coron [17], for a model describing a particle in a moving box, that there exists a minimal time for local exact controllability to hold. This model couples the Schrödinger equation with two ordinary differential equations modeling the speed and acceleration of the box (see also Beauchard and Coron [6] for local exact controllability in large time). A further paper by Beauchard and Morancey [9] on the Schrödinger equation extends [7] to cases in which the above condition is violated, that is, when there exist integers k such that \(\langle B\varphi _1,\varphi _k\rangle =0\).

An example of controllability to trajectories for nonlinear parabolic systems is studied in [18], where, however, additive controls are considered. In such an example, one can obtain controllability to free trajectories by Carleman estimates and inverse mapping arguments. Such a strategy seems hard to adapt to the current setting.

The paper most closely connected with our work is the one by Beauchard and Marbach [8], where the authors study small-time null controllability for a scalar-input heat equation in one space dimension, with nonlinear lower order terms. Among the results of that paper, we mention null controllability to the first eigenstate of a heat equation with bilinear control. From this result it would be possible to deduce local controllability only to the first eigenstate of the heat equation subject to Neumann boundary conditions. It is worth noting that [8] addresses a specific parabolic equation. Moreover, the methods developed therein, relying on the so-called source term procedure, are totally different from ours.

We observe that the bilinear controls we use in this paper are just scalar functions of time. This fact explains why applications mainly concern problems in low space dimension, like the results in [4,5,6,7,8,9, 17]. A stronger control action could be obtained by letting controls depend on time and space. We refer the reader to [12, 13] for more on this subject.

This paper is organized as follows. In Sect. 2, we collect some preliminaries as well as results from [1] that we need in order to prove Theorem 1.1. Section 3 contains such a proof, while Sect. 4 is devoted to the proof of Theorem 1.2. In Sect. 5, we give the proofs of our semi-global results (Theorems 1.4 and 1.5). Finally, applications of Theorem 1.1 to parabolic problems are analyzed in Sect. 6.

2 Preliminaries

In this section, we recall a well-known result on the well-posedness of our control problem and the regularity of the solution, as well as some results from [1] that are necessary for the proof of Theorem 1.1. Moreover, we recall the fundamental definition of a j-null controllable pair.

We recall our general functional frame. Let \((X,\langle \cdot ,\cdot \rangle ,\left\Vert \cdot \right\Vert )\) be a separable Hilbert space, let \(A:D(A)\subset X\rightarrow X\) be a densely defined linear operator with the following properties

$$\begin{aligned} \begin{array}{ll} (a) &{} A \text{ is } \text{ self-adjoint },\\ (b) &{}\exists \,\sigma \ge 0\,:\,\langle Ax,x\rangle \ge -\sigma \left\Vert x \right\Vert ^2,\,\, \forall \, x\in D(A),\\ (c) &{}\exists \,\lambda >-\sigma \text{ such } \text{ that } (\lambda I+A)^{-1}:X\rightarrow X \text{ is } \text{ compact }. \end{array} \end{aligned}$$
(2.1)

We denote by \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) the eigenvalues of A, which can be ordered, without loss of generality, as \(-\sigma \le \lambda _k\le \lambda _{k+1}\rightarrow \infty \) as \(k\rightarrow \infty \), and by \(\{\varphi _k\}_{k\in {\mathbb {N}}^*}\) the corresponding eigenfunctions, \(A\varphi _k=\lambda _k\varphi _k,\) with \(\left\Vert \varphi _k \right\Vert =1\), \(\forall \,k\in {\mathbb {N}}^*\).

Let \(B:X\rightarrow X\) be a bounded linear operator. For fixed \(T>0\), consider the following bilinear control problem

$$\begin{aligned} \left\{ \begin{array}{ll} u'(t)+A u(t)+p(t)Bu(t)+f(t)=0,&{} t\in [0,T]\\ \\ u(0)=u_0. \end{array}\right. \end{aligned}$$
(2.2)

If \(u_0\in X\), \(p\in L^2(0,T)\) and \(f\in L^2(0,T;X)\), a function \(u\in C^0([0,T],X)\) is called a mild solution of (2.2) if it satisfies

$$\begin{aligned} u(t)=e^{-tA }u_0-\int _0^t e^{-(t-s)A}[p(s)Bu(s)+f(s)]ds, \quad \forall t\in [0,T]. \end{aligned}$$
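As an aside, in a finite-dimensional truncation the mild solution can be approximated by Picard iteration on the Duhamel formula above; for the illustrative data below the corresponding map is a contraction. All numerical choices (three retained modes, the matrix B, the control p) are hypothetical.

```python
import numpy as np

# Sketch: the mild solution of (2.2) with f = 0, computed as the fixed point
# of the Duhamel map by Picard iteration, for a diagonal A with three modes.
lam = np.array([1.0, 4.0, 9.0])            # eigenvalues of A on the kept modes
B = 0.1 * np.ones((3, 3))                  # a bounded operator in this basis
u0 = np.array([1.0, 0.5, 0.0])
p = lambda t: np.sin(t)                    # a sample bilinear control

ts = np.linspace(0.0, 1.0, 201)

def duhamel(u):
    """One Picard step: t -> e^{-tA}u0 - int_0^t e^{-(t-s)A} p(s)Bu(s) ds."""
    g = np.array([p(t) * (B @ u[i]) for i, t in enumerate(ts)])
    out = np.empty_like(u)
    for i, t in enumerate(ts):
        kern = np.exp(-np.outer(t - ts[:i + 1], lam)) * g[:i + 1]
        out[i] = np.exp(-lam * t) * u0 - np.trapz(kern, ts[:i + 1], axis=0)
    return out

u = np.tile(u0, (len(ts), 1))              # start the iteration from u0
for _ in range(30):                        # iterates converge to the solution
    u = duhamel(u)
```

With \(p\equiv 0\) a single step already returns \(e^{-tA}u_0\), which provides a quick sanity check of the scheme.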

We introduce the following notation:

$$\begin{aligned}\begin{array}{l} \left\Vert f \right\Vert _{2}:=\left\Vert f \right\Vert _{L^2(0,T;X)},\qquad \forall \,f\in L^2(0,T;X)\\ \\ \left\Vert f \right\Vert _{\infty }:=\left\Vert f \right\Vert _{C([0,T];X)}=\sup _{t\in [0,T]}\left\Vert f(t) \right\Vert ,\qquad \forall \, f\in C([0,T];X). \end{array} \end{aligned}$$

The well-posedness of (2.2) is ensured by the following proposition (see [3] for a proof).

Proposition 2.1

Let \(T>0\). For any \(u_0\in X\), \(p\in L^2(0,T)\) and \(f\in L^2(0,T;X)\) there exists a unique mild solution of (2.2).

Furthermore, \(u(\cdot )\) satisfies

$$\begin{aligned} \left\Vert u \right\Vert _{\infty }\le C(T) (\left\Vert u_0 \right\Vert +\left\Vert f \right\Vert _{2}), \end{aligned}$$
(2.3)

for a suitable positive constant C(T).

Remark 2.2

Under the hypotheses of Proposition 2.1 it is possible to prove that the solution is more regular. Indeed, for every \(\varepsilon \in (0,T)\) it holds that \(u\in H^1(\varepsilon ,T;X)\cap L^2(\varepsilon ,T;D(A))\) and the following identity is satisfied

$$\begin{aligned} u'(t)+A u(t)+p(t)Bu(t)+f(t)=0,\quad \text {for a.e. }t\in [\varepsilon ,T]. \end{aligned}$$

Furthermore, if \(u_0=0\) then \(u\in H^1(0,T;X)\cap L^2(0,T;D(A))\) (it can be deduced by applying, for instance, [10, Proposition 3.1, p. 130]).

Let us now consider the following nonlinear control problem

$$\begin{aligned} \left\{ \begin{array}{ll} v'(t)+A v(t)+p(t)Bv(t)+p(t)B\varphi _j=0,&{}t\in [0,T]\\ \\ v(0)=v_0, \end{array}\right. \end{aligned}$$
(2.4)

where \(\varphi _j\) is the jth eigenfunction of A. We denote by \(v(\cdot ;v_0,p)\) the solution of (2.4) associated with initial condition \(v_0\) and control p.

The following result establishes a bound for the solution of (2.4) in terms of the initial condition. We give its proof in “Appendix A” for the sake of clarity and completeness. This proof follows that of [1, Proposition 4.3], with a different presentation, in particular with respect to the assumptions in the statement.

Proposition 2.3

Let \(T>0\). Let \(A:D(A)\subset X\rightarrow X\) be a densely defined linear operator that satisfies (2.1) and let \(B:X\rightarrow X\) be a bounded linear operator. Let \(v_0\in X\) and let \(p\in L^2(0,T)\) be such that

$$\begin{aligned} \left\Vert p \right\Vert _{L^2(0,T)}\le N_T\left\Vert v_0 \right\Vert , \end{aligned}$$
(2.5)

with \(N_T\) a positive constant.

Then, \(v(\cdot ;v_0,p)\) verifies

$$\begin{aligned} \sup _{t\in [0,T]}\left\Vert v(t;v_0,p) \right\Vert ^2\le C_1(T,\left\Vert v_0 \right\Vert )\left\Vert v_0 \right\Vert ^2, \end{aligned}$$
(2.6)

where \(C_1(T,\left\Vert v_0 \right\Vert ):=e^{(2\sigma +\left\Vert B \right\Vert )T+2\left\Vert B \right\Vert N_T\sqrt{T}\left\Vert v_0 \right\Vert }(1+\left\Vert B \right\Vert N_T^2)\) and \(\sigma \) is defined in (2.1).

For any \(0\le s_0\le s_1\), we now introduce the linear problem

$$\begin{aligned} {\left\{ \begin{array}{ll} y'(t)+Ay(t)+p(t)B\varphi _j=0,&{}t\in [s_0,s_1]\\ \\ y(s_0)=y_0 \end{array}\right. } \end{aligned}$$
(2.7)

and we denote by \(y(\cdot ;y_0,s_0,p)\) the solution associated with initial condition \(y_0\) at time \(s_0\) and control p. Let us recall that for any fixed \(T>0\) and \(j\in {\mathbb {N}}^*\), we say that the pair \(\{A,B\}\) is j-null controllable in time T if there exists a constant \(N_T\) such that for every \(y_0\in X\) there exists a control \(p\in L^2(0,T)\) with

$$\begin{aligned} \left\Vert p \right\Vert _{L^2(0,T)}\le N_T\left\Vert y_0 \right\Vert , \end{aligned}$$
(2.8)

for which the solution of (2.7) with \(s_0=0\) and \(s_1=T\) satisfies \(y(T;y_0,0,p)=0\). In this case, we define the control cost as

$$\begin{aligned} N(T)=\sup _{\left\Vert y_0 \right\Vert =1}\inf \left\{ \left\Vert p \right\Vert _{L^2(0,T)}\,:\, y(T;y_0,0,p)=0\right\} .\end{aligned}$$
(2.9)

With an approximation argument one realizes that (2.8) holds with \(N_T=N(T)\), that is, for every \(y_0\in X\) there exists \(p\in L^2(0,T)\) with \(\left\Vert p \right\Vert _{L^2(0,T)}\le N(T)\left\Vert y_0 \right\Vert \) such that \(y(T;y_0,0,p)=0\).

Now, consider the following control problem

$$\begin{aligned} \left\{ \begin{array}{ll} w'(t)+Aw(t)+p(t)Bv(t)=0,&{}t\in [0,T]\\ \\ w(0)=0, \end{array}\right. \end{aligned}$$
(2.10)

with v the solution of (2.4). We denote by \(w(\cdot ;0,p)\) the solution of (2.10) associated with control p.

In the following proposition we give a quadratic estimate of the solution of (2.10) in terms of the initial condition of the Cauchy problem solved by v. We give its proof in “Appendix A” for the sake of clarity and completeness. This proof follows that of [1, Proposition 4.4], with a different presentation and a different hypothesis (2.11) compared to the corresponding ones in the statement of [1, Proposition 4.4].

Proposition 2.4

Let \(T>0\), \(A:D(A)\subset X\rightarrow X\) be a densely defined linear operator that satisfies (2.1) and \(B:X\rightarrow X\) be a bounded linear operator. Let \(p\in L^2(0,T)\) verify (2.5) with \(N_T=N(T)\) and \(v_0\in X\) be such that

$$\begin{aligned} N(T)\left\Vert v_0 \right\Vert \le 1. \end{aligned}$$
(2.11)

Then, \(w(\cdot ;0,p)\) satisfies

$$\begin{aligned} \left\Vert w(T;0,p) \right\Vert \le K(T)\left\Vert v_0 \right\Vert ^2, \end{aligned}$$
(2.12)

where

$$\begin{aligned} K^2(T):=\left\Vert B \right\Vert ^2N(T)^2e^{(4\sigma +\left\Vert B \right\Vert +1)T+2\left\Vert B \right\Vert \sqrt{T}}\left( 1+\left\Vert B \right\Vert N(T)^2\right) . \end{aligned}$$
(2.13)

3 Proof of Theorem 1.1

Fix any \(j\in {\mathbb {N}}^*\) and any \(T>0\). Our aim is to prove local exact controllability in time T for the following problem

$$\begin{aligned} \left\{ \begin{array}{ll} u'(t)+A u(t)+p(t)Bu(t)=0,&{} t\in [0,T]\\ \\ u(0)=u_0, \end{array}\right. \end{aligned}$$
(3.1)

to the jth eigensolution \(\psi _j(t)=e^{-\lambda _j t}\varphi _j\) of A, that is, the solution of (3.1) when \(p=0\) and \(u_0=\varphi _j\). Hereafter, we will denote by \(u(\cdot ;u_0,p)\) the solution of (3.1) associated with initial condition \(u_0\) and control p.

We recall that \(A:D(A)\subset X\rightarrow X\) is a densely defined linear operator that satisfies (1.3) and we denote by \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) and \(\{\varphi _k\}_{k\in {\mathbb {N}}^*}\) the eigenvalues and the eigenfunctions of A, respectively. \(B:X\rightarrow X\) is a bounded linear operator. The pair \(\{A,B\}\) is assumed to be j-null controllable in any time, with control cost that satisfies (1.4).

The proof of Theorem 1.1 is divided into two main parts: the case \(\lambda _j=0\), which we develop through a series of steps, and the case \(\lambda _j\ne 0\).

3.1 Case \(\lambda _j=0\)

If \(\lambda _j=0\) our reference trajectory will be the stationary function \(\psi _j\equiv \varphi _j\). Given \(T>0\), we define \(T_f\) as

$$\begin{aligned} T_f:=\min \left\{ T,\frac{\pi ^2}{6},\frac{\pi ^2}{6}T_0\right\} , \end{aligned}$$
(3.2)

where \(T_0\) is the constant in (1.4). We will actually build a control \(p\in L^2(0,T_f)\) such that \(u(T_f;u_0,p)=\psi _j\), and then, by taking \(p(t)\equiv 0\) for \(t>T_f\), the solution u of (3.1) will remain forever on the target trajectory \(\psi _j\).

Now, we define

$$\begin{aligned} T_1:=\frac{6}{\pi ^2}T_f, \end{aligned}$$
(3.3)

and we observe that \(0<T_1\le 1\). Then, we introduce the sequence \(\{T_j\}_{j\in {\mathbb {N}}^*}\) as

$$\begin{aligned} T_j:=T_1/j^2, \end{aligned}$$
(3.4)

and the time steps

$$\begin{aligned} \tau _n=\sum _{j=1}^n T_j,\qquad \forall \, n\in {\mathbb {N}}, \end{aligned}$$
(3.5)

with the convention that \(\sum _{j=1}^0T_j=0\). Notice that \(\sum _{j=1}^\infty T_j=\frac{\pi ^2}{6}T_1=T_f\).
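Since \(\sum _{j\ge 1}1/j^2=\pi ^2/6\), the identity above can also be checked numerically; in the sketch below the value of \(T_f\) is illustrative.

```python
import math

# The time grid (3.3)-(3.5): partial sums tau_n increase towards T_f.
T_f = 1.0                                  # illustrative final time (3.2)
T1 = 6 * T_f / math.pi**2                  # (3.3)

def tau(n):                                # (3.4)-(3.5): tau_n = sum T1/j^2
    return sum(T1 / j**2 for j in range(1, n + 1))

# the tail sum_{j>n} 1/j^2 is below 1/n, so the gap T_f - tau(n) is below T1/n
gap = T_f - tau(1000)
```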

Remark 3.1

Note that the sequence of times \((T_j)_{j \in {\mathbb {N}}^*}\) is strictly decreasing to 0, whereas the sequence of times \((\tau _j)_{j \in {\mathbb {N}}^*}\) is strictly increasing and converges to \(T_f\).

Set \(v:=u-\varphi _j\). We will consider the equation satisfied by v on suitable intervals of time \([s_0,s_1]\) and suitable initial data \(v^0\) at the initial time \(s_0\), as follows. Given any \(0\le s_0\le s_1\le T\), and any \(v^0\) in X, v is the solution of the following Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} v'(t)+A v(t)+p(t)Bv(t)+p(t)B\varphi _j=0,&{}t\in [s_0,s_1]\\ \\ v(s_0)=v^0. \end{array}\right. \end{aligned}$$
(3.6)

We denote by \(v(\cdot ;v^0,s_0,p)\) the solution of (3.6) associated with initial condition \(v^0\) at time \(s_0\) and control p. Observe that proving the controllability of u to \(\psi _j=\varphi _j\) in time \(T_f\) is equivalent to showing the null controllability of v, that is, \(v(T_f;v_0,0,p)=0\), where \(v_0=u_0-\varphi _j\).

The strategy of the proof consists first of building a control \(p_1\in L^2(0,T_1)\) such that at time \(T_1\) the solution of (3.6) can be estimated by the square of the initial condition. We then iterate the procedure on consecutive time intervals of the form \([\tau _{n-1},\tau _n]\): each time we construct a control \(p_n\in L^2(\tau _{n-1},\tau _n)\) such that the solution of (3.6) on \([\tau _{n-1},\tau _n]\) at time \(\tau _n\) is estimated by the square of the initial condition on such interval. Hence, combining all those estimates and letting n go to infinity, we finally deduce that there exists a control \(p\in L^2_{loc}(0,+\infty )\) such that \(v(T_f;v_0,0,p)=0\) and so \(u(T_f;u_0,p)=\varphi _j\).

In practice, we shall build, by induction, controls \(p_n\in L^2(\tau _{n-1},\tau _n)\) for \(n\ge 1\) such that, setting

$$\begin{aligned} \begin{array}{l} \displaystyle q_{n}(t):=\sum _{j=1}^n p_j(t)\chi _{[\tau _{j-1},\tau _j]}(t),\\ v_n:=v(\tau _n;v_0,0,q_n), \end{array} \end{aligned}$$
(3.7)

it holds that

$$\begin{aligned} \begin{array}{ll} 1.&{}\left\Vert p_{n} \right\Vert _{L^2(\tau _{n-1},\tau _n)}\le N(T_n)\left\Vert v_{n-1} \right\Vert ,\\ 2.&{} y(\tau _{n};v_{n-1},\tau _{n-1},p_n)=0,\\ 3.&{}\left\Vert v(\tau _n;v_{n-1},\tau _{n-1},p_n) \right\Vert \le e^{\left( \sum _{j=1}^n 2^{n-j}j^2-2^n6\right) \Gamma _0/T_1},\\ 4.&{}\left\Vert v(\tau _n;v_{n-1},\tau _{n-1},p_n) \right\Vert \le \prod _{j=1}^{n}K(T_j)^{2^{n-j}}\left\Vert v_0 \right\Vert ^{2^{n}}, \end{array} \end{aligned}$$
(3.8)

where \(y(\cdot ;v_{n-1},\tau _{n-1},p_n)\) is the solution of (2.7) in \([\tau _{n-1},\tau _n]\), with initial condition \(v_{n-1}\) and control \(p_n\), and \(K(\cdot )\) is defined in (2.13).

Observe that, by construction,

$$\begin{aligned} v_{n}=v(\tau _n;v_0,0,q_n)=v(\tau _n;v_{n-1},\tau _{n-1},p_n),\quad \forall \,n\ge 1. \end{aligned}$$
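Item 3 of (3.8) can be checked at the level of exponents. Assuming \(K(T_n)\le e^{\Gamma _0/T_n}=e^{\Gamma _0 n^2/T_1}\) (see Remark 3.2 below) and writing the exponent of the bound on \(\left\Vert v_n \right\Vert \), in units of \(\Gamma _0/T_1\), as \(a_n\), the quadratic estimate yields \(a_n=n^2+2a_{n-1}\) with \(a_0=-6\), from the choice (1.7) of \(R_T\). The sketch below verifies the resulting exponent together with the closed form \(a_n=-(n^2+4n+6)\); the latter is a computed consequence, not stated in the text.

```python
# Log-scale check of item 3 in (3.8): the recursion a_n = n^2 + 2 a_{n-1},
# a_0 = -6, encodes ||v_n|| <= K(T_n) ||v_{n-1}||^2 with K(T_n) bounded by
# exp(Gamma_0 n^2 / T_1).
a = [-6]
for n in range(1, 11):
    a.append(n**2 + 2 * a[-1])

for n, an in enumerate(a):
    # matches the explicit exponent in item 3 of (3.8) ...
    assert an == sum(2**(n - j) * j**2 for j in range(1, n + 1)) - 6 * 2**n
    # ... and the closed form, which tends to -infinity: the bound on ||v_n||
    # decays superexponentially
    assert an == -(n**2 + 4 * n + 6)
```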

3.1.1 First iteration

Let us start by studying control problem (3.6) in the first time interval \([s_0,s_1]=[\tau _0,\tau _1]=[0,T_1]\). Recalling that \(\{A,B\}\) is j-null controllable in any time, given \(v_0\in X\) there exists a control \(p_1\in L^2(0,T_1)\) such that

$$\begin{aligned} \left\Vert p_1 \right\Vert _{L^2(0,T_1)}\le N(T_1)\left\Vert v_0 \right\Vert ,\quad \text {and}\quad y(T_1;v_0,0,p_1)=0, \end{aligned}$$
(3.9)

where \(N(T_1)\) is the control cost and \(y(\cdot ;v_0,0,p_1)\) is the solution of the linear problem (2.7). Thus, the first two items of (3.8) for \(n=1\) are fulfilled. We now apply Proposition 2.3, deducing that

$$\begin{aligned} \sup _{t\in [0,T_1]}\left\Vert v(t;v_0,0,p_1) \right\Vert ^2\le C_1(T_1,\left\Vert v_0 \right\Vert )\left\Vert v_0 \right\Vert ^2, \end{aligned}$$
(3.10)

where \(C_1(T_1,\left\Vert v_0 \right\Vert )=e^{(2\sigma +\left\Vert B \right\Vert )T_1+2\left\Vert B \right\Vert N(T_1)\sqrt{T_1}\left\Vert v_0 \right\Vert }(1+\left\Vert B \right\Vert N(T_1)^2)\).

We now measure how close to 0 the solution of (3.6) is steered at time \(T_1\) by the control \(p_1\). For this purpose, we introduce the function \(w(\cdot ):=v(\cdot ;v_0,0,p_1)-y(\cdot ;v_0,0,p_1)\), which satisfies the following Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} w'(t)+Aw(t)+p_1(t)Bv(t)=0,&{}t\in [0,T_1]\\ \\ w(0)=0. \end{array}\right. \end{aligned}$$
(3.11)

Thanks to Proposition 2.4, if

$$\begin{aligned} N(T_1)\left\Vert v_0 \right\Vert \le 1, \end{aligned}$$
(3.12)

then, the solution of (3.11) satisfies

$$\begin{aligned} \left\Vert w(T_1;0,p_1) \right\Vert =\left\Vert v(T_1;v_0,0,p_1) \right\Vert \le K(T_1)\left\Vert v_0 \right\Vert ^2, \end{aligned}$$
(3.13)

where \(K(\cdot )\) is defined on \((0,\infty )\) as

$$\begin{aligned} K^2(\tau ):=\left\Vert B \right\Vert ^2N(\tau )^2e^{(4\sigma +\left\Vert B \right\Vert +1)\tau +2\left\Vert B \right\Vert \sqrt{\tau }}\left( 1+\left\Vert B \right\Vert N(\tau )^2\right) . \end{aligned}$$
(3.14)

Notice that the first equality in (3.13) holds true because the control \(p_1\) steers the solution of the linear problem to 0 [see (3.9)].

Remark 3.2

Observe that the function \(K(\cdot )\) satisfies

$$\begin{aligned} K^2(\tau )\le \left\Vert B \right\Vert ^2N^2(\tau )e^{(4\sigma +3\left\Vert B \right\Vert +1)}\left( 1+\left\Vert B \right\Vert N^2(\tau )\right) ,\quad \forall \,0<\tau \le 1. \end{aligned}$$

Therefore, since \(T_1=\min \{6T/\pi ^2, 1,T_0\}\), where \(T_0\) is defined in (1.4), combining the above inequality with (1.4), we deduce that there exists a constant \(\Gamma _0>\nu \) such that

$$\begin{aligned} K(\tau )\le e^{\Gamma _0/\tau },\quad \forall \,0<\tau \le T_1. \end{aligned}$$
(3.15)

Note that a suitable choice of the constant \(\Gamma _0\) such that (3.15) holds is given by (1.6).

We now define the radius of the neighborhood of \(\varphi _j\) where we take the initial condition \(u_0\) as in (1.7). Let \(u_0\in B_{R_T}(\varphi _j)\), or equivalently \(v_0=u_0 - \varphi _j \in B_{R_T}(0)\), be chosen arbitrarily. With this choice we have that

$$\begin{aligned} N(T_1)\left\Vert v_0 \right\Vert \le e^{\nu /T_1}e^{-6\Gamma _0/T_1}\le e^{-5\Gamma _0/T_1}\le 1, \end{aligned}$$

and (3.12) is satisfied. Therefore, we get that

$$\begin{aligned} \left\Vert v(T_1;v_0,0,p_1) \right\Vert \le K(T_1)\left\Vert v_0 \right\Vert ^2\le e^{-11\Gamma _0/T_1}, \end{aligned}$$
(3.16)

which proves 3. and 4. of (3.8) for \(n=1\).

3.1.2 Iterative step

Now, suppose that we have built controls \(p_j\in L^2(\tau _{j-1},\tau _j)\) such that (3.8) holds for each \(j=1,\dots ,n-1\). In particular, for \(j=n-1\), there exists \(p_{n-1}\in L^2(\tau _{n-2},\tau _{n-1})\) which verifies

$$\begin{aligned} \begin{array}{ll} 1.&{}\left\Vert p_{n-1} \right\Vert _{L^2(\tau _{n-2},\tau _{n-1})}\le N(T_{n-1})\left\Vert v_{n-2} \right\Vert ,\\ 2.&{} y(\tau _{n-1};v_{n-2},\tau _{n-2},p_{n-1})=0,\\ 3.&{}\left\Vert v(\tau _{n-1};v_{n-2},\tau _{n-2},p_{n-1}) \right\Vert \le e^{\left( \sum _{j=1}^{n-1} 2^{n-1-j}j^2-6\cdot 2^{n-1}\right) \Gamma _0/T_1},\\ 4.&{}\left\Vert v(\tau _{n-1};v_{n-2},\tau _{n-2},p_{n-1}) \right\Vert \le \prod _{j=1}^{n-1}K(T_j)^{2^{n-1-j}}\left\Vert v_0 \right\Vert ^{2^{n-1}}. \end{array} \end{aligned}$$
(3.17)

We shall now prove that there exists \(p_n\in L^2(\tau _{n-1},\tau _n)\) such that every item of (3.8) is fulfilled. We define \(q_{n-1}\) and \(v_{n-1}\) as in (3.7) and consider the following problem

$$\begin{aligned} {\left\{ \begin{array}{ll} v'(t)+Av(t)+p(t)Bv(t)+p(t)B\varphi _j=0,&{}t\in [\tau _{n-1},\tau _n]\\ v(\tau _{n-1})=v_{n-1}, \end{array}\right. } \end{aligned}$$
(3.18)

where the control p has still to be suitably chosen. By the change of variables \(s=t-\tau _{n-1}\) and the definition (3.5), we shift the problem from \([\tau _{n-1},\tau _n]\) into the interval \([0,T_n]\). We introduce the functions \({\tilde{v}}(s)=v(s+\tau _{n-1})\) and \({\tilde{p}}(s)=p\left( s+\tau _{n-1}\right) \) and we rewrite (3.18) as

$$\begin{aligned} \left\{ \begin{array}{ll} {\tilde{v}}'(s)+A{\tilde{v}}(s)+{\tilde{p}}(s)B{\tilde{v}}(s)+{\tilde{p}}(s)B\varphi _j=0,&{}s\in \left[ 0,T_n\right] \\ \\ {\tilde{v}}(0)=v_{n-1}. \end{array} \right. \end{aligned}$$
(3.19)

Recalling that \(\{A,B\}\) is j-null controllable in any time, there exists a control \({\tilde{p}}_n\in L^2(0,T_n)\) such that

$$\begin{aligned} \left\Vert {\tilde{p}}_n \right\Vert _{L^2(0,T_n)}\le N(T_n)\left\Vert v_{n-1} \right\Vert \quad \text {and}\quad {\tilde{y}}(T_n;v_{n-1},0,{\tilde{p}}_n)=0, \end{aligned}$$

where \({\tilde{y}}(\cdot ;v_{n-1},0,{\tilde{p}}_n)\) is the solution of the linear problem (2.7) on \([0,T_n]\). Furthermore, since \(v_{n-1}=v(\tau _{n-1};v_0,0,q_{n-1})=v(\tau _{n-1};v_{n-2},\tau _{n-2},p_{n-1})\), from 3. of (3.17) we obtain that

$$\begin{aligned} \begin{aligned} N(T_n)\left\Vert v_{n-1} \right\Vert&\le e^{\nu n^2/T_1}e^{\left( \sum _{j=1}^{n-1} 2^{n-1-j}j^2-6\cdot 2^{n-1}\right) \Gamma _0/T_1}\\&\le e^{\left( n^2-(n-1)^2-4(n-1)+6\cdot 2^{n-1}-6-6\cdot 2^{n-1}\right) \Gamma _0/T_1}\\&=e^{-(2n+3)\Gamma _0/T_1}\le 1, \end{aligned} \end{aligned}$$
(3.20)

where we have used that the constant \(\nu \) in the control cost estimate is smaller than \(\Gamma _0\) (see Remark 3.2), and the identity

$$\begin{aligned} \sum _{j=0}^n\frac{j^2}{2^j}=2^{-n}(-n^2-4n+6(2^n-1)), \qquad n\ge 0, \end{aligned}$$
(3.21)

which can be easily checked by induction.
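As a sanity check, the identity (3.21), together with its limit value 6 used later in (3.24), can be verified in exact rational arithmetic; a minimal sketch:

```python
from fractions import Fraction

def lhs(n):
    # Left-hand side of (3.21): sum_{j=0}^n j^2 / 2^j, computed exactly.
    return sum(Fraction(j * j, 2 ** j) for j in range(n + 1))

def rhs(n):
    # Right-hand side of (3.21): 2^{-n} (-n^2 - 4n + 6(2^n - 1)).
    return Fraction(-n * n - 4 * n + 6 * (2 ** n - 1), 2 ** n)

# (3.21) holds for every n, and 6 - lhs(n) = (n^2 + 4n + 6)/2^n,
# so the sum increases to 6 as n tends to infinity.
assert all(lhs(n) == rhs(n) for n in range(60))
assert 6 - lhs(60) == Fraction(60 * 60 + 4 * 60 + 6, 2 ** 60)
```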

We now choose the control \({\tilde{p}}={\tilde{p}}_n\) in (3.19) and still denote by \({\tilde{v}}\) the corresponding solution. We set \(w={\tilde{v}}-{\tilde{y}}\). Then, w solves (2.10) with \(T=T_n\) and \(p={\tilde{p}}_n\). So, we can apply Proposition 2.4 with \(T=T_n\) to problem (3.19) and since \(w(T_n;0,{\tilde{p}}_n)={\tilde{v}}(T_n;v_{n-1},0,{\tilde{p}}_n)\), we obtain that

$$\begin{aligned} \left\Vert {\tilde{v}}(T_n;v_{n-1},0,{\tilde{p}}_n) \right\Vert \le K(T_n)\left\Vert v_{n-1} \right\Vert ^2. \end{aligned}$$

We shift the problem back to the original interval \(\left[ \tau _{n-1},\tau _{n}\right] \), define \(p_n(t):={\tilde{p}}_n( t-\tau _{n-1})\), and get

$$\begin{aligned} \left\Vert p_n \right\Vert _{L^2(\tau _{n-1},\tau _n)}\le N(T_n)\left\Vert v_{n-1} \right\Vert ,\quad \text {and}\quad y(\tau _n;v_{n-1},\tau _{n-1},p_n)=0, \end{aligned}$$

and

$$\begin{aligned} \left\Vert v(\tau _{n};v_{n-1},\tau _{n-1},p_n) \right\Vert \le K(T_n)\left\Vert v_{n-1} \right\Vert ^2. \end{aligned}$$
(3.22)

So, we have proved the first two items of (3.8). Moreover, thanks to 3. of (3.17) and (3.15), that is, \(K(T_n)\le e^{\Gamma _0n^2/T_1}\), we deduce that

$$\begin{aligned}&\left\Vert v(\tau _{n};v_{n-1},\tau _{n-1},p_n) \right\Vert \le e^{\Gamma _0 n^2/T_1}\left[ e^{\left( \sum _{j=1}^{n-1} 2^{n-1-j}j^2-6\cdot 2^{n-1}\right) \Gamma _0/T_1}\right] ^2\nonumber \\&\quad =e^{\left( \sum _{j=1}^n 2^{n-j}j^2-6\cdot 2^n\right) \Gamma _0/T_1}, \end{aligned}$$
(3.23)

that is the third item of (3.8). Finally, using again (3.22) and 4. of (3.17) we obtain that

$$\begin{aligned}&\left\Vert v(\tau _{n};v_{n-1},\tau _{n-1},p_n) \right\Vert \le K(T_n)\left[ \prod _{j=1}^{n-1}K(T_j)^{2^{n-1-j}}\left\Vert v_0 \right\Vert ^{2^{n-1}}\right] ^2\\&\quad =\prod _{j=1}^{n}K(T_j)^{2^{n-j}}\left\Vert v_0 \right\Vert ^{2^{n}}. \end{aligned}$$

This concludes the induction argument and the proof of (3.8).

We are now ready to complete the proof of Theorem 1.1 for the case \(\lambda _j=0\). We observe that for all \(n\in {\mathbb {N}}^*\)

$$\begin{aligned} \begin{aligned} \left\Vert v(\tau _{n};v_{n-1},\tau _{n-1},p_n) \right\Vert&\le \prod _{j=1}^nK(T_j)^{2^{n-j}}\left\Vert v_0 \right\Vert ^{2^n}\\&\le \prod _{j=1}^n\left( e^{\Gamma _0 j^2/T_1}\right) ^{2^{n-j}}\left\Vert v_0 \right\Vert ^{2^n}\\&=e^{\Gamma _0 2^n/T_1\sum _{j=1}^nj^2/2^j}\left\Vert v_0 \right\Vert ^{2^n}\\&\le e^{\Gamma _0 2^n/T_1\sum _{j=1}^\infty j^2/2^j}\left\Vert v_0 \right\Vert ^{2^n}\le \left( e^{6\Gamma _0/T_1}\left\Vert v_0 \right\Vert \right) ^{2^n} \end{aligned} \end{aligned}$$
(3.24)

where we have used (3.15) and the fact that \(\sum _{j=1}^\infty j^2/2^j=6\), which follows from (3.21). Notice that (3.24) is equivalent to

$$\begin{aligned} \left\Vert v(\tau _{n};v_0,0,q_n) \right\Vert \le \left( e^{6\Gamma _0/T_1}\left\Vert v_0 \right\Vert \right) ^{2^n}, \end{aligned}$$
(3.25)

where \(q_n(t)=\sum _{j=1}^{n}p_j(t)\chi _{[\tau _{j-1},\tau _j]}(t)\). We now take the limit as \(n\rightarrow \infty \) in (3.25) and we get

$$\begin{aligned} \left\Vert u\left( \frac{\pi ^2}{6}T_1;u_0,q_{\infty }\right) -\varphi _j \right\Vert =\left\Vert v\left( \frac{\pi ^2}{6}T_1;v_0,0,q_{\infty }\right) \right\Vert =\left\Vert v(T_f;v_0,0,q_{\infty }) \right\Vert =0 \end{aligned}$$
(3.26)

since, by hypothesis, \(u_0\in B_{R_T}(\varphi _j)\) with \(R_T\) defined in (1.7), so that \(\left\Vert v_0 \right\Vert <e^{-6\Gamma _0/T_1}\) and the right-hand side of (3.25) vanishes as \(n\rightarrow \infty \). This means that we have built a control \(p\in L^2_{loc}([0,\infty ))\), defined by

$$\begin{aligned} p(t)=\left\{ \begin{array}{ll} \sum _{n=1}^\infty p_{n}(t)\chi _{\left[ \tau _{n-1} ,\tau _{n}\right] }(t),&{} t\in \left( 0,T_f\right] \\ \\ 0,&{}t\in (T_f,+\infty ) \end{array}\right. \end{aligned}$$
(3.27)

for which the solution u of (3.1) reaches the jth eigensolution \(\psi _j=\varphi _j\) at time \(T_f\le T\) and stays on it forever.

Observe that, thanks to the first item of (3.8) and to (3.20), we can bound the \(L^2\)-norm of such a control:

$$\begin{aligned} \begin{aligned} \left\Vert p \right\Vert ^2_{L^2\left( 0,T\right) }&=\sum _{n=1}^\infty \left\Vert p_{n} \right\Vert ^2_{L^2\left( \tau _{n-1},\tau _{n}\right) }\\&\le \sum _{n=1}^\infty \left( N(T_n)\left\Vert v\left( \tau _{n-1} \right) \right\Vert \right) ^2\le \sum _{n=1}^\infty e^{-2(2n+3)\Gamma _0/T_1}\\&\le \frac{e^{-6\Gamma _0/T_1}}{e^{4\Gamma _0/T_1}-1}=\frac{e^{-\pi ^2\Gamma _0/T_f}}{e^{2\pi ^2\Gamma _0/(3T_f)}-1}. \end{aligned} \end{aligned}$$
(3.28)

Notice that since (3.2) holds, (3.28) implies (1.5).
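The geometric summation evaluated in the last line of (3.28) can be checked numerically; in the sketch below, c plays the role of the ratio of the exponent constant to \(T_1\):

```python
import math

def tail_sum(c, nmax=500):
    # Partial sum of sum_{n>=1} e^{-2(2n+3)c}, the series bounding ||p||^2 in (3.28).
    return sum(math.exp(-2 * (2 * n + 3) * c) for n in range(1, nmax + 1))

def closed_form(c):
    # Geometric-series value used in (3.28): e^{-6c} / (e^{4c} - 1).
    return math.exp(-6 * c) / (math.exp(4 * c) - 1)

for c in (0.3, 1.0, 2.5):
    assert abs(tail_sum(c) - closed_form(c)) < 1e-12
```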

3.2 Case \(\lambda _j\ne 0\)

We now turn to the case \(\lambda _j\ne 0\). We define the operator

$$\begin{aligned} A_j:=A-\lambda _jI. \end{aligned}$$

We proved in [1, Lemma 4.7] that if \(\{A,B\}\) is j-null controllable, then so is the pair \(\{A_j,B\}\). Furthermore, it is easy to check that condition (1.4) is also verified by the control cost associated with \(\{A_j,B\}\) whenever it holds for the control cost associated with \(\{A,B\}\). In particular, denoting by \(N_j(\cdot )\) the control cost associated with \(\{A_j,B\}\), we have

$$\begin{aligned} N_j(\tau )\le e^{(\nu +|\lambda _j|)/\tau },\qquad \forall \,0<\tau \le \min \{1,T_0\}, \end{aligned}$$

where \(\nu ,T_0>0\) are the constants in (1.4).

It is possible to check that \(A_j\) satisfies (1.3) and, moreover, that it has the same eigenfunctions, \(\{\varphi _k\}_{k\in {\mathbb {N}}^*}\), as A, while its eigenvalues are given by

$$\begin{aligned} \mu _k=\lambda _k-\lambda _j, \qquad \forall \, k\in {\mathbb {N}}^*. \end{aligned}$$

In particular, \(\mu _j=0\).

We define the function \(z(t)=e^{\lambda _j t}u(t)\), where u is the solution of (3.1). Then, z solves the following problem

$$\begin{aligned} \left\{ \begin{array}{ll} z'(t)+A_jz(t)+p(t)Bz(t)=0,&{} t\in [0,T],\\ \\ z(0)=u_0. \end{array}\right. \end{aligned}$$
(3.29)

We define \(T_f\) as in (3.2) and \(R_{T}\) as in (1.7). We deduce from the previous analysis that, if \(u_0\in B_{R_{T}}(\varphi _j)\), then there exists a control \(p\in L^2([0,+\infty ))\) that steers the solution z to the eigenstate \(\varphi _j\) in time \(T_f\le T\). This implies the exact controllability of u to the eigensolution \(\psi _j(t)=e^{-\lambda _j t}\varphi _j\): indeed,

$$\begin{aligned}&\left\Vert u\left( T_f;u_0,p\right) -\psi _j\left( T_f\right) \right\Vert =\left\Vert e^{-\lambda _jT_f}z\left( T_f\right) -e^{-\lambda _jT_f}\varphi _j \right\Vert \\&=e^{-\lambda _jT_f}\left\Vert z\left( T_f\right) -\varphi _j \right\Vert =0. \end{aligned}$$

Remark 3.3

We observe that, from (3.28), it follows that \(\left\Vert p \right\Vert _{L^2(0,T_{f})}\rightarrow 0\) as \(T_f\rightarrow 0\). This fact is not surprising: as \(T_f\) approaches 0, the size of the neighborhood from which the initial condition can be chosen also goes to zero.

4 Proof of Theorem 1.2

Before proving Theorem 1.2, we formally define, for any fixed \(j\in {\mathbb {N}}^*\), the function

$$\begin{aligned} G_{M,j}(T):=\frac{M}{T^2}e^{M/T}\sum _{k=1}^{\infty }\frac{e^{-2\omega _kT+M\sqrt{\omega _k}}}{|\langle B\varphi _j,\varphi _k\rangle |^2}, \end{aligned}$$
(4.1)

where M is a positive constant, \(\omega _k:=\lambda _k-\lambda _1\) for all \(k\in {\mathbb {N}}^*\), and \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) are the eigenvalues of A. In Lemma 4.1 below, we investigate the behavior of \(G_{M,j}(T)\) for small values of T. This result will be crucial for the analysis of the control cost N(T) in Theorem 1.2.

Lemma 4.1

Let \(A:D(A)\subset X\rightarrow X\) be such that (1.3) and (1.10) hold and \(B:X\rightarrow X\) be such that (1.11) holds. Then, for any \(M,T>0\) the series in (4.1) is convergent and there exists a positive constant \(\nu _j\), such that

$$\begin{aligned} G_{M,j}(T)\le e^{2\nu _j/T},\quad \forall \,0<T\le 1. \end{aligned}$$
(4.2)

Moreover, a suitable choice of \(\nu _j=\nu _j(M,b,q, \alpha )\) is (1.13).

Proof

We first observe that there exists a constant \(C>0\) such that

$$\begin{aligned} |\lambda _k-\lambda _j|\le C(\lambda _k-\lambda _1)=C\omega _k,\qquad \forall \,k\in {\mathbb {N}}^*. \end{aligned}$$

So, thanks to assumption (1.11), we have that

$$\begin{aligned} \begin{aligned} G_{M,j}(T)&=\frac{M}{T^2}e^{M/T}\sum _{k=1}^{\infty }\frac{e^{-2\omega _kT+M\sqrt{\omega _k}}}{|\langle B\varphi _j,\varphi _k\rangle |^2}\\&\le \frac{M}{T^2}e^{M/T}\left[ \frac{1}{|\langle B\varphi _j,\varphi _j\rangle |^2}+\frac{1}{b^2}\sum _{k=1,\,k\ne j}^{\infty }\left( \omega _k^{2q}e^{-\omega _kT}\right) e^{-\omega _kT+M\sqrt{\omega _k}}\right] . \end{aligned}\nonumber \\ \end{aligned}$$
(4.3)

For any \(\omega \ge 0\) we set \(f(\omega )=e^{-\omega T+M\sqrt{\omega }}\). The maximum value of f is attained at \(\omega =\left( \frac{M}{2T}\right) ^2\), where f equals \(e^{M^2/(4T)}\). So, we can bound \(G_{M,j}(T)\) as follows

$$\begin{aligned} G_{M,j}(T)\le \frac{M}{T^2}e^{M/T}\left[ \frac{1}{|\langle B\varphi _j,\varphi _j\rangle |^2}+\frac{e^{M^2/(4T)}}{b^2}\sum _{k=1}^{\infty }\omega _k^{2q}e^{-\omega _kT}\right] . \end{aligned}$$
(4.4)
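The elementary maximization of f used to pass from (4.3) to (4.4) can be checked numerically; a small sketch with illustrative values of M and T (any positive values work):

```python
import math

M, T = 3.0, 0.25                       # illustrative values; any M, T > 0 work

def f(w):
    # f(w) = e^{-wT + M sqrt(w)}, the function maximized between (4.3) and (4.4).
    return math.exp(-w * T + M * math.sqrt(w))

w_star = (M / (2 * T)) ** 2            # claimed maximizer, (M/2T)^2
f_max = math.exp(M * M / (4 * T))      # claimed maximum value e^{M^2/(4T)}, used in (4.4)

assert abs(f(w_star) - f_max) < 1e-9 * f_max
# f never exceeds f_max on a fine grid past the maximizer
assert max(f(0.01 * k) for k in range(1, 20001)) <= f_max * (1 + 1e-12)
```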

Now, for any \(\omega \ge 0\) we define the function \(g(\omega )=\omega ^{2q}e^{-\omega T}\). Its derivative is given by

$$\begin{aligned} g'(\omega )=(2q-\omega T)\omega ^{2q-1}e^{-\omega T} \end{aligned}$$

and therefore we deduce that

$$\begin{aligned} g(\omega ) \text{ is } \left\{ \begin{array}{ll}\text{ increasing } &{} \text{ if } 0\le \omega <(2q)/T\\ \\ \text{ decreasing }&{}\text{ if } \omega \ge (2q)/T \end{array}\right. \end{aligned}$$

and g has a maximum at \(\omega =(2q)/T\). We define the following index:

$$\begin{aligned} k_1:=k_1(T)=\sup \left\{ k\in {\mathbb {N}}^*\,:\,\omega _k\le \frac{2q}{T}\right\} . \end{aligned}$$

Note that \(k_1(T)\) goes to \(\infty \) as T converges to 0. We can rewrite the sum in (4.4) as follows

$$\begin{aligned} \sum _{k=1}^{\infty }\omega _k^{2q}e^{-\omega _kT}=\sum _{k\le k_1-1}\omega _k^{2q}e^{-\omega _kT}+\sum _{k_1\le k\le k_1+1}\omega _k^{2q}e^{-\omega _kT}+\sum _{k\ge k_1+2}\omega _k^{2q}e^{-\omega _kT}.\nonumber \\ \end{aligned}$$
(4.5)

For any \(k\le k_1-1\), we have

$$\begin{aligned} \int _{\omega _k}^{\omega _{k+1}}\omega ^{2q}e^{-\omega T}d\omega \ge (\omega _{k+1}-\omega _k)\omega _k^{2q}e^{-\omega _k T}\ge \alpha \sqrt{\omega _2}\,\omega _k^{2q}e^{-\omega _k T} \end{aligned}$$
(4.6)

and for any \(k\ge k_1+2\)

$$\begin{aligned} \int _{\omega _{k-1}}^{\omega _k}\omega ^{2q}e^{-\omega T}d\omega \ge (\omega _k-\omega _{k-1})\omega _k^{2q}e^{-\omega _k T}\ge \alpha \sqrt{\omega _2}\,\omega _k^{2q}e^{-\omega _k T}. \end{aligned}$$
(4.7)

So, by using estimates (4.6) and (4.7), (4.5) becomes

$$\begin{aligned} \sum _{k=1}^{\infty }\omega _k^{2q}e^{-\omega _kT}\le \frac{2}{\alpha \sqrt{\omega _2}}\int _0^\infty \omega ^{2q}e^{-\omega T}d\omega +\sum _{k_1\le k\le k_1+1}\omega _k^{2q}e^{-\omega _kT}. \end{aligned}$$
(4.8)

Furthermore, recalling that g has a maximum for \(\omega =2q/T\), it holds that

$$\begin{aligned} k=k_1,k_1+1\quad \Rightarrow \quad \omega _k^{2q}e^{-\omega _k T}\le \left( 2q/T\right) ^{2q}e^{-2q}. \end{aligned}$$
(4.9)

Finally, the integral term of (4.8) can be rewritten as

$$\begin{aligned} \int _0^\infty \omega ^{2q}e^{-\omega T}d\omega =\frac{1}{T}\int _0^{\infty }\left( \frac{s}{T}\right) ^{2q}e^{-s}ds=\frac{1}{T^{1+2q}}\int _0^{\infty }s^{2q}e^{-s}ds=\frac{\Gamma (2q+1)}{T^{1+2q}},\nonumber \\ \end{aligned}$$
(4.10)

where by \(\Gamma (\cdot )\) we indicate the Euler integral of the second kind, that is, the Gamma function.
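Identity (4.10) can be cross-checked against a crude quadrature; a sketch with illustrative values \(q=3/2\), \(T=0.8\) (chosen only for the test):

```python
import math

def integral(q, T, upper=400.0, steps=200000):
    # Midpoint-rule approximation of int_0^infty w^{2q} e^{-wT} dw,
    # truncated at `upper` (the tail is negligible for these parameters).
    h = upper / steps
    return sum(((i + 0.5) * h) ** (2 * q) * math.exp(-(i + 0.5) * h * T) * h
               for i in range(steps))

q, T = 1.5, 0.8                                    # illustrative values
exact = math.gamma(2 * q + 1) / T ** (1 + 2 * q)   # right-hand side of (4.10)
assert abs(integral(q, T) - exact) < 1e-4 * exact
```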

Therefore, we conclude from (4.9) and (4.10) that there exist two constants \(C_q,C_{\alpha ,q}>0\) such that

$$\begin{aligned} \sum _{k=1}^\infty \omega _k^{2q}e^{-\omega _k T}\le \frac{C_q}{T^{2q}}+\frac{C_{\alpha ,q}}{T^{1+2q}}. \end{aligned}$$
(4.11)

We use this last bound to prove that there exists \(\nu _j>0\) such that

$$\begin{aligned} G_{M,j}(T)\le & {} \frac{M}{T^2}e^{M/T}\left[ \frac{1}{|\langle B\varphi _j,\varphi _j\rangle |^2}+\frac{e^{M^2/(4T)}}{b^2}\left( \frac{C_q}{T^{2q}}+\frac{C_{\alpha ,q}}{T^{1+2q}}\right) \right] \nonumber \\\le & {} e^{2\nu _j/T} \quad \forall \ T \in (0,1], \end{aligned}$$
(4.12)

as claimed. \(\square \)
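The final absorption step in (4.12) rests on the elementary inequality \(1/T^m\le e^{m/T}\) on (0, 1], which lets every polynomial factor in 1/T be absorbed into a single exponential \(e^{2\nu _j/T}\); a minimal numerical check:

```python
import math

# On (0,1] we have 1/T^m <= e^{m/T}: equivalently m*log(1/T) <= m/T,
# which follows from log(x) <= x. The comparison is done in log form
# so that no overflow occurs for small T.
for m in (1, 2, 4):
    for k in range(1, 1000):
        T = k / 1000.0
        assert m * math.log(1.0 / T) <= m / T
```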

Remark 4.2

Observe that Lemma 4.1 holds even under different lower bounds on the Fourier coefficients of \(B\varphi _j\). For instance, instead of assumption (1.11), one can assume

$$\begin{aligned} \langle B\varphi _j,\varphi _j\rangle \ne 0\quad \text {and}\quad |\langle B\varphi _j,\varphi _k\rangle |\ge be^{-c\sqrt{|\lambda _k-\lambda _j|}},\quad \forall \,k\ne j \end{aligned}$$

with \(b,c>0\).

Now we proceed with the proof of Theorem 1.2.

Proof of Theorem 1.2

Let \(T>0\) and consider problem (2.7). For any \(y_0\in X\) and \(p\in L^2(0,T)\) there exists a unique strong solution \(y\in C^0([0,T],X)\) of (2.7) that can be written as

$$\begin{aligned} y(t)=e^{-tA}y_0-\int _0^t e^{-(t-s)A}p(s)B\varphi _jds, \end{aligned}$$
(4.13)

(see, for instance, [10, Proposition 3.1, p. 130]).

Our aim is to find a control \(p\in L^2(0,T)\) for which \(y(T;y_0,0,p)=0\), which is equivalent to the following identity

$$\begin{aligned} \sum _{k\in {\mathbb {N}}^*}\langle y_0,\varphi _k\rangle e^{-\lambda _k T}\varphi _k=\int _0^T p(s)\sum _{k\in {\mathbb {N}}^*}\langle B\varphi _j,\varphi _k\rangle e^{-\lambda _k(T-s)}\varphi _kds. \end{aligned}$$

Since, by hypothesis, the eigenfunctions of A form an orthonormal basis of X, the above formula reads as

$$\begin{aligned} \langle y_0,\varphi _k\rangle =\int _0^T e^{\lambda _ks}p(s)\langle B\varphi _j,\varphi _k\rangle ds,\quad \forall \,k\in {\mathbb {N}}^*, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \int _0^T e^{\lambda _ks}p(s)ds=\frac{\langle y_0,\varphi _k\rangle }{\langle B\varphi _j,\varphi _k\rangle },\quad \forall \,k\in {\mathbb {N}}^*. \end{aligned}$$
(4.14)

By defining \(q(s):=e^{\lambda _1 s}p(s)\) and \(\omega _k:=\lambda _k-\lambda _1\ge 0\), the family of equations (4.14) can be rewritten as

$$\begin{aligned} \int _0^T e^{\omega _k s}q(s)ds=\frac{\langle y_0,\varphi _k\rangle }{\langle B\varphi _j,\varphi _k\rangle },\quad \forall \,k\in {\mathbb {N}}^*. \end{aligned}$$
(4.15)

Thanks to hypothesis (1.10), we can apply [15, Theorem 2.4] that ensures the existence of a biorthogonal family \(\{\sigma _k\}_{k\in {\mathbb {N}}^*}\) to the family of exponentials \(\{\zeta _k\}_{k\in {\mathbb {N}}^*}\), \(\zeta _k(s)=e^{\omega _ks}\), \(s\in [0,T]\).

We claim that the series

$$\begin{aligned} \sum _{k\in {\mathbb {N}}^*}\frac{\langle y_0,\varphi _k\rangle }{\langle B\varphi _j,\varphi _k\rangle }\sigma _k(s), \end{aligned}$$
(4.16)

is convergent in \(L^2(0,T)\). Indeed, thanks to the following estimate for the biorthogonal family \(\{\sigma _k\}_{k\in {\mathbb {N}}^*}\), from [15, Theorem 2.4],

$$\begin{aligned} \left\Vert \sigma _k \right\Vert ^2_{L^2(0,T)}\le C^2_\alpha (T) e^{-2\omega _kT}e^{C \sqrt{\omega _k}/\alpha },\quad \forall \,k\in {\mathbb {N}}^*, \end{aligned}$$

with \(C>0\) independent of T and \(\alpha \), and

$$\begin{aligned} C^2_\alpha (T)=\left\{ \begin{array}{ll} C\left( \frac{1}{T}+\frac{1}{T^2\alpha ^2}\right) e^{\frac{C}{\alpha ^2 T}}&{}\text {if }T<\frac{1}{\alpha ^2},\\ \\ C^2\alpha ^2&{}\text {if }T\ge \frac{1}{\alpha ^2}, \end{array}\right. \end{aligned}$$

we obtain

$$\begin{aligned} \begin{aligned} \sum _{k\in {\mathbb {N}}^*}\left| \frac{\langle y_0,\varphi _k\rangle }{\langle B\varphi _j,\varphi _k\rangle }\right| \left\Vert \sigma _k \right\Vert _{L^2(0,T)}&\le \left\Vert y_0 \right\Vert \left( \sum _{k\in {\mathbb {N}}^*}\frac{\left\Vert \sigma _k \right\Vert ^2_{L^2(0,T)}}{|\langle B\varphi _j,\varphi _k\rangle |^2}\right) ^{1/2}\\&\le \left\Vert y_0 \right\Vert \left( C^2_\alpha (T)\sum _{k\in {\mathbb {N}}^*}\frac{e^{-2\omega _kT}e^{C\sqrt{\omega _k}/\alpha }}{|\langle B\varphi _j,\varphi _k\rangle |^2}\right) ^{1/2}. \end{aligned} \end{aligned}$$

Observe that, by Lemma 4.1, the right-hand side of the above estimate is finite for any \(T>0\).

Therefore, we define the control q as

$$\begin{aligned} q(s):=\sum _{k\in {\mathbb {N}}^*}\frac{\langle y_0,\varphi _k\rangle }{\langle B\varphi _j,\varphi _k\rangle }\sigma _k(s), \end{aligned}$$

and we deduce that \(q\in L^2(0,T)\) satisfies (4.15) and furthermore

$$\begin{aligned} \left\Vert q \right\Vert _{L^2(0,T)}\le C_\alpha (T)\Lambda _T\left\Vert y_0 \right\Vert , \end{aligned}$$

where

$$\begin{aligned} \Lambda _T:=\left( \sum _{k\in {\mathbb {N}}^*}\frac{e^{-2\omega _kT}e^{C\sqrt{\omega _k}/\alpha }}{|\langle B\varphi _j,\varphi _k\rangle |^2}\right) ^{1/2}. \end{aligned}$$
(4.17)

Finally, returning to p, we obtain that

$$\begin{aligned} \left\Vert p \right\Vert ^2_{L^2(0,T)}=\int _0^Te^{-2\lambda _1s}|q(s)|^2ds\le \max \left\{ 1,e^{-2\lambda _1 T}\right\} \left\Vert q \right\Vert ^2_{L^2(0,T)}. \end{aligned}$$
(4.18)

By taking

$$\begin{aligned} N(T):=\max \left\{ 1,e^{-\lambda _1 T}\right\} C_{\alpha }(T)\Lambda _T, \end{aligned}$$
(4.19)

we deduce that \(\{A,B\}\) is j-null controllable in any time \(T>0\) with associated control cost (4.19).

It remains to prove estimate (1.4) for the control cost N(T) defined in (4.19), for small T. Let us define \(T_0\) as in (1.12). Then, for any \(0<T< T_0\), it holds that

$$\begin{aligned} C^2_\alpha (T)=C\left( \frac{1}{T}+\frac{1}{T^2\alpha ^2}\right) e^{\frac{C}{\alpha ^2 T}}. \end{aligned}$$

We can assume without loss of generality that \(C \ge 1\), replacing C by \(\max \left\{ 1, C\right\} \) otherwise; we do so for the rest of the proof.

Since \(0<T< T_0 \le 1\), we claim that there exists \(\widetilde{M}>0\) such that

$$\begin{aligned} C_\alpha ^2(T) \le \frac{\widetilde{M}}{T^2}e^{\widetilde{M}/T} \quad \forall \ T \in (0,T_0). \end{aligned}$$
(4.20)

Indeed, we have

$$\begin{aligned} C_\alpha ^2(T)\le C\left( 1+\frac{1}{\alpha ^2}\right) \frac{1}{T^2}e^{\frac{C}{\alpha ^2T}} \quad \forall \ T \in (0,T_0). \end{aligned}$$

We set

$$\begin{aligned} \widetilde{M}:=C\left( 1+\frac{1}{\alpha ^2}\right) . \end{aligned}$$

We note that since \(C\ge 1\), we have

$$\begin{aligned} \dfrac{C}{\alpha ^2} \le \widetilde{M}. \end{aligned}$$

Hence from the two above estimates, we deduce (4.20). Moreover, we easily prove that

$$\begin{aligned} \max \left\{ 1,e^{-\lambda _1 T}\right\} \le e^{|\lambda _1|} \quad \forall \ T \in (0,T_0). \end{aligned}$$
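Both (4.20) and the bound on \(\max \{1,e^{-\lambda _1T}\}\) can be verified numerically; the sketch below uses illustrative values of C, \(\alpha \) and \(\lambda _1\), and compares logarithms to avoid overflow for small T:

```python
import math

C, alpha, lam1 = 2.0, 0.7, -0.3     # illustrative constants, with C >= 1
M_tilde = C * (1 + 1 / alpha ** 2)  # the constant appearing in (4.20)

for k in range(1, 1000):
    T = k / 1000.0                  # T ranges over (0, 1)
    # (4.20), compared in logarithms to avoid overflow for small T
    log_lhs = math.log(C * (1 / T + 1 / (T * T * alpha ** 2))) + C / (alpha ** 2 * T)
    log_rhs = math.log(M_tilde / T ** 2) + M_tilde / T
    assert log_lhs <= log_rhs + 1e-9
    # the elementary bound on the max factor
    assert max(1.0, math.exp(-lam1 * T)) <= math.exp(abs(lam1))
```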

Therefore, the control cost N(T) given by (4.19) can be bounded from above as follows

$$\begin{aligned} N(T)\le \sqrt{G_{M,j}(T)}, \end{aligned}$$

where M is defined as in (1.14) and the function \(G_{M,j}(\cdot )\) is defined in (4.1). Finally, thanks to Lemma 4.1, we deduce that N(T) fulfills property (1.4) with \(\nu =\nu _j\). \(\square \)

5 Proof of Theorems 1.4 and 1.5

Before proving Theorem 1.4, let us show a preliminary result that establishes the statement in the case of an accretive operator with \(\lambda _1=0\).

Lemma 5.1

Let \(A:D(A)\subset X\rightarrow X\) be a densely defined linear operator such that (1.3) holds with \(\sigma =0\) and let \(B:X\rightarrow X\) be a bounded linear operator. Let \(\{A,B\}\) be a 1-null controllable pair which satisfies (1.4). Furthermore, we assume \(\lambda _1=0\). Then, there exists a constant \(r_1>0\) such that for any \(R>0\) there exists \(T_{R}>0\) such that for all \(v_0\in X\) that satisfy

$$\begin{aligned} \left| \langle v_0,\varphi _1\rangle \right| < r_1,\qquad \left\Vert v_0-\langle v_0,\varphi _1\rangle \varphi _1 \right\Vert \le R, \end{aligned}$$
(5.1)

problem (2.4) is null controllable in time \(T_{R}\).

Proof

First step. We fix \(T=1\). Thanks to Theorem 1.1, there exists a constant \(r_1>0\) such that, denoting by \(u_1\) the solution of (1.1) on [0, 1], if \(\left\Vert u_1(0)- \varphi _1 \right\Vert < \sqrt{2}r_1\), then there exists a control \(p_1\in L^2(0,1)\) for which the solution of (1.1), with p replaced by \(p_1\), satisfies \(u_1(1)=\varphi _1\). Setting \(v_1=u_1- \varphi _1\) on [0, 1], we deduce that if \(\left\Vert v_1(0) \right\Vert < \sqrt{2}r_1\), then there exists a control \(p_1\in L^2(0,1)\) for which the solution \(v_1\) of (2.4) on [0, 1], with p replaced by \(p_1\), satisfies \(v_1(1)=0\).

Second step. Let \(v_0\in X\) be the initial condition of (2.4). We decompose \(v_0\) as follows

$$\begin{aligned} v_0=\langle v_0,\varphi _1\rangle \varphi _1+v_{0,1}, \end{aligned}$$

where \(v_{0,1}\in \varphi _1^\perp \) and we suppose that \(\left| \langle v_0,\varphi _1\rangle \right| < r_1\). If \(R\le r_1\), then \( \left\Vert v_0 \right\Vert ^2 \le r^2_1+R^2\le 2r^2_1\) and we can directly apply the first step of the proof with \(T_R=1\). Otherwise, we define \(t_{R}\) as

$$\begin{aligned} t_{R}:=\frac{1}{2\lambda _2}\log {\left( \frac{R^2}{r_1^2}\right) }, \end{aligned}$$
(5.2)

and in the time interval \([0,t_{R}]\) we take the control \(p\equiv 0\). Then, for all \(t\in [0,t_{R}]\), we have that

$$\begin{aligned} \left\Vert v(t) \right\Vert ^2\le & {} \left\Vert e^{-tA}\left( \langle v_0,\varphi _1\rangle \varphi _1+v_{0,1}\right) \right\Vert ^2\\\le & {} \left| \langle v_0,\varphi _1\rangle \right| ^2+e^{-2\lambda _2 t}\left\Vert v_{0,1} \right\Vert ^2 < r_1^2+e^{-2\lambda _2 t}R^2. \end{aligned}$$

In particular, for \(t=t_{R}\), it holds that \(\left\Vert v(t_{R}) \right\Vert ^2 < 2 r^2_1\).
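The definition (5.2) of \(t_R\) is precisely the time at which the decaying part of the above estimate is reduced to \(r_1^2\), giving the bound \(2r_1^2\); a quick check with illustrative values:

```python
import math

lam2, R, r1 = 2.0, 10.0, 0.1        # illustrative values with R > r1
t_R = math.log(R ** 2 / r1 ** 2) / (2 * lam2)   # definition (5.2)

# At t = t_R the decaying part e^{-2 lam2 t} R^2 has dropped to exactly r1^2,
# so the estimate gives ||v(t_R)||^2 < r1^2 + r1^2 = 2 r1^2.
assert abs(math.exp(-2 * lam2 * t_R) * R ** 2 - r1 ** 2) < 1e-12
```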

Now, we define \(T_{R}:=t_{R}+1\) and set \(v_1(0)=v(t_R)\). Thanks to the first step of the proof, there exists a control \(p_1\in L^2(0,1)\), such that \(v_1(1)=0\), where \(v_1\) is the solution of (2.4) on [0, 1] with p replaced by \(p_1\).

Then \(v(t)=v_1(t-t_R)\) solves (2.4) in the time interval \((t_{R},T_{R}]\) with the control \(p_1(t-t_{R})\) that steers the solution v to 0 at \(T_{R}\). \(\square \)

Proof of Theorem 1.4

We start with the case \(\lambda _1=0\). Let \(u_0\in X\) satisfy (1.16). Set \(v(t):=u(t)-\varphi _1\); then v satisfies (2.4) and, moreover, \(v_0:=v(0)=u_0-\varphi _1\) fulfills (5.1). Thus, by Lemma 5.1, problem (1.1) is exactly controllable to the first eigensolution \(\psi _1 \equiv \varphi _1\) in time \(T_{R}\).

Now, we consider the case \(\lambda _1>0\). As in the proof of Theorem 1.1, we introduce the variable \(z(t)=e^{\lambda _1t}u(t)\) that solves problem (3.29). For such a system, since the first eigenvalue of \(A_1\) is equal to 0, we have exact controllability to \(\varphi _1\) in time \(T_{R}\). Namely, \(z(T_{R})=\varphi _1\), which is equivalent to the exact controllability of u to \(\psi _1\):

$$\begin{aligned} z(T_{R})=\varphi _1\ \quad \Longleftrightarrow \quad e^{\lambda _1T_{R}}u(T_{R})=\varphi _1 \quad \Longleftrightarrow \quad u(T_{R})=\psi _1(T_{R}). \end{aligned}$$
(5.3)

The proof is thus complete. \(\square \)

Observe that the strategy of the proof uses the fact that the operator A is accretive in all directions \(\varphi _j\) with \(j\ge 1\) and strictly accretive for \(j>1\). Therefore, one cannot allow A to be strictly dissipative in all directions. Since the eigenvalues are counted in increasing order, the conclusion of Theorems 1.4 and 1.5 can only ensure global controllability to the first eigensolution.

The proof of Theorem 1.5 easily follows from Theorem 1.4.

Proof of Theorem 1.5

We assume (1.18). Suppose that \(\gamma :=\langle u_0,\varphi _1\rangle \ne 0\). We decompose \(u_0\) as \(u_0=\gamma \varphi _1+\zeta _1\), with \(\zeta _1:=u_0-\langle u_0,\varphi _1\rangle \varphi _1\in \varphi _1^\perp \) and define \({\tilde{u}}(t):=u(t)/\gamma \). Hence, \({\tilde{u}}\) solves

$$\begin{aligned} \left\{ \begin{array}{ll} {\tilde{u}}'(t)+A{\tilde{u}}(t)+p(t)B{\tilde{u}}(t)=0,&{} t>0\\ {\tilde{u}}(0)=\varphi _1+\tilde{\zeta _1}, \end{array}\right. \end{aligned}$$
(5.4)

where \(\tilde{\zeta _1}:=\zeta _1/\gamma \).

We apply Theorem 1.4 to (5.4) to deduce the existence of \(T_R>0\) such that \({\tilde{u}}(T_R)=\psi _1(T_R)\). Therefore, any solution of (1.1) with initial condition \(u_0\in X\) that does not vanish along the direction \(\varphi _1\) can be exactly controlled in time \(T_R\) to the trajectory \(\phi _1(\cdot )=\langle u_0,\varphi _1\rangle \psi _1(\cdot )\), where \(\phi _1\) is defined in (1.17).

Note that if \(u_0\in X\) satisfies both \(u_0\in \varphi _1^\perp \) and (1.18), then we have trivially that \(u_0\equiv 0\). We then choose \(p\equiv 0\), so that the solution of (1.1) remains constantly equal to \(\phi _1\equiv 0\). \(\square \)

6 Applications

In this section we present some examples of parabolic equations to which Theorem 1.1 can be applied. Hypotheses (1.3), (1.10) and (1.11) have been verified in [1, 16], to which we refer for more details. We observe that, thanks to [1, Remark 6.1], since the second order operators considered in the examples are accretive (\(\langle Ax,x\rangle \ge 0\), for all \(x\in D(A)\)), it suffices to prove the following gap condition

$$\begin{aligned} \exists \,\alpha >0\,:\,\sqrt{\lambda _{k+1}}-\sqrt{\lambda _k}\ge \alpha ,\quad \forall \,k\ge 1, \end{aligned}$$
(6.1)

which implies (1.10).

Furthermore, we note that the global results, Theorems 1.4 and 1.5, can be applied to all of the examples below. Note also that the list of examples is not exhaustive.

6.1 Diffusion equation with Dirichlet boundary conditions

Let \(I=(0,1)\) and \(X=L^2(0,1)\). Consider the following problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,x)-u_{xx}(t,x)+p(t)\mu (x)u(t,x)=0 &{} x\in I,t>0 \\ \\ u(t,0)=0,\,\,u(t,1)=0, &{} t>0\\ \\ u(0,x)=u_0(x) &{} x\in I. \end{array}\right. \end{aligned}$$
(6.2)

We denote by A the operator defined by

$$\begin{aligned} D(A)=H^2(I)\cap H^1_0(I),\quad A\varphi =-\frac{d^2\varphi }{dx^2}, \end{aligned}$$

and it can be checked that A satisfies (1.3). We denote by \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) and \(\{\varphi _k\}_{k\in {\mathbb {N}}^*}\) the families of eigenvalues and eigenfunctions of A, respectively:

$$\begin{aligned} \lambda _k=(k\pi )^2,\quad \varphi _k(x)=\sqrt{2}\sin (k\pi x),\quad \forall \, k\in {\mathbb {N}}^*. \end{aligned}$$

It is easy to see that (6.1) holds true [and so (1.10)]:

$$\begin{aligned} \sqrt{\lambda _{k+1}}-\sqrt{\lambda _k}=\pi ,\qquad \forall \, k\in {\mathbb {N}}^*. \end{aligned}$$
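Both the normalization of the eigenfunctions and the uniform gap (6.1) can be confirmed directly; a short sketch:

```python
import math

def norm_sq(k, steps=20000):
    # ||phi_k||^2 = int_0^1 (sqrt(2) sin(k pi x))^2 dx, via a midpoint rule.
    h = 1.0 / steps
    return sum(2.0 * math.sin(k * math.pi * (i + 0.5) * h) ** 2 * h
               for i in range(steps))

# The eigenfunctions are normalized in L^2(0,1) ...
assert all(abs(norm_sq(k) - 1.0) < 1e-6 for k in (1, 2, 5))
# ... and the gap of (6.1) equals pi for every k, since lambda_k = (k pi)^2.
assert all(abs(math.sqrt((k + 1) ** 2 * math.pi ** 2)
               - math.sqrt(k ** 2 * math.pi ** 2) - math.pi) < 1e-9
           for k in range(1, 100))
```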

Let \(B:X\rightarrow X\) be the operator

$$\begin{aligned} B\varphi =\mu \varphi \end{aligned}$$

with \(\mu \in H^3(I)\) such that

$$\begin{aligned} \mu '(1)\pm \mu '(0)\ne 0\quad \text{ and } \quad \langle \mu \varphi _j,\varphi _k\rangle \ne 0\quad \forall \, k \in {\mathbb {N}}^*. \end{aligned}$$
(6.3)

Observe that, for \(k\ne j\), integrating by parts, we find that

$$\begin{aligned} \begin{aligned} \langle \mu \varphi _j,\varphi _k\rangle&=2\int _0^1\mu (x)\sin (j\pi x)\sin (k\pi x)dx\\&=\int _0^1\mu (x)\left( \cos ((k-j)\pi x)-\cos ((k+j)\pi x)\right) dx\\&=\left[ \mu (x)\left( \frac{\sin ((k-j)\pi x)}{(k-j)\pi }-\frac{\sin ((k+j)\pi x)}{(k+j)\pi }\right) \right] ^{x=1}_{x=0}\\&\quad -\int _0^1\mu '(x)\left( \frac{\sin ((k-j)\pi x)}{(k-j)\pi }-\frac{\sin ((k+j)\pi x)}{(k+j)\pi }\right) dx\\&=\left[ \mu '(x)\left( \frac{\cos ((k-j)\pi x)}{(k-j)^2\pi ^2}-\frac{\cos ((k+j)\pi x)}{(k+j)^2\pi ^2}\right) \right] ^{x=1}_{x=0}\\&\quad -\int _0^1\mu ''(x)\left( \frac{\cos ((k-j)\pi x)}{(k-j)^2\pi ^2}-\frac{\cos ((k+j)\pi x)}{(k+j)^2\pi ^2}\right) dx\\&=\left( \mu '(1)(-1)^{k+j}-\mu '(0)\right) \frac{4kj}{(k^2-j^2)^2\pi ^2}\\&\quad +\int _0^1\mu '''(x)\left( \frac{\sin ((k-j)\pi x)}{(k-j)^3\pi ^3}-\frac{\sin ((k+j)\pi x)}{(k+j)^3\pi ^3}\right) dx. \end{aligned} \end{aligned}$$

Since the integral terms are, up to the decaying factors \((k\mp j)^{-3}\pi ^{-3}\), the \(k\)th Fourier coefficients of the integrable functions \(\mu '''(x)\cos (j\pi x)\) and \(\mu '''(x)\sin (j\pi x)\), they converge to zero as \(k\rightarrow \infty \) by the Riemann-Lebesgue lemma. Furthermore, using that

$$\begin{aligned} kj\ge \sqrt{|k^2-j^2|},\quad \forall \,k,j\in {\mathbb {N}}^* \end{aligned}$$

(which holds since, for \(k>j\), \(k^2j^2-(k^2-j^2)=k^2(j^2-1)+j^2\ge 0\), and symmetrically for \(j>k\)), we deduce that there exists \(b>0\) such that

$$\begin{aligned} \left| \lambda _k-\lambda _j\right| ^{3/2}|\langle \mu \varphi _j,\varphi _k\rangle |\ge b,\qquad \forall \, k\ne j, \end{aligned}$$

(see also [1, Sect. 6.1]). For instance, a suitable function that satisfies (6.3) is \(\mu (x)=x^2\): indeed, in this case

$$\begin{aligned} \langle \mu \varphi _j,\varphi _k\rangle =\left\{ \begin{array}{ll} \frac{8kj(-1)^{k+j}}{(k^2-j^2)^2\pi ^2},&{} k\ne j,\\ \\ \frac{2j^2\pi ^2-3}{6j^2\pi ^2},&{}k=j \end{array}\right. \end{aligned}$$

and we observe that \(\langle \mu \varphi _j,\varphi _j\rangle \ne 0\). More generally, we refer the reader to [2] for large classes of potentials \(\mu \) satisfying the required properties.

Therefore, problem (6.2) is controllable to the jth eigensolution \(\psi _j\) in any time \(T>0\) as long as \(u_0\in B_{R_T}(\varphi _j)\), with \(R_T>0\) a suitable constant, where \(\psi _j(t,x)=\sqrt{2}\sin (j\pi x)e^{-j^2\pi ^2t}\).
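The computation above lends itself to a quick numerical sanity check. The following Python sketch (an illustration only, assuming NumPy and SciPy are available; the target mode \(j=2\) is an arbitrary choice) evaluates \(\langle \mu \varphi _j,\varphi _k\rangle \) for \(\mu (x)=x^2\) by quadrature, compares it with the boundary term of the integration-by-parts formula (the Fourier remainder vanishes here because \(\mu '''=0\)), and verifies the lower bound \(\left| \lambda _k-\lambda _j\right| ^{3/2}|\langle \mu \varphi _j,\varphi _k\rangle |\ge b\).

```python
import numpy as np
from scipy.integrate import quad

mu = lambda x: x**2              # the potential chosen in the text: mu'(0) = 0, mu'(1) = 2
lam = lambda k: (k*np.pi)**2     # Dirichlet eigenvalues
j = 2                            # illustrative target mode

def coef(j, k):
    """<mu phi_j, phi_k> with phi_k(x) = sqrt(2) sin(k pi x), computed by quadrature."""
    val, _ = quad(lambda x: 2.0*mu(x)*np.sin(j*np.pi*x)*np.sin(k*np.pi*x),
                  0.0, 1.0, limit=200)
    return val

# diagonal coefficient: (2 j^2 pi^2 - 3) / (6 j^2 pi^2)
assert abs(coef(j, j) - (2*j**2*np.pi**2 - 3)/(6*j**2*np.pi**2)) < 1e-7

for k in range(1, 30):
    if k == j:
        continue
    # leading boundary term (mu'(1)(-1)^{k+j} - mu'(0)) 4kj / ((k^2-j^2)^2 pi^2);
    # the Fourier remainder is zero since mu''' = 0
    pred = (2.0*(-1)**(k + j) - 0.0) * 4*k*j / ((k**2 - j**2)**2 * np.pi**2)
    assert abs(coef(j, k) - pred) < 1e-7
    # quantitative non-degeneracy: |lambda_k - lambda_j|^{3/2} |<mu phi_j, phi_k>| >= b
    assert abs(lam(k) - lam(j))**1.5 * abs(coef(j, k)) >= 8*np.pi - 1e-6
```

Since \(kj\ge \sqrt{|k^2-j^2|}\), the product in the last assertion equals \(8\pi \,kj/\sqrt{|k^2-j^2|}\ge 8\pi \), which is the constant used in the check.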

6.2 Diffusion equation with Neumann boundary conditions

Let \(I=(0,1)\), \(X=L^2(I)\) and consider the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,x)-u_{xx}(t,x)+p(t)\mu (x)u(t,x)=0 &{} x\in I,t>0 \\ \\ u_x(t,0)=0,\,\,u_x(t,1)=0, &{}t>0\\ \\ u(0,x)=u_0(x) &{} x\in I. \end{array}\right. \end{aligned}$$
(6.4)

The operator A, defined by

$$\begin{aligned} D(A)=\{ \varphi \in H^2(0,1): \varphi '(0)=0,\,\,\varphi '(1)=0\},\quad A\varphi =-\frac{d^2\varphi }{dx^2} \end{aligned}$$

satisfies (1.3) and has the following eigenvalues and eigenfunctions

$$\begin{aligned} \begin{array}{lll} \lambda _0=0,&{}\varphi _0=1\\ \lambda _k=(k\pi )^2,&{} \varphi _k(x)=\sqrt{2}\cos (k\pi x),&{} \forall \, k\ge 1. \end{array} \end{aligned}$$

Thus, the gap condition (6.1) is fulfilled with \(\alpha =\pi \). Given \(j\in {\mathbb {N}}\), the jth eigensolution is the function \(\psi _j(t,x)=e^{-\lambda _j t}\varphi _j(x)\).

We define \(B:X\rightarrow X\) as the multiplication operator by a function \(\mu \in H^2(I)\), \(B\varphi =\mu \varphi \), such that

$$\begin{aligned} \mu '(1)\pm \mu '(0)\ne 0\quad \text{ and } \quad \langle \mu \varphi _j,\varphi _k\rangle \ne 0\quad \forall \, k \in {\mathbb {N}}. \end{aligned}$$
(6.5)

It can be proved, by reasoning as in the previous example, that there exists \(b>0\) such that

$$\begin{aligned} \left| \lambda _k-\lambda _j\right| |\langle \mu \varphi _j,\varphi _k\rangle |\ge b\quad \forall \, k\ne j \quad \text {and}\quad \langle \mu \varphi _j,\varphi _j\rangle \ne 0, \end{aligned}$$
(6.6)

(see also [1, Sect. 6.2]). For example, \(\mu (x)=x^2\) satisfies (6.6). Indeed, it can be shown that

$$\begin{aligned} \langle \mu \varphi _0,\varphi _k\rangle =\left\{ \begin{array}{ll} \frac{2\sqrt{2}(-1)^{k}}{(k\pi )^2},&{}k\ge 1,\\ \\ \frac{1}{3},&{}k=0, \end{array}\right. \end{aligned}$$

and for \(j\ne 0\)

$$\begin{aligned} \langle \mu \varphi _j,\varphi _k\rangle =\left\{ \begin{array}{ll} \frac{4(-1)^{k+j}(k^2+j^2)}{(k^2-j^2)^2\pi ^2},&{}k\ne j,\\ \\ \frac{1}{3}+\frac{1}{2j^2\pi ^2},&{}k=j. \end{array}\right. \end{aligned}$$

Therefore, problem (6.4) is controllable to the jth eigensolution \(\psi _j\) in any time \(T>0\) as long as \(u_0\in B_{R_T}(\varphi _j)\), with \(R_T>0\) a suitable constant.
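As a sanity check of the closed-form coefficients above, one can compare them with direct quadrature. The following Python sketch (an illustration, assuming NumPy and SciPy; the index \(j=3\) is an arbitrary choice) does so for \(\mu (x)=x^2\):

```python
import numpy as np
from scipy.integrate import quad

# Neumann eigenfunctions on (0,1): phi_0 = 1 and phi_k = sqrt(2) cos(k pi x) for k >= 1
def phi(k, x):
    return 1.0 if k == 0 else np.sqrt(2.0) * np.cos(k*np.pi*x)

def coef(j, k):
    """<mu phi_j, phi_k> for mu(x) = x^2, computed by quadrature."""
    return quad(lambda x: x**2 * phi(j, x) * phi(k, x), 0.0, 1.0, limit=200)[0]

# coefficients against phi_0
assert abs(coef(0, 0) - 1.0/3.0) < 1e-7
for k in range(1, 25):
    assert abs(coef(0, k) - 2.0*np.sqrt(2.0)*(-1)**k/(k*np.pi)**2) < 1e-7

# coefficients against phi_j for j >= 1
j = 3
assert abs(coef(j, j) - (1.0/3.0 + 1.0/(2*j**2*np.pi**2))) < 1e-7
for k in range(1, 25):
    if k != j:
        pred = 4.0*(-1)**(k + j)*(k**2 + j**2) / ((k**2 - j**2)**2 * np.pi**2)
        assert abs(coef(j, k) - pred) < 1e-7
```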

6.3 Variable coefficient parabolic equation

Let \(I=(0,1)\), \(X=L^2(I)\) and consider the problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,x)-((1+x)^2u_x(t,x))_x+p(t)\mu (x)u(t,x)=0&{}x\in I,t>0\\ \\ u(t,0)=0,\quad u(t,1)=0,&{}t>0\\ \\ u(0,x)=u_0(x)&{}x\in I. \end{array} \right. \end{aligned}$$
(6.7)

We denote by \(A:D(A)\subset X\rightarrow X\) the following operator

$$\begin{aligned} D(A)=H^2\cap H^1_0(I),\qquad A\varphi =-((1+x)^2\varphi _x)_x. \end{aligned}$$

It can be checked that A satisfies (1.3) and that the eigenvalues and eigenfunctions have the following expression

$$\begin{aligned} \lambda _k=\frac{1}{4}+\left( \frac{k\pi }{\ln 2}\right) ^2,\qquad \varphi _k=\sqrt{\frac{2}{\ln 2}}(1+x)^{-1/2}\sin \left( \frac{k\pi }{\ln 2 }\ln (1+x)\right) . \end{aligned}$$

Furthermore, \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) verifies the gap condition (6.1): the differences \(\sqrt{\lambda _{k+1}}-\sqrt{\lambda _k}\) increase monotonically towards \(\pi /\ln 2\), so (6.1) holds with \(\alpha =\sqrt{\lambda _2}-\sqrt{\lambda _1}>0\).

We fix \(j\in {\mathbb {N}}^*\) and define the operator \(B:X\rightarrow X\) by \(B\varphi =\mu \varphi \), where \(\mu \in H^2(I)\) is such that

$$\begin{aligned} 2\mu '(1)\pm \mu '(0)\ne 0,\quad \text{ and }\quad \langle \mu \varphi _j,\varphi _k\rangle \ne 0\quad \forall \, k \in {\mathbb {N}}^*. \end{aligned}$$
(6.8)

Hence, thanks to (6.8), it is possible to show that (1.11) is fulfilled with \(q=3/2\) (see [1, Sect. 6.3]). For instance, when \(j=1\), a suitable function \(\mu \) satisfying (6.8) is \(\mu (x)=x\); see [1] for the verification.

Thus, from Theorem 1.1, we deduce that, for any \(T>0\), system (6.7) is controllable to the jth eigensolution if the initial condition \(u_0\) is close enough to \(\varphi _j\).
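The eigenpairs given above can be verified numerically. The following Python sketch (an illustration, assuming NumPy and SciPy; grid and step sizes are arbitrary choices) checks by finite differences that \(-((1+x)^2\varphi _k')'=\lambda _k\varphi _k\), checks orthonormality by quadrature, and confirms that the square-root gaps stay above a positive constant:

```python
import numpy as np
from scipy.integrate import quad

ln2 = np.log(2.0)
lam = lambda k: 0.25 + (k*np.pi/ln2)**2
phi = lambda k, x: np.sqrt(2.0/ln2) * (1.0 + x)**(-0.5) \
                   * np.sin(k*np.pi*np.log(1.0 + x)/ln2)

# eigenvalue equation, checked with flux-form finite differences on interior points
h = 1e-5
x = np.linspace(0.05, 0.95, 181)
for k in (1, 2, 3):
    flux = lambda y: (1.0 + y)**2 * (phi(k, y + h/2) - phi(k, y - h/2)) / h
    residual = -(flux(x + h/2) - flux(x - h/2)) / h - lam(k)*phi(k, x)
    assert np.max(np.abs(residual)) < 1e-3 * lam(k)

# orthonormality in L^2(0,1)
for j in (1, 2):
    for k in (1, 2, 3):
        val = quad(lambda x: phi(j, x)*phi(k, x), 0.0, 1.0, limit=200)[0]
        assert abs(val - (1.0 if j == k else 0.0)) < 1e-7

# square-root gaps: increasing in k, bounded below by the first gap (~4.519)
gaps = np.diff(np.sqrt([lam(k) for k in range(1, 11)]))
assert np.all(gaps > 4.5) and np.all(np.diff(gaps) > 0)
```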

6.4 Diffusion equation in a 3D ball with radial data

In this example, we study the controllability of an evolution equation in the three-dimensional unit ball \(B^3\) for radial data. The bilinear control problem is the following:

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,r)-\Delta u(t,r)+p(t)\mu (r)u(t,r)=0 &{} r\in [0,1], t>0 \\ \\ u(t,1)=0,&{}t>0\\ \\ u(0,r)=u_0(r) &{} r\in [0,1] \end{array}\right. \end{aligned}$$
(6.9)

where the Laplacian in polar coordinates for radial data is given by the following expression

$$\begin{aligned} \Delta \varphi (r)=\partial ^2_r \varphi (r)+\frac{2}{r}\partial _r\varphi (r). \end{aligned}$$

The function \(\mu \) is also radial and belongs to the space \(H^3_r(B^3)\), where the spaces \(H^k_r(B^3)\) are defined as follows:

$$\begin{aligned} X:=L^2_{r}(B^3)=\left\{ \varphi \in L^2(B^3)\,|\, \exists \psi :\mathbb {R}\rightarrow \mathbb {R}, \varphi (x)=\psi (|x|)\right\} \end{aligned}$$
$$\begin{aligned} H^k_r(B^3):=H^k(B^3)\cap L^2_{r}(B^3) . \end{aligned}$$

The domain of the Dirichlet Laplacian \(A:=-\Delta \) in X is \(D(A)=H^2_{r}\cap H^1_0(B^3)\). We observe that A satisfies hypothesis (1.3). We denote by \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) and \(\{\varphi _k\}_{k\in {\mathbb {N}}^*}\) the families of eigenvalues and eigenvectors of A, \(A\varphi _k=\lambda _k\varphi _k\), namely

$$\begin{aligned} \varphi _k=\frac{\sin (k\pi r)}{\sqrt{2\pi }r},\qquad \lambda _k=(k\pi )^2 \end{aligned}$$
(6.10)

\(\forall \, k\in {\mathbb {N}}^*\), see [20, Sect. 8.14]. Since the eigenvalues of A coincide with those of the one-dimensional Dirichlet Laplacian, (6.1) is satisfied, as we have seen in Sect. 6.1.
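The explicit eigenpairs (6.10) can be checked numerically. A short Python sketch (an illustration, assuming NumPy and SciPy) verifies orthonormality in \(L^2_r(B^3)\), where the volume element for radial functions is \(4\pi r^2\,dr\), and the eigenvalue equation via the identity \(\Delta \varphi =\frac{1}{r}(r\varphi )''\) for radial data:

```python
import numpy as np
from scipy.integrate import quad

phi = lambda k, r: np.sin(k*np.pi*r) / (np.sqrt(2.0*np.pi) * r)   # radial eigenfunctions
lam = lambda k: (k*np.pi)**2                                      # eigenvalues

# orthonormality in L^2_r(B^3): integrate against the volume element 4 pi r^2 dr
for j in (1, 2):
    for k in (1, 2, 3):
        val = quad(lambda r: 4*np.pi*r**2 * phi(j, r) * phi(k, r),
                   0.0, 1.0, limit=200)[0]
        assert abs(val - (1.0 if j == k else 0.0)) < 1e-7

# eigenvalue equation: Delta phi = (1/r)(r phi)'' for radial data and r phi_k is a
# multiple of sin(k pi r), so -Delta phi_k = (k pi)^2 phi_k; finite-difference spot check
h = 1e-5
r = np.linspace(0.1, 0.9, 81)
for k in (1, 2):
    d2 = (phi(k, r + h) - 2*phi(k, r) + phi(k, r - h)) / h**2
    d1 = (phi(k, r + h) - phi(k, r - h)) / (2*h)
    residual = -(d2 + (2.0/r)*d1) - lam(k)*phi(k, r)
    assert np.max(np.abs(residual)) < 1e-3
```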

Fix \(j\in {\mathbb {N}}^*\) and let \(B:X\rightarrow X\) be the multiplication operator \(Bu(t,r)=\mu (r)u(t,r)\), with \(\mu \) such that

$$\begin{aligned} \mu '(1)\pm \mu '(0)\ne 0,\quad \text{ and }\quad \langle \mu \varphi _j,\varphi _k\rangle \ne 0\quad \forall \, k\in {\mathbb {N}}^*. \end{aligned}$$
(6.11)

Then, it can be proved that

$$\begin{aligned} \left| \lambda _k-\lambda _j\right| ^{3/2}|\langle \mu \varphi _j,\varphi _k\rangle |\ge b\quad \forall \, k\ne j\quad \text {and}\quad \langle \mu \varphi _j,\varphi _j\rangle \ne 0, \end{aligned}$$
(6.12)

with b a positive constant (see [1, Sect. 6.4]). For instance, \(\mu (r)=r^2\) verifies (6.11) and (6.12):

$$\begin{aligned} \langle B\varphi _j,\varphi _k\rangle =\left\{ \begin{array}{ll} \frac{8(-1)^{k+j}kj}{(k^2-j^2)^2\pi ^2},&{}k\ne j,\\ \\ \frac{2j^2\pi ^2-3}{6j^2\pi ^2},&{}k=j. \end{array}\right. \end{aligned}$$

Therefore, by applying Theorem 1.1, we conclude that for any \(T>0\) there exists a suitable constant \(R_T>0\) such that, if \(u_0\in B_{R_T}(\varphi _j)\), problem (6.9) is exactly controllable to the jth eigensolution \(\psi _j\) in time T.

6.5 Degenerate parabolic equation

In this last section we want to address an example of a control problem for a degenerate evolution equation of the form

$$\begin{aligned} \left\{ \begin{array}{ll} u_t-\left( x^{\gamma } u_x\right) _x+p(t)x^{2-\gamma }u=0,&{} (t,x)\in (0,+\infty )\times (0,1)\\ \\ u(t,1)=0,\quad \left\{ \begin{array}{ll} u(t,0)=0,&{} \text{ if } \gamma \in [0,1),\\ \\ \left( x^{\gamma }u_x\right) (t,0)=0,&{} \text{ if } \gamma \in [1,3/2),\end{array}\right. \\ \\ u(0,x)=u_0(x). \end{array} \right. \end{aligned}$$
(6.13)

where \(\gamma \in [0,3/2)\) measures the strength of the degeneracy, and to which Theorem 1.1 applies.

If \(\gamma \in [0,1)\), problem (6.13) is called weakly degenerate, and the natural setting for well-posedness is given by the following weighted Sobolev spaces: for \(I=(0,1)\) and \(X=L^2(I)\), we define

$$\begin{aligned} \begin{array}{l} H^1_{\gamma }(I)=\left\{ u\in X: u \text{ is } \text{ absolutely } \text{ continuous } \text{ on } [0,1], x^{\gamma /2}u_x\in X\right\} \\ \\ H^1_{\gamma ,0}(I)=\left\{ u\in H^1_\gamma (I):\,\, u(0)=0,\,\,u(1)=0\right\} \\ \\ H^2_\gamma (I)=\left\{ u\in H^1_\gamma (I): x^{\gamma }u_x\in H^1(I)\right\} . \end{array} \end{aligned}$$

We denote by \(A:D(A)\subset X\rightarrow X\) the linear degenerate second order operator

$$\begin{aligned} \left\{ \begin{array}{l} \forall u\in D(A),\quad Au:=-(x^{\gamma }u_x)_x,\\ \\ D(A):=\{u\in H^1_{\gamma ,0}(I),\,\, x^{\gamma }u_x\in H^1(I)\}. \end{array}\right. \end{aligned}$$
(6.14)

It is possible to prove that A satisfies (1.3) (see, for instance, [11]). Furthermore, denoting by \(\{\lambda _k\}_{k\in {\mathbb {N}}^*}\) the eigenvalues and by \(\{\varphi _k\}_{k\in {\mathbb {N}}^*}\) the corresponding eigenfunctions, it turns out that the gap condition (6.1) is fulfilled with \(\alpha =\frac{7}{16}\pi \) (see [19, p. 135]).
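Although the eigenpairs of the degenerate operator are not elementary (they involve Bessel functions), the gap condition can be explored numerically. The following Python sketch (an illustration only, not the argument of [19]; the value \(\gamma =1/2\) and the grid size are arbitrary choices) discretizes \(Au=-(x^{\gamma }u_x)_x\) by flux-form finite differences and checks (6.1) with \(\alpha =\frac{7}{16}\pi \) on the first few eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Flux-form finite-difference discretization of A u = -(x^gamma u_x)_x on (0,1)
# with Dirichlet conditions at both endpoints (weakly degenerate case)
gamma, N = 0.5, 4000
h = 1.0 / N
x = h * np.arange(1, N)              # interior grid nodes
ap = (x + h/2)**gamma                # diffusion coefficient at the right cell faces
am = (x - h/2)**gamma                # ... and at the left cell faces
d = (ap + am) / h**2                 # diagonal of the symmetric stiffness matrix
e = -ap[:-1] / h**2                  # sub/super-diagonal

# first ten discrete eigenvalues
w = eigh_tridiagonal(d, e, eigvals_only=True, select='i', select_range=(0, 9))

assert np.all(w > 0)
gaps = np.diff(np.sqrt(w))
assert np.all(gaps >= 7*np.pi/16)    # gap condition (6.1) with alpha = 7 pi / 16
```

For \(\gamma =1/2\) the square-root gaps computed this way are close to \(\frac{3}{4}\pi \), comfortably above the threshold \(\frac{7}{16}\pi \).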

If \(\gamma \in [1,2)\), problem (6.13) is called strongly degenerate, and the corresponding weighted Sobolev spaces are defined as follows: given \(I=(0,1)\) and \(X=L^2(I)\), we define

$$\begin{aligned} \begin{array}{l} H^1_{\gamma }(I)=\left\{ u\in X: u \text{ is } \text{ absolutely } \text{ continuous } \text{ on } (0,1],\,\, x^{\gamma /2}u_x\in X\right\} \\ \\ H^1_{\gamma ,0}(I):=\left\{ u\in H^1_{\gamma }(I):\,\,u(1)=0\right\} ,\\ \\ H^2_\gamma (I)=\left\{ u\in H^1_\gamma (I):\,\, x^{\gamma }u_x\in H^1(I)\right\} . \end{array} \end{aligned}$$

In this case the operator \(A:D(A)\subset X\rightarrow X\) is defined by

$$\begin{aligned} \left\{ \begin{array}{l} \forall u\in D(A),\quad Au:=-(x^{\gamma }u_x)_x,\\ \\ D(A):=\left\{ u\in H^1_{\gamma ,0}(I):\,\, x^{\gamma }u_x\in H^1(I)\right\} \\ \qquad \,\,\,\,\,=\left\{ u\in X:\,\,u \text{ is } \text{ absolutely } \text{ continuous } \text{ in } \text{(0,1] } ,\,\, x^{\gamma }u\in H^1_0(I),\right. \\ \qquad \qquad \,\,\,\left. x^{\gamma }u_x\in H^1(I) \text{ and } (x^{\gamma }u_x)(0)=0\right\} \end{array}\right. \end{aligned}$$

and it has been proved that (1.3) holds true (see, for instance [14]) and that (6.1) is satisfied for \(\alpha =\frac{\pi }{2}\) (see [19]).

We fix \(j=1\) and, for all \(\gamma \in [0,3/2)\), define the linear operator \(B:X\rightarrow X\) by \(Bu(t,x)=x^{2-\gamma }u(t,x)\). In [16, Proof of Theorem 2.2] we proved that there exists a constant \(b>0\) such that

$$\begin{aligned} \left| \lambda _k-\lambda _1\right| ^{3/2}|\langle B\varphi _1,\varphi _k\rangle |\ge b\quad \forall \, k>1\quad \text {and}\quad \langle B\varphi _1,\varphi _1\rangle \ne 0. \end{aligned}$$

Finally, by applying Theorem 1.1, we obtain the exact controllability of problem (6.13) to the first eigensolution, in both the weakly and the strongly degenerate case.