Abstract
In this paper, we obtain a maximum principle for controlled fractional Fokker-Planck equations. We prove the well-posedness of a stochastic differential equation driven by an α-stable process and derive estimates of its solutions by fractional calculus. A linear-quadratic example is given at the end of the paper.
1 Introduction
The real world is full of uncertainty, and stochastic models capture features that deterministic ones miss. Stochastic differential equations driven by Brownian motion have therefore been studied extensively. Despite their many advantages, models based on Brownian diffusion often fail to describe dynamical processes that exhibit long-range correlations, lack of scale invariance, discontinuous trajectories, and similar anomalies [1, 2]. To capture such anomalous properties of physical systems, one introduces fractional Fokker-Planck equations.
Recently, Magdziarz [3] and Lv et al. [4] obtained the stochastic representation of the fractional Fokker-Planck equation with time- and space-dependent drift and diffusion coefficients. They showed that the corresponding stochastic process is driven by an inverse α-stable subordinator and a Brownian motion. The fractional Fokker-Planck equation can be described by the following stochastic process (see [4]):
with initial value \(x(0)=\xi\). Here, the inverse α-stable subordinator \(S_{\alpha}(t)\) is independent of the Brownian motion \(B(\tau)\); we define \(S_{\alpha}(t)\) in Section 2.
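The displayed representation referred to above is missing from this version. Based on the surrounding text and [4], it plausibly takes the following form; the argument structure of b and σ here is my reading, not verbatim:

```latex
% Hedged reconstruction of the missing display:
dx(t) = b\bigl(t, x(t)\bigr)\,dS_{\alpha}(t)
      + \sigma\bigl(t, x(t)\bigr)\,dB\bigl(S_{\alpha}(t)\bigr).
```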
In order to make decisions (controls) based on the most up-to-date information, decision makers (controllers) must select, among all admissible decisions, one that achieves the best expected outcome relative to their goals. Such optimization problems are called stochastic optimal control problems, and they arise in a wide range of physical, biological, economic, and management systems.
Generally, one solves an optimal control problem via the Pontryagin maximum principle. Starting with [5–8], backward stochastic differential equations have been used to describe the necessary conditions that an optimal control must satisfy; see also [9–11] and the references therein. In this paper, the α-stable processes involve fractional calculus: we use fractional derivatives of Riemann-Liouville type to prove the well-posedness of the equations and to derive estimates.
In this paper, we consider an optimal control problem for fractional Fokker-Planck equations. This problem has a wide range of physical applications, including surface growth and transport of a fluid in porous media [12], two-dimensional rotating flows [13], diffusion on fractals [14], and multidisciplinary areas such as the behavior of CTAB micelles dissolved in salted water [15] and econophysics [16].
This paper is organized as follows. In Section 2 we establish the well-posedness of the stochastic differential equation driven by an α-stable process via Picard iteration and give some estimates of the solution of the controlled fractional Fokker-Planck equation. In Section 3, we establish necessary and sufficient conditions for optimal pairs. In Section 4, a linear-quadratic optimal control problem is posed, a Riccati differential equation is derived, and an explicit expression for the optimal control is obtained. Section 5 concludes.
2 Preliminaries
2.1 Statement of the problem
Let \((\Omega, \mathcal{F}, P)\) be a probability space with filtration \(\mathcal{F}_{t}\). The controlled stochastic system is described as follows:
where \(b(t, x, u):[0, T]\times\mathbb{R}^{n}\times\mathcal{U}[0, T]\rightarrow \mathbb{R}^{n}\) and \(\sigma(t, x, u):[0, T]\times\mathbb{R}^{n}\times\mathcal{U}[0, T]\rightarrow \mathbb{R}^{n}\) are given functionals, ξ is the initial value, \(u(t)\) is the control process, and \(x(t)\) is the corresponding state process. The inverse α-stable subordinator is defined in the following way:
where \(U_{\alpha}(\tau)\) is a strictly increasing α-stable Lévy process. \(U_{\alpha}\) is a pure-jump process whose Laplace transform is given by \(\mathbb{E}(e^{-kU_{\alpha}(\tau)})=e^{-\tau k^{\alpha}}\), \(0<\alpha<1\). For every jump of \(U_{\alpha}(\tau)\), there is a corresponding flat period of its inverse \(S_{\alpha}(t)\).
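Not part of the original paper: a small numerical sketch of \(S_{\alpha}(t)\), useful for building intuition about the flat periods mentioned above. It simulates \(U_{\alpha}\) on a τ-grid (increments scale like \(\Delta\tau^{1/\alpha}\)) using the Kanter/Chambers-Mallows-Stuck sampler for the positive stable law, a scheme standard in the subdiffusion-simulation literature but not taken from this paper; all function names and parameters are illustrative.

```python
import math
import random
from bisect import bisect_right

def sample_positive_stable(alpha, rng):
    """Kanter/CMS sampler for the positive stable law with
    E[exp(-k X)] = exp(-k**alpha), 0 < alpha < 1."""
    u = rng.uniform(0.0, math.pi)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)) * \
           (math.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha)

def inverse_subordinator_path(alpha, t_max, dtau, seed=0):
    """Simulate U_alpha on a tau-grid (increments scale like dtau**(1/alpha)),
    then invert: S_alpha(t) = inf{tau : U_alpha(tau) > t}."""
    rng = random.Random(seed)
    taus, U = [0.0], [0.0]
    while U[-1] <= t_max:
        taus.append(taus[-1] + dtau)
        U.append(U[-1] + dtau ** (1.0 / alpha) * sample_positive_stable(alpha, rng))
    def S(t):
        i = bisect_right(U, t)          # first grid point with U_alpha(tau) > t
        return taus[min(i, len(taus) - 1)]
    return S

S_alpha = inverse_subordinator_path(alpha=0.7, t_max=1.0, dtau=1e-3)
values = [S_alpha(k / 10.0) for k in range(11)]
```

Because every jump of \(U_{\alpha}\) becomes a flat stretch of its inverse, the sampled path of \(S_{\alpha}\) is nondecreasing with visible plateaus.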
The space of admissible controls is defined as
The cost functional is
where \(l(t, x, u):[0, T]\times\mathbb{R}^{n}\times\mathcal{U}[0, T]\rightarrow \mathbb{R}\) and \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) are given continuously differentiable functionals. We introduce the following basic assumptions, which will be in force throughout the paper.
-
(H1)
b, σ, l, h are continuously differentiable with respect to x. There exists a constant \(L_{1} > 0\) such that, for \(\varphi(t, x, u)=b(t, x, u)\), \(\sigma(t, x, u)\), we have:
-
1.
\(|\varphi(t, x, u)-\varphi(t, \hat{x}, \hat{u})| \leq L_{1}(|x-\hat{x}|+|u-\hat{u}|)\), \(\forall t\in[0, T]\), \(x,\hat{x}\in\mathbb{R}^{n}\), \(u, \hat{u}\in\mathcal{U}[0, T]\);
-
2.
\(|\varphi(t, x, u)|\leq C(1+|x|)\), \(x\in\mathbb{R}^{n}\), \(t \in[0, T]\).
-
(H2)
The maps b, σ, l, h are \(C^{2}\) in x with derivatives bounded by a constant M. There exists a constant \(L_{2} > 0\) such that, for \(\varphi(t, x, u)=b(t, x, u)\), \(\sigma(t, x, u)\), we have
$$\begin{aligned}& \bigl\vert \varphi_{x}(t, x, u)-\varphi_{x}(t, \hat{x}, \hat{u})\bigr\vert \leq L_{2}\bigl(\vert x-\hat{x}\vert +|u- \hat{u}|\bigr), \\& \quad \forall t\in[0, T], x,\hat{x}\in\mathbb {R}^{n}, u, \hat{u}\in\mathcal{U}[0, T]. \end{aligned}$$
Then we can pose the following optimal control problem.
Problem (A)
Find a pair \((x^{*}(t), u^{*}(t))\in\mathbb {R}^{n}\times\mathcal{U}[0, T]\) such that
Now, we introduce the variational equation of (1),
and the adjoint equation of (1), respectively,
The Hamiltonian of our optimal control problem is obtained as follows:
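The displays for the variational equation, the adjoint equation, and the Hamiltonian are missing from this version. Based on the standard Pontryagin framework for a state equation driven by \(dS_{\alpha}(t)\) and \(dB(S_{\alpha}(t))\), plausible reconstructions (not verbatim) are:

```latex
% Variational equation (hedged reconstruction):
d\hat{x}(t) = \bigl[b_{x}\hat{x}(t) + b_{u}\bigl(v(t)-u^{*}(t)\bigr)\bigr]\,dS_{\alpha}(t)
            + \bigl[\sigma_{x}\hat{x}(t) + \sigma_{u}\bigl(v(t)-u^{*}(t)\bigr)\bigr]\,dB\bigl(S_{\alpha}(t)\bigr),
\qquad \hat{x}(0)=0;
% Adjoint equation (hedged reconstruction):
dy(t) = -\bigl[b_{x}^{T}y(t) + \sigma_{x}^{T}z(t) + l_{x}\bigr]\,dS_{\alpha}(t)
      + z(t)\,dB\bigl(S_{\alpha}(t)\bigr), \qquad y(T) = h_{x}\bigl(x^{*}(T)\bigr);
% Hamiltonian (hedged reconstruction):
H(t,x,u,y,z) = \bigl\langle y, b(t,x,u)\bigr\rangle
             + \bigl\langle z, \sigma(t,x,u)\bigr\rangle + l(t,x,u).
```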
2.2 Well-posedness of the problem
To derive our maximum principle, we need the following results.
Proposition 2.1
(Itô formula; see [17, Theorem 2.4])
Suppose that \(x(\cdot)\) has a stochastic differential
for \(F\in\mathbb{L}^{1}(0,T)\), \(G\in\mathbb{L}^{2}(0,T)\). Assume \(u:\mathbb{R}\times[0,T]\rightarrow\mathbb{R}\) is continuous and that \(\frac{\partial u}{\partial t}\), \(\frac{\partial u}{\partial x}\), \(\frac {\partial^{2} u}{\partial x^{2}}\) exist and are continuous. Set
Then Y has the stochastic differential equation
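The displays in Proposition 2.1 are missing here. In the classical form of the Itô formula they read as follows; my understanding is that this paper applies the formula with \(dS_{\alpha}(t)\) in place of \(dt\) and \(dB(S_{\alpha}(t))\) in place of \(dB(t)\):

```latex
% Classical statement (hedged reconstruction of the missing displays):
dx(t) = F\,dt + G\,dB(t), \qquad Y(t) := u\bigl(x(t), t\bigr),
\\
dY = \Bigl(\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x}\,F
    + \frac{1}{2}\,\frac{\partial^{2} u}{\partial x^{2}}\,G^{2}\Bigr)dt
    + \frac{\partial u}{\partial x}\,G\,dB(t).
```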
Lemma 2.1
(See [4])
Let \(S_{\alpha}(t)\) be the inverse α-stable subordinator and \(g(t)\) an integrable function. Then
Lemma 2.2
(See [4])
The following equation holds for any continuous function \(f(t)\):
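The display in Lemma 2.2 is missing from this version. Consistent with the operator defined just below and with the computation in Remark 2.1, it presumably states:

```latex
\int_{0}^{t} f(s)\,dS_{\alpha}(s) = \int_{0}^{t} D^{1-\alpha}_{s} f(s)\,ds .
```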
Here the operator \(D^{1-\alpha}_{t}f(t)=\frac{1}{\Gamma(\alpha)}\frac {\partial}{\partial t}\int_{0}^{t}(t-s)^{\alpha-1}f(s)\, ds\) is the fractional derivative of Riemann-Liouville type. In particular, the Riemann-Liouville derivative of a constant C is not zero: \(D^{1-\alpha}_{t}C=\frac{t^{\alpha-1}}{\Gamma(\alpha)}C\).
Remark 2.1
We get \(\int^{t}_{0}1\, dS_{\alpha}(s)=\int^{t}_{0}D^{1-\alpha}_{s}1\, ds=\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}\), which is bounded on \([0, T]\) when \(\alpha\in(0, 1)\). We fix a constant P with \(\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}< P\) for all \(t\in[0, T]\).
Theorem 2.1
Let b and σ be measurable functions satisfying (H1) and (H2), and let \(T>0\) be a constant independent of \(X(0)\). Then the stochastic differential equation
has a unique solution \(X(t)\).
Proof
Define \(Y^{(0)}(t)=X(0)\) and construct the iterates \(Y^{(k)}(t)=Y^{(k)}(t)(\omega)\), \(k\geq1\), recursively via the equation
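The iteration display is missing from this version. The standard Picard scheme for equation (8), which the estimates below presuppose, is presumably:

```latex
Y^{(k+1)}(t) = X(0) + \int_{0}^{t} b\bigl(s, Y^{(k)}(s)\bigr)\,dS_{\alpha}(s)
             + \int_{0}^{t} \sigma\bigl(s, Y^{(k)}(s)\bigr)\,dB\bigl(S_{\alpha}(s)\bigr),
\qquad k \ge 0 .
```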
Then, for \(k\geq1\), \(t\leq T\), we have
and
where the constant \(A_{1}\) depends on L, P, and \(E|X_{0}|^{2}\). Hence we obtain
Here the constant \(A_{2}\) depends on L, P, and \(E|X_{0}|^{2}\). Choose t small enough that \(A_{2}t<\frac{1}{2}\), and let \(m\geq n \geq0\). Then
as \(m, n\rightarrow\infty\). Therefore \(\{Y^{(n)}(t)\}_{n=0}^{\infty}\) is a Cauchy sequence in \(L^{2}(0, T)\) and hence converges there. Define
Next, we prove that \(X(t)\) satisfies (8). For all n and \(t\in[0, T]\), we have
Then we get
Also
We conclude that for all \(t\in[0, T]\) we have
That is, \(X(t)\) satisfies (8).
Now we prove uniqueness. Let \(X_{1}(t)\) and \(X_{2}(t)\) be solutions of (8) with the same initial values. Then
From Lemmas 2.1 and 2.2, we get
By the Gronwall inequality, we conclude that
The uniqueness is proved. □
2.3 Some estimates of the solution
Let \(u^{*}\) and v be two admissible controls. For any \(\varepsilon\in[0, 1]\), we denote \(u^{\varepsilon}=u^{*}+\varepsilon(v-u^{*})\). Corresponding to \(u^{\varepsilon}\) and \(u^{*}\), there are two solutions \(x^{\varepsilon}(\cdot)\) and \(x^{*}(\cdot)\) of (1). That is,
Theorem 2.2
Let (H1)-(H2) hold. Then, for any \(K\geq1\),
Proof
We have
From Lemmas 2.1 and 2.2 and the Gronwall inequality, we get
where \(C_{P, L}\) is a constant that depends on P, L. This proves (10). Similarly, we can prove (11).
We set \(\eta(t)=x^{\varepsilon}(t)-x^{*}(t)-\hat{x}(t)\). Then
From Lemmas 2.1 and 2.2 and the Gronwall inequality, we get
where \(C_{1}=16(M^{2}+L^{2}C)\), \(C_{2}=(L^{2}C-M^{2})(v-\hat {u})^{2}+M^{2}(v-u^{*})^{2}\), \(M_{C_{1}, C_{2}}\) is a constant that depends on \(C_{1}\), \(C_{2}\). □
3 The maximum principle
Now, we give sufficient conditions for Problem (A).
Theorem 3.1
Let (H1) and (H2) hold. Let \((x^{*}(t), u^{*}(t))\) be an admissible pair, and let \((y(t), z(t))\) satisfy (5). Moreover, assume that the Hamiltonian \(H(t)\) and \(h(t)\) are convex, and
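The displayed optimality condition is missing from this version. In this framework it is presumably the pointwise minimum condition:

```latex
H\bigl(t, x^{*}(t), u^{*}(t), y(t), z(t)\bigr)
= \min_{u \in \mathcal{U}[0,T]} H\bigl(t, x^{*}(t), u, y(t), z(t)\bigr),
\qquad \text{a.e. } t \in [0,T],\ \text{a.s.}
```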
Then \(u^{*}(t)\) is an optimal control.
Proof
Fix \(u\in\mathcal{U}[0, T]\) with corresponding solution \(x=x^{(u)}\). Then
where
By the definition of H, we get
We use the convexity of \(h\) to obtain the inequality
Applying the Itô formula to \(h_{x}(\hat{x}(T))(x^{*}(T)-x(T))\) and taking the expectation, we get
Substituting the last equation into (16), we obtain
Since \(H(t)\) is convex, we get
Then \(u^{*}(t)\) is an optimal control. □
Next, we give the necessary conditions for the stochastic control problem.
Theorem 3.2
Assume that b and σ satisfy (H1) and (H2) and that \(u^{*}(t) \in\mathcal{U}[0, T]\) is an optimal control of (1)-(3). Then the solution \((y(t), z(t))\) of (5) satisfies
Proof
To treat the problem, we first expand the cost functional:
Let \((y(t), z(t))\) be the solution of (5). Then applying the differential chain rule to \(\langle y(t), \hat{x}(t)\rangle\), we have the following duality relation:
Combining (21) with (16) and by the optimality of \(u^{*}(t)\), we obtain
□
4 Application
In this section, we consider a linear-quadratic (LQ) optimal control problem as follows:
where \(A(\cdot)\), \(C(\cdot)\), \(D(\cdot)\), \(E(\cdot)\) are given matrix-valued deterministic functions and η is the initial value. The cost functional is
where \(Q(t)\), \(R(t)\), \(S(t)\) are positive-definite matrices and \(x^{T}(t)\) denotes the transpose of \(x(t)\).
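The displayed state equation (23) and cost functional (24) are missing from this version. Based on the coefficients named above and the Riccati equation derived below, they plausibly take the following form (my reconstruction, not verbatim; the factor \(\frac{1}{2}\) in the cost is an assumption):

```latex
% State equation (23), hedged reconstruction:
dx(t) = \bigl[A(t)x(t) + C(t)u(t)\bigr]\,dS_{\alpha}(t)
      + \bigl[D(t)x(t) + E(t)u(t)\bigr]\,dB\bigl(S_{\alpha}(t)\bigr), \qquad x(0)=\eta;
% Cost functional (24), hedged reconstruction:
J(u) = \frac{1}{2}\,\mathbb{E}\biggl[\int_{0}^{T}\bigl(x^{T}(t)Q(t)x(t)
      + u^{T}(t)R(t)u(t)\bigr)\,dS_{\alpha}(t) + x^{T}(T)S(T)x(T)\biggr].
```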
The optimal control of the LQ problem can be stated as follows.
Problem (B)
Find a pair \((x_{*}(t), u_{*}(t))\in\mathbb {R}^{n}\times\mathcal{U}[0, T]\) such that
We now derive the Riccati equation. We assume P is a semimartingale with the following decomposition:
Applying the Itô formula to \(x^{T}(t)P(t)x(t)\), we obtain
We denote
Taking expectations on both sides of (27), adding the result to (24), and using the completion-of-squares technique, we get
Now, if \((P, \Pi)\) satisfies the Riccati equation, i.e.
We set \(P(T)=S(T)\). Then we get the stochastic Riccati equation as follows:
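The display of (29) is missing from this version. Assuming the linear state equation and quadratic cost described above, and with \(K=R+E^{T}PE\) as in Theorem 4.1, a plausible form (my reconstruction, not verbatim) is:

```latex
\begin{aligned}
dP(t) ={}& -\Bigl[P A + A^{T} P + D^{T} P D + Q \\
         &\quad - \bigl(P C + D^{T} P E\bigr)\bigl(R + E^{T} P E\bigr)^{-1}
                 \bigl(C^{T} P + E^{T} P D\bigr)\Bigr]\,dS_{\alpha}(t)
          + \Pi(t)\,dB\bigl(S_{\alpha}(t)\bigr), \\
P(T) ={}& S(T).
\end{aligned}
```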
Theorem 4.1
If the stochastic Riccati equation (29) admits a solution, then the stochastic LQ problem (23)-(24) is well-posed.
Proof
Suppose \((P, \Pi)\) satisfies the Riccati equation (29) with \(K=R+E^{T}PE >0\). Then
Therefore, the stochastic LQ problem is well-posed. □
Remark 4.1
We see that if the Riccati equation (29) admits a solution \((P, \Pi)\), then the optimal feedback control would be
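Not from the paper: a minimal numerical sketch of the Riccati equation in the scalar case, under the assumption that the coefficients are deterministic (so the martingale part Π vanishes and (29) reduces to a backward ODE). The gain term matches the feedback form \(u_{*}(t) = -(R+E^{T}PE)^{-1}(C^{T}P+E^{T}PD)x_{*}(t)\) suggested by Remark 4.1, which is my reading of the missing display; all coefficient values are illustrative.

```python
def solve_riccati(A, C, D, E, Q, R, S_T, T=1.0, n=1000):
    """Backward Euler for the scalar Riccati ODE
       -dP/dt = 2*A*P + D*P*D + Q - (P*C + D*P*E)**2 / (R + E*P*E),
    i.e. a scalar, deterministic-coefficient reading of (29) with Pi = 0."""
    dt = T / n
    P = S_T                      # terminal condition P(T) = S(T)
    path = [P]
    for _ in range(n):
        K = R + E * P * E        # K = R + E^T P E from Theorem 4.1
        rhs = 2.0 * A * P + D * P * D + Q - (P * C + D * P * E) ** 2 / K
        P += dt * rhs            # step backward in time from T toward 0
        path.append(P)
    path.reverse()               # path[0] ~ P(0), path[-1] = P(T)
    return path

# Illustrative coefficients (not from the paper):
P_path = solve_riccati(A=0.5, C=1.0, D=0.2, E=0.3, Q=1.0, R=1.0, S_T=1.0)
```

With positive-definite (here: positive scalar) weights, the computed \(P(t)\) stays positive on \([0, T]\), consistent with the well-posedness claim of Theorem 4.1.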
5 Conclusion
In this paper, we have presented some results on controlled fractional Fokker-Planck equations. The well-posedness of the system was proved by Picard iteration, and estimates of the solution of the controlled system were given. Because some terms involve α-stable processes, we used fractional calculus to handle them. Necessary and sufficient conditions of Pontryagin type for optimal controls were proved. As an application, an LQ problem was solved.
References
Barkai, E, Metzler, R, Klafter, J: From continuous time random walks to the fractional Fokker-Planck equation. Phys. Rev. E 61, 132-138 (2000)
Shlesinger, MF, Zaslavsky, GM, Klafter, J: Strange kinetics. Nature 363, 31-37 (1993)
Magdziarz, M: Stochastic representation of subdiffusion processes with time-dependent drift. Stoch. Process. Appl. 119, 3238-3252 (2009)
Lv, L, Qiu, W, Ren, F: Fractional Fokker-Planck equation with space and time dependent drift and diffusion. J. Stat. Phys. 149, 619-628 (2012)
Bismut, JM: Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl. 44, 384-404 (1973)
Bismut, JM: An introductory approach to duality in optimal stochastic control. SIAM Rev. 20, 62-78 (1978)
Bismut, JM: Mécanique Aléatoire. Lecture Notes in Mathematics. Springer, Berlin (1981)
Peng, S: A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 28, 966-979 (1990)
Chen, SP, Li, XJ, Zhou, XY: Stochastic linear quadratic regulators with indefinite control weight costs. SIAM J. Control Optim. 36, 1685-1702 (1998)
Fleming, WH, Soner, HM: Controlled Markov Processes and Viscosity Solutions. Springer, New York (2006)
Yong, J, Zhou, X: Stochastic Control: Hamiltonian Systems and HJB Equations. Springer, New York (1999)
Spohn, H: Surface dynamics below the roughening transition. J. Phys. 3, 69-81 (1993)
Solomon, TH, Weeks, ER, Swinney, HL: Observation of anomalous diffusion and Lévy flights in a two-dimensional rotating flow. Phys. Rev. Lett. 71, 3975-3978 (1993)
Stephenson, J: Some non-linear diffusion equations and fractal diffusion. Physica A 222, 234-247 (1995)
Bouchaud, JP, Ott, A, Langevin, D, Urbach, W: Anomalous diffusion in elongated micelles and its Lévy flight interpretation. J. Phys. II 1, 1465-1482 (1991)
Plerou, V, Gopikrishnan, P, Nunes Amaral, LA, Gabaix, X, Stanley, HE: Economic fluctuations and anomalous diffusion. Phys. Rev. E 62, 3023-3026 (2000)
Zhang, YT, Chen, F: Stochastic stability of fractional Fokker-Planck equation. Physica A 410, 35-42 (2014)
Acknowledgements
The author expresses sincere thanks to Professor Yong Li for his comments and suggestions.
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
About this article
Cite this article
Wang, Q. Maximum principle for controlled fractional Fokker-Planck equations. Adv Differ Equ 2015, 45 (2015). https://doi.org/10.1186/s13662-015-0382-1