Abstract
In this paper, based on the viscosity approximation method and the regularized gradient-projection algorithm, we find a common element of the solution set of a constrained convex minimization problem and the set of zero points of a maximal monotone operator. In particular, the problem of finding zero points of a maximal monotone operator can be transformed into an equilibrium problem. Under suitable conditions, new strong convergence theorems are obtained, which are useful in nonlinear analysis and optimization. As an application, we apply our algorithm to solving the split feasibility problem and the constrained convex minimization problem in Hilbert spaces.
1 Introduction
Throughout this paper, let ℕ and ℝ be the sets of positive integers and real numbers, respectively. Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle \) and norm \(\|\cdot\|\). Let C be a nonempty, closed, and convex subset of H. We introduce some operators which will be used in this paper.
A mapping \(f:C\rightarrow C\) is a contraction if there exists \(k\in (0,1)\) such that \(\|f(x)-f(y)\|\leq k\|x-y\|\) for all \(x,y\in C\). A nonlinear operator \(T:C\rightarrow C\) is called nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) for all \(x,y\in C\). The set of fixed points of T is denoted by \(\operatorname{Fix}(T)\). A nonlinear mapping \(A:H\rightarrow H\) is called monotone if \(\langle x-y, Ax-Ay\rangle\geq0\) for all \(x,y\in H\).
Firstly, consider the following constrained convex minimization problem:
where \(g:C\rightarrow\mathbb{R}\) is a real-valued convex function. Assume that the constrained convex minimization problem (1.1) is solvable, and let U denote the solution set of (1.1). The gradient-projection algorithm (GPA) generates a sequence \(\{x_{n}\}_{n=0}^{\infty}\) according to the recursive formula
where the parameters \(\beta_{n}\) are real positive numbers, and \(P_{C}\) is the metric projection from H onto C. It is well known that the convergence of the algorithm (1.2) depends on the behavior of the gradient ∇g. If the gradient ∇g is only assumed to be inverse strongly monotone, then the sequence \(\{x_{n}\}\) defined by the algorithm (1.2) can only converge weakly to a minimizer of (1.1). If the gradient ∇g is Lipschitz continuous and strongly monotone, then the sequence generated by (1.2) converges strongly to a minimizer of (1.1) provided the parameters \(\beta_{n}\) satisfy appropriate conditions.
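As an illustration, the recursion (1.2) can be sketched in a few lines of code; the objective g, the constraint set C, and the step size below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def gradient_projection(grad_g, proj_C, x0, betas, n_iter=1000):
    """Gradient-projection algorithm (1.2): x_{n+1} = P_C(x_n - beta_n * grad g(x_n))."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        x = proj_C(x - betas(n) * grad_g(x))
    return x

# Illustrative instance (assumptions): g(x) = ||x - a||^2 / 2, C = [0, 1]^2.
a = np.array([2.0, -0.5])
grad_g = lambda x: x - a                 # gradient of g; Lipschitz with L = 1
proj_C = lambda x: np.clip(x, 0.0, 1.0)  # metric projection onto the box
x_star = gradient_projection(grad_g, proj_C, np.zeros(2), betas=lambda n: 1.0)
# the minimizer of g over C is P_C(a) = (1, 0)
```

With a constant step \(\beta_{n}\equiv1\in(0,2/L)\) and this strongly convex g, the iterates reach the minimizer after a single step; for merely inverse strongly monotone gradients, only weak convergence is guaranteed in general.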
As is well known, Xu [1] gave an averaged mapping approach to the gradient-projection method, and he constructed a counterexample showing that the sequence generated by the gradient-projection method need not converge strongly in an infinite-dimensional space. Moreover, he presented two modifications of the gradient-projection method which are shown to have strong convergence.
In 2011, motivated by Xu, Ceng et al. [2] proposed the following iterative algorithm:
where \(f:C\rightarrow H\) is an l-Lipschitzian mapping with a constant \(l>0\), and \(F:C\rightarrow H\) is a k-Lipschitzian and η-strongly monotone operator with constants \(k,\eta>0\). Let \(0<\mu<2\eta/k^{2}\), \(0\leq\gamma l<\tau\), and \(\tau=1-\sqrt{1-\mu(2\eta-\mu k^{2})}\). Let \(\{T_{n}\}\) and \(\{\theta_{n}\}\) satisfy \(\theta_{n}=\frac{2-\lambda_{n}L}{4}\), \(P_{C}(I-\lambda_{n}\nabla g)=\theta_{n}I+(1-\theta_{n})T_{n}\). Under suitable conditions, it is proved that the sequence \(\{x_{n}\}_{n=0}^{\infty}\) generated by (1.3) converges strongly to a minimizer \(x^{*}\) of (1.1). There are also many other methods for solving constrained convex minimization problems, such as the extragradient-projection method (see [3]).
However, the minimization problem (1.1) may have more than one solution, so regularization is needed to single out a particular solution of the minimization problem (1.1). Now, we consider the following regularized minimization problem:
where \(\alpha>0\) is the regularization parameter and g is a convex function whose gradient ∇g is \(1/L\)-ism. Then the RGPA generates a sequence \(\{x_{n}\}_{n=0}^{\infty}\) by the following recursive formula:
where the parameter \(\alpha_{n}>0\), β is a constant with \(0<\beta<2/L\), and \(P_{C}\) is the metric projection from H onto C. It is known that the sequence \(\{x_{n}\}_{n=0}^{\infty}\) generated by algorithm (1.4) converges weakly to a minimizer of (1.1) in the setting of infinite-dimensional spaces (see [4]). In 2013, however, Ceng et al. [5] established a strong convergence result via an implicit hybrid method with regularization for solving constrained convex minimization problems and fixed point problems in Hilbert spaces. This method is based on the RGPA.
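A hedged sketch of the regularized scheme (1.4), with illustrative data (the quadratic objective and the nonnegativity constraint are assumptions of this sketch, not from the paper):

```python
import numpy as np

def regularized_gpa(grad_g, proj_C, x0, beta, alphas, n_iter=5000):
    """RGPA (1.4): x_{n+1} = P_C(x_n - beta * (grad g(x_n) + alpha_n * x_n))."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        x = proj_C(x - beta * (grad_g(x) + alphas(n) * x))
    return x

# Illustrative data (assumptions): g(x) = ||A x - b||^2 / 2 on C = R^2_+.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 4.0])
grad_g = lambda x: A.T @ (A @ x - b)     # 1/L-ism with L = ||A||^2 = 4
proj_C = lambda x: np.maximum(x, 0.0)
x_star = regularized_gpa(grad_g, proj_C, np.zeros(2), beta=0.25,
                         alphas=lambda n: 1.0 / (n + 2))
# with alpha_n -> 0, the iterates approach the minimizer (1, 2)
```

Here \(\beta=0.25\in(0,2/L)\), and the vanishing regularization \(\alpha_{n}=1/(n+2)\) biases each step toward the minimum-norm solution.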
Secondly, consider the problem of zero points of maximal monotone operator:
where B is a mapping of H into \(2^{H}\), the effective domain of B is denoted by \(\operatorname{dom}B\) or \(D(B)\), that is, \(\operatorname{dom}B=\{x\in H: Bx\neq\emptyset\}\). A multi-valued mapping B is said to be a monotone operator on H if \(\langle x-y, u-v\rangle\geq0\) for all \(x,y\in \operatorname{dom}B\), \(u\in Bx\), \(v\in By\). A monotone operator B on H is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and \(r>0\), we may define a single-valued operator \(J_{r}=(I+rB)^{-1}: H\rightarrow \operatorname{dom}B\), which is called the resolvent of B for r. We denote by \(A_{r}=\frac{1}{r}(I-J_{r})\) the Yosida approximation of B for \(r>0\). We know from [6] that
Let B be a maximal monotone operator on H and define the set of zero points of B as follows:
It is well known that \(B^{-1}0=\operatorname{Fix}(J_{r})\) for all \(r>0\) and the resolvent \(J_{r}\) is firmly nonexpansive, i.e.,
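As a numerical sanity check (with an assumed operator, not one from the paper), the identity operator \(B=I\) on \(\mathbb{R}^{3}\) is maximal monotone, and its resolvent and Yosida approximation can be verified to satisfy the properties just listed:

```python
import numpy as np

# For B = I on R^3, the resolvent is J_r = (I + r B)^{-1} = I/(1+r).
r = 2.0
J_r = lambda x: x / (1.0 + r)       # resolvent of B = I for r > 0
A_r = lambda x: (x - J_r(x)) / r    # Yosida approximation of B

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)

# Firm nonexpansiveness: ||J_r x - J_r y||^2 <= <J_r x - J_r y, x - y>.
d = J_r(x) - J_r(y)
assert np.dot(d, d) <= np.dot(d, x - y) + 1e-12
# Here A_r x = B(J_r x), consistent with A_r x in B J_r x.
assert np.allclose(A_r(x), J_r(x))
# Fix(J_r) = B^{-1}0 = {0}: the origin is the only fixed point.
assert np.allclose(J_r(np.zeros(3)), np.zeros(3))
```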
Thirdly, consider the equilibrium problem (EP) which is to find \(z\in C\) such that
where F is a bifunction of \(C\times C\) into ℝ, and ℝ is the set of real numbers. We denote the set of solutions of EP by \(\operatorname{EP}(F)\). Given a mapping \(T:C\rightarrow H\), let \(F(x,y)=\langle Tx,y-x\rangle\) for all \(x,y\in C\); then \(z\in \operatorname{EP}(F)\) if and only if \(\langle Tz,y-z\rangle\geq0\) for all \(y\in C\), i.e., z is a solution of the variational inequality. Numerous problems in physics, optimization, and economics reduce to finding a solution of (1.8). Some methods have been proposed to solve the equilibrium problem; see, for instance, [7–11].
In 2000, Moudafi [12] introduced the viscosity approximation method for nonexpansive mappings, which was later extended in [13]. Let f be a contraction on H. Starting with an arbitrary initial point \(x_{0}\in H\), define a sequence \(\{x_{n}\}\) recursively by
we use \(\operatorname{Fix}(T)\) to denote the set of fixed points of the mapping T, i.e., \(\operatorname{Fix}(T)=\{x\in H: x=Tx\}\).
For finding the common solution of \(\operatorname{EP}(F)\) and a fixed point problem, Takahashi and Takahashi [8] introduced the following iterative scheme by the viscosity approximation method in a Hilbert space: \(x_{1}\in H\) and
where \(\{\alpha_{n}\}\subset(0,1)\) and \(\{\gamma_{n}\}\subset(0,\infty)\) satisfy some appropriate conditions. Further, they proved \(\{x_{n}\}\) and \(\{u_{n}\}\) converge strongly to \(z\in \operatorname{Fix}(T)\cap \operatorname{EP}(F)\), where \(z=P_{\operatorname{Fix}(T)\cap \operatorname{EP}(F)}f(z)\).
In 2010, Zeng et al. [14] proved a strong convergence theorem for finding a common element of the solution set EP of a generalized equilibrium problem and the set \(T^{-1}0\cap\widetilde{T}^{-1}0\) for two maximal monotone operators T and \(\widetilde{T}\) defined on a Banach space X: \(x_{0}\in X\) and
where \(H_{n}=\{z\in C: \phi(z,K_{r_{n}}y_{n})\leq(\alpha_{n}+\tilde{\alpha}_{n}-\alpha _{n}\tilde{\alpha}_{n})\phi(z,x_{0})+(1-\alpha_{n})(1-\tilde{\alpha }_{n})\phi(z,x_{n})\}\), \(W_{n}=\{z\in C: \langle x_{n}-z, Jx_{0}-Jx_{n}\rangle\geq0\}\), \(\tilde{x}_{n}=J^{-1}(\alpha_{n}Jx_{0}+(1-\alpha_{n})(\beta _{n}Jx_{n}+(1-\beta_{n})JJ_{r_{n}}x_{n}))\), \(y_{n}=J^{-1}(\tilde{\alpha}_{n}Jx_{0}+(1-\tilde{\alpha}_{n})(\tilde {\beta}_{n}J\tilde{x}_{n}+(1-\tilde{\beta}_{n})J\tilde{J}_{r_{n}}\tilde {x}_{n}))\), and \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{\tilde{\alpha}_{n}\}\), \(\{\tilde{\beta}_{n}\}\), \(\{r_{n}\}\) satisfy some appropriate conditions. Then the sequence \(\{x_{n}\}\) converges strongly to \(\prod_{T^{-1}0\cap\widetilde{T}^{-1}0\cap \mathrm{EP}}x_{0}\), where \(\prod_{T^{-1}0\cap\widetilde{T}^{-1}0\cap \mathrm{EP}}\) is the generalized projection of X onto \(T^{-1}0\cap\widetilde{T}^{-1}0\cap \mathrm{EP}\).
In 2012, Tian and Liu [15] introduced the following iterative method in a Hilbert space: \(x_{1}\in C\) and
where \(F: C\times C\rightarrow\mathbb{R}\), \(u_{n}=Q_{\beta_{n}}(x_{n})\), \(P_{C}(I-\lambda_{n}\nabla g)=\theta_{n}I+(1-\theta_{n})T_{n}\), \(\theta_{n}=\frac{2-\lambda_{n}L}{4}\), \(\{\lambda_{n}\}\subset(0,2/L)\), and \(\{\alpha_{n}\}\), \(\{r_{n}\}\), \(\{\theta_{n}\}\) satisfy appropriate conditions. Further, they proved that the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap \operatorname{EP}(F)\), which solves the variational inequality
This was the first time that the equilibrium problem and the constrained convex minimization problem were solved by a single composite iterative scheme.
Defining a set-valued mapping \(A_{F}\subset H\times H\) by
we find from [16] that \(A_{F}\) is a maximal monotone operator such that the domain is included in C; see Lemma 2.5 in Section 2 for more details.
In this paper, motivated and inspired by the above results, we introduce a new iterative algorithm: \(x_{1}\in C\) and
for finding an element of \(U\cap B^{-1}0\), where \(F: C\times C\rightarrow\mathbb{R}\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Under appropriate conditions, it is proved that the sequence \(\{x_{n}\}\) generated by (1.12) converges strongly to a point \(q\in U\cap B^{-1}0\), which solves the variational inequality
Equivalently, \(q=P_{U\cap B^{-1}0}f(q)\).
Finally, in Section 4, we apply the above algorithm to the split feasibility problem, and we give concrete examples and the numerical result in Section 5.
2 Preliminaries
In this section we introduce some properties and lemmas which will be useful in the proofs for the main results in the next section.
Throughout this paper, we always assume that C is a nonempty closed convex subset of a real Hilbert space H. We denote the strong convergence of \(\{x_{n}\}\) to \(x\in C\) by \(x_{n}\rightarrow x\) and the weak convergence by \(x_{n}\rightharpoonup x\). Let \(\operatorname{Fix}(T)\) denote the set of fixed points of the mapping T, \(\operatorname{EP}(F)\) denote the solution set of the equilibrium problem (1.8), and U denote the solution set of (1.1). We note that, for any \(x,y\in H\), the following inequality holds:
Firstly, we recall the metric (nearest point) projection from H onto C is the mapping \(P_{C}: H\rightarrow C\) which is defined as follows: given \(x\in H\), \(P_{C}x\) is the unique point in C with the property
\(P_{C}\) is characterized as follows.
Lemma 2.1
Given \(x\in H\) and \(y\in C\). Then \(y=P_{C}x\) if and only if the following inequality holds:
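This characterization, \(\langle x-P_{C}x, y-P_{C}x\rangle\leq0\) for all \(y\in C\), can be checked numerically; the choice of C as the closed unit ball of \(\mathbb{R}^{3}\) is an illustrative assumption:

```python
import numpy as np

# Metric projection onto the closed unit ball: P_C x = x / max(1, ||x||).
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal(3)          # a point of H (typically outside C)
px = proj_C(x)
# The characterization <x - P_C x, y - P_C x> <= 0 must hold for every y in C.
for _ in range(1000):
    y = rng.standard_normal(3)
    y = y / max(1.0, np.linalg.norm(y))   # sample a point y in C
    assert np.dot(x - px, y - px) <= 1e-10
```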
Secondly, we introduce the following lemma, which is about the resolvent of the maximal monotone operator.
Lemma 2.2
Let H be a real Hilbert space and let B be a maximal monotone operator on H. For \(r>0\) and \(x\in H\), define the resolvent \(J_{r}x\). Then the following holds:
for all \(s,t>0\) and \(x\in H\). In particular,
for all \(s,t>0\) and \(x\in H\).
Thirdly, for solving the equilibrium problem for a bifunction \(F: C\times C\rightarrow\mathbb{R}\), let us assume that F satisfies the following conditions:
-
(A1)
\(F(x,x)=0\) for all \(x\in C\);
-
(A2)
F is monotone, i.e., \(F(x,y)+F(y,x)\leq0\) for all \(x,y\in C\);
-
(A3)
for each \(x, y, z\in C\), \(\lim_{t\downarrow0}F(tz+(1-t)x,y)\leq F(x,y)\);
-
(A4)
for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and lower semicontinuous.
Then we have the following lemma, which appears implicitly in Blum and Oettli [19].
Lemma 2.3
([19])
Let F be a bifunction of \(C\times C\) into ℝ satisfying (A1)-(A4). Let \(r>0\) and \(x\in H\). Then there exists \(z\in C\) such that
The following lemma was also given in Combettes and Hirstoaga [11].
Lemma 2.4
([11])
Assume that \(F: C\times C\rightarrow\mathbb{R}\) satisfies (A1)-(A4). For \(r>0\) and \(x\in H\), define a mapping \(J_{r}: H\rightarrow C\) as follows:
for all \(x\in H\). Then the following hold:
-
(1)
\(J_{r}\) is single-valued;
-
(2)
\(J_{r}\) is a firmly nonexpansive mapping, i.e., for all \(x,y\in H\),
$$ \|J_{r}x-J_{r}y\|^{2}\leq\langle J_{r}x-J_{r}y,x-y\rangle; $$ -
(3)
\(\operatorname{Fix}(J_{r})=\operatorname{EP}(F)\);
-
(4)
\(\operatorname{EP}(F)\) is closed and convex.
We call such \(J_{r}\) the resolvent of F for \(r>0\). Using Lemma 2.3 and Lemma 2.4, Takahashi obtained the following lemma. See [20] for a more general result.
Lemma 2.5
([16])
Let H be a Hilbert space and let C be a nonempty, closed, and convex subset of H. Let \(F: C\times C\rightarrow\mathbb{R}\) satisfy (A1)-(A4). Let \(A_{F}\) be a set-valued mapping of H into itself defined by
Then \(\operatorname{EP}(F)=A^{-1}_{F}0\) and \(A_{F}\) is a maximal monotone operator with \(\operatorname{dom} A_{F}\subset C\). Furthermore, for any \(x\in H\) and \(r>0\), the resolvent \(J_{r}\) of F coincides with the resolvent of \(A_{F}\), i.e.,
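The coincidence of the two resolvents can be made concrete in one dimension; the bifunction below is an assumed example, not taken from the paper:

```python
import numpy as np

# For F(x, y) = <T x, y - x> with T x = t*x on C = R, the resolvent condition
# F(z, y) + (1/r) <y - z, z - x> >= 0 for all y forces t*z + (z - x)/r = 0,
# i.e. z = x / (1 + r t) -- precisely (I + r A_F)^{-1} x for A_F = T.
t, r = 2.0, 0.5

def ep_resolvent(x):
    """Resolvent J_r of F (equivalently, of A_F = T): z = x / (1 + r t)."""
    return x / (1.0 + r * t)

# Verify the equilibrium inequality at sample test points y (it holds with
# equality in this linear case).
for x in np.linspace(-3.0, 3.0, 7):
    z = ep_resolvent(x)
    for y in (-1.0, 0.0, 2.5):
        assert t * z * (y - z) + (y - z) * (z - x) / r >= -1e-10
```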
Besides, the following two lemmas are extremely important in the proof of the theorems.
Lemma 2.6
([1])
Assume that \(\{a_{n}\}_{n=0}^{\infty}\) is a sequence of nonnegative real numbers such that
where \(\{\gamma_{n}\}_{n=0}^{\infty}\) and \(\{\beta_{n}\}_{n=0}^{\infty}\) are sequences in \((0,1)\) and \(\{\delta_{n}\}_{n=0}^{\infty}\) is a sequence in ℝ such that
-
(i)
\(\sum_{n=0}^{\infty}\gamma_{n} = \infty\);
-
(ii)
either \(\limsup_{n\rightarrow\infty}\delta_{n} \leq0\) or \(\sum_{n=0}^{\infty}\gamma_{n}|\delta_{n}| < \infty\);
-
(iii)
\(\sum_{n=0}^{\infty}\beta_{n} < \infty\).
Then \(\lim_{n\rightarrow\infty}a_{n} = 0\).
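Lemma 2.6 can also be checked numerically, assuming the standard form of this lemma from [1], \(a_{n+1}\leq(1-\gamma_{n})a_{n}+\gamma_{n}\delta_{n}+\beta_{n}\); the parameter sequences below are illustrative choices satisfying (i)-(iii):

```python
# Worst admissible case: take the recursion with equality.
a = 5.0
for n in range(200000):
    gamma = 1.0 / (n + 2)        # (i): sum of gamma_n diverges
    delta = 1.0 / (n + 1)        # (ii): limsup delta_n <= 0 (delta_n -> 0)
    beta = 1.0 / (n + 2) ** 2    # (iii): sum of beta_n is finite
    a = (1.0 - gamma) * a + gamma * delta + beta
# a_n -> 0, as the lemma asserts
```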
The so-called demiclosed principle for nonexpansive mappings will often be used.
Lemma 2.7
(Demiclosed principle [21])
Let \(T : C\rightarrow C\) be a nonexpansive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). If \(\{x_{n}\}_{n=1}^{\infty}\) is a sequence in C weakly converging to x and if \(\{(I-T)x_{n}\}_{n=1}^{\infty}\) converges strongly to y, then \((I-T)x = y\). In particular, if \(y = 0\), then \(x\in \operatorname{Fix}(T)\).
Lemma 2.8
([22])
Let C be a nonempty, closed, and convex subset of H and let \(i_{C}\) be the indicator function of C, then \(i_{C}\) is a proper lower semicontinuous convex function on H and the subdifferential \(\partial i_{C}\) of \(i_{C}\) is a maximal monotone operator. Define \(J_{\lambda}x=(I+\lambda\partial i_{C})^{-1}x\), for all \(x\in H\). We see that, for any \(x\in H\) and \(u\in C\), \(u=J_{\lambda}x\Longleftrightarrow u=P_{C}x\).
3 Main results
In this section, we will give our main results of this paper. Let B be a maximal monotone operator on H such that the domain of B is included in C, and define the set of zero points of B as follows:
We always denote \(\operatorname{Fix}(T)\) as the fixed point set of the nonexpansive mapping T, denote U as the solution set of the constrained convex minimization problem (1.1), and denote \(\operatorname{EP}(F)\) as the solution set of the equilibrium problem (1.8).
Let f be a contraction on C with the constant \(k\in(0,1)\). Suppose that ∇g is \(1/L\)-ism. Let \(\{J_{r_{n}}\}\) be a sequence of mappings defined as in Lemma 2.4. Consider the mapping \(G_{n}\) on C defined by
where \(P_{C}(I-\beta\nabla g_{\lambda_{n}})=T_{\lambda_{n}}\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\{\lambda_{n}\}\subset (0, 2/\beta-L)\), \(\beta\in(0,2/L)\), \(\{\alpha_{n}\}\subset(0,1)\). It is easy to prove that \(\nabla g_{\lambda_{n}}\) is \(\frac{1}{L+\lambda_{n}}\)-ism and that \(T_{\lambda_{n}}\) is nonexpansive. It is also easy to see that \(G_{n}\) is a contraction. Indeed, by Lemma 2.4, we have, for each \(x,y\in C\),
Since \(0<1-\alpha_{n}(1-k)<1\), it follows that \(G_{n}\) is a contraction. Therefore, by the Banach contraction principle, \(G_{n}\) has a unique fixed point \(x_{n}^{f}\in C\) such that
For simplicity, we will write \(x_{n}\) for \(x_{n}^{f}\) provided no confusion occurs. We then prove the convergence of \(\{x_{n}\}\) and establish the existence of a point \(q\in U\cap B^{-1}0\) which solves the variational inequality
Equivalently, \(q=P_{U\cap B^{-1}0}f(q)\).
Theorem 3.1
Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). Let g be a real-valued convex function of C into ℝ, and the gradient ∇g be a \(1/L\)-ism with \(L>0\). Let f be a contraction with the constant \(k\in(0,1)\). Assume that \(U\cap B^{-1}0\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by
where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Let \(\{r_{n}\}\), \(\{\alpha_{n}\}\), \(\{\lambda_{n}\}\) satisfy the following conditions:
-
(i)
\(\{r_{n}\}\subset(0,\infty)\), \(\liminf_{n\rightarrow\infty}r_{n}>0\);
-
(ii)
\(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);
-
(iii)
\(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\).
Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap B^{-1}0\), which solves the variational inequality (3.1).
Proof
It is well known that \(\hat{x}\in C\) solves the minimization problem (1.1) if and only if for each fixed \(0<\beta <2/L\), \(\hat{x}\) solves the fixed point equation
It is clear that \(\hat{x}=T\hat{x}\), i.e., \(\hat{x}\in U=\operatorname{Fix}(T)\).
First, we claim that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in U\cap B^{-1}0\); since \(u_{n}=J_{r _{n}}(x_{n})\) and \(p=J_{r _{n}}(p)\), we know that, for any \(n\in\mathbb{N}\),
Thus, we derive that
It follows that
For \(x\in C\), note that
and
Then we get
It follows from (3.4) and (3.5) that
Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(M>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq M\), and
Hence \(\{x_{n}\}\) is bounded and we also find that \(\{u_{n}\}\) is bounded.
Next, we show that \(\|x_{n}-u_{n}\|\rightarrow0\). Indeed, for any \(p\in U\cap B^{-1}0\), by Lemma 2.4, we have
This implies that
Then from (3.5), (3.6), and (2.1), we derive that
Hence, we obtain
Since both \(\{x_{n}\}\) and \(\{u_{n}\}\) are bounded and \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), it follows that \(\|u_{n}-x_{n}\|\rightarrow0\).
Then we show that \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\). Indeed,
Since \(\alpha_{n}\rightarrow0\) and \(\|u_{n}-x_{n}\|\rightarrow0\), we obtain \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\).
Thus,
and
we have \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\|x_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).
Since \(\{u_{n}\}\) is bounded, without loss of generality, we can assume that \(u_{n_{i}}\rightharpoonup q\). Next, we show that \(q\in U\cap B^{-1}0\).
By (3.5), we have
Since \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\lambda_{n}\rightarrow0\), we have \(\|u_{n}-T(u_{n})\|\rightarrow 0\).
So, by Lemma 2.7, we get \(q\in \operatorname{Fix}(T)=U\).
Next, we show that \(q\in B^{-1}0\). Since \(u_{n}=J_{r_{n}}(x_{n})\) and B is a maximal monotone operator, we have from (1.6) \(A_{r_{n_{i}}}x_{n_{i}}\in BJ_{r_{n_{i}}}x_{n_{i}}\), where \(A_{r}\) is the Yosida approximation of B for \(r>0\). Furthermore, we have, for any \((u,v)\in B\),
Since \(\liminf_{n\rightarrow\infty}r_{n}>0\), \(u_{n_{i}}\rightharpoonup q\) and \(x_{n_{i}}-u_{n_{i}}\rightarrow0\), we have
Since B is a maximal monotone operator, we have \(0\in Bq\) and hence \(q\in B^{-1}0\). Thus we have \(q\in U\cap B^{-1}0\).
On the other hand, we note that
Hence, we obtain from (3.3) and (3.5)
It follows that
In particular,
Since \(x_{n_{i}}\rightharpoonup q\) and \(\lambda_{n}=o(\alpha_{n})\), it follows from (3.7) that \(x_{n_{i}}\rightarrow q\) as \(i\rightarrow \infty\).
Next, we show that q solves the variational inequality (3.1).
Observe that
Hence, we conclude that
Since \(T_{\lambda_{n}}J_{r_{n}}\) is nonexpansive, we find that \(I-T_{\lambda_{n}}J_{r_{n}}\) is monotone. Note that, for any given \(z\in U\cap B^{-1}0\),
Now, replacing n with \(n_{i}\) in the above inequality, and letting \(i\rightarrow\infty\), since \(\lambda_{n}=o(\alpha_{n})\), \(\|T_{\lambda_{n}}(u_{n})-x_{n}\|\rightarrow0\), we have
From the arbitrariness of \(z\in U\cap B^{-1}0\), it follows that \(q\in U\cap B^{-1}0\) is a solution of the variational inequality (3.1). Further, by the uniqueness of the solution of the variational inequality (3.1), we conclude that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\).
The variational inequality (3.1) can be rewritten as
By Lemma 2.1, it is equivalent to the following fixed point equation:
This completes the proof. □
Theorem 3.2
Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). Let g be a real-valued convex function of C into ℝ, and the gradient ∇g be a \(1/L\)-ism with \(L>0\). Let f be a contraction with the constant \(k\in(0,1)\). Assume that \(U\cap B^{-1}0\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and
where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Let \(\{r_{n}\}\), \(\{\alpha_{n}\}\), \(\{\lambda_{n}\}\) satisfy the following conditions:
-
(C1)
\(\{r_{n}\}\subset(0,\infty)\), \(\liminf_{n\rightarrow\infty}r_{n}>0\), \(\sum_{n=1}^{\infty}|r_{n+1}-r_{n}|<\infty\);
-
(C2)
\(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);
-
(C3)
\(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\), \(\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda _{n}|<\infty\).
Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap B^{-1}0\), which solves the variational inequality (3.1).
Proof
It is clear that \(\hat{x}\in C\) solves the minimization problem (1.1) if and only if for each fixed \(0<\beta<2/L\), \(\hat{x}\) solves the fixed point equation
and \(\hat{x}=T\hat{x}\), i.e., \(\hat{x}\in U=\operatorname{Fix}(T)\).
Now, we first show that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in U\cap B^{-1}0\), since \(u_{n}=J_{r_{n}}(x_{n})\), by Lemma 2.4, we know that
Thus, we derive from (3.5) that
Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(E>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq E\). Thus,
By induction, we have
Hence, \(\{x_{n}\}\) is bounded. From (3.9), we also find that \(\{u_{n}\}\) is bounded.
Next, we show that \(\|x_{n+1}-x_{n}\|\rightarrow0\).
Indeed, since ∇g is \(1/L\)-ism, \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\) is nonexpansive, and we derive that
Thus, we get
for some appropriate constant \(M_{1}>0\) such that
Since \(u_{n+1}=J_{r_{n+1}}(x_{n+1})\) and \(u_{n}=J_{r_{n}}(x_{n})\), we have from Lemma 2.2
Since \(\liminf_{n\rightarrow\infty}r_{n}>0\), without loss of generality, we may assume that there exists a real number a such that \(r_{n}\geq a>0\) for all \(n\in\mathbb{N}\).
Thus, we have
where \(M_{2}=\sup\{\frac{1}{a}\|J_{r_{n+1}}x_{n}-x_{n}\|: n\in \mathbb{N}\}\). From (3.10) and (3.11), we obtain
where \(M_{3}=\max\{M_{1},M_{2}\}\). Hence by Lemma 2.6, we have
Then, from (3.11), (3.12), and \(|r_{n+1}-r_{n}|\rightarrow0\), we have
For any \(p\in U\cap B^{-1}0\), in the same way as in the proof of Theorem 3.1, we have
Then, from (3.5) and (3.14), by the same argument as in the proof of Theorem 3.1, we derive that
and hence
Since \(\{x_{n}\}\), \(\{f(x_{n})\}\), and \(\{u_{n}\}\) are all bounded, \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), and \(\|x_{n+1}-x_{n}\|\rightarrow0\), we have
Next, we derive that
From (3.12), (3.15), and \(\alpha_{n}\rightarrow0\), we have
It follows that \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).
Now we show that
where \(q\in U\cap B^{-1}0\) is a unique solution of the variational inequality (3.1).
Indeed, take a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that
Since \(\{x_{n}\}\) is bounded, without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup\tilde{x}\).
By the same argument as in the proof of Theorem 3.1, we have \(\tilde {x}\in U\cap B^{-1}0\).
Since \(q=P_{U\cap B^{-1}0}f(q)\), it follows that
Finally, we show that \(x_{n}\rightarrow q\).
In fact,
So, from (3.5) and (3.9), we derive
It follows that
Since \(\{x_{n}\}\) is bounded, we can take a constant \(M>0\) such that
Then we obtain
where \(\delta_{n}=\frac{2}{1+\alpha_{n}(1-k)}[\langle -(I-f)q,x_{n+1}-q\rangle+\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\|M]\).
By (3.17) and \(\lambda_{n}=o(\alpha_{n})\), we get \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\). Now applying Lemma 2.6 to (3.18) one concludes that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\). The variational inequality (3.1) can be rewritten as
By Lemma 2.1, it is equivalent to the following fixed point equation:
This completes the proof. □
4 Applications
In this section, we will give some applications, which are useful in nonlinear analysis and optimization.
Theorem 4.1
Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let F be a bifunction of \(C\times C\) into ℝ satisfying (A1)-(A4). Let \(J_{r}\) be the resolvent of F for \(r>0\). Let g be a real-valued convex function of C into ℝ, and the gradient ∇g be a \(1/L\)-ism with \(L>0\). Let f be a contraction with the constant \(k\in(0,1)\). Assume that \(U\cap \operatorname{EP}(F)\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and
where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Let \(\{r_{n}\}\), \(\{\alpha_{n}\}\), \(\{\lambda_{n}\}\) satisfy the conditions (C1)-(C3). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap \operatorname{EP}(F)\), where \(q=P_{U\cap \operatorname{EP}(F)}f(q)\).
Proof
For a bifunction F of \(C\times C\) into ℝ satisfying (A1)-(A4), we can define a maximal monotone operator \(A_{F}\) with \(\operatorname{dom}A_{F}\subset C\). Put \(B=A_{F}\) in Theorem 3.2. Then by Lemma 2.5, we have \(u_{n}=J_{r_{n}}(x_{n})\). Thus we obtain the desired result by Theorem 3.2. □
On the other hand, based on Theorem 3.2 and Theorem 4.1, we will give another two applications. In 1994, Censor and Elfving [23] introduced the split feasibility problem (SFP). Since then, various algorithms have been introduced to solve it (see [24] and [25–29]). Recently, many authors have paid attention to the SFP due to its applications in signal processing and image reconstruction (see [4, 30, 31]).
Let C and Q be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Then the SFP under consideration in this paper can mathematically be formulated as finding a point x satisfying the following property:
where \(A:H_{1}\rightarrow H_{2}\) is a bounded linear operator. It is clear that \(x^{*}\) is a solution to the split feasibility problem (4.2) if and only if \(x^{*}\in C\) and \(Ax^{*}-P_{Q}Ax^{*}=0\). We define the proximity function g by
Then consider the constrained convex minimization problem
Then \(x^{*}\) solves the SFP (4.2) if and only if \(x^{*}\) solves the minimization problem (4.3) with the minimum equal to 0.
In particular, the so-called CQ algorithm was introduced by Byrne [24]. Take the initial guess \(x_{0}\in H_{1}\) arbitrarily, and define \(\{x_{n}\}\) recursively as follows:
where \(0<\beta<2/\|A\|^{2}\) and \(P_{C}\) denotes the projector onto C. Then the sequence \(\{x_{n}\}\) generated by (4.4) converges weakly to a solution of the SFP.
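A minimal sketch of the CQ algorithm (4.4); the sets C, Q and the operator A below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Illustrative SFP instance: C = [0,1]^2, Q = [1.5, 2] in R, A = [1 1].
A = np.array([[1.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 1.5, 2.0)
beta = 0.5                                  # beta in (0, 2/||A||^2) = (0, 1)

x = np.zeros(2)                             # arbitrary initial guess in H_1
for _ in range(100):
    residual = A @ x - proj_Q(A @ x)        # (I - P_Q) A x
    x = proj_C(x - beta * (A.T @ residual)) # x_{n+1} = P_C(x_n - beta A^*(I - P_Q) A x_n)

# x now satisfies x in C and A x in Q, i.e. it solves this SFP instance
```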
Let B be a maximal monotone operator on a Hilbert space H. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). In order to obtain a strongly convergent iterative sequence solving the SFP, we propose a new algorithm as follows: \(x_{1}\in C\),
where \(f:C\rightarrow C\) is a contraction with the constant \(k\in(0,1)\), and \(\{T_{\lambda_{n}}\}\) satisfy \(P_{C}(I-\beta(A^{*}(I-P_{Q})A+\lambda_{n}I))=T_{\lambda_{n}}\) for all n, and \(\beta\in(0,2/\|A\|^{2})\). We can show that the sequence \(\{x_{n}\}\) generated by (4.5) converges strongly to a solution of the SFP (4.2) if the sequence \(\{\alpha_{n}\}\subset(0,1)\) and the sequence \(\{\lambda_{n}\}\) of parameters satisfy appropriate conditions. Applying Theorem 3.2, we obtain the following result.
Theorem 4.2
Assume that the split feasibility problem (4.2) is consistent. Let the sequence \(\{x_{n}\}\) be generated by (4.5). Here the sequences \(\{r_{n}\}\) and \(\{\alpha_{n}\}\subset(0,1)\), and the sequence \(\{\lambda_{n}\}\) satisfy the conditions (C1)-(C3). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in W\cap B^{-1}0\), where W denotes the solution set of SFP (4.2).
Proof
By the definition of the proximity function g, we have
Since \(P_{Q}\) is a \(1/2\)-averaged mapping, \(I-P_{Q}\) is 1-ism. Hence, for all \(x,y\in C\), we obtain
So, ∇g is \(1/\|A\|^{2}\)-ism.
Set \(g_{\lambda_{n}}(x)=g(x)+\frac{\lambda_{n}}{2}\|x\|^{2}\), consequently,
Then the iterative scheme (4.5) is equivalent to
where \(\{T_{\lambda_{n}}\}\) satisfy \(P_{C}(I-\beta(A^{*}(I-P_{Q})A+\lambda_{n}I))=T_{\lambda_{n}}\) for all n, and \(\beta\in(0,2/\|A\|^{2})\). The desired conclusion thus follows from Theorem 3.2. □
Further, in order to obtain a strongly convergent iterative sequence to solve the SFP, we propose another new algorithm as follows: \(x_{1}\in C\),
where \(f:C\rightarrow C\) is a contraction with the constant \(k\in(0,1)\), and \(\{T_{\lambda_{n}}\}\) satisfy \(P_{C}(I-\beta(A^{*}(I-P_{Q})A+\lambda_{n}I))=T_{\lambda_{n}}\) for all n, and \(\beta\in(0,2/\|A\|^{2})\). We can show that the sequence \(\{x_{n}\}\) generated by (4.6) converges strongly to a solution of the SFP (4.2) if the sequence \(\{\alpha_{n}\}\subset(0,1)\) and the sequence \(\{\lambda_{n}\}\) of parameters satisfy appropriate conditions. Applying Theorem 3.2, we obtain the following result.
Theorem 4.3
Assume that the split feasibility problem (4.2) is consistent. Let the sequence \(\{x_{n}\}\) be generated by (4.6). Here the sequences \(\{r_{n}\}\) and \(\{\alpha_{n}\}\subset(0,1)\) and the sequence \(\{\lambda_{n}\}\) satisfy the conditions (C1)-(C3). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in W\cap \operatorname{EP}(F)\), where W denotes the solution set of SFP (4.2).
Proof
For a bifunction F of \(C\times C\) into ℝ satisfying (A1)-(A4), we can define a maximal monotone operator \(A_{F}\) with \(\operatorname{dom}A_{F}\subset C\). Put \(B=A_{F}\) in Theorem 3.2. Then by Lemma 2.5, we have \(u_{n}=J_{r_{n}}(x_{n})\). Thus we obtain the desired result by Theorem 3.2 and Theorem 4.2. □
5 Numerical results
In this section, we present concrete examples to assess the numerical performance of our algorithm. Using the algorithms in Theorem 4.2 and Theorem 3.2, we illustrate their realization, effectiveness, and convergence in solving a system of linear equations and a constrained convex minimization problem.
The first example is a \(4\times4\) system of linear equations, solved by the algorithm in Theorem 4.2.
Example 1
In Theorem 4.2, we assume that \(H_{1}=H_{2}=\mathbb{R}^{4}\). Take \(f=\frac{1}{4}I\), where I denotes the \(4\times4\) identity matrix. Set the parameters \(\alpha_{n}=\frac{1}{n+2}\) and \(\lambda_{n}=\frac{1}{(n+2)^{2}}\) for every \(n\geq0\), and \(\beta=\frac{1}{100}\). Take
The SFP can be formulated as the problem of finding a point \(x^{*}\) with the property
where \(C=H_{1}\), \(Q=\{b\}\subset H_{2}\). That is, \(x^{*}\) is the solution of the system of linear equations \(Ax=b\), and
Then by Theorem 4.2 and Lemma 2.8, the sequence \(\{x_{n}\}\) is generated by
As \(n\rightarrow\infty\), we have \(\{x_{n}\}\rightarrow x^{*}=(1,3,2,4)^{T}\).
From Table 1, we can easily see that as the number of iterations increases, \(x_{n}\) approaches the exact solution \(x^{*}\) and the errors gradually approach zero.
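A minimal numerical sketch of this example follows (ours, not the paper's code). Since the matrix A and vector b of Example 1 are not reproduced here, we use a hypothetical invertible \(4\times4\) matrix and set \(b=Ax^{*}\) with \(x^{*}=(1,3,2,4)^{T}\); we also assume the iteration has the viscosity form \(x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}x_{n}\), with β chosen in \((0,2/\|A\|^{2})\) for this A:

```python
# Hypothetical data: an invertible 4x4 matrix A with b = A x*,
# so that Ax = b has the exact solution x* = (1, 3, 2, 4)^T.
A = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 0.0, 0.0],
     [0.0, 0.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 3.0]]
x_star = [1.0, 3.0, 2.0, 4.0]

def mat_vec(M, v):
    # multiply a 4x4 matrix by a 4-vector
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

b = mat_vec(A, x_star)
At = [[A[j][i] for j in range(4)] for i in range(4)]  # transpose of A

beta = 0.05                  # lies in (0, 2/||A||^2) for this A
x = [0.0, 0.0, 0.0, 0.0]     # starting point
for n in range(5000):
    alpha = 1.0 / (n + 2)
    lam = 1.0 / (n + 2) ** 2
    # T_{lambda_n} x = x - beta*(A^T(Ax - b) + lambda_n x),
    # since C = H1 (so P_C = I) and Q = {b} (so P_Q is constant b)
    r = [ai - bi for ai, bi in zip(mat_vec(A, x), b)]
    grad = mat_vec(At, r)
    Tx = [xi - beta * (gi + lam * xi) for xi, gi in zip(x, grad)]
    # viscosity step with f(x) = x/4
    x = [alpha * (xi / 4.0) + (1.0 - alpha) * ti for xi, ti in zip(x, Tx)]

err = max(abs(xi - si) for xi, si in zip(x, x_star))  # max-norm error
```

Running this sketch, the iterates approach \(x^{*}\) and the error decays slowly, consistent with the \(\alpha_{n}=1/(n+2)\) schedule.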
The second example is a constrained convex minimization problem, solved with the algorithm of Theorem 3.2.
Example 2
In Theorem 3.2, we assume that \(H=\mathbb{R}\) and \(C=[0,2]\). Take \(f=\frac{1}{4}I\), where I denotes the identity mapping. Set the parameters \(\alpha_{n}=\frac{1}{n+2}\) and \(\lambda_{n}=\frac{1}{(n+2)^{2}}\) for every \(n\geq0\), and \(\beta=\frac{1}{2}\). Consider the problem (1.1) and take the function
The problem (1.1) can be written as
It is easy to see that ∇g is \(1/2\)-ism, so its Lipschitz constant is \(L=2\). Solving the problem (5.5) amounts to finding a point \(x^{*}\in[0,2]\) at which \(g\) attains its minimum; here \(x^{*}=1\).
Then by Theorem 3.2 and Lemma 2.8, the sequence \(\{x_{n}\}\) is generated by
As \(n\rightarrow\infty\), we have \(\{x_{n}\}\rightarrow x^{*}\).
From Table 2, we easily see that, using the regularization method, as the number of iterations increases, \(x_{n}\) approaches \(x^{*}\) and the errors gradually approach zero.
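A minimal sketch of Example 2 under stated assumptions (ours, not the paper's code): since the function g is not reproduced here, we take the illustrative choice \(g(x)=(x-1)^{2}\), whose gradient \(2(x-1)\) is Lipschitz with \(L=2\) (so ∇g is \(1/2\)-ism) and whose minimizer on \([0,2]\) is \(x^{*}=1\), consistent with the properties stated above; we again assume the viscosity form \(x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}x_{n}\):

```python
def proj_C(x):
    # metric projection of a real number onto C = [0, 2]
    return min(max(x, 0.0), 2.0)

def grad_g(x):
    # gradient of the illustrative g(x) = (x - 1)^2; Lipschitz with L = 2
    return 2.0 * (x - 1.0)

beta = 0.5      # step size from Example 2
x = 2.0         # starting point in C
for n in range(1000):
    alpha = 1.0 / (n + 2)
    lam = 1.0 / (n + 2) ** 2
    # regularized gradient-projection step T_{lambda_n}
    Tx = proj_C(x - beta * (grad_g(x) + lam * x))
    # viscosity step with f(x) = x/4
    x = alpha * (x / 4.0) + (1.0 - alpha) * Tx
# x is now close to the minimizer x* = 1
```

With this choice of g and β, each \(T_{\lambda_{n}}\) step lands very near the minimizer, and the residual error is governed by the \(\alpha_{n}\) term.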
From an implementation standpoint, the algorithms in these concrete examples are easy to program.
6 Conclusion
In a real Hilbert space, methods for solving the equilibrium problem and the constrained convex minimization problem have each been extensively studied. Recently, Tian and Liu were the first to propose composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. In this paper, we use the regularized gradient-projection algorithm to find the unique common solution of a constrained convex minimization problem and the problem of finding zero points of a maximal monotone operator; this solution also solves a certain variational inequality. In particular, under suitable conditions, the zero-point problem for a maximal monotone operator can be transformed into an equilibrium problem. New strong convergence theorems and applications are thereby obtained. Finally, we apply the algorithm to the split feasibility problem and the constrained convex minimization problem, and we illustrate its realization, effectiveness, and convergence with concrete examples and numerical results.
References
Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)
Ceng, LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 1(3), 341-359 (2011)
Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Ceng, LC, Ansari, QH, Wen, CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J. Inequal. Appl. 2013, Article ID 240 (2013)
Takahashi, W: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama (2000)
Flam, SD, Antipin, AS: Equilibrium programming using proximal-like algorithms. Math. Program. 78, 29-41 (1997)
Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)
He, HM, Liu, SY, Cho, YJ: An explicit method for systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. J. Comput. Appl. Math. 235, 4128-4139 (2011)
Qin, XL, Cho, YJ, Kang, SM: Convergence analysis on hybrid projection algorithms for equilibrium problems and variational inequality problems. Math. Model. Anal. 14, 335-351 (2009)
Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)
Moudafi, A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
Marino, G, Xu, HK: A general method for nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 318, 43-52 (2006)
Zeng, LC, Ansari, QH, Shyu, DS, Yao, JC: Strong and weak convergence theorems for common solutions of generalized equilibrium problems and zeros of maximal monotone operators. Fixed Point Theory Appl. 2010, Article ID 590278 (2010)
Tian, M, Liu, L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 63(9), 1367-1385 (2014)
Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
Eshita, K, Takahashi, W: Approximating zero points of accretive operators in general Banach spaces. JP J. Fixed Point Theory Appl. 2, 105-116 (2007)
Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
Aoyama, K, Kimura, Y, Takahashi, W: Maximal monotone operators and maximal monotone functions for equilibrium problems. J. Convex Anal. 15, 395-409 (2008)
Hundal, H: An alternating projection that does not converge in norm. Nonlinear Anal. 57, 35-61 (2004)
Lin, LJ, Takahashi, W: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429-453 (2012)
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18(2), 441-453 (2002)
Qu, B, Xiu, N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21(5), 1655-1662 (2005)
Xu, HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22(6), 2021-2034 (2006)
Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20(4), 1261-1266 (2004)
Yang, Q, Zhao, J: Generalized KM theorems and their applications. Inverse Probl. 22(3), 833-844 (2006)
Lopez, G, Martin-Marquez, V, Wang, FH, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)
Zhao, JL, Zhang, YJ, Yang, QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 219, 1644-1653 (2012)
Acknowledgements
The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All the authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Cite this article
Tian, M., Jiao, SW. Regularized gradient-projection methods for the constrained convex minimization problem and the zero points of maximal monotone operator. Fixed Point Theory Appl 2015, 11 (2015). https://doi.org/10.1186/s13663-015-0258-9