Abstract
In this paper, let H be a real Hilbert space and let C be a nonempty, closed, and convex subset of H. We assume that \((A+B)^{-1}0\cap U\neq\emptyset\), where \(A:C\rightarrow H\) is an α-inverse-strongly monotone mapping, \(B:H\rightarrow H\) is a maximal monotone operator whose domain is included in C, and U denotes the solution set of the constrained convex minimization problem. Based on the viscosity approximation method, we use the gradient-projection algorithm to propose composite iterative algorithms that find a common solution of the problems under study. We then regularize the scheme to find the unique solution by the regularized gradient-projection algorithm. The point \(q\in (A+B)^{-1}0\cap U\) obtained in this way solves the variational inequality \(\langle(I-f)q, p-q\rangle\geq0\), \(\forall p\in(A+B)^{-1}0\cap U\). Under suitable conditions, the constrained convex minimization problem can be transformed into the split feasibility problem, and the problem of finding zeros of the sum of two operators can be transformed into the variational inequality problem and the fixed point problem. Furthermore, new strong convergence theorems and applications are obtained in Hilbert spaces, which are useful in nonlinear analysis and optimization.
1 Introduction
Throughout this paper, let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and norm \(\|\cdot\|\). Let C be a nonempty, closed, and convex subset of H. Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. In the following, we introduce some operators which will be used in this paper.
- \(f:C\rightarrow C\) is a contraction if there exists \(k\in(0,1)\) such that \(\|f(x)-f(y)\|\leq k\|x-y\|\) for all \(x,y\in C\).
- \(T:C\rightarrow C\) is nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) for all \(x,y\in C\).
- \(V:C\rightarrow C\) is Lipschitz continuous if there exists a constant \(L>0\) such that \(\|Vx-Vy\|\leq L\|x-y\|\) for all \(x,y\in C\).
- \(W:C\rightarrow H\) is a strict pseudo-contraction [1] if there exists \(t\in\mathbb{R}\) with \(0\leq t<1\) such that \(\|Wx-Wy\|^{2}\leq\|x-y\|^{2}+t\|(I-W)x-(I-W)y\|^{2}\) for all \(x,y\in C\).
- \(P_{C}: H\rightarrow C\) is the metric projection if \(\|x-P_{C}x\|\leq\|x-y\|\) for all \(x\in H\) and \(y\in C\). \(P_{C}\) is firmly nonexpansive if \(\|P_{C}x-P_{C}y\|^{2}\leq\langle P_{C}x-P_{C}y,x-y\rangle\) for all \(x,y\in H\).
- \(A:H\rightarrow H\) is monotone if \(\langle x-y, Ax-Ay\rangle\geq0\) for all \(x,y\in H\).
- Given a number \(\eta>0\), \(A:H\rightarrow H\) is η-strongly monotone if \(\langle x-y, Ax-Ay\rangle\geq\eta\|x-y\|^{2}\) for all \(x,y\in H\).
- Given a number \(\alpha>0\), \(A:C\rightarrow H\) is α-inverse strongly monotone (α-ism) if \(\langle x-y, Ax-Ay\rangle\geq \alpha\|Ax-Ay\|^{2}\) for all \(x,y\in C\).
We first consider the problem of zero points of the maximal monotone operator:
where B is a mapping of H into \(2^{H}\), the effective domain of B is denoted by \(\operatorname{dom}B\) or \(D(B)\), that is, \(\operatorname{dom}B=\{x\in H: Bx\neq\emptyset\}\). A multi-valued mapping B is said to be a monotone operator on H if \(\langle x-y, u-v\rangle\geq0\) for all \(x,y\in \operatorname{dom}B\), \(u\in Bx\), \(v\in By\). A monotone operator B on H is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and \(r>0\), we may define a single-valued operator \(J_{r}=(I+rB)^{-1}: H\rightarrow \operatorname{dom}B\), which is called the resolvent of B for r. It is well known that \(B^{-1}0=\operatorname{Fix}(J_{r})\) for all \(r>0\) and the resolvent \(J_{r}\) is firmly nonexpansive, i.e.,
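To make the resolvent concrete, the following is a minimal numerical sketch; the operator \(B(x)=Mx\) and all concrete values are our own illustrative choices, not from the paper. For M symmetric positive semidefinite, B is maximal monotone, \(J_{r}x\) solves \((I+rM)y=x\), zeros of B are fixed points of \(J_{r}\), and \(J_{r}\) is firmly nonexpansive.

```python
import numpy as np

# Illustrative sketch (our choices, not from the paper): B(x) = M x with M
# symmetric positive semidefinite is maximal monotone, and its resolvent
# J_r = (I + r B)^{-1} amounts to solving a linear system.
def resolvent(M, r, x):
    return np.linalg.solve(np.eye(len(x)) + r * M, x)

M = np.diag([0.0, 1.0, 3.0])          # here B^{-1}0 = span{e1}
r = 0.5
x = np.array([2.0, 4.0, 6.0])
z = np.array([5.0, 0.0, 0.0])         # a zero of B

# B^{-1}0 = Fix(J_r): zeros of B are exactly the fixed points of J_r.
assert np.allclose(resolvent(M, r, z), z)

# Firm nonexpansiveness: ||J_r x - J_r z||^2 <= <J_r x - J_r z, x - z>.
u, v = resolvent(M, r, x), resolvent(M, r, z)
assert np.dot(u - v, u - v) <= np.dot(u - v, x - z) + 1e-12
```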
Some authors introduced various algorithms to solve zeros of the operators (see [2]) and monotone operators (see [3]).
We consider the following constrained convex minimization problem:
where \(g:C\rightarrow\mathbb{R}\) is a real-valued convex function. Assume that the constrained convex minimization problem (1.2) is solvable, and let U denote the solution set of (1.2). For solving constrained convex minimization problems, some methods were proposed by some authors (see [4] and [5]). The gradient-projection algorithm generates a sequence \(\{x_{n}\}_{n=0}^{\infty}\) according to the recursive formula:
or more generally,
where the parameters \(\beta_{n}\) are positive real numbers and \(P_{C}\) is the metric projection from H onto C. It is well known that the convergence of the algorithms (1.3) and (1.4) depends on the behavior of the gradient ∇g. If the gradient ∇g is only assumed to be inverse-strongly monotone, then the sequence \(\{x_{n}\}\) defined by (1.3) and (1.4) converges only weakly to a minimizer of (1.2). If the gradient ∇g is Lipschitz continuous and strongly monotone, then the sequence generated by (1.3) and (1.4) converges strongly to a minimizer of (1.2).
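The gradient-projection step above can be sketched numerically. This is a toy instance under our own assumptions (a quadratic \(g(x)=\frac{1}{2}\|x-a\|^{2}\), so ∇g is 1-Lipschitz, and C a box so that \(P_{C}\) is a coordinate clip), not the paper's setting:

```python
import numpy as np

# Gradient-projection sketch: x_{n+1} = P_C(x_n - beta * grad g(x_n)).
# Illustrative problem (ours): g(x) = 0.5||x - a||^2 over the box C = [0,1]^2,
# so grad g(x) = x - a is 1-Lipschitz and any beta in (0, 2) is admissible.
def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

def gpa(grad, proj, x0, beta=1.0, iters=100):
    x = x0
    for _ in range(iters):
        x = proj(x - beta * grad(x))
    return x

a = np.array([2.0, -0.5])
x_star = gpa(lambda x: x - a, proj_box, np.zeros(2))
# For this g, the constrained minimizer is P_C(a) = (1.0, 0.0).
assert np.allclose(x_star, [1.0, 0.0])
```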
However, the minimization problem (1.2) may have more than one solution, so regularization is essential for finding the unique solution of the minimization problem (1.2). Some authors used regularization methods to solve minimization problems (see [6]), and other methods for hierarchical minimization problems (see [7]). Now, we consider the following regularized minimization problem:
where \(\lambda>0\) is the regularization parameter, g is a convex function with a \(1/L\)-ism continuous gradient ∇g. Then the regularized gradient-projection algorithm generates a sequence \(\{ x_{n}\}_{n=0}^{\infty}\) by the following recursive formula:
where the parameter \(\lambda_{n}>0\), β is a constant with \(0<\beta<2/L\), and \(P_{C}\) is the metric projection from H onto C. It is known that the sequence \(\{x_{n}\}_{n=0}^{\infty}\) generated by algorithm (1.5) converges weakly to a minimizer of (1.2) in the setting of infinite-dimensional spaces (see [8]).
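A self-contained sketch of the regularized scheme, replacing ∇g by \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\) with \(\lambda_{n}\rightarrow0\); all concrete choices (the quadratic g, the box C, the schedule \(\lambda_{n}=1/n^{2}\)) are ours, for illustration only:

```python
import numpy as np

# Regularized gradient-projection sketch: the gradient step uses
# grad g_{lambda_n} = grad g + lambda_n * I with lambda_n -> 0.
# Illustrative problem (ours): g(x) = 0.5||x - a||^2 over C = [0,1]^2.
def proj_box(x):
    return np.clip(x, 0.0, 1.0)

def rgpa(grad, x0, beta=0.5, iters=500):
    x = x0
    for n in range(1, iters + 1):
        lam = 1.0 / n**2                        # regularization parameter
        x = proj_box(x - beta * (grad(x) + lam * x))
    return x

a = np.array([2.0, -0.5])
x_star = rgpa(lambda x: x - a, np.zeros(2))
assert np.allclose(x_star, [1.0, 0.0])          # the minimizer P_C(a)
```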
The subdifferential of the lower semicontinuous convex function and indicator function will also be used in this paper. See the introduction from Section 3 for more details as regards ∂h and \(\partial i_{C}\).
In 2000, Moudafi [9] introduced the viscosity approximation method for nonexpansive mappings, which was extended in [10]. Let f be a contraction on H. Starting with an arbitrary initial point \(x_{0}\in H\), define a sequence \(\{x_{n}\}\) recursively by
Here \(\operatorname{Fix}(T)\) denotes the set of fixed points of the mapping T, i.e., \(\operatorname{Fix}(T)=\{x\in H: x=Tx\}\).
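A minimal numeric sketch of the viscosity iteration \(x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})Tx_{n}\). The choices of T and f below are ours, for illustration: T is the projection onto the closed unit ball (nonexpansive, with \(\operatorname{Fix}(T)\) the ball) and f is a 0.5-contraction.

```python
import numpy as np

# Viscosity approximation sketch: x_{n+1} = a_n f(x_n) + (1 - a_n) T x_n.
# Illustrative choices (ours): T = projection onto the closed unit ball
# (nonexpansive, Fix(T) = ball), f(x) = 0.5 x + (0.5, 0) (a 0.5-contraction).
def T(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

f = lambda x: 0.5 * x + np.array([0.5, 0.0])

x = np.array([3.0, 4.0])
for n in range(1, 2000):
    a_n = 1.0 / (n + 1)                 # a_n -> 0, sum a_n = infinity
    x = a_n * f(x) + (1 - a_n) * T(x)

# The limit q satisfies q = P_{Fix(T)} f(q); here q = (1, 0), the unique
# fixed point of f, which happens to lie in Fix(T).
assert np.linalg.norm(x - np.array([1.0, 0.0])) < 0.1
```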
In 2007, for finding a common element of equilibrium problem \(EP(F)\) and a fixed point problem, Takahashi and Takahashi [11] introduced the following iterative scheme by the viscosity approximation method in a Hilbert space: \(x_{1}\in H\) and
where \(\{\alpha_{n}\}\subset(0,1)\) and \(\{\gamma_{n}\}\subset(0,\infty)\) satisfy some appropriate conditions. Further, they proved that \(\{x_{n}\}\) and \(\{u_{n}\}\) converge strongly to \(z\in \operatorname{Fix}(T)\cap EP(F)\), where \(z=P_{\operatorname{Fix}(T)\cap EP(F)}f(z)\).
In 2012, Tian and Liu [12] introduced the following iterative method in a Hilbert space: \(x_{1}\in C\) and
where \(F: C\times C\rightarrow\mathbb{R}\), \(u_{n}=Q_{\beta_{n}}(x_{n})\), \(P_{C}(I-\lambda_{n}\nabla g)=\theta_{n}I+(1-\theta_{n})T_{n}\), \(\theta_{n}=\frac{2-\lambda_{n}L}{4}\), and \(\{\lambda_{n}\}\subset(0,2/L)\), and \(\{\alpha_{n}\}\), \(\{r_{n}\}\), \(\{\theta_{n}\}\) satisfy appropriate conditions. Further, they proved the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap EP(F)\), which solves the variational inequality
This was the first time that the equilibrium problem and the constrained convex minimization problem were solved jointly.
Also in 2012, Lin and Takahashi [13] proposed the following iterative sequence in a Hilbert space: \(x_{1}=x\in H\) and \(\{x_{n}\}\subset H\) a sequence generated by
Under appropriate conditions, it is proved that the sequence \(\{x_{n}\}\) generated by (1.9) converges strongly to a point \(z_{0}\in(A+B)^{-1}0\cap F^{-1}0\) which is a unique fixed point of \(P_{(A+B)^{-1}0\cap F^{-1}0}(I-V+\gamma g)\) in \((A+B)^{-1}0\cap F^{-1}0\). This point \(z_{0}\) is also a unique solution of the hierarchical variational inequality
In 2013, Kong et al. [14] proposed a multistep hybrid extragradient method for triple hierarchical variational inequalities.
In this paper, motivated and inspired by the above results, we introduce two new iterative algorithms. The first is: \(x_{1}\in C\) and
to find a common element of \((A+B)^{-1}0\cap U\), where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(0< b\leq\beta_{n}\leq2/L\).
The other is: \(x_{1}\in C\) and
to find a unique solution of \((A+B)^{-1}0\cap U\), where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(\beta\in(0,2/L)\).
Under suitable conditions, it is proved that both of the sequences \(\{x_{n}\}\) generated by (1.10) and (1.11) converge strongly to a point \(q\in(A+B)^{-1}0\cap U\), which solves the variational inequality
Equivalently, \(q=P_{(A+B)^{-1}0\cap U}f(q)\).
The main purpose of this paper is to find a solution of \((A+B)^{-1}0\cap U\) by using the gradient-projection algorithm. Then we use the regularized gradient-projection composite iterative method to find a unique solution of \((A+B)^{-1}0\cap U\). In the case that the maximal monotone operator \(B=\partial i_{C}\), the problem of finding a unique solution in \((A+B)^{-1}0\cap U\) is equivalent to the problem of finding a unique solution in \(VI(C,A)\cap U\). In the case \(B=\partial i_{C}\) and \(A=I-W\), \((A+B)^{-1}0\) is equivalent to \(\operatorname{Fix}(W)\).
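The displayed recursions (1.10) and (1.11) do not survive in this version of the text, but the structure described around them — a resolvent step \(u_{n}=J_{r_{n}}(I-r_{n}A)x_{n}\) followed by a viscosity-plus-gradient-projection step — can be sketched numerically. Every concrete operator below is our own illustrative choice, not the paper's data: \(B=\partial i_{C}\) (so \(J_{r}=P_{C}\)), \(A(x)=x-a\) (1-ism), \(g(x)=\frac{1}{2}\|x-a\|^{2}\), so that \((A+B)^{-1}0\cap U\) is the singleton \(\{P_{C}(a)\}\).

```python
import numpy as np

# Hedged sketch of the composite iteration: u_n = J_{r_n}(I - r_n A)x_n,
# x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) T_n u_n, T_n = P_C(I - beta_n grad g).
# Concrete choices are ours: C = [0,1]^2, B = the subdifferential of i_C
# (so J_r = P_C), A(x) = x - a (1-ism), g(x) = 0.5||x - a||^2.
def P_C(x):
    return np.clip(x, 0.0, 1.0)

a = np.array([2.0, 0.5])          # then (A+B)^{-1}0 = U = {P_C(a)} = {(1, 0.5)}
A = lambda x: x - a
grad_g = lambda x: x - a
f = lambda x: 0.5 * x             # a 0.5-contraction

x = np.zeros(2)
for n in range(1, 500):
    alpha_n, r_n, beta_n = 1.0 / (n + 1), 1.0, 1.0
    u = P_C(x - r_n * A(x))                  # resolvent step J_r(I - rA)
    x = alpha_n * f(x) + (1 - alpha_n) * P_C(u - beta_n * grad_g(u))

q = np.array([1.0, 0.5])
assert np.linalg.norm(x - q) < 1e-2
```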
The paper is organized as follows: in Section 2, we introduce some useful properties and lemmas. In Section 3, we prove our main results and apply our results to the variational inequality, fixed point problem and the split feasibility problem. In the final section, we give our conclusion due to the main results.
We will use the following notations:
1. ‘⇀’ for weak convergence and ‘→’ for strong convergence;
2. \(\operatorname{Fix}(T)\) denotes the set of fixed points of the mapping T;
3. U denotes the solution set of (1.2);
4. ‘GPA’ for the gradient-projection algorithm and ‘RGPA’ for the regularized gradient-projection algorithm.
2 Preliminaries
In this section, we give our preliminaries which will be useful for the main results in the next section.
Throughout this paper, we always assume that C is a nonempty, closed, and convex subset of a real Hilbert space H.
The following inequality holds in an inner product space X:
We need some facts and tools in a real Hilbert space H which are listed as lemmas below.
Firstly, we recall the metric (nearest point) projection from H onto C is the mapping \(P_{C}: H\rightarrow C\) which is defined as follows: given \(x\in H\), \(P_{C}x\) is the unique point in C with the property
\(P_{C}\) is characterized as follows.
Lemma 2.1
Given \(x\in H\) and \(y\in C\). Then \(y=P_{C}x\) if and only if the following inequality holds:
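The characterization in Lemma 2.1 is the inequality \(\langle x-P_{C}x,\,z-P_{C}x\rangle\leq0\) for all \(z\in C\); it can be checked numerically. C below is the closed unit ball, our own illustrative choice:

```python
import numpy as np

# Numeric check of the projection characterization for C the closed unit
# ball: y = P_C x  iff  <x - y, z - y> <= 0 for every z in C.
def P_ball(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

rng = np.random.default_rng(0)
x = np.array([3.0, 4.0])
y = P_ball(x)                        # (0.6, 0.8)
for _ in range(1000):
    z = P_ball(rng.normal(size=2))   # an arbitrary point of C
    assert np.dot(x - y, z - y) <= 1e-12
```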
Then we introduce the following lemma which is about the resolvent of the maximal monotone operator.
Lemma 2.2
Let H be a real Hilbert space and let B be a maximal monotone operator on H. For \(r>0\) and \(x\in H\), define the resolvent \(J_{r}x\). Then the following holds:
for all \(s,t>0\) and \(x\in H\). In particular,
for all \(s,t>0\) and \(x\in H\).
Besides, the following two lemmas are extremely important in the proof of theorems.
Lemma 2.3
[18]
Assume that \(\{a_{n}\}_{n=0}^{\infty}\) is a sequence of nonnegative real numbers such that
where \(\{\gamma_{n}\}_{n=0}^{\infty}\) and \(\{\beta_{n}\}_{n=0}^{\infty}\) are sequences in \((0,1)\) and \(\{\delta_{n}\}_{n=0}^{\infty}\) is a sequence in \(\mathbb{R}\) such that
(i) \(\sum_{n=0}^{\infty}\gamma_{n} = \infty\);
(ii) either \(\limsup_{n\rightarrow\infty}\delta_{n} \leq0\) or \(\sum_{n=0}^{\infty}\gamma_{n}|\delta_{n}| < \infty\);
(iii) \(\sum_{n=0}^{\infty}\beta_{n} < \infty\).
Then \(\lim_{n\rightarrow\infty}a_{n} = 0\).
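A quick numeric illustration of Lemma 2.3, with our own parameter choices \(\gamma_{n}=1/(n+1)\), \(\delta_{n}=1/n\), \(\beta_{n}=1/n^{2}\), which satisfy (i)-(iii):

```python
# Numeric illustration of Lemma 2.3: iterate
#   a_{n+1} = (1 - gamma_n) a_n + gamma_n delta_n + beta_n
# with gamma_n = 1/(n+1) (sum diverges), delta_n = 1/n (limsup <= 0),
# beta_n = 1/n^2 (summable); then a_n -> 0.
a = 5.0
for n in range(1, 100001):
    gamma_n = 1.0 / (n + 1)
    delta_n = 1.0 / n
    beta_n = 1.0 / n**2
    a = (1 - gamma_n) * a + gamma_n * delta_n + beta_n
assert abs(a) < 1e-2
```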
The so-called demiclosed principle for nonexpansive mappings will often be used.
Lemma 2.4
(Demiclosed principle [19])
Let \(T : C\rightarrow C\) be a nonexpansive mapping with \(F(T)\neq\emptyset\). If \(\{x_{n}\}_{n=1}^{\infty}\) is a sequence in C weakly converging to x and if \(\{(I-T)x_{n}\}_{n=1}^{\infty}\) converges strongly to y, then \((I-T)x = y\). In particular, if \(y = 0\), then \(x\in F(T)\).
The lemma below shows the uniqueness of solution of the variational inequality (1.12).
Lemma 2.5
[20]
Let H be a Hilbert space, C a closed convex subset of H, and \(f:C\rightarrow C\) a contraction with coefficient \(\alpha<1\). Then
That is, \(I-f\) is strongly monotone with coefficient \(1-\alpha\).
3 Main results
We always assume that H is a real Hilbert space and C is a nonempty, closed, and convex subset of H. Let \(P_{C}: H\rightarrow C\) be the metric projection. Let \(f: C\rightarrow C\) be a contraction with the constant \(k\in(0,1)\). Let \(A: C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\), and let \(B: H\rightarrow H\) be a maximal monotone operator and the domain of B is included in C. Let \(J_{r}=(I+r B)^{-1}\) be the resolvent of B for \(r>0\). Suppose that ∇g is \(1/L\)-ism continuous. Consider the two mappings \(G_{n}\) and \(S_{n}\),
where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(0< b\leq\beta_{n}\leq2/L\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(\lambda_{n}\in(0,2/\beta-L)\), \(\beta\in(0,2/L)\), and \(\{\alpha_{n}\}\subset(0,1)\). It is easy to prove that \(\nabla g_{\lambda_{n}}\) is \(\frac{1}{L+\lambda_{n}}\)-ism and that \(T_{\lambda_{n}}\) is nonexpansive. It is also easy to see that if \(0< r\leq2\alpha\), then \(I-rA\) is a nonexpansive mapping of C into H. Indeed, we have, for all \(x,y\in C\),
Thus, \(I-rA\) is a nonexpansive mapping of C into H.
Then we can claim that both \(G_{n}\) and \(S_{n}\) are contractions. Indeed, by (1.1) and (3.1)-(3.3), we have, for each \(x,y\in C\),
Similarly,
Since \(0<1-\alpha_{n}(1-k)<1\), it follows that both \(G_{n}\) and \(S_{n}\) are contractions. Thus, by the Banach contraction principle, \(G_{n}\) has a unique fixed point \(x_{n}^{f}\in C\) such that
Similarly, \(S_{n}\) has a unique fixed point \(x_{n}^{*}\in C\) such that
For simplicity, we write \(x_{n}\) for \(x_{n}^{f}\) and \(x_{n}^{*}\) provided no confusion occurs. Furthermore, we prove the convergence of \(\{x_{n}\}\) and establish the existence of a point \(q\in(A+B)^{-1}0\cap U\) which solves the variational inequality
Equivalently, \(q=P_{(A+B)^{-1}0\cap U}f(q)\).
The following is our main result.
Theorem 3.1
Let H be a real Hilbert space and let C be a nonempty, closed, and convex subset of H. Let \(P_{C}:H\rightarrow C\) be the metric projection. Let \(f:C\rightarrow C\) be a contraction with the constant \(k\in(0,1)\). Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\). Let \(B: H\rightarrow H\) be a maximal monotone operator and the domain of B is included in C. Let \(J_{r}=(I+r B)^{-1}\) be the resolvent of B for \(r>0\). Suppose that ∇g is \(1/L\)-ism continuous with \(L>0\). Assume that \((A+B)^{-1}0\cap U\neq\emptyset\). Use GPA and let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by
When we regularize it using the RGPA, the sequence generated by (3.5) becomes:
where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(\beta\in(0,2/L)\), \(0< b\leq\beta_{n}\leq2/L\). Let \(\{\alpha_{n}\}\), \(\{r_{n}\}\), and \(\{\lambda_{n}\}\) satisfy the following conditions:
(i) \(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);
(ii) \(\{r_{n}\}\subset(0,\infty)\), \(0< l\leq r_{n}\leq2\alpha\);
(iii) \(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\).
Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in (A+B)^{-1}0\cap U\), which solves the variational inequality (3.4).
Proof
It is well known that \(\tilde{x}\in C\) solves the minimization problem (1.2) if and only if for each fixed \(0<\beta<2/L\), \(\tilde{x}\) solves the fixed point equation
It is clear that \(\tilde{x}=T\tilde{x}\), i.e., \(\tilde{x}\in U=\operatorname{Fix}(T)\). Since T is nonexpansive, U is closed and convex.
As in [21], we have, for any \(r>0\),
If \(0< r\leq2\alpha\), we see from (1.1) and (3.3) that \(J_{r}(I-rA)\) is nonexpansive. Thus \(\operatorname{Fix}(J_{r}(I-rA))\) is closed and convex.
In the first step, we show that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in(A+B)^{-1}0\cap U\) and put \(M_{n}=J_{r_{n}}(I-r_{n}A)\). Since \(u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n})\) and \(p=J_{r_{n}}(I-r_{n}A)(p)\), we know that, for any \(n\in\mathbb{N}\),
For \(x\in C\), we note that
and
Then we get
Thus, by (3.5) and (3.8), we derive that
Then we have
and hence \(\{x_{n}\}\) is bounded. From (3.5), we also derive that \(\{u_{n}\}\) is bounded.
Similarly, by (3.6) and (3.8), we obtain
It follows from (3.9) that
Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(R>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq R\), and
Hence \(\{x_{n}\}\) is bounded. From (3.8), we also see that \(\{u_{n}\}\) is bounded.
In the second step, we prove that \(\|x_{n}-u_{n}\|\rightarrow0\). Indeed, for any \(p\in(A+B)^{-1}0\cap U\), by (1.1), we derive that
This implies that
From (3.5), (3.10) and (2.1), we derive that
Since \(\alpha_{n}\rightarrow0\), it follows that \(\lim_{n\rightarrow\infty}\|x_{n}-u_{n}\|=0\).
Similarly, from (3.6), (3.9), (3.10), and (2.1), we derive that
Hence, we obtain
Since both \(\{x_{n}\}\) and \(\{u_{n}\}\) are bounded and \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), it follows that \(\|u_{n}-x_{n}\|\rightarrow0\).
In the third step, from (3.5), we show that \(\|x_{n}-T_{n}(x_{n})\|\rightarrow0\). Indeed,
Since \(\alpha_{n}\rightarrow0\) and \(\|x_{n}-u_{n}\|\rightarrow0\), we obtain \(\|x_{n}-T_{n}(x_{n})\|\rightarrow0\).
Thus,
and
we have \(\|u_{n}-T_{n}(u_{n})\|\rightarrow0\) and \(\|x_{n}-T_{n}(u_{n})\| \rightarrow0\).
Similarly, from (3.6), we show that \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\). Indeed,
Since \(\alpha_{n}\rightarrow0\) and \(\|u_{n}-x_{n}\|\rightarrow0\), we obtain \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\).
Therefore,
and
we have \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\|x_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).
In the fourth step, we show that \(q\in(A+B)^{-1}0\cap U\).
Consider a subsequence \(\{u_{n_{i}}\}\) of \(\{u_{n}\}\). Since \(\{u_{n}\}\) is bounded, without loss of generality, we can assume that \(u_{n_{i}}\rightharpoonup q\).
We first consider the gradient-projection algorithm generated by (3.5). Using the boundedness of \(\{u_{n_{i}}\}\), \(\beta_{n_{i}}\rightarrow\beta\), and \(\|u_{n_{i}}-T_{n_{i}}(u_{n_{i}})\|\rightarrow0\), we distinguish two cases to show \(q\in U\).
Case 1. \(\lim_{i\rightarrow\infty}\beta_{n_{i}}=\beta=\frac{2}{L}\).
Observe that
Then we conclude that
Since ∇g is \(\frac{1}{L}\)-ism, \(P_{C}(I-\frac{2}{L}\nabla g)\) is nonexpansive self-mapping on C. Indeed, we have for each \(x,y\in C\)
Case 2. \(0< b\leq \lim_{i\rightarrow\infty}\beta_{n_{i}}=\beta<\frac{2}{L}\).
Observe that
Then we conclude that
Since \(P_{C}(I-\frac{2}{L}\nabla g)\) and \(P_{C}(I-\beta\nabla g)\) are both nonexpansive, by the above two cases and Lemma 2.4 we derive that
This shows that \(q\in \operatorname{Fix}(T)=U\).
When we regularize, we consider the sequence generated by (3.6), which uses the RGPA. By (3.9), we have
Since \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\lambda_{n}\rightarrow0\), we have \(\|u_{n}-T(u_{n})\|\rightarrow0\). Thus, we get by Lemma 2.4 that \(q\in \operatorname{Fix}(T)=U\).
In the fifth step, we show that \(q\in(A+B)^{-1}0\).
Take \(r_{0}\in[l,2\alpha]\). Putting \(z_{n}=(I-r_{n}A)x_{n}\), we have from Lemma 2.2 that
we also have
Take any subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\). Since \(\{x_{n}\}\) is bounded, \(\{x_{n_{i}}\}\) is bounded and \(\{r_{n_{i}}\}\subset[l,2\alpha]\). Without loss of generality, there exist a subsequence \(\{x_{n_{i_{j}}}\}\) of \(\{x_{n_{i}}\}\) and a subsequence \(\{r_{n_{i_{j}}}\}\) of \(\{r_{n_{i}}\}\) such that \(x_{n_{i_{j}}}\rightharpoonup q\) and \(r_{n_{i_{j}}}\rightarrow r_{0}\) for some \(r_{0}\in[l,2\alpha]\). Since \(\{x_{n_{i_{j}}}\}\subset C\) and C is closed and convex, we have \(q\in C\). Using \(r_{n_{i_{j}}}\rightarrow r_{0}\) and (3.11), we have
Furthermore, we have from \(\|x_{n_{i_{j}}}-u_{n_{i_{j}}}\|\rightarrow0\) and (3.12)
Since \(J_{r_{0}}(I-r_{0}A)\) is nonexpansive, we have from Lemma 2.4 \(q=J_{r_{0}}(I-r_{0}A)q\). By (3.7), we obtain \(q\in(A+B)^{-1}0\).
Thus, we have \(q\in(A+B)^{-1}0\cap U\).
On the other hand, from the sequence \(\{x_{n}\}\) generated by (3.5), we note that
Hence, we obtain from (3.8) that
It follows that
In particular,
Since \(x_{n_{i}}\rightharpoonup q\), it follows from (3.13) that \(x_{n_{i}}\rightarrow q\) as \(i\rightarrow\infty\).
Similarly, using the RGPA, for the sequence \(\{x_{n}\}\) generated by (3.6) we note that
Hence, we obtain from (3.8) and (3.9)
It follows that
In particular,
Since \(x_{n_{i}}\rightharpoonup q\) and \(\lambda_{n}=o(\alpha_{n})\), it follows from (3.14) that \(x_{n_{i}}\rightarrow q\) as \(i\rightarrow\infty\).
Finally, we show that q solves the variational inequality (3.4).
From the sequence \(\{x_{n}\}\) generated by (3.5), we observe that
Hence, we conclude that
Since \(T_{n}J_{r_{n}}(I-r_{n}A)\) is nonexpansive, we find that \(I-T_{n}J_{r_{n}}(I-r_{n}A)\) is monotone. Note that, for any given \(z\in(A+B)^{-1}0\cap U\),
Now, replacing n with \(n_{i}\) in the above inequality, and letting \(i\rightarrow\infty\), since \(\{x_{n}\}\) is bounded, \(\|T_{n}(u_{n})-x_{n}\|\rightarrow0\), we have
By a similar argument, the sequence \(\{x_{n}\}\) generated by (3.6) yields analogous results, as follows:
Since \(T_{\lambda_{n}}J_{r_{n}}(I-r_{n}A)\) is nonexpansive, \(I-T_{\lambda_{n}}J_{r_{n}}(I-r_{n}A)\) is monotone. Note that, for any given \(z\in(A+B)^{-1}0\cap U\), by (3.9), we get
Then replacing n with \(n_{i}\) in the above inequality, and letting \(i\rightarrow\infty\), since \(\lambda_{n}=o(\alpha_{n})\), \(\|T_{\lambda_{n}}(u_{n})-x_{n}\|\rightarrow0\), we also have
Therefore, from the two sequences generated by the GPA (3.5) and the RGPA (3.6), we obtain the same result:
Because of the arbitrariness of \(z\in(A+B)^{-1}0\cap U\), we see that \(q\in(A+B)^{-1}0\cap U\) is a solution of the variational inequality (3.4). Further, by the uniqueness of the solution of the variational inequality (3.4), we conclude that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\).
The variational inequality (3.4) can be rewritten as
By Lemma 2.1, it is equivalent to the following fixed point equation:
This completes the proof. □
Theorem 3.2
Let H be a real Hilbert space and let C be a nonempty, closed, and convex subset of H. Let \(P_{C}:H\rightarrow C\) be the metric projection. Let \(f:C\rightarrow C\) be a contraction with the constant \(k\in(0,1)\). Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\) and let \(B: H\rightarrow H\) be a maximal monotone operator and the domain of B is included in C. Let \(J_{r}=(I+r B)^{-1}\) be the resolvent of B for \(r>0\). Suppose that ∇g is \(1/L\)-ism continuous with \(L>0\). Assume that \((A+B)^{-1}0\cap U\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and
When we regularize it using the RGPA, the sequence generated by (3.15) becomes:
where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(0< b\leq\beta_{n}\leq\frac{2}{L}\), \(\sum_{n=1}^{\infty}|\beta_{n}-\beta_{n+1}|<\infty\), \(\beta\in (0,2/L)\). Let \(\{\alpha_{n}\}\), \(\{r_{n}\}\), and \(\{\lambda_{n}\}\) satisfy the following conditions:
(C1) \(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);
(C2) \(\{r_{n}\}\subset(0,\infty)\), \(0< l\leq r_{n}\leq2\alpha\), \(\sum_{n=1}^{\infty}|r_{n+1}-r_{n}|<\infty\);
(C3) \(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\), \(\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda_{n}|<\infty\).
Then the sequences \(\{x_{n}\}\) generated by (3.15) and (3.16) both converge strongly to a point \(q\in(A+B)^{-1}0\cap U\), which solves the variational inequality (3.4).
Proof
It is clear that \(\hat{x}\in C\) solves the minimization problem (1.2) if and only if for each fixed \(0< b\leq\beta\leq2/L\), \(\hat{x}\) solves the fixed point equation
and \(\hat{x}=T\hat{x}\), i.e., \(\hat{x}\in U=\operatorname{Fix}(T)\).
Now, we first show that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in(A+B)^{-1}0\cap U\), and by (3.8) and (3.15) we derive that
By induction, we have
Hence, \(\{x_{n}\}\) is bounded. From (3.8), we also see that \(\{u_{n}\}\) is bounded.
Similarly, we derive from (3.9) and (3.16) that
Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(a>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq a\).
Thus,
By induction, we have
Hence, \(\{x_{n}\}\) is bounded. From (3.8), we also see that \(\{u_{n}\}\) is bounded.
Next, we show that \(\|x_{n+1}-x_{n}\|\rightarrow0\).
Indeed, since ∇g is \(1/L\)-ism, the mapping \(P_{C}(I-\beta_{n}\nabla g)\) is nonexpansive, and we derive from (3.15) that
Thus, we get
for some appropriate constant \(M_{1}>0\) such that
Similarly, since ∇g is \(1/L\)-ism, \(P_{C}(I-\beta\nabla g_{\lambda_{n}})=T_{\lambda_{n}}\) is nonexpansive, and we derive from (3.16) that
Thus, we get
for some appropriate constant \(M_{1}^{\prime}>0\) such that
Since \(u_{n+1}=J_{r_{n+1}}(I-r_{n+1}A)(x_{n+1})\) and \(u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n})\), we get from Lemma 2.2 and (3.3) that
Since \(0< l\leq r_{n}\leq2\alpha\), we have
where \(M_{2}=\sup\{\|A(x_{n})\|, \frac{1}{l}\|J_{r_{n+1}}(I-r_{n}A)(x_{n})-(I-r_{n}A)(x_{n})\|: n\in\mathbb{N}\}\).
From (3.17) and (3.19), we obtain
where \(M_{3}=\max\{M_{1},M_{2}\}\). Hence by Lemma 2.3, we have
Then, from (3.18), (3.20), and \(|r_{n+1}-r_{n}|\rightarrow0\), we have
For any \(p\in(A+B)^{-1}0\cap U\), by the same argument as in the proof of Theorem 3.1, we have
Then, for the GPA, generated by (3.15) and from (3.22), by the same argument as in the proof of Theorem 3.1, we derive that
and hence
Since \(\{x_{n}\}\) is bounded, \(\alpha_{n}\rightarrow0\) and \(\|x_{n}-x_{n+1}\|\rightarrow0\), we have
Next, we derive that
From (3.20), (3.23), and \(\alpha_{n}\rightarrow0\), we have
it follows that \(\|u_{n}-T_{n}(u_{n})\|\rightarrow0\).
Similarly, for the RGPA, generated by (3.16) and from (3.9) and (3.22), by the same argument as in the proof of Theorem 3.1, we derive that
and hence
Since \(\{x_{n}\}\), \(\{f(x_{n})\}\), and \(\{u_{n}\}\) are all bounded, \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), and \(\|x_{n+1}-x_{n}\|\rightarrow0\), we also derive the result (3.23).
Next, we derive that
From (3.20), (3.23), and \(\alpha_{n}\rightarrow0\), we also have
It follows that \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).
Now we show that
where \(q\in(A+B)^{-1}0\cap U\) is a unique solution of the variational inequality (3.4).
Indeed, take a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that
Since \(\{x_{n}\}\) is bounded, without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup\hat{x}\).
By the same argument as in the proof of Theorem 3.1, we have \(\hat{x}\in (A+B)^{-1}0\cap U\).
Since \(q=P_{(A+B)^{-1}0\cap U}f(q)\), it follows that
Finally, we show that \(x_{n}\rightarrow q\).
In fact, for the GPA, generated by (3.15),
So, from (2.1) and (3.22), we obtain
It follows that
where \(\delta_{n}=\frac{\alpha_{n}}{2(1-k)(1-\alpha_{n}k)}M+\frac {1}{(1-k)(1-\alpha_{n}k)}\langle -(I-f)q,x_{n+1}-q\rangle\), and \(M=\sup\{\|x_{n}-q\|^{2}: n\in\mathbb{N}\}\).
It is easy to see that \(\lim_{n\rightarrow\infty}2(1-k)\alpha_{n}=0\), \(\sum_{n=1}^{\infty}2(1-k)\alpha_{n}=\infty\), and \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\) by (3.24). Hence, by Lemma 2.3, the sequence \(\{x_{n}\}\) converges strongly to q.
Similarly, for the RGPA, generated by (3.16),
So, from (3.9) and (3.22), we derive
It follows that
Since \(\{x_{n}\}\) is bounded, we can take a constant \(M^{\prime}>0\) such that
Then we obtain
where \(\delta_{n}=\frac{2}{1+\alpha_{n}(1-k)}[\langle -(I-f)q,x_{n+1}-q\rangle+\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\| M^{\prime}]\).
By (3.24) and \(\lambda_{n}=o(\alpha_{n})\), we get \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\). Now applying Lemma 2.3 to (3.25) concludes that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\). The variational inequality (3.4) can be rewritten as
By Lemma 2.1, it is equivalent to the following fixed point equation:
This completes the proof. □
In the following, based on Theorem 3.2 and taking the RGPA as an example, using the sequences generated by (3.16), we give new strong convergence theorems in Hilbert spaces, which are useful in nonlinear analysis and optimization.
In 1994, Censor and Elfving [22] introduced the split feasibility problem (SFP). Then various algorithms were introduced by some authors to solve it (see [13, 17, 23], and [21, 24, 25]). Recently, many authors have paid attention to the split feasibility problem (SFP) due to its wide application in signal processing and image reconstructions (see [15, 16] and [26]).
Let C and Q be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Then the SFP under consideration in this paper can be mathematically formulated as finding a point x satisfying the following property:
where \(F:H_{1}\rightarrow H_{2}\) is a bounded linear operator. It is clear that \(x^{*}\) is a solution to the split feasibility problem (3.26) if and only if \(x^{*}\in C\) and \(Fx^{*}-P_{Q}Fx^{*}=0\). We define the proximity function g by
Consider the constrained convex minimization problem
Then \(x^{*}\) solves the SFP (3.26) if and only if \(x^{*}\) solves the minimization problem (3.27) with the minimum value equal to 0.
In particular, Byrne [24] introduced the so-called CQ algorithm. Take an initial guess \(x_{0}\in H_{1}\) arbitrarily, and define \(\{x_{n}\}\) recursively as follows:
where \(0<\beta<2/\|F\|^{2}\) and \(P_{C}\) denotes the projection onto C. Then the sequence \(\{x_{n}\}\) generated by (3.28) converges weakly to a solution of the SFP.
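The CQ iteration \(x_{n+1}=P_{C}(x_{n}-\beta F^{*}(I-P_{Q})Fx_{n})\) can be sketched on toy data. The sets C and Q (boxes) and the operator F (a diagonal matrix) below are our own illustrative choices:

```python
import numpy as np

# CQ algorithm sketch: x_{n+1} = P_C(x_n - beta F^T (I - P_Q) F x_n),
# with 0 < beta < 2 / ||F||^2.  Toy data (ours): C = Q = [0,1]^2, F diagonal.
def proj_box(x):
    return np.clip(x, 0.0, 1.0)

F = np.array([[2.0, 0.0],
              [0.0, 1.0]])
beta = 0.45                               # ||F||^2 = 4, so beta < 0.5

x = np.array([1.0, 1.0])
for _ in range(200):
    Fx = F @ x
    x = proj_box(x - beta * F.T @ (Fx - proj_box(Fx)))

# The limit solves the SFP: x in C and F x in Q.
assert np.all((x >= 0) & (x <= 1))
assert np.all((F @ x >= -1e-9) & (F @ x <= 1 + 1e-9))
```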
Let \(\alpha>0\) and let A be an α-inverse-strongly monotone mapping of C into H. Let B be a maximal monotone operator on Hilbert space H, such that the domain of B is included in C. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). In order to obtain a strong convergence iterative sequence to solve the SFP, we propose a new algorithm as follows: \(x_{1}\in C\),
where \(f:C\rightarrow C\) is a contraction with the constant \(k\in(0,1)\), and \(\{T_{\lambda_{n}}\}\) satisfy \(T_{\lambda_{n}}=P_{C}(I-\beta(F^{*}(I-P_{Q})F+\lambda_{n}I))\) for all n, and \(\beta\in(0,2/\|F\|^{2})\). We can show that the sequence \(\{x_{n}\}\) generated by (3.29) converges strongly to a solution of the SFP (3.26) if the sequence \(\{\alpha_{n}\}\subset(0,1)\). Applying Theorem 3.2, we obtain the following result.
Theorem 3.3
Assume that the split feasibility problem (3.26) is consistent. Let the sequence \(\{x_{n}\}\) be generated by (3.29), where the sequences \(\{\alpha_{n}\}\) and \(\{\lambda_{n}\}\) satisfy the conditions (C1) and (C3). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in(A+B)^{-1}0\cap V\), where V denotes the solution set of the SFP (3.26).
Proof
By the definition of the proximity function g, we have
$$\nabla g(x)=F^{*}(I-P_{Q})Fx.$$
Since \(P_{Q}\) is a \(1/2\)-averaged mapping, \(I-P_{Q}\) is 1-ism, so for all \(x,y\in C\) we obtain
$$\begin{aligned} \langle\nabla g(x)-\nabla g(y), x-y\rangle &=\langle F^{*}(I-P_{Q})Fx-F^{*}(I-P_{Q})Fy, x-y\rangle \\ &=\langle(I-P_{Q})Fx-(I-P_{Q})Fy, Fx-Fy\rangle \\ &\geq\|(I-P_{Q})Fx-(I-P_{Q})Fy\|^{2} \\ &\geq\frac{1}{\|F\|^{2}}\|\nabla g(x)-\nabla g(y)\|^{2}. \end{aligned}$$
So ∇g is \(1/\|F\|^{2}\)-ism.
Set \(g_{\lambda_{n}}(x)=g(x)+\frac{\lambda_{n}}{2}\|x\|^{2}\); consequently,
$$\nabla g_{\lambda_{n}}(x)=\nabla g(x)+\lambda_{n}x=F^{*}(I-P_{Q})Fx+\lambda_{n}x.$$
Then the iterative scheme (3.29) is equivalent to
where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\) for all n, and \(\beta\in(0,2/\|F\|^{2})\). □
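To illustrate the effect of the vanishing regularization \(\lambda_{n}\) in \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), here is a hedged numerical sketch of the regularized gradient-projection step alone (without the viscosity and resolvent parts of (3.29), whose exact form comes from Theorem 3.2). The unit-ball sets, the matrix F, the choice \(\lambda_{n}=1/n\), and all names are illustrative assumptions:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the closed ball of the given radius centered at 0
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def regularized_gpa(F, x0, beta, n_iter=2000):
    # x_{n+1} = T_{lam_n} x_n = P_C(x_n - beta*(grad g(x_n) + lam_n * x_n)),
    # with grad g(x) = F^T (I - P_Q) F x and lam_n -> 0 as n grows.
    x = x0.copy()
    for n in range(1, n_iter + 1):
        lam = 1.0 / n                      # vanishing regularization parameter
        Fx = F @ x
        grad = F.T @ (Fx - project_ball(Fx)) + lam * x
        x = project_ball(x - beta * grad)
    return x

F = np.array([[2.0, 0.0], [0.0, 1.0]])
beta = 0.2                                 # inside (0, 2/||F||^2) = (0, 0.5)
x = regularized_gpa(F, np.array([3.0, -2.0]), beta)
```

The regularization term \(\lambda_{n}x\) biases the iterates toward the minimum-norm element of the solution set, which is the selection property the regularized scheme is designed to have.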
We now give two further applications of Theorem 3.2.
Let h be a proper lower semicontinuous convex function from the Hilbert space H into \((-\infty,\infty]\). Then the subdifferential ∂h of h is defined as follows:
$$\partial h(x)=\bigl\{z\in H : h(x)+\langle z, y-x\rangle\leq h(y),\ \forall y\in H\bigr\}$$
for all \(x\in H\). From Rockafellar [23], we know that ∂h is a maximal monotone operator. Let \(i_{C}\) be the indicator function of C (C is a nonempty, closed, and convex subset of H), i.e.,
$$i_{C}(x)=\begin{cases} 0, & x\in C,\\ \infty, & x\notin C. \end{cases}$$
Then \(i_{C}\) is a proper lower semicontinuous convex function on H and the subdifferential \(\partial i_{C}\) of \(i_{C}\) is a maximal monotone operator. So we can define the resolvent \(J_{r}\) of \(\partial i_{C}\) for \(r>0\), i.e.,
$$J_{r}x=(I+r\,\partial i_{C})^{-1}x$$
for all \(x\in H\). We have, for any \(x\in H\) and \(q\in C\),
$$q=J_{r}x \quad\Longleftrightarrow\quad x-q\in r\,N_{C}(q) \quad\Longleftrightarrow\quad q=P_{C}x,$$
where \(N_{C}(q)\) is the normal cone to C at q, i.e.,
$$N_{C}(q)=\bigl\{z\in H : \langle z, p-q\rangle\leq0,\ \forall p\in C\bigr\}.$$
Based on Theorem 3.2, we prove a strong convergence theorem for inverse-strongly monotone operators in a Hilbert space.
Theorem 3.4
Let C be a nonempty, closed, and convex subset of the Hilbert space H. Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\). Let \(f:C\rightarrow C\) be a k-contraction mapping with \(0< k<1\). Suppose that ∇g is \(1/L\)-ism with \(L>0\). Let \(x_{1}=x\in C\) and let \(\{x_{n}\}\subset C\) be a sequence generated by
for all \(n\in\mathbb{N}\), where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), and \(\beta\in(0,2/L)\). Let \(\{\alpha_{n}\}\), \(\{r_{n}\}\), and \(\{\lambda_{n}\}\) satisfy conditions (C1)-(C3) of Theorem 3.2. Suppose \(VI(C,A)\cap U\neq\emptyset\). Then \(\{x_{n}\}\) converges strongly to a point \(q_{0}\in VI(C,A)\cap U\), which is the unique fixed point of \(P_{VI(C,A)\cap U}f\) and also the unique solution of the hierarchical variational inequality
Proof
Put \(B=\partial i_{C}\) in Theorem 3.2. Then for \(r_{n}>0\) we have \(J_{r_{n}}=P_{C}\). Furthermore, \((A+\partial i_{C})^{-1}0=VI(C,A)\). Indeed, for \(q\in C\), we have
$$q\in(A+\partial i_{C})^{-1}0 \quad\Longleftrightarrow\quad 0\in Aq+N_{C}(q) \quad\Longleftrightarrow\quad \langle Aq, p-q\rangle\geq0,\ \forall p\in C \quad\Longleftrightarrow\quad q\in VI(C,A).$$
Thus we obtain the desired result by Theorem 3.2. □
Recall that a mapping \(W:C\rightarrow H\) is called a widely strict pseudo-contraction if there exists \(r\in\mathbb{R}\) with \(r<1\) such that
$$\|Wx-Wy\|^{2}\leq\|x-y\|^{2}+r\|(I-W)x-(I-W)y\|^{2},\quad \forall x,y\in C.$$
We call such a W a widely r-strict pseudo-contraction. If \(0\leq r<1\), then W is a strict pseudo-contraction. Based on Theorem 3.2, we obtain the following result, which generalizes Zhou’s strong convergence theorem [25] for strict pseudo-contractions in a Hilbert space.
Theorem 3.5
Let C be a nonempty, closed, and convex subset of the Hilbert space H. Let \(W:C\rightarrow H\) be a widely r-strict pseudo-contraction with \(r<1\) (\(r\in\mathbb{R}\)) and suppose that \(\operatorname{Fix}(W)\neq\emptyset\). Let \(f:C\rightarrow C\) be a k-contraction with \(0< k<1\). Suppose that ∇g is \(1/L\)-ism with \(L>0\). Let \(x_{1}=x\in C\) and let \(\{x_{n}\}\subset C\) be a sequence generated by
for all \(n\in\mathbb{N}\), where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), and \(\beta\in(0,2/L)\). Let \(\{\alpha_{n}\}\) and \(\{\lambda_{n}\}\) satisfy conditions (C1) and (C3), respectively, of Theorem 3.2, and let \(\{t_{n}\}\) satisfy:
(1) \(\{t_{n}\}\subset(-\infty, 1)\);
(2) \(r\leq t_{n}\leq b<1\);
(3) \(\sum_{n=1}^{\infty}|t_{n}-t_{n+1}|<\infty\).
Then \(\{x_{n}\}\) converges strongly to a point \(q_{0}\in \operatorname{Fix}(W)\cap U\) which is a unique fixed point of \(P_{\operatorname{Fix}(W)\cap U}f\) in \(\operatorname{Fix}(W)\cap U\).
Proof
Put \(B=\partial i_{C}\) and \(A=I-W\) in Theorem 3.2. Furthermore, we put \(a=1-b\), \(r_{n}=1-t_{n}\), and \(2\alpha=1-r\) in Theorem 3.2. From \(\{t_{n}\}\subset(-\infty, 1)\) and \(r\leq t_{n}\leq b<1\), we get \(\{r_{n}\}\subset(0,\infty)\) and \(0< a\leq r_{n}\leq2\alpha\). We also get
$$\sum_{n=1}^{\infty}|r_{n}-r_{n+1}|=\sum_{n=1}^{\infty}|t_{n}-t_{n+1}|<\infty$$
and
Furthermore, we have \((A+\partial i_{C})^{-1}0=\operatorname{Fix}(W)\). Indeed, for \(q\in C\), we have
$$q\in(A+\partial i_{C})^{-1}0 \quad\Longleftrightarrow\quad 0\in q-Wq+N_{C}(q) \quad\Longleftrightarrow\quad q=P_{C}Wq \quad\Longleftrightarrow\quad q\in\operatorname{Fix}(P_{C}W).$$
Since \(\operatorname{Fix}(W)\neq\emptyset\), we get from [25] that \(\operatorname{Fix}(P_{C}W)=\operatorname{Fix}(W)\). Thus we obtain the desired result by Theorem 3.2. □
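The inverse-strong monotonicity of \(A=I-W\) used in the proof above can be checked directly. The following is a reconstruction of the standard computation, consistent with the choice \(2\alpha=1-r\) made in the proof:

```latex
\begin{aligned}
\|Wx-Wy\|^{2}
  &= \|(x-y)-(Ax-Ay)\|^{2} \\
  &= \|x-y\|^{2}-2\langle x-y,\, Ax-Ay\rangle+\|Ax-Ay\|^{2}
   \leq \|x-y\|^{2}+r\|Ax-Ay\|^{2},
\end{aligned}
```

where the inequality is the defining property of a widely r-strict pseudo-contraction. Rearranging gives \(\langle x-y, Ax-Ay\rangle\geq\frac{1-r}{2}\|Ax-Ay\|^{2}\), i.e., \(A=I-W\) is \(\frac{1-r}{2}\)-inverse-strongly monotone.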
Having presented these applications of Theorem 3.2, we give our conclusion in the next section.
4 Conclusion
In a real Hilbert space, methods for solving the constrained convex minimization problem have been studied extensively. Recently, Tian and Liu were the first to propose composite iterative algorithms that find a common solution of an equilibrium problem and a constrained convex minimization problem. In this paper, for solving constrained convex minimization problems and finding zeros of the sum of two operators in Hilbert spaces, we use two algorithms: the gradient-projection algorithm (GPA) and the regularized gradient-projection algorithm (RGPA). With them we establish new strong convergence theorems, finding a common solution by the GPA and a unique solution by the RGPA. Under suitable conditions, the constrained convex minimization problem can be transformed into the split feasibility problem, and zeros of the sum of two operators can be transformed into the variational inequality problem and the fixed point problem, all of which play important roles in nonlinear analysis and optimization.
References
Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 20, 197-228 (1967)
Ceng, LC, Ansari, QH, Khan, AR, Yao, JC: Strong convergence of composite iterative schemes for zeros of m-accretive operators in Banach spaces. Nonlinear Anal., Theory Methods Appl. 70, 1830-1840 (2009)
Ceng, LC, Ansari, QH, Khan, AR, Yao, JC: Viscosity approximation methods for strongly positive and monotone operators. Fixed Point Theory 10, 35-71 (2009)
Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74(16), 5286-5302 (2011)
Ceng, LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 1(3), 341-359 (2011)
Ceng, LC, Ansari, QH, Wen, CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J. Inequal. Appl. 2013, Article ID 240 (2013)
Sahu, DR, Ansari, QH, Yao, YC: A unified hybrid iterative method for hierarchical minimization problems. J. Comput. Appl. Math. 253, 208-221 (2013)
Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Moudafi, A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
Marino, G, Xu, HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43-52 (2006)
Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)
Tian, M, Liu, L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 63(9), 1367-1385 (2014)
Lin, LJ, Takahashi, W: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429-453 (2012)
Kong, ZR, Ceng, LC, Ansari, QH, Pang, CT: Multistep hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013, Article ID 718624 (2013)
Eshita, K, Takahashi, W: Approximating zero points of accretive operators in general Banach spaces. JP J. Fixed Point Theory Appl. 2, 105-116 (2007)
Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
Hundal, H: An alternating projection that does not converge in norm. Nonlinear Anal. 57, 35-61 (2004)
Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
Aoyama, K, Kimura, Y, Takahashi, W, Toyoda, M: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 8, 471-489 (2007)
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
Rockafellar, RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209-216 (1970)
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
Zhou, H: Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 69, 456-462 (2008)
Yang, Q, Zhao, J: Generalized KM theorems and their applications. Inverse Probl. 22(3), 833-844 (2006)
Acknowledgements
Ming Tian was supported by the Foundation of Tianjin key Laboratory for Advanced Signal Processing and the Fundamental Research Funds for the Central Universities (No. 3122015L007). Yeong-Cheng Liou was supported in part by a grant from MOST NSC 101-2628-E-230-001-MY3 and NSC 103-2923-E-037-001-MY3. This research is supported partially by Kaohsiung Medical University ‘Aim for the Top Universities Grant, grant No. KMU-TP103F00’.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All the authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Tian, M., Jiao, SW. & Liou, YC. Methods for solving constrained convex minimization problems and finding zeros of the sum of two operators in Hilbert spaces. J Inequal Appl 2015, 227 (2015). https://doi.org/10.1186/s13660-015-0743-z