Abstract
In this paper we prove the strong convergence of an iterative sequence for finding a common element of the set of fixed points of a strictly pseudocontractive mapping and the set of solutions of a constrained convex minimization problem for a convex and continuously Fréchet differentiable functional in a real Hilbert space. We apply our result to solving the split feasibility problem and the convexly constrained linear inverse problem involving the fixed point problem for a strictly pseudocontractive mapping in a real Hilbert space.
1 Introduction
Let H be a real Hilbert space and C a nonempty, closed, and convex subset of H. A mapping \(T:C\rightarrow C\) is said to be nonexpansive if

$$\|Tx-Ty\|\leq\|x-y\|, \quad\forall x, y \in C, $$ (1.1)

and \(T:C\rightarrow C\) is said to be k-strictly pseudocontractive (see [1]) if for \(0\leq k <1\),

$$\|Tx-Ty\|^{2}\leq\|x-y\|^{2}+k \bigl\|(I-T)x-(I-T)y\bigr\|^{2}, \quad\forall x, y \in C. $$ (1.2)

It is well known that every nonexpansive mapping is strictly pseudocontractive. In a real Hilbert space H, we can show that (1.2) is equivalent to

$$\langle Tx-Ty, x-y\rangle\leq\|x-y\|^{2}-\frac{1-k}{2} \bigl\|(I-T)x-(I-T)y\bigr\|^{2}, \quad\forall x, y \in C. $$ (1.3)
A point \(x \in C\) is called a fixed point of T if \(Tx=x\). The set of fixed points of T is denoted by \(F(T)\). The iterative approximation of fixed points for k-strictly pseudocontractive mappings has been studied extensively by many authors (see, for example, [2–13] and the references contained therein).
For any point \(u \in H\), there exists a unique point \(P_{C} u \in C\) such that

$$\|u-P_{C}u\|\leq\|u-y\|, \quad\forall y \in C. $$

\(P_{C}\) is called the metric projection of H onto C. We know that \(P_{C}\) is a nonexpansive mapping of H onto C. It is also well known that \(P_{C}\) satisfies

$$\langle x-y, P_{C}x-P_{C}y\rangle\geq\|P_{C}x-P_{C}y \|^{2} $$ (1.4)

for all \(x,y \in H\). Furthermore, \(P_{C} x\) is characterized by the properties \(P_{C} x \in C\) and

$$\langle x-P_{C}x, y-P_{C}x\rangle\leq0 $$ (1.5)
for all \(y \in C\).
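As a concrete numerical illustration (ours, not part of the original paper), the sketch below computes the metric projection onto the closed unit ball of \(\mathbb{R}^{2}\) and spot-checks the characterization \(\langle x-P_{C}x, y-P_{C}x\rangle\leq0\); the ball, the outside point x, and the test point y are arbitrary choices.

```python
import math

def proj_ball(x, radius=1.0):
    """Metric projection of x onto the closed ball C = {y : ||y|| <= radius}."""
    norm = math.sqrt(sum(c * c for c in x))
    if norm <= radius:
        return list(x)
    return [radius * c / norm for c in x]

def inner(u, v):
    """Euclidean inner product."""
    return sum(a * b for a, b in zip(u, v))

x = [3.0, 4.0]          # a point outside the unit ball
px = proj_ball(x)       # its projection, (0.6, 0.8)
y = [0.5, -0.2]         # an arbitrary point of C
# characterization (1.5): <x - P_C x, y - P_C x> <= 0 for every y in C
gap = inner([a - b for a, b in zip(x, px)],
            [a - b for a, b in zip(y, px)])
```

Here `gap` comes out negative, as the characterization predicts.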
Definition 1.1
A mapping \(T:H\rightarrow H\) is said to be firmly nonexpansive if and only if \(2T-I\) is nonexpansive, or equivalently,

$$\langle x-y, Tx-Ty\rangle\geq\|Tx-Ty\|^{2}, \quad\forall x, y \in H. $$

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

$$T=\frac{1}{2}(I+S), $$

where \(S:H\rightarrow H\) is nonexpansive. For example, the projections are firmly nonexpansive.
Definition 1.2
A mapping \(T:H\rightarrow H\) is said to be an averaged mapping if and only if it can be written as the average of the identity mapping I and a nonexpansive mapping; that is,

$$T=(1-\alpha)I+\alpha S, $$ (1.6)
where \(\alpha\in(0,1)\) and \(S:H\rightarrow H\) is nonexpansive. More precisely, when (1.6) holds, we say that T is α-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are \(\frac{1}{2}\)-averaged mappings.
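The following small numerical check (our illustration, with a plane rotation chosen as the nonexpansive mapping S) confirms that an α-averaged mapping \(T=(1-\alpha)I+\alpha S\) is again nonexpansive:

```python
import math

def rotate(x, theta=2.0):
    """A plane rotation: an isometry of R^2, hence nonexpansive."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * x[0] - s * x[1], s * x[0] + c * x[1]]

def averaged(x, alpha=0.5):
    """T = (1 - alpha) I + alpha S with S the rotation above (alpha-averaged)."""
    Sx = rotate(x)
    return [(1 - alpha) * xi + alpha * si for xi, si in zip(x, Sx)]

def dist(u, v):
    """Euclidean distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

x, y = [1.0, 2.0], [-0.5, 0.7]
# nonexpansiveness of T: ||Tx - Ty|| <= ||x - y||
nonexpansive = dist(averaged(x), averaged(y)) <= dist(x, y) + 1e-12
```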
Definition 1.3
A nonlinear operator T whose domain \(D(T )\subset H\) and range \(R(T )\subset H\) is said to be:
-
(a)
monotone if
$$\langle x-y, Tx-Ty\rangle\geq0,\quad \forall x, y \in D(T), $$ -
(b)
β-strongly monotone if there exists \(\beta> 0\) such that
$$\langle x-y, Tx-Ty\rangle\geq\beta\|x-y\|^{2}, \quad\forall x, y \in D(T), $$ -
(c)
ν-inverse strongly monotone (for short, ν-ism) if there exists \(\nu> 0\) such that
$$\langle x-y, Tx-Ty\rangle\geq\nu\|Tx-Ty\|^{2},\quad \forall x, y \in D(T). $$
It can easily be seen that (i) if T is nonexpansive, then \(I-T\) is monotone; (ii) the projection mapping \(P_{C}\) is a 1-ism. The inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems (see, for example, [14, 15] and the references therein).
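A quick numerical sanity check (ours) of the fact that \(P_{C}\) is a 1-ism, using the projection onto the interval \([-1,1]\subset\mathbb{R}\) and a few arbitrary pairs of points:

```python
def proj_interval(t, lo=-1.0, hi=1.0):
    """Metric projection of a real number onto the interval C = [lo, hi]."""
    return max(lo, min(hi, t))

# 1-ism property in R: (x - y)(P x - P y) >= |P x - P y|^2
pairs = [(3.0, -2.5), (0.4, 0.9), (5.0, 0.2), (-4.0, -0.3)]
checks = [(x - y) * (proj_interval(x) - proj_interval(y))
          >= (proj_interval(x) - proj_interval(y)) ** 2 - 1e-12
          for x, y in pairs]
```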
Consider the following constrained convex minimization problem:

$$\min_{x\in C} f(x), $$ (1.7)

where \(f:C\rightarrow\mathbb{R}\) is a real-valued convex function. We say that the minimization problem (1.7) is consistent if it has a solution. In the sequel, we shall denote the set of solutions of problem (1.7) by Γ. If f is (Fréchet) differentiable, then the gradient-projection method (for short, GPM) generates a sequence \(\{x_{n}\}\) using the following recursive formula:

$$x_{n+1}=P_{C}\bigl(x_{n}-\lambda\nabla f(x_{n})\bigr), \quad n\geq1, $$ (1.8)

or more generally,

$$x_{n+1}=P_{C}\bigl(x_{n}-\lambda_{n}\nabla f(x_{n})\bigr), \quad n\geq1, $$ (1.9)
where in both (1.8) and (1.9) the initial guess \(x_{1}\) is taken from C arbitrarily, and the parameters, λ or \(\lambda _{n}\), are positive real numbers. The convergence of the algorithms (1.8) and (1.9) depends on the behavior of the gradient ∇f. As a matter of fact, it is known that if ∇f is α-strongly monotone and L-Lipschitzian with constants \(\alpha, L > 0\), then, for \(0 < \lambda< \frac{2\alpha}{L^{2}}\), the operator

$$T:=P_{C}(I-\lambda\nabla f) $$ (1.10)

is a contraction; hence, the sequence \(\{x_{n}\}\) defined by the algorithm (1.8) converges in norm to the unique solution of the minimization problem (1.7). More generally, if the sequence \(\{\lambda_{n}\}\) is chosen to satisfy the property

$$0<\liminf_{n\rightarrow\infty}\lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n}<\frac{2\alpha}{L^{2}}, $$
then the sequence \(\{x_{n}\}\) defined by the algorithm (1.9) converges in norm to the unique minimizer of (1.7). However, if the gradient ∇f fails to be strongly monotone, the operator T defined by (1.10) would fail to be contractive; consequently, the sequence \(\{x_{n}\}\) generated by the algorithm (1.9) may fail to converge strongly (see [16, Section 4]). If ∇f is Lipschitzian, then the algorithms (1.8) and (1.9) can still converge in the weak topology under certain conditions.
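A minimal sketch of the gradient-projection iteration (1.8) on a toy problem (our example; the objective, the box constraint, and the stepsize are illustrative choices):

```python
def grad_projection(grad, proj, x0, lam, iters=200):
    """Gradient-projection iteration x_{n+1} = P_C(x_n - lam * grad f(x_n))."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = proj([xi - lam * gi for xi, gi in zip(x, g)])
    return x

# minimize f(x) = 1/2 ||x - c||^2 over the box C = [0,1]^2 with c outside C;
# grad f(x) = x - c is 1-Lipschitz, so any stepsize lam in (0, 2) is admissible
c = [2.0, -0.5]
grad = lambda x: [xi - ci for xi, ci in zip(x, c)]
proj = lambda x: [min(1.0, max(0.0, xi)) for xi in x]
xstar = grad_projection(grad, proj, [0.5, 0.5], lam=1.0)  # -> [1.0, 0.0]
```

The limit is exactly \(P_{C}c\), the projection of the unconstrained minimizer onto the box.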
The gradient-projection method for finding the approximate solutions of the constrained convex minimization problem is well known; see, for example, [17] and the references therein. The convergence of the sequence generated by this method depends on the behavior of the gradient of the objective function. If the gradient fails to be strongly monotone, then the strong convergence of the sequence generated by gradient-projection method may fail. Recently, Xu [16] gave an alternative operator-oriented approach to algorithm (1.9); namely, an averaged mapping approach. He gave his averaged mapping approach to the gradient-projection algorithm (1.9) and the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample which shows that algorithm (1.8) does not converge in norm in an infinite-dimensional space, and he also presented two modifications of gradient-projection algorithms which are shown to have strong convergence. Further, he regularized the minimization problem (1.7) to devise an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of (1.7) in the consistent case.
Very recently, motivated by the work of Xu [16], Ceng et al. [18] proposed the following implicit iterative scheme:
and the following explicit iterative scheme:
for finding the approximate minimizer of a constrained convex minimization problem, and proved that the sequences generated by their schemes converge strongly to a solution of the constrained convex minimization problem (see [18] for more details). Such a solution is also a solution of a variational inequality defined over the set of fixed points of a nonexpansive mapping.
Motivated by the aforementioned results, we introduce an iterative algorithm for finding a fixed point of a strictly pseudocontractive mapping which is also a solution to a constrained convex minimization problem for a convex and continuously Fréchet differentiable functional in a real Hilbert space and prove strong convergence of the sequences generated by our scheme in a real Hilbert space. We apply our result to the split feasibility problem and the convexly constrained linear inverse problem involving the fixed point problem for a strictly pseudocontractive mapping in a real Hilbert space.
We shall adopt the following notations in this paper:
-
\(x_{n}\rightarrow x\) means that \(x_{n}\rightarrow x\) strongly;
-
\(x_{n}\rightharpoonup x\) means that \(x_{n}\rightarrow x\) weakly;
-
\(w_{w}(x_{n}):= \{x: \exists x_{n_{j}}\rightharpoonup x\}\) is the weak w-limit set of the sequence \(\{x_{n}\}_{n=1}^{\infty}\).
2 Main results
We first state some known results which will be used in the sequel.
Lemma 2.1
Let H be a real Hilbert space. Then the following inequality holds:

$$\|x+y\|^{2}\leq\|x\|^{2}+2\langle y, x+y\rangle, \quad\forall x, y \in H. $$
Lemma 2.2
([13])
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C\) be a k-strictly pseudocontractive mapping. Then \(I-T\) is demiclosed at 0, i.e., if \(x_{n}\rightharpoonup x\in C\) and \(x_{n}-Tx_{n}\rightarrow0\), then \(x=Tx\).
Lemma 2.3
([19])
Assume \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$a_{n+1}\leq(1-\gamma_{n})a_{n}+\gamma_{n}\delta_{n}, \quad n\geq0, $$
where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\) and \(\{\delta_{n}\}\) is a sequence in ℝ such that
-
(i)
\(\sum_{n=0}^{\infty}\gamma_{n}=\infty\);
-
(ii)
\(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\) or \(\sum _{n=0}^{\infty}| \delta_{n}\gamma_{n}|<\infty\).
Then \(\lim_{n\rightarrow\infty}a_{n}=0\).
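A small numerical illustration of Lemma 2.3 (ours): take \(a_{1}=1\) and \(\gamma_{n}=\delta_{n}=1/n\), so that \(\sum\gamma_{n}=\infty\) and \(\limsup_{n}\delta_{n}\leq0\); running the recursion with equality drives \(a_{n}\) to 0.

```python
# take a_1 = 1 and gamma_n = delta_n = 1/n: sum gamma_n diverges and
# limsup delta_n = 0, so Lemma 2.3 forces a_n -> 0
a = 1.0
for n in range(1, 50001):
    gamma, delta = 1.0 / n, 1.0 / n
    a = (1 - gamma) * a + gamma * delta
# after 50000 steps a is of order (log n)/n, i.e. well below 1e-3
```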
Following the method of proof of Li and Yao [20] and Maingé [21], we now prove the following theorem.
Theorem 2.4
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the minimization problem (1.7) is consistent and let Γ denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant \(L>0\). Let T be a k-strictly pseudocontractive mapping of C into itself such that \(F(T)\cap\Gamma\neq\emptyset\). Let \(\{t_{n}\}\) be a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset (0,1)\), and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{L} )\) satisfying the following conditions:
-
(i)
\(\lim_{n\rightarrow\infty}t_{n}=0\);
-
(ii)
\(\sum_{n=1}^{\infty}t_{n}=\infty\);
-
(iii)
\(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);
-
(iv)
\(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{L}\).
Then the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(u_{1},x_{1}\in C\),
converge strongly to \(x^{*}\in F(T)\cap\Gamma\), where \(x^{*}:=P_{F(T)\cap\Gamma}u\).
Proof
Following the method of proof of [16], we first recall that \(x^{*} \in C\) solves the minimization problem (1.7) if and only if \(x^{*}\) solves the fixed point equation

$$x^{*}=P_{C}\bigl(x^{*}-\lambda\nabla f\bigl(x^{*}\bigr)\bigr), $$
where \(\lambda>0\) is any fixed positive number. For the sake of simplicity, we may assume (due to condition (iv)) that

$$\lambda_{n}\in[a,b]\subset \biggl(0,\frac{2}{L} \biggr), $$
where a and b are constants. Furthermore, it is also well known from the proof of [16] that the gradient ∇f is \(\frac{1}{L}\)-ism, \((I-\lambda_{n}\nabla f)\) is nonexpansive (see also [22]) and that \(P_{C}(I-\lambda\nabla f )\) is \(\frac{2 + \lambda L}{4}\)-averaged for \(0 < \lambda< \frac{2}{L}\). Hence we find that, for each n, \(P_{C}(I-\lambda_{n} \nabla f )\) is \(\frac{2 + \lambda_{n} L}{4}\)-averaged. Therefore, we can write
where \(S_{n}\) is nonexpansive and \(\gamma_{n} = \frac{2 + \lambda _{n} L}{4} \in[a_{1}, b_{1}]\subset(0, 1)\), where \(a_{1} = \frac{2 + a L}{4}\) and \(b_{1} = \frac{2 + b L}{4} < 1\). Then we can rewrite (2.1) as
For any \(x^{*}\in F(T)\cap\Gamma\), noticing that \(S_{n}x^{*}=x^{*}\), we have
and
But from (1.2) and (1.3), we obtain
which implies
Therefore, it follows from (2.5), (2.7), and (2.4) that
By induction, we have
Hence, \(\{u_{n}\}\) is bounded and so is \(\{x_{n}\}\). Now, using (1.2), we have
Therefore, by (2.4) and Lemma 2.1, we obtain
Since \(\{x_{n}\}\) and \(\{u_{n}\}\) are bounded, \(\exists M>0\) such that \(-2\langle x_{n}-u,u_{n+1}-x^{*}\rangle\leq M\) for all \(n\geq1\). Therefore,
Now we divide the rest of the proof into two cases.
Case 1.
Assume that \(\{\| x_{n}-x^{*}\|\}\) is a monotonically decreasing sequence. Then \(\{\|x_{n}-x^{*}\|\}\) is convergent and obviously
This together with (2.11) and the condition that \(t_{n}\rightarrow 0\) implies that
From (2.1) and (1.4), we obtain (noting that \((I-\lambda_{n}\nabla f)\) is nonexpansive)
Therefore,
Also, from (2.1) and (2.8), we obtain
This implies that for some \(M^{*}>0\), we have
Using condition (i), condition (iv), and (2.12) in (2.15), we obtain
Using (2.8) in (2.14), we have
By using (2.16) in (2.17), we see that
Suppose that \(p \in w_{w}(u_{n})\) and \(\{u_{n_{j}}\}\) is a subsequence of \(\{u_{n}\}\) such that \(u_{n_{j}}\rightharpoonup p\). Observe that since \(\lim_{n\rightarrow\infty}\|x_{n}-u_{n}\|=0\), we also have \(x_{n_{j}}\rightharpoonup p\). Using Lemma 2.2 and (2.13), we have \(p \in F(T)\).
We next prove that \(p \in\Gamma\). We may assume that \(\lambda _{n_{j}}\rightarrow\lambda\); then we have \(0<\lambda<\frac{2}{L}\). Set \(S:=P_{C}(I-\lambda\nabla f)\); then S is nonexpansive. Then we get
It then follows from Lemma 2.2 that \(p \in F(S)\). But \(F(S)=\Gamma \), therefore, we have \(p \in\Gamma\). Hence, \(p \in F(T)\cap\Gamma\).
Setting \(y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}Tx_{n}\), \(n\geq1\), then from (2.1) we have
It then follows that
Also,
By (2.4) and applying Lemma 2.1 to (2.18), we have
We observe that \(\limsup_{n\rightarrow\infty} \{-2\langle x^{*}-u,u_{n+1}-x^{*}\rangle \} \leq-2 \langle x^{*}-u,p-x^{*}\rangle\leq0\) (since \(x^{*}=P_{F(T)\cap\Gamma}u\)) and \(2\alpha_{n}\langle x_{n}-Tx_{n},u_{n+1}-x^{*}\rangle\rightarrow0\). Therefore by Lemma 2.3, \(\| x_{n}-x^{*}\|\rightarrow0\) and consequently \(\| u_{n}-x^{*}\| \rightarrow0\). That is, \(x_{n}\rightarrow x^{*}\), \(n\rightarrow\infty\).
Case 2.
Assume that \(\{\| x_{n}-x^{*}\|\}\) is not a monotonically decreasing sequence. Set \(\Gamma_{n}=\| x_{n}-x^{*}\|^{2}\) and let \(\tau:\mathbb{N}\rightarrow\mathbb{N}\) be the mapping defined for all \(n\geq n_{0}\) (for some \(n_{0}\) large enough) by

$$\tau(n)=\max \{k\in\mathbb{N} : k\leq n, \Gamma_{k}\leq\Gamma_{k+1} \}. $$
Clearly, τ is a non-decreasing sequence such that \(\tau(n)\rightarrow\infty\) as \(n \rightarrow\infty\) and

$$\Gamma_{\tau(n)}\leq\Gamma_{\tau(n)+1} \quad\mbox{and}\quad \Gamma_{n}\leq\Gamma_{\tau(n)+1}, \quad\forall n\geq n_{0}. $$
After a similar conclusion from (2.10), it is easy to see that
Thus,
By a similar argument as above in Case 1, we conclude immediately that
Since \(\{u_{\tau(n)}\}\) is bounded, there exists a subsequence of \(\{u_{\tau(n)}\}\), still denoted by \(\{u_{\tau(n)}\}\) which converges weakly to \(p\in C\). Observe that since \(\lim_{n\rightarrow\infty}\|x_{\tau(n)}-u_{\tau(n)}\|=0\), we also have \(x_{\tau(n)}\rightharpoonup p\). Using Lemma 2.2 and the fact that \(\| x_{\tau(n)} - Tx_{\tau(n)} \| \rightarrow0\), \(n\rightarrow\infty\), we have \(p \in F(T)\). Similarly, we can show that \(p \in\Gamma\). Therefore, \(p \in F(T)\cap\Gamma\). At the same time, we note from (2.20) that, for all \(n\geq n_{0}\) (noting that \(0\leq\Gamma _{\tau(n)+1}-\Gamma_{\tau(n)}\), \(\forall n\geq n_{0}\)),
which implies
Since \(\{u_{\tau(n)+1}\}\) converges weakly to p as \(\tau (n)\rightarrow\infty\) and \(\| x_{\tau(n)} - Tx_{\tau(n)} \|\rightarrow 0\) as \(\tau(n)\rightarrow\infty\), we deduce from (2.21) (noting that \(x^{*}=P_{F(T)\cap\Gamma}u\)) that
which implies that
Therefore,
Furthermore, if \(n\geq n_{0}\) and \(n\neq\tau(n)\), it follows from the definition of \(\tau(n)\) that \(\tau(n)< n\), and it is easy to see that \(\Gamma_{\tau(n)}\leq\Gamma_{\tau(n)+1}\). On the other hand, \(\Gamma_{j}\geq\Gamma_{j+1}\) for \(\tau(n)+1\leq j\leq n\). Therefore we obtain, for all \(n\geq n_{0}\),
Hence \(\lim\Gamma_{n}=0\), that is, \(\{x_{n}\}\) converges strongly to \(x^{*}\). Furthermore, \(\{u_{n}\}\) converges strongly to \(x^{*}\). This completes the proof. □
Corollary 2.5
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the minimization problem (1.7) is consistent and let Γ denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant \(L>0\). Let \(\{ t_{n}\}\) be a sequence in \((0,1)\) and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{L} )\) satisfying the following conditions:
-
(i)
\(\lim_{n\rightarrow\infty}t_{n}=0\);
-
(ii)
\(\sum_{n=1}^{\infty}t_{n}=\infty\);
-
(iii)
\(0 < \liminf\lambda_{n}\leq\limsup\lambda_{n} <\frac{2}{L}\).
Then the sequence \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(x_{1}\in C\),
converges strongly to \(x^{*}\in\Gamma\), where \(x^{*}:=P_{\Gamma}u\).
Proof
Taking \(T=I\) in Theorem 2.4, we obtain the desired conclusion. □
Remark 2.6
Corollary 2.5 improves on Corollary 5.3 of Xu [16] in the sense that the conditions \(\sum_{n=1}^{\infty}|t_{n+1}-t_{n}|<\infty\) and \(\sum_{n=1}^{\infty }|\gamma_{n+1}-\gamma_{n}|<\infty\) assumed in Xu [16] are dispensed with in our Corollary 2.5.
Corollary 2.7
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C\) be a k-strictly pseudocontractive mapping such that \(F(T)\neq\emptyset\). Let \(\{t_{n}\}\) be a sequence in \((0,1)\) and \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset (0,1)\) satisfying the following conditions:
-
(i)
\(\lim_{n\rightarrow\infty}t_{n}=0\);
-
(ii)
\(\sum_{n=0}^{\infty}t_{n}=\infty\);
-
(iii)
\(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\).
Then the sequence \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(x_{1}\in C\),
strongly converges to a fixed point \(x^{*}\) of T, where \(x^{*}:=P_{F(T)}u\).
Proof
Taking \(f\equiv0\) in Theorem 2.4, we obtain the desired conclusion. □
Remark 2.8
Corollary 2.7 complements Theorem 3.1 of Li and Yao [20].
We next apply the result in Theorem 2.4 to approximate the common fixed point of a finite family of strictly pseudocontractive mappings, which is also a solution to minimization problem (1.7) in real Hilbert spaces.
Theorem 2.9
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the minimization problem (1.7) is consistent and let Γ denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant \(L>0\). For each \(i=1,2,\ldots,N\), let \(T_{i}:C\rightarrow C\) be a \(k_{i}\)-strictly pseudocontractive mapping such that \(\bigcap_{i=1}^{N} F(T_{i})\cap\Gamma\neq \emptyset\). Assume that \(\{\delta_{i}\}_{i=1}^{N}\) is a finite sequence of positive numbers such that \(\sum_{i=1}^{N}\delta_{i}=1\). Let \(\{ t_{n}\}\) be a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset(0,1)\), \(k:=\max\{k_{i}:i=1,2,\ldots,N\}\), and \(\{ \lambda_{n}\}\) a sequence in \((0,\frac{2}{L} )\) satisfying the following conditions:
-
(i)
\(\lim_{n\rightarrow\infty}t_{n}=0\);
-
(ii)
\(\sum_{n=0}^{\infty}t_{n}=\infty\);
-
(iii)
\(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);
-
(iv)
\(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{L}\).
Then the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(u_{1}\in C\),
converge strongly to \(x^{*}\in\bigcap_{i=1}^{N} F(T_{i})\cap\Gamma\), where \(x^{*}:=P_{\bigcap_{i=1}^{N} F(T_{i})\cap\Gamma}u\).
Proof
Define \(A:=\sum_{i=1}^{N}\delta_{i}T_{i}\). Then, by the results in [2, 10], A is a k-strictly pseudocontractive mapping and \(F(A)=\bigcap_{i=1}^{N} F(T_{i})\). We can rewrite the scheme (2.23) as
Now, Theorem 2.4 guarantees that \(\{x_{n}\}\) and \(\{u_{n}\}\) converge strongly to a common fixed point of the family \(\{T_{i}\}_{i=1}^{N}\), which is also a solution to minimization problem (1.7). □
3 Applications
In this section, we give an application of Theorem 2.4 to the split feasibility problem and the convexly constrained linear inverse problem.
3.1 Split feasibility problem
The split feasibility problem (SFP, for short) was introduced by Censor and Elfving [23]. The SFP has attracted the attention of many authors owing to its applications to image reconstruction, signal processing, and intensity-modulated radiation therapy (see [24–26]).
The SFP can be mathematically formulated as the problem of finding a point x with the property

$$x \in C \quad\mbox{and}\quad Bx \in Q, $$ (3.1)
where C and Q are nonempty, closed, and convex subsets of Hilbert space \(H_{1}\) and \(H_{2}\), respectively, and \(B : H_{1}\rightarrow H_{2}\) is a bounded linear operator.
Clearly, \(x^{*}\) is a solution to the split feasibility problem (3.1) if and only if \(x^{*}\in C\) and \(Bx^{*}-P_{Q} Bx^{*} = 0\). The proximity function f is defined by

$$f(x)=\frac{1}{2}\|Bx-P_{Q}Bx\|^{2}, $$ (3.2)

and we consider the constrained convex minimization problem

$$\min_{x\in C} f(x)=\min_{x\in C}\frac{1}{2}\|Bx-P_{Q}Bx \|^{2}. $$ (3.3)

Then \(x^{*}\) solves the split feasibility problem (3.1) if and only if \(x^{*}\) solves the minimization problem (3.3). In [24], the CQ algorithm was introduced to solve the SFP:

$$x_{n+1}=P_{C}\bigl(x_{n}-\lambda B^{*}(I-P_{Q})Bx_{n}\bigr), \quad n\geq1, $$ (3.4)
where \(0<\lambda<\frac{2}{\|B\|^{2}}\) and \(B^{*}\) is the adjoint of B. It was proved that the sequence generated by (3.4) converges weakly to a solution of the SFP.
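For intuition, here is a toy one-dimensional instance of the CQ iteration (our example: \(Bx=2x\), \(C=[0,2]\), \(Q=[3,4]\), so \(\|B\|^{2}=4\) and any \(\lambda\in(0,\frac{1}{2})\) is admissible):

```python
def cq_algorithm(x0, lam, iters=100):
    """Byrne's CQ iteration x_{n+1} = P_C(x_n - lam * B*(I - P_Q) B x_n)
    for a toy SFP: find x in C = [0, 2] with B x in Q = [3, 4], B x = 2x."""
    B = 2.0  # ||B||^2 = 4, so we need lam in (0, 1/2)
    proj_C = lambda t: max(0.0, min(2.0, t))
    proj_Q = lambda t: max(3.0, min(4.0, t))
    x = x0
    for _ in range(iters):
        Bx = B * x
        x = proj_C(x - lam * B * (Bx - proj_Q(Bx)))
    return x

sol = cq_algorithm(0.0, lam=0.25)   # -> 1.5, and B*sol = 3.0 lies in Q
```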
We propose the following algorithm to obtain a strong convergence iterative sequence to solve the SFP and the fixed point problem for a k-strictly pseudocontractive mapping T. For any given \(u \in C\), let the sequences \(\{x_{n}\}\) and \(\{u_{n}\}\) be generated iteratively by \(u_{1} \in C\),
where \(\{t_{n}\}\) is a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset(0,1)\), and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{\|B\|^{2}} )\) satisfying the following conditions:
-
(i)
\(\lim_{n\rightarrow\infty}t_{n}=0\);
-
(ii)
\(\sum_{n=0}^{\infty}t_{n}=\infty\);
-
(iii)
\(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);
-
(iv)
\(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{\|B\|^{2}}\).
We obtain the following convergence result for solving split feasibility problem (3.1) and the fixed point problem for a k-strictly pseudocontractive mapping by applying Theorem 2.4.
Theorem 3.1
Let C and Q be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and let \(B : H_{1}\rightarrow H_{2}\) be a bounded linear operator. Let \(f(x) =\frac{1}{2}\|Bx-P_{Q}Bx\|^{2}\) and let \(\Gamma=\arg \min_{x\in C}f(x)\). Let T be a k-strictly pseudocontractive mapping of C into itself such that \(F(T)\cap\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\) and \(\{u_{n}\}\) be generated by (3.5), where \(\{t_{n}\}, \{\alpha_{n}\} \subset(0,1)\) and \(\{\lambda_{n}\}\subset (0,\frac{2}{\|B\|^{2}} )\) satisfy the conditions (i)-(iv) above. Then the sequences \(\{ x_{n}\}\) and \(\{u_{n}\}\) converge strongly to a solution \(x^{*}\) of the split feasibility problem (3.1) which is also a fixed point of the k-strictly pseudocontractive mapping T, where \(x^{*}:=P_{F(T)\cap \Gamma}u\).
Proof
Using the definition of the proximity function f, we have

$$\nabla f (x)=B^{*}(I-P_{Q})Bx, $$

and ∇f is Lipschitz continuous, that is,

$$\bigl\|\nabla f(x)-\nabla f(y)\bigr\|\leq L\|x-y\|, \quad\forall x, y \in H_{1}, $$

where \(L=\|B\|^{2}\). Then the iterative scheme (3.5) is equivalent to
where \(\{t_{n}\}\) is a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset(0,1)\), and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{L} )\) satisfying the following conditions:
-
(i)
\(\lim_{n\rightarrow\infty}t_{n}=0\);
-
(ii)
\(\sum_{n=0}^{\infty}t_{n}=\infty\);
-
(iii)
\(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);
-
(iv)
\(0 < \liminf\lambda_{n}\leq\limsup\lambda_{n} <\frac{2}{L}\).
The desired conclusion follows from Theorem 2.4. □
3.2 Convexly constrained linear inverse problem
Consider the convexly constrained linear inverse problem (cf. [27]):

$$\mbox{find } x \in C \mbox{ such that } Ax=b, $$ (3.9)

where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) is a bounded linear mapping, and \(b\in H_{2}\). To solve (3.9), we consider the following convexly constrained minimization problem:

$$\min_{x\in C}f(x)=\min_{x\in C}\frac{1}{2}\|Ax-b\|^{2}. $$ (3.10)
In general, every solution to (3.9) is a solution to (3.10). However, a solution to (3.10) need not satisfy (3.9). Moreover, if the solution set of (3.9) is nonempty, then it follows from Lemma 4.2 of [28] that
It is well known that the projected Landweber method (see [29]), given by

$$x_{n+1}=P_{C}\bigl(x_{n}+\lambda A^{*}(b-Ax_{n})\bigr), \quad n\geq1, $$

where \(A^{*}\) is the adjoint of A and \(0<\lambda<2\alpha\) with \(\alpha =\frac{1}{\|A\|^{2}}\), converges weakly to a solution of (3.9). In what follows, we present an algorithm with strong convergence for solving (3.9) and the fixed point problem for a strictly pseudocontractive mapping.
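A one-dimensional sketch of the projected Landweber iteration (our toy consistent instance with \(Ax=2x\), \(b=3\), and \(C=[0,2]\), so the exact solution \(x=1.5\) lies in C; here \(\lambda=0.25<\frac{2}{\|A\|^{2}}\)):

```python
def projected_landweber(x0, lam, iters=50):
    """Projected Landweber x_{n+1} = P_C(x_n + lam * A*(b - A x_n)) for the
    toy consistent problem A x = b with A x = 2x, b = 3, over C = [0, 2]."""
    A, b = 2.0, 3.0
    proj_C = lambda t: max(0.0, min(2.0, t))
    x = x0
    for _ in range(iters):
        x = proj_C(x + lam * A * (b - A * x))
    return x

xls = projected_landweber(0.0, lam=0.25)  # -> 1.5, the solution of 2x = 3 in C
```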
Corollary 3.2
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the convexly constrained linear inverse problem (3.9) is consistent and let Γ denote its solution set. Let T be a k-strictly pseudocontractive mapping of C into itself such that \(F(T)\cap\Gamma\neq\emptyset\). Let \(\{t_{n}\}\) be a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset(0,1)\), and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{\|A\|^{2}} )\) satisfying the following conditions:
-
(i)
\(\lim_{n\rightarrow\infty}t_{n}=0\);
-
(ii)
\(\sum_{n=0}^{\infty}t_{n}=\infty\);
-
(iii)
\(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);
-
(iv)
\(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{\|A\|^{2}}\).
Then the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(u_{1}\in C\),
converge strongly to \(x^{*}\in F(T)\cap\Gamma\), where \(x^{*}:=P_{F(T)\cap\Gamma}u\).
Proof
Let f be defined by (3.10). Then from [24] we have \(\nabla f (x) = A^{*}(Ax-b)\), \(x \in H_{1}\), which is L-Lipschitzian with constant \(L=\|A\|^{2}\). Thus, by Theorem 2.4 we obtain the required assertion. □
Remark 3.3
The convergence rate of the gradient-projection method is at best linear; linear convergence is attained, for instance, with Polyak’s stepsize for an objective function with a sharp set of minima. This is not the best known rate: there are methods with a superlinear (or quadratic) rate, such as Newton’s method and interior point methods that use Newton directions. These methods, however, require f to be twice differentiable, among other conditions. A potential drawback of the gradient-projection method is that it can be very slow when the gradient directions are almost perpendicular to the directions pointing toward the optimal set Γ, corresponding to
In this case, the method may exhibit zig-zag behavior, depending on the initial iterate \(x_{1}\). To overcome the possibly slow convergence of the gradient-projection method, the gradient is often scaled, in which case the method takes the form

$$x_{n+1}=P_{C}\bigl(x_{n}-\lambda_{n} \Lambda_{n}\nabla f(x_{n})\bigr), \quad n\geq1, $$

where \(\Lambda_{n}\) is a diagonal matrix with positive diagonal entries, i.e., \([\Lambda_{n}]_{ii} > 0\) for all \(i = 1, \ldots,m\) and all n. For more details, see [30, pp.91-105].
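To illustrate why scaling helps on ill-conditioned problems, the sketch below (our example) runs the scaled iteration on \(f(x)=\frac{1}{2}(100x_{1}^{2}+x_{2}^{2})\) over the box \([-1,1]^{2}\) with \(\Lambda=\operatorname{diag}(1/100, 1)\); the scaling equalizes the curvature of the two coordinates, so the iteration reaches the minimizer immediately, whereas the unscaled method, restricted to stepsizes below \(2/L=2/100\), would crawl along the \(x_{2}\) direction.

```python
def scaled_grad_projection(x0, iters=50):
    """Scaled iteration x_{n+1} = P_C(x_n - Lam * grad f(x_n)) for
    f(x) = 0.5 * (100 x1^2 + x2^2) over C = [-1, 1]^2, with the diagonal
    scaling Lam = diag(1/100, 1) that equalizes the curvature."""
    proj = lambda x: [max(-1.0, min(1.0, xi)) for xi in x]
    scale = [1.0 / 100.0, 1.0]
    x = list(x0)
    for _ in range(iters):
        g = [100.0 * x[0], x[1]]          # grad f
        x = proj([xi - si * gi for xi, si, gi in zip(x, scale, g)])
    return x

res = scaled_grad_projection([1.0, 1.0])  # -> [0.0, 0.0] after a single step
```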
References
Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 20, 197-228 (1967)
Acedo, GL, Xu, H-K: Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 67, 2258-2271 (2007)
Ceng, LC, Al-Homidan, S, Ansari, QH, Yao, JC: An iterative scheme for equilibrium problems and fixed point problems of strict pseudo-contraction mappings. J. Comput. Appl. Math. 223, 967-974 (2009)
Chidume, CE, Abbas, M, Ali, B: Convergence of the Mann iteration algorithm for a class of pseudo-contractive mappings. Appl. Math. Comput. 194, 1-6 (2007)
Cholamjiak, P, Suantai, S: Strong convergence for a countable family of strict pseudocontractions in q-uniformly smooth Banach spaces. Comput. Math. Appl. 62, 787-796 (2011)
Hao, Y, Cho, SY: Some results on strictly pseudocontractive nonself mapping and equilibrium problems in Hilbert spaces. Abstr. Appl. Anal. 2012, Article ID 543040 (2012)
Jaiboon, C, Kumam, P: Strong convergence theorems for solving equilibrium problems and fixed point problems of strictly pseudocontraction mappings by two hybrid projection methods. J. Comput. Appl. Math. 234(3), 722-732 (2010)
Jung, JS: Iterative methods for mixed equilibrium problems and pseudocontractive mappings. Fixed Point Theory Appl. 2012, Article ID 184 (2012)
Liu, Y: A general iterative method for equilibrium problems and strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 71, 4852-4861 (2009)
Marino, G, Xu, H-K: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329(1), 336-346 (2007)
Qin, L-J, Wang, L: An iteration method for solving equilibrium problems, common fixed point problems of pseudocontractive mappings of Browder-Petryshyn type in Hilbert spaces. Int. Math. Forum 6(2), 63-74 (2011)
Shehu, Y: Iterative methods for family of strictly pseudocontractive mappings and system of generalized mixed equilibrium problems and variational inequality problems. Fixed Point Theory Appl. 2011, Article ID 852789 (2011)
Zhou, HY: Convergence theorems of fixed points for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 69(2), 456-462 (2008)
Bertsekas, DP, Gafni, EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 17, 139-159 (1982)
Han, D, Lo, HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 159, 529-544 (2004)
Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150(2), 360-378 (2011)
Su, M, Xu, HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 1(1), 35-43 (2010)
Ceng, L-C, Ansari, QH, Yao, J-C: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)
Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 1-17 (2002)
Li, M, Yao, Y: Strong convergence of an iterative algorithm for λ-strictly pseudo-contractive mappings in Hilbert spaces. An. Ştiinţ. Univ. ‘Ovidius’ Constanţa 18(1), 219-228 (2010)
Maingé, PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899-912 (2008)
Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137-150 (1977)
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 20, 103-120 (2004)
Lopez, G, Martin-Marquez, V, Wang, FH, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)
Tian, M, Huang, L: Iterative methods for constrained convex minimization problem in Hilbert spaces. Fixed Point Theory Appl. (2013). doi:10.1186/1687-1812-2013-105
Eicke, B: Iteration methods for convexly constrained ill-posed problems in Hilbert space. Numer. Funct. Anal. Optim. 13, 413-429 (1992)
Wang, F, Xu, HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, Article ID 102085 (2010)
Engl, HW, Hanke, M, Neubauer, A: Regularization of Inverse Problems. Kluwer Academic, Dordrecht (1996)
Nedić, A: Lecture Notes Optimization I. Network Mathematics Graduate Programme. Hamilton Institute, Maynooth, Ireland (2008)
Acknowledgements
This work was supported by the NSF of China (No. 11401063) and Natural Science Foundation of Chongqing (cstc2014jcyjA00016).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Cai, G., Shehu, Y. An iterative algorithm for fixed point problem and convex minimization problem with applications. Fixed Point Theory Appl 2015, 7 (2015). https://doi.org/10.1186/s13663-014-0253-6