Abstract
In this paper, we introduce a regularization method for solving the variational inclusion problem of the sum of two monotone operators in real Hilbert spaces. We suggest and analyze this method under some mild appropriate conditions imposed on the parameters, which allow us to obtain a short proof of another strong convergence theorem for this problem. We also apply our main result to the fixed point problem of the nonexpansive variational inequality problem, the common fixed point problem of nonexpansive strict pseudocontractions, the convex minimization problem, and the split feasibility problem. Finally, we provide numerical experiments to illustrate the convergence behavior and to show the effectiveness of the sequences constructed by the inertial technique.
1 Introduction
Let C be a nonempty closed convex subset of a real Hilbert space H. The variational inclusion problem is to find \(x^{*} \in H\) such that
where \(A:H \rightarrow H\) is an operator, and \(B: D(B) \subset H \rightarrow 2^{H}\) is a set-valued operator.
If \(A=\nabla F\) and \(B = \partial G\), where ∇F is the gradient of F, and ∂G is the subdifferential of G defined by
then problem (1.1) is reduced to the following convex minimization problem:
If \(A=0\) and \(B=\partial G\), then problem (1.1) is reduced to a proximal minimization problem, and if \(A = \nabla F\) and \(B=0\), then problem (1.1) is reduced to a constrained convex minimization problem and also to a split feasibility problem. Some typical problems arising in various branches of sciences, applied sciences, economics, and engineering, such as machine learning, image restoration, and signal recovery, can be viewed as problems of the form (1.1).
To solve the variational inclusion problem (1.1) via fixed point theory, for \(r>0\), we define the mapping \(T_{r}: H\rightarrow D(B)\) as
For \(x \in H\), we see that
which shows that the fixed point set of \(T_{r}\) coincides with the solution set \((A+B)^{-1}(0)\). This suggests the following iteration process: \(x_{0} \in C\), and
where \(\{r_{n}\} \subset (0,\infty )\) and \(D(B) \subset C\). This method is called the forward–backward splitting algorithm [1, 2]. In the literature, many methods have been suggested to solve the variational inclusion problem (1.1) for maximal monotone operators (see also, e.g., [3–11]).
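The forward–backward step can be sketched numerically. The following is a minimal illustrative sketch (not the paper's algorithm), assuming the one-dimensional toy problem \(F(x)=\frac{1}{2}(x-3)^{2}\) and \(G(x)=|x|\), so that \(A=\nabla F\) is the forward operator and the resolvent \(J_{r}=(I+r\,\partial G)^{-1}\) is the soft-thresholding map:

```python
# Forward-backward splitting x_{n+1} = J_r(x_n - r * A(x_n)) on a 1-D toy
# problem: A = grad F with F(x) = (x - 3)^2 / 2, and B = dG with G(x) = |x|,
# whose resolvent J_r = (I + r dG)^{-1} is soft thresholding.

def A(x):                     # forward operator: gradient of F
    return x - 3.0

def J(r, x):                  # backward operator: resolvent of dG
    return max(abs(x) - r, 0.0) * (1.0 if x >= 0 else -1.0)

def forward_backward(x0, r=0.5, iters=200):
    x = x0
    for _ in range(iters):
        x = J(r, x - r * A(x))   # backward step applied to the forward step
    return x

x_star = forward_backward(x0=10.0)
# zero of A + B: 0 in (x - 3) + sign(x), attained at x = 2
```

Here the fixed-point map is a contraction, so the iterates converge linearly to the unique zero of \(A+B\).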
Very recently, Cholamjiak et al. [12, 13] proved the following theorems in real Hilbert spaces.
Theorem C1
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(A: C\rightarrow H\) be an α-inverse strongly monotone mapping, and let B be a maximal monotone operator on H such that \(D(B) \subset C\) and \((A+B)^{-1}(0)\) is nonempty. Let \(f:C \rightarrow C\) be a k-contraction, and let \(J_{r_{n}}=(I+r_{n} B)^{-1}\). Let \(\{z_{n}\}\) be a sequence in C of the following process: \(z_{0} \in C\), and
where \(\{ \alpha _{n} \} \subset (0,1),\{e_{n}\} \subset H\), and \(\{r_{n}\} \subset (0,2\alpha )\). Suppose that the control sequences satisfy the following restrictions:
-
(C1)
\(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \),
-
(C2)
\(0< a\leq r_{n} \leq b < 2\alpha \) for some \(a,b>0\),
-
(C3)
\(\sum_{n=0}^{\infty }\|e_{n}\| < \infty \), or \(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\).
Then the sequence \(\{z_{n}\}\) converges strongly to a point \(\bar{x} \in (A+B)^{-1}(0)\), where \(\bar{x} = P_{(A+B)^{-1}(0)} f(\bar{x})\).
Theorem C2
Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of H into itself, and let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{\lambda }= (I+\lambda B)^{-1}\) be the resolvent of B for \(\lambda > 0\), let S be a nonexpansive mapping of C into itself such that \(\operatorname{Fix}(S) \cap (A+B)^{-1}(0) \neq \emptyset \), and let \(f:C \rightarrow C\) be a contraction. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be the sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1),\{\beta _{n} \} \subset (0,1), \{ \lambda _{n}\} \subset (0,2\alpha )\), and \(\{\theta _{n} \} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy
-
(C1)
\(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
-
(C2)
\(\liminf_{n\rightarrow \infty } \beta _{n} (1-\beta _{n}) > 0\),
-
(C3)
\(0 < \liminf_{n\rightarrow \infty } \lambda _{n} \leq \limsup_{n \rightarrow \infty } \lambda _{n} < 2\alpha \),
-
(C4)
\(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).
Then the sequence \(\{x_{n} \}\) converges strongly to a point \(\bar{x} \in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\), where \(\bar{x} = P_{\operatorname{Fix}(S) \cap (A+B)^{-1}(0)} f(\bar{x})\).
In this paper, we modify the algorithms in Theorems C1 and C2 under the same assumptions to solve the variational inclusion problem (1.1) as follows: let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be the sequence generated by
for all \(n \in \mathbb{N}\). We suggest and analyze this method under some mild appropriate conditions imposed on the parameters, which allow us to obtain a short proof of another strong convergence theorem for this problem.
We also apply our main result to the fixed point problem of the nonexpansive variational inequality problem, the common fixed point problem of nonexpansive strict pseudocontractions, the convex minimization problem, and the split feasibility problem. Finally, we provide numerical experiments to illustrate the convergence behavior and to show the effectiveness of the sequences constructed by the inertial technique.
2 Preliminaries
Let C be a nonempty closed convex subset of a real Hilbert space H. We use the following notation: → denotes strong convergence, ⇀ denotes weak convergence,
denotes the weak limit set of \(\{x_{n}\}\), and \(\operatorname{Fix}(T) = \{x:x=Tx \}\) is the fixed point set of the mapping T.
Recall that the metric projection \(P_{C}: H \rightarrow C\) is defined as follows: for each \(x \in H\), \(P_{C} x\) is the unique point in C satisfying
The operator \(T:H\rightarrow H\) is called:
-
(i)
monotone if
$$\begin{aligned} \langle x-y,Tx-Ty \rangle \geq 0, \quad\forall x,y \in H, \end{aligned}$$ -
(ii)
L-Lipschitzian with \(L>0\) if
$$\begin{aligned} \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad\forall x,y \in H, \end{aligned}$$ -
(iii)
k-contraction if it is k-Lipschitzian with \(k \in (0,1)\),
-
(iv)
nonexpansive if it is 1-Lipschitzian,
-
(v)
firmly nonexpansive if
$$\begin{aligned} \Vert Tx-Ty \Vert ^{2} \leq \Vert x-y \Vert ^{2} - \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2}, \quad\forall x,y \in H, \end{aligned}$$ -
(vi)
α-strongly monotone if
$$\begin{aligned} \langle Tx-Ty,x-y \rangle \geq \alpha \Vert x-y \Vert ^{2},\quad \forall x,y \in H, \end{aligned}$$ -
(vii)
α-inverse strongly monotone if
$$\begin{aligned} \langle Tx-Ty,x-y \rangle \geq \alpha \Vert Tx-Ty \Vert ^{2},\quad \forall x,y \in H. \end{aligned}$$
Let B be a mapping of H into \(2^{H}\). The domain and the range of B are denoted by \(D(B) = \{x\in H: Bx \neq \emptyset \}\) and \(R(B) = \cup \{Bx:x \in D(B) \}\), respectively. The inverse of B, denoted by \(B^{-1}\), is defined by \(x\in B^{-1}y\) if and only if \(y\in Bx\). A multivalued mapping B is said to be a monotone operator on H if \(\langle x-y,u-v\rangle \geq 0\) for all \(x,y \in D(B),u \in Bx\), and \(v \in By\). A monotone operator B on H is said to be maximal if its graph is not strictly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and \(r>0\), we define the single-valued resolvent operator \(J_{r}:H\rightarrow D(B)\) by \(J_{r}=(I+rB)^{-1}\). It is well known that \(J_{r}\) is firmly nonexpansive and \(\operatorname{Fix}(J_{r})=B^{-1}(0)\).
We collect some known lemmas, which are the main tools in proving our result.
Lemma 2.1
Let H be a real Hilbert space. Then, for all \(x,y\in H\),
-
(i)
\(\|x+y\|^{2} = \|x\|^{2}+2 \langle x,y \rangle +\|y\|^{2}\),
-
(ii)
\(\|x+y\|^{2} \leq \|x\|^{2} + 2 \langle y,x+y \rangle \).
Lemma 2.2
([14])
Let C be a nonempty closed convex subset of a real Hilbert space H. Then
-
(i)
\(z=P_{C}x \Leftrightarrow \langle x-z,z -y \rangle \geq 0, \forall x\in H,y \in C\),
-
(ii)
\(z=P_{C}x \Leftrightarrow \|x-z \|^{2} \leq \|x-y\|^{2} - \| y-z \|^{2}, \forall x\in H,y \in C\),
-
(iii)
\(\| P_{C} x - P_{C} y\|^{2} \leq \langle x-y,P_{C} x - P_{C} y \rangle, \forall x,y\in H\).
Lemma 2.3
([15])
Let H be a real Hilbert space. For any \(x,y \in H\) and \(\lambda \in \mathbb{R}\), we have
Lemma 2.4
([16])
Let H and K be two real Hilbert spaces, and let \(T:K \rightarrow K\) be a firmly nonexpansive mapping such that \(\|(I-T)x\|\) is a convex function from K to \(\overline{\mathbb{R}}=[-\infty,+\infty ]\). Let \(A:H\rightarrow K\) be a bounded linear operator and \(f(x) = \frac{1}{2}\|(I-T)Ax\|^{2} \) for all \(x\in H\). Then
-
(i)
f is convex and differentiable,
-
(ii)
\(\nabla f(x) = A^{*}(I-T)Ax \) for all \(x\in H\), where \(A^{*}\) denotes the adjoint of A,
-
(iii)
f is weakly lower semicontinuous on H, and
-
(iv)
∇f is \(\|A\|^{2}\)-Lipschitzian.
Lemma 2.5
([16])
Let H be a real Hilbert space, and let \(T: H\rightarrow H\) be an operator. The following statements are equivalent:
-
(i)
T is firmly nonexpansive,
-
(ii)
\(\|Tx-Ty\|^{2} \leq \langle x-y,Tx-Ty \rangle, \forall x,y \in H\),
-
(iii)
\(I-T\) is firmly nonexpansive.
Lemma 2.6
([17])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(A:C\rightarrow H\) be an α-inverse strongly monotone mapping, and let \(r>0\) be a constant. Then we have
for all \(x,y \in C\). In particular, if \(0< r\leq 2\alpha \), then \(I-rA\) is nonexpansive.
Lemma 2.7
([18] (Demiclosedness principle))
Let C be a nonempty closed convex subset of a real Hilbert space H, and let \(S:C \rightarrow C\) be a nonexpansive mapping with \(\operatorname{Fix}(S)\neq \emptyset \). If the sequence \(\{x_{n}\}\subset C\) converges weakly to x and the sequence \(\{(I-S)x_{n}\}\) converges strongly to y, then \((I-S)x = y\); in particular, if \(y=0\), then \(x\in \operatorname{Fix}(S)\).
Lemma 2.8
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\{T_{n}\}\) be a sequence of nonexpansive mappings of C into itself, and let φ be a family of nonexpansive mappings of C into itself such that
Then, for any bounded sequence \(\{z_{n}\} \subset C\), we have:
-
(i)
if \(\lim_{n \rightarrow \infty } \|z_{n}-T_{n}z_{n}\|=0\), then \(\lim_{n \rightarrow \infty } \|z_{n}-Tz_{n}\|=0\) for all \(T \in \varphi \), which is called the NST-condition (I),
-
(ii)
if \(\lim_{n \rightarrow \infty } \|z_{n+1}-T_{n}z_{n}\|=0\), then \(\lim_{n \rightarrow \infty } \|z_{n}-T_{m}z_{n}\|=0\) for all \(m \in \mathbb{N}\cup \{0\}\), which is called the NST-condition (II).
Lemma 2.9
([21])
Let \(\{a_{n}\}\) and \(\{c_{n}\}\) be sequences of nonnegative real numbers such that
where \(\{\delta _{n} \}\) is a sequence in \((0,1)\), and \(\{b_{n}\}\) is a real sequence. Assume that \(\sum_{n=0}^{\infty }c_{n} < \infty \). Then we have:
-
(i)
if \(b_{n} \leq \delta _{n} M\) for some \(M\geq 0\), then \(\{a_{n}\}\) is a bounded sequence,
-
(ii)
if \(\sum_{n=0}^{\infty }\delta _{n} = \infty \) and \(\limsup_{n\rightarrow \infty } b_{n}/\delta _{n} \leq 0\), then \(\lim_{n\rightarrow \infty }a_{n}=0\).
Lemma 2.10
([22])
Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers such that
and
where \(\{\gamma _{n}\}\) is a sequence in \((0,1),\{\eta _{n}\}\) is a sequence of nonnegative real numbers, and \(\{\delta _{n}\},\{\rho _{n}\}\) are real sequences such that
-
(i)
\(\sum_{n=0}^{\infty }\gamma _{n} = \infty \),
-
(ii)
\(\lim_{n\rightarrow \infty } \rho _{n} = 0\),
-
(iii)
if \(\lim_{k\rightarrow \infty } \eta _{n_{k}} = 0\), then \(\limsup_{k\rightarrow \infty } \delta _{n_{k}} \leq 0\) for any subsequence \(\{n_{k}\}\) of \(\{n\}\).
Then \(\lim_{n\rightarrow \infty } s_{n} = 0\).
3 Main result
Theorem 3.1
Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of H into itself, and let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{\lambda }=(I+\lambda B)^{-1}\) be the resolvent of B for \(\lambda > 0\), let S be a nonexpansive mapping of C into itself such that \(\Omega:= \operatorname{Fix}(S)\cap (A+B)^{-1}(0) \neq \emptyset \), and let f be a k-contraction mapping of C into itself. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be the sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,2\alpha ), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:
-
(C1)
\(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
-
(C2)
\(0< a\leq \lambda _{n} \leq b < 2\alpha \) for some \(a,b>0\),
-
(C3)
\(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),
-
(C4)
\(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).
Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).
Proof
Picking \(z\in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\) and fixing \(n \in \mathbb{N}\), it follows that \(z=S(z)=J_{\lambda _{n}}(z-\lambda _{n} Az)\). Let
Firstly, we will show that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded. Since
Therefore by (3.1) and the nonexpansiveness of \(S,J_{\lambda _{n}}\), and \(I-\lambda _{n} A\) in Lemma 2.6 we obtain
So, by condition (C4), putting \(M = \frac{1}{1-k} ( \|f(z)-z\|+ \sup_{n\in \mathbb{N}} \frac{\theta _{n}}{\alpha _{n}} \|x_{n}-x_{n-1}\| ) \geq 0\) in Lemma 2.9(i), we conclude that the sequence \(\{\|x_{n}-z\|\}\) is bounded, that is, the sequence \(\{x_{n}\}\) is bounded, and so is \(\{y_{n}\}\). Moreover, by condition (C4), \(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \) implies \(\lim_{n\rightarrow \infty } \|e_{n}\| =0\), that is, \(\lim_{n \rightarrow \infty } e_{n} =0\). It follows that the sequence \(\{e_{n}\}\) is also bounded, and so is \(\{w_{n}\}\).
Since \(P_{\operatorname{Fix}(S)\cap (A+B)^{-1}(0)} f\) is a k-contraction on C, by Banach’s contraction principle there exists a unique element \(x^{*} \in C\) such that \(x^{*} = P_{\operatorname{Fix}(S)\cap (A+B)^{-1}(0)} f(x^{*})\), that is, \(x^{*} \in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\). It follows that \(x^{*}=S(x^{*})=J_{\lambda _{n}}(x^{*}-\lambda _{n} Ax^{*})\). Now we will show that \(x_{n} \rightarrow x^{*}\) as \(n\rightarrow \infty \). On the other hand, we have
This implies that
Therefore by (3.2), Lemma 2.6, and the firm nonexpansiveness of \(J_{\lambda _{n}}\) we obtain
We also have
This implies that
Hence by (3.3), (3.4), and the nonexpansiveness of S we obtain
It follows that
and
which are of the forms
and
respectively, where
\(s_{n}=\|x_{n}-x^{*}\|^{2}\), \(\gamma _{n} = \frac{\alpha _{n} (1-k)}{1+\alpha _{n} (1-k)}\),
\(\delta _{n} = \frac{2}{1-k} \langle f(x^{*})-x^{*},w_{n}-x^{*} \rangle + \frac{2(1-\alpha _{n})}{1-k} \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| \|y_{n}-x^{*}\| +\frac{2(1-\alpha _{n})}{1-k} \frac{\|e_{n}\|}{\alpha _{n}}\|(y_{n}-\lambda _{n} Ay_{n})-(x^{*}- \lambda _{n} Ax^{*})\| + \frac{1-\alpha _{n}}{1-k} \frac{\|e_{n}\|}{\alpha _{n}}\|e_{n}\|\),
\(\eta _{n} = \lambda _{n} (2\alpha -\lambda _{n})\|Ay_{n}-Ax^{*} \|^{2} +\|(I-J_{\lambda _{n}})(y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}})(x^{*}-\lambda _{n} Ax^{*})\|^{2} \), and
\(\rho _{n} = \frac{2\alpha _{n}}{1+\alpha _{n} (1-k)} \| f(x^{*})-x^{*} \| \|w_{n}-x^{*} \| +2\alpha _{n} \frac{\theta _{n}}{\alpha _{n}} \| x_{n}-x_{n-1} \| \|y_{n}-x^{*} \| +2\|(y_{n}-\lambda _{n} Ay_{n})-(x^{*}-\lambda _{n} Ax^{*})\| \|e_{n}\| +\|e_{n}\|^{2}\).
Therefore, using conditions (C1) and (C4), we can check that all these sequences satisfy conditions (i) and (ii) in Lemma 2.10. To complete the proof, we verify that condition (iii) in Lemma 2.10 is satisfied. Let \(\lim_{i\rightarrow \infty }\eta _{n_{i}} = 0\). Then by condition (C2) we have
and
It follows by conditions (C2) and (C4) and by (3.5) that
Consider a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\). As \(\{x_{n}\}\) is bounded, so is \(\{x_{n_{i}}\}\), and thus there exists a subsequence \(\{x_{n_{i_{j}}}\}\) of \(\{x_{n_{i}}\}\) that weakly converges to \(x \in C\). Without loss of generality, we can assume that \(x_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \). On the other hand, by conditions (C1) and (C4) we have
It follows that \(y_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \). Therefore by (3.6) and the demiclosedness at zero in Lemma 2.7 we obtain \(x \in \operatorname{Fix}(J_{\lambda _{n_{i}}}(I-\lambda _{n_{i}}A))\), that is, \(x \in (A+B)^{-1}(0)\). Next, we will show that \(x \in \operatorname{Fix}(S)\). By the nonexpansiveness of S we have
It follows by (3.6), (3.7), and conditions (C1) and (C4) that
Then by NST-condition (II) in Lemma 2.8(ii) we get
Hence by (3.8) and the demiclosedness at zero in Lemma 2.7 again we obtain \(x \in \operatorname{Fix}(S)\), that is, \(x\in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\). Since
by (3.6) and (3.7) and conditions (C1) and (C4) we obtain
This implies that \(w_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \). Therefore by Lemma 2.2(i) we obtain
It follows by conditions (C1), (C3), and (C4) that \(\limsup_{i\rightarrow \infty } \delta _{n_{i}} \leq 0\). So by Lemma 2.10 we conclude that \(x_{n} \rightarrow x^{*}\) as \(n\rightarrow \infty \). This completes the proof. □
Remark 3.2
([23])
We remark here that, by condition (C4), \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1}\| = 0\), so the theorem is easily implemented in numerical computation since the value of \(\|x_{n}-x_{n-1}\|\) is known before choosing \(\theta _{n}\). Indeed, the parameter \(\theta _{n}\) can be chosen as \(0 \leq \theta _{n} \leq \bar{\theta }_{n}\) such that
where \(\{\omega _{n}\}\) is a positive sequence such that \(\omega _{n} = o(\alpha _{n})\).
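In numerical work, a common rule in the inertial literature (the precise formula of Remark 3.2 appears in the paper's display, which is omitted in this extraction) is to cap the inertial parameter so that \(\theta _{n}\|x_{n}-x_{n-1}\|\leq \omega _{n}\) with \(\omega _{n}=o(\alpha _{n})\), which enforces condition (C4). A hedged sketch of this assumed rule:

```python
def theta_bar(theta, omega_n, x_n, x_prev):
    # Assumed capping rule (common in inertial methods; cf. Remark 3.2):
    # choose theta_n <= min(theta, omega_n / ||x_n - x_{n-1}||) so that
    # theta_n * ||x_n - x_{n-1}|| <= omega_n = o(alpha_n).
    diff = sum((a - b) ** 2 for a, b in zip(x_n, x_prev)) ** 0.5
    return min(theta, omega_n / diff) if diff > 0 else theta
```

When consecutive iterates coincide, any \(\theta _{n}\in [0,\theta ]\) satisfies the condition, so the cap defaults to θ.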
4 Applications and numerical examples
In this section, we give some applications of our result to the fixed point problem of the nonexpansive variational inequality problem, the common fixed point problem of nonexpansive strict pseudocontractions, the convex minimization problem, and the split feasibility problem.
4.1 Fixed point problem of the nonexpansive variational inequality problem
The variational inequality problem is to find \(x^{*}\in C\) such that
We denote the solution set of (4.1) by \(VI(C,A)\). It is well known that \(\operatorname{Fix}(P_{C}(I-rA)) = VI(C,A)\) for all \(r>0\). Define the indicator function of C, denoted by \(i_{C}\), as \(i_{C}(x)=0\) if \(x \in C\) and \(i_{C}(x) = \infty \) if \(x \notin C\). We see that \(\partial i_{C}\) is maximal monotone. So, for \(r>0\), we can define \(J_{r}=(I+r \partial i_{C})^{-1}\). Moreover, \(x=J_{r} y\) if and only if \(x=P_{C} y\). Hence by Theorem 3.1 we obtain the following result.
Theorem 4.1
Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of H into itself, let S be a nonexpansive mapping of C into itself such that \(\Omega:= \operatorname{Fix}(S)\cap VI(C,A) \neq \emptyset \), and let f be a k-contraction mapping of C into itself. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,2\alpha ), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:
-
(C1)
\(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
-
(C2)
\(0< a\leq \lambda _{n} \leq b < 2\alpha \) for some \(a,b>0\),
-
(C3)
\(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),
-
(C4)
\(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).
Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).
We next provide a formulation that will be used in our example and its numerical results.
Proposition 4.2
For \(\rho > 0\) and \(C = \{x\in \mathbb{R}^{N}: \|x\|_{2} \leq \rho \}\), we have
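The closed form in Proposition 4.2 is the standard projection onto a Euclidean ball: \(P_{C}(x)=x\) if \(\|x\|_{2}\leq \rho \), and \(P_{C}(x)=\rho x/\|x\|_{2}\) otherwise. A short sketch:

```python
def project_ball(x, rho):
    # Metric projection onto C = {x : ||x||_2 <= rho}: identity inside
    # the ball, radial rescaling rho * x / ||x||_2 outside.
    norm = sum(v * v for v in x) ** 0.5
    if norm <= rho:
        return list(x)
    return [rho * v / norm for v in x]
```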
Example 4.3
Let \(C = \{a\in \mathbb{R}^{2}: \|a\|_{2} \leq 1 \}\). Find a point \(x^{*} \in C\) that satisfies the following variational inequality:
Let \(H=(\mathbb{R}^{2},\|\cdot \|_{2} )\). For each \(x=(u_{1},u_{2})^{T},y=(v_{1},v_{2})^{T} \in \mathbb{R}^{2}\), we have
where \(Ax = (2u_{1}-1,2u_{2}-1)^{T}\) for \(x = (u_{1},u_{2})^{T} \in \mathbb{R}^{2}\). Note that A is α-inverse strongly monotone with \(\alpha =\frac{1}{2}\) and \(\frac{1}{L}\)-Lipschitzian with \(L=\frac{1}{2}\).
We set \(S(x)= P_{C}(1-u_{1},1-u_{2})^{T}\) and \(f(x)=\frac{3x}{5}\) for \(x = (u_{1},u_{2})^{T} \in C\). Then S is nonexpansive, and f is a k-contraction with \(k \in [\frac{3}{5},1)\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1},e_{n}= \frac{1}{(n+1)^{3}}(1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.
To determine the best settings for our algorithm, we consider the choice types of the sequences \(\{\lambda _{n} \}\) listed in Table 1: sequences converging to L, constant sequences with values near L, and constant sequences with values near the boundary of the admissible interval. Our aim is to identify the choice types of \(\{\lambda _{n} \}\) that require the fewest loops in the recursive computation of the sequence \(\{x_{n} \}\) by the algorithm in Theorem 4.1.
We choose the initial points \(x_{0}=(-1,0)^{T}\) and \(x_{1} = (0,1)^{T} \in C\) (indeed, \(x_{0},x_{1}\) can be chosen arbitrarily in H) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.1 with an error of 10−6. As \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\), where \(x^{*}\) is approximately \((0.5,0.5)^{T}\), as shown in Table 1. Figure 1 shows the benchmark for all choice types of the sequences \(\{\lambda _{n} \}\) in the recursive computation of \(\{x_{n} \}\), and Fig. 2 shows the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choice types of the sequences \(\{ \lambda _{n} \}\).
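For reproducibility, a minimal sketch of this example follows. The precise scheme is the one displayed in Theorem 4.1; here we assume the inertial viscosity forward–backward form \(w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1})\), \(x_{n+1}=\alpha _{n} f(w_{n})+(1-\alpha _{n})SP_{C}(w_{n}-\lambda _{n}Aw_{n})\) with \(J_{\lambda }=P_{C}\) for \(B=\partial i_{C}\), and we omit the error terms \(e_{n}\) for brevity, so the output is indicative only:

```python
# Sketch of Example 4.3 under an ASSUMED inertial viscosity
# forward-backward scheme (the paper's displayed scheme is omitted here).

def proj_ball(x, rho=1.0):                   # P_C for the unit ball
    n = (x[0] ** 2 + x[1] ** 2) ** 0.5
    return x if n <= rho else [rho * v / n for v in x]

def A(x):                                    # A x = 2x - (1,1), 1/2-ism
    return [2 * x[0] - 1, 2 * x[1] - 1]

def S(x):                                    # S x = P_C(1 - u1, 1 - u2)
    return proj_ball([1 - x[0], 1 - x[1]])

def f(x):                                    # 3/5-contraction
    return [0.6 * v for v in x]

def solve(x0, x1, lam=0.5, theta=0.5, iters=200):
    xp, x = x0, x1
    for n in range(1, iters + 1):
        alpha = 1e-6 / (n + 1)
        omega = 1.0 / (n + 1) ** 3
        d = ((x[0] - xp[0]) ** 2 + (x[1] - xp[1]) ** 2) ** 0.5
        th = min(theta, omega / d) if d > 0 else theta  # Remark 3.2 rule (assumed)
        w = [x[i] + th * (x[i] - xp[i]) for i in range(2)]
        Aw = A(w)
        y = proj_ball([w[i] - lam * Aw[i] for i in range(2)])
        Sy, fw = S(y), f(w)
        xp, x = x, [alpha * fw[i] + (1 - alpha) * Sy[i] for i in range(2)]
    return x

x_star = solve([-1.0, 0.0], [0.0, 1.0])
# approximate solution reported in the example: (0.5, 0.5)
```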
In this example, we found that the sequences \(\{\lambda _{n} \}\) in the C1 and C2 types are the best choice types in recursive computing of the sequence \(\{x_{n}\}\).
4.2 Common fixed point problem of nonexpansive strict pseudocontractions
A mapping \(T:C\rightarrow C\) is called β-strictly pseudocontractive if there exists \(\beta \in [0,1)\) such that
for all \(x,y \in C\). It is well known that if T is β-strictly pseudocontractive, then \(I-T\) is \(\frac{1-\beta }{2}\)-inverse strongly monotone. Moreover, by putting \(A=I-T\) we have \(\operatorname{Fix}(T)=VI(C,A)\). So by Theorem 4.1 we obtain the following result.
Theorem 4.4
Let C be a nonempty closed convex subset of a real Hilbert space H. Let T be a β-strict pseudocontraction of H into itself, let S be a nonexpansive mapping of C into itself such that \(\Omega:= \operatorname{Fix}(S)\cap \operatorname{Fix}(T) \neq \emptyset \), and let f be a k-contraction mapping of C into itself. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,1-\beta ), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:
-
(C1)
\(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
-
(C2)
\(0< a\leq \lambda _{n} \leq b < 1-\beta \) for some \(a,b>0\),
-
(C3)
\(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),
-
(C4)
\(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).
Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).
Example 4.5
Let \(C = \{a\in \mathbb{R}^{3}: \|a\|_{2} \leq 2 \}\). Find a common fixed point \(x^{*} \in C\) of the mappings S and T defined as follows:
Let \(H=(\mathbb{R}^{3},\|\cdot \|_{2} )\). Note that T is β-strictly pseudocontractive with \(\beta =\frac{1}{2}\), \(I-T\) is \(\frac{1}{L}\)-Lipschitzian with \(L=\frac{1}{4}\), and S is nonexpansive. We set \(f(x)=\frac{3x}{5}\) for \(x \in C\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1},e_{n}= \frac{1}{(n+1)^{3}}(1,1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.
We choose the initial points \(x_{0}=(1,-1,-1)^{T}\) and \(x_{1} = (-1,0,1)^{T} \in C\) (indeed, \(x_{0},x_{1}\) can be chosen arbitrarily in H) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.4 with an error of 10−6 and the same choice types of the sequences \(\{\lambda _{n} \}\) with \(L=\frac{1}{4}\). As \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\), where \(x^{*}\) is approximately \((1,1,1)^{T}\), as shown in Table 2. Figure 3 shows the benchmark for all choice types of the sequences \(\{\lambda _{n} \}\) in the recursive computation of \(\{x_{n} \}\), and Fig. 4 shows the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choice types of the sequences \(\{ \lambda _{n} \}\).
In this example, we found that the sequences \(\{\lambda _{n} \}\) in the C1 and C2 types are the best choice types in recursive computing of the sequence \(\{x_{n}\}\).
4.3 Convex minimization problem
We next consider the following convex minimization problem (CMP): find \(x^{*} \in H\) such that
where \(F: H\rightarrow \mathbb{R}\) is a convex differentiable function, and \(G:H \rightarrow \mathbb{R}\) is a convex function. It is well known that if ∇F is \((1/L)\)-Lipschitz continuous, then it is L-inverse strongly monotone [26]. Moreover, ∂G is maximal monotone [27]. Putting \(A=\nabla F\) and \(B=\partial G\), by Theorem 3.1 we obtain the following result.
Theorem 4.6
Let H be a real Hilbert space. Let \(F: H\rightarrow \mathbb{R}\) be a convex differentiable function with \((1/L)\)-Lipschitz continuous gradient ∇F, and let \(G: H \rightarrow \mathbb{R}\) be a convex and lower semicontinuous function. Let \(J_{\lambda }=(I+\lambda \partial G)^{-1}\) be the resolvent of ∂G for \(\lambda > 0\), let S be a nonexpansive mapping of H into itself such that \(\Omega:= \operatorname{Fix}(S)\cap (\nabla F+\partial G)^{-1}(0) \neq \emptyset \), and let f be a k-contraction mapping of H into itself. Let \(x_{0},x_{1} \in H\), and let \(\{x_{n}\} \subset H\) be the sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,2L), \{e_{n} \} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:
-
(C1)
\(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
-
(C2)
\(0< a\leq \lambda _{n} \leq b < 2L\) for some \(a,b>0\),
-
(C3)
\(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),
-
(C4)
\(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).
Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).
We next provide a formulation that will be used in our example and its numerical results.
Proposition 4.7
([28])
Let \(G: \mathbb{R}^{N} \rightarrow \mathbb{R} \) be given by \(G(x)=\|x\|_{1}\) for \(x \in \mathbb{R}^{N}\). For \(r > 0\) and \(x=(x_{1},x_{2},\ldots,x_{N})^{T} \in \mathbb{R}^{N}\), we have \((I+r \partial G)^{-1} (x) = y\) such that \(y = (y_{1},y_{2},\ldots,y_{N})^{T} \in \mathbb{R}^{N}\) where \(y_{i} = \operatorname{sign}(x_{i}) \max \{|x_{i}|-r,0\}\) for \(i=1,2,\ldots,N\).
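Proposition 4.7 is the componentwise soft-thresholding operator; a direct sketch:

```python
def prox_l1(x, r):
    # Resolvent (I + r dG)^{-1} for G = ||.||_1 (Proposition 4.7):
    # componentwise soft thresholding y_i = sign(x_i) * max(|x_i| - r, 0).
    sign = lambda t: (t > 0) - (t < 0)
    return [sign(xi) * max(abs(xi) - r, 0.0) for xi in x]
```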
Example 4.8
Find a point minimizing the following \(\ell _{1}\)-least square problem:
where \(x =(u,v,w)^{T} \in \mathbb{R}^{3}\).
Let \(H = (\mathbb{R}^{3},\|\cdot \|_{2})\), \(F(x)=\frac{1}{2}\|x\|_{2}^{2}+(-2,1,-3)x+3\), and \(G(x)=\|x\|_{1}\) for all \(x \in \mathbb{R}^{3}\). Then \(\nabla F(x) = (u-2,v+1,w-3)^{T}\) for all \(x \in \mathbb{R}^{3}\). It follows that F is convex and differentiable on \(\mathbb{R}^{3}\) with \(\frac{1}{L}\)-Lipschitz continuous gradient ∇F, where \(L=1\). Moreover, G is convex and lower semicontinuous but not differentiable on \(\mathbb{R}^{3}\).
We set \(S(x) =(2-u,-v,4-w)^{T}\) and \(f(x) = \frac{x}{5}\) for \(x \in \mathbb{R}^{3}\). Then S is nonexpansive, and f is a k-contraction with \(k \in [\frac{1}{5},1)\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1},e_{n}= \frac{1}{(n+1)^{3}}(1,1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.
For each \(n \in \mathbb{N}\), by Proposition 4.7 we have
We choose the initial points \(x_{0}=(1,-2,-1)^{T}\) and \(x_{1} = (-2,-1,2)^{T}\) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.6 with an error of 10−6 and the same choice types of the sequences \(\{\lambda _{n} \}\) with \(L=1\). As \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\), where the approximate minimizer of \(F+G\) is \((1,0,2)^{T}\) with approximate minimum value 0.5, as shown in Table 3. Figure 5 shows the benchmark for all choice types of the sequences \(\{\lambda _{n} \}\) in the recursive computation of \(\{x_{n} \}\), and Fig. 6 shows the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choice types of the sequences \(\{ \lambda _{n} \}\).
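A runnable sketch of this example follows, under the same assumed inertial viscosity forward–backward form as before, with \(A=\nabla F\), \(J_{\lambda }\) the soft-thresholding resolvent of \(\partial G=\partial \|\cdot \|_{1}\) from Proposition 4.7, and the error terms \(e_{n}\) omitted for brevity:

```python
# Sketch of Example 4.8 under an ASSUMED inertial viscosity
# forward-backward scheme (the paper's displayed scheme is omitted here).

def grad_F(x):                       # grad F(x) = (u-2, v+1, w-3)
    return [x[0] - 2, x[1] + 1, x[2] - 3]

def prox_l1(x, r):                   # resolvent of r * d||.||_1
    sgn = lambda t: (t > 0) - (t < 0)
    return [sgn(v) * max(abs(v) - r, 0.0) for v in x]

def S(x):                            # S x = (2-u, -v, 4-w), nonexpansive
    return [2 - x[0], -x[1], 4 - x[2]]

def solve(x0, x1, lam=1.0, theta=0.5, iters=200):
    xp, x = x0, x1
    for n in range(1, iters + 1):
        alpha = 1e-6 / (n + 1)
        omega = 1.0 / (n + 1) ** 3
        d = sum((a - b) ** 2 for a, b in zip(x, xp)) ** 0.5
        th = min(theta, omega / d) if d > 0 else theta  # Remark 3.2 rule (assumed)
        w = [x[i] + th * (x[i] - xp[i]) for i in range(3)]
        g = grad_F(w)
        y = prox_l1([w[i] - lam * g[i] for i in range(3)], lam)
        Sy, fw = S(y), [v / 5 for v in w]   # f(x) = x / 5
        xp, x = x, [alpha * fw[i] + (1 - alpha) * Sy[i] for i in range(3)]
    return x

x_star = solve([1.0, -2.0, -1.0], [-2.0, -1.0, 2.0])
# minimizer of F + G reported in the example: (1, 0, 2), value 0.5
```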
In this example, we found that the sequences \(\{\lambda _{n} \}\) in the C1 and C2 types are the best choice types in recursive computing of the sequence \(\{x_{n}\}\).
4.4 Split feasibility problem
We next consider the following split feasibility problem (SFP), which was first introduced by Censor and Elfving [29]: find
where C and Q are two nonempty closed convex subsets of two real Hilbert spaces H and K, respectively, and \(A: H\rightarrow K\) is a bounded linear operator. Problem (4.3) can then be reformulated as finding \(x^{*} \in C\) solving the following minimization problem:
which is a particular case of the convex minimization problem (4.2) when \(G = 0\). It is well known from Lemma 2.4 that F is a convex differentiable function with \(\|A\|^{2}\)-Lipschitz continuous gradient ∇F and weakly lower semicontinuous function on H, and \(\nabla F(x) = A^{*}(I-P_{Q})Ax\) for all \(x \in H\), where \(A^{*}\) denotes the adjoint of A. Putting \(F(x) = \frac{1}{2}\|Ax - P_{Q} Ax\|^{2}\) for \(x \in H, \partial G =0\), and \(S=P_{C}\), by Theorem 4.6 we obtain the following result.
Theorem 4.9
Let C and Q be two nonempty closed convex subsets of two real Hilbert spaces H and K, respectively. Let \(A: H\rightarrow K\) be a bounded linear operator, and let f be a k-contraction mapping of H into itself. Assume that the SFP (4.3) has a nonempty solution set Γ. Let \(x_{0},x_{1} \in H\), and let \(\{x_{n}\} \subset H\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0, \frac{2}{\|A\|^{2}}), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:
-
(C1)
\(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
-
(C2)
\(0< a\leq \lambda _{n} \leq b < \frac{2}{\|A\|^{2}}\) for some \(a,b>0\),
-
(C3)
\(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),
-
(C4)
\(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).
Then the sequence \(\{x_{n}\}\) converges strongly to the point \(x^{*} \in \Gamma \) satisfying \(x^{*} = P_{\Gamma }f(x^{*})\), which is a solution of the SFP (4.3).
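The iteration of Theorem 4.9 can be sketched numerically. Since the displayed recursion is not reproduced above, the sketch below is an assumption based on the scheme the theorem describes: an inertial step, a projected gradient (forward) step with \(\nabla F(y) = A^{T}(Ay-b)\) for a singleton \(Q=\{b\}\), and a viscosity step with error term. The test instance (A, b, C) is hypothetical:

```python
import numpy as np

def solve_sfp(A, b, proj_C, f, x0, x1, alpha, lam, theta, err,
              max_iter=500, tol=1e-6):
    """Inertial viscosity forward-backward sketch for the SFP with Q = {b}:
        y_n     = x_n + theta(n) (x_n - x_{n-1})                 (inertial step)
        z_n     = P_C(y_n - lam(n) A^T (A y_n - b))              (forward step)
        x_{n+1} = alpha(n) f(z_n) + (1 - alpha(n)) z_n + err(n)  (viscosity + error)
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        y = x + theta(n) * (x - x_prev)
        z = proj_C(y - lam(n) * (A.T @ (A @ y - b)))
        x_prev, x = x, alpha(n) * f(z) + (1 - alpha(n)) * z + err(n)
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Hypothetical instance: C is the ball of radius 2 in R^4, A selects the
# first three coordinates, b = (1, 0, 1); e.g., (1, 0, 1, 0) solves the SFP.
A = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.]])
b = np.array([1., 0., 1.])
proj_C = lambda u: u if np.linalg.norm(u) <= 2 else 2 * u / np.linalg.norm(u)
x_star = solve_sfp(A, b, proj_C, f=lambda u: u / 5,
                   x0=np.array([1., 1., 0., 2.]), x1=np.array([2., 1., 3., 0.]),
                   alpha=lambda n: 1e-3 / (n + 1),
                   lam=lambda n: 1.0,                 # in (0, 2/||A||^2) = (0, 2)
                   theta=lambda n: 0.5 / (n + 1) ** 2,
                   err=lambda n: np.ones(4) / (n + 1) ** 3)
```

The parameter choices mirror conditions (C1)–(C4): \(\alpha_{n}\to 0\) non-summably, \(\lambda_{n}\) bounded inside \((0,\frac{2}{\|A\|^{2}})\), and summable errors \(e_{n}\).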
Example 4.10
Let \(C =\{a \in \mathbb{R}^{4}: \|a\|_{2} \leq 2 \}\). Find some point \(x^{*} \in C\) that satisfies the following system of linear equations:
where \(x,y,z,w \in \mathbb{R}\).
Let \(H=(\mathbb{R}^{4},\|\cdot \|_{2})\) and \(K=(\mathbb{R}^{3},\|\cdot \|_{2})\). We set
\(Q = \{b:b=(1,2,3)^{T}\}\) and \(f(u) = \frac{u}{5}\) for \(u \in \mathbb{R}^{4}\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1},e_{n}= \frac{1}{(n+1)^{3}}(1,1,1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.
We choose the initial points \(x_{0}=(1,1,0,2)^{T}\) and \(x_{1} = (2,1,3,0)^{T}\) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.9 with stopping error \(10^{-6}\) for the same choices of the sequences \(\{\lambda _{n} \}\), taking \(L=\frac{1}{\|A\|^{2}}\) for the \(\frac{1}{L}\)-Lipschitz continuous gradient ∇F defined by (4.4), where \(\|A\|\) is the square root of the maximum eigenvalue of \(A^{T} A\). By Proposition 4.2, as \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\), where \(x^{*}\) is our solution; the numerical results are listed in Table 4. We also show the benchmark for all choices of the sequences \(\{\lambda _{n} \}\) in the recursive computation of \(\{x_{n} \}\) in Fig. 7 and the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choices of \(\{ \lambda _{n} \}\), in Fig. 8.
In this example, we found that the sequences \(\{\lambda _{n} \}\) of types A2, A3, A4, B1, B2, C1, and C2 are the best choices for the recursive computation of the sequence \(\{x_{n}\}\).
Remark 4.11
Our main result yields a new iterative shrinkage thresholding algorithm (NISTA) with an error term, based on the forward–backward splitting method with an error, as follows: \(x_{1} \in C\), and
where \(\{\lambda _{n}\} \subset (0,\infty )\), \(\{e_{n}\} \subset H\), \(D(B) \subset C\), and \(J_{\lambda _{n}}^{B} = J_{\lambda _{n}} = (I+\lambda _{n} B)^{-1}\). This method can be applied to solve many kinds of optimization problems. To speed up the convergence of the sequence \(\{x_{n}\}\) to its solution when A is α-inverse strongly monotone (equivalently, \(\frac{1}{L}\)-Lipschitzian with \(L = \alpha \)), we choose the step size \(\lambda _{n}\), which depends on L and controls the influence of the operator A in the forward step of the algorithm, as an alternating sequence \(\{\lambda _{n}\}\subset (0,2L)\) such that \(\lambda _{n} \rightarrow L\) as \(n \rightarrow \infty \); this choice promotes fast convergence of \(\{x_{n}\}\) to its solution. For instance,
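One hypothetical instance of such an alternating sequence (the specific formula is an assumption, not the paper's displayed choice) is \(\lambda _{n} = L\,(1 + \frac{(-1)^{n}}{n+1})\), which oscillates around L, stays in \((0,2L)\), and converges to L:

```python
def alternating_stepsize(n, L):
    """Alternating step size in (0, 2L) with lambda_n -> L as n -> infinity.
    The formula L * (1 + (-1)^n / (n + 1)) is a hypothetical example of the
    alternating scheme described in the text."""
    return L * (1.0 + (-1) ** n / (n + 1.0))

# e.g., with L = 1: 0.5, 1.333..., 0.75, 1.2, ... -> 1
steps = [alternating_stepsize(n, 1.0) for n in range(1, 6)]
```

Since \(|(-1)^{n}/(n+1)| \le \frac{1}{2}\) for \(n \ge 1\), every term indeed lies in \((\frac{L}{2}, \frac{3L}{2}) \subset (0, 2L)\).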
Furthermore, for fast convergence of the sequence \(\{x_{n}\}\) to its solution, we can choose the parameter \(\theta _{n}\) that controls the momentum term \(x_{n} -x_{n-1}\) as follows:
where \(N \in \mathbb{N}\), \(\theta \in [0,1)\), and \(\{\omega _{n}\}\) is a positive sequence such that \(\omega _{n} = o(\alpha _{n})\), for instance, \(\omega _{n} = \frac{1}{2^{n}}\) for all \(n \in \mathbb{N}\). This choice guarantees fast convergence of the sequence \(\{x_{n}\}\) to its solution, except for complex problems (e.g., image/signal recovery problems). In that case the parameter \(\theta _{n}\) can be chosen as follows:
where \(N \in \mathbb{N},\theta \in [0,1)\), \(\sigma _{n} \in [0,1)\) are such that \(\sigma _{n} \rightarrow 1\) as \(n \rightarrow \infty \), and \(\{\omega _{n}\}\) is a positive sequence such that \(\omega _{n} = o(\alpha _{n})\).
5 Conclusion
We have obtained a regularization method for solving the variational inclusion problem of the sum of two monotone operators in real Hilbert spaces. Under mild appropriate conditions on the parameters, this yields a short proof of a strong convergence theorem for this problem.
Availability of data and materials
Not applicable.
References
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383–390 (1979)
Combettes, P.L.: Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 16, 727–748 (2009)
Lopez, G., Martin-Marquez, V., Wang, F., Xu, H.-K.: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236 (2012)
Takahashi, W.: Viscosity approximation methods for resolvents of accretive operators in Banach spaces. J. Fixed Point Theory Appl. 1, 135–147 (2007)
Wang, F., Cui, H.: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 54, 485–491 (2012)
Xu, H.-K.: A regularization method for the proximal point algorithm. J. Glob. Optim. 36, 115–125 (2006)
Cholamjiak, W., Cholamjiak, P., Suantai, S.: An inertial forward–backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 20(1), 1–17 (2018)
Khan, S.A., Suantai, S., Cholamjiak, W.: Shrinking projection methods involving inertial forward–backward splitting methods for inclusion problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113(2), 645–656 (2019)
Cholamjiak, W., Pholasa, N., Suantai, S.: A modified inertial shrinking projection method for solving inclusion problems and quasi-nonexpansive multivalued mappings. Comput. Appl. Math. 37(5), 5750–5774 (2018)
Cholamjiak, W., Khan, S.A., Yambangwai, D., Kazmi, K.R.: Strong convergence analysis of common variational inclusion problems involving an inertial parallel monotone hybrid method for a novel application to image restoration. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 114(2), 1–20 (2020)
Cholamjiak, P., Cholamjiak, W., Suantai, S.: A modified regularization method for finding zeros of monotone operators in Hilbert spaces. J. Inequal. Appl. 2015, 220 (2015)
Cholamjiak, P., Kesornprom, S., Pholasa, N.: Weak and strong convergence theorems for the inclusion problem and the fixed-point problem of nonexpansive mappings. Mathematics 7(2), 167 (2019)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Tang, J.F., Chang, S.S., Yuan, F.: A strong convergence theorem for equilibrium problems and split feasibility problems in Hilbert spaces. Fixed Point Theory Appl. 2014, 36 (2014)
Nadezhkina, N., Takahashi, W.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
Nakajo, K., Shimoji, K., Takahashi, W.: Strong convergence to common fixed points of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 8(1), 11–34 (2007)
Takahashi, W., Takeuchi, Y., Kubota, R.: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276–286 (2008)
Takahashi, W., Xu, H.-K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
He, S., Yang, C.: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315 (2013)
Suantai, S., Pholasa, N., Cholamjiak, P.: The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 14(4), 1595–1615 (2018)
Tianchai, P.: Gradient projection method with a new step size for the split feasibility problem. J. Inequal. Appl. 2018, 120 (2018)
Sirirut, T., Tianchai, P.: On solving of constrained convex minimize problem using gradient projection method. Int. J. Math. Math. Sci. 2018, Article ID 1580837 (2018)
Baillon, J.B., Haddad, G.: Quelques proprietes des operateurs angle-bornes et cycliquement monotones. Isr. J. Math. 26, 137–150 (1977)
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Hale, E.T., Yin, W., Zhang, Y.: A fixed-point continuation method for \(\ell _{1}\)-regularized minimization with applications to compressed sensing. CAAM Technical Report, TR07-07, (2007)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Acknowledgements
The author would like to thank the Faculty of Science, Maejo University, for its financial support.
Funding
This research was supported by Faculty of Science, Maejo University.
Author information
Contributions
The author read and approved the final manuscript.
Ethics declarations
Competing interests
The author declares that he has no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Tianchai, P. The zeros of monotone operators for the variational inclusion problem in Hilbert spaces. J Inequal Appl 2021, 126 (2021). https://doi.org/10.1186/s13660-021-02663-2