Abstract
In this paper, we establish a new iterative algorithm by combining Nadezhkina and Takahashi’s modified extragradient method with Xu’s algorithm. The proposed algorithm produces a common solution of the split variational inequality problem and the fixed point problem. We show that the sequence generated by our algorithm converges weakly. Finally, we give some applications of the main results. This article extends previous results in this area.
1 Introduction
The variational inequality problem (VIP) is the problem of finding a point \(x^{*}\) in a subset C of a Hilbert space H such that
\(\langle f(x^{*}), x - x^{*} \rangle \geq 0 \quad \text{for all } x \in C,\)
(1.1)
where \(f:C \rightarrow H\) is a mapping; we denote the solution set of (1.1) by \(\operatorname{VI}(C,f)\). The VIP was introduced by Stampacchia [24]. In 1966, Hartman and Stampacchia [17] suggested the VIP as a tool for the study of partial differential equations. The ideas of the VIP are applied in many fields, including mechanics, nonlinear programming, game theory, and economic equilibrium. Moreover, it contains fixed point problems, optimization problems, complementarity problems, and systems of nonlinear equations as special cases (see [3, 12, 20,21,22, 29, 38, 40, 41]). Using the projection technique in [26], the VIP is equivalent to a fixed point problem, that is,
\(x^{*} \in \operatorname{VI}(C,f) \quad\text{if and only if}\quad x^{*} = P_{C}(x^{*} - \gamma f(x^{*})),\)
where \(\gamma> 0\) and \(P_{C}\) is the metric projection of H onto C. In [36], it is shown that the following sequence \(\{x_{n}\}\) of Picard iterates converges strongly to a point of \(\operatorname{VI}(C,f)\), since \(P_{C}(I - \gamma f)\) is a contraction on C whenever f is η-strongly monotone and k-Lipschitz continuous and \(0 < \gamma< \frac{2\eta}{k^{2}}\):
\(x_{n+1} = P_{C}(I - \gamma f)x_{n}, \quad n \geq 0.\)
(1.2)
However, algorithm (1.2) cannot be used to solve the VIP when f is merely monotone and k-Lipschitz continuous, as the counterexample in [43] shows. During the last decade, many authors have devoted their attention to algorithms for solving the VIP. One such method is the extragradient method, introduced and studied in 1976 by Korpelevich [19] in the finite-dimensional Euclidean space \({\mathbb {R}}^{n}\):
\( \begin{cases} y_{n} = P_{C}(x_{n} - \lambda fx_{n}), \\ x_{n+1} = P_{C}(x_{n} - \lambda fy_{n}), \end{cases} \)
(1.3)
where f is monotone and k-Lipschitz continuous and \(\lambda\in(0,\frac{1}{k})\). The sequence \(\{x_{n}\}\) then converges to a solution of the VIP.
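As a numerical illustration (not part of the original analysis), the extragradient iteration can be sketched as follows; the box constraint, the skew-linear f, and the step size are hypothetical choices satisfying the stated monotonicity and Lipschitz assumptions:

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient(f, x0, lam, n_iter=300):
    """Korpelevich's extragradient method:
       y_n = P_C(x_n - lam*f(x_n)),  x_{n+1} = P_C(x_n - lam*f(y_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = project_box(x - lam * f(x))
        x = project_box(x - lam * f(y))
    return x

# f(x) = Mx with a skew-symmetric M is monotone and 1-Lipschitz but not
# strongly monotone; here VI(C, f) = {0}.  A plain Picard step of type
# (1.2) merely rotates around the solution, while the extragradient
# step with lam < 1/k = 1 converges.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = extragradient(lambda x: M @ x, [1.0, 1.0], lam=0.5)
```

Running this drives the iterate to the unique solution \(x^{*}=0\) of the hypothetical instance.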
Takahashi and Toyoda [28] illustrated that if \(S:C \rightarrow C\) is a nonexpansive mapping and I is the identity mapping on H, then \(f=I-S\) is \(\frac{1}{2}\)-inverse strongly monotone and \(\operatorname{VI}(C,f)= \operatorname{Fix}(S)\). Motivated by this fact, they introduced and studied the following method for finding a common element of \(\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)\):
\(x_{n+1} = \alpha_{n} x_{n} + (1-\alpha_{n})SP_{C}(x_{n} - \lambda_{n} fx_{n}),\)
(1.4)
where \(S: C \rightarrow C\) is a nonexpansive mapping and \(f:C \rightarrow H\) is a ν-inverse strongly monotone mapping.
After that, Nadezhkina and Takahashi [27] suggested the following modified extragradient method motivated by the idea of Korpelevich [19]:
\( \begin{cases} y_{n} = P_{C}(x_{n} - \lambda_{n} fx_{n}), \\ x_{n+1} = \alpha_{n} x_{n} + (1-\alpha_{n})SP_{C}(x_{n} - \lambda_{n} fy_{n}), \end{cases} \)
(1.5)
where \(S: C \rightarrow C\) is a nonexpansive mapping and \(f:C \rightarrow H\) is a monotone and k-Lipschitz continuous mapping. They showed that the sequence generated by this method converges weakly to an element of \(\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)\).
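A minimal numerical sketch of a modified-extragradient step of this type (hypothetical data: the skew-linear f is monotone and 1-Lipschitz, and \(S=-I\) is nonexpansive with \(\operatorname{Fix}(S)=\{0\}\), so the common solution set is \(\{0\}\)):

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Metric projection P_C onto C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def modified_extragradient(f, S, x0, lam=0.5, a=0.5, n_iter=400):
    """Modified extragradient scheme:
       y_n     = P_C(x_n - lam*f(x_n)),
       x_{n+1} = a*x_n + (1-a)*S(P_C(x_n - lam*f(y_n)))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = project_box(x - lam * f(x))
        t = project_box(x - lam * f(y))
        x = a * x + (1 - a) * S(t)
    return x

# Hypothetical instance with VI(C,f) ∩ Fix(S) = {0}.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = modified_extragradient(lambda x: M @ x, lambda t: -t, [1.0, 1.0])
```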
Since then, it has been used to study the problems of finding a common solution of VIP and fixed point problem (see [42] and the references therein).
The split feasibility problem (SFP), proposed by Censor and Elfving [10], is the problem of finding a point
\(x^{*} \in C \quad\text{such that}\quad Ax^{*} \in Q,\)
(1.6)
where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1} \rightarrow H_{2}\) is a bounded linear operator. Since then, the SFP has been widely used in many applications such as signal processing, intensity-modulated radiation therapy treatment planning, phase retrieval, and other fields (see [5, 6, 9, 15, 18, 37] and the references therein).
One popular method for solving the SFP is the CQ algorithm presented by Byrne [5] in 2002:
\(x_{n+1} = P_{C}(x_{n} - \gamma A^{*}(I - P_{Q})Ax_{n}),\)
(1.7)
where \(0 < \gamma< \frac{2}{\|A\|^{2}}\) and \(A^{*}\) is the adjoint operator of A.
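A small numerical sketch of the CQ iteration (the sets C, Q and the operator A below are hypothetical choices, not from the paper):

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, n_iter=100):
    """Byrne's CQ algorithm for the SFP (find x in C with Ax in Q):
       x_{n+1} = P_C(x_n - gamma * A^T (Ax_n - P_Q(Ax_n))),
       with 0 < gamma < 2 / ||A||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Hypothetical instance: C = [0,1]^2, Q = [2,3] in R, A = [1 1], so the
# unique solution is x* = (1, 1) with Ax* = 2; here ||A||^2 = 2.
A = np.array([[1.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 2.0, 3.0)
x_star = cq_algorithm(A, proj_C, proj_Q, np.zeros(2), gamma=0.9)
```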
Since (1.7) can be viewed as a fixed point algorithm for averaged mappings, Xu [34] applied the Krasnosel’skiĭ–Mann (K-M) algorithm to obtain the following algorithm for solving the SFP:
\(x_{n+1} = (1-\alpha_{n})x_{n} + \alpha_{n}P_{C}(I - \gamma A^{*}(I - P_{Q})A)x_{n}.\)
(1.8)
The split variational inequality problem (SVIP) is the problem of finding a point
\(x^{*} \in \operatorname{VI}(C,f) \quad\text{such that}\quad Ax^{*} \in \operatorname{VI}(Q,g),\)
(1.9)
where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(f:H_{1} \rightarrow H_{1}\) and \(g:H_{2} \rightarrow H_{2}\) are mappings, and \(A: H_{1} \rightarrow H_{2}\) is a bounded linear operator. The SVIP was first investigated by Censor et al. [11]; it includes the split feasibility problem, the split zero point problem, the variational inequality problem, and the split minimization problem as special cases (see [5, 7, 11, 16, 31, 39]).
In 2017, Tian and Jiang [32] considered the following iterative method, combining the extragradient method with the CQ algorithm, for solving the SVIP:
where \(A:H_{1} \rightarrow H_{2}\) is a bounded linear operator, \(f:C \rightarrow H_{1}\) is a monotone and k-Lipschitz continuous mapping, and \(g:H_{2} \rightarrow H_{2}\) is a δ-inverse strongly monotone mapping.
In this paper, we establish a new iterative algorithm by combining Nadezhkina and Takahashi’s modified extragradient method with Xu’s algorithm. The proposed algorithm produces a common solution of the split variational inequality problem and the fixed point problem. We show that the sequence generated by our algorithm converges weakly. Finally, we give some applications of the main results. This article extends the results that appeared in [32].
2 Preliminaries
In order to establish our results, we recall the following definitions and preliminary results that will be used in the sequel. Throughout this section, let C be a closed convex subset of a real Hilbert space H.
A mapping \(T:C \rightarrow C\) is said to be k-Lipschitz continuous with \(k>0\) if
\(\|Tx - Ty\| \leq k\|x - y\|\)
for all \(x, y \in C\). A mapping T is said to be nonexpansive if \(k=1\). We say that \(x \in C\) is a fixed point of T if \(Tx=x\); the set of all fixed points of T is denoted by \(\operatorname{Fix}(T)\). It is well known that if C is a nonempty bounded closed convex subset of H and \(T:C \rightarrow C\) is nonexpansive, then \(\operatorname{Fix}(T) \neq\emptyset\). Moreover, for a fixed \(\alpha\in(0,1)\), a mapping \(T:H \rightarrow H\) is α-averaged if and only if it can be written as a convex combination of the identity mapping on H and a nonexpansive mapping \(S:H \rightarrow H\), i.e.,
\(T = (1-\alpha)I + \alpha S.\)
Recall that a mapping \(f: C \rightarrow H\) is called η-strongly monotone with \(\eta> 0\) if
\(\langle f(x) - f(y), x - y \rangle \geq \eta\|x - y\|^{2}\)
for all \(x, y \in C\). If \(\eta=0\), then the mapping f is said to be monotone. Further, a mapping f is said to be ν-inverse strongly monotone with \(\nu>0\) (ν-ism) if
\(\langle f(x) - f(y), x - y \rangle \geq \nu\|f(x) - f(y)\|^{2}\)
for all \(x, y \in C\). From [1], we know that an η-strongly monotone mapping f is monotone, and a ν-ism mapping f is monotone and \(\frac{1}{\nu}\)-Lipschitz continuous. Moreover, if f is ν-ism, then \(I-\lambda f\) is nonexpansive for \(\lambda\in(0,2\nu)\); see [34] for more details on averaged and ν-ism mappings.
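These properties can be checked numerically on a hypothetical example: one can verify that the sigmoid function, being increasing and \(\frac{1}{4}\)-Lipschitz, is 4-inverse strongly monotone, hence \(\frac{1}{4}\)-Lipschitz and with \(I-\lambda f\) nonexpansive for \(\lambda\in(0,8)\):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid: increasing, 1/4-Lipschitz

nu = 4.0    # sigmoid is nu-inverse strongly monotone (hypothetical example)
lam = 6.0   # any lam in (0, 2*nu) = (0, 8)
ok_ism, ok_lip, ok_nonexp = True, True, True
for _ in range(1000):
    x, y = rng.uniform(-10, 10, size=2)
    df = f(x) - f(y)
    ok_ism &= df * (x - y) >= nu * df**2 - 1e-12          # nu-ism inequality
    ok_lip &= abs(df) <= (1.0 / nu) * abs(x - y) + 1e-12  # (1/nu)-Lipschitz
    ok_nonexp &= abs((x - lam*f(x)) - (y - lam*f(y))) <= abs(x - y) + 1e-12
```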
Lemma 2.1
([8])
Let \(x \in H\) and \(z \in C\). Then the following statements are equivalent:
(i) \(z = P_{C}x\);
(ii) \(\langle x - z, z - y \rangle\geq0\) for all \(y \in C\);
(iii) \(\|x - y\|^{2} \geq\|x - z\|^{2} + \|y - z\|^{2}\) for all \(y \in C\).
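The equivalent characterizations above can be sanity-checked numerically for a simple projection (hypothetical example: the box \(C=[-1,1]^{3}\), whose metric projection is coordinatewise clipping):

```python
import numpy as np

rng = np.random.default_rng(1)
proj_C = lambda v: np.clip(v, -1.0, 1.0)   # P_C for the box C = [-1,1]^3

x = rng.normal(size=3) * 3.0
z = proj_C(x)                              # z = P_C(x), statement (i)
ok_ii, ok_iii = True, True
for _ in range(500):
    y = rng.uniform(-1.0, 1.0, size=3)     # arbitrary point of C
    # (ii): <x - z, z - y> >= 0
    ok_ii &= np.dot(x - z, z - y) >= -1e-12
    # (iii): ||x - y||^2 >= ||x - z||^2 + ||y - z||^2
    ok_iii &= np.dot(x - y, x - y) >= np.dot(x - z, x - z) + np.dot(y - z, y - z) - 1e-9
```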
We need the following definitions about set-valued mappings for proving our main results.
Definition 2.2
([30])
Let \(B:H \rightrightarrows H\) be a set-valued mapping with the effective domain \(D(B) = \{x \in H : Bx \neq\emptyset\}\).
The set-valued mapping B is said to be monotone if, for each \(x, y \in D(B)\), \(u \in Bx\), and \(v \in By\), we have
\(\langle x - y, u - v \rangle \geq 0.\)
Also, a monotone set-valued mapping B is said to be maximal if its graph \(G(B)=\{(x, y) : y \in Bx\}\) is not properly contained in the graph of any other monotone set-valued mapping.
The following property of the maximal monotone mappings is very convenient and helpful to use:
A monotone mapping B is maximal if and only if, for \((x,u) \in H \times H\),
\(\langle x - y, u - v \rangle\geq0\) for every \((y,v) \in G(B)\) implies \(u \in Bx\).
For a maximal monotone set-valued mapping B on H and \(r > 0\), the operator
\(J_{r} = (I + rB)^{-1}\)
is called the resolvent of B.
Remark 2.3
In [14], we obtain that \(\operatorname{Fix}(J_{r}) = B^{-1}0\) for all \(r > 0\) and \(J_{r}\) is firmly nonexpansive, that is,
\(\|J_{r}x - J_{r}y\|^{2} \leq \langle x - y, J_{r}x - J_{r}y \rangle \quad\text{for all } x, y \in H.\)
Indeed, by the definition of scalar multiplication, addition, and inversion operations, we have
Hence, for all \((x,y),(x^{*},y^{*})\in G(B)\), we get
Let \(f:C \rightarrow H\) be a monotone and k-Lipschitz continuous mapping. From [2], we know that the normal cone to C, defined by
\(N_{C}x = \{z \in H : \langle z, y - x \rangle\leq0 \text{ for all } y \in C\}, \quad x \in C,\)
is a maximal monotone mapping, and the resolvent of \(N_{C}\) is \(P_{C}\).
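As a concrete illustration of a resolvent (hypothetical example, not from the paper): for the maximal monotone \(B=\partial|\cdot|\) on \({\mathbb {R}}\), the resolvent \(J_{r}=(I+rB)^{-1}\) is the well-known soft-thresholding operator, and \(\operatorname{Fix}(J_{r})=B^{-1}0=\{0\}\):

```python
import numpy as np

def soft_threshold(x, r):
    """Resolvent J_r = (I + r*B)^{-1} of the maximal monotone operator
    B = subdifferential of the absolute value on R (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

# J_1 shrinks each entry toward 0 by 1 and clamps small entries to 0;
# only x = 0 is a fixed point, matching Fix(J_r) = B^{-1}(0) = {0}.
vals = soft_threshold(np.array([3.0, -0.4, 0.0]), r=1.0)
```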
The following results play the crucial role in the next section.
Lemma 2.4
([27])
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(B : H_{1} \rightrightarrows{H_{1}}\) be a maximal monotone mapping and \(J_{r}\) be the resolvent of B for \(r > 0\). Suppose that \(T : H_{2} \rightarrow H_{2}\) is a nonexpansive mapping and \(A : H_{1} \rightarrow H_{2}\) is a bounded linear operator. Assume that \(B^{-1}0 \cap A^{-1}\operatorname{Fix}(T) \neq\emptyset\). Let \(r, \gamma> 0 \) and \(z \in H_{1}\). Then the following statements are equivalent:
(i) \(z = J_{r}(I - \gamma A^{*}(I - T)A)z\);
(ii) \(0 \in A^{*}(I - T)Az + Bz\);
(iii) \(z \in B^{-1}0 \cap A^{-1}\operatorname{Fix}(T)\).
Lemma 2.5
([23])
Let \(\{\alpha_{n}\}\) be a real sequence satisfying \(0< a \leq\alpha _{n}\leq b< 1\) for all \(n \geq0\), and let \(\{v_{n}\}\) and \(\{w_{n}\}\) be two sequences in H such that, for some \(\sigma\geq0\),
\(\limsup_{n \rightarrow \infty}\|v_{n}\| \leq\sigma\), \(\limsup_{n \rightarrow \infty}\|w_{n}\| \leq\sigma\), and \(\lim_{n \rightarrow \infty}\|\alpha_{n}v_{n}+(1-\alpha_{n})w_{n}\| = \sigma\).
Then \(\lim_{n \rightarrow \infty}\|v_{n}-w_{n}\|=0\).
Lemma 2.6
([35])
Let \(\{x_{n}\}\) be a sequence in H satisfying the properties:
(i) \(\lim_{n \rightarrow \infty}\|x_{n}-u\|\) exists for each \(u \in C\);
(ii) \(\omega_{w}(x_{n}) \subset C\), where \(\omega_{w}(x_{n})\) denotes the set of weak cluster points of \(\{x_{n}\}\).
Then \(\{x_{n}\}\) converges weakly to a point in C.
Theorem 2.7
([27])
Let \(f:C \rightarrow H\) be a monotone and k-Lipschitz continuous mapping. Assume that \(S:C \rightarrow C\) is a nonexpansive mapping such that \(\operatorname {VI}(C,f)\cap\operatorname{Fix}(S) \neq\emptyset\). Let \(\{x_{n}\}\) and \(\{ y_{n}\}\) be sequences generated by (1.5), where \(\{\lambda_{n}\}\subset[a,b]\) for some \(a,b \in(0,\frac{1}{k})\) and \(\{\alpha_{n}\} \subset[c,d]\) for some \(c,d \in(0,1)\). Then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge weakly to the same point \(z \in\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)\), where \(z = \lim_{n \rightarrow \infty} P_{\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)}x_{n}\).
Theorem 2.8
([34])
Assume that the solution set of SFP is consistent and \(0< \gamma< \frac{2}{\|A\|^{2}}\). Let \(\{x_{n}\}\) be defined by the averaged CQ algorithm (1.8) where \(\{\alpha_{n}\}\) is a sequence in \([0,\frac{4}{2+\gamma\|A\|^{2}} ]\) satisfying the condition
\(\sum_{n=0}^{\infty}\alpha_{n} \bigl(\frac{4}{2+\gamma\|A\|^{2}} - \alpha_{n} \bigr) = \infty.\)
Then the sequence \(\{x_{n}\}\) is weakly convergent to a point in the solution set of SFP.
3 Main results
Our aim in this section is to consider an iterative method, combining Nadezhkina and Takahashi’s modified extragradient method with Xu’s algorithm, for solving the split variational inequality problems and fixed point problems.
Throughout our results, unless otherwise stated, we assume that C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Suppose that \(A : H_{1} \rightarrow H_{2}\) is a nonzero bounded linear operator, \(f : C\rightarrow H_{1}\) is a monotone and k-Lipschitz continuous mapping, and \(g: H_{2} \rightarrow H_{2}\) is a δ-inverse strongly monotone mapping. Suppose that \(T:H_{2} \rightarrow H_{2}\) and \(S : C \rightarrow C\) are nonexpansive. Let \(\{\mu_{n}\}, \{ \alpha_{n}\} \subset(0,1)\), \(\{\gamma_{n}\} \subset[a, b]\) for some \(a, b \in(0, \frac{1}{\|A\|^{2}})\) and \(\{\lambda_{n}\} \subset[c, d]\) for some \(c, d \in(0,\frac{1}{k})\).
Firstly, we present an algorithm for solving the variational inequality problems and split common fixed point problems, that is, finding a point \(x^{*}\) such that
\(x^{*} \in \operatorname{VI}(C, f)\cap\operatorname{Fix}(S) \quad\text{and}\quad Ax^{*} \in \operatorname{Fix}(T).\)
(3.1)
Theorem 3.1
Set \(\varGamma= \{z \in\operatorname{VI}(C, f)\cap \operatorname{Fix}(S) : Az \in\operatorname{Fix}(T)\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
\( \begin{cases} y_{n} = \mu_{n} x_{n} + (1-\mu_{n})P_{C}(I - \gamma_{n} A^{*}(I - T)A)x_{n}, \\ z_{n} = P_{C}(y_{n} - \lambda_{n} fy_{n}), \\ x_{n+1} = \alpha_{n} x_{n} + (1-\alpha_{n})SP_{C}(y_{n} - \lambda_{n} fz_{n}) \end{cases} \)
(3.2)
for each \(n \in {\mathbb {N}}\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).
Proof
It follows from Theorem 3.1 [32] that \(P_{C}(I-\gamma _{n}A^{*}(I-T)A)\) is \(\frac{1+\gamma_{n}\|A\|^{2}}{2}\)-averaged. It is easy to see from Lemma 2.2 [25] that \(\mu_{n} I+(1-\mu_{n})P_{C}(I - \gamma_{n} A^{*}(I - T)A)\) is \(\mu_{n}+(1-\mu_{n})\frac {1+\gamma_{n}\|A\|^{2}}{2}\)-averaged. So, \(y_{n}\) can be rewritten as
where \(\beta_{n}=\mu_{n}+(1-\mu_{n})\frac{1+\gamma_{n}\|A\|^{2}}{2}\) and \(V_{n}\) is a nonexpansive mapping for each \(n \in {\mathbb {N}}\).
Let \(u \in\varGamma\). Then we get that
Thus
Set \(t_{n}=P_{C}(y_{n}-\lambda_{n} fz_{n})\) for all \(n \geq0\). It follows from Lemma 2.1 that
Using Lemma 2.1 again, this yields
and so
For each \(n \in {\mathbb {N}}\), we obtain that
That is,
So,
By the convexity of the norm and (3.6), we have
Hence, there exists \(c\geq0\) such that
and then \(\{x_{n}\}\) is bounded. This implies that \(\{y_{n}\}\) and \(\{t_{n}\} \) are also bounded. From (3.5) and (3.7), we deduce that
Therefore, it follows from (3.8) that
By (3.3), we get that
Relation (3.7) implies
and so
Moreover, by the definition of \(z_{n}\), we have
Hence
Using the triangle inequality, we see that
and
This implies that
The definition of \(y_{n}\) implies
Thus
Let \(z \in\omega_{w}(x_{n})\). Then there exists a subsequence \(\{x_{n_{i}}\} \) of \(\{x_{n}\}\) which converges weakly to z. We obtain that \(\{ A^{*}(I-T)Ax_{n_{i}}\}\) is bounded because \(A^{*}(I-T)A\) is \(\frac{1}{2\|A\| ^{2}}\)-inverse strongly monotone. By the firm nonexpansiveness of \(P_{C}\), we see that
Without loss of generality, we may assume that \(\gamma_{n_{i}} \rightarrow \hat {\gamma} \in(0,\frac{1}{\|A\|^{2}})\), and so
we have
By the demiclosedness principle [33], we have
Using Corollary 2.9 [32], this yields
Next, we claim that \(z \in \operatorname{VI}(C,f)\). From (3.9), (3.10), and (3.11), we know that \(y_{n_{i}} \rightharpoonup z\), \(z_{n_{i}} \rightharpoonup z\), and \(t_{n_{i}} \rightharpoonup z\). Define the set-valued mapping \(B: H_{1} \rightrightarrows H_{1}\) by
From [27], B is maximal monotone, and \(0 \in Bv\) if and only if \(v \in\operatorname{VI}(C,f)\). If \((v,w) \in G(B)\), then \(w \in Bv=f(v)+N_{C}v\), and so \(w-f(v) \in N_{C}v\). Thus, for any \(p \in C\), we get
Since \(v \in C\), it follows from the definition of \(z_{n}\) and Lemma 2.1 that
Consequently,
By using (3.17) with \(\{z_{n_{i}}\}\), we obtain
Thus
By taking \(i\rightarrow \infty\) in the above inequality, we deduce
By the maximal monotonicity of B, we get \(0 \in Bz\) and so \(z \in \operatorname{VI}(C,f)\). Now, we will show that \(z \in\operatorname{Fix}(S)\). Since S is nonexpansive, it follows from (3.4) and (3.6) that
and by taking limit superior in the above inequalities and using (3.8), we obtain
Further,
and so Lemma 2.5 implies
From the fact that
This implies that
Now, by the demiclosedness principle [33], we have \(z \in\operatorname{Fix}(S)\). Consequently, \(\omega_{w}(x_{n}) \subset\varGamma\). By Lemma 2.6, the sequence \(\{x_{n}\}\) is weakly convergent to a point z in Γ and Lemma 3.2 [28] assures \(z=\lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\). □
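As a numerical illustration of an iteration of this type, the following Python sketch runs a CQ-type averaged step toward \(\operatorname{Fix}(T)\) through A, followed by an extragradient step for f and an averaged step with S. All operators below are hypothetical stand-ins chosen so that Γ reduces to \(\{0\}\); this is a sketch under those assumptions, not the paper's analysis:

```python
import numpy as np

proj_C = lambda v: np.clip(v, -1.0, 1.0)      # P_C, C = [-1,1]^2
T      = lambda v: np.clip(v, -0.5, 0.5)      # nonexpansive, Fix(T) = [-0.5,0.5]^2
S      = lambda v: -v                         # nonexpansive, Fix(S) = {0}
M      = np.array([[0.0, 1.0], [-1.0, 0.0]])  # f = M(.) monotone, 1-Lipschitz
A      = np.eye(2)                            # bounded linear operator
f      = lambda v: M @ v

# Parameters within the stated ranges: gamma < 1/||A||^2, lam < 1/k.
mu, alpha, gamma, lam = 0.5, 0.5, 0.5, 0.5
x = np.array([1.0, 1.0])
for _ in range(500):
    y = mu * x + (1 - mu) * proj_C(x - gamma * A.T @ (A @ x - T(A @ x)))
    z = proj_C(y - lam * f(y))
    x = alpha * x + (1 - alpha) * S(proj_C(y - lam * f(z)))
# Here Γ = {0}: 0 solves VI(C,f), is fixed by S, and A0 ∈ Fix(T).
```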
Remark 3.2
We can obtain the following statements:
(i) If \(f=0\), \(T=P_{Q}\), and \(S=I\), then problem (3.1) coincides with the SFP, and algorithm (3.2) reduces to algorithm (1.8) for solving the SFP.
(ii) If \(T=I\), then problem (3.1) coincides with the VIP and the FPP, and algorithm (3.2) reduces to algorithm (1.5) for solving the VIP and the FPP.
(iii) If \(S=I\), then problem (3.1) coincides with Problem 3.1 in [32]; if, in addition, \(\alpha_{n}=\mu_{n}=0\), then algorithm (3.2) reduces to Algorithm 3.2 in [32].
The following result provides suitable conditions in order to guarantee the existence of a common solution of the split variational inequality problems and fixed point problems, that is, finding a point \(x^{*}\) such that
\(x^{*} \in \operatorname{VI}(C,f)\cap\operatorname{Fix}(S) \quad\text{and}\quad Ax^{*} \in \operatorname{VI}(Q,g).\)
(3.19)
Theorem 3.3
Set \(\varGamma= \{z \in\operatorname{VI}(C,f) \cap\operatorname{Fix}(S) : Az \in\operatorname{VI}(Q,g)\}\) and assume that \(\varGamma \neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
\( \begin{cases} y_{n} = \mu_{n} x_{n} + (1-\mu_{n})P_{C}(I - \gamma_{n} A^{*}(I - P_{Q}(I-\theta g))A)x_{n}, \\ z_{n} = P_{C}(y_{n} - \lambda_{n} fy_{n}), \\ x_{n+1} = \alpha_{n} x_{n} + (1-\alpha_{n})SP_{C}(y_{n} - \lambda_{n} fz_{n}) \end{cases} \)
(3.20)
for each \(n \in {\mathbb {N}}\), where \(\theta\in(0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).
Proof
It is clear from the δ-inverse strong monotonicity of g that g is \(\frac{1}{\delta}\)-Lipschitz continuous, and so, for \(\theta\in (0,2\delta)\), the mapping \(I-\theta g\) is nonexpansive. Since \(P_{Q}\) is firmly nonexpansive, \(P_{Q}(I-\theta g)\) is nonexpansive. By taking \(T=P_{Q}(I-\theta g)\) in Theorem 3.1, we obtain that \(\{ x_{n}\}\) converges weakly to a point \(z\in\operatorname{VI}(C,f) \cap\operatorname {Fix}(S)\) with \(Az \in\operatorname{Fix}(P_{Q}(I-\theta g))\). It follows from \(Az =P_{Q}(I-\theta g)Az\) and Lemma 2.1 that \(Az \in\operatorname {VI}(Q,g)\). This completes the proof. □
Remark 3.4
We can obtain the following statements:
(i) If \(f=0\), \(g=0\), and \(S=I\), then problem (3.19) coincides with the SFP, and algorithm (3.20) reduces to algorithm (1.8) for solving the SFP.
(ii) If \(g=0\) and \(Q=H_{2}\), then problem (3.19) coincides with the VIP and the FPP, and algorithm (3.20) reduces to algorithm (1.5) for solving the VIP and the FPP.
(iii) If \(S=I\), then problem (3.19) coincides with Problem 3.1 in [32]; if, in addition, \(\alpha_{n}=\mu_{n}=0\), then algorithm (3.20) reduces to algorithm (1.10).
4 Applications
In this section, by using the main results, we give some applications to the weak convergence of the produced algorithms for the equilibrium problem, zero point problem and convex minimization problem.
The equilibrium problem was formulated by Blum and Oettli [4] in 1994: find a point \(x^{*}\) such that
\(F(x^{*},y) \geq0 \quad\text{for all } y \in C,\)
(4.1)
where \(F:C \times C \rightarrow {\mathbb {R}}\) is a bifunction. The solution set of equilibrium problem (4.1) is denoted by \(\operatorname{EP}(C,F)\).
In [4], we know that if F is a bifunction such that
(A1) \(F(x,x)=0\) for all \(x \in C\);
(A2) F is monotone, that is, \(F(x,y) + F(y,x) \leq0\) for all \(x, y \in C\);
(A3) for each \(x, y, z \in C\), \(\limsup_{t\downarrow0} F(tz + (1 - t)x, y) \leq F(x, y)\);
(A4) for each fixed \(x \in C\), \(y \mapsto F(x,y)\) is lower semicontinuous and convex,
then there exists \(z \in C\) such that
\(F(z,y) + \frac{1}{r}\langle y - z, z - x \rangle\geq0 \quad\text{for all } y \in C,\)
where r is a positive real number and \(x \in H\).
For \(r > 0\) and \(x\in H\), the resolvent \(T_{r} : H \rightarrow C\) of a bifunction F which satisfies conditions (A1)–(A4) is formulated as follows:
\(T_{r}x = \bigl\{z \in C : F(z,y) + \frac{1}{r}\langle y - z, z - x \rangle\geq0 \text{ for all } y \in C \bigr\}\)
and has the following properties:
(i) \(T_{r}\) is single-valued and firmly nonexpansive;
(ii) \(\operatorname{Fix}(T_{r}) = \operatorname{EP}(C,F)\);
(iii) \(\operatorname{EP}(C,F)\) is closed and convex.
For more details, see [13].
The following result is related to the equilibrium problems by applying Theorem 3.1.
Theorem 4.1
Let \(F : C \times C \rightarrow {\mathbb {R}}\) be a bifunction satisfying conditions (A1)–(A4). Set \(\varGamma= \{z \in\operatorname{VI}(C, f)\cap \operatorname {Fix}(S) : Az \in\operatorname{EP}(C,F)\}\) and suppose that \(\varGamma\neq \emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
for each \(n \in {\mathbb {N}}\), where \(T_{r}\) is a resolvent of F for \(r > 0\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma \), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).
Proof
Since \(T_{r}\) is nonexpansive, the proof follows from Theorem 3.1 by taking \(T_{r}=T\). □
The following results are the application of Theorem 3.1 to the zero point problem.
Theorem 4.2
Let \(B : H_{2} \rightrightarrows{H_{2}}\) be a maximal monotone mapping with \(D(B) \neq\emptyset\). Set \(\varGamma= \{z \in\operatorname{VI}(C, f)\cap \operatorname{Fix}(S) : Az \in B^{-1}0\}\) and assume that \(\varGamma\neq \emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
for each \(n \in {\mathbb {N}}\), where \(J_{r}\) is a resolvent of B for \(r > 0\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma \), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).
Proof
Since \(J_{r}\) is firmly nonexpansive and \(\operatorname{Fix}(J_{r})=B^{-1}0\), the proof follows from Theorem 3.1 by taking \(J_{r}=T\). □
Theorem 4.3
Let \(B : H_{2}\rightrightarrows{H_{2}}\) be a maximal monotone mapping with \(D(B) \neq\emptyset\) and \(F : H_{2} \rightarrow H_{2}\) be a δ-inverse strongly monotone mapping. Set \(\varGamma= \{z \in\operatorname{VI}(C,f) \cap\operatorname{Fix}(S) : Az \in(B + F)^{-1}0\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
for each \(n \in {\mathbb {N}}\), where \(J_{r}\) is a resolvent of B for \(r \in (0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).
Proof
Since F is δ-inverse strongly monotone, then \(I-rF\) is nonexpansive. By the nonexpansiveness of \(J_{r}\), we obtain that \(J_{r}(I-rF)\) is also nonexpansive. We know that \(z \in(B+F)^{-1}0\) if and only if \(z=J_{r}(I-rF)z\). Thus the proof follows from Theorem 3.1 by taking \(J_{r}(I-rF)=T\). □
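The forward-backward operator \(T=J_{r}(I-rF)\) used above can be illustrated numerically in one dimension (hypothetical choices: \(B=\partial|\cdot|\), whose resolvent is soft-thresholding, and the 1-inverse strongly monotone \(F(x)=x-1\); then \((B+F)^{-1}0=\{0\}\)):

```python
import numpy as np

soft = lambda x, r: np.sign(x) * max(abs(x) - r, 0.0)  # resolvent J_r of B = d|.|

# Forward-backward operator T = J_r(I - r F); its fixed points are
# exactly the zeros of B + F.  Here 0 is the unique zero, since
# 0 ∈ d|0| + F(0) = [-1, 1] - 1 = [-2, 0].
F = lambda x: x - 1.0
r = 0.5                      # r in (0, 2*delta) with delta = 1
z = 3.0
for _ in range(100):
    z = soft(z - r * F(z), r)
```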
Let \(\phi:C \rightarrow {\mathbb {R}}\) be a real-valued convex function. The typical form of the constrained convex minimization problem is to find a point \(x^{*} \in C\) satisfying
\(\phi(x^{*}) = \min_{x \in C} \phi(x).\)
(4.5)
Denote the solution set of constrained convex minimization problem (4.5) by \(\arg\min_{x \in C} \phi(x)\).
By applying Theorem 3.3, we get the following result.
Theorem 4.4
Let \(\phi: H_{2} \rightarrow {\mathbb {R}}\) be a differentiable convex function and suppose that ∇ϕ is a δ-inverse strongly monotone mapping. Set \(\varGamma= \{z \in\operatorname{VI}(C,f) \cap\operatorname{Fix}(S) : Az \in\arg \min_{y\in Q} \phi(y)\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
for each \(n \in {\mathbb {N}}\), where \(\theta\in(0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).
Proof
Since ϕ is convex, for each \(x,y \in C\), we have
\(\phi(y) \geq\phi(x) + \langle\nabla\phi(x), y - x \rangle.\)
It follows that \(\langle\nabla\phi(x),x-z\rangle\geq\phi(x) -\phi (z) \geq\langle\nabla\phi(z),x-z\rangle\). This implies that ∇ϕ is monotone. By Lemma 4.6 [32] and taking \(g= \nabla\phi\), the proof follows from Theorem 3.3. □
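For a concrete picture of constrained convex minimization via the gradient (hypothetical example, not from the paper): \(\phi(x)=\frac{1}{2}\|x-c\|^{2}\) has the 1-inverse strongly monotone gradient \(\nabla\phi(x)=x-c\), and the gradient-projection iteration converges to the projection of c onto C:

```python
import numpy as np

# Gradient-projection sketch for a problem of type (4.5):
#   x_{n+1} = P_C(x_n - theta * grad(x_n)),  theta in (0, 2*delta).
c = np.array([2.0, -3.0])
proj_C = lambda v: np.clip(v, -1.0, 1.0)   # C = [-1,1]^2
grad = lambda x: x - c                     # 1-inverse strongly monotone
x = np.zeros(2)
for _ in range(200):
    x = proj_C(x - 0.5 * grad(x))
# The minimizer over C is P_C(c) = (1, -1).
```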
We obtain the following result for solving the split minimization problems and fixed point problems by applying Theorem 3.3.
Theorem 4.5
Let \(\phi_{1}: H_{1} \rightarrow {\mathbb {R}}\) and \(\phi_{2} : H_{2} \rightarrow {\mathbb {R}}\) be differentiable convex functions. Suppose that \(\nabla\phi_{1}\) is a k-Lipschitz continuous mapping and \(\nabla\phi_{2}\) is δ-inverse strongly monotone. Set \(\varGamma= \{z \in\operatorname{arg\,min}_{x\in C} \phi_{1}(x) \cap\operatorname {Fix}(S) : Az \in\operatorname{arg\,min}_{y\in Q} \phi_{2}(y)\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
for each \(n \in {\mathbb {N}}\), where \(\theta\in(0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).
Proof
The convexity of \(\phi_{1}\) implies that \(\nabla\phi_{1}\) is monotone. The result follows from Lemma 4.6 [32] by taking \(f = \nabla \phi_{1}\) and \(g = \nabla\phi_{2}\) in Theorem 3.3. □
References
Alghamdi, M.A., Shahzad, N., Zegeye, H.: On solutions of variational inequality problems via iterative methods. Abstr. Appl. Anal. 2014, Article ID 424875 (2014)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
Billups, S.C., Murty, K.G.: Complementarity problems. J. Comput. Appl. Math. 124, 303–318 (2000)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Byrne, C., Censor, Y., Gibali, A., Reich, S.: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759–775 (2012)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2011)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Cho, S.Y., Qin, X., Yao, J.C., Yao, Y.: Viscosity approximation splitting methods for monotone and nonexpansive operators in Hilbert spaces. J. Nonlinear Convex Anal. 19, 251–264 (2018)
Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
Fang, N.N., Gong, Y.P.: Viscosity iterative methods for split variational inclusion problems and fixed point problems of a nonexpansive mapping. Commun. Optim. Theory 2016, Article ID 11 (2016)
Gibali, A.: Two simple relaxed perturbed extragradient methods for solving variational inequalities in Euclidean spaces. J. Nonlinear Var. Anal. 2, 49–61 (2018)
Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)
Kim, J.K., Salahuddin: A system of nonconvex variational inequalities in Banach spaces. Commun. Optim. Theory 2016, Article ID 20 (2016)
Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747–756 (1976)
Mancino, O.G., Stampacchia, G.: Convex programming and variational inequalities. J. Optim. Theory Appl. 9(1), 3–23 (1972)
Qin, X., Cho, S.Y., Wang, L.: Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 67, 1377–1388 (2018). https://doi.org/10.1080/02331934.2018.1491973
Qin, X., Yao, J.C.: Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 18, 925–935 (2017)
Schu, J.: Weak and strong convergence to fixed points of asymptotically nonexpansive mappings. Bull. Aust. Math. Soc. 43, 153–159 (1991)
Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)
Suwannaprapa, M., Petrot, N., Suantai, S.: Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl. 2017, 6 (2017)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Takahashi, W., Nadezhkina, N.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)
Takahashi, W., Toyoda, M.: Weak convergence theorem for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417–428 (2003)
Takahashi, W., Wen, C.F., Yao, J.C.: An implicit algorithm for the split common fixed point problem in Hilbert spaces and applications. Appl. Anal. Optim. 1, 423–439 (2017)
Takahashi, W., Xu, H.K., Yao, J.C.: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23(2), 205–221 (2015)
Tian, M., Jiang, B.N.: Weak convergence theorem for variational inequality problems with monotone mapping in Hilbert spaces. J. Inequal. Appl. 2016, 286 (2016)
Tian, M., Jiang, B.N.: Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space. J. Inequal. Appl. 2017, 123 (2017)
Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)
Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)
Yao, Y., Marino, G., Liou, Y.C.: A hybrid method for monotone variational inequalities involving pseudocontractions. Fixed Point Theory Appl. 2011, 180534 (2011)
Yao, Y.H., Agarwal, R.P., Postolache, M., Liou, Y.C.: Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 183 (2014)
Yao, Y.H., Liou, Y.C., Yao, J.C.: Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 127 (2015)
Yao, Y.H., Liou, Y.C., Yao, J.C.: Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 10, 843–854 (2017)
Yuan, H.: A splitting algorithm in a uniformly convex and 2-uniformly smooth Banach space. J. Nonlinear Funct. Anal. 2018, Article ID 26 (2018)
Zegeye, H., Shahzad, N., Yao, Y.H.: Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 64, 453–471 (2015)
Zeng, L.C., Yao, J.C.: Strong convergence theorems for fixed point problems and variational inequality problems. Taiwan. J. Math. 10(5), 1293–1303 (2006)
Zhou, H., Zhou, Y., Feng, G.: Iterative methods for solving a class of monotone variational inequality problems with applications. J. Inequal. Appl. 2015, 68 (2015)
Acknowledgements
The first author is thankful to the Science Achievement Scholarship of Thailand. We would like to express our deep thanks to the Department of Mathematics, Faculty of Science, Naresuan University for the support.
Funding
The research was supported by the Science Achievement Scholarship of Thailand and Naresuan University.
Author information
Authors and Affiliations
Contributions
All authors contributed equally to the work. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Lohawech, P., Kaewcharoen, A. & Farajzadeh, A. Algorithms for the common solution of the split variational inequality problems and fixed point problems with applications. J Inequal Appl 2018, 358 (2018). https://doi.org/10.1186/s13660-018-1942-1