Abstract
In this paper, we introduce two general iterative methods (one implicit method and one explicit method) for finding a solution of a general system of variational inequalities (GSVI) with the constraints of finitely many generalized mixed equilibrium problems and a fixed point problem of a continuous pseudocontractive mapping in a Hilbert space. Then we establish strong convergence of the proposed implicit and explicit iterative methods to a solution of the GSVI with the above constraints, which is the unique solution of a certain variational inequality. The results presented in this paper improve, extend, and develop the corresponding results in the earlier and recent literature.
1 Introduction
Let C be a nonempty closed convex subset of a real Hilbert space H with inner product \(\langle \cdot , \cdot \rangle \) and induced norm \(\Vert \cdot \Vert \). We denote by \(P_{C}\) the metric projection of H onto C and by \(\operatorname {Fix}(S)\) the set of fixed points of the mapping S. Recall that a mapping \(T:C\to H\) is nonexpansive if \(\Vert Tx-Ty\Vert \leq \Vert x-y \Vert \), \(\forall x, y\in C\). A mapping \(T: C\to H\) is called pseudocontractive if

$$ \langle Tx-Ty, x-y\rangle \leq \Vert x-y \Vert ^{2},\quad \forall x, y\in C. $$

This inequality can be equivalently rewritten as

$$ \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}+ \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2},\quad \forall x, y\in C, $$

where I is the identity mapping.
\(T: C\to H\) is said to be k-strictly pseudocontractive if there exists a constant \(k\in [0, 1)\) such that

$$ \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}+k \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2},\quad \forall x, y\in C. $$
A mapping \(V: C\to H\) is said to be l-Lipschitzian if there exists a constant \(l\geq 0\) such that

$$ \Vert Vx-Vy \Vert \leq l \Vert x-y \Vert ,\quad \forall x, y\in C. $$
A mapping \(F: C\to H\) is called monotone if

$$ \langle Fx-Fy, x-y\rangle \geq 0,\quad \forall x, y\in C, $$

and F is called α-inverse-strongly monotone if there exists a constant \(\alpha >0\) such that

$$ \langle Fx-Fy, x-y\rangle \geq \alpha \Vert Fx-Fy \Vert ^{2},\quad \forall x, y\in C. $$
If F is an α-inverse-strongly monotone mapping, then it is obvious that F is \(\frac{1}{\alpha }\)-Lipschitz continuous, that is, \(\Vert Fx-Fy\Vert \leq \frac{1}{\alpha }\Vert x-y\Vert \) for all \(x, y\in C\).
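For completeness, this Lipschitz bound is a one-line consequence of the Cauchy–Schwarz inequality:

```latex
\alpha \Vert Fx-Fy \Vert ^{2}
  \leq \langle Fx-Fy, x-y\rangle
  \leq \Vert Fx-Fy \Vert \, \Vert x-y \Vert ,
```

and dividing by \(\alpha \Vert Fx-Fy\Vert \) when \(Fx\neq Fy\) gives \(\Vert Fx-Fy\Vert \leq \frac{1}{\alpha }\Vert x-y\Vert \) (the bound is trivial when \(Fx=Fy\)).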
A mapping \(F: C\to H\) is called β-strongly monotone if there exists a constant \(\beta >0\) such that

$$ \langle Fx-Fy, x-y\rangle \geq \beta \Vert x-y \Vert ^{2},\quad \forall x, y\in C. $$
A linear operator \(A: H\to H\) is said to be strongly positive on H if there exists a constant \(\bar{\gamma }>0\) such that

$$ \langle Ax, x\rangle \geq \bar{\gamma } \Vert x \Vert ^{2},\quad \forall x\in H. $$
Let \(F: C\to H\) be a mapping. The classical variational inequality problem (VIP) is to find \(x^{*}\in C\) such that

$$ \bigl\langle Fx^{*}, x-x^{*}\bigr\rangle \geq 0,\quad \forall x\in C. $$
We denote the set of solutions of VIP (1.1) by \(\operatorname {VI}(C, F)\).
In 2008, Ceng et al. [1] considered the following general system of variational inequalities (GSVI) of finding \((x^{*}, y^{*})\in C\times C\) such that

$$ \textstyle\begin{cases} \langle \lambda F_{1}y^{*}+x^{*}-y^{*}, x-x^{*}\rangle \geq 0,\quad \forall x\in C, \\ \langle \nu F_{2}x^{*}+y^{*}-x^{*}, x-y^{*}\rangle \geq 0,\quad \forall x\in C, \end{cases} $$
where \(F_{1}\), \(F_{2}\) are α-inverse-strongly monotone and β-inverse-strongly monotone, respectively, and \(\lambda \in (0, 2\alpha )\) and \(\nu \in (0, 2\beta )\) are two constants. Many iterative methods have been developed for solving GSVI (1.2); see [2,3,4,5,6,7] and the references therein.
Subsequently, Alofi et al. [8] introduced two composite iterative algorithms, based on the composite iterative methods in Ceng et al. [9] and Jung [10], for solving GSVI (1.2), and they showed strong convergence of the proposed algorithms to a solution of this problem.
Very recently, Kong et al. [11] established the strong convergence of two hybrid steepest-descent schemes to the same solution of GSVI (1.2), which is also a common solution of finitely many variational inclusions and a minimization problem.
Lemma 1.1
(see [12, Proposition 3.1])
Let C be a nonempty closed convex subset of a real Hilbert space H. For given \(x^{*}, y^{*} \in C\), \((x^{*}, y^{*})\) is a solution of GSVI (1.3) for continuous monotone mappings \(F_{1}\) and \(F_{2}\) if and only if \(x^{*}\) is a fixed point of the composite \(R=F_{1, \lambda }F_{2, \nu }: H\to C\) of nonexpansive mappings \(F_{1, \lambda }: H\to C\) and \(F_{2,\nu }: H \to C\), where \(y^{*}=F_{2, \nu }x^{*}\),
$$ F_{1, \lambda }x=\biggl\{ z\in C: \langle y-z, F_{1}z\rangle +\frac{1}{\lambda }\langle y-z, z-x\rangle \geq 0, \forall y\in C\biggr\} , $$

and

$$ F_{2, \nu }x=\biggl\{ z\in C: \langle y-z, F_{2}z\rangle +\frac{1}{\nu }\langle y-z, z-x\rangle \geq 0, \forall y\in C\biggr\} . $$
For simplicity, we denote by \(\operatorname {GSVI}(C,F_{1}, F_{2})\) the fixed point set of mapping R.
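The fixed point characterization of Lemma 1.1 can be illustrated numerically. The following sketch is not part of the original analysis: it uses a hypothetical one-dimensional example with \(C=[0,2]\) and illustrative monotone mappings \(F_{1}(z)=z-1\) and \(F_{2}(z)=e^{z}-1\). Each resolvent is computed by bisection, and a fixed point of the composite \(R=F_{1,\lambda }F_{2,\nu }\) is found by Picard iteration (which converges here because these particular \(F_{1}\), \(F_{2}\) are strongly monotone on C, making the resolvents contractions).

```python
import math

a, b = 0.0, 2.0                     # C = [a, b], a hypothetical choice
F1 = lambda z: z - 1.0              # continuous, monotone (illustrative)
F2 = lambda z: math.exp(z) - 1.0    # continuous, monotone (illustrative)
lam, nu = 0.5, 0.5

def resolvent(F, r, x):
    # On an interval, the z with <y-z, Fz> + (1/r)<y-z, z-x> >= 0 for all
    # y in C is the clamp to [a, b] of the root of h(z) = z + r*F(z) - x,
    # and h is strictly increasing because F is monotone.
    lo, hi = a - 20.0, b + 20.0
    for _ in range(200):            # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if mid + r * F(mid) - x > 0.0:
            hi = mid
        else:
            lo = mid
    return min(max(0.5 * (lo + hi), a), b)

R = lambda x: resolvent(F1, lam, resolvent(F2, nu, x))  # R = F_{1,lam} F_{2,nu}
x_star = 1.0
for _ in range(200):                # Picard iteration toward Fix(R)
    x_star = R(x_star)
y_star = resolvent(F2, nu, x_star)  # y* = F_{2,nu} x*
```

At the computed pair \((x^{*}, y^{*})\), both residuals \(\lambda F_{1}x^{*}+x^{*}-y^{*}\) and \(\nu F_{2}y^{*}+y^{*}-x^{*}\) vanish (the solution is interior to C in this example), in line with Lemma 1.1.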
In the meantime, inspired by Ceng et al. [1], Jung [12] introduced a general system of variational inequalities (GSVI) for two continuous monotone mappings \(F_{1}\) and \(F_{2}\) of finding \((x^{*}, y^{*}) \in C\times C\) such that

$$ \textstyle\begin{cases} \langle \lambda F_{1}x^{*}+x^{*}-y^{*}, x-x^{*}\rangle \geq 0,\quad \forall x\in C, \\ \langle \nu F_{2}y^{*}+y^{*}-x^{*}, x-y^{*}\rangle \geq 0,\quad \forall x\in C, \end{cases} $$
where \(\lambda , \nu >0\) are two constants. In order to find an element of \(\operatorname {Fix}(R)\cap \operatorname {Fix}(T)\), he proposed one implicit algorithm generating a net \(\{x_{t}\}\):
with \(t\in (0, \min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\) and \(\theta_{t}\in (0, \min \{\frac{1}{2}, \Vert A\Vert ^{-1}\})\), and an explicit algorithm generating a sequence \(\{x_{n}\}\):
with \(\{\alpha_{n}\}\subset [0,1]\), \(\{\beta_{n}\}\subset (0,1]\), \(\{r _{n}\}\subset (0,\infty )\), and \(x_{0}\in C\) any initial guess, where \(T_{r_{t}}x=\{z\in C: \langle y-z, Tz\rangle -\frac{1}{r_{t}}\langle y-z, (1+r_{t})z-x\rangle \leq 0, \forall y\in C\}\) for \(r_{t}\in (0, \infty )\), and \(T_{r_{n}}x=\{z\in C: \langle y-z, Tz\rangle -\frac{1}{r _{n}}\langle y-z, (1+r_{n})z-x \rangle \leq 0, \forall y\in C\}\) for \(r_{n}\in (0,\infty )\). Moreover, he established strong convergence of the proposed iterative algorithms to an element \(\widetilde{x}\in \operatorname {Fix}(R)\cap \operatorname {Fix}(T)\), which uniquely solves the variational inequality
On the other hand, the generalized mixed equilibrium problem (GMEP) is to find \(x\in C\) such that

$$ {\varTheta }(x, y)+\varphi (y)-\varphi (x)+\langle Bx, y-x\rangle \geq 0,\quad \forall y\in C, $$

where \({\varTheta }: C\times C\to {\mathbf{R}}\) is a bifunction, \(\varphi : C\to {\mathbf{R}}\) is a function, and \(B: C\to H\) is a nonlinear mapping.
We denote the set of solutions of GMEP (1.6) by \(\operatorname {GMEP}({\varTheta }, \varphi , B)\). GMEP (1.6) is very general in the sense that it includes many problems as special cases, namely optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. For different aspects and solution methods, we refer to [13,14,15,16,17,18] and the references therein.
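To see how GMEP (1.6) subsumes these problems, note the following standard reductions (stated under the convention that (1.6) reads \({\varTheta }(x, y)+\varphi (y)-\varphi (x)+\langle Bx, y-x\rangle \geq 0\) for all \(y\in C\)):

```latex
\begin{aligned}
&{\varTheta }\equiv 0,\ \varphi \equiv 0: &&\langle Bx, y-x\rangle \geq 0,\ \forall y\in C &&\text{(VIP)};\\
&B\equiv 0,\ \varphi \equiv 0: &&{\varTheta }(x, y)\geq 0,\ \forall y\in C &&\text{(equilibrium problem)};\\
&B\equiv 0: &&{\varTheta }(x, y)+\varphi (y)-\varphi (x)\geq 0,\ \forall y\in C &&(\operatorname {MEP}({\varTheta },\varphi )).
\end{aligned}
```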
In this paper, we introduce implicit and explicit iterative methods for finding a solution of GSVI (1.3) with solutions belonging also to the common solution set \(\bigcap^{N}_{i=1}\operatorname {GMEP}({\varTheta }_{i}, \varphi_{i}, B_{i})\) of finitely many generalized mixed equilibrium problems and the fixed point set of a continuous pseudocontractive mapping T. First, GSVI (1.3) and each generalized mixed equilibrium problem both are transformed into fixed point problems of nonexpansive mappings. Then we establish strong convergence of the proposed iterative methods to an element of \(\bigcap^{N}_{i=1}\operatorname {GMEP}({\varTheta } _{i}, \varphi_{i}, B_{i})\cap \operatorname {GSVI}(C, F_{1}, F_{2})\cap \operatorname {Fix}(T)\), which is the unique solution of a certain variational inequality.
2 Preliminaries and lemmas
Let H be a real Hilbert space, and let C be a nonempty closed convex subset of H. We write \(x_{n}\to x\) and \(x_{n} \rightharpoonup x\) to indicate the strong convergence of the sequence \(\{x_{n}\}\) to x and the weak convergence of the sequence \(\{x_{n}\}\) to x, respectively.
For every point \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}(x)\), such that

$$ \bigl\Vert x-P_{C}(x) \bigr\Vert \leq \Vert x-y \Vert ,\quad \forall y\in C. $$
\(P_{C}\) is called the metric projection of H onto C. It is well known that \(P_{C}\) is nonexpansive and is characterized by the property

$$ \bigl\langle x-P_{C}(x), y-P_{C}(x)\bigr\rangle \leq 0,\quad \forall x\in H, y\in C. $$
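A short numerical illustration (not part of the original analysis) of the projection characterization, using a hypothetical concrete set \(C=[-1,1]^{5}\), for which \(P_{C}\) is componentwise clipping:

```python
import numpy as np

lo, hi = -1.0, 1.0

def P_C(x):
    # metric projection onto the box [-1, 1]^5: componentwise clipping
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)          # a point of H = R^5, possibly outside C
px = P_C(x)

# characterization: <x - P_C(x), y - P_C(x)> <= 0 for every y in C
ys = [P_C(3.0 * rng.normal(size=5)) for _ in range(1000)]
worst = max(float(np.dot(x - px, y - px)) for y in ys)

# nonexpansiveness: ||P_C(u) - P_C(v)|| <= ||u - v||
u, v = 3.0 * rng.normal(size=5), 3.0 * rng.normal(size=5)
nonexp = np.linalg.norm(P_C(u) - P_C(v)) <= np.linalg.norm(u - v) + 1e-12
```

Every sampled inner product is nonpositive, matching the characterization above.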
In a Hilbert space H, the following equality holds:
The following lemma is an immediate consequence of the properties of the inner product.
Lemma 2.1
In a real Hilbert space H, there holds the following inequality:

$$ \Vert x+y \Vert ^{2}\leq \Vert x \Vert ^{2}+2\langle y, x+y\rangle ,\quad \forall x, y\in H. $$
Next we list some elementary conclusions for the MEP.
It is first assumed as in [19] that \({\varTheta }: C\times C\to {\mathbf{R}}\) is a bifunction satisfying conditions (A1)–(A4) and \(\varphi : C\to {\mathbf{R}}\) is a lower semicontinuous and convex function with restriction (B1) or (B2), where
-
(A1)
\({\varTheta }(x, x)=0\) for all \(x\in C\);
-
(A2)
Θ is monotone, i.e., \({\varTheta }(x, y)+{\varTheta }(y, x)\leq 0\) for any \(x, y\in C\);
-
(A3)
Θ is upper-hemicontinuous, i.e., for each \(x, y, z\in C\),
$$ \limsup_{t\to 0^{+}}{{\varTheta }}\bigl(tz+(1-t)x, y\bigr)\leq { {\varTheta }}(x, y); $$
-
(A4)
\({\varTheta }(x, \cdot )\) is convex and lower semicontinuous for each \(x\in C\);
-
(B1)
for each \(x\in H\) and \(r>0\), there exist a bounded subset \(D_{x}\subset C\) and \(y_{x}\in C\) such that, for all \(z\in C \setminus D_{x}\),
$$ {{\varTheta }}(z, y_{x})+\varphi (y_{x})-\varphi (z)+ \frac{1}{r}\langle y_{x}-z, z-x\rangle < 0; $$
-
(B2)
C is a bounded set.
Proposition 2.1
([19])
Assume that \({\varTheta }: C\times C \to {\mathbf{R}}\) satisfies (A1)–(A4), and let \(\varphi : C\to {\mathbf{R}}\) be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For \(r>0\) and \(x\in H\), define a mapping \(T^{({\varTheta },\varphi )}_{r}: H\to C\) as follows:

$$ T^{({\varTheta },\varphi )}_{r}(x)=\biggl\{ z\in C: {\varTheta }(z, y)+\varphi (y)-\varphi (z)+\frac{1}{r}\langle y-z, z-x\rangle \geq 0, \forall y\in C\biggr\} $$

for all \(x\in H\). Then the following hold:
-
(i)
for each \(x\in H\), \(T^{({\varTheta }, \varphi )}_{r}(x)\) is nonempty and single-valued;
-
(ii)
\(T^{({\varTheta },\varphi )}_{r}\) is firmly nonexpansive, that is, for any \(x, y\in H\),
$$ \bigl\Vert T^{({\varTheta }, \varphi )}_{r}x-T^{({\varTheta }, \varphi )} _{r}y \bigr\Vert ^{2}\leq \bigl\langle T^{({\varTheta }, \varphi )}_{r}x-T^{({\varTheta }, \varphi )}_{r}y, x-y\bigr\rangle ; $$
-
(iii)
\(\operatorname {Fix}(T^{({\varTheta }, \varphi )}_{r})=\operatorname {MEP}({\varTheta }, \varphi )\);
-
(iv)
\(\operatorname {MEP}({\varTheta }, \varphi )\) is closed and convex;
-
(v)
\(\Vert T^{({\varTheta },\varphi )}_{s}x-T^{({\varTheta }, \varphi )} _{t}x\Vert ^{2}\leq \frac{s-t}{s}\langle T^{({\varTheta },\varphi )}_{s}x-T ^{({\varTheta },\varphi )}_{t}x,T^{({\varTheta }, \varphi )}_{s}x-x \rangle \) for all \(s, t>0\) and \(x\in H\).
Proposition 2.2
Let \(F:C\to H\) be an α-inverse-strongly monotone mapping. Then, for all \(x,y\in C\) and \(\lambda >0\), one has

$$ \bigl\Vert (I-\lambda F)x-(I-\lambda F)y \bigr\Vert ^{2}\leq \Vert x-y \Vert ^{2}+\lambda (\lambda -2\alpha ) \Vert Fx-Fy \Vert ^{2}. $$
In particular, if \(\lambda \in (0, 2\alpha ]\), \(I-\lambda F: C\to H\) is a nonexpansive mapping.
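The estimate in Proposition 2.2 follows by expanding the square and applying inverse-strong monotonicity:

```latex
\begin{aligned}
\bigl\Vert (I-\lambda F)x-(I-\lambda F)y \bigr\Vert ^{2}
&= \Vert x-y \Vert ^{2}-2\lambda \langle Fx-Fy, x-y\rangle +\lambda ^{2} \Vert Fx-Fy \Vert ^{2}\\
&\leq \Vert x-y \Vert ^{2}-2\lambda \alpha \Vert Fx-Fy \Vert ^{2}+\lambda ^{2} \Vert Fx-Fy \Vert ^{2}\\
&= \Vert x-y \Vert ^{2}+\lambda (\lambda -2\alpha ) \Vert Fx-Fy \Vert ^{2},
\end{aligned}
```

and the last term is nonpositive precisely when \(\lambda \leq 2\alpha \), which gives the nonexpansivity of \(I-\lambda F\).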
We will use the following lemmas for the proof of our main results in the sequel.
Lemma 2.2
([20])
Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers satisfying

$$ s_{n+1}\leq (1-\omega_{n})s_{n}+\omega_{n}\delta_{n}+\gamma_{n},\quad \forall n\geq 0, $$
where \(\{\omega_{n}\}\), \(\{\delta_{n}\}\), and \(\{\gamma_{n}\}\) satisfy the following conditions:
-
(i)
\(\{\omega_{n}\}\subset [0, 1]\) and \(\sum^{\infty }_{n=0}\omega _{n}=\infty \) or, equivalently, \(\prod^{\infty }_{n=0}(1-\omega_{n})=0\);
-
(ii)
\(\limsup_{n\to \infty }\delta_{n}\leq 0\) or \(\sum^{\infty }_{n=0} \omega_{n}\vert \delta_{n}\vert <\infty \);
-
(iii)
\(\gamma_{n}\geq 0\) (\(n\geq 0\)), \(\sum^{\infty }_{n=0}\gamma_{n}< \infty \).
Then \(\lim_{n\to \infty }s_{n}=0\).
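A numerical illustration (not part of the original analysis) of Lemma 2.2 under hypothetical parameter choices satisfying (i)–(iii): \(\omega_{n}=\frac{1}{n+2}\) (so \(\sum \omega_{n}=\infty \)), \(\delta_{n}=\frac{1}{n+1}\) (so \(\limsup_{n}\delta_{n}=0\leq 0\)), and the summable \(\gamma_{n}=\frac{1}{(n+1)^{2}}\). We iterate the worst case of the recursion, i.e., with equality:

```python
s = 5.0  # s_0: any nonnegative starting value
for n in range(200_000):
    omega = 1.0 / (n + 2)
    delta = 1.0 / (n + 1)
    gamma = 1.0 / (n + 1) ** 2
    # worst case of s_{n+1} <= (1 - omega_n) s_n + omega_n delta_n + gamma_n
    s = (1 - omega) * s + omega * delta + gamma
```

After 200,000 steps the value of `s` is far below its starting value, consistent with \(s_{n}\to 0\).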
Lemma 2.3
(Demiclosedness principle [21])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(S: C\to C\) be a nonexpansive mapping with \(\operatorname {Fix}(S)\neq \emptyset \). Then the mapping \(I-S\) is demiclosed. That is, if \(\{x_{n}\}\) is a sequence in C such that \(x_{n}\rightharpoonup x^{*}\) and \((I-S)x_{n}\to y\), then \((I-S)x^{*}=y\). Here I is the identity mapping of H.
Lemma 2.4
([22])
Let H be a real Hilbert space. Let \(A: H\to H\) be a strongly positive bounded linear operator with a constant \(\bar{\gamma }>1\). Then

$$ \bigl\langle (A-I)x-(A-I)y, x-y\bigr\rangle \geq (\bar{\gamma }-1) \Vert x-y \Vert ^{2},\quad \forall x, y\in H. $$
That is, \(A-I\) is strongly monotone with a constant \(\bar{\gamma }-1\).
Lemma 2.5
([22])
Assume that \(A: H\to H\) is a strongly positive bounded linear operator with a coefficient \(\bar{\gamma }>0\) and \(0<\zeta \leq \Vert A\Vert ^{-1}\). Then \(\Vert I-\zeta A\Vert \leq 1-\zeta \bar{ \gamma }\).
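A numeric check of Lemma 2.5 (not part of the original analysis) for a hypothetical strongly positive operator: a symmetric positive definite A on \({\mathbf{R}}^{3}\) with smallest eigenvalue \(\bar{\gamma }=1.5\) and operator norm \(\Vert A\Vert =3\):

```python
import numpy as np

A = np.diag([1.5, 2.0, 3.0])      # SPD, so strongly positive with gamma_bar = 1.5
gamma_bar = 1.5
zeta = 1.0 / 3.0                  # 0 < zeta <= ||A||^{-1} = 1/3
op_norm = np.linalg.norm(np.eye(3) - zeta * A, 2)   # spectral norm of I - zeta*A
bound = 1 - zeta * gamma_bar      # Lemma 2.5 predicts ||I - zeta A|| <= this
```

Here `op_norm` equals `bound` exactly (both 0.5), i.e., the estimate of Lemma 2.5 is attained for this diagonal example.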
Lemma 2.6
([23])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(G: C\to H\) be a ρ-Lipschitzian and η-strongly monotone mapping with constants \(\rho , \eta >0\). Let \(0<\mu <\frac{2\eta }{\rho^{2}}\) and \(0< t<\sigma \leq 1\). Then \(S:=\sigma I-t\mu G:C\to H\) is a contractive mapping with constant \(\sigma - t\tau \), where \(\tau =1- \sqrt{1-\mu (2\eta -\mu \rho^{2})}\).
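Lemma 2.6 can be checked numerically for a hypothetical linear choice \(G=M\), an SPD matrix, which is ρ-Lipschitzian with \(\rho =\Vert M\Vert \) and η-strongly monotone with \(\eta =\lambda_{\min }(M)\) (a sketch, not part of the original analysis):

```python
import numpy as np

M = np.diag([1.0, 2.0])                    # SPD stand-in for G
rho, eta = 2.0, 1.0                        # ||M|| and lambda_min(M)
mu = 0.4                                   # 0 < mu < 2*eta/rho**2 = 0.5
tau = 1 - np.sqrt(1 - mu * (2 * eta - mu * rho ** 2))
sigma, t = 1.0, 0.5                        # 0 < t < sigma <= 1
S = sigma * np.eye(2) - t * mu * M         # S = sigma*I - t*mu*G
lip = np.linalg.norm(S, 2)                 # exact Lipschitz constant of the linear map S
```

The computed constant `lip` is below the bound \(\sigma -t\tau \) of Lemma 2.6, so S is indeed contractive.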
Lemma 2.7
([24])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(F: C\to H\) be a continuous monotone mapping. Then, for \(r>0\) and \(x\in H\), there exists \(z\in C\) such that

$$ \langle y-z, Fz\rangle +\frac{1}{r}\langle y-z, z-x\rangle \geq 0,\quad \forall y\in C. $$

For \(r>0\) and \(x\in H\), define \(F_{r}:H\to C\) by

$$ F_{r}x=\biggl\{ z\in C: \langle y-z, Fz\rangle +\frac{1}{r}\langle y-z, z-x\rangle \geq 0, \forall y\in C\biggr\} . $$
Then the following hold:
-
(i)
\(F_{r}\) is single-valued;
-
(ii)
\(F_{r}\) is firmly nonexpansive, that is,
$$ \Vert F_{r}x-F_{r}y \Vert ^{2}\leq \langle x-y, F_{r}x-F_{r}y\rangle ,\quad \forall x, y\in H; $$
-
(iii)
\(\operatorname {Fix}(F_{r})=\operatorname {VI}(C, F)\);
-
(iv)
\(\operatorname {VI}(C, F)\) is a closed convex subset of C.
Lemma 2.8
([24])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(T: C\to H\) be a continuous pseudocontractive mapping. Then, for \(r>0\) and \(x\in H\), there exists \(z\in C\) such that

$$ \langle y-z, Tz\rangle -\frac{1}{r}\bigl\langle y-z, (1+r)z-x\bigr\rangle \leq 0,\quad \forall y\in C. $$

For \(r>0\) and \(x\in H\), define \(T_{r}: H\to C\) by

$$ T_{r}x=\biggl\{ z\in C: \langle y-z, Tz\rangle -\frac{1}{r}\bigl\langle y-z, (1+r)z-x\bigr\rangle \leq 0, \forall y\in C\biggr\} . $$
Then the following hold:
-
(i)
\(T_{r}\) is single-valued;
-
(ii)
\(T_{r}\) is firmly nonexpansive, that is,
$$ \Vert T_{r}x-T_{r}y \Vert ^{2}\leq \langle x-y, T_{r}x-T_{r}y\rangle ,\quad \forall x, y\in H; $$
-
(iii)
\(\operatorname {Fix}(T_{r})=\operatorname {Fix}(T)\);
-
(iv)
\(\operatorname {Fix}(T)\) is a closed convex subset of C.
3 Main results
Throughout this section, we always assume the following:
-
\(B_{i}:C\to H\) is a \(\mu_{i}\)-inverse-strongly monotone mapping for each \(i=1, 2,\ldots, N\);
-
\({\varTheta }_{i}: C\times C\to {\mathbf{R}}\) is a bifunction satisfying conditions (A1)–(A4) for each \(i=1, 2,\ldots, N\);
-
\(\varphi_{i}: C\to {\mathbf{R}}\) is a proper lower semicontinuous and convex function with restriction (B1) or (B2) for each \(i=1, 2,\ldots, N\);
-
\(A: H\to H\) is a strongly positive linear bounded self-adjoint operator with a constant \(\bar{\gamma }\in (1, 2)\);
-
\(V: C\to C\) is l-Lipschitzian with constant \(l\in [0, \infty )\);
-
\(G: C\to C\) is a ρ-Lipschitzian and η-strongly monotone mapping with constants \(\rho >0\) and \(\eta >0\);
-
constants μ, l, τ, and γ satisfy \(0<\mu <\frac{2 \eta }{\rho^{2}}\) and \(0\leq \gamma l<\tau \), where \(\tau =1-\sqrt{1- \mu (2\eta -\mu \rho^{2})}\);
-
\(F_{1}, F_{2}: C\to H\) are continuous monotone mappings and \(T:C\to C\) is a continuous pseudocontractive mapping such that \({\varOmega }:=\bigcap^{N}_{i=1}\operatorname {GMEP}({\varTheta }_{i}, \varphi _{i}, B_{i})\cap \operatorname {GSVI}(C, F_{1}, F_{2})\cap \operatorname {Fix}(T)\neq \emptyset \);
-
\(R_{t}=F_{1, \lambda_{t}}F_{2, \nu_{t}}:H\to C\), where \(F_{1, \lambda _{t}}, F_{2, \nu_{t}}: H\to C\) are defined as follows:
$$\begin{aligned}& F_{1, \lambda_{t}}x=\biggl\{ z\in C: \langle y-z, F_{1}z\rangle + \frac{1}{ \lambda_{t}}\langle y-z, z-x\rangle \geq 0, \forall y\in C\biggr\} , \\& F_{2, \nu_{t}}x=\biggl\{ z\in C: \langle y-z, F_{2}z\rangle + \frac{1}{\nu _{t}}\langle y-z,z-x\rangle \geq 0, \forall y\in C\biggr\} , \end{aligned}$$

for \(\lambda_{t}, \nu_{t}\in (0, \infty )\), \(t\in (0, 1)\), \(\lim_{t \to 0}\lambda_{t}=\lambda >0\), and \(\lim_{t\to 0}\nu_{t}=\nu >0\);
-
\(R_{n}=F_{1, \lambda_{n}}F_{2, \nu_{n}}: H\to C\), where \(F_{1, \lambda _{n}}, F_{2, \nu_{n}}: H\to C\) are defined as follows:
$$\begin{aligned}& F_{1, \lambda_{n}}x=\biggl\{ z\in C: \langle y-z, F_{1}z\rangle + \frac{1}{ \lambda_{n}}\langle y-z, z-x\rangle \geq 0, \forall y\in C\biggr\} , \\& F_{2, \nu_{n}}x=\biggl\{ z\in C: \langle y-z, F_{2}z\rangle + \frac{1}{\nu _{n}}\langle y-z, z-x\rangle \geq 0, \forall y\in C\biggr\} , \end{aligned}$$

for \(\lambda_{n}, \nu_{n}\in (0, \infty )\), \(\lim_{n\to \infty } \lambda_{n}=\lambda >0\), and \(\lim_{n\to \infty }\nu_{n}=\nu >0\);
-
\(T_{r_{t}}: H\to C\) is a mapping defined by
$$ T_{r_{t}}x=\biggl\{ z\in C: \langle y-z,Tz\rangle -\frac{1}{r_{t}}\bigl\langle y-z, (1+r_{t})z-x\bigr\rangle \leq 0, \forall y\in C\biggr\} $$

for \(r_{t}\in (0, \infty )\), \(t\in (0, 1)\), and \(\liminf_{t\to 0}r_{t}>0\);
-
\(T_{r_{n}}: H\to C\) is a mapping defined by
$$ T_{r_{n}}x=\biggl\{ z\in C: \langle y-z,Tz\rangle -\frac{1}{r_{n}}\bigl\langle y-z, (1+r_{n})z-x\bigr\rangle \leq 0, \forall y\in C\biggr\} $$

for \(r_{n}\in (0, \infty )\) and \(\liminf_{n\to \infty }r_{n}>0\);
-
\(T^{({\varTheta }_{i}, \varphi_{i})}_{r_{i, t}}:H\to C\) is a mapping defined by
$$ T^{({\varTheta }_{i}, \varphi_{i})}_{r_{i, t}}x=\biggl\{ z\in C: {\varTheta }_{i}(z,y)+\varphi_{i}(y)-\varphi_{i}(z) + \frac{1}{r_{i, t}} \langle y-z,z-x\rangle \geq 0, \forall y\in C\biggr\} $$

for \(\{r_{i, t}\}_{t\in (0, 1)}\subset [c_{i}, d_{i}]\subset (0, 2\mu _{i})\) and \(i\in \{1, 2,\ldots, N\}\);
-
\(T^{({\varTheta }_{i},\varphi_{i})}_{r_{i,n}}: H\to C\) is a mapping defined by
$$ T^{({\varTheta }_{i},\varphi_{i})}_{r_{i, n}}x=\biggl\{ z\in C:{\varTheta }_{i}(z, y)+\varphi_{i}(y)-\varphi_{i}(z) + \frac{1}{r_{i, n}}\langle y-z, z-x\rangle \geq 0,\forall y\in C\biggr\} $$

for \(\{r_{i, n}\}^{\infty }_{n=1}\subset [c_{i}, d_{i}]\subset (0, 2 \mu_{i})\) and \(i\in \{1, 2,\ldots, N\}\).
By Proposition 2.1 and Lemmas 2.7 and 2.8, we note that \(T^{({\varTheta }_{i},\varphi_{i})}_{r_{i,t}}\), \(T^{({\varTheta }_{i}, \varphi _{i})}_{r_{i,n}}\), \(F_{1,\lambda_{t}}\), \(F_{1,\lambda_{n}}\), \(F_{2,\nu_{t}}\), \(F _{2,\nu_{n}}\), \(T_{r_{t}}\), and \(T_{r_{n}}\) are nonexpansive, \(\operatorname {GMEP}( {\varTheta }_{i},\varphi_{i},B_{i})=\operatorname {Fix}(T^{({\varTheta }_{i}, \varphi_{i})}_{r_{i,t}}(I-r_{i,t}B_{i}))=\operatorname {Fix}(T^{({\varTheta } _{i},\varphi_{i})}_{r_{i,n}}(I-r_{i,n}B_{i}))\), and \(\operatorname {Fix}(T)= \operatorname {Fix}(T_{r_{t}})=\operatorname {Fix}(T_{r_{n}})\). So it is known that the composite mappings \(R_{t}=F_{1,\lambda_{t}}F_{2,\nu_{t}}\) and \(R_{n}=F_{1,\lambda_{n}}F_{2,\nu_{n}}\) are nonexpansive. Also, we note that \(\operatorname {GSVI}(C,F_{1},F_{2})=\operatorname {Fix}(R_{t})=\operatorname {Fix}(R_{n})\) by Lemma 1.1.
In this section, for \(t\in (0, 1)\), \(n\geq 1\), and \(i\in \{1, 2,\ldots, N \}\), we put

$$ {\varDelta}^{i}_{t}=T^{({\varTheta }_{i}, \varphi_{i})}_{r_{i, t}}(I-r_{i, t}B_{i})T^{({\varTheta }_{i-1}, \varphi_{i-1})}_{r_{i-1, t}}(I-r_{i-1, t}B_{i-1})\cdots T^{({\varTheta }_{1}, \varphi_{1})}_{r_{1, t}}(I-r_{1, t}B_{1}), $$

$$ {\varDelta}^{i}_{n}=T^{({\varTheta }_{i}, \varphi_{i})}_{r_{i, n}}(I-r_{i, n}B_{i})T^{({\varTheta }_{i-1}, \varphi_{i-1})}_{r_{i-1, n}}(I-r_{i-1, n}B_{i-1})\cdots T^{({\varTheta }_{1}, \varphi_{1})}_{r_{1, n}}(I-r_{1, n}B_{1}), $$

and \({\varDelta}^{0}_{t}={\varDelta}^{0}_{n}=I\).
We now introduce the first general iterative scheme that generates a net \(\{x_{t}\}\) in an implicit way:

$$ x_{t}=\theta_{t}\bigl(t\gamma Vx_{t}+(I-t\mu G)T_{r_{t}}{\varDelta}^{N}_{t}R_{t}x_{t}\bigr)+(I-\theta_{t}A)T_{r_{t}}{\varDelta}^{N}_{t}R_{t}x_{t}, $$
where \(t\in (0,\min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\) and \(\theta_{t}\in (0,\min \{\frac{1}{2}, \Vert A\Vert ^{-1}\})\).
We prove the strong convergence of \(\{x_{t}\}\) as \(t\to 0\) to a point \(\widetilde{x}\in {{\varOmega }}\), which is a unique solution to the VI

$$ \bigl\langle (A-I)\widetilde{x}, \widetilde{x}-p\bigr\rangle \leq 0,\quad \forall p\in {{\varOmega }}. $$
In the meantime, we also propose the second general iterative scheme that generates a sequence \(\{x_{n}\}\) in an explicit way:

$$ x_{n+1}=\beta_{n}\bigl(\alpha_{n}\gamma Vx_{n}+(I-\alpha_{n}\mu G)T_{r_{n}}{\varDelta}^{N}_{n}R_{n}x_{n}\bigr)+(I-\beta_{n}A)T_{r_{n}}{\varDelta}^{N}_{n}R_{n}x_{n},\quad \forall n\geq 0, $$
where \(\{\alpha_{n}\}, \{\beta_{n}\}\subset [0, 1]\) and \(x_{0}\in C\) is an arbitrary initial guess, and establish the strong convergence of \(\{x_{n}\}\) as \(n\to \infty \) to the same point \(\widetilde{x}\in {{\varOmega }}\), which is the unique solution to VI (3.2).
Next, for \(t\in (0,\min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\) and \(\theta_{t}\in (0, \min \{\frac{1}{2},\Vert A\Vert ^{-1}\})\), consider a mapping \(Q_{t}: C\to C\) defined by

$$ Q_{t}x=\theta_{t}\bigl(t\gamma Vx+(I-t\mu G)T_{r_{t}}{\varDelta}^{N}_{t}R_{t}x\bigr)+(I-\theta_{t}A)T_{r_{t}}{\varDelta}^{N}_{t}R_{t}x,\quad \forall x\in C. $$
It is easy to see that \(Q_{t}\) is a contractive mapping with constant \(1-\theta_{t}(\bar{\gamma }-1+t(\tau -\gamma l))\). Indeed, by Propositions 2.1 and 2.2 and Lemmas 2.5 and 2.6, we have
Since \(\bar{\gamma }\in (1, 2)\), \(\tau -\gamma l>0\) and \(0< t<\min \{1, \frac{2-\bar{ \gamma }}{\tau -\gamma l}\}\leq \frac{2-\bar{\gamma }}{\tau -\gamma l}\), it follows that \(0<\bar{ \gamma }-1+t(\tau -\gamma l)<1\), which together with \(0<\theta_{t}< \min \{\frac{1}{2},\Vert A\Vert ^{-1}\}<1\) yields \(0<1-\theta_{t}(\bar{\gamma }-1+t(\tau -\gamma l))<1\). Hence \(Q_{t}\) is a contractive mapping. By the Banach contraction principle, \(Q_{t}\) has a unique fixed point, denoted by \(x_{t}\), which uniquely solves the fixed point equation (3.1).
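Although \(x_{t}\) is defined implicitly, the Banach contraction principle also indicates how such a fixed point can be approximated: Picard iteration of the contraction converges linearly. A generic sketch (not part of the original analysis) with a hypothetical scalar contraction standing in for \(Q_{t}\):

```python
import math

def picard_fixed_point(Q, x0, tol=1e-12, max_iter=10_000):
    # Picard iteration x_{k+1} = Q(x_k); for a contraction with constant
    # kappa < 1, the Banach principle guarantees convergence to the unique
    # fixed point at the linear rate kappa.
    x = x0
    for _ in range(max_iter):
        x_next = Q(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

# hypothetical contraction with constant kappa = 1/2
Q = lambda x: 0.5 * math.cos(x)
x_star = picard_fixed_point(Q, 0.0)
```

The returned `x_star` satisfies the fixed point equation \(Q(x^{*})=x^{*}\) up to the prescribed tolerance.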
We summarize the basic properties of \(\{x_{t}\}\).
Theorem 3.1
Let \(\{x_{t}\}\) be defined via (3.1). Then
-
(i)
\(\{x_{t}\}\) is bounded for \(t\in (0, \min \{1, \frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\);
-
(ii)
\(\lim_{t\to 0}\Vert x_{t}-R_{t}x_{t}\Vert =0\), \(\lim_{t\to 0}\Vert x_{t}-{\varDelta}^{N}_{t}x_{t}\Vert =0\), and \(\lim_{t\to 0}\Vert x_{t}-T_{r_{t}}x_{t} \Vert =0\) provided \(\lim_{t\to 0}\theta_{t}=0\);
-
(iii)
\(x_{t}: (0, \min \{1,\frac{2-\bar{\gamma }}{\tau -\gamma l}\}) \to H\) is locally Lipschitzian provided \(\theta_{t}: (0,\min \{1,\frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\to (0,\min \{\frac{1}{2},\Vert A\Vert ^{-1}\})\) is locally Lipschitzian, \(r_{t}, \lambda_{t},\nu_{t}: (0,\min \{1, \frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\to (0, \infty )\) are locally Lipschitzian, and \(r_{i,t}: (0, \min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\}) \to [c_{i}, d_{i}]\) is locally Lipschitzian for each \(i=1, 2,\ldots, N\);
-
(iv)
\(x_{t}\) defines a continuous path from \((0,\min \{1,\frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\) into H provided \(\theta_{t}: (0, \min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\to (0,\min \{ \frac{1}{2}, \Vert A\Vert ^{-1}\})\) is continuous, \(r_{t},\lambda_{t},\nu_{t}: (0, \min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\to (0, \infty )\) are continuous, and \(r_{i, t}: (0,\min \{1,\frac{2-\bar{\gamma }}{ \tau -\gamma l}\})\to [c_{i}, d_{i}]\) is continuous for each \(i=1, 2,\ldots, N\).
Proof
Let \(z_{t}=R_{t}x_{t}\), \(u_{t}={\varDelta}^{N}_{t}z _{t}\), and \(v_{t}=T_{r_{t}}u_{t}\). Take \(p\in \varOmega \). Then \(p=T_{r_{t}}p\) by Lemma 2.8(iii), \(p={\varDelta}^{i}_{t}p\) (\(=T^{( {\varTheta }_{i},\varphi_{i})}_{r_{i,t}}(I-r_{i, t}B_{i})p\)) by Proposition 2.1(iii), and \(p=R_{t}p\) by Lemma 1.1.
(i) Utilizing Proposition 2.1(ii) and Proposition 2.2, we have
Moreover, it is easy from the nonexpansivity of \(R_{t}\) to see that
which together with the nonexpansivity of \(T_{r_{t}}\) and (3.4) implies that
By (3.5), we have
So, it follows that
Hence \(\{x_{t}\}\) is bounded and so are \(\{Vx_{t}\}\), \(\{u_{t}\}\), \(\{v _{t}\}\), \(\{z_{t}\}\), and \(\{Gv_{t}\}\).
(ii) By the definition of \(\{x_{t}\}\), we have
using the boundedness of \(\{Vx_{t}\}\), \(\{v_{t}\}\), and \(\{Gv_{t}\}\) in the proof of assertion (i). That is,
In view of (3.5) and Lemma 2.7(ii), we get
which immediately yields
From (3.6) and the boundedness of \(\{x_{t}\}\) and \(\{v_{t}\}\), we have
Again from (3.5) and Lemma 2.7(ii), we obtain
which hence leads to
Again from (3.6) and the boundedness of \(\{x_{t}\}\) and \(\{v_{t}\}\), we have
So it follows from (3.7) and (3.8) that
That is,
Furthermore, from (3.5) and Proposition 2.1(ii) and Proposition 2.2, it follows that
which together with \(\{r_{i, t}\}_{t\in (0,1)}\subset [c_{i},d_{i}] \subset (0,2\mu_{i})\) for \(i\in \{1,2,\ldots,N\}\) implies that
From (3.6) and the boundedness of \(\{x_{t}\}\) and \(\{v_{t}\}\), we have
Also, by Proposition 2.1(ii), we obtain that, for each \(i=1, 2,\ldots, N\),
which immediately implies that
This together with (3.5) leads to
which hence implies
From (3.6) and the boundedness of \(\{x_{t}\}\) and \(\{v_{t}\}\), we have
which together with (3.10) implies that, for each \(i=1, 2,\ldots, N\),
Note that
From (3.11), it is easy to see that
Also, observe that
From (3.9) and (3.12), it is easy to see that
In the meantime, again from (3.5) and Lemma 2.7(ii), we obtain
which immediately yields
From (3.6) and the boundedness of \(\{x_{t}\}\) and \(\{v_{t}\}\), we have
Taking into account that
we deduce from (3.9), (3.12), and (3.14) that
(iii) Let \(t,t_{0}\in (0,\min \{1,\frac{2-\bar{\gamma }}{\tau -\gamma l}\})\). Since \(v_{t}=T_{r_{t}}u_{t}\) and \(v_{t_{0}}=T_{r_{t_{0}}} u _{t_{0}}\), we get
and
Putting \(y=v_{t_{0}}\) in (3.16) and \(y=v_{t}\) in (3.17), we obtain
and
Adding up (3.18) and (3.19), we have
Since T is pseudocontractive, we know that \(I-T\) is a monotone mapping such that
and hence
Taking into account that \(\liminf_{t\to 0}r_{t}>0\), without loss of generality, we may assume that \(r_{t}>b>0\) \(\forall t\in (0, \min \{1, \frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\) for some \(b>0\). Then from (3.20) we have
which immediately yields
where \(\tilde{L}_{1}=\sup \{\Vert v_{t}-u_{t}\Vert : t\in (0,\min \{1, \frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\}\).
Also, taking into account that \(\lim_{t\to 0}\lambda_{t}=\lambda >0\) and \(\lim_{t\to 0}\nu_{t}=\nu >0\), without loss of generality, we may assume that \(\min \{\lambda_{t}, \nu_{t}\}>a>0\) \(\forall t\in (0,\min \{1, \frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\) for some \(a>0\). Since \(z_{t}=F_{1, \lambda_{t}}y_{t}\) and \(z_{t_{0}}=F_{1, \lambda_{t_{0}}}y_{t_{0}}\), where \(y_{t}=F_{2, \nu_{t}}x_{t}\) and \(y_{t_{0}}=F_{2,\nu_{t_{0}}}x _{t_{0}}\) for \(t, t_{0}\in (0, \min \{1, \frac{2-\bar{\gamma }}{ \tau -\gamma l}\})\), by using arguments similar to those of (3.21), we get
and
where \(\tilde{L}_{2}=\sup \{\Vert z_{t}-y_{t}\Vert +\Vert y_{t}-x_{t}\Vert : t\in (0, \min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\}\). Substituting (3.23) into (3.22), we obtain
In the meantime, by Proposition 2.1(ii), (v) and Proposition 2.2, we deduce that
where
for some \(\widetilde{L}_{3}>0\). This together with (3.21) and (3.24) implies that
Taking into account that both \(\theta_{t_{0}}\in (0, \min \{ \frac{1}{2}, \Vert A\Vert ^{-1}\})\) and \(0\leq \gamma l<\tau =1- \sqrt{1- \mu (2\eta -\mu \rho^{2})}\) imply
we calculate from (3.1)
This immediately implies that
Since \(\theta_{t}:(0, \min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\to (0,\min \{\frac{1}{2},\Vert A \Vert ^{-1}\})\) is locally Lipschitzian, \(r_{t},\lambda_{t},\nu_{t}:(0, \min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\to (0, \infty )\) are locally Lipschitzian, and \(r_{i, t}: (0,\min \{1,\frac{2-\bar{\gamma }}{\tau -\gamma l}\})\to [c_{i},d_{i}]\) is locally Lipschitzian for each \(i=1, 2,\ldots, N\), we deduce that \(x_{t}:(0, \min \{1, \frac{2-\bar{ \gamma }}{\tau -\gamma l}\})\to H\) is locally Lipschitzian.
(iv) From the last inequality in (iii), the desired result follows immediately. □
We prove the following strong convergence theorem for the net \(\{x_{t}\}\) as \(t\to 0\), which guarantees the existence of solutions of the variational inequality (3.2).
Theorem 3.2
Let the net \(\{x_{t}\}\) be defined via (3.1). If \(\lim_{t\to 0}\theta_{t}=0\), then \(x_{t}\) converges strongly to \(\widetilde{x}\in {{\varOmega }}\) as \(t\to 0\), which solves VI (3.2). Equivalently, we have \(P_{{\varOmega }} (2I-A)\widetilde{x}= \widetilde{x}\).
Proof
We first note that the uniqueness of a solution of VI (3.2) is a consequence of the strong monotonicity of \(A-I\) (due to Lemma 2.4). See [2, 4, 5] for this fact.
Next, we prove that \(x_{t}\to \widetilde{x}\) as \(t\to 0\). For simplicity, let \(v_{t}=T_{r_{t}}u_{t}\), \(u_{t}={\varDelta}^{N}_{t}z _{t}\), \(y_{t}= F_{2,\nu_{t}}x_{t}\), and \(z_{t}=R_{t}x_{t}=F_{1,\lambda _{t}}y_{t}\). For any given \(p\in {{\varOmega }}\), we observe that \(T_{r_{t}}p=p\), \({{\varDelta}}^{N}_{t}p =p\), and \(R_{t}p=p\). From (3.1), we write
where \(w_{t}=(I-\theta_{t}A)v_{t}+\theta_{t}(t\gamma Vx_{t}+(I-t \mu G)v_{t})\). In terms of (2.1) and (3.5), we have
Therefore,
Since \(\{x_{t}\}\) is bounded as \(t\to 0\) (due to Theorem 3.1(i)), there exists a subsequence \(\{t_{n}\}\) in \((0,\min \{1, \frac{2-\bar{\gamma }}{\tau -\gamma l}\})\) such that \(t_{n}\to 0\) and \(x_{t_{n}}\rightharpoonup x^{*}\). We first show that \(x^{*}\in {{\varOmega }}\). To this end, we divide its proof into four steps.
Step 1. We claim that \(\lim_{n\to \infty }\Vert x_{t_{n}}-z_{t_{n}} \Vert =0\), \(\lim_{n\to \infty }\Vert z_{t_{n}}-u_{t_{n}}\Vert =0\), and \(\lim_{n\to \infty }\Vert u_{t_{n}}-v_{t_{n}}\Vert =0\), where \(z_{t_{n}}=R_{t _{n}}x_{t_{n}}\), \(u_{t_{n}}={\varDelta}^{N}_{t_{n}}z_{t_{n}}\), and \(v_{t_{n}}=T_{r_{t_{n}}}u_{t_{n}}\). Indeed, according to (3.9), (3.12), and (3.14) in the proof of Theorem 3.1, we obtain the assertion.
Step 2. We claim that \(x^{*}\in \operatorname {Fix}(T)\). In fact, from the definition of \(v_{t_{n}}=T_{r_{t_{n}}}u_{t_{n}}\), we have
Set \(w_{t}=tv+(1-t)x^{*}\) for all \(t\in (0,1]\) and \(v\in C\). Then \(w_{t}\in C\). From (3.27) it follows that
By Step 1, we have \(\frac{v_{t_{n}}-u_{t_{n}}}{r_{t_{n}}}\to 0\) as \(n\to \infty \). Moreover, since \(x_{t_{n}}\rightharpoonup x^{*}\), by Step 1 we have \(v_{t_{n}}\rightharpoonup x^{*}\). Since \(I-T\) is monotone, we also have that \(\langle w_{t}-v_{t_{n}}, (I-T)w_{t}-(I-T)v _{t_{n}}\rangle \geq 0\). Thus, from (3.28) it follows that
and hence
Letting \(t\to 0\), we know from the continuity of \(I-T\) that
Putting \(v=Tx^{*}\), we get \(\Vert (I-T)x^{*}\Vert ^{2}=0\), which leads to \(x^{*}\in \operatorname {Fix}(T)\).
Step 3. We claim that \(x^{*}\in \operatorname {GSVI}(C, F_{1}, F_{2})\). Indeed, note that \(\lim_{t\to 0}\lambda_{t}=\lambda >0\) and \(\lim_{t\to 0}\nu_{t}=\nu >0\). For each \(x\in C\), we put \(x(t):=F_{1, \lambda_{t}}x\), \(x(0):=F_{1,\lambda }x\), \(y(t):=F_{2, \nu_{t}}x\), and \(y(0):=F_{2,\nu }x\). Then, by Lemma 1.1, we have \(\operatorname {GSVI}(C, F_{1}, F_{2})=\operatorname {Fix}(R)\), where \(R=F_{1,\lambda }F_{2,\nu }\) and R is nonexpansive. Moreover, it is easy to see that
and
Putting \(y=x(0)\) in (3.29) and \(y=x(t)\) in (3.30), we obtain
and
Adding up (3.31) and (3.32), we have
Since \(F_{1}\) is a monotone mapping, we know that
and hence
So it follows that
which immediately yields
By using arguments similar to those of (3.33), we have
Now, putting \(t=t_{n}\), \(x=F_{2,\nu }x_{t_{n}}\) in (3.33), and \(t=t_{n}\), \(x=x_{t_{n}}\) in (3.34), respectively, we deduce that
and
Since \(\lim_{n\to \infty }\lambda_{t_{n}}=\lambda >0\) and \(\lim_{n\to \infty }\nu_{t_{n}}=\nu >0\), it follows from the last two inequalities that
Also, we observe that
Since \(R_{t_{n}}x_{t_{n}}-x_{t_{n}}\to 0\) (due to Step 1), from (3.35) and (3.36) we get
Taking into account that \(x_{t_{n}}\rightharpoonup x^{*}\) and \(x_{t_{n}}-Rx_{t_{n}}\to 0\) (due to (3.37)), from Lemma 2.3 we get \(x^{*}=Rx^{*}\), that is, \(x^{*}\in \operatorname {Fix}(R)=\operatorname {GSVI}(C,F_{1},F _{2})\).
Step 4. We claim that \(x^{*}\in \bigcap^{N}_{i=1}\operatorname {GMEP}( {\varTheta }_{i}, \varphi_{i}, B_{i})\). In fact, since \({\varDelta}^{i}_{t_{n}}z_{t_{n}}=T^{({\varTheta }_{i}, \varphi_{i})}_{r_{i, t _{n}}}(I-r_{i, t_{n}}B_{i}){\varDelta}^{i-1}_{t_{n}}z_{t_{n}}\), for each \(i=1, 2,\ldots, N\), we have
By (A2), we have
Let \(w_{t}=tv+(1-t)x^{*}\) for all \(t\in (0,1]\) and \(v\in C\). This implies that \(w_{t}\in C\). Then we have
By the same arguments as in the proof of Theorem 3.1, we have \(\Vert B_{i}{{\varDelta}}^{i}_{t_{n}}z_{t_{n}}-B_{i}{{\varDelta}}^{i-1} _{t_{n}}z_{t_{n}}\Vert \to 0\) as \(n\to \infty \). In the meantime, by the monotonicity of \(B_{i}\), we obtain \(\langle w_{t}-{\varDelta}^{i} _{t_{n}}z_{t_{n}},B_{i}w_{t}-B_{i}{{\varDelta}}^{i}_{t_{n}} z_{t_{n}} \rangle \geq 0\). Then by (A4) we get
Utilizing (A1), (A4), and the last inequality, we obtain
and hence
Letting \(t\to 0\), we have, for each \(v\in C\),
This implies that \(x^{*}\in \operatorname {GMEP}({\varTheta }_{i},\varphi_{i}, B_{i})\) and hence \(x^{*}\in \bigcap^{N}_{i=1}\operatorname {GMEP}({\varTheta } _{i}, \varphi_{i}, B_{i})\). This together with Steps 2 and 3 yields \(x^{*}\in {{\varOmega }}\).
Finally, we show that \(x^{*}\) is a solution of VI (3.2). In fact, putting \(x_{t_{n}}\) in place of \(x_{t}\) in (3.26) and taking the limit as \(t_{n}\to 0\), we obtain
In particular, \(x^{*}\) solves the following VI:
or the equivalent dual variational inequality
That is, \(x^{*}\in {{\varOmega }}\) is a solution of VI (3.2). Hence \(x^{*}=\widetilde{x}\) by uniqueness. In summary, we have proven that each cluster point of \(\{x_{t}\}\) (as \(t\to 0\)) equals x̃. Therefore \(x_{t}\to \widetilde{x}\) as \(t\to 0\). VI (3.2) can be rewritten as
So, in terms of (2.1), this is equivalent to the fixed point equation
This completes the proof. □
Taking \(T\equiv I\), \(G\equiv I\), \(\mu =1\), and \(\gamma =1\) in Theorem 3.2, we have the following corollary.
Corollary 3.1
Let \(\{x_{t}\}\) be defined by
If \(\lim_{t\to 0}\theta_{t}=0\), then \(x_{t}\) converges strongly as \(t\to 0\) to \(\widetilde{x}\in {{\varOmega }}:=\bigcap^{N}_{i=1} \operatorname {GMEP}( {\varTheta }_{i}, \varphi_{i}, B_{i})\cap \operatorname {GSVI}(C, F_{1}, F_{2})\), which is the unique solution of the VI
Proof
If \(T\equiv I\), then \(T_{r}\) in Lemma 2.8 is the identity mapping. Thus the result follows from Theorem 3.2. □
We are now in a position to prove the strong convergence of the sequence \(\{x_{n}\}\) generated by the general explicit iterative scheme (3.3) to \(\widetilde{x}\in {{\varOmega }}\), which is the unique solution to VI (3.2).
Theorem 3.3
Let \(\{x_{n}\}\) be the sequence generated by the explicit algorithm (3.3). Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{r_{n} \}\), \(\{\lambda_{n}\}\), \(\{\nu_{n}\}\), and \(\{r_{i, n}\}^{N}_{i=1}\) satisfy the following conditions:
-
(C1)
\(\{\alpha_{n}\}\subset [0, 1]\) and \(\{\beta_{n}\}\subset (0, 1]\), \(\alpha_{n}\to 0\) and \(\beta_{n}\to 0\) as \(n\to \infty \);
-
(C2)
\(\sum^{\infty }_{n=0}\beta_{n}=\infty \);
-
(C3)
\(\sum^{\infty }_{n=0}\vert \alpha_{n+1}-\alpha_{n}\vert <\infty \), and \(\vert \beta_{n+1}-\beta_{n}\vert \leq o(\beta_{n+1})+\sigma_{n}\), \(\sum^{\infty }_{n=0}\sigma_{n}<\infty \) (the perturbed control condition);
-
(C4)
\(\{r_{n}\}\subset (0,\infty )\), \(\liminf_{n\to \infty }r_{n}>0\), and \(\sum^{\infty }_{n=0}\vert r_{n+1}-r_{n}\vert <\infty \);
-
(C5)
\(\{\lambda_{n}\}\subset (0, \infty )\), \(\lim_{n\to \infty }\lambda _{n}=\lambda >0\), and \(\sum^{\infty }_{n=0}\vert \lambda_{n+1}-\lambda_{n}\vert < \infty \);
-
(C6)
\(\{\nu_{n}\}\subset (0,\infty )\), \(\lim_{n\to \infty }\nu_{n}= \nu >0\), and \(\sum^{\infty }_{n=0}\vert \nu_{n+1}-\nu_{n}\vert <\infty \);
-
(C7)
\(\{r_{i, n}\}\subset [c_{i}, d_{i}]\subset (0,2\mu_{i})\) \(\forall i\in \{1, 2,\ldots, N\}\), and \(\sum^{\infty }_{n=0}(\sum^{N}_{i=1}\vert r _{i,n+1}-r_{i,n}\vert )<\infty \).
Then \(\{x_{n}\}\) converges strongly to \(\widetilde{x}\in {{\varOmega }}:= \bigcap^{N}_{i=1}\operatorname {GMEP}({\varTheta }_{i}, \varphi_{i},B_{i}) \cap \operatorname {GSVI}(C,F_{1},F_{2})\cap \operatorname {Fix}(T)\), which is the unique solution of VI (3.2).
Proof
First, note that by condition (C1) we may assume, without loss of generality, that \(\alpha_{n}\tau <1\), \(\beta_{n} \bar{\gamma }<1\), and \(\frac{2\beta_{n}(\bar{\gamma }-1)}{1-\beta_{n}}<1\) for all \(n\geq 0\). Let \(\widetilde{x}\in {{\varOmega }}\) be the unique solution of VI (3.2). (The existence of x̃ follows from Theorem 3.2.)
From now on, we put \(z_{n}=R_{n}x_{n}\), \(u_{n}={\varDelta}^{N}_{n}z_{n}\), and \(v_{n}=T_{r_{n}}u_{n}\). Take \(p\in \varOmega \). Then \(p=T_{r_{n}}p\) by Lemma 2.8(iii), \(p={\varDelta}^{i}_{n}p\) (\(=T^{({\varTheta }_{i}, \varphi_{i})}_{r_{i, n}}(I-r_{i, n}B_{i})p\)) by Proposition 2.1(iii), and \(p=R_{n}p\) by Lemma 1.1.
We divide the proof into several steps as follows.
Step 1. We show that \(\{x_{n}\}\) is bounded. Indeed, utilizing Proposition 2.1(ii) and Proposition 2.2, we have
It easily follows from the nonexpansivity of \(R_{n}\) that
which together with the nonexpansivity of \(T_{r_{n}}\) and (3.39) implies that
By induction, we derive
This implies that \(\{x_{n}\}\) is bounded and so are \(\{Vx_{n}\}\), \(\{u _{n}\}\), \(\{v_{n}\}\), \(\{w_{n}\}\), \(\{z_{n}\}\), and \(\{Gv_{n}\}\). As a consequence, with the control condition (C1), we get
Step 2. We show that \(\lim_{n\to \infty }\Vert x_{n+1}-x_{n}\Vert =0\). To this end, let \(y_{n}=F_{2, \nu_{n}}x_{n}\), \(y_{n-1}=F_{2, \nu_{n-1}}x _{n-1}\), \(z_{n}=F_{1,\lambda_{n}}y_{n}\), and \(z_{n-1}=F_{1, \lambda_{n-1}}y _{n-1}\). Then we derive
and
Putting \(y=y_{n}\) in (3.42) and \(y=y_{n-1}\) in (3.43), we obtain
and
Adding up (3.44) and (3.45), we have
which together with the monotonicity of \(F_{2}\) implies that
and hence
It follows that
which immediately yields
By using arguments similar to those of (3.46), we get
Substituting (3.46) into (3.47), we have
Note that \(v_{n}=T_{r_{n}}u_{n}\) and \(v_{n-1}=T_{r_{n-1}}u_{n-1}\). By using arguments similar to those of (3.46), we obtain
Also, utilizing arguments similar to those of (3.25) in the proof of Theorem 3.1, we have
where \(\widetilde{M}_{1}>0\) is a constant such that, for each \(n\geq 0\),
So it follows from (3.48), (3.49), and (3.50) that
Since \(\liminf_{n\to \infty }r_{n}>0\), \(\lim_{n\to \infty }\lambda_{n}= \lambda >0\), and \(\lim_{n\to \infty }\nu_{n}=\nu >0\), it is easy to see from (3.51) that, for each \(n\geq 0\),
where \(\widetilde{M}>0\) is a constant such that
Now, simple calculations yield that
In terms of (3.52) and Lemma 2.6, we obtain
where \(\widetilde{M}_{2}=\sup_{n\geq 0}\{\gamma \Vert Vx_{n}\Vert +\mu \Vert Gv _{n}\Vert +\widetilde{M}\}\). By (3.53) and Lemma 2.5, we derive
where \(\widetilde{M}_{3}=\sup_{n\geq 0}\{\Vert A\Vert \Vert v_{n}\Vert +\Vert w_{n}\Vert \}\). By taking \(s_{n+1}=\Vert x_{n+1}-x_{n}\Vert \), \(\omega_{n}=\beta_{n}(\bar{ \gamma }-1)\), \(\omega_{n}\delta_{n}=\widetilde{M}_{3}o(\beta_{n})\), and
we deduce from (3.54) that
Hence, by conditions (C2)–(C7) and Lemma 2.2, we obtain
Step 3. We show that \(\lim_{n\to \infty }\Vert x_{n+1}-w_{n}\Vert =0\). Indeed, from (3.41) and condition (C1), we derive
Step 4. We show that \(\lim_{n\to \infty }\Vert x_{n}-w_{n}\Vert =0\). In fact, by Step 2 and Step 3, we get
Step 5. We show that \(\lim_{n\to \infty }\Vert x_{n}-z_{n}\Vert =0\) and \(\lim_{n\to \infty }\Vert x_{n}-Rx_{n}\Vert =0\). In fact, we first derive \(\lim_{n\to \infty }\Vert x_{n}-z_{n}\Vert =0\) by using arguments similar to those of (3.9) in the proof of Theorem 3.1, and then we obtain \(\lim_{n\to \infty }\Vert x_{n}-Rx_{n}\Vert =0\) by using arguments similar to those of (3.37) in the proof of Theorem 3.2.
Step 6. We show that \(\lim_{n\to \infty }\Vert z_{n}-u_{n}\Vert =0\) and \(\lim_{n\to \infty }\Vert x_{n}-{\varDelta}^{N}_{n}x_{n}\Vert =0\). In fact, by using arguments similar to those of (3.12) and (3.13) in the proof of Theorem 3.1, we obtain the desired conclusions.
Step 7. We show that \(\lim_{n\to \infty }\Vert u_{n}-v_{n}\Vert =0\) and \(\lim_{n\to \infty }\Vert x_{n}-T_{r_{n}}x_{n}\Vert =0\). In fact, by using arguments similar to those of (3.14) and (3.15) in the proof of Theorem 3.1, we obtain the desired conclusions.
Step 8. We show that \(\limsup_{n\to \infty }\langle (I-A) \widetilde{x},x_{n}-\widetilde{x}\rangle \leq 0\). To this end, take a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that
Without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup \hat{x}\). Utilizing Steps 5, 6, and 7 and arguments similar to those of Steps 2, 3, and 4 in the proof of Theorem 3.2, we derive \(\hat{x} \in {{\varOmega }}\). Thus, from VI (3.2), we conclude
Step 9. We show that \(\lim_{n\to \infty }\Vert x_{n}-\widetilde{x} \Vert =0\). Note that \(\widetilde{x}\in {{\varOmega }}\). From (3.3), \(\widetilde{x}=R_{n}\widetilde{x}\), \(\widetilde{x}={\varDelta}^{N} _{n}\widetilde{x}\), and \(\widetilde{x}=T_{r_{n}}\widetilde{x}\), we obtain
and
Applying (2.1), (3.40) and Lemmas 2.1, 2.5, and 2.6, we deduce that
and hence
It then follows from (3.55) that
where \(\xi_{n}=\frac{2\beta_{n}(\bar{\gamma }-1)}{1-\beta_{n}}\), \(\delta_{n}=\frac{1}{2(\bar{\gamma }-1)}[2\alpha_{n}\Vert \gamma Vx_{n}- \mu G\widetilde{x}\Vert \Vert w_{n}-\widetilde{x}\Vert +\beta_{n}\bar{\gamma } ^{2}\Vert x_{n}-\widetilde{x}\Vert ^{2}+2\langle (I-A)\widetilde{x},x_{n+1}- \widetilde{x}\rangle ]\). It can be readily seen from Step 2 and conditions (C1) and (C2) that \(\xi_{n}\to 0\), \(\sum^{\infty }_{n=0}\xi _{n}=\infty \), and \(\limsup_{n\to \infty }\delta_{n}\leq 0\). By Lemma 2.2, we conclude that \(\lim_{n\to \infty }\Vert x_{n}- \widetilde{x}\Vert =0\). This completes the proof. □
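Step 9 concludes via the standard convergence lemma (Lemma 2.2): if \(s_{n+1}\leq (1-\xi_{n})s_{n}+\xi_{n}\delta_{n}\) with \(\xi_{n}\in (0,1)\), \(\sum \xi_{n}=\infty \), and \(\limsup_{n}\delta_{n}\leq 0\), then \(s_{n}\to 0\). As a purely illustrative sketch (the sequences below are hypothetical choices, not those appearing in the proof), one can watch this mechanism numerically:

```python
# Illustration of the convergence lemma used in Step 9:
#   s_{n+1} <= (1 - xi_n) s_n + xi_n * delta_n,
# with xi_n in (0,1), sum xi_n = infinity, limsup delta_n <= 0  =>  s_n -> 0.
# The sequences xi_n = 1/(n+1) and delta_n = 1/n^2 are hypothetical examples.

def recurse(s0, n_steps):
    s = s0
    for n in range(1, n_steps + 1):
        xi = 1.0 / (n + 1)      # xi_n -> 0, but the harmonic series diverges
        delta = 1.0 / n**2      # delta_n -> 0, hence limsup delta_n <= 0
        s = (1 - xi) * s + xi * delta
    return s

print(recurse(10.0, 100000))    # decays toward 0
```

The homogeneous part telescopes as \(\prod (1-\frac{1}{n+1})=\frac{1}{N+1}\), which is why divergence of \(\sum \xi_{n}\) is essential.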
Taking \(T\equiv I\), \(G\equiv I\), \(\mu =1\), and \(\gamma =1\) in Theorem 3.3, we have the following corollary.
Corollary 3.2
Let \(\{x_{n}\}\) be generated by the following iterative algorithm:
Assume that the sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{\lambda _{n}\}\), \(\{\nu_{n}\}\), and \(\{r_{i, n}\}^{N}_{i=1}\) satisfy conditions (C1)–(C3) and (C5)–(C7) in Theorem 3.3. Then \(\{x_{n}\}\) converges strongly to \(\widetilde{x}\in {{\varOmega }}:= \bigcap^{N}_{i=1}\operatorname {GMEP}( {\varTheta }_{i}, \varphi_{i}, B_{i})\cap \operatorname {GSVI}(C, F_{1}, F_{2})\), which is the unique solution of VI (3.38).
Remark 3.1
Compared with Proposition 3.3, Theorem 3.4, and Theorem 3.7 in [12], respectively, our Theorems 3.1, 3.2, and 3.3 improve and develop them in the following aspects:
-
(i)
GSVI (1.3) with solutions being also fixed points of a continuous pseudocontractive mapping in [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7] is extended to GSVI (1.3) with solutions being also common solutions of a finite family of generalized mixed equilibrium problems (GMEPs) and fixed points of a continuous pseudocontractive mapping in our Theorems 3.1, 3.2, and 3.3;
-
(ii)
in the proofs of our Theorems 3.1, 3.2, and 3.3, we use the variable parameters \(\lambda_{t}\) and \(\nu_{t}\) (resp., \(\lambda_{n}\) and \(\nu_{n}\)) in place of the fixed parameters λ and ν in the proof of [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7], and additionally deal with a pool of variable parameters \(\{r_{i, t}\}^{N}_{i=1}\) (resp., \(\{r_{i, n}\}^{N}_{i=1}\)) involving a finite family of GMEPs;
-
(iii)
the iterative schemes in our Theorems 3.1, 3.2, and 3.3 are more advantageous and more flexible than the iterative schemes in [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7], because they can be applied to solving three problems (i.e., GSVI (1.3), a finite family of GMEPs, and the fixed point problem of a continuous pseudocontractive mapping) and involve many more parameter sequences;
-
(iv)
it is worth emphasizing that our general implicit iterative scheme (3.1) is very different from Jung’s composite implicit iterative scheme in [12], because the term “\(T_{r_{t}}Rx_{t}\)” in Jung’s implicit scheme is replaced by the term “\(T_{r_{t}}{{\varDelta}}^{N}_{t}R _{t} x_{t}\)” in our implicit scheme (3.1). Moreover, the term “\(T _{r_{n}}Rx_{n}\)” in Jung’s explicit scheme is replaced by the term “\(T_{r_{n}}{{\varDelta}}^{N}_{n}R_{n}x_{n}\)” in our explicit scheme (3.3).
4 Numerical examples
The purpose of this section is to give two examples with numerical results that illustrate the applicability, effectiveness, and stability of our algorithm.
Example 4.1
(Example of Theorem 3.3)
Let \(H=\mathbf{R}\) and \(C=[0, 100]\). Let the inner product \(\langle \cdot , \cdot \rangle : \mathbf{R}\times \mathbf{R}\rightarrow {\mathbf{R}}\) be defined by \(\langle x, y \rangle =xy\). Let \(N=2\), \(Vx=2x\), \(Gx=\frac{1}{2}x\), \(Tx=x\), \(B_{1}x= \frac{1}{2}x\), \(B_{2}x=\frac{1}{3}x\), \(F_{1}x=\frac{1}{2}x\), \(F_{2}x=x\), \(\varTheta_{1}(x, y)=y^{2}-x^{2}\), \(\varTheta_{2}(x, y)=-3x^{2}+xy+2y^{2}\), \(\varphi_{1}x=x^{2}\), \(\varphi_{2}x=0\), and \(Ax=\frac{3}{2}x\). Let \(\alpha_{n}=\frac{1}{n}\), \(\beta_{n}=\frac{1}{3(n+1)}\), \(r_{n}=1\), \(r_{1,n}= \frac{1}{2}\), \(r_{2,n}=1\), \(\lambda_{n}=1\), \(\nu_{n}=\frac{1}{2}\), \(\gamma = \frac{1}{8}\), \(\mu =\frac{2}{3}\). It is easy to calculate that \(T^{({\varTheta }_{1}, \varphi_{1})}_{r_{1, n}}x=\frac{1}{3}x\), \(T^{( {\varTheta }_{2}, \varphi_{2})}_{r_{2, n}}x=\frac{1}{6}x\), \(T_{r_{n}}x=x\), \(F_{1, \lambda_{n}}x=\frac{1}{2}x\), and \(F_{2, \nu_{n}}x=\frac{1}{2}x\). Choose an arbitrary initial guess \(x_{1}=4\). We get the numerical results of Algorithm (3.3).
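The stated resolvent formulas can be verified numerically. By definition, \(z=T^{({\varTheta },\varphi )}_{r}x\) is the unique point satisfying \({\varTheta }(z,y)+\varphi (y)-\varphi (z)+\frac{1}{r}\langle y-z, z-x\rangle \geq 0\) for all \(y\in C\). The following sketch checks this inequality on a grid over \(C=[0,100]\) for the claimed closed forms \(T^{({\varTheta }_{1},\varphi_{1})}_{1/2}x=\frac{1}{3}x\) and \(T^{({\varTheta }_{2},\varphi_{2})}_{1}x=\frac{1}{6}x\) (a grid check, not a proof):

```python
# Numerical check of the resolvent formulas stated in Example 4.1.
# z = T_r^{(Theta, phi)} x must satisfy, for all y in C = [0, 100]:
#   Theta(z, y) + phi(y) - phi(z) + (1/r) * (y - z) * (z - x) >= 0.

def resolvent_ok(theta, phi, r, x, z, lo=0.0, hi=100.0, num=2001, tol=1e-9):
    for k in range(num):
        y = lo + (hi - lo) * k / (num - 1)
        val = theta(z, y) + phi(y) - phi(z) + (y - z) * (z - x) / r
        if val < -tol:
            return False
    return True

theta1 = lambda x, y: y**2 - x**2         # Theta_1(x, y) = y^2 - x^2
phi1 = lambda x: x**2                     # phi_1(x) = x^2
theta2 = lambda x, y: -3*x**2 + x*y + 2*y**2
phi2 = lambda x: 0.0

for x in [1.0, 4.0, 50.0, 99.0]:
    assert resolvent_ok(theta1, phi1, 0.5, x, x / 3)   # T_{1/2} x = x/3
    assert resolvent_ok(theta2, phi2, 1.0, x, x / 6)   # T_{1} x = x/6
print("resolvent formulas verified")
```

In fact, both inequalities reduce to \(2(y-z)^{2}\geq 0\) after substituting \(z=x/3\) (resp. \(z=x/6\)), which confirms the closed forms exactly.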
Table 1 shows the value of the sequence \(\{x_{n}\}\).
Figure 1 shows the convergence of the iterative sequence of Algorithm (3.3).
Solution: We can see from both Table 1 and Fig. 1 that the sequence \(\{x_{n}\}\) converges to 0, that is, 0 is the solution in Example 4.1. In addition, it is also easy to check from Example 4.1 that \(\bigcap^{2}_{i=1}\operatorname {GMEP}({\varTheta }_{i}, \varphi_{i}, B_{i}) \cap \operatorname {GSVI}(C, F_{1}, F_{2})\cap \operatorname {Fix}(T)=\{0\}\). Therefore, the iterative algorithm of Theorem 3.3 is efficient.
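The claim \(\operatorname {GSVI}(C,F_{1},F_{2})=\{0\}\) can also be checked through the fixed point reformulation. Assuming the standard composite form \(G=P_{C}(I-\lambda F_{1})P_{C}(I-\nu F_{2})\) from the GSVI literature (the paper's mapping G in (2.1) is taken to be of this form here), the data of Example 4.1 make G a \(\frac{1}{4}\)-contraction on C, so its unique fixed point, and hence the GSVI solution, is 0:

```python
# Sketch: the GSVI of Example 4.1 recast as a fixed point problem for
# G = P_C(I - lam*F1) P_C(I - nu*F2); this composite form is the standard
# one from the literature and is assumed here. With F1 x = x/2, F2 x = x,
# lam = 1, nu = 1/2 on C = [0, 100], one computes G x = x/4 on C,
# a 1/4-contraction whose unique fixed point is 0.

def P_C(x, lo=0.0, hi=100.0):
    return min(max(x, lo), hi)          # metric projection onto C = [0, 100]

F1 = lambda x: 0.5 * x
F2 = lambda x: x
lam, nu = 1.0, 0.5

def G(x):
    y = P_C(x - nu * F2(x))             # inner step of the GSVI mapping
    return P_C(y - lam * F1(y))

x = 4.0                                  # same initial guess as Example 4.1
for _ in range(30):                      # Picard iteration of the contraction
    x = G(x)
print(x)                                 # converges to the fixed point 0
```

This agrees with the convergence of \(\{x_{n}\}\) to 0 observed in Table 1 and Fig. 1.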
Example 4.2
(Example of Theorem 3.7 in [12])
Let \(H=\mathbf{R}\) and \(C=[0, 100]\). Let the inner product \(\langle \cdot , \cdot \rangle : \mathbf{R}\times \mathbf{R}\rightarrow {\mathbf{R}}\) be defined by \(\langle x, y\rangle =xy\). Let \(Vx=2x\), \(Gx=\frac{1}{2}x\), \(Tx=x\), \(F_{1}x=\frac{1}{2}x\), \(F_{2}x=x\), and \(Ax=\frac{3}{2}x\). Let \(\alpha_{n}=\frac{1}{n}\), \(\beta _{n}=\frac{1}{3(n+1)}\), \(r_{n}=1\), \(\lambda =1\), \(\nu =\frac{1}{2}\), \(\gamma =\frac{1}{8}\), \(\mu =\frac{2}{3}\). Choose an arbitrary initial guess \(x_{1}=4\). We get the numerical results of Algorithm (1.5) (Algorithm (3.10) of [12]).
Table 2 shows the value of the sequence \(\{x_{n}\}\).
Figure 2 shows the convergence of the iterative sequence of Algorithm (1.5).
Solution: We can see from both Table 2 and Fig. 2 that the sequence \(\{x_{n}\}\) converges to 0, that is, 0 is the solution in Example 4.2. In addition, it is also easy to check from Example 4.2 that \(\operatorname {GSVI}(C, F_{1}, F_{2})\cap \operatorname {Fix}(T)=\{0\}\).
Remark 4.1
From Tables 1 and 2 and Figs. 1 and 2, it is readily seen that the convergence of \(\{x_{n}\}\) to 0 in Example 4.1 is faster than that of \(\{x_{n}\}\) to 0 in Example 4.2. Therefore, our algorithm is more applicable, efficient, and stable than the algorithm in [12].
5 Application
In this section, applying our main result Theorem 3.3, we prove a strong convergence theorem for approximating solutions of the standard constrained convex optimization problem.
Let C be a closed convex subset of H. The standard constrained convex optimization problem is to find \(x^{\ast }\in C\) such that
where \(f : C\to {\mathbf{R}}\) is a convex, Fréchet differentiable function. The set of solutions of (5.1) is denoted by \(\varPhi_{f}\).
Lemma 5.1
(Optimality condition, [25])
A necessary condition of optimality for a point \(x^{\ast }\in C\) to be a solution of the minimization problem (5.1) is that \(x^{\ast }\) solves the variational inequality
for all \(x\in C\). Equivalently, \(x^{\ast }\in C\) solves the fixed point equation
for every \(\lambda >0\). If, in addition, f is convex, then the optimality condition (5.2) is also sufficient.
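The fixed point characterization in Lemma 5.1 is exactly what projected gradient methods iterate: \(x_{k+1}=P_{C}(x_{k}-\lambda \nabla f(x_{k}))\). A minimal sketch with illustrative data not taken from the paper (\(f(x)=(x+2)^{2}\) on \(C=[0,100]\), whose unconstrained minimizer \(-2\) lies outside C, so the constrained minimizer is the boundary point \(x^{*}=0\)):

```python
# Lemma 5.1: x* minimizes f over C iff x* = P_C(x* - lam * grad_f(x*)).
# Illustrative choices (not from the paper): f(x) = (x + 2)^2 on C = [0, 100],
# so the constrained minimizer is x* = 0, where the constraint is active.

def P_C(x, lo=0.0, hi=100.0):
    return min(max(x, lo), hi)          # metric projection onto C

def grad_f(x):
    return 2.0 * (x + 2.0)              # gradient of f(x) = (x + 2)^2

lam = 0.25                               # step size, lam < 2/L with L = 2
x = 4.0
for _ in range(50):
    x = P_C(x - lam * grad_f(x))         # projected gradient iteration

print(x)                                 # -> 0.0
```

At the limit, the fixed point equation (5.2) holds: \(P_{C}(0-0.25\cdot \nabla f(0))=P_{C}(-1)=0\).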
Theorem 5.1
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(f_{i}: C\to {\mathbf{R}}\) (\(i=1,2,\ldots, N\)) be real-valued convex functions whose gradients \(\nabla f_{i}\) are continuous and \(\frac{1}{L_{f_{i}}}\)-inverse-strongly monotone with \(L_{f_{i}}>0\). Let \({\varTheta }_{i}\), \(\varphi_{i}\), A, V, G, \(F_{1}\), \(F _{2}\), \(R_{n}\), \(F_{1, \lambda_{n}}\), \(F_{2, \nu_{n}}\), \(T_{r_{n}}\), and \(T^{({\varTheta }_{i},\varphi_{i})}_{r_{i,n}}\) be defined as in Theorem 3.3. Given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated by the following explicit algorithm:
where \({\varLambda }^{i}_{n}=T^{({\varTheta }_{i},\varphi_{i})}_{r _{i, n}}(I-r_{i, n}\nabla f_{i})T^{({\varTheta }_{i-1}, \varphi_{i-1})} _{r_{i-1,n}}(I-r_{i-1,n}\nabla f_{i-1})\cdots T^{({\varTheta }_{1}, \varphi_{1})}_{r_{1, n}}(I-r_{1,n}\nabla f_{1})\) and \({\varLambda } ^{0}_{n}=I\). Assume that \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{r_{n}\}\), \(\{\lambda_{n}\}\), \(\{\nu_{n}\}\), and \(\{r_{i, n}\}^{N}_{i=1}\) satisfy conditions (C1)–(C7) in Theorem 3.3. Then \(\{x_{n}\}\) converges strongly to \(\widetilde{x}\in {{\varOmega }}:=\bigcap^{N}_{i=1}{\operatorname {MEP}}( {\varTheta }_{i}, \varphi_{i})\cap \bigcap^{N}_{i=1}\varPhi_{f_{i}} \cap \operatorname {GSVI}(C,F_{1},F_{2})\cap \operatorname {Fix}(T)\), which is the unique solution of VI (3.2).
Proof
By using Lemma 5.1 and Theorem 3.3, we obtain the desired conclusion directly. □
6 Conclusions
We introduced and analyzed a general implicit iterative scheme and a general explicit iterative scheme for finding a solution of a general system of variational inequalities (GSVI) with the constraints of finitely many generalized mixed equilibrium problems and a fixed point problem of a continuous pseudocontractive mapping in a Hilbert space. Moreover, we established strong convergence of the proposed implicit and explicit iterative schemes to a solution of the GSVI, which is the unique solution of a certain variational inequality. Our Theorems 3.1–3.3 not only improve and develop the main results of [1] and [12] but also improve and develop Theorems 3.1 and 3.2 of [9], Theorems 3.1 and 3.2 of [10], and Proposition 3.1 and Theorems 3.2 and 3.5 of [11].
References
Ceng, L.C., Wang, C.Y., Yao, J.C.: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375–390 (2008)
Siriyan, K., Kangtunyakarn, A.: A new general system of variational inequalities for convergence theorem and application. Numer. Algorithms 12, 1–25 (2018)
Bnouhachem, A.: A modified projection method for a common solution of a system of variational inequalities, a split equilibrium problem and a hierarchical fixed-point problem. Fixed Point Theory Appl. 2014, 22 (2014)
Ceng, L.C., Liou, Y.C., Wen, C.F., Wu, Y.J.: Hybrid extragradient viscosity method for general system of variational inequalities. J. Inequal. Appl. 2015, 150 (2015)
Alofi, A., Latif, A., Mazrooei, A.A., Yao, J.C.: Composite viscosity iterative methods for general systems of variational inequalities and fixed point problem in Hilbert spaces. J. Nonlinear Convex Anal. 17(4), 669–682 (2016)
Rouhani, B.D., Kazmi, K.R., Farid, M.: Common solutions to some systems of variational inequalities and fixed point problems. Fixed Point Theory 18(1), 167–190 (2017)
Eslamian, M., Saejung, S., Vahidi, J.: Common solution of a system of variational inequality problems. UPB Sci. Bull., Ser. A 77(1), 55–62 (2015)
Alofi, A.S.M., Latif, A., Al-Marzooei, A.E., Yao, J.C.: Composite viscosity iterative methods for general systems of variational inequalities and fixed point problem in Hilbert spaces. J. Nonlinear Convex Anal. 17, 669–682 (2016)
Ceng, L.C., Guu, S.M., Yao, J.C.: A general composite iterative algorithm for nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 61, 2447–2455 (2011)
Jung, J.S.: A general composite iterative method for strictly pseudocontractive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014, 173 (2014)
Kong, Z.R., Ceng, L.C., Liou, Y.C., Wen, C.F.: Hybrid steepest-descent methods for systems of variational inequalities with constraints of variational inclusions and convex minimization problems. J. Nonlinear Sci. Appl. 10, 874–901 (2017)
Jung, J.S.: Strong convergence of some iterative algorithms for a general system of variational inequalities. J. Nonlinear Sci. Appl. 10, 3887–3902 (2017)
Peng, J.W., Yao, J.C.: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 12, 1401–1432 (2008)
Kong, Z.R., Ceng, L.C., Ansari, Q.H., Pang, C.T.: Multistep hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013, Article ID 718624 (2013)
Ceng, L.C., Ansari, Q.H., Schaible, S.: Hybrid extragradient-like methods for generalized mixed equilibrium problems, systems of generalized equilibrium problems and optimization problems. J. Glob. Optim. 53, 69–96 (2012)
Ceng, L.C., Yao, J.C.: A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 72, 1922–1937 (2010)
Ceng, L.C., Lin, Y.C., Wen, C.F.: Iterative methods for triple hierarchical variational inequalities with mixed equilibrium problems, variational inclusions, and variational inequalities constraints. J. Inequal. Appl. 2015, 16 (2015)
Ceng, L.C., Hu, H.Y., Wong, M.M.: Strong and weak convergence theorems for generalized mixed equilibrium problem with perturbation and fixed point problem of infinitely many nonexpansive mappings. Taiwan. J. Math. 15, 1341–1367 (2011)
Peng, J.W., Yao, J.C.: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 12, 1401–1432 (2008)
Jung, J.S.: A new iteration method for nonexpansive mappings and monotone mappings in Hilbert spaces. J. Inequal. Appl. 2010, Article ID 251761 (2010)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)
Marino, G., Xu, H.K.: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43–52 (2006)
Yamada, I.: The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, pp. 473–504. North-Holland, Amsterdam (2001)
Zegeye, H.: An iterative approximation method for a common fixed point of two pseudocontractive mappings. ISRN Math. Anal. 2011, Article ID 621901 (2011)
Suwannaut, S., Kangtunyakran, A.: The combination of the set of solutions of equilibrium problem for convergence theorem of the set of fixed points of strictly pseudo-contractive mappings and variational inequalities problem. Fixed Point Theory Appl. 2013, 291 (2013)
Funding
L.-C. Ceng was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of the Ministry of Education of China (20123127110002), and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).
Author information
Contributions
All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Wang, QW., Guan, JL., Ceng, LC. et al. General iterative methods for systems of variational inequalities with the constraints of generalized mixed equilibria and fixed point problem of pseudocontractions. J Inequal Appl 2018, 315 (2018). https://doi.org/10.1186/s13660-018-1899-0