Abstract
The purpose of this paper is to present a new iterative scheme for finding a common solution to a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities in infinite-dimensional Banach spaces. Under mild conditions, a strong convergence theorem for approximating this common solution is proved. The methods in the paper are novel and different from those in the early and recent literature.
1 Introduction
Variational inequality theory, which was introduced by Stampacchia [1] in the early 1960s, has emerged as an interesting and fascinating branch of applicable mathematics with a wide range of applications in industry, finance, economics, and the social, pure and applied sciences. It has been shown that this theory provides the most natural, direct, simple, unified and efficient framework for a general treatment of a wide class of unrelated linear and nonlinear problems; see, for example, [2–5] and the references therein. Variational inequalities have been extended and generalized in several directions using novel and new techniques.
In 1968, Brézis [6] initiated the study of the existence theory of a class of variational inequalities, later known as variational inclusions, using proximal-point mappings due to Moreau [7]. Variational inclusions include variational, quasi-variational, variational-like inequalities as special cases. Variational inclusions can be viewed as an innovative and novel extension of the variational principles and thus have wide applications in the fields of optimization, control, economics and engineering sciences.
In recent years, much attention has been given to the study of systems of variational inclusions/inequalities, which occupy a central and significant role in interdisciplinary research among analysis, geometry, biology, elasticity, optimization, image processing, biomedical sciences and mathematical physics. This body of research displays both an immense breadth of mathematics and its simplicity. A number of problems leading to systems of variational inclusions/inequalities arise in applications to variational problems and engineering. It is well known that systems of variational inclusions/inequalities can provide new insight into the problems being studied and can stimulate new and innovative ideas for problem solving.
In 2000, Ansari and Yao [8] introduced a system of generalized implicit variational inequalities and proved the existence of its solution. They derived existence results for a solution of a system of generalized variational inequalities, from which they established the existence of a solution of a system of optimization problems.
Ansari et al. [9] introduced the system of vector equilibrium problems and proved the existence of its solution. Moreover, they also applied their results to the system of vector variational inequalities. The results of [8] and [9] were used as tools to solve the Nash equilibrium problem for non-differentiable and (non)convex vector-valued functions.
Let \(A,B : C \rightarrow E\) be two nonlinear mappings. In 2010, Yao et al. [10] introduced the following system of general variational inequalities: find \((x^{*}, y^{*}) \in C \times C\) such that
Recently, in 2-uniformly smooth Banach spaces, Kangtunyakarn [11] introduced a new system of variational inequalities: find \((x^{*}, y^{*}) \in C \times C\) such that
If \(a=0\), then problem (1.2) reduces to the problem of finding \((x^{*}, y^{*}) \in C \times C\) such that
which was introduced by Cai and Bu [12]. In Hilbert spaces, problem (1.3) reduces to the problem of finding \((x^{*}, y^{*}) \in C \times C\) such that
which was introduced by Ceng et al. [13]. If \(A = B\), then problem (1.4) collapses to the problem of finding \((x^{*}, y^{*}) \in C\times C\) such that
which was introduced by Verma [14]. In particular, if we let \(x^{*} = y^{*}\) in (1.5), then problem (1.5) is nothing but the classical variational inequality problem: find \(x^{*}\in C\) such that
The set of solutions of problem (1.6) is denoted by \(\operatorname{VI}(C,A)\).
Motivated by the works mentioned above, we shall consider the following problem in q-uniformly smooth Banach spaces: find \((x^{*}, y^{*}) \in C\times C\) such that
where \(a\in[0,1]\), \(\lambda> 0\) and \(\mu> 0\) are three constants. This problem is called a modified system of variational inequalities, which clearly includes problems (1.1)-(1.6) as special cases.
In order to find a common element of the set of solutions of problem (1.2) and the set of fixed points of nonlinear operators, Kangtunyakarn [11] also studied the following algorithm in a 2-uniformly smooth Banach space:
where \(S^{A}\) is the \(S^{A}\)-mapping generated by \(S_{1}, S_{2},\ldots, S_{N}\) and \(T_{1}, T_{2},\ldots, T_{N}\), \(G:C \rightarrow C\) is the mapping defined by \(Gx= Q_{C}(I-\lambda A)(aI+(1-a)Q_{C}(I-\mu B))x\), and \(Q_{C}\) is a sunny nonexpansive retraction from E onto C. Under mild conditions, a strong convergence theorem was established.
On the other hand, the quasi-variational inclusion problem in the setting of Hilbert spaces has been extensively studied in the literature; see, for instance, [15–23]. There is, however, little work in the existing literature on this problem in the setting of Banach spaces. The main difficulty is that the inner product structure of Hilbert spaces is not available in general Banach spaces. To overcome this difficulty, López et al. [24] used new techniques to initiate the investigation of splitting methods for accretive operators in Banach spaces. They considered the following algorithms with errors in Banach spaces:
and
where \(u\in E\), \(\{a_{n}\}, \{b_{n}\}\subset E\) and \(J_{r_{n}} = (I +r_{n}B)^{-1}\) is the resolvent of B. Then they established the weak and strong convergence of algorithms (1.9) and (1.10), respectively.
Recently, Khuangsatung and Kangtunyakarn [25] introduced the following algorithm in Hilbert spaces for finding a common element of the set of fixed points of a k-strictly pseudononspreading mapping, the set of solutions of a finite family of variational inclusion problems and the set of solutions of a finite family of equilibrium problems:
Under suitable conditions, they proved the strong convergence of the sequence \(\{w_{n}\}\).
Motivated and inspired by Zhang et al. [19], Qin et al. [26], López et al. [24], Takahashi et al. [27] and Khuangsatung and Kangtunyakarn [25], we suggest and analyze a new iterative algorithm for finding a common solution to a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities in infinite-dimensional Banach spaces. We also establish the strong convergence of the proposed algorithm under suitable conditions. The results obtained in this paper improve and extend the corresponding results announced by many others.
2 Preliminaries
Throughout this paper, we denote by E and \(E^{*}\) a real Banach space and the dual space of E, respectively. We use \(\operatorname{Fix}(T)\) to denote the set of fixed points of T and \(\mathscr{B}_{r}\) to denote the closed ball with center zero and radius r. Let C be a subset of E and \(q > 1\) be a real number. The (generalized) duality mapping \(J_{q} : E \rightarrow2^{E^{*}}\) is defined by
$$ J_{q}(x)=\bigl\{ f\in E^{*} : \langle x, f\rangle= \Vert x\Vert ^{q}, \Vert f\Vert =\Vert x\Vert ^{q-1}\bigr\} $$
for all \(x \in E\), where \(\langle \cdot,\cdot\rangle\) denotes the generalized duality pairing between E and \(E^{*}\). It is well known that if E is smooth, then \(J_{q}\) is single-valued, which is denoted by \(j_{q}\).
Let C be a nonempty closed convex subset of a real Banach space E. Let \(A : E\rightarrow E\) be a single-valued nonlinear mapping, and let \(M : E\rightarrow2^{E}\) be a multivalued mapping. The so-called quasi-variational inclusion problem is to find a \(z \in E\) such that
$$ 0\in Az+Mz. $$
The set of solutions of (2.1) is denoted by \(\operatorname{VI}(E, A, M)\).
Definition 2.1
Let E be a Banach space. Then a function \(\delta_{E} : [0, 2]\rightarrow[0,1]\) is said to be the modulus of convexity of E if
$$ \delta_{E}(\epsilon)=\inf \biggl\{ 1- \biggl\Vert \frac{x+y}{2}\biggr\Vert : \Vert x\Vert =\Vert y\Vert =1, \Vert x-y\Vert \geq\epsilon \biggr\} . $$
If \(\delta_{E}(\epsilon)>0\) for all \(\epsilon\in(0, 2]\), then E is uniformly convex.
Definition 2.2
The function \(\rho_{E}: [0, 1) \rightarrow[0, 1)\) is said to be the modulus of smoothness of E if
$$ \rho_{E}(t)=\sup \biggl\{ \frac{\Vert x+y\Vert +\Vert x-y\Vert }{2}-1 : \Vert x\Vert =1, \Vert y\Vert \leq t \biggr\} . $$
A Banach space E is said to be:
(1) uniformly smooth if \(\frac{\rho_{E}(t)}{t}\rightarrow0\) as \(t\rightarrow0\);
(2) q-uniformly smooth if there exists a fixed constant \(c > 0\) such that \(\rho_{E}(t)\leq ct^{q}\), where \(q\in(1,2]\).
It is known that a uniformly convex Banach space is reflexive and strictly convex.
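For intuition (this illustration is ours, not from the paper): in a Hilbert space the modulus of convexity has the closed form \(\delta(\epsilon)=1-\sqrt{1-\epsilon^{2}/4}\), which a short script can confirm numerically for the Euclidean plane.

```python
import math

def delta_R2(eps, steps=200000):
    # Modulus of convexity of the Euclidean plane: by rotation invariance it
    # suffices to take x, y on the unit circle separated by angle t, where
    # ||x - y|| = 2 sin(t/2) and ||(x + y)/2|| = cos(t/2).
    best = 1.0
    for k in range(steps + 1):
        t = math.pi * k / steps                     # angle in [0, pi]
        if 2.0 * math.sin(t / 2.0) >= eps:          # constraint ||x - y|| >= eps
            best = min(best, 1.0 - math.cos(t / 2.0))
    return best

# Closed form for Hilbert spaces: delta(eps) = 1 - sqrt(1 - eps^2 / 4)
for eps in (0.5, 1.0, 1.5):
    closed = 1.0 - math.sqrt(1.0 - eps * eps / 4.0)
    assert abs(delta_R2(eps) - closed) < 1e-4
```

Since \(\delta_{E}(\epsilon)>0\) for every \(\epsilon\in(0,2]\) here, this recovers the familiar fact that Hilbert spaces are uniformly convex.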
Definition 2.3
A mapping \(T : C\rightarrow E\) is said to be:
(1) nonexpansive if
$$ \Vert Tx-Ty\Vert \leq \Vert x-y\Vert \quad \text{for all } x, y \in C; $$
(2) r-contractive if there exists \(r\in[0, 1)\) such that
$$ \Vert Tx-Ty\Vert \leq r\Vert x-y\Vert \quad \text{for all } x, y \in C; $$
(3) accretive if for all \(x, y \in C\), there exists \(j_{q}(x- y) \in J_{q}(x- y)\) such that
$$ \bigl\langle Tx-Ty, j_{q}(x-y)\bigr\rangle \geq0; $$
(4) η-strongly accretive if for all \(x, y \in C\), there exist \(\eta> 0\) and \(j_{q}(x-y) \in J_{q} (x - y)\) such that
$$ \bigl\langle Tx-Ty, j_{q}(x-y)\bigr\rangle \geq\eta \Vert x-y\Vert ^{q}; $$
(5) μ-inverse-strongly accretive if for all \(x, y \in C\), there exist \(\mu> 0\) and \(j_{q}(x-y) \in J_{q} (x - y)\) such that
$$ \bigl\langle Tx-Ty, j_{q}(x-y)\bigr\rangle \geq\mu \Vert Tx-Ty\Vert ^{q}. $$
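In the Hilbert-space case (\(q=2\), \(j_{2}=I\)), property (5) combined with the Cauchy-Schwarz inequality shows that a μ-inverse-strongly accretive mapping is \(\frac{1}{\mu}\)-Lipschitz. A minimal numeric sketch with a toy linear map of our own choosing:

```python
import random

def T(x, c=4.0):
    # Illustrative linear map on R; it is mu-inverse-strongly monotone with mu = 1/c
    return c * (x - 1.0)

c, mu = 4.0, 0.25
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = (T(x) - T(y)) * (x - y)                    # <Tx - Ty, x - y> in R
    # inverse-strong monotonicity with modulus mu ...
    assert lhs >= mu * (T(x) - T(y)) ** 2 - 1e-9
    # ... which forces the (1/mu)-Lipschitz estimate
    assert abs(T(x) - T(y)) <= (1.0 / mu) * abs(x - y) + 1e-9
```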
Definition 2.4
A set-valued mapping \(T: \operatorname{Dom}(T) \rightarrow2^{E}\) is said to be:
(1) accretive if for any \(x, y \in \operatorname{Dom}(T)\), there exists \(j_{q}(x-y)\in J_{q}(x-y)\) such that for all \(u\in T(x)\) and \(v \in T(y)\),
$$ \bigl\langle u-v,j_{q}(x - y)\bigr\rangle \geq0; $$
(2) m-accretive if T is accretive and \((I + \rho T)(\operatorname{Dom}(T))= E\) for every (equivalently, for some) \(\rho> 0\), where I is the identity mapping.
Let \(M : \operatorname{Dom}(M)\rightarrow2^{E}\) be m-accretive. The mapping \(J_{M,\rho}: E \rightarrow \operatorname{Dom}(M)\) defined by
$$ J_{M,\rho}(x):=(I+\rho M)^{-1}(x), \quad x\in E, $$
is called the resolvent operator associated with M, where ρ is any positive number and I is the identity mapping. It is well known that \(J_{M,\rho}\) is single-valued and nonexpansive.
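As a concrete instance (our illustration, not from the paper): on \(E=\mathbb{R}\), the operator \(M=\partial|\cdot|\) is m-accretive, and its resolvent \((I+\rho M)^{-1}\) is the soft-thresholding map, which is indeed single-valued and nonexpansive:

```python
import math, random

def resolvent_abs(x, rho):
    # (I + rho * d|.|)^{-1}(x): soft-thresholding, the resolvent of M = d|.| on R
    return math.copysign(max(abs(x) - rho, 0.0), x)

rho = 0.7
random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # nonexpansiveness of the resolvent
    assert abs(resolvent_abs(x, rho) - resolvent_abs(y, rho)) <= abs(x - y) + 1e-12
    # the resolvent inclusion x - z in rho * d|z| forces |x - z| <= rho
    z = resolvent_abs(x, rho)
    assert abs(x - z) <= rho + 1e-12
```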
We need some facts and tools which are listed as lemmas below.
Lemma 2.5
([28])
Let E be a Banach space and \(J_{q}\) be a generalized duality mapping. Then, for any given \(x, y\in E\), the following inequality holds:
$$ \Vert x+y\Vert ^{q}\leq \Vert x\Vert ^{q}+q\bigl\langle y, j_{q}(x+y)\bigr\rangle , \quad j_{q}(x+y)\in J_{q}(x+y). $$
Lemma 2.6
([29])
Let \(\{\alpha_{n}\}\) be a sequence of nonnegative numbers satisfying the property
$$ \alpha_{n+1}\leq(1-\gamma_{n})\alpha_{n}+b_{n}+\gamma_{n}c_{n}, \quad n\geq1, $$
where \(\{\gamma_{n} \}\), \(\{b_{n} \}\), \(\{c_{n} \}\) satisfy the restrictions:
(i) \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\), \(\lim_{n\rightarrow \infty}\gamma_{n} =0\);
(ii) \(b_{n}\geq0\), \(\sum_{n=1}^{\infty}b_{n}<\infty\);
(iii) \(\limsup_{n\rightarrow\infty}c_{n} \leq0\).
Then \(\lim_{n\rightarrow\infty}\alpha_{n} =0\).
Lemma 2.7
([28])
Let \(1 < p <\infty\), \(q \in(1, 2]\), \(r > 0\) be given. If E is a real q-uniformly smooth Banach space, then there exists a constant \(C_{q} > 0\) such that
$$ \Vert x+y\Vert ^{q}\leq \Vert x\Vert ^{q}+q\bigl\langle y, j_{q}(x)\bigr\rangle +C_{q}\Vert y\Vert ^{q} \quad \text{for all } x, y\in E. $$
Lemma 2.8
([30])
Let C be a nonempty closed convex subset of a real q-uniformly smooth Banach space E. Let the mapping \(A : C\rightarrow E\) be an α-inverse-strongly accretive operator. Then the following inequality holds:
$$ \bigl\Vert (I-\lambda A)x-(I-\lambda A)y\bigr\Vert ^{q}\leq \Vert x-y\Vert ^{q}-\lambda\bigl(q\alpha-C_{q}\lambda^{q-1}\bigr)\Vert Ax-Ay\Vert ^{q}. $$
In particular, if \(0 <\lambda\leq(\frac{q\alpha}{C_{q} })^{\frac{1}{q-1}}\), then \(I-\lambda A\) is nonexpansive.
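In a Hilbert space one has \(q=2\) and one may take \(C_{2}=1\), so the bound in Lemma 2.8 reads \(\lambda\leq2\alpha\). A minimal sketch with the toy map \(Ax=cx\) on \(\mathbb{R}\) (our own choice), which is \(\frac{1}{c}\)-inverse-strongly monotone:

```python
c = 4.0          # A(x) = c*x is (1/c)-inverse-strongly monotone on R
alpha = 1.0 / c  # so the Hilbert-case bound of Lemma 2.8 allows lam <= 2*alpha

def lipschitz_I_minus_lam_A(lam):
    # I - lam*A = (1 - lam*c)*I on R, so its Lipschitz constant is |1 - lam*c|
    return abs(1.0 - lam * c)

assert lipschitz_I_minus_lam_A(0.3) <= 1.0        # 0.3 < 2*alpha = 0.5: nonexpansive
assert lipschitz_I_minus_lam_A(2 * alpha) <= 1.0  # boundary case lam = 2*alpha
assert lipschitz_I_minus_lam_A(0.6) > 1.0         # beyond the bound: expansive
```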
Recall that if C and D are nonempty subsets of a Banach space E such that C is closed convex and \(D\subset C\), then a mapping \(Q : C\rightarrow D\) is sunny [31] provided
$$ Q\bigl(Qx + t(x -Qx)\bigr) = Qx $$
for all \(x \in C\) and \(t \geq0\), whenever \(Qx + t(x -Qx) \in C\). A mapping \(Q : C\rightarrow D\) is called a retraction if \(Qx = x\) for all \(x \in D\). Furthermore, Q is a sunny nonexpansive retraction from C onto D if Q is a retraction from C onto D which is also sunny and nonexpansive. A subset D of C is called a sunny nonexpansive retract of C if there exists a sunny nonexpansive retraction from C onto D. The following lemma collects some properties of the sunny nonexpansive retraction.
Lemma 2.9
Let C be a closed convex subset of a smooth Banach space E. Let D be a nonempty subset of C. Let \(Q : C \rightarrow D\) be a retraction and let j, \(j_{q}\) be the normalized duality mapping and generalized duality mapping on E, respectively. Then the following are equivalent:
(i) Q is sunny and nonexpansive;
(ii) \(\Vert Qx-Qy\Vert ^{2}\leq\langle x-y, j(Qx-Qy)\rangle\), \(\forall x, y \in C\);
(iii) \(\langle x - Qx, j(y- Qx)\rangle\leq0\), \(\forall x \in C\), \(y \in D\);
(iv) \(\langle x - Qx, j_{q} (y- Qx)\rangle\leq0\), \(\forall x \in C\), \(y \in D\).
Lemma 2.10
Let \(A: C\rightarrow E\) and \(M: C\supseteq \operatorname{Dom}(M)\rightarrow2^{E}\) be two nonlinear operators. Denote \(J_{r}\) by
$$ J_{r}:=(I+rM)^{-1} $$
and \(T_{r}\) by
$$ T_{r}:=J_{r}(I-rA). $$
Then it holds for all \(r > 0\) that \(\operatorname{Fix}(T_{r}) = \operatorname {VI}(E, A, M)\).
Proof
From the definition of \(T_{r}\), it follows that
$$ x\in\operatorname{Fix}(T_{r}) \quad \Leftrightarrow\quad x=J_{r}(x-rAx) \quad \Leftrightarrow\quad x-rAx\in x+rMx \quad \Leftrightarrow\quad 0\in Ax+Mx \quad \Leftrightarrow\quad x\in\operatorname{VI}(E, A, M). $$
This completes the proof. □
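Lemma 2.10 can be checked on a toy instance of our own making: on \(\mathbb{R}\), take \(Ax=x-1\) and \(M=\partial I_{[0,2]}\), so that \(J_{r}\) is the clipping onto \([0,2]\). The unique solution of \(0\in Ax+Mx\) is \(x=1\), and it is exactly the fixed point of \(T_{r}\):

```python
def A(x):
    return x - 1.0                    # 0 in Ax + Mx holds only at x = 1

def J_r(x):
    return min(max(x, 0.0), 2.0)      # resolvent of M = subdifferential of I_[0,2]

def T_r(x, r=0.5):
    return J_r(x - r * A(x))          # T_r = J_r (I - r A)

# x = 1 is a fixed point of T_r ...
assert abs(T_r(1.0) - 1.0) < 1e-12
# ... and iterating T_r converges to it (here T_r is a contraction with factor 1/2)
x = 5.0
for _ in range(200):
    x = T_r(x)
assert abs(x - 1.0) < 1e-9
```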
Lemma 2.11
([24])
Assume that C is a nonempty closed subset of a real uniformly convex and q-uniformly smooth Banach space E. Suppose that \(A: C \rightarrow E\) is α-inverse-strongly accretive and M is an m-accretive operator in E, with \(\operatorname{Dom}(M)\subseteq C\). Then it holds that:
(i) Given \(0 < s\leq r\) and \(x\in E\),
$$ \Vert T_{s}x-T_{r}x\Vert \leq\biggl\vert 1- \frac{s}{r} \biggr\vert \Vert x-T_{r}x\Vert \quad \textit{and} \quad \Vert x-T_{s}x\Vert \leq2\Vert x-T_{r}x\Vert . $$
(ii) Given \(k> 0\), there exists a continuous, strictly increasing and convex function \(\phi_{q}: [0, \infty)\rightarrow[0, \infty)\) with \(\phi_{q} (0) = 0\) such that for all \(x, y \in\mathscr{B}_{k}\),
$$\begin{aligned} \|{T_{r}x-T_{r}y}\|^{q} \leq& \|{x-y} \|^{q}-r\bigl(\alpha q-r^{q-1}C_{q}\bigr)\|{Ax-Ay} \|^{q} \\ &{}-\phi_{q} \bigl(\bigl\Vert {(I-J_{r}) (I-rA)x -(I-J_{r}) (I-rA)y}\bigr\Vert \bigr). \end{aligned}$$
3 Main results
For every \(i = 1, 2,\ldots,N\), let \(A_{i}:C\rightarrow E\) and \(M:C\supseteq \operatorname{Dom}(M)\rightarrow2^{E}\) be nonlinear mappings. From (2.1), we introduce the following combination of variational inclusion problems in Banach spaces: find a point \(x^{*}\in C\) such that
$$ 0\in\sum_{i=1}^{N}\lambda_{i}A_{i}x^{*}+Mx^{*}, $$
where \(\lambda_{i}\) is a real positive number for all \(i = 1, 2,\ldots,N\) with \(\sum_{i=1}^{N}\lambda_{i}=1\). The set of solutions of (3.1) in Banach spaces is denoted by \(\operatorname{VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\).
To prove the strong convergence results, we also need the following four lemmas.
Lemma 3.1
Let C be a nonempty closed convex subset of a real smooth Banach space E. Let \(N\geq1\) be some positive integer, \(A_{i}: C\rightarrow E\) be \(\eta_{i}\)-inverse-strongly accretive with \(\eta =\min\{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\), and M be m-accretive in E with \(\operatorname{Dom}(M)\subseteq C\). Let \(\{\lambda_{i}\}\) be a real number sequence in \((0,1)\) with \(\sum_{i=1}^{N}\lambda_{i}=1\) and \(\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\neq\emptyset\). Then
$$ \operatorname{VI}\Biggl(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M\Biggr)=\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M). $$
Proof
It is obvious that \(\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\subseteq\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\). Next we prove that
$$ \operatorname{VI}\Biggl(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M\Biggr)\subseteq\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M). $$
Suppose that \(x_{1}\in \operatorname{VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\) and \(x_{2}\in\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\). We have from Lemma 2.10 that
Since \(\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M) \subseteq\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\), we have \(x_{2}\in\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\). Again, from Lemma 2.10, we have
In light of the nonexpansiveness of \(J_{r}\), we deduce that
which means that
By Lemma 2.10, without loss of generality, we may assume \(r\in (0,(\frac{q\eta}{C_{q} })^{\frac{1}{q-1}} )\). We then deduce that
Again since \(x_{1}\in \operatorname{VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\) and \(x_{2}\in\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M) \), we find that
and
We derive from (3.3) and (3.4) that
It then follows from (3.2) and (3.5) that
By virtue of \(x_{2}\in\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\) and (3.2), we see
for all \(i =1,2,\ldots,N\), which yields that
Hence, we obtain the desired result. □
Lemma 3.2
Let E, C, M, η, \(\lambda_{i}\) and \(A_{i}\) be the same as those in Lemma 3.1. Then the mapping \(\sum_{i=1}^{N}\lambda_{i}A_{i}\) is η-inverse-strongly accretive.
Proof
Let \(x, y \in C\). It follows that
Consequently, the mapping \(\sum_{i=1}^{N}\lambda_{i}A_{i}\) is η-inverse-strongly accretive. □
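A quick numerical sanity check of Lemma 3.2, with illustrative operators of our own choosing in the Hilbert case \(E=\mathbb{R}\):

```python
import random

A1 = lambda x: 1.0 * (x - 1.0)    # 1-inverse-strongly monotone
A2 = lambda x: 4.0 * (x - 1.0)    # 1/4-inverse-strongly monotone
eta = 0.25                        # eta = min of the two moduli
V = lambda x: 0.5 * A1(x) + 0.5 * A2(x)   # convex combination, as in Lemma 3.2

random.seed(2)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    # eta-inverse-strong monotonicity of the combination V
    assert (V(x) - V(y)) * (x - y) >= eta * (V(x) - V(y)) ** 2 - 1e-9
```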
Lemma 3.3
Assume that C is a nonempty closed subset of a real uniformly convex and q-uniformly smooth Banach space E. Let \(S:C\rightarrow C\) be nonexpansive, \(A:C\rightarrow E\) be η-inverse-strongly accretive, and \(M:\operatorname{Dom}(M)\rightarrow2^{E}\) be m-accretive with \(\operatorname{Dom}(M)\subseteq C\). Assume \(r\in(0, (\frac{q\eta}{C_{q} })^{\frac{1}{q-1}})\) and \(\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r})\neq\emptyset\). Then \(\operatorname{Fix}(ST_{r})=\operatorname{Fix}(T_{r}S)=\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\).
Proof
It is easy to check that \(\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r})\subseteq\operatorname{Fix}(ST_{r})\) and \(\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\subseteq\operatorname{Fix}(T_{r}S)\). We are left to show that \(\operatorname{Fix}(ST_{r})\subseteq\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r}) \) and \(\operatorname{Fix}(T_{r}S)\subseteq\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\).
We first prove \(\operatorname{Fix}(ST_{r})\subseteq\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\). Suppose that \(\hat{x}\in \operatorname{Fix}(ST_{r})\) and \(\tilde{x}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\). We have by Lemma 2.11 that
Hence, we have from \(r\in(0,(\frac{q\eta}{C_{q} })^{\frac{1}{q-1}})\) and the property of \(\phi_{q}\) that
It follows that
Hence, we have
By the assumption of \(\hat{x}\in\operatorname{Fix}(ST_{r})\), we have \(\hat {x}=S\hat{x}\). This means that \(\hat{x}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\).
We now prove \(\operatorname{Fix}( T_{r}S)\subseteq\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\). Suppose that \(\tilde{u}\in \operatorname{Fix}(T_{r}S)\) and \(\hat{u}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\). Repeating the above proof again, we get that
It follows that
Hence, we have
By the assumption of \(\tilde{u}\in\operatorname{Fix}(T_{r}S)\), we have \(\tilde{u}=S\tilde{u}\) and \(\tilde{u}=T_{r} \tilde{u}\). This means that \(\tilde{u}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\), which implies \(\operatorname{Fix}( T_{r}S)\subseteq\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r})\). □
Lemma 3.4
Let C be a nonempty closed convex subset of a q-uniformly smooth Banach space E, and let \(A,B : C\rightarrow E\) be two nonlinear mappings. Let \(Q_{C}\) be a sunny nonexpansive retraction from E onto C. For all \(\lambda, \mu>0\) and \(a \in[0, 1]\), define a mapping
$$ Gx:= Q_{C}(I-\lambda A)\bigl(aI+(1-a)Q_{C}(I-\mu B)\bigr)x, \quad x\in C. $$
Then \((x^{*}, y^{*})\) is a solution of problem (1.7) if and only if \(x^{*} = Gx^{*}\), where \(y^{*}= Q_{C}(I-\mu B) x^{*}\).
Proof
First, we prove ‘⟹’.
Let \((x^{*}, y^{*})\) be a solution of (1.7). Then we have
From Lemma 2.9, we have
and \(y^{*}= Q_{C}(I-\mu B)x^{*}\).
It follows that
which implies that \(x^{*}\in\operatorname{Fix}(G)\), where \(y^{*}= Q_{C}(I-\mu B) x^{*}\).
Next we prove ‘⟸’.
Let \(x^{*}\in\operatorname{Fix}(G)\) and \(y^{*}= Q_{C}(I-\mu B) x^{*}\). Then
It follows from Lemma 2.9 that
Then we find that \((x^{*}, y^{*})\) is a solution of problem (1.7). □
Example 3.5
([11])
Let \(\mathbb{R}\) be the real line with the Euclidean norm, and let \(A,B: \mathbb{R}\rightarrow\mathbb{R}\) be defined by \(Ax=\frac{x-1}{4}\) and \(Bx =\frac{x-1}{2}\) for all \(x \in\mathbb {R}\). The mapping \(G : \mathbb{R}\rightarrow\mathbb{R}\) is defined by
for all \(x\in\mathbb{R}\). Then \(1\in\operatorname{Fix}(G)\) and \((1, 1)\) is a solution of problem (1.7).
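The claim of Example 3.5 is easy to verify numerically. The parameters \(\lambda=2\), \(\mu=1\), \(a=\frac{1}{2}\) below are our own illustrative choices (A is 4-inverse-strongly monotone and B is 2-inverse-strongly monotone, so any \(\lambda\in(0,8)\), \(\mu\in(0,4)\), \(a\in[0,1]\) behaves the same at \(x=1\)), and \(Q_{C}\) is the identity since \(C=\mathbb{R}\):

```python
# Illustrative parameters (our choice; see the lead-in for the admissible ranges)
lam, mu, a = 2.0, 1.0, 0.5

A = lambda x: (x - 1.0) / 4.0     # 4-inverse-strongly monotone on R
B = lambda x: (x - 1.0) / 2.0     # 2-inverse-strongly monotone on R
Q = lambda x: x                   # C = R, so the sunny retraction is the identity

def G(x):
    inner = a * x + (1.0 - a) * Q(x - mu * B(x))
    return Q(inner - lam * A(inner))

assert abs(G(1.0) - 1.0) < 1e-12  # 1 in Fix(G), i.e. (1, 1) solves problem (1.7)
```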
Theorem 3.6
Let E be a uniformly convex and q-uniformly smooth Banach space, and let C be a nonempty closed convex subset of E. Let \(N\geq1\) be some positive integer and let \(A_{i}: C\rightarrow E\) be \(\eta_{i}\)-inverse-strongly accretive with \(\eta =\min\{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\). Let M be m-accretive on E with \(\operatorname{Dom}(M)\subseteq C\), \(f: C\rightarrow C\) be r-contractive. Let \(A,B:C\rightarrow E\) be α- and β-inverse-strongly accretive, respectively. Define a mapping \(Gx:= Q_{C}(I-\lambda A)(aI+(1-a)Q_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\). Assume \(\lambda\in(0,(\frac{q\alpha}{C_{q} })^{\frac{1}{q-1}} )\), \(\mu\in(0,(\frac{q\beta}{C_{q} })^{\frac{1}{q-1}} )\) and \(\operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
where \(\{e_{n}\}_{1}^{\infty}\subset E\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{1}^{N}\subset(0,1)\) and \(\{r_{n}\}_{1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
(i) \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);
(ii) \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha _{n+1}-\alpha_{n}\vert <\infty\);
(iii) \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}<(\frac{q\eta}{C_{q}})^{\frac{1}{q-1}}\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);
(iv) \(\sum_{i=1}^{N} \lambda_{i}=1\).
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\), which solves the variational inequality
$$ \bigl\langle (I-f)x, j_{q}(x-z)\bigr\rangle \leq0, \quad \forall z\in\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M). $$
Moreover, \((x,y)\) solves problem (1.7), where \(y=Q_{C}(I-\mu B)x\).
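To illustrate the theorem, a viscosity-type companion scheme of the form \(y_{n+1}=\alpha_{n}f(y_{n})+(1-\alpha_{n})T_{n}Gy_{n}\) (matching the sequence \(\{y_{n}\}\) used in the proof, with \(T_{n}=J_{r_{n}}(I-r_{n}\sum_{i=1}^{N}\lambda_{i}A_{i})\)) can be run on a toy problem in \(\mathbb{R}\). All operators below are our own illustrative choices; \(M\equiv\{0\}\), so \(J_{r}=I\), and the common solution set is \(\{1\}\):

```python
# Toy Hilbert-case instance on R (all data chosen by us for illustration):
A1 = lambda x: x - 1.0                      # 1-inverse-strongly monotone
A2 = lambda x: 3.0 * (x - 1.0)              # 1/3-inverse-strongly monotone, eta = 1/3
V  = lambda x: 0.5 * A1(x) + 0.5 * A2(x)    # convex combination (Lemma 3.2)
f  = lambda x: 0.5 * x                      # 1/2-contraction

lam, mu, a = 2.0, 1.0, 0.5                  # within (0, 2*alpha) and (0, 2*beta)
A  = lambda x: (x - 1.0) / 4.0
B  = lambda x: (x - 1.0) / 2.0
def G(x):
    z = a * x + (1.0 - a) * (x - mu * B(x))
    return z - lam * A(z)                   # Q_C = I since C = R

y = 10.0
for n in range(1, 2001):
    alpha_n, r_n = 1.0 / (n + 1), 0.25      # r_n < 2*eta = 2/3
    Ty = G(y) - r_n * V(G(y))               # T_n G y with J_{r_n} = I
    y = alpha_n * f(y) + (1.0 - alpha_n) * Ty

assert abs(y - 1.0) < 1e-2                  # iterates approach the common solution 1
```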
Proof
Let \(\{y_{n}\}\) be a sequence generated by
$$ y_{n+1}=\alpha_{n}f(y_{n})+(1-\alpha_{n})T_{n}Gy_{n}, \quad n\geq1, $$
where \(T_{n}:=J_{r_{n}}(I-r_{n}\sum_{i=1}^{N}\lambda_{i}A_{i})\). Hence, to show the desired result, it suffices to prove that \(y_{n}\rightarrow x\). Indeed, by virtue of Lemma 2.8, Lemma 3.2, (iii), \(\lambda \in(0,(\frac{q\alpha}{C_{q} })^{\frac{1}{q-1}} )\) and \(\mu\in(0,(\frac{q\beta}{C_{q} })^{\frac{1}{q-1}} )\), we find that \(T_{n}:C\rightarrow C\) and \(G:C\rightarrow C\) are nonexpansive. Hence,
By virtue of Lemma 2.6 and (3.9), we see \(\lim_{n\rightarrow\infty} \Vert y_{n}-x_{n}\Vert =0\).
First, we prove that the sequence \(\{y_{n}\}\) is bounded.
Taking \(x\in \operatorname{Fix}(G)\cap\bigcap_{i=1}^{N}(A_{i}+M)^{-1}(0)\), we find \(x\in \operatorname{Fix}(G)\cap\operatorname{Fix}(T_{n})\) by Lemma 2.10 and Lemma 3.1. It follows from (3.8) and Lemma 3.3 that
By induction, we have
Hence, \(\{y_{n}\}\) is bounded, so are \(\{f(y_{n})\}\), \(\{T_{n}(y_{n})\}\) and \(\{ T_{n}G(y_{n})\}\).
Next, we prove that
Write \(V=\sum_{i=1}^{N}\lambda_{i}A_{i}\). Noticing Lemma 3.2, we get that the mapping V is η-inverse-strongly accretive. Putting \(z_{n}=T_{n}Gy_{n}\), we derive from Lemma 2.11 that
where \(M_{1}>\sup_{n\geq1}\{\frac{\Vert Gy_{n+1}-J_{r_{\beta _{n}}}(1-r_{\beta_{n}} V)Gy_{n+1}\Vert }{r_{\beta_{n}}}\}\), \(r_{\alpha_{n}}=\min\{ r_{n+1},r_{n}\}\) and \(r_{\beta_{n}}=\max\{ r_{n+1},r_{n}\}\).
Combining (3.8) and (3.11), we find that
where \(M_{2}>\sup_{n\geq1}\{\Vert f(y_{n})-z_{n}\Vert \}\). It follows from Lemma 2.6, (ii) and (iii) that \(\lim_{n\rightarrow\infty} \Vert y_{n+1}-y_{n}\Vert =0\).
Again, using Lemma 2.5, Lemma 2.11 and Lemma 3.3, we obtain
where \(M_{3}>\sup_{n\geq1}\{ \langle f(y_{n})-x, j_{q}(y_{n+1}-x)\rangle\}\). Meanwhile, by the fact that \(a^{r}-b^{r}\leq ra^{r-1}(a-b)\) for all \(r\geq1\), we find that
It follows immediately from (ii), (iii), (3.12) and the property of \(\phi_{q}\) that
which implies that
In view of condition (iii), there exists \(\varepsilon> 0\) such that \(r_{n}\geq\varepsilon\) for all \(n\geq1\). Then we get, by Lemma 2.11, that
We show \(\lim_{n\rightarrow\infty} \Vert T_{\varepsilon}Gy_{n}-y_{n}\Vert =0\).
Thanks to (3.10), (3.13), (3.14) and (ii), we see
Next we prove that
Equivalently (if \(\Vert y_{n}-x\Vert \neq0\)), we need to prove that
To this end, let \(x_{t}\) satisfy \(x_{t} = tf(x_{t}) +(1-t) T_{\varepsilon}G x_{t}\). By Xu’s Theorem 4.1 in [32], we get \(x_{t} \rightarrow x\in \operatorname{Fix}(T_{\varepsilon}G)=\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\) (by Lemma 2.10, Lemma 3.1 and Lemma 3.3) as \(t\rightarrow0\), where x solves the variational inequality
Using the subdifferential inequality, we deduce that
which implies that
Using (3.15) and taking the upper limit first as \(n\rightarrow\infty\) and then as \(t\rightarrow0\) in (3.16), we have
Since E is a uniformly smooth Banach space, the duality mapping j is norm-to-norm uniformly continuous on any bounded subset of E, which ensures that the limits \(\limsup_{t\rightarrow0}\) and \(\limsup_{n\rightarrow\infty}\) are interchangeable. Then we have
Finally, we show \(\Vert y_{n}-x\Vert \rightarrow0\).
By Lemma 3.3 and the fact that \(ab\leq\frac{1}{q}a^{q} +\frac{q-1}{q}b^{\frac{q}{q-1}}\), we get
which implies that
Applying Lemma 2.6 to (3.17), we conclude that \(y_{n} \rightarrow x\in \operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\) as \(n \rightarrow\infty\), which solves the variational inequality
And \((x, y)\) is a solution of the modified system of variational inequalities problem (1.7) due to Lemma 3.4, where \(y=Q_{C}(I-\mu B)x\). This completes the proof. □
Remark 3.7
Theorem 3.6 improves and extends Theorem 3.7 of López et al. [24] in the following sense:
- From the problem of finding a solution of a variational inclusion problem with two accretive operators to the problem of finding a common solution of a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities.
Remark 3.8
Theorem 3.6 improves and extends Theorem 2.1 of Zhang et al. [19], Theorem 3.1 of Qin et al. [26], Theorem 3.1 of Takahashi et al. [27] and Theorem 3.1 of Khuangsatung and Kangtunyakarn [25] in the following senses:
- From Hilbert spaces to uniformly convex and q-uniformly smooth Banach spaces.
- From finding a common element of the set of solutions for the variational inclusion problem with two accretive operators and the set of fixed points of nonexpansive mappings to finding a common solution to a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities.
As a direct consequence of Theorem 3.6, we obtain the following corollary.
Corollary 3.9
Let C be a nonempty, closed and convex subset of a Hilbert space H. Let \(N\geq1\) be some positive integer and let \(A_{i}: C\rightarrow H\) be \(\eta_{i}\)-inverse-strongly monotone with \(\eta=\min \{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\). Let M be maximal monotone in H with \(\operatorname{Dom}(M)\subseteq C\), \(f: C\rightarrow C\) be r-contractive. Let \(A,B:C\rightarrow H\) be α- and β-inverse-strongly monotone, respectively. Define a mapping \(Gx:= \operatorname{Proj}_{C}(I-\lambda A)(aI+(1-a)\operatorname{Proj}_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\), where \(\operatorname{Proj}_{C}\) is the metric projection from H onto C. Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, A_{i}, M)\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{1}^{N}\subset[0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
(i) \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);
(ii) \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha _{n+1}-\alpha_{n}\vert <\infty\);
(iii) \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}< 2\eta\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);
(iv) \(\sum_{i=1}^{N} \lambda_{i}=1\).
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, A_{i}, M)\), which solves the variational inequality
$$ \bigl\langle (I-f)x, x-z\bigr\rangle \leq0, \quad \forall z\in\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(H, A_{i}, M). $$
4 Applications
In this section, we give some applications of our main results in the framework of Hilbert spaces. Let C be a nonempty, closed and convex subset of a Hilbert space, and let \(f : C \times C \rightarrow\mathbb{R}\) be a bifunction satisfying the following conditions:
(A1) \(f(x, x) = 0\) for all \(x \in C\);
(A2) f is monotone, i.e., \(f (x, y) +f (y, x)\leq0\) for all \(x,y \in C\);
(A3) for all \(x, y, z \in C\),
$$ \limsup_{t\downarrow0}f\bigl(tz +(1 -t)x, y\bigr)\leq f (x, y); $$
(A4) for all \(x \in C\), \(f(x,\cdot)\) is convex and lower semi-continuous.
Then the mathematical model related to equilibrium problems (with respect to C) is to find \(\hat{x}\in C\) such that
$$ f(\hat{x}, y)\geq0 $$
for all \(y \in C\). The set of such solutions \(\hat{x }\) is denoted by \(\operatorname{EP}(f)\). The following lemma appears implicitly in Blum and Oettli [33].
Lemma 4.1
Let C be a nonempty, closed and convex subset of H and let \(f : C \times C \rightarrow\mathbb{R}\) be a bifunction satisfying (A1)-(A4). Let \(r >0\) and \(x\in H\). Then there exists \(z \in C\) such that
$$ f(z, y)+\frac{1}{r}\langle y-z, z-x\rangle\geq0 \quad \text{for all } y\in C. $$
The following lemma is given in Combettes and Hirstoaga [34].
Lemma 4.2
Assume that \(f : C \times C \rightarrow \mathbb{R}\) satisfies (A1)-(A4). For \(r >0\) and \(x \in H\), define a mapping \(S_{r}: H\rightarrow C\) as follows:
$$ S_{r}(x)=\biggl\{ z\in C : f(z, y)+\frac{1}{r}\langle y-z, z-x\rangle\geq0, \forall y\in C\biggr\} $$
for all \(x \in H\). Then the following hold:
(i) \(S_{r}\) is single-valued;
(ii) \(S_{r}\) is a firmly nonexpansive mapping, i.e., for all \(x,y \in H\), \(\Vert S_{r}x-S_{r}y\Vert ^{2}\leq\langle S_{r}x-S_{r}y, x-y\rangle\);
(iii) \(\operatorname{Fix}(S_{r}) = \operatorname{EP}(f)\);
(iv) \(\operatorname{EP}(f)\) is closed and convex.
We call such \(S_{r}\) the resolvent of f for \(r > 0\). Using Lemma 4.1 and Lemma 4.2, Takahashi et al. [27] proved the following result.
Lemma 4.3
Let H be a Hilbert space and let C be a nonempty, closed and convex subset of H. Let \(f : C\times C\rightarrow\mathbb{R}\) satisfy (A1)-(A4). Let \(A_{f}\) be a multivalued mapping of H into itself defined by
$$ A_{f}x= \textstyle\begin{cases} \{z\in H : f(x, y)\geq\langle y-x, z\rangle, \forall y\in C\}, & x\in C, \\ \emptyset, & x\notin C. \end{cases} $$
Then \(\operatorname{EP}(f) = A_{f}^{-1}0\) and \(A_{f}\) is a maximal monotone operator. Further, for any \(x\in H\) and \(r >0\), the resolvent \(S_{r}\) of f coincides with the resolvent of \(A_{f}\); i.e., \(S_{r}x = (I +rA_{f} )^{-1}x\).
Theorem 4.4
Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let \(f:C\times C\rightarrow\mathbb{R}\) be a bifunction satisfying (A1)-(A4) and let \(S_{\delta}\) be the resolvent of f for \(\delta> 0\). Let \(\psi: C\rightarrow C\) be r-contractive, \(A,B:C\rightarrow H\) be α- and β-inverse-strongly monotone, respectively. Define a mapping \(Gx:= \operatorname{Proj}_{C}(I-\lambda A)(aI+(1-a)\operatorname{Proj}_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\). Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G)\cap \operatorname{EP}(f)\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset(0, +\infty)\) satisfy the following conditions:
(i) \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert <\infty\);
(ii) \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha _{n+1}-\alpha_{n}\vert <\infty\);
(iii) \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\).
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G)\cap \operatorname{EP}(f)\), which solves the variational inequality
$$ \bigl\langle (I-\psi)x, x-z\bigr\rangle \leq0, \quad \forall z\in\operatorname{Fix}(G)\cap\operatorname{EP}(f). $$
Proof
Put \(A_{i}= 0\) for \(i=1,2,\ldots,N\) in Corollary 3.9. From Lemma 4.3, we know that \(J_{r_{n}} = S_{r_{n}}\) for all \(n \in\mathbb{N}\). So, we obtain the desired result by Corollary 3.9. □
Let \(g : H \rightarrow(-\infty, +\infty]\) be a proper convex lower semi-continuous function. Then the subdifferential ∂g of g is defined as follows:
$$\partial g(x)=\bigl\{z\in H : g(x)+\langle z, y-x\rangle\leq g(y), \forall y\in H\bigr\},\quad\forall x\in H.$$
From Rockafellar [35], we know that ∂g is maximal monotone. It is easy to verify that \(0\in\partial g(x)\) if and only if \(g(x) = \min_{ y\in H} g(y)\). Let \(I_{C}\) be the indicator function of C, i.e.,
$$I_{C}(x)= \textstyle\begin{cases} 0, & x\in C,\\ +\infty, & x\notin C. \end{cases}$$
Then \(I_{C}\) is a proper lower semi-continuous convex function on H, and its subdifferential \(\partial I_{C}\) is a maximal monotone operator. Furthermore, since C is a nonempty closed convex subset,
$$\partial I_{C}(v)=\bigl\{z\in H : \langle z, y-v\rangle\leq0, \forall y\in C\bigr\},\quad\forall v\in C.$$
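As a quick numerical illustration (not taken from the paper), the resolvent of \(r\,\partial I_{C}\) is the metric projection onto C, independently of \(r>0\). The sketch below checks this on the interval \(C=[0,1]\); the helper names `proj_C` and `resolvent_indicator` are ours.

```python
# Illustrative sanity check: for the indicator function I_C of C = [0, 1],
# the resolvent (I + r * dI_C)^{-1} reduces to the projection Proj_C for any r > 0.

def proj_C(x, lo=0.0, hi=1.0):
    """Metric projection onto the interval C = [lo, hi]."""
    return min(max(x, lo), hi)

def resolvent_indicator(x, r, lo=0.0, hi=1.0):
    """Resolvent of r * dI_C: argmin over y in C of (y - x)^2 / (2r).
    The minimizer is the projection, whatever r > 0 is."""
    return proj_C(x, lo, hi)

# Subdifferential characterization: u = Proj_C(x) iff <x - u, y - u> <= 0
# for all y in C (checked here on a grid of sample points).
for x in [-2.0, -0.3, 0.0, 0.4, 1.0, 2.5]:
    for r in [0.1, 1.0, 10.0]:
        u = resolvent_indicator(x, r)
        assert all((x - u) * (y - u) <= 1e-12
                   for y in [i / 100.0 for i in range(101)])
```

This is exactly the identification used in the proof of Theorem 4.5, where \(\operatorname{VI}(C,A_{i})\) is rewritten as an inclusion involving \(\partial I_{C}\).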
For more details, one can refer to [27].
Theorem 4.5
Let C be a nonempty, closed and convex subset of a Hilbert space H. Let \(N\geq1\) be some positive integer and let \(A_{i}: C\rightarrow H\) be \(\eta_{i}\)-inverse-strongly monotone with \(\eta=\min \{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\) for each \(i\in\{1,2,\ldots, N\}\). Let \(f: C\rightarrow C\) be r-contractive. Let \(A,B:C\rightarrow H\) be α- and β-inverse-strongly monotone, respectively. Define a mapping \(Gx:= \operatorname{Proj}_{C}(I-\lambda A)(aI+(1-a)\operatorname{Proj}_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\). Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(C, A_{i})\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{1}^{N}\subset[0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
(i) \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);
(ii) \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha_{n+1}-\alpha_{n}\vert <\infty\);
(iii) \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}< 2\eta\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);
(iv) \(\sum_{i=1}^{N} \lambda_{i}=1\).
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(C, A_{i})\), which solves the variational inequality
$$\bigl\langle(I-f)x, x-y\bigr\rangle\leq0,\quad\forall y\in\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N}\operatorname{VI}(C, A_{i}).$$
Proof
Put \(M= \partial I_{C}\). Next, we show that \(\operatorname{VI}(C,A_{i}) = \operatorname{VI}(H, A_{i}, \partial I_{C})\). Notice that
In view of Theorem 3.6, we find the desired result immediately. □
Let \(W : H\rightarrow\mathbb{R}\) be a convex and differentiable function and let \(M : H \rightarrow\mathbb{R}\) be a convex lower semi-continuous function. Consider the convex minimization problem \(\min_{x\in H}(Wx + Mx)\). From [35], we know that if ∇W is \(\frac{1}{L}\)-Lipschitz continuous, then it is L-inverse-strongly monotone. Hence, we have the following theorem.
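The Lipschitz-to-inverse-strong-monotonicity fact quoted above can be checked numerically on a simple example of our choosing: for the softplus function \(W(x)=\log(1+e^{x})\), the gradient is the sigmoid, which is \(\frac{1}{4}\)-Lipschitz, so by the statement above it should be 4-inverse-strongly monotone.

```python
import math

# Illustrative check (example is ours, not the paper's): W(x) = log(1 + e^x)
# has gradient sigma (the sigmoid), which is (1/4)-Lipschitz, hence should
# satisfy the 4-inverse-strong-monotonicity inequality
#   (sigma(x) - sigma(y)) * (x - y) >= 4 * (sigma(x) - sigma(y))^2.

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

pts = [-3.0, -1.2, -0.5, 0.0, 0.7, 1.5, 4.0]
for x in pts:
    for y in pts:
        d = sigmoid(x) - sigmoid(y)
        assert d * (x - y) >= 4.0 * d * d - 1e-12
```

The inequality holds because the sigmoid is monotone and \(|\sigma(x)-\sigma(y)|\leq\frac{1}{4}|x-y|\), which is precisely the Baillon-Haddad-type implication invoked above.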
Theorem 4.6
Let C be a nonempty, closed and convex subset of a Hilbert space H. Let \(N\geq1\) be some positive integer. For each \(i\in\{1,2,\ldots, N\}\), let \(W_{i}: H\rightarrow\mathbb{R}\) be a convex and differentiable function and let \(\nabla W_{i}\) be \(\frac{1}{L_{i}}\)-Lipschitz continuous with \(L=\min\{L_{1}, L_{2},\ldots, L_{N}\}\). Let M be a convex and lower semi-continuous function and let \(f: C\rightarrow C\) be r-contractive. Let \(A,B: H\rightarrow\mathbb{R}\) be convex and differentiable functions and let ∇A, ∇B be \(\frac{1}{\alpha}\)- and \(\frac{1}{\beta}\)-Lipschitz continuous, respectively. Define a mapping \(G'x:= \operatorname{Proj}_{C}(I-\lambda\nabla A)(aI+(1-a)\operatorname{Proj}_{C}(I-\mu\nabla B))x\) for all \(x\in C\) and \(a\in[0,1]\).
Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G')\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, \nabla W_{i}, \partial M)\neq\emptyset \). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{1}^{N}\subset[0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
(i) \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);
(ii) \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha_{n+1}-\alpha_{n}\vert <\infty\);
(iii) \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}< 2L\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);
(iv) \(\sum_{i=1}^{N} \lambda_{i}=1\).
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname{Fix}(G')\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, \nabla W_{i}, \partial M)\), which solves the variational inequality
$$\bigl\langle(I-f)x, x-y\bigr\rangle\leq0,\quad\forall y\in\operatorname{Fix}(G')\cap\bigcap_{i=1}^{N}\operatorname{VI}(H, \nabla W_{i}, \partial M).$$
Proof
Apply Theorem 3.6 with M replaced by ∂M, A by ∇A, B by ∇B and \(A_{i}\) by \(\nabla W_{i}\) for each \(i\in\{1,2,\ldots, N\}\). Then we get the desired conclusions immediately. □
5 Numerical examples
The purpose of this section is to give two numerical examples supporting Theorem 3.6.
Example 5.1
Let \(\mathbb{R}\) be the real line with the Euclidean norm. For all \(x \in\mathbb{R}\), let \(A,B, M, f: \mathbb{R}\rightarrow\mathbb{R}\) be defined by \(Ax=\frac{1}{3}x\), \(Bx =\frac{1}{6}x\), \(Mx =x\) and \(f(x)=\frac {1}{2}x\), respectively. For each \(i\in\{1,2,\ldots, N\}\), let \(A_{i}: \mathbb{R}\rightarrow\mathbb{R}\) be defined by \(A_{i}x=\frac{i}{6}x\) for all \(x\in\mathbb{R}\). Let \(a=\frac{1}{2}\), \(\lambda=2\), \(\mu =3\), \(\lambda_{i}=\frac{2}{3^{i}}+\frac{1}{N3^{N}}\) for each \(i\in\{ 1,2,\ldots, N\}\), and \(e_{n}=\frac{e_{1}}{n^{2}}\) (\(n=1,2,\ldots\)), where \(\vert e_{1}\vert <\infty\). Let the sequence \(\{x_{n}\}\) be generated iteratively by (3.7), where \(\alpha_{n}=\frac{1}{n}\) and \(r_{n}=\frac{1}{n+2}+\frac{1}{N}\). Then the sequence \(\{x_{n}\}\) converges strongly to 0.
Solution: It can be observed that all the assumptions of Theorem 3.6 are satisfied. It is also easy to check that \(\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(\mathbb{R}, A_{i}, M)=\{0\}\).
We rewrite (3.7) as follows:
Using algorithm (5.1) and choosing \(e_{1}=x_{1}=5\) with \(N=1\) and \(N=100\), the numerical results in Table 1 and Figure 1 demonstrate Theorem 3.6.
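Since the display of scheme (3.7) is not reproduced in this excerpt, the sketch below assumes a viscosity forward-backward form \(x_{n+1} = \alpha_{n} f(x_{n}) + (1-\alpha_{n})\, G J_{r_{n}}(x_{n} - r_{n}\sum_{i=1}^{N}\lambda_{i}A_{i}x_{n}) + e_{n}\), which is consistent with the data of Example 5.1 but should be treated as an illustration rather than the paper's exact update.

```python
# Hypothetical reconstruction of the iteration in Example 5.1 (the exact
# update (3.7) is assumed, not quoted): viscosity forward-backward step with
# resolvent J_r = (I + r*M)^{-1} and M = I, i.e. J_r t = t / (1 + r).

N = 1

def G(x, lam=2.0, mu=3.0, a=0.5):
    # G = (I - lam*A)(a*I + (1-a)(I - mu*B)) with A t = t/3, B t = t/6, C = R
    inner = a * x + (1 - a) * (x - mu * x / 6.0)
    return inner - lam * inner / 3.0

lam_i = [2.0 / 3 ** i + 1.0 / (N * 3 ** N) for i in range(1, N + 1)]
assert abs(sum(lam_i) - 1.0) < 1e-9      # condition (iv) of Theorem 4.5

x, e1 = 5.0, 5.0
for n in range(1, 201):
    a_n = 1.0 / n                        # alpha_n = 1/n
    r_n = 1.0 / (n + 2) + 1.0 / N        # r_n = 1/(n+2) + 1/N
    forward = x - r_n * sum(lam_i[i - 1] * (i / 6.0) * x
                            for i in range(1, N + 1))
    y = forward / (1.0 + r_n)            # resolvent step, M = I
    x = a_n * (x / 2.0) + (1 - a_n) * G(y) + e1 / n ** 2

assert abs(x) < 1e-2                     # iterates approach the solution 0
```

With these data one can verify by hand that \(Gx=\frac{x}{4}\), so \(\operatorname{Fix}(G)=\{0\}\), matching the claimed limit.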
Next, we present a numerical example in \(\mathbb{R}^{3}\) that also supports our result.
Example 5.2
Let the inner product \(\langle\cdot, \cdot\rangle: \mathbb{R}^{3} \times\mathbb{R}^{3} \rightarrow\mathbb{R}\) be defined by \(\langle \mathbf{x}, \mathbf{y}\rangle=\mathbf{x} \cdot\mathbf{y}= x_{1}\cdot y_{1}+x_{2}\cdot y_{2}+x_{3}\cdot y_{3}\) and the usual norm \(\Vert \cdot \Vert : \mathbb{R}^{3}\rightarrow\mathbb{R}\) be defined by \(\Vert \mathbf{x}\Vert =\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}\) for all \(\mathbf{x}=(x_{1}, x_{2}, x_{3})\), \(\mathbf{y}=(y_{1}, y_{2}, y_{3})\in \mathbb{R}^{3}\). Let \(A,B, M, f: \mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) be defined by \(A\mathbf{x}=\frac{1}{4}\mathbf{x}\), \(B\mathbf{x}=f\mathbf {x}=\frac{1}{6}\mathbf{x}\) and \(M\mathbf{x}=\mathbf{x}\) for all \(\mathbf{x}\in\mathbb{R}^{3}\), respectively. For each \(i\in\{ 1,2,\ldots, N\}\), let \(A_{i}: \mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) be defined by \(A_{i}\mathbf{x}=\frac{i}{6}\mathbf{x}\) for all \(\mathbf{x}\in\mathbb{R}^{3}\). Let \(a=\frac{1}{2}\), \(\lambda =3\), \(\mu=6\), \(\lambda_{i}=\frac{5}{6^{i}}+\frac{1}{N6^{N}}\) for each \(i\in\{1,2,\ldots, N\}\) and \(e_{n}=\frac{e_{1}}{n^{2}}\) (\(n = 1, 2, \ldots \)), where \(e_{1}\in\mathbb{R}^{3}\) and \(\Vert e_{1}\Vert <\infty \). Let the sequence \(\{\mathbf{x}_{n}\}\) be generated iteratively by (3.7), where \(\alpha_{n}=\frac{1}{n}\) and \(r_{n}=\frac{1}{n+2}+\frac{1}{N}\). Then the sequence \(\{\mathbf {x}_{n}\}\) converges strongly to 0.
Solution: It can be observed that all the assumptions of Theorem 3.6 are satisfied. It is also easy to check that \(\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(\mathbb{R}^{3}, A_{i}, M)=\{0\}\).
We rewrite (3.7) as follows:
Utilizing algorithm (5.2) and choosing \(\mathbf{x}_{1}=e_{1}=(1,6,12)\) with \(N=100\), we report the numerical results in Table 2. In addition, Figure 2 also demonstrates Theorem 3.6.
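Example 5.2 admits the same hedged sketch componentwise in \(\mathbb{R}^{3}\); again the viscosity forward-backward form of (3.7) is an assumption of ours, not a quotation from the paper.

```python
# Hypothetical reconstruction of the iteration in Example 5.2, componentwise
# in R^3: A v = v/4, B v = f v = v/6, M v = v, A_i v = (i/6) v,
# a = 1/2, lam = 3, mu = 6, N = 100.

N = 100
lam_i = [5.0 / 6 ** i + 1.0 / (N * 6.0 ** N) for i in range(1, N + 1)]
assert abs(sum(lam_i) - 1.0) < 1e-9      # condition (iv)
coef = sum(lam_i[i - 1] * i / 6.0 for i in range(1, N + 1))

def G(v, lam=3.0, mu=6.0, a=0.5):
    # G = (I - lam*A)(a*I + (1-a)(I - mu*B)) with A t = t/4, B t = t/6
    inner = [a * t + (1 - a) * (t - mu * t / 6.0) for t in v]
    return [t - lam * t / 4.0 for t in inner]

x = [1.0, 6.0, 12.0]                     # x_1 = e_1 = (1, 6, 12)
e1 = [1.0, 6.0, 12.0]
for n in range(1, 201):
    a_n = 1.0 / n
    r_n = 1.0 / (n + 2) + 1.0 / N
    y = [(t - r_n * coef * t) / (1.0 + r_n) for t in x]   # resolvent step
    x = [a_n * t / 6.0 + (1 - a_n) * g + c / n ** 2
         for t, g, c in zip(x, G(y), e1)]

norm = sum(t * t for t in x) ** 0.5
assert norm < 1e-2                       # iterates approach (0, 0, 0)
```

Here \(G\mathbf{v}=\frac{1}{8}\mathbf{v}\), so again \(\operatorname{Fix}(G)=\{\mathbf{0}\}\), in agreement with the reported limit.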
References
Stampacchia, G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413-4416 (1964)
Peng, J, Wang, Y, Shyu, D, Yao, JC: Common solutions of an iterative scheme for variational inclusions, equilibrium problems, and fixed point problems. J. Inequal. Appl. 2008, Article ID 720371 (2008)
Qin, X, Chang, SS, Cho, YJ, Kang, SM: Approximation of solutions to a system of variational inclusions in Banach spaces. J. Inequal. Appl. 2010, Article ID 916806 (2010)
Hao, Y: On variational inclusion and common fixed point problems in Hilbert spaces with applications. Appl. Math. Comput. 217, 3000-3010 (2010)
Yao, Y, Yao, J: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 186, 1551-1558 (2007)
Brézis, H: Équations et inéquations non linéaires dans les espaces vectoriels en dualité. Ann. Inst. Fourier (Grenoble) 18, 115-175 (1968)
Moreau, JJ: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 93, 273-299 (1965)
Ansari, QH, Yao, JC: Systems of generalized variational inequalities and their applications. Appl. Anal. 76, 203-217 (2000)
Ansari, QH, Schaible, S, Yao, JC: The system of generalized vector equilibrium problems with applications. J. Glob. Optim. 22, 3-16 (2002)
Yao, Y, Noor, MA, Noor, K, Liou, YC, Yaqoob, H: Modified extragradient methods for a system of variational inequalities in Banach spaces. Acta Appl. Math. 110, 1211-1224 (2010)
Kangtunyakarn, A: The modification of system of variational inequalities for fixed point theory in Banach spaces. Fixed Point Theory Appl. 2014, Article ID 123 (2014)
Cai, G, Bu, S: Modified extragradient methods for variational inequality problems and fixed point problems for an infinite family of nonexpansive mappings in Banach spaces. J. Glob. Optim. 55, 437-457 (2013)
Ceng, LC, Wang, C, Yao, JC: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375-390 (2008)
Verma, RU: On a new system of nonlinear variational inequalities and associated iterative algorithms. Math. Sci. Res. Hot-Line 3, 65-68 (1999)
Chang, SS: Existence and approximation of solutions for set-valued variational inclusions in Banach space. Nonlinear Anal. 47(1), 583-594 (2001)
Chang, SS: Set-valued variational inclusions in Banach spaces. J. Math. Anal. Appl. 248(2), 438-454 (2000)
Noor, MA: Generalized set-valued variational inclusions and resolvent equations. J. Math. Anal. Appl. 228(1), 206-220 (1998)
Hartman, P, Stampacchia, G: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271-310 (1966)
Zhang, SS, Lee, J, Chan, CK: Algorithms of common solutions for quasi-variational inclusion and fixed point problems. Appl. Math. Mech. 29, 571-581 (2008)
Manaka, H, Takahashi, W: Weak convergence theorems for maximal monotone operators with nonspreading mappings in a Hilbert space. CUBO 13, 11-24 (2011)
Kamimura, S, Takahashi, W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 106, 226-240 (2000)
Aoyama, K, Iiduka, H, Takahashi, W: Weak convergence of an iterative sequence for accretive operators in Banach spaces. Fixed Point Theory Appl. 2006, Article ID 35390 (2006)
Zegeye, H, Shahzad, N: Strong convergence theorems for a common zero of a finite family of m-accretive mappings. Nonlinear Anal. 66(5), 1161-1169 (2007)
López, G, Martín-Márquez, V, Wang, FH, Xu, HK: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236 (2012)
Khuangsatung, W, Kangtunyakarn, A: Algorithm of a new variational inclusion problem and strictly pseudononspreading mapping with application. Fixed Point Theory Appl. 2014, Article ID 209 (2014)
Qin, XL, Cho, SY, Wang, L: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, Article ID 75 (2014)
Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
Xu, HK: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127-1138 (1991)
Xu, HK: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2, 1-17 (2002)
Song, YL, Ceng, LC: A general iteration scheme for variational inequality problem and common fixed point problems of nonexpansive mappings in q-uniformly smooth Banach spaces. J. Glob. Optim. 57, 1327-1348 (2013)
Reich, S: Asymptotic behavior of contractions in Banach spaces. J. Math. Anal. Appl. 44, 57-70 (1973)
Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)
Rockafellar, R: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97-116 (1976)
Acknowledgements
This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Song, Y., Ceng, L. Strong convergence of a general iterative algorithm for a finite family of accretive operators in Banach spaces. Fixed Point Theory Appl 2015, 90 (2015). https://doi.org/10.1186/s13663-015-0335-0