Abstract
For variational inequality problems, the inertial projection and contraction method has been studied; however, it provides only a weak convergence result. In this paper, we propose a strongly convergent iterative method for finding a solution of a variational inequality problem with a monotone mapping, obtained by combining the projection and contraction method with an inertial hybrid algorithm. Our result can also be used to solve other related problems in Hilbert spaces.
1 Introduction
The variational inequality (VI) problem plays an important role in nonlinear analysis and optimization. It is a generalization of the nonlinear complementarity problem, and in recent years it has found considerable applications in many fields. The VI problem was introduced by Fichera [1, 2] for solving the Signorini problem. Later, it was studied by Stampacchia [3] for solving problems in mechanics.
Let H be a real Hilbert space with the inner product \(\langle \cdot ,\cdot \rangle \) and the norm \(\|\cdot \|\). Let C be a nonempty closed convex subset of H. The variational inequality problem is to find a point \(x^{*}\in C\) such that

$$ \bigl\langle Fx^{*}, x-x^{*}\bigr\rangle \geq 0, \quad \forall x\in C, \tag{1.1} $$

where F is a mapping of H into H. The solution set of VI (1.1) is denoted by \(VI(C,F)\).
Using properties of the metric projection, we can easily see that \(x^{*}\in VI(C,F)\) if and only if

$$ x^{*}=P_{C}\bigl(x^{*}-\lambda Fx^{*}\bigr) \quad \text{for all } \lambda >0. $$
Many scholars are devoted to the research of variational inequality problems, and several iterative methods have been proposed for solving VI (1.1). A simple iterative method [4] is

$$ x_{n+1}=P_{C}(x_{n}-\lambda Fx_{n}), \tag{1.2} $$

or, more generally,

$$ x_{n+1}=P_{C}(x_{n}-\lambda _{n}Fx_{n}). \tag{1.3} $$
The convergence of (1.2) and (1.3) depends on the properties of F. If F is strongly monotone and Lipschitz continuous, then (1.2) and (1.3) converge strongly under certain conditions on the parameters. If F is inverse strongly monotone, then (1.2) and (1.3) converge weakly under suitable conditions.
In 1976, Korpelevich [5] proposed the following so-called extragradient method for solving VI (1.1) when F is monotone and L-Lipschitz continuous in the finite-dimensional Euclidean space \(\mathbb{R}^{n}\):

$$ \begin{cases} y_{n}=P_{C}(x_{n}-\lambda Fx_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda Fy_{n}), \end{cases} \tag{1.4} $$

for each \(n\in \mathbb{N}\), where \(\lambda \in (0,\frac{1}{L})\). Under some suitable conditions, the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to the same point \(z\in VI(C,F)\). Recent variants of Korpelevich’s method can be found in [6].
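As an illustration, the two-projection structure of the extragradient method can be sketched in Python/NumPy; the operator, box constraint, and step size below are illustrative choices, not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection P_C onto the box C = [lo, hi]^m.
    return np.clip(x, lo, hi)

def extragradient(F, proj, x0, lam, n_iter=800):
    # Korpelevich's method:
    #   y_n     = P_C(x_n - lam * F(x_n))   (prediction step)
    #   x_{n+1} = P_C(x_n - lam * F(y_n))   (correction step)
    x = x0.astype(float)
    for _ in range(n_iter):
        y = proj(x - lam * F(x))
        x = proj(x - lam * F(y))
    return x

# F(x) = Ax with A skew-symmetric: monotone and 1-Lipschitz, but not
# strongly monotone, so the simple scheme (1.2) fails while this works.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda v: A @ v
x_star = extragradient(F, lambda v: project_box(v, -2.0, 5.0),
                       np.array([1.0, 1.0]), lam=0.2)
# x_star approximates the unique solution 0 of VI(C, F).
```

The prediction step alone (the simple scheme) would make the iterates rotate away from the solution for this skew operator; the correction step is what restores convergence.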
In 1997, He [7] proposed another method, the so-called projection and contraction method, to solve VIs with monotone mappings:

$$ \begin{cases} y_{n}=P_{C}(x_{n}-\lambda _{n}Fx_{n}), \\ x_{n+1}=x_{n}-\gamma \beta _{n}d(x_{n},y_{n}), \end{cases} \tag{1.5} $$

for each \(n\in \mathbb{N}\), where \(\gamma \in (0,2)\),

$$ d(x_{n},y_{n})=(x_{n}-y_{n})-\lambda _{n}(Fx_{n}-Fy_{n}), $$

and

$$ \beta _{n}=\frac{\langle x_{n}-y_{n}, d(x_{n},y_{n})\rangle }{ \Vert d(x_{n},y_{n}) \Vert ^{2}}. $$
This method converges under certain conditions on the parameters.
In 2017, Dong et al. [8] proposed the following so-called inertial projection and contraction method:

$$ \begin{cases} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ y_{n}=P_{C}(w_{n}-\lambda _{n}Fw_{n}), \\ x_{n+1}=w_{n}-\gamma \beta _{n}d(w_{n},y_{n}), \end{cases} \tag{1.6} $$

for each \(n\in \mathbb{N}\), where \(\gamma \in (0,2)\),

$$ d(w_{n},y_{n})=(w_{n}-y_{n})-\lambda _{n}(Fw_{n}-Fy_{n}), $$

and

$$ \beta _{n}=\frac{\langle w_{n}-y_{n}, d(w_{n},y_{n})\rangle }{ \Vert d(w_{n},y_{n}) \Vert ^{2}}. $$
They proved that the sequence \(\{x_{n}\}\) generated by (1.6) converges weakly to a point in \(VI(C,F)\) under certain conditions.
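A minimal NumPy sketch of one standard reading of the inertial projection and contraction update follows; the constant parameters (\(\alpha _{n}\equiv 0.1\), \(\lambda _{n}\equiv 0.2\), \(\gamma =1\)), the test operator, and the box are all illustrative assumptions.

```python
import numpy as np

def inertial_pc(F, proj, x0, x1, alpha=0.1, lam=0.2, gamma=1.0, n_iter=800):
    # One reading of the inertial projection and contraction scheme:
    #   w_n = x_n + alpha*(x_n - x_{n-1})          (inertial step)
    #   y_n = P_C(w_n - lam*F(w_n))
    #   d_n = (w_n - y_n) - lam*(F(w_n) - F(y_n))
    #   beta_n = <w_n - y_n, d_n> / ||d_n||^2
    #   x_{n+1} = w_n - gamma*beta_n*d_n           (contraction step)
    x_prev, x = x0.astype(float), x1.astype(float)
    for _ in range(n_iter):
        w = x + alpha * (x - x_prev)
        y = proj(w - lam * F(w))
        d = (w - y) - lam * (F(w) - F(y))
        dd = float(d @ d)
        if dd == 0.0:            # y_n = w_n: w_n already solves the VI
            return w
        beta = float((w - y) @ d) / dd
        x_prev, x = x, w - gamma * beta * d
    return x

# Monotone, 1-Lipschitz rotation operator; the unique solution of the VI is 0.
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
sol = inertial_pc(lambda v: B @ v, lambda v: np.clip(v, -2.0, 5.0),
                  np.array([2.0, 2.0]), np.array([1.0, 1.0]))
```

The inertial term extrapolates along the previous displacement before the projection step; with a small constant α the iteration still contracts toward the solution in this example.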
A weak convergence result, however, is often not strong enough for applications, and a strong convergence result is desirable. Very recently, Dong et al. [9] used the hybrid method to modify an inertial forward-backward algorithm for solving zero point problems in Hilbert spaces:
They proved that \(\{x_{n}\}\) converges strongly to \(P_{(A+B)^{-1}(0)}x _{0}\) under some suitable conditions.
Based on the work above, we propose an inertial hybrid method for finding a solution of a variational inequality problem with a monotone mapping. As applications, we use the proposed algorithm to solve other related problems in Hilbert spaces.
2 Preliminaries
In this section, we introduce some mathematical symbols, definitions, and lemmas which can be used in the proofs of our main results.
Throughout this paper, let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. Let H be a real Hilbert space with the inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\|\cdot \|\). For a sequence \(\{x_{n}\}\) in H, we write “\(x_{n}\rightharpoonup x\)” to indicate that \(\{x_{n}\}\) converges weakly to x and “\(x_{n}\rightarrow x\)” to indicate that \(\{x_{n}\}\) converges strongly to x. A point z is called a weak cluster point of \(\{x_{n}\}\) if there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) converging weakly to z. We write \(\omega _{w}(x_{n})\) for the set of all weak cluster points of \(\{x_{n}\}\). A fixed point of a mapping \(T:H\rightarrow H\) is a point \(x\in H\) such that \(Tx=x\), and we denote the set of all fixed points of T by \(\mathit{Fix}(T)\).
We introduce definitions of some operators we will use in the following sections.
Definition 2.1
Let \(T:H\rightarrow H\) be a nonlinear operator.
- (i) T is nonexpansive if

  $$ \Vert Tx-Ty \Vert \leq \Vert x-y \Vert , \quad \forall x,y\in H. $$

- (ii) T is firmly nonexpansive if

  $$ \langle Tx-Ty, x-y\rangle \geq \Vert Tx-Ty \Vert ^{2}, \quad \forall x,y\in H. $$

  We can easily show that a firmly nonexpansive mapping is always nonexpansive by using the Cauchy–Schwarz inequality.

- (iii) T is α-averaged, with \(0<\alpha <1\), if

  $$ T=(1-\alpha )I+\alpha S, $$

  where \(S:H\rightarrow H\) is nonexpansive. The term “averaged mapping” was introduced in the early paper by Baillon, Bruck, and Reich [13]. It is obvious that \(\mathit{Fix}(S)=\mathit{Fix}(T)\). We can easily show that a firmly nonexpansive mapping is \(\frac{1}{2}\)-averaged.

- (iv) T is L-Lipschitz continuous, with \(L\geq 0\), if

  $$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad \forall x,y\in H. $$

  We call T a contractive mapping when \(0\leq L<1\).
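For example, the assertion in (ii) that every firmly nonexpansive mapping is nonexpansive follows in one line from the Cauchy–Schwarz inequality:

$$ \Vert Tx-Ty \Vert ^{2}\leq \langle Tx-Ty, x-y\rangle \leq \Vert Tx-Ty \Vert \, \Vert x-y \Vert , $$

and dividing by \(\Vert Tx-Ty \Vert \) (when it is nonzero) gives \(\Vert Tx-Ty \Vert \leq \Vert x-y \Vert \).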
Definition 2.2
Let \(F:H\rightarrow H\) be a nonlinear mapping.
- (i) F is monotone if

  $$ \langle Fx-Fy, x-y\rangle \geq 0, \quad \forall x,y\in H. $$

- (ii) F is η-strongly monotone, with \(\eta >0\), if

  $$ \langle Fx-Fy, x-y\rangle \geq \eta \Vert x-y \Vert ^{2}, \quad \forall x,y\in H. $$

- (iii) F is v-inverse strongly monotone (v-ism), with \(v>0\), if

  $$ \langle Fx-Fy, x-y\rangle \geq v \Vert Fx-Fy \Vert ^{2}, \quad \forall x,y\in H. $$

  We can easily show that a v-ism mapping is \(\frac{1}{v}\)-Lipschitz continuous by using the Cauchy–Schwarz inequality.
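Indeed, the Lipschitz claim in (iii) follows from the Cauchy–Schwarz inequality:

$$ v \Vert Fx-Fy \Vert ^{2}\leq \langle Fx-Fy, x-y\rangle \leq \Vert Fx-Fy \Vert \, \Vert x-y \Vert , $$

so \(\Vert Fx-Fy \Vert \leq \frac{1}{v} \Vert x-y \Vert \) for all \(x,y\in H\).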
We introduce some definitions and propositions about projections.
Proposition 2.3
([4])
Let C be a nonempty closed convex subset of H. Then, for each \(x\in H\), there exists a unique point \(z\in C\) such that

$$ \Vert x-z \Vert =\min_{y\in C} \Vert x-y \Vert . $$
Definition 2.4
([4])
Let C be a nonempty closed convex subset of H. Define \(P_{C}x:=z\), where z is the unique nearest point of C to x given by Proposition 2.3. \(P_{C}\) is called the metric projection onto C. We can show that \(P_{C}\) is firmly nonexpansive.
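The firm nonexpansiveness of \(P_{C}\) can be checked numerically; the sketch below (illustrative, with C a Euclidean ball) samples random pairs and verifies the inequality of Definition 2.1(ii).

```python
import numpy as np

def project_ball(x, r=1.0):
    # Metric projection onto the closed ball C = {v : ||v|| <= r}:
    # identity inside the ball, radial scaling outside.
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

rng = np.random.default_rng(0)
firmly_nonexpansive = True
for _ in range(200):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = project_ball(x), project_ball(y)
    # Definition 2.1(ii): <Px - Py, x - y> >= ||Px - Py||^2.
    firmly_nonexpansive &= (
        np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12)
```

The small tolerance only absorbs floating-point roundoff; the inequality itself holds exactly for metric projections.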
Lemma 2.5
Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\), then \(z=P_{C}x\) if and only if there holds the inequality

$$ \langle x-z, z-y\rangle \geq 0, \quad \forall y\in C. $$
Lemma 2.6
Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\), then \(z=P_{C}x\) if and only if there holds the inequality

$$ \Vert x-z \Vert ^{2}\leq \Vert x-y \Vert ^{2}- \Vert y-z \Vert ^{2}, \quad \forall y\in C. $$
More properties of metric projections can be found in [12].
Next, we introduce some definitions and propositions about set-valued mappings.
Definition 2.7
([17])
Let H be a real Hilbert space, and let A be a set-valued mapping of H into \(2^{H}\). The effective domain of A, denoted by \(D(A)\), is defined by

$$ D(A)=\bigl\{ x\in H : Ax\neq \emptyset \bigr\} . $$

The graph of A is defined by

$$ G(A)=\bigl\{ (x,u)\in H\times H : u\in Ax\bigr\} . $$

A set-valued mapping A is called monotone if

$$ \langle x-y, u-v\rangle \geq 0 \quad \text{whenever } (x,u),(y,v)\in G(A). $$

A monotone mapping A is called maximal if its graph is not properly contained in the graph of any other monotone mapping.
In practice, the definition of a maximal monotone mapping is inconvenient to use directly, so the following characterization is usually used instead: a monotone mapping A is maximal if and only if, for \((x,u)\in H\times H\), \(\langle x-y, u-v\rangle \geq 0\) for each \((y,v)\in G(A)\) implies \((x,u)\in G(A)\). This characterization is just a reformulation of the definition of maximal monotone mappings.
Definition 2.8
Let \(A:H\rightarrow 2^{H}\) be a mapping and \(r>0\). The resolvent of A is

$$ J_{r}^{A}=(I+rA)^{-1}. $$
Lemma 2.9
Let \(A:H\rightarrow 2^{H}\) be a maximal monotone mapping and \(r>0\). Then \(J_{r}^{A}:H\rightarrow D(A)\) is firmly nonexpansive.
In particular, let C be a nonempty closed convex subset of a real Hilbert space H, and recall the normal cone [19] to C at \(x\in C\):

$$ N_{C}(x)=\bigl\{ z\in H : \langle z, y-x\rangle \leq 0, \forall y\in C\bigr\} . $$

We can easily show that \(N_{C}\) is a maximal monotone mapping and that its resolvent is \(P_{C}\). So the resolvent of a maximal monotone mapping can be regarded as a generalization of the metric projection operator.
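The identity \(J_{r}^{N_{C}}=P_{C}\) amounts to the statement \(x-P_{C}x\in N_{C}(P_{C}x)\), which can be checked numerically; C is taken to be a box below as an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
proj_box = lambda v: np.clip(v, 0.0, 1.0)   # P_C for C = [0, 1]^3

x = rng.normal(scale=3.0, size=3)
z = proj_box(x)
# x - z lies in the normal cone N_C(z): <x - z, y - z> <= 0 for all y in C,
# equivalently z solves x ∈ z + r*N_C(z) for every r > 0, i.e. z = J_r^{N_C}(x).
in_normal_cone = all(
    np.dot(x - z, y - z) <= 1e-12 for y in rng.uniform(size=(500, 3)))
```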
Lemma 2.10
([19])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let F be a monotone and Lipschitz continuous mapping of C into H. Define

$$ Tv= \begin{cases} Fv+N_{C}v, & v\in C, \\ \emptyset , & v\notin C. \end{cases} $$

Then T is maximal monotone, and \(0\in Tv\) if and only if \(v\in VI(C,F)\).
3 Main result
In this section, we propose a strong convergence algorithm for finding a solution of a variational inequality problem. The algorithm we propose is based on the work in Sect. 1.
Let H be a real Hilbert space. Let C be a nonempty closed convex subset of H. Let F be a mapping of H into H.
Algorithm 1
Choose \(x_{0}\), \(x_{1}\in H\) arbitrarily. Calculate the \((n+1)\)th iterate \(x_{n+1}\) via the formula
for each \(n\geq 1\), where \(\gamma \in (0,2)\), \(\lambda _{n}>0\), and
where
If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then calculate \(x_{n+1}\) and stop the iterative process; otherwise, set \(n:=n+1\) and return to (3.1) to calculate the next iterate \(x_{n+2}\).
Theorem 3.1
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(F:H\rightarrow H\) be a monotone and L-Lipschitz continuous mapping with \(L>0\). Assume that \(VI(C,F)\neq \emptyset \) and \(0<a\leq \lambda _{n}\leq b<\frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 1. If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then \(x_{n+1}\in VI(C,F)\).
Proof
From the expression of \(d(w_{n},y_{n})\) and the condition imposed on F, we have
On the other hand,
So we have
Hence \(y_{n}=w_{n}\) and \(d(w_{n},y_{n})=0\) are equivalent. Using Lemma 2.5, we can get the desired result. □
Theorem 3.2
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(F:H\rightarrow H\) be a monotone and L-Lipschitz continuous mapping with \(L>0\). Assume that \(VI(C,F)\neq \emptyset \) and \(0<a\leq \lambda _{n}\leq b<\frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 1. If \(y_{n}\neq w_{n}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{VI(C,F)}x_{1}\).
Proof
We divide the proof into four steps.
Step 1. We show that \(VI(C,F)\subset C_{n}\cap Q_{n}\) for each \(n\in \mathbb{N}\).
It is obvious that \(C_{n}\) and \(Q_{n}\) are half-spaces for each \(n\in \mathbb{N}\).
On the other hand,
Combining (3.3) and (3.4), we have
Let \(u\in VI(C,F)\). Then we have
By the definition of \(y_{n}\) and Lemma 2.5,
So we have
Combining (3.6) and (3.7), we get
By the expression of \(w_{n}\), we have
It follows from (3.8) and (3.9) that
Therefore, \(u\in C_{n}\) for each \(n\in \mathbb{N}\). Hence, \(VI(C,F) \subset C_{n}\) for each \(n\in \mathbb{N}\).
For \(n=1\), we have \(Q_{1}=H\) and hence \(VI(C,F)\subset C_{1}\cap Q _{1}\).
Suppose that \(x_{k}\) is given and \(VI(C,F)\subset C_{k}\cap Q_{k}\) for some \(k\in \mathbb{N}\). It follows from the definition of \(x_{k+1}\) and Lemma 2.5 that
It means that \(VI(C,F)\subset Q_{k+1}\). Hence, \(VI(C,F)\subset C_{k+1} \cap Q_{k+1}\).
By induction, we obtain \(VI(C,F)\subset C_{n}\cap Q_{n}\) for each \(n\in \mathbb{N}\).
Step 2. We show that \(\{x_{n}\}\) is bounded.
From
and Lemma 2.5, we have
and hence
Since \(VI(C,F)\subset Q_{n}\), we have
In particular, since \(x_{n+1}\in Q_{n}\), we obtain
Therefore, \(\lim_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert \) exists.
It means that \(\{x_{n}\}\) is bounded.
Step 3. We show that \(\omega _{w}(x_{n})\subset VI(C,F)\).
Since \(x_{n}=P_{Q_{n}}x_{1}\) and \(x_{n+1}\in Q_{n}\), by Lemma 2.6 we obtain
and hence
From
and that \(\{x_{n}\}\) is bounded, we have
Since \(x_{n+1}\in C_{n}\), we have
and hence
Combining (3.14), (3.15), and (3.16), we obtain
From (3.1), (3.2), (3.5), and (3.17), we have
Since \(\{x_{n}\}\) is bounded, we can take a suitable subsequence \(\{x_{n_{i}}\}\) such that \(x_{n_{i}}\rightharpoonup z\). So we have \(w_{n_{i}}\rightharpoonup z\) and \(y_{n_{i}}\rightharpoonup z\). Let
Then from Lemma 2.10, we know that T is maximal monotone and \(0\in Tv\) if and only if \(v\in VI(C,F)\). For each \((v,w)\in G(T)\), we have
and hence
By the definition of \(N_{C}\), we obtain
On the other hand, from \(v\in C\) and the expression of \(y_{n}\), we have
and hence
Therefore, from (3.19) and (3.20), we obtain
As \(i\rightarrow \infty \), we have
Since T is maximal monotone, we have \(0\in Tz\) and hence \(z\in VI(C,F)\). So we obtain \(\omega _{w}(x_{n})\subset VI(C,F)\).
Step 4. We show that \(x_{n}\rightarrow x^{*}\) as \(n\rightarrow \infty \).
Since the norm is convex and weakly lower semicontinuous and \(z\in VI(C,F)\), it follows from (3.11) that
So we have
From \(x^{*}=P_{VI(C,F)}x_{1}\), we obtain \(z=x^{*}\), i.e., \(\omega _{w}(x _{n})=\{x^{*}\}\). So we have
and
Hence \(x_{n}-x_{1}\rightharpoonup x^{*}-x_{1}\). Since H satisfies the Kadec–Klee property, we obtain \(x_{n}-x_{1}\rightarrow x^{*}-x_{1}\), i.e., \(x_{n}\rightarrow x^{*}\). □
Remark 3.3
If we set \(\alpha _{n}=0\) for each \(n\in \mathbb{N}\), we can get the following algorithm:
4 Applications
In this section, we introduce some applications which are useful in nonlinear analysis and optimization problems in Hilbert spaces.
4.1 Constrained convex minimization problem
Let C be a nonempty closed convex subset of a real Hilbert space H. The constrained convex minimization problem [14] is to find a point \(x^{*}\in C\) such that

$$ f\bigl(x^{*}\bigr)=\min_{x\in C}f(x), \tag{4.1} $$

where f is a real-valued convex function. We denote the solution set of problem (4.1) by Ω.
We need the following lemma.
Lemma 4.1
Let H be a real Hilbert space, and let C be a nonempty closed convex subset of H. Let f be a convex function of H into \(\mathbb{R}\). If f is differentiable, then \(z\in \varOmega \) if and only if \(z\in VI(C,\nabla f)\).
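Lemma 4.1 can be illustrated numerically: minimizing a simple quadratic over a box by projected gradient descent produces a point satisfying the fixed-point characterization of \(VI(C,\nabla f)\). All concrete choices below (the function, set, and step size) are illustrative.

```python
import numpy as np

# f(x) = 0.5*||x - p||^2 over C = [0, 1]^2; the minimizer is P_C(p).
p = np.array([2.0, -0.5])
grad_f = lambda x: x - p                     # grad f is 1-Lipschitz
proj_C = lambda x: np.clip(x, 0.0, 1.0)

x = np.zeros(2)
for _ in range(100):
    x = proj_C(x - 0.5 * grad_f(x))          # projected gradient step

# By Lemma 4.1 together with the projection characterization, the minimizer z
# satisfies z = P_C(z - lam*grad_f(z)) for any lam > 0.
residual = np.linalg.norm(x - proj_C(x - 0.5 * grad_f(x)))
```

Here the limit is \(P_{C}(p)=(1,0)\) and the fixed-point residual vanishes, as Lemma 4.1 predicts.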
Let H be a real Hilbert space. Let C be a nonempty closed convex subset of H. Let f be a real-valued convex function of H. Assume that f is differentiable.
Algorithm 2
Choose \(x_{0}\), \(x_{1}\in H\) arbitrarily. Calculate the \((n+1)\)th iterate \(x_{n+1}\) via the formula
for each \(n\geq 1\), where \(\gamma \in (0,2)\), \(\lambda _{n}>0\) and
where
If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then calculate \(x_{n+1}\) and stop the iterative process; otherwise, set \(n:=n+1\) and return to (4.2) to calculate the next iterate \(x_{n+2}\).
Theorem 4.2
Let C be a nonempty closed convex subset of a real Hilbert space H. Let f be a real-valued convex function of H. Assume that f is differentiable and that ∇f is L-Lipschitz continuous with \(L>0\). Assume that \(\varOmega \neq \emptyset \) and \(0<a\leq \lambda _{n}\leq b<\frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then \(x_{n+1}\in \varOmega \).
Proof
Since f is convex, we conclude that ∇f is monotone. Putting \(F=\nabla f\) in Theorem 3.1, we get the desired result by Lemma 4.1. □
Theorem 4.3
Let C be a nonempty closed convex subset of a real Hilbert space H. Let f be a real-valued convex function of H. Assume that f is differentiable and that ∇f is L-Lipschitz continuous with \(L>0\). Assume that \(\varOmega \neq \emptyset \) and \(0<a\leq \lambda _{n}\leq b<\frac{1}{L}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. If \(y_{n}\neq w_{n}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\varOmega }x_{1}\).
Proof
Since f is convex, we conclude that ∇f is monotone. Putting \(F=\nabla f\) in Theorem 3.2, we get the desired result by Lemma 4.1. □
4.2 Split feasibility problem
Next, we consider the split feasibility problem.
The split feasibility problem (SFP) was proposed by Censor and Elfving [21] in 1994. The SFP is to find a point \(x^{*}\) such that

$$ x^{*}\in C \quad \text{and} \quad Ax^{*}\in Q, \tag{4.3} $$

where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and A is a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\).
In 2004, Byrne [22] proposed the following so-called CQ algorithm for solving (4.3):

$$ x_{n+1}=P_{C}\bigl(x_{n}-\lambda A^{*}(I-P_{Q})Ax_{n}\bigr), \tag{4.4} $$

where \(\lambda \in (0,\frac{2}{ \Vert A \Vert ^{2}})\).
In this section, we introduce a new algorithm to solve (4.3). We need the following lemmas.
Lemma 4.4
([20])
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C and Q be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\). Assume that \(C\cap A^{-1}Q\) is nonempty. Let \(\lambda \geq 0\). Then \(z\in C\cap A^{-1}Q\) if and only if \(z\in VI(C,A^{*}(I-P_{Q})A)\), where \(A^{*}\) is the adjoint operator of A.
Lemma 4.5
([20])
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) such that \(A\neq 0\). Let Q be a nonempty closed convex subset of \(H_{2}\). Then \(A^{*}(I-P_{Q})A\) is monotone and \(\|A\|^{2}\)-Lipschitz continuous.
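Lemmas 4.4 and 4.5 make the SFP amenable to projection methods. A minimal sketch solving a tiny SFP with the simple scheme \(x_{n+1}=P_{C}(x_{n}-\lambda A^{*}(I-P_{Q})Ax_{n})\) follows; the matrix, sets, and starting point are illustrative assumptions.

```python
import numpy as np

# SFP data: find x in C = [0,1]^2 with Ax in Q = [0,1]^2.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
proj_C = lambda v: np.clip(v, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.0, 1.0)

def F(x):
    # F = A^*(I - P_Q)A, monotone and ||A||^2-Lipschitz (Lemma 4.5).
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax))

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant ||A||^2 = 4
x = np.array([1.0, 1.0])                 # infeasible start: Ax = (1, 2)
lam = 0.3 / L
for _ in range(300):
    x = proj_C(x - lam * F(x))

# At a solution x in C, Ax already lies in Q, so this residual vanishes.
residual = np.linalg.norm(A @ x - proj_Q(A @ x))
```

Here the iterates converge to the feasible point \((1, 0.5)\), i.e. the first coordinate is already feasible and the second is driven down until \(Ax\in Q\).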
We propose the following algorithm for solving SFP (4.3).
Algorithm 3
Choose \(x_{0}\), \(x_{1}\in H_{1}\) arbitrarily. Calculate the \((n+1)\)th iterate \(x_{n+1}\) via the formula
for each \(n\geq 1\), where \(\gamma \in (0,2)\), \(\lambda _{n}>0\), and
where
If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then calculate \(x_{n+1}\) and stop the iterative process; otherwise, set \(n:=n+1\) and return to (4.5) to calculate the next iterate \(x_{n+2}\).
Theorem 4.6
Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\). Set \(\varGamma =C\cap A^{-1}Q\). Assume that \(\varGamma \neq \emptyset \) and \(0<a\leq \lambda _{n}\leq b<\frac{1}{ \Vert A \Vert ^{2}}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3. If \(y_{n}=w_{n}\) or \(d(w_{n},y_{n})=0\), then \(x_{n+1}\in \varGamma \).
Proof
Putting \(F=A^{*}(I-P_{Q})A\) in Theorem 3.1, we get the desired result by Lemmas 4.4 and 4.5. □
Theorem 4.7
Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let A be a bounded linear operator of \(H_{1}\) into \(H_{2}\) with \(A\neq 0\). Set \(\varGamma =C\cap A^{-1}Q\). Assume that \(\varGamma \neq \emptyset \) and \(0<a\leq \lambda _{n}\leq b<\frac{1}{ \Vert A \Vert ^{2}}\). Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3. If \(y_{n}\neq w_{n}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\varGamma }x_{1}\).
Proof
Putting \(F=A^{*}(I-P_{Q})A\) in Theorem 3.2, we get the desired result by Lemmas 4.4 and 4.5. □
5 Numerical experiments
In this section, we give some numerical results to illustrate the effectiveness of the iterative scheme in Sect. 3 and compare it with the extragradient method [5] and iterative scheme (1.2). All the programs are written in Matlab 7.10 and run on a desktop PC with an Intel® Core™ i5-2450M CPU @ 2.50 GHz and 4.00 GB RAM. All the projections onto C and \(C_{n}\cap Q_{n}\) are computed by the function quadprog in the Matlab 7.10 Optimization Toolbox.
Example 1
Let \(H=\mathbb{R}\) and \(C=[-2,5]\). Let F be a function given by
for each \(x\in \mathbb{R}\). For all \(x,y\in H\), we have
Therefore, F is monotone and 2-Lipschitz continuous.
Choose \(x_{0}=2\), \(\lambda _{n}=\lambda \), \(\alpha _{n}=2\), and \(\gamma =1\) for our iterative scheme (3.1). It is easy to find that \(VI(C,F)=\{0\}\). We denote \(x^{*}=0\) and use \(\|x_{n}-x^{*}\|\leq 10^{-5}\) for stopping criterion. The numerical results for this example are described in Table 1.
Example 2
Let \(H=\mathbb{R}^{m}\). We consider a classical problem [23, 24]. The feasible set is \(C=\mathbb{R}^{m}\) and \(F:\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\) is a linear operator in the form
for each \(x\in \mathbb{R}^{m}\), where \(A=(a_{i,j})_{1\leq i,j\leq m}\) is a matrix in \(\mathbb{R}^{m\times m}\) whose terms are given by
Then F is monotone and \(\|A\|\)-Lipschitz continuous. This is a classical example of a problem where the usual gradient method does not converge. We can easily see that \(VI(C,F)=F^{-1}(0)\) and the zero vector is the unique element in \(VI(C,F)\). We denote \(x^{*}=(0,0,\ldots ,0)^{T}\).
Choose \(x_{1}=(1,1,\ldots ,1)^{T}\) and \(\lambda _{n}=\lambda =0.2/\|A \|\) in each iterative scheme. Take \(x_{0}=(2,2,\ldots ,2)^{T}\), \(\alpha _{n}=2\), and \(\gamma =1\) in our iterative scheme (3.1). We show the numerical results for the cases \(m=10,20,30,40\) respectively in Fig. 1, Fig. 2, Fig. 3, and Fig. 4.
6 Conclusion
For variational inequality problems, algorithms (1.2) and (1.3) have been studied. Since the conditions these schemes impose on the operator are sometimes too restrictive, He proposed the projection and contraction algorithm. In 2017, Dong et al. proposed the inertial projection and contraction algorithm, which originates from second-order dynamical systems. Recently, Dong et al. proposed a strongly convergent method for solving zero point problems by using the hybrid method. Motivated by their work, we propose an inertial hybrid algorithm for solving variational inequality problems in Hilbert spaces and obtain strong convergence theorems.
References
Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 34, 138–142 (1963)
Fichera, G.: Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem., Cl. Sci. Fis. Mat. Nat., Sez. 7, 91–140 (1964)
Stampacchia, G.: Formes bilineaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 258, 4413–4416 (1964)
Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)
Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747–756 (1976)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9), 1119–1132 (2012)
He, B.S.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 35, 69–76 (1997)
Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, T.M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70, 687–704 (2018)
Dong, Q.L., Jiang, D., Cholamjiak, P., Shehu, Y.: A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 19(4), 3097–3118 (2017)
Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Tian, M., Jiang, B.N.: Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space. J. Inequal. Appl. 2017, 123 (2017)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Baillon, J.-B., Bruck, R.E., Reich, S.: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 4, 1–9 (1978)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2011)
Takahashi, W., Toyoda, M.: Weak convergence theorem for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417–428 (2003)
Kopecká, E., Reich, S.: A note on alternating projections in Hilbert space. J. Fixed Point Theory Appl. 12, 41–47 (2012)
Takahashi, W., Xu, H.K., Yao, J.C.: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23(2), 205–221 (2015)
Tian, M., Jiao, S.W., Liou, Y.C.: Methods for solving constrained convex minimization problems and finding zeros of the sum of two operators in Hilbert spaces. J. Inequal. Appl. 2015, 227 (2015)
Takahashi, W., Nadezhkina, N.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)
Tian, M., Jiang, B.N.: Weak convergence theorem for variational inequality problems with monotone mapping in Hilbert space. J. Inequal. Appl. 2016, 286 (2016)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Malitsky, Yu.V.: Projected reflected gradient methods for variational inequalities. SIAM J. Optim. 25(1), 502–520 (2015)
Yang, J., Liu, H.: A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 179, 197–211 (2018)
Funding
This work was supported by Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China [grant number 2019 ASP-TJ02].
Cite this article
Tian, M., Jiang, BN. Inertial hybrid algorithm for variational inequality problems in Hilbert spaces. J Inequal Appl 2020, 12 (2020). https://doi.org/10.1186/s13660-020-2286-1