Abstract
In this paper, we introduce and analyze a multi-step hybrid extragradient algorithm by combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann’s iteration method and the gradient-projection method (GPM) with regularization in the setting of infinite-dimensional Hilbert spaces. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a solution of the convex minimization problem (CMP) with constraints of several problems: finitely many generalized mixed equilibrium problems (GMEPs), finitely many variational inclusions, and the fixed point problem of a strictly pseudocontractive mapping. In the meantime, we also prove the strong convergence of the proposed algorithm to the unique solution of a hierarchical variational inequality problem (over the fixed point set of a strictly pseudocontractive mapping) with constraints of finitely many GMEPs, finitely many variational inclusions and the CMP. The results presented in this paper improve and extend the corresponding results announced by many others.
MSC:49J30, 47H09, 47J20, 49M05.
1 Introduction
Let C be a nonempty closed convex subset of a real Hilbert space H and let $P_C$ be the metric projection of H onto C. Let $S:C\to C$ be a nonlinear mapping on C. We denote by $\operatorname{Fix}(S)$ the set of fixed points of S and by $\mathbf{R}$ the set of all real numbers. A mapping $S:C\to C$ is called L-Lipschitz-continuous (or L-Lipschitzian) if there exists a constant $L\ge 0$ such that
$$\|Sx-Sy\|\le L\|x-y\|,\quad \forall x,y\in C.$$
In particular, if $L=1$ then S is called a nonexpansive mapping; if $L\in[0,1)$ then S is called a contraction.
A mapping $A:C\to H$ is said to be ζ-inverse-strongly monotone if there exists $\zeta>0$ such that
$$\langle Ax-Ay,\,x-y\rangle \ge \zeta\|Ax-Ay\|^2,\quad \forall x,y\in C.$$
It is clear that every inverse-strongly monotone mapping is a monotone and Lipschitz-continuous mapping. A mapping $T:C\to C$ is said to be ξ-strictly pseudocontractive if there exists $\xi\in[0,1)$ such that
$$\|Tx-Ty\|^2 \le \|x-y\|^2 + \xi\|(I-T)x-(I-T)y\|^2,\quad \forall x,y\in C.$$
In this case, we also say that T is a ξ-strict pseudocontraction. In particular, whenever $\xi=0$, T becomes a nonexpansive mapping from C into itself.
Let $A:C\to H$ be a nonlinear mapping on C. We consider the following variational inequality problem (VIP): find a point $x\in C$ such that
$$\langle Ax,\,y-x\rangle \ge 0,\quad \forall y\in C. \tag{1.1}$$
The solution set of VIP (1.1) is denoted by $\operatorname{VI}(C,A)$.
The VIP (1.1) was first discussed by Lions [1]. There are many applications of VIP (1.1) in various fields; see, e.g., [2–5]. It is well known that if A is a strongly monotone and Lipschitz-continuous mapping on C, then VIP (1.1) has a unique solution. In 1976, Korpelevich [6] proposed an iterative algorithm for solving the VIP (1.1) in Euclidean space $\mathbf{R}^n$:
$$y_n = P_C(x_n-\tau Ax_n),\qquad x_{n+1} = P_C(x_n-\tau Ay_n),\quad \forall n\ge 0,$$
with $\tau>0$ a given number, which is known as the extragradient method. The literature on the VIP is vast, and Korpelevich’s extragradient method has received great attention from many authors, who have improved it in various ways; see, e.g., [7–19] and the references therein.
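Korpelevich's two-projection scheme is easy to sketch numerically. The following Python fragment is purely illustrative: the affine operator, the box constraint, and the step size are our own assumptions, chosen so that A is monotone and Lipschitz but not strongly monotone (precisely the case in which the one-projection method can fail).

```python
import numpy as np

def extragradient(A, proj, x0, tau, iters):
    """Korpelevich's extragradient method for the VIP:
    find x in C with <A(x), y - x> >= 0 for all y in C.
    Requires A monotone and L-Lipschitz with tau < 1/L."""
    x = x0
    for _ in range(iters):
        y = proj(x - tau * A(x))   # predictor: one projected step
        x = proj(x - tau * A(y))   # corrector: re-evaluate A at the predictor
    return x

# Illustrative VIP: A(x) = M x + q with M skew-symmetric (monotone with
# L = 1, but not strongly monotone), C = the box [-2, 2]^2.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q
proj = lambda x: np.clip(x, -2.0, 2.0)

x = extragradient(A, proj, np.array([2.0, 2.0]), tau=0.5, iters=5000)
# The unique solution solves M x = -q in the interior: x* = (-1, -1).
```

The second evaluation of A at the predictor point y is what restores convergence for merely monotone operators.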
Consider the following constrained convex minimization problem (CMP):
$$\min_{x\in C} f(x), \tag{1.2}$$
where $f:C\to\mathbf{R}$ is a real-valued convex functional. We denote by Γ the solution set of the CMP (1.2). If f is Fréchet differentiable, then the gradient-projection method (GPM) generates a sequence $\{x_n\}$ via the recursive formula
$$x_{n+1} = P_C(x_n-\lambda\nabla f(x_n)),\quad \forall n\ge 0, \tag{1.3}$$
or, more generally,
$$x_{n+1} = P_C(x_n-\lambda_n\nabla f(x_n)),\quad \forall n\ge 0, \tag{1.4}$$
where in both (1.3) and (1.4) the initial guess $x_0$ is taken from C arbitrarily, the parameters, λ or $\lambda_n$, are positive real numbers, and $P_C$ is the metric projection from H onto C. The convergence of the algorithms (1.3) and (1.4) depends on the behavior of the gradient ∇f. As a matter of fact, it is well known that if ∇f is η-strongly monotone and L-Lipschitzian, namely, there are constants $\eta,L>0$ satisfying
$$\langle \nabla f(x)-\nabla f(y),\,x-y\rangle \ge \eta\|x-y\|^2 \quad\text{and}\quad \|\nabla f(x)-\nabla f(y)\| \le L\|x-y\|,\quad \forall x,y\in C,$$
then, for $0<\lambda<2\eta/L^2$, the operator $P_C(I-\lambda\nabla f)$ is a contraction; hence, the sequence $\{x_n\}$ defined by (1.3) converges in norm to the unique solution of the CMP (1.2). More generally, if the sequence $\{\lambda_n\}$ is chosen to satisfy $0<\liminf_{n\to\infty}\lambda_n\le\limsup_{n\to\infty}\lambda_n<2\eta/L^2$, then the sequence $\{x_n\}$ defined by (1.4) converges in norm to the unique minimizer of the CMP (1.2). However, if the gradient ∇f fails to be strongly monotone, the operator $T=P_C(I-\lambda\nabla f)$ may fail to be contractive; consequently, the sequence $\{x_n\}$ generated by (1.3) may fail to converge strongly (see Section 4 in Xu [20]).
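The contraction argument above can be observed directly. In this sketch the objective, the box constraint, and the step size are illustrative assumptions with η = L = 1, so any λ in (0, 2) works.

```python
import numpy as np

def gradient_projection(grad_f, proj, x0, lam, iters):
    """Gradient-projection method (1.3): x_{n+1} = P_C(x_n - lam*grad_f(x_n)).
    When grad_f is eta-strongly monotone and L-Lipschitzian and
    0 < lam < 2*eta/L**2, each update is a contraction."""
    x = x0
    for _ in range(iters):
        x = proj(x - lam * grad_f(x))
    return x

# Illustrative CMP: f(x) = 0.5*||x - c||^2 over C = [0, 1]^2, so
# grad_f(x) = x - c and the unique minimizer is P_C(c).
c = np.array([3.0, -0.5])
grad_f = lambda x: x - c
proj = lambda x: np.clip(x, 0.0, 1.0)

x = gradient_projection(grad_f, proj, np.zeros(2), lam=0.5, iters=100)
# Minimizer: P_C(c) = (1.0, 0.0).
```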
Theorem 1.1 (see [[20], Theorem 5.2])
Assume the CMP (1.2) is consistent and let Γ denote its solution set. Assume the gradient ∇f satisfies the Lipschitz condition with constant $L>0$. Let $h:C\to C$ be a ρ-contraction with coefficient $\rho\in[0,1)$. Let a sequence $\{x_n\}$ be generated by the following hybrid gradient-projection algorithm (GPA):
$$x_{n+1} = \theta_n h(x_n)+(1-\theta_n)P_C(x_n-\lambda_n\nabla f(x_n)),\quad \forall n\ge 0. \tag{1.5}$$
Assume the sequence $\{\lambda_n\}$ satisfies the condition $0<\lambda_n<2/L$, and, in addition, the following conditions are satisfied for $\{\lambda_n\}$ and $\{\theta_n\}\subset(0,1)$: (i) $\theta_n\to 0$, (ii) $\sum_{n=0}^\infty\theta_n=\infty$, (iii) $\sum_{n=0}^\infty|\theta_{n+1}-\theta_n|<\infty$, and (iv) $\sum_{n=0}^\infty|\lambda_{n+1}-\lambda_n|<\infty$. Then the sequence $\{x_n\}$ converges in norm to a minimizer $x^*$ of CMP (1.2), which is also the unique solution to the VIP
$$\langle (I-h)x^*,\,x-x^*\rangle \ge 0,\quad \forall x\in\Gamma. \tag{1.6}$$
In other words, $x^*$ is the unique fixed point of the contraction $P_\Gamma h$: $x^*=P_\Gamma h(x^*)$.
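The hybrid GPA can be sketched in a few lines. The toy objective below has a whole line of minimizers, which makes the selection effect visible: with the contraction h ≡ 0, the viscosity term steers the iterates to the minimum-norm minimizer. All concrete choices (objective, step size, θ_n schedule) are our own assumptions.

```python
import numpy as np

def hybrid_gpa(h, grad_f, proj, x0, lam, thetas):
    """Hybrid gradient-projection algorithm:
    x_{n+1} = theta_n*h(x_n) + (1 - theta_n)*P_C(x_n - lam*grad_f(x_n)),
    with theta_n -> 0 and sum(theta_n) divergent."""
    x = x0
    for t in thetas:
        x = t * h(x) + (1.0 - t) * proj(x - lam * grad_f(x))
    return x

# Illustrative CMP with non-unique minimizers: f(x) = 0.5*(x1 + x2 - 1)^2
# on C = R^2; the solution set is the whole line x1 + x2 = 1.
grad_f = lambda x: (x[0] + x[1] - 1.0) * np.ones(2)   # Lipschitz, L = 2
proj = lambda x: x                                    # C = R^2
h = lambda x: np.zeros(2)                             # contraction, rho = 0

thetas = 1.0 / np.arange(2, 100_002)                  # theta_n -> 0, non-summable
x = hybrid_gpa(h, grad_f, proj, np.array([2.0, 0.0]), lam=0.4, thetas=thetas)
# Selected limit: the minimum-norm minimizer (0.5, 0.5).
```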
On the other hand, let S and T be two nonexpansive mappings. In 2009, Yao et al. [21] considered the following hierarchical variational inequality problem (HVIP): find hierarchically a fixed point of T which is a solution to the VIP for the monotone mapping $I-S$; namely, find $\tilde{x}\in\operatorname{Fix}(T)$ such that
$$\langle (I-S)\tilde{x},\,x-\tilde{x}\rangle \ge 0,\quad \forall x\in\operatorname{Fix}(T). \tag{1.7}$$
The solution set of the hierarchical VIP (1.7) is denoted by Λ. It is not hard to check that solving the hierarchical VIP (1.7) is equivalent to the fixed point problem of the composite mapping $P_{\operatorname{Fix}(T)}S$, i.e., find $\tilde{x}\in C$ such that $\tilde{x}=P_{\operatorname{Fix}(T)}S\tilde{x}$. The authors [21] introduced and analyzed the following two-step iterative algorithm for solving the HVIP (1.7):
$$y_n=\beta_n Sx_n+(1-\beta_n)x_n,\qquad x_{n+1}=\alpha_n f(x_n)+(1-\alpha_n)Ty_n,\quad \forall n\ge 0. \tag{1.8}$$
Theorem 1.2 (see [[21], Theorem 3.2])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let S and T be two nonexpansive mappings of C into itself. Let be a fixed contraction with . Let and be two sequences in . For any given , let be the sequence generated by (1.8). Assume that the sequence is bounded and that
(i) ;
(ii) , ;
(iii) , and ;
(iv) ;
(v) there exists a constant such that for each , where .
Then converges strongly to which solves the HVIP
Furthermore, let $\varphi:C\to\mathbf{R}$ be a real-valued function, $A:C\to H$ be a nonlinear mapping and $\Theta:C\times C\to\mathbf{R}$ be a bifunction. In 2008, Peng and Yao [9] introduced the generalized mixed equilibrium problem (GMEP) of finding $x\in C$ such that
$$\Theta(x,y)+\varphi(y)-\varphi(x)+\langle Ax,\,y-x\rangle \ge 0,\quad \forall y\in C. \tag{1.9}$$
We denote the set of solutions of GMEP (1.9) by $\operatorname{GMEP}(\Theta,\varphi,A)$. The GMEP (1.9) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. The GMEP has been further considered and studied; see, e.g., [7, 10, 15, 17, 19, 22–24]. In particular, if $\varphi=0$, then GMEP (1.9) reduces to the generalized equilibrium problem (GEP), which is to find $x\in C$ such that
$$\Theta(x,y)+\langle Ax,\,y-x\rangle \ge 0,\quad \forall y\in C.$$
It was introduced and studied by Takahashi and Takahashi [25]. The set of solutions of the GEP is denoted by $\operatorname{GEP}(\Theta,A)$.
If $A=0$, then GMEP (1.9) reduces to the mixed equilibrium problem (MEP), which is to find $x\in C$ such that
$$\Theta(x,y)+\varphi(y)-\varphi(x)\ge 0,\quad \forall y\in C.$$
It was considered and studied in [26]. The set of solutions of the MEP is denoted by $\operatorname{MEP}(\Theta,\varphi)$.
If $\varphi=0$ and $A=0$, then GMEP (1.9) reduces to the equilibrium problem (EP), which is to find $x\in C$ such that
$$\Theta(x,y)\ge 0,\quad \forall y\in C.$$
It was considered and studied in [27, 28]. The set of solutions of the EP is denoted by $\operatorname{EP}(\Theta)$. It is worth mentioning that the EP is a unified model of several problems, namely, variational inequality problems, optimization problems, saddle point problems, complementarity problems, fixed point problems, Nash equilibrium problems, etc.
It was assumed in [9] that $\Theta:C\times C\to\mathbf{R}$ is a bifunction satisfying conditions (A1)-(A4) and $\varphi:C\to\mathbf{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where
(A1) $\Theta(x,x)=0$ for all $x\in C$;
(A2) Θ is monotone, i.e., $\Theta(x,y)+\Theta(y,x)\le 0$ for any $x,y\in C$;
(A3) Θ is upper-hemicontinuous, i.e., for each $x,y,z\in C$,
$$\limsup_{t\to 0^+}\Theta\bigl(tz+(1-t)x,\,y\bigr)\le\Theta(x,y);$$
(A4) $y\mapsto\Theta(x,y)$ is convex and lower semicontinuous for each $x\in C$;
(B1) for each $x\in H$ and $r>0$, there exist a bounded subset $D_x\subseteq C$ and $y_x\in C$ such that, for any $z\in C\setminus D_x$,
$$\Theta(z,y_x)+\varphi(y_x)-\varphi(z)+\frac{1}{r}\langle y_x-z,\,z-x\rangle<0;$$
(B2) C is a bounded set.
Given a positive number $r>0$, let $T_r^{(\Theta,\varphi)}:H\to C$ be the solution set of the auxiliary mixed equilibrium problem; that is, for each $x\in H$,
$$T_r^{(\Theta,\varphi)}(x):=\Bigl\{y\in C:\Theta(y,z)+\varphi(z)-\varphi(y)+\frac{1}{r}\langle y-x,\,z-y\rangle\ge 0,\ \forall z\in C\Bigr\}.$$
In addition, let B be a single-valued mapping of C into H and let $R:C\to 2^H$ be a multivalued mapping with $D(R)=C$. Consider the following variational inclusion: find a point $x\in C$ such that
$$0\in Bx+Rx. \tag{1.10}$$
We denote by $\operatorname{I}(B,R)$ the solution set of the variational inclusion (1.10). In particular, if $B=0$, then $\operatorname{I}(B,R)=R^{-1}0$; in this case, problem (1.10) becomes the inclusion problem introduced by Rockafellar [29]. It is known that problem (1.10) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, equilibria, game theory, etc. Let a set-valued mapping $R:D(R)\subseteq H\to 2^H$ be maximal monotone. We define the resolvent operator $J_{R,\lambda}$ associated with R and λ as follows:
$$J_{R,\lambda}x=(I+\lambda R)^{-1}x,\quad \forall x\in H,$$
where λ is a positive number.
In 1998, Huang [30] studied problem (1.10) in the case where R is maximal monotone and B is strongly monotone and Lipschitz-continuous with . Subsequently, Zeng et al. [31] further studied problem (1.10) in the case which is more general than Huang’s one [30]. Moreover, the authors [31] obtained the same strong convergence conclusion as in Huang’s result [30]. In addition, the authors also gave a geometric convergence rate estimate for approximate solutions. Also, various types of iterative algorithms for solving variational inclusions have been further studied and developed; for more details, refer to [12, 24, 32–34] and the references therein.
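For a concrete feel for resolvents, recall that a point u solves the inclusion 0 ∈ Bu + Ru exactly when it is a fixed point of u ↦ J_{R,λ}(u − λBu) (this is the characterization in Lemma 2.11 below). The sketch takes R to be the subdifferential of the ℓ1-norm, whose resolvent is componentwise soft-thresholding; the mapping B and the data are our own illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Resolvent J_{R,lam} = (I + lam*R)^{-1} for R = subdifferential of
    the l1-norm: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward(B, resolvent, x0, lam, iters):
    """Iterate the fixed-point characterization of 0 in B(x) + R(x):
    x_{n+1} = J_{R,lam}(x_n - lam*B(x_n))."""
    x = x0
    for _ in range(iters):
        x = resolvent(x - lam * B(x), lam)
    return x

# Illustrative inclusion: B(x) = x - c (strongly monotone and Lipschitz),
# R = subdifferential of the l1-norm.  Then 0 in (x - c) + R(x) has the
# closed-form solution x* = soft_threshold(c, 1).
c = np.array([3.0, -0.5, 2.0])
B = lambda x: x - c

x = forward_backward(B, soft_threshold, np.zeros(3), lam=0.5, iters=200)
# Closed-form solution: (2.0, 0.0, 1.0).
```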
Motivated and inspired by the above facts, we introduce and analyze a multi-step hybrid extragradient algorithm by combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann’s iteration method, and the gradient-projection method (GPM) with regularization in the setting of infinite-dimensional Hilbert spaces. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a solution of the CMP (1.2) with constraints of several problems: finitely many GMEPs, finitely many variational inclusions, and the fixed point problem of a strictly pseudocontractive mapping. In the meantime, we also prove the strong convergence of the proposed algorithm to the unique solution of a hierarchical variational inequality problem (over the fixed point set of a strictly pseudocontractive mapping) with constraints of finitely many GMEPs, finitely many variational inclusions, and the CMP (1.2). Our results represent the supplementation, improvement, extension, and development of the corresponding results in Xu [[20], Theorems 4.1 and 5.2] and Yao et al. [[21], Theorems 3.1 and 3.2].
2 Preliminaries
Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let C be a nonempty closed convex subset of H. We write $x_n\rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to x and $x_n\to x$ to indicate that the sequence $\{x_n\}$ converges strongly to x. Moreover, we use $\omega_w(x_n)$ to denote the weak ω-limit set of the sequence $\{x_n\}$, i.e.,
$$\omega_w(x_n):=\bigl\{x\in H: x_{n_i}\rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\}\bigr\}.$$
The metric (or nearest point) projection from H onto C is the mapping $P_C:H\to C$ which assigns to each point $x\in H$ the unique point $P_Cx\in C$ satisfying the property
$$\|x-P_Cx\|=\inf_{y\in C}\|x-y\|=:d(x,C).$$
Definition 2.1 Let T be a nonlinear operator with domain $D(T)\subseteq H$ and range $R(T)\subseteq H$. Then T is said to be
(i) monotone if $\langle Tx-Ty,\,x-y\rangle\ge 0$, $\forall x,y\in D(T)$;
(ii) β-strongly monotone if there exists a constant $\beta>0$ such that $\langle Tx-Ty,\,x-y\rangle\ge\beta\|x-y\|^2$, $\forall x,y\in D(T)$;
(iii) ν-inverse-strongly monotone if there exists a constant $\nu>0$ such that $\langle Tx-Ty,\,x-y\rangle\ge\nu\|Tx-Ty\|^2$, $\forall x,y\in D(T)$.
It is easy to see that the projection $P_C$ is 1-inverse-strongly monotone. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see, e.g., [35]. It is obvious that if T is ν-inverse-strongly monotone, then T is monotone and $\frac{1}{\nu}$-Lipschitz-continuous. Moreover, we also have, for all $u,v\in H$ and $\lambda>0$,
$$\|(I-\lambda T)u-(I-\lambda T)v\|^2\le\|u-v\|^2+\lambda(\lambda-2\nu)\|Tu-Tv\|^2.$$
So, if $\lambda\le 2\nu$, then $I-\lambda T$ is a nonexpansive mapping.
Some important properties of projections are gathered in the following proposition.
Proposition 2.1 For given $x\in H$ and $z\in C$:
(i) $z=P_Cx\ \Leftrightarrow\ \langle x-z,\,y-z\rangle\le 0$, $\forall y\in C$;
(ii) $z=P_Cx\ \Leftrightarrow\ \|x-z\|^2\le\|x-y\|^2-\|y-z\|^2$, $\forall y\in C$;
(iii) $\langle P_Cx-P_Cy,\,x-y\rangle\ge\|P_Cx-P_Cy\|^2$, $\forall y\in H$.
Consequently, $P_C$ is nonexpansive and monotone.
Definition 2.2 A mapping $T:H\to H$ is said to be
(a) nonexpansive if $\|Tx-Ty\|\le\|x-y\|$, $\forall x,y\in H$;
(b) firmly nonexpansive if $2T-I$ is nonexpansive, or equivalently, if T is 1-inverse-strongly monotone (1-ism), i.e.,
$$\langle x-y,\,Tx-Ty\rangle\ge\|Tx-Ty\|^2,\quad \forall x,y\in H;$$
alternatively, T is firmly nonexpansive if and only if T can be expressed as
$$T=\tfrac{1}{2}(I+S),$$
where $S:H\to H$ is nonexpansive; projections are firmly nonexpansive.
It can easily be seen that if T is nonexpansive, then $I-T$ is monotone.
Next we list some elementary conclusions for the MEP.
Proposition 2.2 (see [26])
Assume that $\Theta:C\times C\to\mathbf{R}$ satisfies (A1)-(A4) and let $\varphi:C\to\mathbf{R}$ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r>0$ and $x\in H$, define a mapping $T_r^{(\Theta,\varphi)}:H\to C$ as follows:
$$T_r^{(\Theta,\varphi)}(x):=\Bigl\{z\in C:\Theta(z,y)+\varphi(y)-\varphi(z)+\frac{1}{r}\langle y-z,\,z-x\rangle\ge 0,\ \forall y\in C\Bigr\}$$
for all $x\in H$. Then the following hold:
(i) for each $x\in H$, $T_r^{(\Theta,\varphi)}(x)$ is nonempty and single-valued;
(ii) $T_r^{(\Theta,\varphi)}$ is firmly nonexpansive, that is, for any $x,y\in H$,
$$\|T_r^{(\Theta,\varphi)}x-T_r^{(\Theta,\varphi)}y\|^2\le\langle T_r^{(\Theta,\varphi)}x-T_r^{(\Theta,\varphi)}y,\,x-y\rangle;$$
(iii) $\operatorname{Fix}(T_r^{(\Theta,\varphi)})=\operatorname{MEP}(\Theta,\varphi)$;
(iv) $\operatorname{MEP}(\Theta,\varphi)$ is closed and convex;
(v) $\|T_s^{(\Theta,\varphi)}x-T_t^{(\Theta,\varphi)}x\|^2\le\frac{s-t}{s}\langle T_s^{(\Theta,\varphi)}x-T_t^{(\Theta,\varphi)}x,\,T_s^{(\Theta,\varphi)}x-x\rangle$ for all $s,t>0$ and $x\in H$.
Definition 2.3 A mapping $T:H\to H$ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,
$$T\equiv(1-\alpha)I+\alpha S,$$
where $\alpha\in(0,1)$ and $S:H\to H$ is nonexpansive. More precisely, when the last equality holds, we say that T is α-averaged. Thus firmly nonexpansive mappings (in particular, projections) are $\frac{1}{2}$-averaged mappings.
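Averaging is not cosmetic: Picard iterates of a nonexpansive mapping need not converge, while those of its average do (the Krasnosel'skii–Mann principle). A minimal numeric illustration with an assumed toy mapping, a plane rotation:

```python
import numpy as np

# A rotation by 90 degrees is nonexpansive with unique fixed point 0,
# but its Picard iterates x, Sx, S^2x, ... merely cycle.
R90 = np.array([[0.0, -1.0], [1.0, 0.0]])
S = lambda x: R90 @ x

# Its 1/2-average T = (1/2)(I + S) is an averaged mapping; Picard
# iterates of an averaged mapping with a fixed point converge to one.
T = lambda x: 0.5 * (x + S(x))

cycle = np.array([1.0, 1.0])
for _ in range(4):
    cycle = S(cycle)      # four rotations return to the starting point

x = np.array([1.0, 1.0])
for _ in range(100):
    x = T(x)              # averaged iterates spiral into the fixed point 0
```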
Proposition 2.3 (see [36])
Let $T:H\to H$ be a given mapping.
(i) T is nonexpansive if and only if the complement $I-T$ is $\frac{1}{2}$-ism.
(ii) If T is ν-ism, then for $\gamma>0$, γT is $\frac{\nu}{\gamma}$-ism.
(iii) T is averaged if and only if the complement $I-T$ is ν-ism for some $\nu>1/2$. Indeed, for $\alpha\in(0,1)$, T is α-averaged if and only if $I-T$ is $\frac{1}{2\alpha}$-ism.
Proposition 2.4 (see [36, 37])
Let $S,T,V:H\to H$ be given operators.
(i) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$ and if S is averaged and V is nonexpansive, then T is averaged.
(ii) T is firmly nonexpansive if and only if the complement $I-T$ is firmly nonexpansive.
(iii) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$ and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1,\alpha_2\in(0,1)$, then the composite $T_1T_2$ is α-averaged, where $\alpha=\alpha_1+\alpha_2-\alpha_1\alpha_2$.
(v) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^N\operatorname{Fix}(T_i)=\operatorname{Fix}(T_1\cdots T_N).$$
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping T, that is, $\operatorname{Fix}(T)=\{x\in H: Tx=x\}$.
We need some facts and tools in a real Hilbert space H which are listed as lemmas below.
Lemma 2.1 Let X be a real inner product space. Then we have the following inequality:
$$\|x+y\|^2\le\|x\|^2+2\langle y,\,x+y\rangle,\quad \forall x,y\in X.$$
Lemma 2.2 Let H be a real Hilbert space. Then the following hold:
(a) $\|x-y\|^2=\|x\|^2-2\langle x,\,y\rangle+\|y\|^2$ for all $x,y\in H$;
(b) $\|\lambda x+\mu y\|^2=\lambda\|x\|^2+\mu\|y\|^2-\lambda\mu\|x-y\|^2$ for all $x,y\in H$ and $\lambda,\mu\in[0,1]$ with $\lambda+\mu=1$;
(c) if $\{x_n\}$ is a sequence in H such that $x_n\rightharpoonup x$, it follows that
$$\limsup_{n\to\infty}\|x_n-y\|^2=\limsup_{n\to\infty}\|x_n-x\|^2+\|x-y\|^2,\quad \forall y\in H.$$
It is clear that, in a real Hilbert space H, $T:C\to C$ is ξ-strictly pseudocontractive if and only if the following inequality holds:
$$\langle Tx-Ty,\,x-y\rangle\le\|x-y\|^2-\frac{1-\xi}{2}\|(I-T)x-(I-T)y\|^2,\quad \forall x,y\in C.$$
This immediately implies that if T is a ξ-strictly pseudocontractive mapping, then $I-T$ is $\frac{1-\xi}{2}$-inverse-strongly monotone; for further details, we refer to [38] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings and that the class of pseudocontractions strictly includes the class of strict pseudocontractions.
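These two inequalities can be sanity-checked numerically. The mapping below, Tx = −2x, is an assumed toy example: it triples distances (so it is not nonexpansive) yet is a ξ-strict pseudocontraction with ξ = 1/3, and I − T attains the (1 − ξ)/2-ism bound with equality.

```python
import numpy as np

rng = np.random.default_rng(0)
T = lambda x: -2.0 * x   # strict pseudocontraction with xi = 1/3
xi = 1.0 / 3.0

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # Defining inequality: |Tx-Ty|^2 <= |x-y|^2 + xi*|(I-T)x-(I-T)y|^2.
    lhs = np.linalg.norm(T(x) - T(y)) ** 2
    rhs = (np.linalg.norm(x - y) ** 2
           + xi * np.linalg.norm((x - T(x)) - (y - T(y))) ** 2)
    assert lhs <= rhs + 1e-9
    # Consequence: I - T is (1 - xi)/2-inverse-strongly monotone.
    u, v = x - T(x), y - T(y)
    assert np.dot(u - v, x - y) >= (1 - xi) / 2 * np.dot(u - v, u - v) - 1e-9
```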
Lemma 2.3 (see [[38], Proposition 2.1])
Let C be a nonempty closed convex subset of a real Hilbert space H and let $T:C\to C$ be a mapping.
(i) If T is a ξ-strictly pseudocontractive mapping, then T satisfies the Lipschitzian condition
$$\|Tx-Ty\|\le\frac{1+\xi}{1-\xi}\|x-y\|,\quad \forall x,y\in C.$$
(ii) If T is a ξ-strictly pseudocontractive mapping, then the mapping $I-T$ is semiclosed at 0, that is, if $\{x_n\}$ is a sequence in C such that $x_n\rightharpoonup\tilde{x}$ and $(I-T)x_n\to 0$, then $(I-T)\tilde{x}=0$.
(iii) If T is a ξ-(quasi-)strict pseudocontraction, then the fixed point set $\operatorname{Fix}(T)$ of T is closed and convex, so that the projection $P_{\operatorname{Fix}(T)}$ is well defined.
Lemma 2.4 (see [11])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $T:C\to C$ be a ξ-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers such that $(\gamma+\delta)\xi\le\gamma$. Then
$$\|\gamma(x-y)+\delta(Tx-Ty)\|\le(\gamma+\delta)\|x-y\|,\quad \forall x,y\in C.$$
Lemma 2.5 (see [[39], Demiclosedness principle])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C with $\operatorname{Fix}(S)\neq\emptyset$. Then $I-S$ is demiclosed. That is, whenever $\{x_n\}$ is a sequence in C weakly converging to some $x\in C$ and the sequence $\{(I-S)x_n\}$ strongly converges to some y, it follows that $(I-S)x=y$. Here I is the identity operator of H.
Lemma 2.6 Let $A:C\to H$ be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 2.1(i)) implies
$$u\in\operatorname{VI}(C,A)\quad\Leftrightarrow\quad u=P_C(u-\lambda Au),\quad \forall\lambda>0.$$
Let C be a nonempty closed convex subset of a real Hilbert space H. We introduce some notations. Let λ be a number in $(0,1]$ and let $\mu>0$. Associated with a nonexpansive mapping $T:C\to H$, we define the mapping $T^\lambda:C\to H$ by
$$T^\lambda x:=Tx-\lambda\mu F(Tx),\quad \forall x\in C,$$
where $F:H\to H$ is an operator such that, for some positive constants $\kappa,\eta>0$, F is κ-Lipschitzian and η-strongly monotone on H; that is, F satisfies the conditions:
$$\|Fx-Fy\|\le\kappa\|x-y\|\quad\text{and}\quad\langle Fx-Fy,\,x-y\rangle\ge\eta\|x-y\|^2$$
for all $x,y\in H$.
Lemma 2.7 (see [[40], Lemma 3.1])
$T^\lambda$ is a contraction provided $0<\mu<\frac{2\eta}{\kappa^2}$; that is,
$$\|T^\lambda x-T^\lambda y\|\le(1-\lambda\tau)\|x-y\|,\quad \forall x,y\in C,$$
where $\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}\in(0,1]$.
Remark 2.1 (i) Since F is κ-Lipschitzian and η-strongly monotone on H, we get $0<\eta\le\kappa$. Hence, whenever $0<\mu<\frac{2\eta}{\kappa^2}$, we have $0<\tau\le 1$.
(ii) In Lemma 2.7, put and . Then we know that , , and .
Lemma 2.8 (see [41])
Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the property
$$a_{n+1}\le(1-\alpha_n)a_n+\alpha_n\beta_n+\gamma_n,\quad \forall n\ge 0,$$
where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are such that:
(i) $\{\alpha_n\}\subset[0,1]$ and $\sum_{n=0}^\infty\alpha_n=\infty$;
(ii) either $\limsup_{n\to\infty}\beta_n\le 0$ or $\sum_{n=0}^\infty|\alpha_n\beta_n|<\infty$;
(iii) $\gamma_n\ge 0$ for all $n\ge 0$, and $\sum_{n=0}^\infty\gamma_n<\infty$.
Then $\lim_{n\to\infty}a_n=0$.
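The mechanics of this lemma are easy to watch numerically. Under the assumed, purely illustrative choices α_n = 1/n, β_n = 1/n, γ_n = 1/n², the recursion a_{n+1} = (1 − α_n)a_n + α_nβ_n + γ_n satisfies all three conditions, and a_n is driven to 0 despite the perturbation terms:

```python
# Numeric illustration: a_{n+1} = (1 - alpha_n)*a_n + alpha_n*beta_n + gamma_n
# with alpha_n non-summable, beta_n -> 0, and gamma_n summable.
a = 1.0
for n in range(1, 100_001):
    alpha = 1.0 / n        # sum of alpha_n diverges
    beta = 1.0 / n         # tends to 0, so limsup(beta_n) <= 0
    gamma = 1.0 / n ** 2   # summable
    a = (1 - alpha) * a + alpha * beta + gamma
# a is now close to 0 (roughly of order 2*log(n)/n).
```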
Recall that a set-valued mapping $T:D(T)\subseteq H\to 2^H$ is called monotone if, for all $x,y\in D(T)$, $f\in Tx$ and $g\in Ty$ imply
$$\langle f-g,\,x-y\rangle\ge 0.$$
A set-valued mapping T is called maximal monotone if T is monotone and $(I+\lambda T)D(T)=H$ for each $\lambda>0$, where I is the identity mapping of H. We denote by $G(T)$ the graph of T. It is well known that a monotone mapping T is maximal if and only if, for $(x,f)\in H\times H$, $\langle f-g,\,x-y\rangle\ge 0$ for every $(y,g)\in G(T)$ implies $f\in Tx$. Next we provide an example to illustrate the concept of a maximal monotone mapping.
Let $A:C\to H$ be a monotone, k-Lipschitz-continuous mapping and let $N_Cv$ be the normal cone to C at $v\in C$, i.e.,
$$N_Cv=\{w\in H:\langle w,\,u-v\rangle\le 0,\ \forall u\in C\}.$$
Define
$$\widetilde{T}v=\begin{cases}Av+N_Cv,& \text{if } v\in C,\\ \emptyset,& \text{if } v\notin C.\end{cases}$$
Then $\widetilde{T}$ is maximal monotone (see [29]) such that
$$0\in\widetilde{T}v\quad\Leftrightarrow\quad v\in\operatorname{VI}(C,A).$$
Let $R:D(R)\subseteq H\to 2^H$ be a maximal monotone mapping. Let $\lambda,\mu>0$ be two positive numbers.
Lemma 2.9 (see [42])
We have the resolvent identity
$$J_{R,\lambda}x=J_{R,\mu}\Bigl(\frac{\mu}{\lambda}x+\Bigl(1-\frac{\mu}{\lambda}\Bigr)J_{R,\lambda}x\Bigr),\quad \forall x\in H.$$
Remark 2.2 For , we have the following relation:
In terms of Huang [30] (see also [31]), we have the following property for the resolvent operator .
Lemma 2.10 $J_{R,\lambda}$ is single-valued and firmly nonexpansive, i.e.,
$$\langle J_{R,\lambda}x-J_{R,\lambda}y,\,x-y\rangle\ge\|J_{R,\lambda}x-J_{R,\lambda}y\|^2,\quad \forall x,y\in H.$$
Consequently, $J_{R,\lambda}$ is nonexpansive and monotone.
Lemma 2.11 (see [12])
Let R be a maximal monotone mapping with $D(R)=C$. Then for any given $\lambda>0$, $u\in C$ is a solution of problem (1.10) if and only if $u$ satisfies
$$u=J_{R,\lambda}(u-\lambda Bu).$$
Lemma 2.12 (see [31])
Let R be a maximal monotone mapping with and let be a strongly monotone, continuous, and single-valued mapping. Then for each , the equation has a unique solution for .
Lemma 2.13 (see [12])
Let R be a maximal monotone mapping with and be a monotone, continuous and single-valued mapping. Then for each . In this case, is maximal monotone.
3 Main results
In this section, we will introduce and analyze a multistep hybrid extragradient algorithm for finding a solution of the CMP (1.2) with constraints of several problems: finitely many GMEPs and finitely many variational inclusions and the fixed point problem of a strict pseudocontraction in a real Hilbert space. This algorithm is based on Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method [43], Mann’s iteration method and the gradient-projection method (GPM) with regularization. Under appropriate assumptions, we prove the strong convergence of the proposed algorithm to a solution of the CMP (1.2), which is also the unique solution of a hierarchical variational inequality problem (HVIP).
Let C be a nonempty closed convex subset of a real Hilbert space H and let $f:C\to\mathbf{R}$ be a convex functional with L-Lipschitz-continuous gradient ∇f. We denote by Γ the solution set of the CMP (1.2). The CMP (1.2) is generally ill-posed. Consider the following Tikhonov regularization problem:
$$\min_{x\in C}f_\alpha(x):=f(x)+\frac{\alpha}{2}\|x\|^2,$$
where $\alpha>0$ is the regularization parameter. Hence, we have
$$\nabla f_\alpha=\nabla f+\alpha I.$$
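The effect of the regularization can be sketched numerically: for an objective with a whole hyperplane of minimizers, running gradient descent on ∇f_α with a slowly vanishing α_n selects the minimum-norm minimizer. The objective, step size, and schedule α_n = n^{-1/2} below are illustrative assumptions (here C = R², so the projection is the identity).

```python
import numpy as np

a_vec = np.array([1.0, 2.0])
grad_f = lambda x: (a_vec @ x - 1.0) * a_vec   # f(x) = 0.5*(<a,x> - 1)^2
# Minimizers: the hyperplane <a, x> = 1; the minimum-norm one is a/|a|^2.

x = np.array([3.0, -1.0])
lam = 0.2                                      # step size below 2/(L + alpha), L = 5
for n in range(1, 100_001):
    alpha = 1.0 / np.sqrt(n)                   # alpha_n -> 0, sum(alpha_n) = inf
    x = x - lam * (grad_f(x) + alpha * x)      # gradient of f_alpha = f + (alpha/2)*|.|^2
# x approaches the minimum-norm minimizer a/|a|^2 = (0.2, 0.4).
```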
We are now in a position to state and prove the main result in this paper.
Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let M, N be two positive integers. Let be a convex functional with L-Lipschitz-continuous gradient ∇f. Let be a bifunction from to R satisfying (A1)-(A4) and be a proper lower semicontinuous and convex function with restriction (B1) or (B2), where . Let be a maximal monotone mapping and let and be -inverse-strongly monotone and -inverse-strongly monotone, respectively, where , . Let be a ξ-strictly pseudocontractive mapping, be a nonexpansive mapping and be a ρ-contraction with coefficient . Let be κ-Lipschitzian and η-strongly monotone with positive constants such that and where . Assume that . Let , with , with , and , where and . For arbitrarily given , let be a sequence generated by
where for all . Suppose that
(C1) , and ;
(C2) , and ;
(C3) and ;
(C4) and for and ;
(C5) , and for all ;
(C6) , and .
Then we have:
(i) ;
(ii) ;
(iii) converges strongly to a minimizer of the CMP (1.2), which is a unique solution in Ω to the HVIP
Proof First of all, observe that
and
Since and , we know that and hence the mapping is -strongly monotone. Moreover, it is clear that the mapping is -Lipschitzian. Thus, there exists a unique solution in Ω to the VIP
That is, . Now, we put
for all and ,
for all , and , where I is the identity mapping on H. Then we have and .
In addition, we show that is ν-averaged for each , where
Indeed, the Lipschitz continuity of ∇f implies that ∇f is -ism (see [20] (also [44])); that is,
Observe that
Therefore, it follows that is -ism. Thus, by Proposition 2.3(ii), is -ism. From Proposition 2.3(iii), the complement is -averaged. Consequently, noting that is -averaged and utilizing Proposition 2.4(iv), we find that, for each , is ν-averaged with
This shows that is nonexpansive. Taking into account that and , we get
Without loss of generality, we may assume that for each . So, is nonexpansive for each . Similarly, since
it is well known that is nonexpansive for each .
We divide the rest of the proof into several steps.
Step 1. We prove that is bounded.
Indeed, take a fixed arbitrarily. Utilizing (2.1) and Proposition 2.2(ii) we have
Utilizing (2.1) and Lemma 2.10 we have
Combining (3.2) and (3.3), we have
For simplicity, put for each . Note that for . Hence, from (3.4), it follows that
Since for all and T is ξ-strictly pseudocontractive, utilizing Lemma 2.4 we obtain from (3.1) and (3.5)
Utilizing Lemma 2.7, we deduce from (3.1), (3.6), , and that, for all ,
By induction, we get
Thus, is bounded (due to ) and so are the sequences , , and .
Step 2. We prove that .
Indeed, utilizing (2.1) and (2.3), we obtain
where
for some and for some .
Utilizing Proposition 2.2(ii), (v), we deduce that
where is a constant such that, for each ,
Furthermore, we define for all . It follows that
Since for all , utilizing Lemma 2.4 and the nonexpansivity of we have
and
Hence it follows from (3.7)-(3.11) that
In the meantime, a simple calculation shows that
So, it follows from (3.12) that
where for some .
On the other hand, we define for all . Then it is well known that for all . Simple calculations show that
Since V is a ρ-contraction with coefficient and S is a nonexpansive mapping, we conclude that
which, together with (3.13) and , implies that
where for some . Consequently,
where for some . From conditions (C1)-(C5) it follows that and
Thus, utilizing Lemma 2.8, we immediately conclude that
So, from it follows that
Step 3. We prove that , , and .
Indeed, utilizing Lemmas 2.1 and 2.2(b), from (3.1), (3.4)-(3.5), and we deduce that
and hence
which, together with , immediately yields
Since , , , , and is bounded, we have
Observe that
and
for and . Combining (3.5), (3.15), (3.18), and (3.19), we get
which immediately leads to
Since , , , , , , , and , , are bounded sequences, we have
for all and .
Furthermore, by Proposition 2.2(ii) and Lemma 2.2(a) we have
which implies that
By Lemma 2.2(a) and Lemma 2.10, we obtain
which immediately leads to
Combining (3.20) and (3.23) we conclude that
which yields
Since , , , and , , and are bounded sequences, we deduce from (3.17), (3.21), and that
Also, combining (3.3), (3.20), and (3.22) we deduce that
which yields
Since , for , and , , are bounded sequences, we deduce from (3.17), (3.21), and that
Hence from (3.24) and (3.25) we get
and
respectively. Thus, from (3.26) and (3.27) we obtain
On the other hand, note that . Then, utilizing Lemma 2.1 and the -inverse strong monotonicity of ∇f, we deduce from (2.1) that
Combining (3.4), (3.20), and (3.29) we obtain
which, together with and , leads to
Since , , and are bounded sequences, we deduce from (3.17) and that
So, it is clear that
Again, utilizing Proposition 2.1(iii), from and , we get
which immediately leads to
Combining (3.4), (3.20), and (3.31) we obtain
which immediately yields
Since , , and are bounded sequences, we deduce from (3.17) and that
Observe that
Thus, from (3.30) and (3.32) we have
Taking into account that , from (3.28) and (3.33) we get
Utilizing the relation , we have
which, together with (3.17) and (3.34), implies that
Since , we obtain
Step 4. We prove that .
Indeed, since H is reflexive and is bounded, there exists at least a weak convergence subsequence of . Hence we know that . Now, take an arbitrary . Then there exists a subsequence of such that . From (3.24)-(3.26), (3.28), and (3.34) we have that , , , and , where and . Utilizing Lemma 2.3(ii), we deduce from and (3.35) that . Next, we prove that . As a matter of fact, since is -inverse-strongly monotone, is a monotone and Lipschitz-continuous mapping. It follows from Lemma 2.13 that is maximal monotone. Let , i.e., . Again, since , , , we have
that is,
In terms of the monotonicity of , we get
and hence
In particular,
Since (due to (3.24)) and (due to the Lipschitz-continuity of ), we conclude from and that
It follows from the maximal monotonicity of that , i.e., . Therefore, . Next we prove that . Since , , , we have
By (A2), we have
Let for all and . This implies that . Then we have
By (3.25), we have as . Furthermore, by the monotonicity of , we obtain . Then by (A4) we obtain
Utilizing (A1), (A4), and (3.37), we obtain
and hence
Letting , we have, for each ,
This implies that and hence . Thus, .
Furthermore, let us show that . In fact, define
where . Then is maximal monotone and if and only if ; see [29]. Let . Then we have , and hence . So, we have for all . On the other hand, from and , we get , and hence,
Therefore, from and , we have
Hence, it is easy to see that as . Since is maximal monotone, we have , and hence . Consequently, . This shows that .
Step 5. We prove that where .
Indeed, take an arbitrary . Then there exists a subsequence of such that . Utilizing (3.16), we find that, for all ,
which implies that
Since , , , and
from (3.38) we conclude that
that is,
Since is -strongly monotone and -Lipschitz-continuous, by Minty’s lemma [39] we know that (3.39) is equivalent to the VIP
This shows that . Taking into account , we know that . Thus, ; that is, .
Next we prove that . As a matter of fact, utilizing (3.16) with , we get
Since , , and (due to ), we deduce that , , and
Therefore, applying Lemma 2.8 to (3.41) we infer that . This completes the proof. □
In Theorem 3.1, putting and , we obtain the following.
Corollary 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let be a convex functional with L-Lipschitz-continuous gradient ∇f. Let be a bifunction from to R satisfying (A1)-(A4), be a proper lower semicontinuous and convex function with restriction (B1) or (B2), and be -inverse-strongly monotone. Let be a maximal monotone mapping and be -inverse-strongly monotone, for . Let be a ξ-strictly pseudocontractive mapping, be a nonexpansive mapping and be a ρ-contraction with coefficient . Let be κ-Lipschitzian and η-strongly monotone with positive constants such that and where . Assume that . Let , with , with , and , for . For arbitrarily given , let be a sequence generated by
where for all . Suppose that
(C1) , and ;
(C2) , and ;
(C3) and ;
(C4) and for ;
(C5) , and for all ;
(C6) , and .
Then we have:
(i) ;
(ii) ;
(iii) converges strongly to a minimizer of the CMP (1.2), which is a unique solution in Ω to the HVIP
In Theorem 3.1, putting , we obtain the following.
Corollary 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H. Let be a convex functional with L-Lipschitz-continuous gradient ∇f. Let be a bifunction from to R satisfying (A1)-(A4), be a proper lower semicontinuous and convex function with restriction (B1) or (B2), and be -inverse-strongly monotone. Let be a maximal monotone mapping and be -inverse-strongly monotone. Let be a ξ-strictly pseudocontractive mapping, be a nonexpansive mapping and be a ρ-contraction with coefficient . Let be κ-Lipschitzian and η-strongly monotone with positive constants such that and where . Assume that . Let , with , with , and , . For arbitrarily given , let be a sequence generated by
where for all . Suppose that
(C1) , and ;
(C2) , and ;
(C3) and ;
(C4) and ;
(C5) , and for all ;
(C6) , and .
Then we have:
(i) ;
(ii) ;
(iii) converges strongly to a minimizer of the CMP (1.2), which is a unique solution in Ω to the HVIP
In Theorem 3.1, putting , , and where I is the identity mapping on H, we obtain the following.
Corollary 3.3 Let C be a nonempty closed convex subset of a real Hilbert space H. Let be a convex functional with L-Lipschitz-continuous gradient ∇f. Let be a bifunction from to R satisfying (A1)-(A4), be a proper lower semicontinuous and convex function with restriction (B1) or (B2), and be -inverse-strongly monotone. Let be a maximal monotone mapping and be -inverse-strongly monotone. Let be a ξ-strictly pseudocontractive mapping, be a nonexpansive mapping and be a ρ-contraction with coefficient . Assume that . Let , , with , with , and , . For arbitrarily given , let be a sequence generated by
where for all . Suppose that
(C1) , and ;
(C2) , and ;
(C3) and ;
(C4) and ;
(C5) , and for all ;
(C6) , and .
Then we have:
(i) ;
(ii) ;
(iii) converges strongly to a minimizer of the CMP (1.2), which is a unique solution in Ω to the HVIP
Remark 3.1 It is obvious that our iterative scheme (3.1) is very different from Xu’s iterative scheme (1.4) and Yao et al.’s iterative scheme (1.8). Here, Xu’s iterative scheme in [[20], Theorem 5.2] is extended to develop our four-step iterative scheme (3.1) for the CMP (1.2) with constraints of finitely many GMEPs, finitely many variational inclusions, and the fixed point problem of a strict pseudocontraction by combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann’s iteration method, and the gradient-projection method (GPM) with regularization. It is worth pointing out that, without assumptions similar to those in [[21], Theorem 3.2], e.g., is bounded, and , for some , the sequence generated by (3.1) converges strongly to a point , which is a unique solution of the HVIP (over the fixed point set of the strictly pseudocontractive mapping T), i.e., , .
Remark 3.2 Our Theorem 3.1 improves, extends, supplements, and develops Xu [[20], Theorem 5.2] and Yao et al. [[21], Theorems 3.1 and 3.2] in the following aspects:
-
(a)
Our HVIP with the unique solution satisfying
is more general than the problem of finding a point satisfying in [21] and very different from the problem of finding a point satisfying in [[20], Theorem 5.2]. It is worth emphasizing that the nonexpansive mapping T in [[21], Theorems 3.1 and 3.2] is extended to the strict pseudocontraction T and the CMP in [[20], Theorem 5.2] is generalized to the setting of finitely many GMEPs, finitely many variational inclusions and the fixed point problem of a strict pseudocontraction T.
-
(b)
Our four-step iterative scheme (3.1) for the CMP with constraints of several problems is more flexible, more advantageous and more subtle than Xu’s one-step iterative one (1.4) and than Yao et al.’s two-step iterative one (1.8) because it can be used to solve two kinds of problems, e.g., the HVIP (over the fixed point set of strictly pseudocontractive mapping T), and the problem of finding a common point of four sets: , , and Γ. In addition, our Theorem 3.1 also drops the crucial requirements in [[21], Theorem 3.2(v)] that and , for some .
-
(c)
The techniques for the argument in our Theorem 3.1 are very different from [[21], Theorems 3.1 and 3.2] and from [[20], Theorem 5.2] because we make use of the properties of strictly pseudocontractive mappings (see Lemmas 2.3 and 2.4), the ones of resolvent operators and maximal monotone mappings (see Proposition 2.2, Remark 2.2, and Lemmas 2.9-2.13), the ones of averaged mappings (see Propositions 2.3 and 2.4), the inclusion problem ( for maximal monotone operator ) (see (2.2)), and the contractive coefficient estimates for the contractions associated with nonexpansive mappings (see Lemma 2.7).
(d)
Compared with the restrictions on the parameter sequences in [[21], Theorem 3.2] and [[20], Theorem 5.2], respectively, the hypotheses (C3)-(C6) in our Theorem 3.1 are added because our Theorem 3.1 involves a quite complex problem, namely the HVIP (over the fixed point set of the strictly pseudocontractive mapping T) with constraints of several problems: the CMP (1.2), finitely many GMEPs and finitely many variational inclusions.
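To illustrate the remaining building block, the gradient-projection method with (Tikhonov) regularization, here is a toy one-dimensional sketch. The objective f, the interval constraint C = [0, 1], the vanishing parameter alpha_k = 1/(k+1) and the step rule are hypothetical choices for illustration only, not the regularization scheme used in (3.1).

```python
def project_interval(x, lo=0.0, hi=1.0):
    """Metric projection onto C = [lo, hi]."""
    return min(max(x, lo), hi)

def grad_f(x):
    """Gradient of f(x) = (x - 0.5)^2 / 2, minimized over C = [0, 1] at x* = 0.5."""
    return x - 0.5

def gpm_regularized(x0=0.0, iters=500):
    """Gradient projection on the regularized objective f + (alpha/2)||x||^2,
    with alpha_k -> 0 so the iterates track the regularization path toward x*."""
    x = x0
    for k in range(iters):
        alpha = 1.0 / (k + 1)          # vanishing Tikhonov parameter
        step = 1.0 / (1.0 + alpha)     # step of size 1/(L + alpha), with L = 1 here
        g = grad_f(x) + alpha * x      # gradient of the regularized objective
        x = project_interval(x - step * g)
    return x

# The regularized minimizer is 0.5/(1 + alpha_k), which tends to x* = 0.5.
print(gpm_regularized())
```

Because f here is quadratic with L = 1, each update with step 1/(1 + alpha) lands exactly on the minimizer of the current regularized objective, so the printed value is 0.5/(1 + alpha) for the final alpha, close to the true constrained minimizer 0.5.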
References
Lions JL: Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod, Paris; 1969.
Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.
Takahashi W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama; 2000.
Oden JT: Quantitative Methods on Nonlinear Mechanics. Prentice Hall, Englewood Cliffs; 1986.
Zeidler E: Nonlinear Functional Analysis and Its Applications. Springer, New York; 1985.
Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.
Cai G, Bu SQ: Strong and weak convergence theorems for general mixed equilibrium problems and variational inequality problems and fixed point problems in Hilbert spaces. J. Comput. Appl. Math. 2013, 247: 34–52.
Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006,10(5):1293–1303.
Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.
Ceng LC, Ansari QH, Schaible S: Hybrid extragradient-like methods for generalized mixed equilibrium problems, system of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012, 53: 69–96. 10.1007/s10898-011-9703-4
Yao Y, Liou YC, Kang SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59: 3472–3480. 10.1016/j.camwa.2010.03.036
Ceng LC, Ansari QH, Wong MM, Yao JC: Mann type hybrid extragradient method for variational inequalities, variational inclusions and fixed point problems. Fixed Point Theory 2012,13(2):403–422.
Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z
Ceng LC, Hadjisavvas N, Wong NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46: 635–646. 10.1007/s10898-009-9454-7
Ceng LC, Yao JC: A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010, 72: 1922–1937. 10.1016/j.na.2009.09.033
Ceng LC, Wong MM: Relaxed extragradient method for finding a common element of systems of variational inequalities and fixed point problems. Taiwan. J. Math. 2013,17(2):701–724.
Ceng LC, Liao CW, Pang CT, Wen CF: Convex minimization with constraints of systems of variational inequalities, mixed equilibrium, variational inequality, and fixed point problems. J. Appl. Math. 2014., 2014: Article ID 105928
Ceng LC, Ansari QH, Yao JC: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218: 1112–1123. 10.1016/j.amc.2011.01.061
Ceng LC, Petrusel A: Relaxed extragradient-like method for general system of generalized mixed equilibria and fixed point problem. Taiwan. J. Math. 2012,16(2):445–478.
Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z
Yao Y, Liou YC, Marino G: Two-step iterative algorithms for hierarchical fixed point problems and variational inequality problems. J. Appl. Math. Comput. 2009,31(1–2):433–445. 10.1007/s12190-008-0222-5
Ceng LC, Guu SM, Yao JC: Hybrid iterative method for finding common solutions of generalized mixed equilibrium and fixed point problems. Fixed Point Theory Appl. 2012., 2012: Article ID 92
Ceng LC, Hu HY, Wong MM: Strong and weak convergence theorems for generalized mixed equilibrium problem with perturbation and fixed point problem of infinitely many nonexpansive mappings. Taiwan. J. Math. 2011,15(3):1341–1367.
Ceng LC, Al-Homidan S: Algorithms of common solutions for generalized mixed equilibria, variational inclusions, and constrained convex minimization. Abstr. Appl. Anal. 2014., 2014: Article ID 132053
Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042
Ceng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022
Colao V, Marino G, Xu HK: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 2008, 344: 340–352. 10.1016/j.jmaa.2008.02.041
Ceng LC, Petrusel A, Yao JC: Iterative approaches to solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. J. Optim. Theory Appl. 2009, 143: 37–58. 10.1007/s10957-009-9549-9
Rockafellar RT: Monotone operators and the proximal point algorithms. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
Huang NJ: A new completely general class of variational inclusions with noncompact valued mappings. Comput. Math. Appl. 1998,35(10):9–14. 10.1016/S0898-1221(98)00067-4
Zeng LC, Guu SM, Yao JC: Characterization of H -monotone operators with applications to variational inclusions. Comput. Math. Appl. 2005,50(3–4):329–337. 10.1016/j.camwa.2005.06.001
Ceng LC, Guu SM, Yao JC: Iterative approximation of solutions for a class of completely generalized set-valued quasi-variational inclusions. Comput. Math. Appl. 2008, 56: 978–987. 10.1016/j.camwa.2008.01.026
Zhang SS, Lee HWJ, Chan CK: Algorithms of common solutions for quasi-variational inclusions and fixed point problems. Appl. Math. Mech. 2008, 29: 571–581. 10.1007/s10483-008-0502-y
Ceng LC, Guu SM, Yao JC: Iterative algorithm for finding approximate solutions of mixed quasi-variational-like inclusions. Comput. Math. Appl. 2008, 56: 942–952. 10.1016/j.camwa.2008.01.024
Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004,53(5–6):475–504. 10.1080/02331930412331327157
Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346. 10.1016/j.jmaa.2006.06.055
Goebel K, Kirk WA: Topics on Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003,119(1):185–201.
Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. (2) 2002,66(1):240–256. 10.1112/S0024610702003332
Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Groningen; 1976.
Yamada I: The hybrid steepest-descent method for the variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Batnariu D, Censor Y, Reich S. North-Holland, Amsterdam; 2001:473–504.
Baillon JB, Haddad G: Quelques propriétés des opérateurs angle-bornés et n -cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664
Acknowledgements
LC Ceng was partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and PhD Program Foundation of Ministry of Education of China (20123127110002). YC Liou was supported in part by the Grants NSC 101-2628-E-230-001-MY3 and NSC 103-2923-E-037-001-MY3. CF Wen was partially supported by the Grant NSC 103-2923-E-037-001-MY3.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ceng, LC., Liou, YC. & Wen, CF. Extragradient method for convex minimization problem. J Inequal Appl 2014, 444 (2014). https://doi.org/10.1186/1029-242X-2014-444