Abstract
In this paper, we propose and analyze some relaxed and hybrid viscosity iterative algorithms for finding a common element of the solution set Ξ of a general system of variational inequalities, the solution set Γ of a split feasibility problem and the fixed point set $\operatorname{Fix}(S)$ of a strictly pseudocontractive mapping S in the setting of infinite-dimensional Hilbert spaces. We prove that the sequences generated by the proposed algorithms converge strongly to an element of $\operatorname{Fix}(S)\cap\Xi\cap\Gamma$ under mild conditions.
AMS Subject Classification: 49J40, 47H05, 47H19.
1 Introduction
Let ℋ be a real Hilbert space with the inner product $\langle\cdot,\cdot\rangle$ and the norm $\|\cdot\|$. Let C be a nonempty closed convex subset of ℋ. The (nearest point or metric) projection of ℋ onto C is denoted by $P_C$. Let $S:C\to C$ be a self-mapping on C. We denote by $\operatorname{Fix}(S)$ the set of fixed points of S and by $\mathbf{R}$ the set of all real numbers. For a given nonlinear operator $A:C\to\mathcal{H}$, we consider the variational inequality problem (VIP) of finding $x^*\in C$ such that
$$\langle Ax^*, x-x^*\rangle \ge 0 \quad\text{for all } x\in C. \tag{1.1}$$
The solution set of VIP (1.1) is denoted by $\operatorname{VI}(C,A)$. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving and equilibrium problems. It is now well known that variational inequalities are equivalent to fixed point problems, the origin of which can be traced back to Lions and Stampacchia [1]. This alternative formulation has been used to suggest and analyze a projection iterative method for solving variational inequalities under the conditions that the involved operator must be strongly monotone and Lipschitz continuous. Related to the variational inequalities, we have the problem of finding fixed points of nonexpansive mappings or strict pseudo-contraction mappings, which is of current interest in functional analysis. Several authors have considered a unified approach to solve variational inequality problems and fixed point problems; see, for example, [2–11] and the references therein.
For finding an element of when C is closed and convex, S is nonexpansive and A is α-inverse strongly monotone, Takahashi and Toyoda [12] introduced the following Mann-type iterative algorithm:
where is the metric projection of ℋ onto C, , is a sequence in and is a sequence in . They showed that if , then the sequence converges weakly to some . Nadezhkina and Takahashi [13] and Zeng and Yao [9] proposed extragradient methods motivated by Korpelevich [14] for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality problem. Further, these iterative methods were extended in [15] to develop a new iterative method for finding elements in .
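For intuition, a Mann-type scheme of the form $x_{n+1}=\alpha_nx_n+(1-\alpha_n)Sx_n$ can be illustrated numerically. Below is a minimal sketch, assuming the simplest setting $C=\mathcal{H}=\mathbb{R}$ (so that $P_C=I$) and taking the nonexpansive mapping $S=\cos$; all concrete choices here are ours for illustration, not from [12]:

```python
import math

def mann(S, x0, alpha, n_iter=200):
    """Mann iteration: x_{n+1} = alpha_n * x_n + (1 - alpha_n) * S(x_n)."""
    x = x0
    for n in range(n_iter):
        a = alpha(n)
        x = a * x + (1 - a) * S(x)
    return x

# S = cos is nonexpansive on R (|cos'| = |sin| <= 1); its unique fixed
# point is the Dottie number, approximately 0.739085.
x_star = mann(math.cos, x0=5.0, alpha=lambda n: 0.5)
```

The iterates converge to the unique fixed point of $\cos$, consistent with the weak (here, trivially strong) convergence guaranteed in this setting.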
Let be two mappings. Recently, Ceng, Wang and Yao [5] introduced and considered the problem of finding such that
which is called a general system of variational inequalities (GSVI), where $\mu_1>0$ and $\mu_2>0$ are two constants. The set of solutions of problem (1.3) is denoted by Ξ. In particular, if $F_1=F_2$, then problem (1.3) reduces to the new system of variational inequalities (NSVI) introduced and studied by Verma [16]. Further, if additionally $x^*=y^*$, then the NSVI reduces to VIP (1.1).
Recently, Ceng, Wang and Yao [5] transformed problem (1.3) into a fixed point problem in the following way.
Lemma 1.1 (see [5])
For given $\bar{x},\bar{y}\in C$, $(\bar{x},\bar{y})$ is a solution of problem (1.3) if and only if $\bar{x}$ is a fixed point of the mapping $G:C\to C$ defined by
$$G(x) = P_C\bigl[P_C(x-\mu_2F_2x) - \mu_1F_1P_C(x-\mu_2F_2x)\bigr] \quad\text{for all } x\in C,$$
where $\bar{y} = P_C(\bar{x}-\mu_2F_2\bar{x})$.
In particular, if the mapping $F_i$ is $\zeta_i$-inverse strongly monotone for $i=1,2$, then the mapping G is nonexpansive provided $\mu_i\in(0,2\zeta_i)$ for $i=1,2$.
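Lemma 1.1 reduces GSVI (1.3) to a fixed point problem for a composition of projected steps. A minimal numerical sketch, assuming the composed-projection form $G(x)=P_C\bigl[(I-\mu_1F_1)P_C(I-\mu_2F_2)x\bigr]$ from [5], with illustrative one-dimensional choices of C, $F_1$, $F_2$, $\mu_1$, $\mu_2$ (all concrete values are ours, not from the paper):

```python
def proj_C(x):                       # projection onto C = [0, 1]
    return min(1.0, max(0.0, x))

F1 = lambda x: 2.0 * x - 1.0         # monotone affine maps on R
F2 = lambda x: x - 0.5
mu1, mu2 = 0.3, 0.5                  # small step sizes keep G nonexpansive

def G(x):
    y = proj_C(x - mu2 * F2(x))      # inner projected step
    return proj_C(y - mu1 * F1(y))   # outer projected step

x = 0.9
for _ in range(100):                 # Picard iteration on G
    x = G(x)                         # converges to a fixed point of G
```

Here the iteration settles at $x=0.5$, where both $F_1$ and $F_2$ vanish, so the limit solves the toy system.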
Utilizing Lemma 1.1, they introduced and studied a relaxed extragradient method for solving GSVI (1.3). Throughout this paper, the set of fixed points of the mapping G is denoted by Ξ. Based on the extragradient method and viscosity approximation method, Yao et al. [8] proposed and analyzed a relaxed extragradient method for finding a common solution of GSVI (1.3) and a fixed point problem of a strictly pseudo-contractive mapping .
Theorem YLK (see [[8], Theorem 3.2])
Let C be a nonempty bounded closed convex subset of a real Hilbert space ℋ. Let the mapping be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . Let be a ρ-contraction with . For given arbitrarily, let the sequences , and be generated iteratively by
where for , and , , , are four sequences in such that
(i) and for all ;
(ii) and ;
(iii) and ;
(iv) .
Then the sequence generated by (1.4) converges strongly to and is a solution of GSVI (1.3), where .
Subsequently, Ceng, Guu and Yao [17] further presented and analyzed an iterative scheme for finding a common element of the solution set of VIP (1.1), the solution set of GSVI (1.3) and the fixed point set of a strictly pseudo-contractive mapping .
Theorem CGY (see [[17], Theorem 3.1])
Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let be α-inverse strongly monotone and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . Let be a ρ-contraction with . For given arbitrarily, let the sequences , and be generated iteratively by
where for , and such that
(i) and for all ;
(ii) and ;
(iii) and ;
(iv) ;
(v) and .
Then the sequence generated by (1.5) converges strongly to and is a solution of GSVI (1.3), where .
On the other hand, let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively. The split feasibility problem (SFP) is to find a point $x^*$ with the property
$$x^*\in C \quad\text{and}\quad Ax^*\in Q, \tag{1.6}$$
where $A\in B(\mathcal{H}_1,\mathcal{H}_2)$ and $B(\mathcal{H}_1,\mathcal{H}_2)$ denotes the family of all bounded linear operators from $\mathcal{H}_1$ to $\mathcal{H}_2$.
In 1994, the SFP was first introduced by Censor and Elfving [18], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrievals and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, e.g., [19] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy; see, e.g., [20–22] and the references therein. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, e.g., [19–29] and the references therein. A special case of the SFP is the following convex constrained linear inverse problem [30] of finding an element x such that
$$x\in C \quad\text{and}\quad Ax=b. \tag{1.7}$$
It has been extensively investigated in the literature using the projected Landweber iterative method [31]. Comparatively, the SFP has received much less attention so far due to the complexity resulting from the set Q. Therefore, whether various versions of the projected Landweber iterative method [31] can be extended to solve the SFP remains an interesting open topic. For example, it is not clear whether the dual approach to (1.7) of [32] can be extended to the SFP. The original algorithm given in [18] involves the computation of the inverse (assuming the existence of the inverse of A) and thus has not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [19, 24] which is found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method [33]. The CQ algorithm only involves the computation of the projections and onto the sets C and Q, respectively, and is therefore implementable in the case where and have closed-form expressions, for example, C and Q are closed balls or half-spaces. However, it remains a challenge how to implement the CQ algorithm in the case where the projections and/or fail to have closed-form expressions, though theoretically we can prove the (weak) convergence of the algorithm.
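To make the preceding discussion concrete, here is a minimal sketch of Byrne's CQ iteration $x_{n+1}=P_C\bigl(x_n-\gamma A^*(I-P_Q)Ax_n\bigr)$ with $0<\gamma<2/\|A\|^2$, in a toy finite-dimensional instance where C and Q are boxes so that both projections have closed forms (the matrix and boxes are illustrative assumptions):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lo_C, hi_C = np.zeros(2), np.ones(2)                     # C = [0, 1]^2
lo_Q, hi_Q = np.array([1.0, 2.0]), np.array([3.0, 4.0])  # Q = [1,3] x [2,4]

P_C = lambda x: np.clip(x, lo_C, hi_C)                   # closed-form projections
P_Q = lambda y: np.clip(y, lo_Q, hi_Q)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2                  # in (0, 2 / ||A||^2)
x = np.zeros(2)
for _ in range(2000):
    y = A @ x
    x = P_C(x - gamma * A.T @ (y - P_Q(y)))              # gradient-projection step

residual = np.linalg.norm(A @ x - P_Q(A @ x))            # 0 at a solution of the SFP
```

The limit lies in C with $Ax$ (numerically) in Q, illustrating why the CQ algorithm is a gradient-projection method for $f(x)=\frac{1}{2}\|Ax-P_QAx\|^2$.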
Very recently, Xu [23] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also established the strong convergence result, which shows that the minimum-norm solution can be obtained.
Furthermore, Korpelevich [14] introduced the so-called extragradient method for finding a solution of a saddle point problem. She proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.
Throughout this paper, assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let $f:\mathcal{H}_1\to\mathbb{R}$ be a continuous differentiable function. The minimization problem
$$\min_{x\in C} f(x) := \frac{1}{2}\|Ax - P_QAx\|^2 \tag{1.8}$$
is ill-posed. Therefore, Xu [23] considered the following Tikhonov regularization problem:
$$\min_{x\in C} f_\alpha(x) := \frac{1}{2}\|Ax - P_QAx\|^2 + \frac{1}{2}\alpha\|x\|^2, \tag{1.9}$$
where $\alpha>0$ is the regularization parameter. The regularized minimization (1.9) has a unique solution, which is denoted by $x_\alpha$. The following results are easy to prove.
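To see the role of the regularization, consider the special unconstrained case $C=\mathcal{H}_1$ and $Q=\{b\}$, where (1.9) reduces to the classical Tikhonov problem $\min_x \frac{1}{2}\|Ax-b\|^2+\frac{1}{2}\alpha\|x\|^2$ with closed-form solution $x_\alpha=(A^{\mathsf T}A+\alpha I)^{-1}A^{\mathsf T}b$; as $\alpha\downarrow 0$, $x_\alpha$ tends to the minimum-norm solution $A^+b$. A sketch under these assumptions (the concrete matrix is illustrative):

```python
import numpy as np

# Rank-deficient A makes the least-squares problem ill-posed:
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1
b = np.array([1.0, 2.0])             # consistent: b lies in range(A)

def x_alpha(alpha):
    """Unique minimizer of 0.5 * ||Ax - b||^2 + 0.5 * alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

x_min_norm = np.linalg.pinv(A) @ b   # minimum-norm least-squares solution
gap = np.linalg.norm(x_alpha(1e-8) - x_min_norm)
```

The gap shrinks with α, a finite-dimensional instance of the behavior recorded in Proposition 1.2(iii) below.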
Proposition 1.1 (see [[34], Proposition 3.1])
Given $x^*\in\mathcal{H}_1$, the following statements are equivalent:
(i) $x^*$ solves the SFP;
(ii) $x^*$ solves the fixed point equation
$$P_C(I-\lambda\nabla f)x^* = x^*,$$
where $\lambda>0$, $\nabla f = A^*(I-P_Q)A$ and $A^*$ is the adjoint of A;
(iii) $x^*$ solves the variational inequality problem (VIP) of finding $x^*\in C$ such that
$$\langle\nabla f(x^*), x-x^*\rangle \ge 0 \quad\text{for all } x\in C. \tag{1.10}$$
It is clear from Proposition 1.1 that
$$\Gamma = \operatorname{Fix}\bigl(P_C(I-\lambda\nabla f)\bigr) = \operatorname{VI}(C,\nabla f)$$
for all $\lambda>0$, where $\operatorname{Fix}(P_C(I-\lambda\nabla f))$ and $\operatorname{VI}(C,\nabla f)$ denote the set of fixed points of $P_C(I-\lambda\nabla f)$ and the solution set of VIP (1.10), respectively.
Proposition 1.2 (see [34])
The following statements hold:
(i) the gradient
$$\nabla f_\alpha = \nabla f + \alpha I = A^*(I-P_Q)A + \alpha I$$
is $(\alpha+\|A\|^2)$-Lipschitz continuous and α-strongly monotone;
(ii) the mapping $P_C(I-\lambda\nabla f_\alpha)$ is a contraction with coefficient
$$1-\tfrac{1}{2}\lambda\alpha,$$
where $0<\lambda\le\frac{\alpha}{(\alpha+\|A\|^2)^2}$;
(iii) if the SFP is consistent, then the strong $\lim_{\alpha\to 0}x_\alpha$ exists and is the minimum-norm solution of the SFP.
Very recently, by combining the regularization method and the extragradient method due to Nadezhkina and Takahashi [13], Ceng, Ansari and Yao [34] proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of $\operatorname{Fix}(S)\cap\Gamma$, where $S:C\to C$ is a nonexpansive mapping.
Theorem CAY (see [[34], Theorem 3.1])
Let be a nonexpansive mapping such that . Let and be the sequences in C generated by the following extragradient algorithm:
where , for some and for some . Then both the sequences and converge weakly to an element .
Motivated and inspired by the research going on in this area, we propose and analyze some relaxed and hybrid viscosity iterative algorithms for finding a common element of the solution set Ξ of GSVI (1.3), the solution set Γ of SFP (1.6) and the fixed point set $\operatorname{Fix}(S)$ of a strictly pseudocontractive mapping $S:C\to C$. These iterative algorithms are based on the regularization method, the viscosity approximation method, the relaxed method in [8] and the hybrid method in [10]. Furthermore, it is proven that the sequences generated by the proposed algorithms converge strongly to an element of $\operatorname{Fix}(S)\cap\Xi\cap\Gamma$ under mild conditions.
Observe that both [[23], Theorem 5.7] and [[34], Theorem 3.1] are weak convergence results for solving the SFP, and that our problem of finding an element of $\operatorname{Fix}(S)\cap\Xi\cap\Gamma$ is more general than the corresponding ones in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively. Hence our strong convergence results are of particular interest. It is worth emphasizing that, since our relaxed and hybrid viscosity iterative algorithms involve a ρ-contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S and several parameter sequences, they are more flexible, more advantageous and more subtle than the corresponding ones in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively. Furthermore, relaxed extragradient iterative scheme (1.4) and hybrid extragradient iterative scheme (1.5) are extended to develop our relaxed viscosity iterative algorithms and hybrid viscosity iterative algorithms, respectively. In our strong convergence results, the relaxed and hybrid viscosity iterative algorithms drop the requirement of boundedness for the domain in which various mappings are defined; see, e.g., Yao et al. [[8], Theorem 3.2]. Therefore, our results modify, supplement, extend and improve [[23], Theorem 5.7], [[34], Theorem 3.1], [[17], Theorem 3.1] and [[8], Theorem 3.2] to a great extent.
2 Preliminaries
Let ℋ be a real Hilbert space, whose inner product and norm are denoted by and , respectively. Let K be a nonempty, closed and convex subset of ℋ. Now, we present some known results and definitions which will be used in the sequel.
The metric (or nearest point) projection from ℋ onto K is the mapping $P_K:\mathcal{H}\to K$ which assigns to each point $x\in\mathcal{H}$ the unique point $P_Kx\in K$ satisfying the property
$$\|x-P_Kx\| = \inf_{y\in K}\|x-y\|.$$
The following properties of projections are useful and pertinent to our purpose.
Proposition 2.1 (see [35])
For given $x\in\mathcal{H}$ and $z\in K$:
(i) $z=P_Kx$ if and only if $\langle x-z, y-z\rangle\le 0$ for all $y\in K$;
(ii) $z=P_Kx$ if and only if $\|x-z\|^2\le\|x-y\|^2-\|y-z\|^2$ for all $y\in K$;
(iii) $\langle P_Kx-P_Ky, x-y\rangle\ge\|P_Kx-P_Ky\|^2$ for all $x,y\in\mathcal{H}$, which hence implies that $P_K$ is nonexpansive and monotone.
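The standard projection characterizations in Proposition 2.1, $\langle x-P_Kx, y-P_Kx\rangle\le 0$ for $y\in K$ and the firm nonexpansiveness $\langle P_Kx-P_Ky, x-y\rangle\ge\|P_Kx-P_Ky\|^2$, are easy to verify numerically. A quick sanity check using the projection onto the closed unit ball (an illustrative choice of K):

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto the closed unit ball in R^n."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

rng = np.random.default_rng(0)
ok_i, ok_iii = True, True
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    # (i): <x - P x, z - P x> <= 0 for every z in K; take z = P y in K.
    ok_i = ok_i and np.dot(x - px, py - px) <= 1e-12
    # (iii): <P x - P y, x - y> >= ||P x - P y||^2 (firm nonexpansiveness).
    ok_iii = ok_iii and np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
```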
Definition 2.1 A mapping $T:\mathcal{H}\to\mathcal{H}$ is said to be
(a) nonexpansive if
$$\|Tx-Ty\| \le \|x-y\| \quad\text{for all } x,y\in\mathcal{H};$$
(b) firmly nonexpansive if $2T-I$ is nonexpansive, or equivalently,
$$\langle x-y, Tx-Ty\rangle \ge \|Tx-Ty\|^2 \quad\text{for all } x,y\in\mathcal{H};$$
alternatively, T is firmly nonexpansive if and only if T can be expressed as
$$T = \tfrac{1}{2}(I+S),$$
where $S:\mathcal{H}\to\mathcal{H}$ is nonexpansive; projections are firmly nonexpansive.
Definition 2.2 Let T be a nonlinear operator with domain $D(T)\subseteq\mathcal{H}$ and range $R(T)\subseteq\mathcal{H}$.
(a) T is said to be monotone if
$$\langle x-y, Tx-Ty\rangle \ge 0 \quad\text{for all } x,y\in D(T);$$
(b) given a number $\beta>0$, T is said to be β-strongly monotone if
$$\langle x-y, Tx-Ty\rangle \ge \beta\|x-y\|^2 \quad\text{for all } x,y\in D(T);$$
(c) given a number $\nu>0$, T is said to be ν-inverse strongly monotone (ν-ism) if
$$\langle x-y, Tx-Ty\rangle \ge \nu\|Tx-Ty\|^2 \quad\text{for all } x,y\in D(T).$$
It can be easily seen that if S is nonexpansive, then $I-S$ is monotone. It is also easy to see that a projection $P_K$ is 1-ism.
Inverse strongly monotone (also referred to as co-coercive) operators have been applied widely to solving practical problems in various fields, for instance, in traffic assignment problems; see, e.g., [36, 37].
Definition 2.3 A mapping $T:\mathcal{H}\to\mathcal{H}$ is said to be an averaged mapping if it can be written as an average of the identity I and a nonexpansive mapping, that is,
$$T = (1-\alpha)I + \alpha S,$$
where $\alpha\in(0,1)$ and $S:\mathcal{H}\to\mathcal{H}$ is nonexpansive. More precisely, when the last equality holds, we say that T is α-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are $\frac{1}{2}$-averaged maps.
Proposition 2.2 (see [24])
Let $T:\mathcal{H}\to\mathcal{H}$ be a given mapping.
(i) T is nonexpansive if and only if the complement $I-T$ is $\frac{1}{2}$-ism.
(ii) If T is ν-ism, then for $\gamma>0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.
(iii) T is averaged if and only if the complement $I-T$ is ν-ism for some $\nu>\frac{1}{2}$. Indeed, for $\alpha\in(0,1)$, T is α-averaged if and only if $I-T$ is $\frac{1}{2\alpha}$-ism.
Proposition 2.3 (see [24, 38])
Let $S,T,V:\mathcal{H}\to\mathcal{H}$ be given operators.
(i) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$ and if S is averaged and V is nonexpansive, then T is averaged.
(ii) T is firmly nonexpansive if and only if the complement $I-T$ is firmly nonexpansive.
(iii) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$ and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1\circ\cdots\circ T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1,\alpha_2\in(0,1)$, then the composite $T_1\circ T_2$ is α-averaged, where $\alpha=\alpha_1+\alpha_2-\alpha_1\alpha_2$.
(v) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^N \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1\circ\cdots\circ T_N).$$
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping T, that is, $\operatorname{Fix}(T)=\{x\in\mathcal{H}: Tx=x\}$.
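Proposition 2.3(iv)-(v) can be visualized with two projections in the plane: each is firmly nonexpansive, hence $\frac{1}{2}$-averaged, so their composite is $\frac{3}{4}$-averaged and its fixed point set is the intersection of the two sets (the concrete lines below are an illustrative choice of ours):

```python
import numpy as np

P1 = lambda v: np.array([v[0], 0.0])             # projection onto the x-axis
P2 = lambda v: np.full(2, (v[0] + v[1]) / 2.0)   # projection onto the line y = x

v = np.array([3.0, -1.0])
for _ in range(60):
    v = P1(P2(v))   # composite of two 1/2-averaged maps is 3/4-averaged

# Fix(P1 o P2) = Fix(P1) intersected with Fix(P2), i.e. the origin here.
```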
It is clear that, in a real Hilbert space ℋ, $S:C\to C$ is k-strictly pseudo-contractive if and only if the following inequality holds:
$$\langle Sx-Sy, x-y\rangle \le \|x-y\|^2 - \frac{1-k}{2}\|(I-S)x-(I-S)y\|^2 \quad\text{for all } x,y\in C.$$
This immediately implies that if S is a k-strictly pseudo-contractive mapping, then $I-S$ is $\frac{1-k}{2}$-inverse strongly monotone; for further details, we refer to [39] and the references therein. It is well known that the class of strict pseudo-contractions strictly includes the class of nonexpansive mappings.
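For a concrete illustration of the last implication, take $Sx=-2x$ on $\mathbb{R}$: S is k-strictly pseudo-contractive with $k=\frac{1}{3}$, and $I-S=3I$ is indeed $\frac{1-k}{2}=\frac{1}{3}$-inverse strongly monotone (this example is ours, not from the paper):

```python
k = 1.0 / 3.0
S = lambda x: -2.0 * x               # k-strictly pseudo-contractive on R, k = 1/3
T = lambda x: x - S(x)               # T = I - S = 3 * I

pairs = [(0.7, -1.3), (2.0, 0.5), (-3.1, 4.2)]
# Defining inequality of a k-strict pseudo-contraction:
spc_ok = all(
    (S(x) - S(y)) ** 2 <= (x - y) ** 2 + k * (T(x) - T(y)) ** 2 + 1e-9
    for x, y in pairs
)
# I - S is (1 - k) / 2 = 1/3-inverse strongly monotone:
ism_ok = all(
    (x - y) * (T(x) - T(y)) >= ((1 - k) / 2) * (T(x) - T(y)) ** 2 - 1e-9
    for x, y in pairs
)
```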
In order to prove the main results of this paper, the following lemmas will be required.
Lemma 2.1 (see [40])
Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space X and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$. Suppose $x_{n+1}=\beta_nx_n+(1-\beta_n)z_n$ for all integers $n\ge 0$ and $\limsup_{n\to\infty}(\|z_{n+1}-z_n\|-\|x_{n+1}-x_n\|)\le 0$. Then $\lim_{n\to\infty}\|z_n-x_n\|=0$.
Lemma 2.2 (see [[39], Proposition 2.1])
Let C be a nonempty closed convex subset of a real Hilbert space ℋ and let $S:C\to C$ be a mapping.
(i) If S is a k-strict pseudo-contractive mapping, then S satisfies the Lipschitz condition
$$\|Sx-Sy\| \le \frac{1+k}{1-k}\|x-y\| \quad\text{for all } x,y\in C.$$
(ii) If S is a k-strict pseudo-contractive mapping, then the mapping $I-S$ is semiclosed at 0, that is, if $\{x_n\}$ is a sequence in C such that $x_n\to\tilde{x}$ weakly and $(I-S)x_n\to 0$ strongly, then $(I-S)\tilde{x}=0$.
(iii) If S is a k-(quasi-)strict pseudo-contraction, then the fixed point set $\operatorname{Fix}(S)$ of S is closed and convex, so that the projection $P_{\operatorname{Fix}(S)}$ is well defined.
The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.
Lemma 2.3 (see [35])
Let $\{s_n\}$ be a sequence of nonnegative real numbers satisfying the property
$$s_{n+1} \le (1-\alpha_n)s_n + \alpha_n\beta_n + \gamma_n, \quad n\ge 0,$$
where $\{\alpha_n\}\subseteq[0,1]$, $\{\beta_n\}$ and $\{\gamma_n\}$ are such that
(i) $\sum_{n=0}^{\infty}\alpha_n=\infty$;
(ii) either $\limsup_{n\to\infty}\beta_n\le 0$ or $\sum_{n=0}^{\infty}\alpha_n|\beta_n|<\infty$;
(iii) $\gamma_n\ge 0$ for all $n\ge 0$ and $\sum_{n=0}^{\infty}\gamma_n<\infty$.
Then $\lim_{n\to\infty}s_n=0$.
Lemma 2.4 (see [8])
Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let $S:C\to C$ be a k-strictly pseudo-contractive mapping. Let γ and δ be two nonnegative real numbers such that $(\gamma+\delta)k\le\gamma$. Then
$$\|\gamma(x-y)+\delta(Sx-Sy)\| \le (\gamma+\delta)\|x-y\| \quad\text{for all } x,y\in C.$$
The following lemma is an immediate consequence of the inner product.
Lemma 2.5 In a real Hilbert space ℋ, the following inequality holds:
$$\|x+y\|^2 \le \|x\|^2 + 2\langle y, x+y\rangle \quad\text{for all } x,y\in\mathcal{H}.$$
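The inequality of Lemma 2.5, $\|x+y\|^2\le\|x\|^2+2\langle y,x+y\rangle$, follows from expanding $\|x+y\|^2=\|x\|^2+2\langle x,y\rangle+\|y\|^2$ and noting $\langle y,x+y\rangle=\langle x,y\rangle+\|y\|^2$, so the difference of the two sides is exactly $\|y\|^2\ge 0$. A quick numerical sanity check (vectors are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lemma_ok = True
for _ in range(100):
    x, y = rng.normal(size=4), rng.normal(size=4)
    lhs = np.dot(x + y, x + y)                    # ||x + y||^2
    rhs = np.dot(x, x) + 2.0 * np.dot(y, x + y)   # ||x||^2 + 2 <y, x + y>
    lemma_ok = lemma_ok and lhs <= rhs + 1e-9     # rhs - lhs = ||y||^2 >= 0
```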
Let K be a nonempty closed convex subset of a real Hilbert space ℋ and let $F:K\to\mathcal{H}$ be a monotone mapping. The variational inequality problem (VIP) is to find $x\in K$ such that
$$\langle Fx, y-x\rangle \ge 0 \quad\text{for all } y\in K.$$
The solution set of the VIP is denoted by $\operatorname{VI}(K,F)$. It is well known that
$$x\in\operatorname{VI}(K,F) \quad\Longleftrightarrow\quad x = P_K(x-\lambda Fx) \quad\text{for any } \lambda>0.$$
A set-valued mapping $T:\mathcal{H}\to 2^{\mathcal{H}}$ is called monotone if for all $x,y\in\mathcal{H}$, $f\in Tx$ and $g\in Ty$ imply that $\langle x-y, f-g\rangle\ge 0$. A monotone set-valued mapping T is called maximal if its graph is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping T is maximal if and only if for $(x,f)\in\mathcal{H}\times\mathcal{H}$, $\langle x-y, f-g\rangle\ge 0$ for every $(y,g)\in\operatorname{graph}(T)$ implies that $f\in Tx$. Let $F:K\to\mathcal{H}$ be a monotone and Lipschitz continuous mapping and let $N_Kv$ be the normal cone to K at $v\in K$, that is,
$$N_Kv = \{w\in\mathcal{H}: \langle v-u, w\rangle \ge 0 \ \text{for all } u\in K\}.$$
Define
$$Tv = \begin{cases} Fv + N_Kv, & v\in K,\\ \emptyset, & v\notin K.\end{cases}$$
It is known that in this case the mapping T is maximal monotone and $0\in Tv$ if and only if $v\in\operatorname{VI}(K,F)$; for further details, we refer to [41] and the references therein.
3 Relaxed viscosity methods and their convergence criteria
In this section, we propose and analyze the following relaxed viscosity iterative algorithms for finding a common element of the solution set of GSVI (1.3), the solution set of SFP (1.6) and the fixed point set of a strictly pseudo-contractive mapping .
Algorithm 3.1 Let for , , and such that for all . For given arbitrarily, let , , be the sequences generated by the Mann-type viscosity iterative scheme with regularization
Algorithm 3.2 Let for , , and such that for all . For given arbitrarily, let , , be the sequences generated by the Mann-type viscosity iterative scheme with regularization
Next, we first give the strong convergence criteria of the sequences generated by Algorithm 3.1.
Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . Let be a ρ-contraction with . For given arbitrarily, let , , be the sequences generated by Algorithm 3.1, where for , , and such that
(i) ;
(ii) and for all ;
(iii) and ;
(iv) and ;
(v) ;
(vi) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
Proof First, taking into account , without loss of generality, we may assume that for some .
Now, let us show that is ζ-averaged for each , where
Indeed, it is easy to see that is -ism, that is,
Observe that
Hence, it follows that is -ism. Thus, is -ism according to Proposition 2.2(ii). By Proposition 2.2(iii), the complement is -averaged. Therefore, noting that is -averaged and utilizing Proposition 2.3(iv), we know that for each , is ζ-averaged with
This shows that is nonexpansive. Furthermore, for with , we have
Without loss of generality, we may assume that
Consequently, it follows that for each integer , is -averaged with
This immediately implies that is nonexpansive for all .
Next, we divide the remainder of the proof into several steps.
Step 1. is bounded.
Indeed, take arbitrarily. Then , for , and
For simplicity, we write
for each . Then for each . From Algorithm 3.1 it follows that
Utilizing Lemma 2.5, we also have
Since is -inverse strongly monotone for and for , we know that for all ,
Hence it follows from (3.1) and (3.3) that
Since for all , utilizing Lemma 2.4, we obtain from (3.4)
Now, we claim that
As a matter of fact, if , then it is clear that (3.6) is valid, that is,
Assume that (3.6) holds for , that is,
Then we conclude from (3.5) and (3.7) that
By induction, we conclude that (3.6) is valid. Hence, is bounded. Since , , and are Lipschitz continuous, it is easy to see that , , and are bounded, where for all .
Step 2. .
Indeed, define for all . It follows that
Since for all , utilizing Lemma 2.4, we have
Next, we estimate . Observe that
and
Combining (3.10) with (3.11), we get
which hence implies that
Hence it follows from (3.8), (3.9) and (3.13) that
Since , , and are bounded, it follows from conditions (i), (iii), (v) and (vi) that
Hence by Lemma 2.1, we get . Thus,
Step 3. and , where .
Indeed, utilizing Lemma 2.4 and the convexity of , we obtain from Algorithm 3.1 and (3.2)-(3.3) that
Therefore,
Since , , , and for some , it follows that
Step 4. .
Indeed, by firm nonexpansiveness of , we have
that is,
Moreover, using the argument technique similar to the above one, we derive
that is,
Utilizing (3.2), (3.15) and (3.16), we have
So, from Algorithm 3.1 and (3.17), it follows that
which hence implies that
Since , , , , , and , it follows from the boundedness of , , and that
Consequently, it immediately follows that . Also, since and , we have
Thus, we have
Note that
It hence follows that
Step 5. , where .
Indeed, since is bounded, there exists a subsequence of such that
Also, since ℋ is reflexive and is bounded, without loss of generality, we may assume that weakly for some . First, it is clear from Lemma 2.2 that . Now, let us show that . We note that
where is defined as that in Lemma 1.1. According to Lemma 2.2, we obtain . Further, let us show that . As a matter of fact, since , and , we deduce that weakly and weakly. Let
where . Then T is maximal monotone and if and only if ; see [41] for more details. Let . Then we have
and hence
So, we have
On the other hand, from
we have
and hence,
Therefore, from
we have
Hence, we get
Since T is maximal monotone, we have , and hence . Thus it is clear that . Therefore, . Consequently, in terms of Proposition 2.1(i), we obtain from (3.21) that
Step 6. .
Indeed, from (3.2) and (3.3) it follows that
Note that
Utilizing Lemmas 2.4 and 2.5, we obtain from (3.2) and the convexity of
Note that . It follows that . It is clear that
because and . In addition, note also that , and is bounded. Hence we get . Therefore, all conditions of Lemma 2.3 are satisfied. Consequently, we immediately deduce that as . This completes the proof. □
Corollary 3.1 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . For fixed and given arbitrarily, let the sequences , , be generated iteratively by
where for , , and such that
(i) ;
(ii) and for all ;
(iii) and ;
(iv) and ;
(v) ;
(vi) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
Next, utilizing Corollary 3.1, we give the following result.
Corollary 3.2 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be a nonexpansive mapping such that . For fixed and given arbitrarily, let the sequences , be generated iteratively by
where , and such that
(i) ;
(ii) and ;
(iii) ;
(iv) and .
Then the sequences , converge strongly to the same point if and only if .
Proof In Corollary 3.1, put and . Then , for all , and the iterative scheme (3.20) is equivalent to
This is equivalent to (3.21). Since S is a nonexpansive mapping, S must be a k-strictly pseudo-contractive mapping with . In this case, it is easy to see that conditions (i)-(vi) in Corollary 3.1 all are satisfied. Therefore, in terms of Corollary 3.1, we obtain the desired result. □
Now, we are in a position to present the strong convergence criteria of the sequences generated by Algorithm 3.2.
Theorem 3.2 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . Let be a ρ-contraction with . For given arbitrarily, let , , be the sequences generated by Algorithm 3.2, where for , , and such that
(i) ;
(ii) and for all ;
(iii) and ;
(iv) and ;
(v) ;
(vi) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
Proof First, taking into account , without loss of generality, we may assume that for some . Repeating the same argument as that in the proof of Theorem 3.1, we can show that is ζ-averaged for each , where . Further, repeating the same argument as that in the proof of Theorem 3.1, we can also show that for each integer , is -averaged with .
Next, we divide the remainder of the proof into several steps.
Step 1. is bounded.
Indeed, take arbitrarily. Then , for , and
For simplicity, we write
for each . Then for each . Utilizing the arguments similar to those of (3.1) and (3.2) in the proof of Theorem 3.1, from Algorithm 3.2 we can obtain
and
Since is -inverse strongly monotone and for , utilizing the argument similar to that of (3.3) in the proof of Theorem 3.1, we can obtain that for all ,
Hence it follows from (3.22) and (3.24) that
Since for all , by Lemma 2.4 we can readily see from (3.25) that
Repeating the same argument as that of (3.6) in the proof of Theorem 3.1, by induction we can prove that
Thus, is bounded. Since , , and are Lipschitz continuous, it is easy to see that , , , and are bounded, where for all .
Step 2. .
Indeed, define for all . Then, utilizing the arguments similar to those of (3.8)-(3.11) in the proof of Theorem 3.1, we can obtain that
(due to Lemma 2.4)
and
So, from (3.30) and (3.31), we get
Hence it follows from (3.28), (3.29) and (3.32) that
Since , , and are bounded, it follows from conditions (i), (iii), (v) and (vi) that
Hence by Lemma 2.1, we get . Thus,
Step 3. and , where .
Indeed, utilizing the arguments similar to those of Step 3 in the proof of Theorem 3.1, we can obtain the desired conclusion.
Step 4. .
Indeed, utilizing the arguments similar to those of Step 4 in the proof of Theorem 3.1, we can obtain the desired conclusion.
Step 5. , where .
Indeed, utilizing the arguments similar to those of Step 5 in the proof of Theorem 3.1, we can obtain the desired conclusion.
Step 6. .
Indeed, utilizing the arguments similar to those of Step 6 in the proof of Theorem 3.1, we can obtain the desired conclusion. This completes the proof. □
Corollary 3.3 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . For fixed and given arbitrarily, let the sequences , , be generated iteratively by
where for , , and such that
(i) ;
(ii) and for all ;
(iii) and ;
(iv) and ;
(v) ;
(vi) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
Remark 3.1 In Corollary 3.3, let S be a nonexpansive mapping and put and . Then , , , and the iterative scheme (3.34) is equivalent to
This is equivalent to (3.21) in Corollary 3.2. In this case, it is easy to see that Corollary 3.3 reduces to Corollary 3.2. Thus Corollary 3.3 includes Corollary 3.2 as a special case.
Remark 3.2 Our Theorems 3.1 and 3.2 improve, extend and develop [[23], Theorem 5.7], [[34], Theorem 3.1], [[8], Theorem 3.2] and [[17], Theorem 3.1] in the following aspects:
(i) Compared with the relaxed extragradient method in [[8], Theorem 3.2], our relaxed viscosity iterative algorithms (i.e., Algorithms 3.1 and 3.2) drop the requirement of boundedness for the domain in which various mappings are defined.
(ii) Because both [[23], Theorem 5.7] and [[34], Theorem 3.1] are weak convergence results for solving the SFP, our Theorems 3.1 and 3.2, as strong convergence results, are of particular value.
(iii) The problem of finding an element of $\operatorname{Fix}(S)\cap\Xi\cap\Gamma$ in our Theorems 3.1 and 3.2 is more general than the corresponding problems in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.
(iv) The hybrid extragradient method for finding an element of $\operatorname{Fix}(S)\cap\Xi\cap\operatorname{VI}(C,A)$ in [[17], Theorem 3.1] is extended to develop our relaxed viscosity iterative algorithms (i.e., Algorithms 3.1 and 3.2) for finding an element of $\operatorname{Fix}(S)\cap\Xi\cap\Gamma$.
(v) The proof of our results is very different from that of [[17], Theorem 3.1] because our argument technique depends to a great extent on Lemma 2.3, the restriction on the regularization parameter sequence and the properties of the averaged mappings.
(vi) Because Algorithms 3.1 and 3.2 involve a contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S and several parameter sequences, they are more flexible and more advantageous than the corresponding ones in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.
4 Hybrid viscosity methods and their convergence criteria
In this section, we propose and analyze the following hybrid viscosity iterative algorithms for finding a common element of the solution set of GSVI (1.3), the solution set of SFP (1.6) and the fixed point set of a strictly pseudo-contractive mapping .
Algorithm 4.1 Let for , , and such that and for all . For given arbitrarily, let , and be the sequences generated by
Algorithm 4.2 Let for , , and such that for all . For given arbitrarily, let , and be the sequences generated by
Next, we first give the strong convergence criteria of the sequences generated by Algorithm 4.1.
Theorem 4.1 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . Let be a ρ-contraction with . For given arbitrarily, let , , be the sequences generated by Algorithm 4.1, where for , , and such that
(i) ;
(ii) , and for all ;
(iii) and ;
(iv) and ;
(v) and ;
(vi) ;
(vii) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
Proof First, taking into account , without loss of generality, we may assume that for some . Repeating the same argument as that in the proof of Theorem 3.1, we can show that is ζ-averaged for each , where . Further, repeating the same argument as that in the proof of Theorem 3.1, we can also show that for each integer , is -averaged with .
Next, we divide the remainder of the proof into several steps.
Step 1. is bounded.
Indeed, take arbitrarily. Then , for , and
For simplicity, we write , ,
for each . Then for each . Utilizing the arguments similar to those of (3.1), (3.2) and (3.3) in the proof of Theorem 3.1, we deduce from Algorithm 4.1 that
and
Furthermore, repeating the same arguments as in (4.1) and (4.2), we can obtain that
and
Hence it follows from (4.1), (4.3) and (4.4) that
Since for all , utilizing Lemma 2.4, we obtain from (4.6)
By induction, we can derive
Hence, is bounded. Since , , and are Lipschitz continuous, it is easy to see that , , , and are bounded, where
Step 2. .
Indeed, define for all . It follows that
Since for all , utilizing Lemma 2.4, we have
Next, we estimate . Utilizing the arguments similar to those of (3.10) and (3.11) in the proof of Theorem 3.1, we obtain that
and
Thus,
This together with (4.12) implies that
Hence it follows from (4.10), (4.11) and (4.15) that
Since , , , and are bounded, it follows from conditions (i), (iii), (iv), (vi) and (vii) that
Hence by Lemma 2.1, we get . Thus,
Step 3. and , where .
Indeed, utilizing Lemma 2.4 and the convexity of , we obtain from Algorithm 4.1 and (4.2), (4.3), (4.5) that
Therefore,
Since , , , , and , it follows that
Step 4. .
Indeed, observe that
This together with implies that and hence . By firm nonexpansiveness of , we have
that is,
Moreover, using the argument technique similar to the above one, we derive
that is,
Utilizing (4.2), (4.5), (4.17) and (4.18), we have
Thus, from Algorithm 4.1 and (4.19), it follows that
which hence implies that
Since , , , , , , and , it follows from the boundedness of , , , and that
Consequently, it immediately follows that
Also, note that
This together with implies that
Since
it follows that
Step 5. , where .
Indeed, since is bounded, there exists a subsequence of such that
Also, since H is reflexive and is bounded, without loss of generality, we may assume that weakly for some . Taking into account that and as , we deduce that weakly and weakly.
First, it is clear from Lemma 2.2 and that . Now, let us show that . Note that
where is defined as in Lemma 1.1. According to Lemma 2.2, we get . Further, let us show that . As a matter of fact, define
where . Then T is maximal monotone and if and only if ; see [41] for more details. Utilizing the arguments similar to those of Step 5 in the proof of Theorem 3.1 and the relations
we can derive
Since T is maximal monotone, we have and hence . Thus it is clear that . Therefore, . Consequently, in terms of Proposition 2.1(i), we obtain from (4.21) that
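Step 5 identifies weak cluster points with VIP solutions through the maximal monotone operator built from the monotone map and the normal cone to C. In finite dimensions, membership in the solution set can be checked via the fixed-point characterization: x solves VI(C, A) if and only if x = P_C(x − λAx) for any λ > 0. A hedged sketch (the monotone affine operator and box C below are our illustrative choices):

```python
import numpy as np

M = np.array([[2.0, 1.0], [-1.0, 2.0]])  # monotone: symmetric part is positive definite
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q                  # monotone affine operator
P_C = lambda x: np.clip(x, 0.0, 1.0)     # C = [0, 1]^2

lam = 0.3
x = np.zeros(2)
for _ in range(2000):
    x = P_C(x - lam * A(x))              # projected-gradient fixed-point iteration

# x solves VI(C, A) iff x = P_C(x - lam * A(x))
assert np.allclose(x, P_C(x - lam * A(x)), atol=1e-8)
```

Here the unconstrained zero of A lies inside C, so the iteration converges to it; the final assertion is exactly the projection characterization of the VIP solution.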
Step 6. .
Indeed, observe that
Utilizing Lemmas 2.4 and 2.5, we obtain from (4.2), (4.3) and (4.5) and the convexity of that
Note that . It follows that . It is clear that
because and . In addition, note also that , and is bounded. Hence we get . Therefore, all conditions of Lemma 2.3 are satisfied. Consequently, we immediately deduce that as . In the meantime, taking into account that and as , we infer that
This completes the proof. □
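The viscosity mechanism driving the strong convergence just established — steps of the form x_{n+1} = α_n f(x_n) + (1 − α_n) S x_n with α_n → 0 and Σ α_n = ∞ — can be observed on a toy example. The sketch below (a planar rotation S with unique fixed point 0, and an ad hoc contraction f of our choosing) illustrates the general viscosity principle only; it is not an implementation of Algorithm 4.1:

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
# S: rotation about the origin -- nonexpansive, Fix(S) = {0}
S = lambda x: np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])
# f: a 1/2-contraction (illustrative choice)
f = lambda x: 0.5 * x + np.array([0.3, -0.3])

x = np.array([5.0, -3.0])
for n in range(1, 20001):
    a = 1.0 / n                      # alpha_n -> 0, sum alpha_n = infinity
    x = a * f(x) + (1 - a) * S(x)    # viscosity step
# x approaches the unique fixed point 0 of S
```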
Corollary 4.1 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . For fixed and given arbitrarily, let the sequences , , be generated iteratively by
where for , , and such that
(i) ;
(ii) , and for all ;
(iii) and ;
(iv) and ;
(v) and ;
(vi) ;
(vii) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
Next, utilizing Corollary 4.1, we give the following improvement and extension of the main result in [34] (i.e., [[34], Theorem 3.1]).
Corollary 4.2 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be a nonexpansive mapping such that . For fixed and given arbitrarily, let the sequences , be generated iteratively by
where , and such that
(i) ;
(ii) for all ;
(iii) and ;
(iv) and ;
(v) ;
(vi) and .
Then the sequences , converge strongly to the same point if and only if .
Proof In Corollary 4.1, put and . Then , , , and the iterative scheme (4.22) is equivalent to
This is equivalent to (4.23). Since S is a nonexpansive mapping, S must be a k-strictly pseudocontractive mapping with . In this case, it is easy to see that conditions (i)-(vii) in Corollary 4.1 are all satisfied. Therefore, in terms of Corollary 4.1, we obtain the desired result. □
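The reduction above uses the fact that a nonexpansive S is k-strictly pseudocontractive with k = 0, and that an averaged (Mann-type) step is what forces convergence: pure Picard iteration of a nonexpansive map may merely orbit. A small sketch (the rotation example is ours, not from the paper) contrasts the two:

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by 90 degrees: nonexpansive, Fix = {0}

x_picard = np.array([1.0, 0.0])
x_mann = np.array([1.0, 0.0])
for _ in range(200):
    x_picard = R @ x_picard                     # Picard: stays on the unit circle forever
    x_mann = 0.5 * x_mann + 0.5 * (R @ x_mann)  # Mann/averaged step: converges to 0
```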
Now, we are in a position to present the strong convergence criteria of the sequences generated by Algorithm 4.2.
Theorem 4.2 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . Let be a ρ-contraction with . For given arbitrarily, let the sequences , , be generated by Algorithm 4.2, where for , , and such that
(i) ;
(ii) and for all ;
(iii) and ;
(iv) and ;
(v) ;
(vi) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
Proof First, taking into account , without loss of generality, we may assume that for some . Repeating the same argument as that in the proof of Theorem 4.1, we can show that is ζ-averaged for each , where . Further, repeating the same argument as that in the proof of Theorem 4.1, we can also show that for each integer , is -averaged with .
Next, we divide the remainder of the proof into several steps.
Step 1. is bounded.
Indeed, take arbitrarily. Then , for , and
For simplicity, we write
for each . Then for each . Utilizing the arguments similar to those of (4.1) and (4.2) in the proof of Theorem 4.1, from Algorithm 4.2 we can obtain
and
Since is -inverse strongly monotone and for , utilizing the argument similar to that of (4.3) in the proof of Theorem 4.1, we can obtain that for all ,
Utilizing the argument similar to that of (4.4) and (4.5) in the proof of Theorem 4.1, from (4.12) we can obtain
and
Hence it follows from (4.24), (4.26) and (4.27) that
Since for all , by Lemma 2.4 we can readily see from (4.29) that
By induction, we can derive
Hence, is bounded. Since , , and are Lipschitz continuous, it is easy to see that , , , and are bounded, where for all .
Step 2. .
Indeed, define for all . It follows that
Since for all , utilizing Lemma 2.4, we have
Next, we estimate . Utilizing the arguments similar to those of (4.12), (4.13) and (4.14), we can obtain that
and hence
This together with Algorithm 4.2 implies that
Hence it follows from (4.32), (4.33) and (4.37) that
Since , , , and are bounded, it follows from conditions (i), (iii), (v) and (vi) that
Hence by Lemma 2.1, we get . Thus,
Step 3. and , where .
Indeed, utilizing Lemma 2.4 and the convexity of , we obtain from Algorithm 4.2 and (4.25), (4.26), (4.28) that
Therefore,
Since , , , and , it follows that
Step 4. .
Indeed, utilizing the Lipschitz continuity of , we have
This together with implies that and hence . Utilizing the arguments similar to those of (4.17) and (4.18) in the proof of Theorem 4.1, we get
and
Utilizing (4.28), (4.39) and (4.40), we have
Thus, utilizing Lemma 2.1, from Algorithm 4.2 and (4.41) it follows that
which hence implies that
Since , , , , , , and , it follows from the boundedness of , , , and that
Consequently, it immediately follows that
This together with implies that
Since
we have
Step 5. , where .
Indeed, since is bounded, there exists a subsequence of such that
Also, since H is reflexive and is bounded, without loss of generality, we may assume that weakly for some . First, it is clear from Lemma 2.2 and that . Now, let us show that . Note that
where is defined as that in Lemma 1.1. According to Lemma 2.2, we get . Further, let us show that . As a matter of fact, since , and , we deduce that weakly and weakly. Let
where . Then T is maximal monotone and if and only if ; see [41] for more details. Utilizing the arguments similar to those of Step 5 in the proof of Theorem 4.1 and the relations
we can derive
Since T is maximal monotone, we have and hence . Thus it is clear that . Therefore, . Consequently, in terms of Proposition 2.1(i), we obtain from (4.43) that
Step 6. .
Indeed, from (4.25), (4.26) and (4.28) it follows that
Utilizing the arguments similar to those of Step 6 in the proof of Theorem 4.1, we can infer that
and
It is easy to see that all conditions of Lemma 2.3 are satisfied. Consequently, we immediately deduce that as .
Finally, from and , it follows that and . This completes the proof. □
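Both proofs rest on averagedness: an operator T is ζ-averaged when T = (1 − ζ)I + ζN for some nonexpansive N, and the firmly nonexpansive operators (such as metric projections) are exactly the 1/2-averaged ones. As a hedged numerical check (the box projection and helper names are our choices), the reflection 2P − I of a projection should be nonexpansive:

```python
import numpy as np

P = lambda x: np.clip(x, -1.0, 1.0)  # projection onto [-1, 1]^n: firmly nonexpansive
N = lambda x: 2.0 * P(x) - x         # reflection; P = (I + N)/2 is 1/2-averaged iff N is nonexpansive

rng = np.random.default_rng(2)
for _ in range(100):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert np.linalg.norm(N(x) - N(y)) <= np.linalg.norm(x - y) + 1e-12
```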
Corollary 4.3 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a k-strictly pseudocontractive mapping such that . For fixed and given arbitrarily, let , , be the sequences generated iteratively by
where for , , and such that
(i) ;
(ii) and for all ;
(iii) and ;
(iv) and ;
(v) ;
(vi) and .
Then the sequences , , converge strongly to the same point if and only if . Furthermore, is a solution of GSVI (1.3), where .
In addition, utilizing Corollary 4.3, we derive the following result.
Corollary 4.4 Let C be a nonempty closed convex subset of a real Hilbert space . Let and be a nonexpansive mapping such that . For fixed and given arbitrarily, let , be the sequences generated iteratively by
where , and such that
(i) ;
(ii) and ;
(iii) ;
(iv) and .
Then the sequences , converge strongly to the same point if and only if .
Proof In Corollary 4.3, put and . Then , for all , and the iterative scheme (4.44) is equivalent to
This is equivalent to (4.45). Since S is a nonexpansive mapping, S must be a k-strictly pseudocontractive mapping with . In this case, it is easy to see that conditions (i)-(vi) in Corollary 4.3 are all satisfied. Therefore, in terms of Corollary 4.3, we obtain the desired result. □
Remark 4.1 Theorems 4.1 and 4.2 improve, extend and develop [[23], Theorem 5.7], [[34], Theorem 3.1], [[8], Theorem 3.2] and [[17], Theorem 3.1] in the following aspects:
(i) Compared with the relaxed extragradient iterative algorithm in [[8], Theorem 3.2], our hybrid viscosity iterative algorithms (i.e., Algorithms 4.1 and 4.2) remove the requirement that the domain C on which the various mappings are defined be bounded.
(ii) Both [[23], Theorem 5.7] and [[34], Theorem 3.1] are weak convergence results for solving the SFP, so our strong convergence theorems are of particular interest and value.
(iii) The problem of finding an element of in Theorems 4.1 and 4.2 is more general than the corresponding problems in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.
(iv) The hybrid extragradient method for finding an element of in [[17], Theorem 3.1] is extended to develop our hybrid viscosity iterative algorithms (i.e., Algorithms 4.1 and 4.2) for finding an element of .
(v) The proof of our results differs substantially from that of [[17], Theorem 3.1] because our argument depends to a great extent on Lemma 2.3, the restriction on the regularization parameter sequence and the properties of averaged mappings.
(vi) Since Algorithms 4.1 and 4.2 involve two inverse strongly monotone mappings and , a k-strictly pseudocontractive self-mapping S and several parameter sequences, they are more flexible and more subtle than the corresponding algorithms in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.
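For context on the split feasibility problem underlying these comparisons, the classical CQ algorithm [19] iterates x_{n+1} = P_C(x_n − γAᵀ(I − P_Q)Ax_n) with step size 0 < γ < 2/‖A‖². The sketch below (the box sets C and Q and the particular matrix A are our illustrative choices) shows the unregularized baseline that Algorithms 4.1 and 4.2 refine:

```python
import numpy as np

# Split feasibility problem: find x in C with A x in Q (illustrative data).
A = np.array([[1.0, 2.0], [0.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)   # C = [0, 1]^2
P_Q = lambda y: np.clip(y, 0.0, 1.5)   # Q = [0, 1.5]^2

gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step size in (0, 2 / ||A||^2)
x = np.array([5.0, -5.0])
for _ in range(500):
    y = A @ x
    # CQ step: gradient projection for (1/2)||(I - P_Q) A x||^2 over C
    x = P_C(x - gamma * A.T @ (y - P_Q(y)))
# on exit, x lies in C and A x lies in Q
```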
References
Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–512.
Bnouhachem A, Noor MA, Hao Z: Some new extragradient iterative methods for variational inequalities. Nonlinear Anal. 2009, 70: 1321–1329.
Ceng LC, Ansari QH, Yao JC: Viscosity approximation methods for generalized equilibrium problems and fixed point problems. J. Glob. Optim. 2009, 43: 487–502.
Ceng LC, Huang S: Modified extragradient methods for strict pseudo-contractions and monotone mappings. Taiwan. J. Math. 2009, 13(4):1197–1211.
Ceng LC, Wang CY, Yao JC: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67: 375–390.
Ceng LC, Yao JC: An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 2007, 190: 205–215.
Ceng LC, Yao JC: Relaxed viscosity approximation methods for fixed point problems and variational inequality problems. Nonlinear Anal. 2008, 69: 3299–3309.
Yao Y, Liou YC, Kang SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59: 3472–3480.
Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.
Nadezhkina N, Takahashi W: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 2006, 16(4):1230–1241.
Ceng LC, Lin YC, Petruşel A: Hybrid method for designing explicit hierarchical fixed point approach to monotone variational inequalities. Taiwan. J. Math. 2012, 16: 1531–1555.
Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428.
Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201.
Korpelevich GM: An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metod. 1976, 12: 747–756.
Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186(2):1551–1558.
Verma RU: On a new system of nonlinear variational inequalities and associated iterative algorithms. Math. Sci. Res. Hot-Line 1999, 3(8):65–68.
Ceng LC, Guu SM, Yao JC: Finding common solutions of a variational inequality, a general system of variational inequalities, and a fixed-point problem via a hybrid extragradient method. Fixed Point Theory Appl. 2011., 2011: Article ID 626159. doi:10.1155/2011/626159
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239.
Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453.
Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365.
Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084.
Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256.
Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.
Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665.
Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034.
Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266.
Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799.
Sezan MI, Stark H: Applications of convex projection theory to image recovery in tomography and related areas. In Image Recovery Theory and Applications. Edited by: Stark H. Academic Press, Orlando; 1987:415–462.
Eicke B: Iteration methods for convexly constrained ill-posed problems in Hilbert spaces. Numer. Funct. Anal. Optim. 1992, 13: 413–429.
Landweber L: An iterative formula for Fredholm integral equations of the first kind. Am. J. Math. 1951, 73: 615–624.
Potter LC, Arun KS: A dual approach to linear inverse problems with convex constraints. SIAM J. Control Optim. 1993, 31: 1080–1092.
Combettes PL, Wajs V: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4: 1168–1200.
Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64(4):633–642.
Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597.
Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159.
Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544.
Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6):475–504.
Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346.
Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88.
Acknowledgements
Dedicated to Professor Hari M Srivastava.
The first author was partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and Ph.D. Program Foundation of Ministry of Education of China (20123127110002). The second author was partially supported by the grant NSC 99-2115-M-037-002-MY3.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors take equal roles in deriving results and writing of this paper.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Ceng, LC., Yao, JC. Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint. Fixed Point Theory Appl 2013, 43 (2013). https://doi.org/10.1186/1687-1812-2013-43