Abstract
In this paper we introduce a multi-step implicit iterative scheme with regularization for finding a common solution of the minimization problem (MP) for a convex and continuously Fréchet differentiable functional and of the common fixed point problem of an infinite family of nonexpansive mappings in the setting of Hilbert spaces. The multi-step implicit iterative method with regularization is based on three well-known methods: the extragradient method, the approximate proximal method and the gradient projection algorithm with regularization. We derive a weak convergence theorem for the sequences generated by the proposed scheme. In addition, we establish a strong convergence result via an implicit hybrid method with regularization for solving these two problems; this method is based on the CQ method, the extragradient method and the gradient projection algorithm with regularization.
MSC:49J30, 47H09, 47J20.
1 Introduction
Let H be a real Hilbert space with the inner product ⟨·,·⟩ and the norm ‖·‖, let C be a nonempty closed convex subset of H and let P_C be the metric projection of H onto C. Let S : C → C be a self-mapping on C. We denote by Fix(S) the set of fixed points of S and by ℝ the set of all real numbers. A mapping A : C → H is called L-Lipschitz continuous if there exists a constant L > 0 such that ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ C. In particular, if L = 1, then A is called a nonexpansive mapping [1]; if L ∈ (0, 1), then A is called a contraction.
Let f : C → ℝ be a convex and continuously Fréchet differentiable functional. Consider the minimization problem (MP) of minimizing f over the constraint set C:

min_{x ∈ C} f(x). (1.1)

We denote by Γ the set of minimizers of MP (1.1), which is assumed to be nonempty.
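For intuition, the classical gradient projection algorithm, one of the building blocks used throughout this paper, can be sketched numerically. The following is an illustrative sketch only, not the paper's scheme; the objective, box constraint, starting point and step size are assumptions made for the example.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^n: componentwise clamping.
    return np.clip(x, lo, hi)

def gradient_projection(grad, x0, step, proj, iters=500):
    # Gradient projection algorithm: x_{k+1} = P_C(x_k - step * grad f(x_k)),
    # which converges for step in (0, 2/L) when grad f is L-Lipschitz.
    x = x0
    for _ in range(iters):
        x = proj(x - step * grad(x))
    return x

# Example objective f(x) = ||x - b||^2, so grad f(x) = 2(x - b) and L = 2.
b = np.array([1.5, -0.3, 0.4])
grad = lambda x: 2.0 * (x - b)
x_star = gradient_projection(grad, np.zeros(3), step=0.4, proj=proj_box)
# the constrained minimizer is the projection of b onto the box: (1.0, 0.0, 0.4)
```

Here the minimizer set Γ is a singleton; the schemes studied below handle the harder case where Γ is a whole set and a further fixed point constraint is imposed.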
On the other hand, consider the following variational inequality problem (VIP): find an x* ∈ C such that

⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C. (1.2)

The solution set of VIP (1.2) is denoted by VI(C, A).
We remark that VIP (1.2) was first discussed by Lions [2] and is now well known. There are many different approaches to solving VIP (1.2) in finite-dimensional and infinite-dimensional spaces, and this line of research continues to be intensively investigated. VIP (1.2) has many applications in computational mathematics, mathematical physics, operations research, mathematical economics, optimization theory, and other fields; see, e.g., [3–6] and the references therein.
Recently, motivated by the work of Takahashi and Zembayashi [7], Cholamjiak [8] introduced a new hybrid projection algorithm for finding a common element of the set of solutions of the equilibrium problem and the set of solutions of the variational inequality problem and the set of fixed points of relatively quasi-nonexpansive mappings in a Banach space. Here, the involved operator in [8] is an inverse-strongly monotone operator. Furthermore, Nadezhkina and Takahashi [9] introduced an iterative process for finding an element of and obtained a strong convergence theorem.
Theorem NT (see [[9], Theorem 3.1])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let A : C → H be a monotone and L-Lipschitz-continuous mapping and let S : C → C be a nonexpansive mapping such that Fix(S) ∩ VI(C, A) ≠ ∅. Let {x_n}, {y_n} and {z_n} be the sequences generated by
where {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/L) and {α_n} ⊂ [0, c] for some c ∈ [0, 1). Then the sequences {x_n}, {y_n} and {z_n} converge strongly to P_{Fix(S)∩VI(C,A)} x_0.
It is also worth noting that the work of Nadezhkina and Takahashi [9] introduced a new iterative method combining Korpelevich's extragradient method and the so-called CQ method. We note that Nadezhkina and Takahashi employed the monotonicity and Lipschitz continuity of A to define a maximal monotone operator T [10]. However, if the mapping A is pseudomonotone and Lipschitz continuous, then T is not necessarily a maximal monotone operator. To overcome this difficulty, Ceng et al. [11] suggested another iterative method. They established mild necessary and sufficient conditions under which the sequences generated by their proposed method converge weakly to some common solution of VIP (1.2) and the common fixed point problem of a finite family of nonexpansive mappings.
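Korpelevich's extragradient method mentioned above can be sketched numerically. The sketch below is illustrative, with an assumed skew-symmetric operator A(x) = Mx (monotone but not inverse-strongly monotone) and an assumed ball constraint; for such rotation-type operators the plain projection iteration x ← P_C(x − τAx) fails to converge, while the predictor-corrector extragradient step does.

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric, so A(x) = Mx is monotone
A = lambda x: M @ x                        # (but not inverse-strongly monotone)

def proj_ball(x, r=2.0):
    # Metric projection onto the closed ball of radius r centered at 0.
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def extragradient(A, x0, tau=0.5, iters=300):
    # Korpelevich's extragradient method: a predictor step at x_n followed
    # by a corrector step evaluated at the predictor y_n.
    x = x0
    for _ in range(iters):
        y = proj_ball(x - tau * A(x))      # predictor
        x = proj_ball(x - tau * A(y))      # corrector
    return x

x = extragradient(A, np.array([1.0, 1.0]))
# the unique solution of this VIP is x* = 0
```

The two projected steps per iteration are exactly what the L-Lipschitz continuity of A pays for: the corrector step damps the rotation that defeats the one-step method.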
Theorem CTY ([[11], Theorem 3.1])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be a pseudomonotone, k-Lipschitz-continuous and -sequentially-continuous mapping of C into H, and let be N nonexpansive mappings of C into itself such that . Let , , be the sequences generated by
for every , where , is an error sequence in H such that and the following conditions hold:
(i) , and ;
(ii) for some ;
(iii) for some .
Then the sequences , , converge weakly to the same element of if and only if , .
In this paper, we aim to find a common solution of the minimization problem (MP) for a convex and continuously Fréchet differentiable functional and of the common fixed point problem of an infinite family of nonexpansive mappings in the setting of Hilbert spaces. Motivated and inspired by the research going on in this area, we propose two iterative schemes for this purpose. One is a multi-step implicit iterative method with regularization, based on three well-known methods: the extragradient method, the approximate proximal method and the gradient projection algorithm with regularization. The other is an implicit hybrid method with regularization, based on the CQ method, the extragradient method and the gradient projection algorithm with regularization. Weak and strong convergence results for these two schemes are established, respectively. Recent results in this direction can be found, e.g., in [7–32].
2 Preliminaries
Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let C be a nonempty closed convex subset of H. We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x and x_n → x to indicate that {x_n} converges strongly to x. Moreover, we use ω_w(x_n) to denote the weak ω-limit set of the sequence {x_n}, i.e.,

ω_w(x_n) = {x ∈ H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n}}.

The metric (or nearest point) projection from H onto C is the mapping P_C : H → C which assigns to each point x ∈ H the unique point P_C x ∈ C satisfying the property

‖x − P_C x‖ = inf_{y ∈ C} ‖x − y‖.
Some important properties of projections are gathered in the following proposition.
Proposition 2.1 For given x ∈ H and z ∈ C:
(i) z = P_C x ⟺ ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C;
(ii) z = P_C x ⟺ ‖x − z‖² ≤ ‖x − y‖² − ‖y − z‖² for all y ∈ C;
(iii) ⟨P_C x − P_C y, x − y⟩ ≥ ‖P_C x − P_C y‖² for all x, y ∈ H.
Consequently, P_C is nonexpansive and monotone.
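The characterization in Proposition 2.1(i) and the resulting nonexpansiveness can be spot-checked numerically. The sketch below uses the closed unit ball as an assumed example of C, since its projection has a simple closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, r=1.0):
    # Metric projection onto the closed ball C = {z : ||z|| <= r}.
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

# Spot-check property (i), <x - P_C x, y - P_C x> <= 0 for all y in C,
# and the resulting nonexpansiveness ||P_C x - P_C y|| <= ||x - y||.
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    z = proj_ball(rng.normal(size=3))      # an arbitrary point of C
    assert (x - px) @ (z - px) <= 1e-10
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-10
```

Random sampling is of course no proof; the point is only to make the geometric meaning of (i) concrete: the vector x − P_C x makes an obtuse angle with every direction into C from P_C x.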
Definition 2.1 A mapping A : C → H is said to be:
(a) pseudomonotone if, for all x, y ∈ C, ⟨Ay, x − y⟩ ≥ 0 implies ⟨Ax, x − y⟩ ≥ 0;
(b) monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ C;
(c) η-strongly monotone if there exists a constant η > 0 such that ⟨Ax − Ay, x − y⟩ ≥ η‖x − y‖² for all x, y ∈ C;
(d) α-inverse-strongly monotone (α-ism) if there exists a constant α > 0 such that ⟨Ax − Ay, x − y⟩ ≥ α‖Ax − Ay‖² for all x, y ∈ C.
It is obvious that if A is α-inverse-strongly monotone, then A is monotone and (1/α)-Lipschitz continuous.
Recall that a mapping S : C → C is said to be nonexpansive [1] if ‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ C.
Denote by Fix(S) the set of fixed points of S; that is, Fix(S) = {x ∈ C : Sx = x}. It can easily be seen that if S : C → C is nonexpansive, then I − S is monotone. It is also easy to see that a projection P_C is 1-ism. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely to solving practical problems in various fields.
We need some facts and tools which are listed as lemmas below.
Lemma 2.1 Let X be a real inner product space. Then the following inequality holds:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩ for all x, y ∈ X.
Lemma 2.2 Let {x_n} be a bounded sequence in a reflexive Banach space X. If ω_w(x_n) = {x}, then x_n ⇀ x.
Lemma 2.3 Let A : C → H be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 2.1(i)) implies

u ∈ VI(C, A) ⟺ u = P_C(u − λAu) for every λ > 0.
Lemma 2.4 Let H be a real Hilbert space. Then the following hold:
(a) ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖² for all x, y ∈ H;
(b) ‖λx + μy‖² = λ‖x‖² + μ‖y‖² − λμ‖x − y‖² for all x, y ∈ H and λ, μ ∈ [0, 1] with λ + μ = 1;
(c) if {x_n} is a sequence in H such that x_n ⇀ x, it follows that

lim sup_{n→∞} ‖x_n − y‖² = lim sup_{n→∞} ‖x_n − x‖² + ‖x − y‖² for all y ∈ H.
Lemma 2.5 ([[33], Lemma 2.5])
Let H be a real Hilbert space. Given a nonempty closed convex subset C of H, points x, y, z ∈ H and a real number a ∈ ℝ, the set

{v ∈ C : ‖y − v‖² ≤ ‖x − v‖² + ⟨z, v⟩ + a}

is convex (and closed).
Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}_{n=1}^∞ be an infinite family of nonexpansive mappings of C into itself and let {λ_n}_{n=1}^∞ be a sequence in (0, 1]. For any n ≥ 1, define a mapping W_n of C into itself as follows:

U_{n,n+1} = I,
U_{n,k} = λ_k T_k U_{n,k+1} + (1 − λ_k) I, k = n, n − 1, …, 1,
W_n = U_{n,1}. (2.1)

Such a W_n is called a W-mapping generated by T_n, T_{n−1}, …, T_1 and λ_n, λ_{n−1}, …, λ_1. We need the following lemmas for proving our main results.
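The W-mapping recursion is easy to instantiate for a finite family. The sketch below is illustrative only: the two mappings (projections onto the coordinate axes, whose unique common fixed point is 0) and the weights are assumptions of the example.

```python
import numpy as np

def make_W(mappings, lambdas):
    # W-mapping generated by T_n, ..., T_1 and lambda_n, ..., lambda_1 via
    # U_{n,n+1} = I and U_{n,k} x = lam_k * T_k(U_{n,k+1} x) + (1 - lam_k) * x.
    def W(x):
        u = x                                               # U_{n,n+1} x
        for T, lam in zip(mappings[::-1], lambdas[::-1]):   # k = n, ..., 1
            u = lam * T(u) + (1 - lam) * x
        return u
    return W

# Two nonexpansive mappings whose only common fixed point is 0:
T1 = lambda x: np.array([x[0], 0.0])    # projection onto the x-axis
T2 = lambda x: np.array([0.0, x[1]])    # projection onto the y-axis
W = make_W([T1, T2], [0.5, 0.5])
# Here W(x) = (0.75 x_1, 0.5 x_2): nonexpansive, with Fix(W) = {0}
```

Note that each U_{n,k} averages T_k(U_{n,k+1} x) with the original input x, not with the intermediate iterate; this is what makes W_n nonexpansive and Fix(W_n) capture the common fixed points, as Lemmas 2.6 and 2.7 formalize in the infinite-family case.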
Lemma 2.6 [34]
Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let T_1, T_2, … be nonexpansive mappings of C into itself such that ⋂_{n=1}^∞ Fix(T_n) is nonempty, and let λ_1, λ_2, … be real numbers such that 0 < λ_n ≤ b < 1 for all n ≥ 1. Then, for every x ∈ C and k ≥ 1, the limit lim_{n→∞} U_{n,k} x exists.
Using Lemma 2.6, one can define a mapping W of C into itself as follows:

Wx = lim_{n→∞} W_n x = lim_{n→∞} U_{n,1} x for every x ∈ C.
Lemma 2.7 ([34])
Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let T_1, T_2, … be nonexpansive mappings of C into itself such that ⋂_{n=1}^∞ Fix(T_n) is nonempty, and let λ_1, λ_2, … be real numbers such that 0 < λ_n ≤ b < 1 for all n ≥ 1. Then Fix(W) = ⋂_{n=1}^∞ Fix(T_n).
Lemma 2.8 ([35])
If {x_n} is a bounded sequence in C, then lim_{n→∞} ‖W x_n − W_n x_n‖ = 0.
Lemma 2.9 ([[36], Demiclosedness principle])
Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let S : C → C be a nonexpansive mapping such that Fix(S) ≠ ∅. Then I − S is demiclosed on C, i.e., if x_n ⇀ x and x_n − Sx_n → 0, then x ∈ Fix(S).
To prove a weak convergence theorem by the multi-step implicit iterative method with regularization for MP (1.1) and infinitely many nonexpansive mappings {T_n}_{n=1}^∞, we need the following lemma due to Osilike et al. [37].
Lemma 2.10 ([[37], p.80])
Let {a_n}, {b_n} and {δ_n} be sequences of nonnegative real numbers satisfying the inequality

a_{n+1} ≤ (1 + δ_n) a_n + b_n, n ≥ 1.

If Σ_{n=1}^∞ δ_n < ∞ and Σ_{n=1}^∞ b_n < ∞, then lim_{n→∞} a_n exists. If, in addition, {a_n} has a subsequence which converges to zero, then lim_{n→∞} a_n = 0.
Corollary 2.1 ([[38], p.303])
Let {a_n} and {b_n} be two sequences of nonnegative real numbers satisfying the inequality

a_{n+1} ≤ a_n + b_n, n ≥ 1.

If Σ_{n=1}^∞ b_n converges, then lim_{n→∞} a_n exists.
Lemma 2.11 ([36])
Every Hilbert space H has the Kadec-Klee property; that is, given a sequence {x_n} ⊂ H and a point x ∈ H, the conditions x_n ⇀ x and ‖x_n‖ → ‖x‖ together imply x_n → x.
It is well known that every Hilbert space H satisfies Opial's condition [39], i.e., for any sequence {x_n} with x_n ⇀ x, the inequality

lim inf_{n→∞} ‖x_n − x‖ < lim inf_{n→∞} ‖x_n − y‖

holds for every y ∈ H with y ≠ x.
A set-valued mapping T : H → 2^H is called monotone if, for all x, y ∈ H, f ∈ Tx and g ∈ Ty imply ⟨x − y, f − g⟩ ≥ 0. A monotone mapping T is maximal if its graph is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping T is maximal if and only if, for (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for all (y, g) in the graph of T implies f ∈ Tx. Let A : C → H be a monotone, L-Lipschitz continuous mapping and let N_C v be the normal cone to C at v ∈ C, i.e., N_C v = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ C}. Define

Tv = Av + N_C v if v ∈ C, and Tv = ∅ if v ∉ C.

It is known that in this case T is maximal monotone, and 0 ∈ Tv if and only if v ∈ VI(C, A); see [10].
3 Weak convergence theorem
In this section, we derive weak convergence criteria for a multi-step implicit iterative method with regularization for finding a common solution of the common fixed point problem of infinitely many nonexpansive mappings and MP (1.1) for a convex functional with an L-Lipschitz continuous gradient ∇f. This implicit iterative method with regularization is based on the extragradient method, approximate proximal method and gradient projection algorithm (GPA) with regularization.
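Before the theorem, the role of the regularization can be sketched numerically. The example below is illustrative and not the scheme (3.1): the objective, the unconstrained setting (so the projection is the identity) and the fixed regularization weight are assumptions chosen so that the regularized minimizer has a closed form.

```python
import numpy as np

# f(x) = (x_1 + x_2 - 1)^2 has a whole line of minimizers; the regularized
# functional f(x) + (alpha/2) * ||x||^2 has a unique minimizer, which tends
# to the minimum-norm minimizer (0.5, 0.5) of f as alpha -> 0.
grad_f = lambda x: 2.0 * (x.sum() - 1.0) * np.ones(2)   # grad f, with L = 4

def regularized_gpa(alpha, x0, step=0.2, iters=2000):
    # Gradient projection with Tikhonov regularization; here the constraint
    # set is all of R^2, so the projection step is the identity.
    x = x0
    for _ in range(iters):
        x = x - step * (grad_f(x) + alpha * x)
    return x

x = regularized_gpa(alpha=0.01, x0=np.zeros(2))
# exact regularized minimizer: each coordinate equals 2 / (4 + alpha)
```

This is the point of the regularization term throughout this section: it restores strong convexity that f itself lacks, at the price of a small, controllable bias that vanishes as the regularization parameter is driven to zero.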
Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let be a W-mapping defined by (2.1), let be an L-Lipschitz continuous mapping with , and let be an infinite family of nonexpansive mappings of C into itself such that is nonempty and bounded. Let , and be the sequences generated by
where an error sequence with , and with
Assume the following conditions hold:
(i) and ;
(ii) for some ;
(iii) and ;
(iv) , and for some ;
(v) and satisfy .
Then the sequences , and generated by (3.1) converge weakly to some .
Remark 3.1 In the proof of Theorem 3.1 below, we show that every is closed and convex and that , .
Now, we observe that for all and all ,
Hence, by the Banach contraction principle, we know that for each there exists a unique such that
Also, observe that for all and all ,
So, by the Banach contraction principle, we know that for each there exists a unique such that
In addition, observe that for all and all ,
Thus, by the Banach contraction principle, we know that for each there exists a unique such that
Therefore, the sequences , and generated by (3.1) are well defined.
Next, we divide our detailed proof into several propositions. For this purpose, in the sequel, we assume that all our assumptions are satisfied.
Proposition 3.1 , .
Proof First we note that every set is closed and convex. As a matter of fact, since the defining inequality in is equivalent to the inequality
by Lemma 2.5 we also have that is convex and closed for every . Also, note that the L-Lipschitz continuity of the gradient ∇f implies that ∇f is (1/L)-ism [31], that is,
Observe that
Hence, it follows that is -ism. Now, take arbitrarily. Taking into account , , we deduce that
and
which implies that
Meantime, we also have
which hence implies that
Thus, from (3.5) and (3.6) it follows that
which together with implies that
Furthermore, from Proposition 2.1(ii), the monotonicity of ∇f, and , we have
Since and -Lipschitz continuous, by Proposition 2.1(i) we have
So, we have
Therefore, from (3.7) and (3.8), together with and , by Lemma 2.4(b) we have
which implies that . Therefore,
and this completes the proof. □
Proposition 3.2 The sequences , , and are all bounded.
Proof Since and for all , from the monotonicity of ∇f, we have
which hence implies that
Note that is equivalent to the inequality
Taking in the last inequality, we deduce
which implies that
From (3.10) and (3.11) we get
which can be rewritten as
It follows that
Hence,
By induction, we can obtain
Since , we immediately conclude that the sequence is bounded. Thus, from , , , (3.5), (3.6) and (3.9) it follows that , and are bounded. This completes the proof. □
Proposition 3.3 The following statements hold:
(i) exists for each ;
(ii) ;
(iii) .
Proof For each , we get from (3.13)
Since the conditions and lead to , by Corollary 2.1, we know that exists for each . Note that by Lemma 2.4(a) we have from (3.12)
Since and as , from the existence of and the boundedness of , we obtain that
Since , it follows that
which implies that
and hence
For each , from (3.9) we have
Since , , , and as , from the boundedness of , , and we conclude that
Utilizing arguments similar to those in (3.8),
Hence,
It follows that
Since , , , and as , from the boundedness of , , and we deduce that
Taking into consideration that
we also have
Since , we have
Then
and hence . Observe also that
So, we have . On the other hand, since is bounded, from Lemma 2.8, we have . Therefore, we obtain
and this completes the proof. □
Proposition 3.4 .
Proof By Proposition 3.3(iii), we know that
Take arbitrarily. Then there exists a subsequence of such that ; hence, we have . Note that from Lemma 2.9 it follows that is demiclosed at zero. Thus, . Now, let us show . Since and , we have and . Let
where is the normal cone to C at . We have already mentioned that in this case the mapping T is maximal monotone, and if and only if ; see [10] for more details. Let be the graph of T and let . Then we have and hence . So, we have for all . On the other hand, from and , we have
and hence
Therefore, from for all and , we have
Since (due to the Lipschitz continuity of ∇f), (due to ), and , we obtain as . Since T is maximal monotone, we have and hence . Clearly, . Consequently, . This implies that . □
Finally, according to Propositions 3.1-3.4, we prove the remainder of Theorem 3.1.
Proof It is sufficient to show that is a single-point set because and as . Since , let us take two points arbitrarily. Then there exist two subsequences and of such that and , respectively. By Proposition 3.4, we know that . Meanwhile, according to Proposition 3.3(i), both of the corresponding limits and exist. Let us show that . Assume that . From the Opial condition [39] it follows that
This leads to a contradiction. Thus, we must have . This implies that is a singleton. Without loss of generality, we may write . Consequently, by Lemma 2.2 we obtain that . Since and as , we also have that and . This completes the proof. □
Remark 3.2 Our Theorem 3.1 improves, extends, supplements and develops Nadezhkina and Takahashi [[9], Theorem 3.1] and Ceng et al. [[11], Theorem 3.1] in the following aspects.
(i) The combination of the problem of finding an element of in [[9], Theorem 3.1] and that of finding an element of in [[11], Theorem 3.1] is extended to the problem of finding an element of in our Theorem 3.1.
(ii) Our Theorem 3.1 drops the condition , required in [[11], Theorem 3.1].
(iii) The iterative scheme of [[11], Theorem 3.1] is extended, by virtue of the iterative scheme of [[9], Theorem 3.1], to the iterative scheme (3.1) of our Theorem 3.1. The scheme (3.1) is more flexible than that of [[11], Theorem 3.1] because it involves several parameter sequences , , , , , and .
(iv) The iterative scheme (3.1) of our Theorem 3.1 is very different from those of [[9], Theorem 3.1] and [[11], Theorem 3.1] because the final iteration steps of computing in [[9], Theorem 3.1] and [[11], Theorem 3.1] are replaced by the implicit iteration step in scheme (3.1).
(v) The argument of our Theorem 3.1 combines the techniques of [[9], Theorem 3.1] and [[11], Theorem 3.1]. Because the problem of finding an element of in our Theorem 3.1 involves a countable family of nonexpansive mappings , the proof of our Theorem 3.1 depends on the properties of the W-mapping (see Lemmas 2.6-2.8 of Section 2). Therefore, the proof of our Theorem 3.1 is very different from those of [[9], Theorem 3.1] and [[11], Theorem 3.1].
4 Strong convergence theorem
In this section, we prove a strong convergence theorem via an implicit hybrid method with regularization for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of MP (1.1) for a convex functional with an L-Lipschitz continuous gradient ∇f. This implicit hybrid method with regularization is based on the CQ method, extragradient method and gradient projection algorithm (GPA) with regularization.
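The CQ machinery behind the strong convergence can be illustrated in its simplest classical form. The sketch below is an assumed example, not the scheme (4.1): it runs the plain CQ method for a single nonexpansive mapping in R², where the sets C_n and Q_n reduce to halfspaces whose intersection admits an explicit projection; the mapping S, the starting point and the averaging weight are choices made for the example.

```python
import numpy as np

def proj_halfspace(p, a, b):
    # Projection onto {z : <a, z> <= b}; a == 0 encodes the whole space.
    s = a @ p - b
    return p if s <= 1e-12 else p - (s / (a @ a)) * a

def proj_two_halfspaces(p, a1, b1, a2, b2):
    # Exact projection onto the intersection of two halfspaces (assumed
    # nonempty with non-antiparallel normals): try the cheap candidates,
    # else project onto the intersection of the two boundary hyperplanes.
    for q in (p, proj_halfspace(p, a1, b1), proj_halfspace(p, a2, b2)):
        if a1 @ q <= b1 + 1e-9 and a2 @ q <= b2 + 1e-9:
            return q
    A = np.vstack([a1, a2])
    lam = np.linalg.solve(A @ A.T, np.array([b1, b2]) - A @ p)
    return p + A.T @ lam

def cq_fixed_point(S, x0, alpha=0.5, iters=60):
    # CQ (hybrid) method: x_{n+1} = P_{C_n ∩ Q_n} x_0, where
    # C_n = {z : ||y_n - z|| <= ||x_n - z||} and
    # Q_n = {z : <z - x_n, x_0 - x_n> <= 0} are halfspaces.
    x = x0
    for _ in range(iters):
        y = alpha * x + (1 - alpha) * S(x)
        a1, b1 = 2 * (x - y), x @ x - y @ y
        a2, b2 = x0 - x, (x0 - x) @ x
        x = proj_two_halfspaces(x0, a1, b1, a2, b2)
    return x

S = lambda x: np.array([x[0], 0.0])       # projection onto the x-axis
x = cq_fixed_point(S, np.array([1.0, 2.0]))
# converges strongly to P_{Fix(S)} x_0 = (1, 0)
```

Projecting the anchor x_0 (rather than the current iterate) onto the shrinking outer approximations C_n ∩ Q_n is what yields strong rather than merely weak convergence, and this is the mechanism that the implicit hybrid scheme (4.1) inherits.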
Theorem 4.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let be a W-mapping defined by (2.1), let be an L-Lipschitz continuous mapping with , and let be an infinite family of nonexpansive mappings of C into itself such that is nonempty and bounded. Let , and be the sequences generated by
where and
Assume the following conditions hold:
(i) and ;
(ii) for some ;
(iii) and ;
(iv) , and for some .
Then the sequences , and generated by (4.1) converge strongly to the same point .
Proof Utilizing the condition , , and repeating the same arguments as in Remark 3.1, we can see that , and are well defined. Note that the L-Lipschitz continuity of the gradient ∇f implies that ∇f is (1/L)-ism [31], that is,
Repeating the same arguments as in the proof of Proposition 3.1, we know that is -ism. It is clear that is closed and is closed and convex for every . As the defining inequality in is equivalent to the inequality
by Lemma 2.5 we also have that is convex for every . As , we have for all , and by Proposition 2.1(i) we get .
We divide the rest of the proof into several steps.
Step 1. , and are bounded.
Indeed, take arbitrarily. Taking into account , and repeating the same arguments as in (3.5) and (3.6), we deduce that
and
Thus, from (4.2) and (4.3) it follows that
which together with implies that
Repeating the same arguments as in (3.8) and (3.9), we can deduce that
and
for every and hence . So,
Next, let us show by mathematical induction that is well defined and for every . For we have . Hence we obtain . Suppose that is given and for some integer . Since is nonempty, is a nonempty closed convex subset of C. So, there exists a unique element such that . It is also obvious that there holds for every . Since , we have for every and hence . Therefore, we obtain .
Step 2. and .
Indeed, let . From and , we have
for every . Therefore, is bounded. From (4.2), (4.3) and (4.5) we also obtain that , and are bounded. Since and , we have
for every . Therefore, there exists . Since and , using Proposition 2.1(ii), we have
for every . This implies that
Since , we have
which implies that
Hence we get
for every . From and , we conclude that as .
Step 3. and .
Indeed, since , , , and as , from the boundedness of , , and we conclude from (4.5) that
Utilizing arguments similar to those in (4.4),
Hence,
Since , , , and as , from the boundedness of , , and we deduce that
Taking into consideration that
we also have
Since , we get
and hence . Observe also that
So, we have . On the other hand, since is bounded, from Lemma 2.8, we have . Therefore, we obtain
Step 4. .
Indeed, repeating the same arguments as in the proof of Proposition 3.4, we can derive the desired conclusion.
Step 5. , and converge strongly to .
Indeed, take arbitrarily. Then according to Step 4. Moreover, there exists a subsequence of such that . Hence, from , , and (4.6), we have
So, we obtain
From we have due to the Kadec-Klee property of Hilbert spaces [36]. So, it is clear that . Since and , we have
As , we obtain by and . Hence we have . This implies that . It is easy to see that and . This completes the proof. □
Remark 4.1 Our Theorem 4.1 improves, extends, supplements and develops Nadezhkina and Takahashi [[9], Theorem 3.1] and Ceng et al. [[11], Theorem 3.1] in the following aspects.
(i) The combination of the problem of finding an element of in [[9], Theorem 3.1] and that of finding an element of in [[11], Theorem 3.1] is extended to the problem of finding an element of in our Theorem 4.1.
(ii) Our Theorem 4.1 is a strong convergence result and drops the condition , required in [[11], Theorem 3.1].
(iii) The iterative scheme of [[9], Theorem 3.1] is extended, by virtue of the iterative scheme of [[11], Theorem 3.1], to the iterative scheme (4.1) of our Theorem 4.1. The scheme (4.1) is more flexible than that of [[9], Theorem 3.1] because it involves several parameter sequences , , and .
(iv) The iterative scheme (4.1) of our Theorem 4.1 is very different from those of [[9], Theorem 3.1] and [[11], Theorem 3.1] because the two explicit iteration steps of computing and in [[9], Theorem 3.1] and [[11], Theorem 3.1] are replaced by three iteration steps involving two implicit steps in the iterative scheme (4.1) of our Theorem 4.1.
(v) The argument of our Theorem 4.1 combines the techniques of [[9], Theorem 3.1] and [[11], Theorem 3.1]. Because the problem of finding an element of in our Theorem 4.1 involves a countable family of nonexpansive mappings , the proof of our Theorem 4.1 depends on the properties of the W-mapping (see Lemmas 2.6-2.8 of Section 2). Therefore, the proof of our Theorem 4.1 is very different from those of [[9], Theorem 3.1] and [[11], Theorem 3.1].
References
Browder FE, Petryshyn WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20: 197–228. 10.1016/0022-247X(67)90085-6
Lions JL: Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod, Paris; 1969.
Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.
Takahashi W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama; 2000.
Oden JT: Quantitative Methods on Nonlinear Mechanics. Prentice Hall, Englewood Cliffs; 1986.
Zeidler E: Nonlinear Functional Analysis and Its Applications. Springer, New York; 1985.
Takahashi W, Zembayashi K: Strong convergence theorem by a new hybrid method for equilibrium problems and relatively nonexpansive mappings. Fixed Point Theory Appl. 2009., 2009: Article ID 719360. doi:10.1155/2009/719360
Cholamjiak P: A hybrid iterative scheme for equilibrium problems, variational inequality problems, and fixed point problems in Banach spaces. Fixed Point Theory Appl. 2008., 2008: Article ID 528476
Nadezhkina N, Takahashi W: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 2006, 16: 1230–1241. 10.1137/050624315
Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
Ceng LC, Teboulle M, Yao JC: Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed point problems. J. Optim. Theory Appl. 2010, 146: 19–31. 10.1007/s10957-010-9650-0
Liu F, Nashed MZ: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal. 1998, 6: 313–344. 10.1023/A:1008643727926
Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560
Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z
Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10(5):1293–1303.
Ceng LC, Ansari QH, Yao JC: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218: 1112–1123. 10.1016/j.amc.2011.01.061
Iiduka H, Takahashi W: Strong convergence theorem by a hybrid method for nonlinear mappings of nonexpansive and monotone type and applications. Adv. Nonlinear Var. Inequal. 2006, 9: 1–10.
Yao Y, Noor MA: On viscosity iterative methods for variational inequalities. J. Math. Anal. Appl. 2007, 325: 776–787. 10.1016/j.jmaa.2006.01.091
Ceng LC, Yao JC: An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 2007, 190: 205–215. 10.1016/j.amc.2007.01.021
Ceng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.
Ceng LC, Al-Homidan S, Ansari QH, Yao JC: An iterative scheme for equilibrium problems and fixed point problems of strict pseudo-contraction mappings. J. Comput. Appl. Math. 2009, 223: 967–974. 10.1016/j.cam.2008.03.032
Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186: 1551–1558. 10.1016/j.amc.2006.08.062
Ceng LC, Wong NC, Yao JC: Fixed point solutions of variational inequalities for a finite family of asymptotically nonexpansive mappings without common fixed point assumption. Comput. Math. Appl. 2008, 56: 2312–2322. 10.1016/j.camwa.2008.05.002
Yao Y, Postolache M: Iterative methods for pseudomonotone variational inequalities and fixed-point problems. J. Optim. Theory Appl. 2012, 155: 273–287. 10.1007/s10957-012-0055-0
Ceng LC, Yao JC: A viscosity relaxed-extragradient method for monotone variational inequalities and fixed point problems. J. Math. Inequal. 2007, 1(2):225–241.
Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z
Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018
Ceng LC, Yao JC: Approximate proximal methods in vector optimization. Eur. J. Oper. Res. 2007, 183: 1–19. 10.1016/j.ejor.2006.09.070
Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64(4):633–642. 10.1016/j.camwa.2011.12.074
Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75(4):2116–2125. 10.1016/j.na.2011.10.012
Ceng LC, Wong NC, Yao JC: Strong and weak convergence theorems for an infinite family of nonexpansive mappings and applications. J. Fixed Point Theory Appl. 2012., 2012: Article ID 117
Ceng LC, Hadjisavvas N, Wong NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46: 635–646. 10.1007/s10898-009-9454-7
Sahu DR, Xu HK, Yao JC: Asymptotically strict pseudocontractive mappings in the intermediate sense. Nonlinear Anal. 2009, 70: 3502–3511. 10.1016/j.na.2008.07.007
Qin X, Cho YJ, Kang SM: An iterative method for an infinite family of nonexpansive mappings in Hilbert spaces. Bull. Malays. Math. Soc. 2009, 32(2):161–171.
Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007., 2007: Article ID 64363
Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
Osilike MO, Aniagbosor SC, Akuchu BG: Fixed points of asymptotically demicontractive mappings in arbitrary Banach space. Panam. Math. J. 2002, 12: 77–88.
Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178: 301–308. 10.1006/jmaa.1993.1309
Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0
Acknowledgements
Dedicated to Professor Hari M Srivastava.
The first author was partially supported by the National Science Foundation of China (11071169), the Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and the Ph.D. Program Foundation of the Ministry of Education of China (20123127110002). The last author was partially supported by a grant from the NSC (101-2115-M-037-001).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contribute equally to this work. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Ceng, LC., Ansari, Q.H. & Wen, CF. Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J Inequal Appl 2013, 240 (2013). https://doi.org/10.1186/1029-242X-2013-240