Relaxed Extragradient Methods with Regularization for General System of Variational Inequalities with Constraints of Split Feasibility and Fixed Point Problems
Abstract
We suggest and analyze relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of a general system of variational inequalities, the solution set of a split feasibility problem, and the fixed point set of a strictly pseudocontractive mapping defined on a real Hilbert space. Here the relaxed extragradient methods with regularization are based on the well-known successive approximation method, extragradient method, viscosity approximation method, regularization method, and so on. Strong convergence of the proposed algorithms under some mild conditions is established. Our results represent the supplementation, improvement, extension, and development of the corresponding results in the very recent literature.
1. Introduction
Let ℋ be a real Hilbert space, whose inner product and norm are denoted by 〈·, ·〉 and ∥·∥, respectively. Let K be a nonempty closed convex subset of ℋ. The (nearest point or metric) projection from ℋ onto K is denoted by PK. We write xn ⇀ x to indicate that the sequence {xn} converges weakly to x and xn → x to indicate that the sequence {xn} converges strongly to x.
Very recently, Xu [6] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also established a strong convergence result which shows that the minimum-norm solution can be obtained.
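For orientation, the CQ iteration referred to above can be written, in the notation used throughout this paper (PC and PQ denote the metric projections onto the sets C and Q of the SFP (1), and A* is the adjoint of A), as the standard projected gradient step
xn+1 = PC(xn − λA*(I − PQ)Axn),  0 < λ < 2/∥A∥2,
and it is Mann (averaged) variants of this mapping that are analyzed in [6].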
Furthermore, Korpelevič [17] introduced the so-called extragradient method for finding a solution of a saddle point problem and proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.
Proposition 1 (see [18], Proposition 3.1). Given x* ∈ ℋ1, the following statements are equivalent:
- (i)
x* solves the SFP;
- (ii)
x* solves the fixed point equation
x* = PC(I − λ∇f)x*,
where λ > 0, ∇f = A*(I − PQ)A, and A* is the adjoint of A;
- (iii)
x* solves the variational inequality problem (VIP) of finding x* ∈ C such that
〈∇f(x*), x − x*〉 ≥ 0,  ∀x ∈ C.
Proposition 2 (see [18]). The following statements hold:
- (i)
the gradient
∇fα = ∇f + αI = A*(I − PQ)A + αI
is (α + ∥A∥2)-Lipschitz continuous and α-strongly monotone;
- (ii)
the mapping PC(I − λ∇fα) is a contraction with coefficient
1 − (λ/2)(2α − λ(α + ∥A∥2)2) ∈ (0,1),
where 0 < λ < 2α/(α + ∥A∥2)2;
- (iii)
if the SFP is consistent, then the strong limit lim α→0 xα exists and is the minimum-norm solution of the SFP, where xα denotes the unique fixed point of the contraction PC(I − λ∇fα).
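The following Python fragment is a minimal numerical sketch of the objects appearing in Proposition 2: it evaluates the regularized gradient ∇fα = αI + A*(I − PQ)A and iterates the mapping PC(I − λ∇fα). It is not the algorithm of [18]; purely for illustration, C and Q are taken to be Euclidean balls (so both projections have closed forms), and the matrix A and all numerical parameters below are assumptions.

```python
import numpy as np

def proj_ball(x, radius):
    """Metric projection onto the closed ball of the given radius centered at 0."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def grad_f_alpha(x, A, alpha, proj_Q):
    """Regularized gradient (alpha*I + A*(I - P_Q)A) evaluated at x."""
    Ax = A @ x
    return alpha * x + A.T @ (Ax - proj_Q(Ax))

def gp_step(x, A, alpha, lam, proj_C, proj_Q):
    """One gradient-projection step x -> P_C(x - lam * grad f_alpha(x))."""
    return proj_C(x - lam * grad_f_alpha(x, A, alpha, proj_Q))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                    # bounded linear operator (assumed)
proj_C = lambda x: proj_ball(x, 2.0)               # C: ball of radius 2 in H1 (assumed)
proj_Q = lambda y: proj_ball(y, 1.0)               # Q: ball of radius 1 in H2 (assumed)

alpha = 1e-3                                       # regularization parameter
lam = 1.0 / (alpha + np.linalg.norm(A, 2) ** 2)    # step size in (0, 2/(alpha + ||A||^2))

x = rng.standard_normal(3)
for _ in range(5000):
    x = gp_step(x, A, alpha, lam, proj_C, proj_Q)
print("SFP residual ||Ax - P_Q(Ax)|| =", np.linalg.norm(A @ x - proj_Q(A @ x)))
```

Since this toy SFP is consistent (x = 0 is a solution), the printed residual is close to zero; letting alpha decrease toward 0 drives the fixed point xα toward the minimum-norm solution, in the spirit of Proposition 2(iii).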
Very recently, by combining the regularization method and extragradient method due to Nadezhkina and Takahashi [19], Ceng et al. [18] proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of Fix (S)∩Γ, where S : C → C is a nonexpansive mapping.
Theorem 3 (see [18], Theorem 3.1). Let S : C → C be a nonexpansive mapping such that Fix (S)∩Γ ≠ ∅. Let {xn} and {yn} be the sequences in C generated by the following extragradient algorithm:
For finding an element of Fix (S)∩VI (C, A) under the assumption that a set C ⊂ ℋ is nonempty, closed, and convex, a mapping S : C → C is nonexpansive, and a mapping A : C → ℋ is α-inverse strongly monotone, Takahashi and Toyoda [29] introduced an iterative scheme and studied the weak convergence of the sequence generated by the proposed scheme to a point of Fix (S)∩VI (C, A). Recently, Iiduka and Takahashi [30] presented another iterative scheme for finding an element of Fix (S)∩VI (C, A) and showed that the sequence generated by the scheme converges strongly to PFix (S)∩VI (C,A)u, where u is the initially chosen point in the iterative scheme and PK denotes the metric projection of ℋ onto K.
Based on Korpelevič’s extragradient method [17], Nadezhkina and Takahashi [19] introduced an iterative process for finding an element of Fix (S)∩VI (C, A) and proved the weak convergence of the sequence to a point of Fix (S)∩VI (C, A). Zeng and Yao [27] presented an iterative scheme for finding an element of Fix (S)∩VI (C, A) and proved that the two sequences generated by the method converge strongly to an element of Fix (S)∩VI (C, A). Recently, Bnouhachem et al. [31] suggested and analyzed an iterative scheme for finding a common element of the fixed point set Fix (S) of a nonexpansive mapping S and the solution set VI (C, A) of the variational inequality (11) for an inverse strongly monotone mapping A : C → ℋ.
Lemma 4 (see [23]). For given x̄, ȳ ∈ C, (x̄, ȳ) is a solution of problem (14) if and only if x̄ is a fixed point of the mapping G : C → C defined by
G(x) = PC[PC(x − μ2B2x) − μ1B1PC(x − μ2B2x)],  ∀x ∈ C,
where ȳ = PC(x̄ − μ2B2x̄).
In particular, if the mapping Bi : C → ℋ is βi-inverse strongly monotone for i = 1,2, then the mapping G is nonexpansive provided μi ∈ (0,2βi) for i = 1,2.
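To make Lemma 4 concrete, the sketch below evaluates the mapping G and runs the Picard iteration xk+1 = G(xk). Everything concrete here is an assumption made only for illustration: C is taken to be a box (so PC is a componentwise clamp), and B1, B2 are linear maps given by symmetric positive definite matrices, which are βi-inverse strongly monotone with βi = 1/λmax; the step sizes μi are chosen in (0, 2βi).

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Projection onto the box C = [lo, hi]^n (illustrative choice of C)."""
    return np.clip(x, lo, hi)

def G(x, B1, B2, mu1, mu2, proj_C=proj_box):
    """Mapping of Lemma 4: G(x) = P_C[P_C(x - mu2*B2 x) - mu1*B1 P_C(x - mu2*B2 x)]."""
    y = proj_C(x - mu2 * B2(x))      # inner projection step
    return proj_C(y - mu1 * B1(y))   # outer projection step

# Illustrative inverse strongly monotone maps B_i x = M_i x with M_i symmetric positive
# definite; here beta_1 = 1/2 and beta_2 = 2/3, so mu_i = 0.4 lies in (0, 2*beta_i).
M1 = np.array([[2.0, 0.0], [0.0, 1.0]])
M2 = np.array([[1.0, 0.5], [0.5, 1.0]])
B1 = lambda x: M1 @ x
B2 = lambda x: M2 @ x
mu1, mu2 = 0.4, 0.4

x = np.array([3.0, -2.0])
for _ in range(100):                 # Picard iteration x_{k+1} = G(x_k)
    x = G(x, B1, B2, mu1, mu2)
print("approximate fixed point of G:", x)
```

For this particular choice of B1 and B2 the map G is in fact a contraction, so the Picard iterates converge to its unique fixed point x̄ (here x̄ = 0); by Lemma 4, the pair (x̄, ȳ) with ȳ = PC(x̄ − μ2B2x̄) then solves the GSVI (14). In general G is only nonexpansive, which is why the algorithms studied below combine G with averaging, viscosity, and regularization terms.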
Utilizing Lemma 4, the authors of [23] introduced and studied a relaxed extragradient method for solving problem (14). Throughout this paper, the set of fixed points of the mapping G is denoted by Ξ. Based on the relaxed extragradient method and viscosity approximation method, Yao et al. [26] proposed and analyzed an iterative algorithm for finding a common solution of the GSVI (14) and the fixed point problem of a strictly pseudo-contractive mapping S : C → C. Subsequently, Ceng et al. [33] further presented and analyzed an iterative scheme for finding a common element of the solution set of the VIP (11), the solution set of the GSVI (14), and the fixed point set of a strictly pseudo-contractive mapping S : C → C.
Theorem 5 (see [33], Theorem 3.1). Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let A : C → ℋ be α-inverse strongly monotone and Bi : C → ℋ be βi-inverse strongly monotone for i = 1,2. Let S : C → C be a k-strictly pseudo-contractive mapping such that Fix (S)∩Ξ∩VI(C, A) ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For given x0 ∈ C arbitrarily, let the sequences be generated by the relaxed extragradient iterative scheme:
- (i)
βn + γn + δn = 1 and (γn + δn)k ≤ γn for all n ≥ 0;
- (ii)
lim n→∞αn = 0 and ;
- (iii)
0 < lim inf n→∞βn ≤ limsup n→∞βn < 1 and liminf n→∞δn > 0;
- (iv)
lim n→∞(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;
- (v)
0 < lim inf n→∞λn ≤ limsup n→∞λn < α and lim n→∞|λn+1 − λn| = 0.
Then the sequences converge strongly to the same point if and only if lim n→∞∥un+1 − un∥ = 0. Furthermore, is a solution of the GSVI (14), where .
Motivated and inspired by the research going on in this area, we propose and analyze the following relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of the GSVI (14), the solution set of the SFP (1), and the fixed point set of a strictly pseudo-contractive mapping S : C → C.
Algorithm 6. Let μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ∞), {λn}⊂(0, 1/∥A∥2), and {σn}, {βn}, {γn}, {δn}⊂[0,1] such that βn + γn + δn = 1 for all n ≥ 0. For given x0 ∈ C arbitrarily, let the sequences be generated by the following relaxed extragradient iterative scheme with regularization:
Under mild assumptions, it is proven that the sequences converge strongly to the same point if and only if lim n→∞∥un+1 − un∥ = 0. Furthermore, is a solution of the GSVI (14), where .
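Since the explicit scheme (17) is not reproduced above, the following fragment should be read only as a schematic illustration of how its ingredients could be assembled in one iteration: a regularized gradient-projection step toward the solution set of the SFP, the GSVI mapping G of Lemma 4, and a viscosity/Mann-type convex combination involving the contraction Q and the strict pseudocontraction S. The actual arrangement of these pieces in Algorithm 6 may differ, and no parameter choices from the paper are reproduced here.

```python
def illustrative_iteration(x, n, Q, S, G, sfp_step, sigma, beta, gamma, delta):
    """One schematic relaxed-extragradient-type update (NOT the paper's scheme (17)).

    Q, S, G and sfp_step are callables; sigma, beta, gamma, delta are parameter
    sequences with beta[n] + gamma[n] + delta[n] = 1 and sigma[n] -> 0.
    """
    u = sfp_step(x, n)                # regularized step toward the SFP solution set
    y = G(u)                          # extragradient-type step for the GSVI (14)
    mann = beta[n] * x + gamma[n] * y + delta[n] * S(y)   # Mann-type combination
    return sigma[n] * Q(x) + (1.0 - sigma[n]) * mann      # viscosity blending
```

The viscosity term σnQ(xn) with σn → 0 is the ingredient typically responsible for strong, rather than weak, convergence in schemes of this type.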
Algorithm 7. Let μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ∞), {λn}⊂(0, 1/∥A∥2) and {σn}, {τn}, {βn}, {γn}, {δn}⊂[0,1] such that σn + τn ≤ 1 and βn + γn + δn = 1 for all n ≥ 0. For given x0 ∈ C arbitrarily, let {xn}, {yn}, {zn} be the sequences generated by the following relaxed extragradient iterative scheme with regularization:
Also, under appropriate conditions, it is shown that the sequences {xn}, {yn}, {zn} converge strongly to the same point if and only if lim n→∞∥zn+1 − zn∥ = 0. Furthermore, is a solution of the GSVI (14), where .
Note that both [6, Theorem 5.7] and [18, Theorem 3.1] are weak convergence results for solving the SFP (1), whereas the results obtained here are strong convergence results; in this sense they are of particular interest and value. Because our relaxed extragradient iterative schemes (17) and (18) with regularization involve a contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S, and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [6, Theorem 5.7] and [18, Theorem 3.1], respectively. Furthermore, the relaxed extragradient iterative scheme (16) is extended to develop our relaxed extragradient iterative schemes (17) and (18) with regularization. All in all, our results represent the modification, supplementation, extension, and improvement of [6, Theorem 5.7], [18, Theorem 3.1], and [33, Theorem 3.1].
2. Preliminaries
Let K be a nonempty, closed, and convex subset of a real Hilbert space ℋ. Now we present some known results and definitions which will be used in the sequel.
The following properties of projections are useful and pertinent to our purpose.
Proposition 8 (see [34]). For given x ∈ ℋ and z ∈ K:
- (i)
z = PKx⇔〈x − z, y − z〉 ≤ 0, ∀ y ∈ K;
- (ii)
z = PKx⇔∥x−z∥2 ≤ ∥x−y∥2 − ∥y−z∥2, ∀ y ∈ K;
- (iii)
〈x − y, PKx − PKy〉 ≥ ∥PKx − PKy∥2 for all x, y ∈ ℋ, which hence implies that PK is nonexpansive and monotone.
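Proposition 8(i) is the variational characterization of the projection and is easy to test numerically. In the sketch below, K is taken to be the closed unit ball purely so that PK has a closed form; the dimension and the sampling scheme are likewise assumptions made only for illustration.

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball centered at the origin."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal(3)     # a point typically lying outside K
z = proj_unit_ball(x)                # z = P_K(x)

# Sample points y in K and record the largest value of <x - z, y - z>.
worst = max(float(np.dot(x - z, proj_unit_ball(rng.standard_normal(3)) - z))
            for _ in range(1000))
print("max sampled <x - z, y - z> =", worst, "(Proposition 8(i) predicts <= 0)")
```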
Definition 9. A mapping T : ℋ → ℋ is said to be
- (a)
nonexpansive if
∥Tx − Ty∥ ≤ ∥x − y∥,  ∀x, y ∈ ℋ;
- (b)
firmly nonexpansive if 2T − I is nonexpansive, or equivalently,
〈x − y, Tx − Ty〉 ≥ ∥Tx − Ty∥2,  ∀x, y ∈ ℋ.
Definition 10. Let T be a nonlinear operator with domain D(T)⊆ℋ and range R(T)⊆ℋ.
- (a)
T is said to be monotone if
〈Tx − Ty, x − y〉 ≥ 0,  ∀x, y ∈ D(T);
- (b)
Given a number β > 0, T is said to be β-strongly monotone if
〈Tx − Ty, x − y〉 ≥ β∥x − y∥2,  ∀x, y ∈ D(T);
- (c)
Given a number ν > 0, T is said to be ν-inverse strongly monotone (ν-ism) if
〈Tx − Ty, x − y〉 ≥ ν∥Tx − Ty∥2,  ∀x, y ∈ D(T).
It can easily be seen that if S is nonexpansive, then I − S is monotone; indeed, 〈(I − S)x − (I − S)y, x − y〉 = ∥x − y∥2 − 〈Sx − Sy, x − y〉 ≥ ∥x − y∥2 − ∥Sx − Sy∥∥x − y∥ ≥ 0. It is also easy to see that a projection PK is 1-ism.
Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see, for example, [35, 36].
Definition 11. A mapping T : ℋ → ℋ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,
T = (1 − α)I + αS,
where α ∈ (0,1) and S : ℋ → ℋ is nonexpansive; in this case, we also say that T is α-averaged.
Proposition 12 (see [7]). Let T : ℋ → ℋ be a given mapping.
- (i)
T is nonexpansive if and only if the complement I − T is 1/2-ism.
- (ii)
If T is ν-ism, then for γ > 0, γT is ν/γ-ism.
- (iii)
T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α ∈ (0,1), T is α-averaged if and only if I − T is 1/2α-ism.
Proposition 13 (see [7, 37]). Let S, T, V : ℋ → ℋ be given operators.
- (i)
If T = (1 − α)S + αV for some α ∈ (0,1) and if S is averaged and V is nonexpansive, then T is averaged.
- (ii)
T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.
- (iii)
If T = (1 − α)S + αV for some α ∈ (0,1) and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
- (iv)
The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {Ti}i=1N is averaged, then so is the composite T1∘T2∘⋯∘TN. In particular, if T1 is α1-averaged and T2 is α2-averaged, where α1, α2 ∈ (0,1), then the composite T1∘T2 is α-averaged, where α = α1 + α2 − α1α2.
- (v)
If the mappings {Ti}i=1N are averaged and have a common fixed point, then
⋂i=1N Fix (Ti) = Fix (T1∘T2∘⋯∘TN).
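For instance, if T1 is 1/2-averaged and T2 is 1/3-averaged, then item (iv) gives that the composite T1∘T2 is α-averaged with α = 1/2 + 1/3 − (1/2)(1/3) = 2/3; this bookkeeping is used repeatedly below when composing projections with gradient steps.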
In order to prove the main result of this paper, the following lemmas will be required.
Lemma 14 (see [39]). Let {xn} and {yn} be bounded sequences in a Banach space X and let {βn} be a sequence in [0,1] with 0 < liminf n→∞βn ≤ limsup n→∞βn < 1. Suppose xn+1 = (1 − βn)yn + βnxn for all integers n ≥ 0 and limsup n→∞(∥yn+1 − yn∥−∥xn+1 − xn∥) ≤ 0. Then, lim n→∞∥yn − xn∥ = 0.
Lemma 15 (see [38], Proposition 2.1). Let C be a nonempty closed convex subset of a real Hilbert space ℋ and S : C → C a mapping.
- (i)
If S is a k-strict pseudo-contractive mapping, then S satisfies the Lipschitz condition
∥Sx − Sy∥ ≤ ((1 + k)/(1 − k))∥x − y∥,  ∀x, y ∈ C;
- (ii)
If S is a k-strict pseudo-contractive mapping, then the mapping I − S is semiclosed at 0, that is, if {xn} is a sequence in C such that xn ⇀ x̃ weakly and (I − S)xn → 0 strongly, then (I − S)x̃ = 0, that is, x̃ ∈ Fix (S).
- (iii)
If S is k-(quasi-)strict pseudo-contraction, then the fixed point set Fix (S) of S is closed and convex so that the projection P Fix (S) is well defined.
The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.
Lemma 16 (see [34]). Let {an} be a sequence of nonnegative real numbers satisfying the property
an+1 ≤ (1 − sn)an + sntn + rn,  ∀n ≥ 0,
where {sn}, {tn}, and {rn} satisfy the following conditions:
- (i)
{sn}⊂[0,1] and ∑n=0∞ sn = ∞;
- (ii)
either limsup n→∞tn ≤ 0 or ∑n=0∞ |sntn| < ∞;
- (iii)
∑n=0∞ rn < ∞, where rn ≥ 0, ∀ n ≥ 0. Then, lim n→∞an = 0.
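As a sanity check, the following snippet runs the recursion of Lemma 16 in its worst case (the inequality taken as an equality) with concrete sequences chosen only to satisfy (i)–(iii); these choices are illustrative assumptions, not taken from the paper.

```python
# s_n = 1/(n+2) is not summable, t_n = 1/(n+1) tends to 0, r_n = 1/(n+1)^2 is summable.
a = 1.0
for n in range(200_000):
    s = 1.0 / (n + 2)
    t = 1.0 / (n + 1)
    r = 1.0 / (n + 1) ** 2
    a = (1.0 - s) * a + s * t + r
print("a_n after 200000 iterations:", a)   # decays toward 0, as Lemma 16 predicts
```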
Lemma 17 (see [26]). Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let S : C → C be a k-strictly pseudo-contractive mapping. Let γ and δ be two nonnegative real numbers such that (γ + δ)k ≤ γ. Then
∥γ(x − y) + δ(Sx − Sy)∥ ≤ (γ + δ)∥x − y∥,  ∀x, y ∈ C.
The following lemma is an immediate consequence of an inner product.
Lemma 18. In a real Hilbert space ℋ, there holds the inequality
∥x + y∥2 ≤ ∥x∥2 + 2〈y, x + y〉,  ∀x, y ∈ ℋ.
3. Main Results
In this section, we first prove the strong convergence of the sequences generated by the relaxed extragradient iterative algorithm (17) with regularization.
Theorem 19. Let C be a nonempty closed convex subset of a real Hilbert space ℋ1. Let A ∈ B(ℋ1, ℋ2), and let Bi : C → ℋ1 be βi-inverse strongly monotone for i = 1,2. Let S : C → C be a k-strictly pseudo-contractive mapping such that Fix (S)∩Ξ∩Γ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For given x0 ∈ C arbitrarily, let the sequences be generated by the relaxed extragradient iterative algorithm (17) with regularization, where μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ∞), {λn}⊂(0, 1/∥A∥2) and {σn}, {βn}, {γn}, {δn}⊂[0,1] such that
- (i)
;
- (ii)
βn + γn + δn = 1 and (γn + δn)k ≤ γn for all n ≥ 0;
- (iii)
lim n→∞σn = 0 and ;
- (iv)
0 < lim inf n→∞βn ≤ lim sup n→∞βn < 1 and liminf n→∞δn > 0;
- (v)
lim n→∞(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;
- (vi)
0 < lim inf n→∞λn ≤ lim sup n→∞λn < 1/∥A∥2 and lim n→∞ | λn+1 − λn | = 0.
Then the sequences converge strongly to the same point if and only if lim n→∞∥un+1 − un∥ = 0. Furthermore, is a solution of the GSVI (14), where .
Proof. First, taking into account 0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2, without loss of generality we may assume that {λn}⊂[a, b] for some a, b ∈ (0, 1/∥A∥2).
Now, let us show that PC(I − λ∇fα) is ζ-averaged for each λ ∈ (0, 2/(α + ∥A∥2)), where
ζ = (2 + λ(α + ∥A∥2))/4 ∈ (0,1).
Indeed, it is easy to see that ∇f = A*(I − PQ)A is 1/∥A∥2-ism, that is,
〈∇f(x) − ∇f(y), x − y〉 ≥ (1/∥A∥2)∥∇f(x) − ∇f(y)∥2,  ∀x, y ∈ ℋ1.
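One way to verify this inequality is to use the fact that PQ, being a metric projection, is firmly nonexpansive, so that I − PQ is firmly nonexpansive as well (Proposition 13(ii)), together with ∥A*z∥ ≤ ∥A∥∥z∥:
〈∇f(x) − ∇f(y), x − y〉 = 〈(I − PQ)Ax − (I − PQ)Ay, Ax − Ay〉 ≥ ∥(I − PQ)Ax − (I − PQ)Ay∥2 ≥ (1/∥A∥2)∥A*(I − PQ)Ax − A*(I − PQ)Ay∥2 = (1/∥A∥2)∥∇f(x) − ∇f(y)∥2.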
Next we divide the remainder of the proof into several steps.
Step 1. {xn} is bounded.
Indeed, take an arbitrary p ∈ Fix (S)∩Ξ∩Γ. Then, we get Sp = p, PC(I − λ∇f)p = p for λ ∈ (0, 2/∥A∥2), and
Step 2. limn→∞∥xn+1 − xn∥ = 0.
Indeed, define xn+1 = βnxn + (1 − βn)wn for all n ≥ 0. It follows that
Step 3. limn→∞∥B2xn − B2p∥ = 0, , and , where q = PC(p − μ2B2p).
Indeed, utilizing Lemma 17 and the convexity of ∥·∥2, we obtain from (17), (48), and (51) that
Step 4. limn→∞∥Syn − yn∥ = 0.
Indeed, observe that
Step 5. where .
Indeed, since {xn} is bounded, there exists a subsequence of {xn} such that
Step 6. .
Indeed, from (48) and (51) it follows that
Corollary 20. Let C be a nonempty closed convex subset of a real Hilbert space ℋ1. Let A ∈ B(ℋ1, ℋ2) and Bi : C → ℋ1 be βi-inverse strongly monotone for i = 1,2. Let S : C → C be a k-strictly pseudo-contractive mapping such that Fix (S)∩Ξ∩Γ ≠ ∅. For fixed u ∈ C and given x0 ∈ C arbitrarily, let the sequences be generated iteratively by
- (i)
;
- (ii)
βn + γn + δn = 1 and (γn + δn)k ≤ γn for all n ≥ 0;
- (iii)
lim n→∞σn = 0 and ;
- (iv)
0 < liminf n→∞βn ≤ limsup n→∞βn < 1 and liminf n→∞δn > 0;
- (v)
lim n→∞(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;
- (vi)
0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2 and lim n→∞ | λn+1 − λn | = 0.
Next, utilizing Corollary 20 we give the following improvement and extension of the main result in [18] (i.e., [18, Theorem 3.1]).
Corollary 21. Let C be a nonempty closed convex subset of a real Hilbert space ℋ1. Let A ∈ B(ℋ1, ℋ2) and S : C → C be a nonexpansive mapping such that Fix (S)∩Γ ≠ ∅. For fixed u ∈ C and given x0 ∈ C arbitrarily, let the sequences be generated iteratively by
- (i)
;
- (ii)
lim n→∞σn = 0 and ;
- (iii)
0 < liminf n→∞βn ≤ limsup n→∞βn < 1;
- (iv)
0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2 and lim n→∞ | λn+1 − λn | = 0.
Then the sequences converge strongly to the same point if and only if lim n→∞∥un+1 − un∥ = 0.
Proof. In Corollary 20, put B1 = B2 = 0 and γn = 0. Then, Ξ = C, βn + δn = 1 for all n ≥ 0, and the iterative scheme (101) is equivalent to
Now, we are in a position to prove the strong convergence of the sequences generated by the relaxed extragradient iterative algorithm (18) with regularization.
Theorem 22. Let C be a nonempty closed convex subset of a real Hilbert space ℋ1. Let A ∈ B(ℋ1, ℋ2) and Bi : C → ℋ1 be βi-inverse strongly monotone for i = 1,2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix (S)∩Ξ∩Γ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For given x0 ∈ C arbitrarily, let the sequences {xn}, {yn}, {zn} be generated by the relaxed extragradient iterative algorithm (18) with regularization, where μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ∞), {λn}⊂(0, 1/∥A∥2) and {σn}, {τn}, {βn}, {γn}, {δn}⊂[0,1] such that
- (i)
;
- (ii)
σn + τn ≤ 1, βn + γn + δn = 1 and (γn + δn)k ≤ γn for all n ≥ 0;
- (iii)
lim n→∞σn = 0 and ;
- (iv)
0 < liminf n→∞τn ≤ limsup n→∞τn < 1 and lim n→∞ | τn+1 − τn | = 0;
- (v)
0 < liminf n→∞βn ≤ limsup n→∞βn < 1 and liminf n→∞δn > 0;
- (vi)
lim n→∞(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;
- (vii)
0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2 and lim n→∞ | λn+1 − λn | = 0.
Proof. First, taking into account 0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2, without loss of generality we may assume that {λn}⊂[a, b] for some a, b ∈ (0, 1/∥A∥2). Repeating the same argument as that in the proof of Theorem 19, we can show that PC(I − λ∇fα) is ζ-averaged for each λ ∈ (0, 2/(α + ∥A∥2)), where ζ = (2 + λ(α + ∥A∥2))/4. The same argument also shows that, for each integer n ≥ 0, PC(I − λn∇fαn) is ζn-averaged with ζn = (2 + λn(αn + ∥A∥2))/4 ∈ (0,1).
Next we divide the remainder of the proof into several steps.
Step 1. {xn} is bounded.
Indeed, take p ∈ Fix (S)∩Ξ∩Γ arbitrarily. Then Sp = p, PC(I − λ∇f)p = p for λ ∈ (0, 2/∥A∥2), and
Step 2. lim n→∞∥xn+1 − xn∥ = 0.
Indeed, define xn+1 = βnxn + (1 − βn)wn for all n ≥ 0. Then, utilizing the arguments similar to those of (58)–(61) in the proof of Theorem 19, we can obtain that
Indeed, utilizing Lemma 17 and the convexity of ∥·∥2, we obtain from (18) and (106)–(109) that
Step 4. lim n→∞∥Syn − yn∥ = 0.
Indeed, observe that
Thus from (18) and (126) it follows that
Step 5. where .
Indeed, since {xn} is bounded, there exists a subsequence of {xn} such that
First, it is clear from Lemma 15 and ∥Syn − yn∥→0 that . Now let us show that . Note that
Step 6. .
Indeed, observe that
Corollary 23. Let C be a nonempty closed convex subset of a real Hilbert space ℋ1. Let A ∈ B(ℋ1, ℋ2) and let Bi : C → ℋ1 be βi-inverse strongly monotone for i = 1,2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix (S)∩Ξ∩Γ ≠ ∅. For fixed u ∈ C and given x0 ∈ C arbitrarily, let the sequences {xn}, {yn}, {zn} be generated iteratively by
- (i)
;
- (ii)
σn + τn ≤ 1, βn + γn + δn = 1 and (γn + δn)k ≤ γn for all n ≥ 0;
- (iii)
lim n→∞σn = 0 and ;
- (iv)
0 < liminf n→∞τn ≤ limsup n→∞τn < 1 and lim n→∞ | τn+1 − τn | = 0;
- (v)
0 < liminf n→∞βn ≤ limsup n→∞βn < 1 and liminf n→∞δn > 0;
- (vi)
lim n→∞(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;
- (vii)
0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2 and lim n→∞ | λn+1 − λn | = 0.
Corollary 24. Let C be a nonempty closed convex subset of a real Hilbert space ℋ1. Let A ∈ B(ℋ1, ℋ2) and let Bi : C → ℋ1 be βi-inverse strongly monotone for i = 1,2. Let S : C → C be a nonexpansive mapping such that Fix (S)∩Ξ∩Γ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For given x0 ∈ C arbitrarily, let the sequences {xn}, {yn}, {zn} be generated iteratively by
- (i)
;
- (ii)
σn + τn ≤ 1, βn + γn + δn = 1 and (γn + δn)k ≤ γn for all n ≥ 0;
- (iii)
lim n→∞σn = 0 and ;
- (iv)
0 < liminf n→∞τn ≤ limsup n→∞τn < 1 and lim n→∞ | τn+1 − τn | = 0;
- (v)
0 < liminf n→∞βn ≤ limsup n→∞βn < 1 and liminf n→∞δn > 0;
- (vi)
lim n→∞(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;
- (vii)
0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2 and lim n→∞ | λn+1 − λn | = 0.
Next, utilizing Corollary 23, we give the following improvement and extension of the main result in [18] (i.e., [18, Theorem 3.1]).
Corollary 25. Let C be a nonempty closed convex subset of a real Hilbert space ℋ1. Let A ∈ B(ℋ1, ℋ2) and let S : C → C be a nonexpansive mapping such that Fix (S)∩Γ ≠ ∅. For fixed u ∈ C and given x0 ∈ C arbitrarily, let the sequences {xn}, {zn} be generated iteratively by
- (i)
;
- (ii)
σn + τn ≤ 1 for all n ≥ 0;
- (iii)
lim n→∞σn = 0 and ;
- (iv)
0 < liminf n→∞τn ≤ limsup n→∞τn < 1 and lim n→∞ | τn+1 − τn | = 0;
- (v)
0 < liminf n→∞βn ≤ limsup n→∞βn < 1;
- (vi)
0 < liminf n→∞λn ≤ limsup n→∞λn < 1/∥A∥2 and lim n→∞ | λn+1 − λn | = 0.
Proof. In Corollary 23, put B1 = B2 = 0 and γn = 0. Then, Ξ = C, βn + δn = 1, PC[PC(zn − μ2B2zn) − μ1B1PC(zn − μ2B2zn)] = zn, and the iterative scheme (146) is equivalent to
Remark 26. Our Theorems 19 and 22 improve, extend, and develop [6, Theorem 5.7], [18, Theorem 3.1], and [33, Theorem 3.1] in the following aspects.
- (i)
Because both [6, Theorem 5.7] and [18, Theorem 3.1] are weak convergence results for solving the SFP, our Theorems 19 and 22, which establish strong convergence, are of particular interest and value.
- (ii)
The problem of finding an element of Fix (S)∩Ξ∩Γ in our Theorems 19 and 22 is more general than the corresponding problems in [6, Theorem 5.7] and [18, Theorem 3.1], respectively.
- (iii)
The relaxed extragradient iterative method for finding an element of Fix (S)∩Ξ∩VI (C, A) in [33, Theorem 3.1] is extended to develop the relaxed extragradient method with regularization for finding an element of Fix (S)∩Ξ∩Γ in our Theorem 19.
- (iv)
The proofs of our Theorems 19 and 22 are very different from that of [33, Theorem 3.1] because our argument depends to a great extent on Lemma 16, on the restriction imposed on the regularization parameter sequence {αn}, and on the properties of averaged mappings.
- (v)
Because our iterative schemes (17) and (18) involve a contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S, and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [6, Theorem 5.7] and [18, Theorem 3.1], respectively.
Acknowledgment
The work of L. C. Ceng was partially supported by the National Science Foundation of China (11071169) and the Ph.D. Program Foundation of the Ministry of Education of China (20123127110002). The work of J. C. Yao was partially supported by Grant NSC 99-2115-M-037-002-MY3 of Taiwan. For A. Petruşel, this work was possible with the financial support of a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0094.