Volume 2013, Issue 1, 891232
Research Article
Open Access

Relaxed Extragradient Methods with Regularization for General System of Variational Inequalities with Constraints of Split Feasibility and Fixed Point Problems

L. C. Ceng
Department of Mathematics, Shanghai Normal University, and Scientific Computing Key Laboratory of Shanghai Universities, Shanghai 200234, China

A. Petruşel
Department of Applied Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania

J. C. Yao (Corresponding Author)
Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan
Department of Applied Mathematics, National Sun Yat-Sen University, Kaohsiung 804, Taiwan

First published: 17 March 2013
Academic Editor: Julian López-Gómez

Abstract

We suggest and analyze relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of a general system of variational inequalities, the solution set of a split feasibility problem, and the fixed point set of a strictly pseudocontractive mapping defined on a real Hilbert space. Here the relaxed extragradient methods with regularization are based on the well-known successive approximation method, extragradient method, viscosity approximation method, regularization method, and so on. Strong convergence of the proposed algorithms under some mild conditions is established. Our results represent the supplementation, improvement, extension, and development of the corresponding results in the very recent literature.

1. Introduction

Let ℋ be a real Hilbert space whose inner product and norm are denoted by 〈·, ·〉 and ∥·∥, respectively. Let K be a nonempty closed convex subset of ℋ. The (nearest point or metric) projection from ℋ onto K is denoted by PK. We write xn ⇀ x to indicate that the sequence {xn} converges weakly to x and xn → x to indicate that the sequence {xn} converges strongly to x.

Let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces ℋ1 and ℋ2, respectively. The split feasibility problem (SFP) is to find a point x* with the property:

x* ∈ C,   Ax* ∈ Q,

where A ∈ B(ℋ1, ℋ2) and B(ℋ1, ℋ2) denotes the family of all bounded linear operators from ℋ1 to ℋ2.
In 1994, the SFP was first introduced by Censor and Elfving [1], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrievals and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [2] and the references therein. Recently, it was found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [3–5] and the references therein. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, for example, [2–12] and the references therein. A special case of the SFP is the following convex constrained linear inverse problem [13] of finding an element x such that
()
It has been extensively investigated in the literature using the projected Landweber iterative method [14]. Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set Q. Therefore, whether various versions of the projected Landweber iterative method [14] can be extended to solve the SFP remains an interesting open topic. For example, it is not yet clear whether the dual approach to (1.2) of [15] can be extended to the SFP. The original algorithm given in [1] involves the computation of the inverse A−1 (assuming the existence of the inverse of A) and thus has not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [2, 7], which is found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method [16]. The CQ algorithm only involves the computation of the projections PC and PQ onto the sets C and Q, respectively, and is therefore implementable in the case where PC and PQ have closed-form expressions, for example, when C and Q are closed balls or half-spaces. However, it remains a challenge how to implement the CQ algorithm in the case where the projections PC and/or PQ fail to have closed-form expressions, though theoretically we can prove the (weak) convergence of the algorithm.
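To make the preceding description concrete, the following minimal sketch implements the CQ iteration x_{n+1} = PC(x_n − γA*(I − PQ)Ax_n) with step size γ ∈ (0, 2/∥A∥²) in a finite-dimensional setting where C and Q are closed balls, so that both projections have closed forms. The particular matrix A, the balls, and the iteration count are illustrative choices, not data from the paper.

```python
import numpy as np

def project_ball(x, center, radius):
    """Closed-form metric projection onto a closed Euclidean ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (Ax_n - P_Q(Ax_n)))."""
    x = x0
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # gradient of (1/2)||Ax - P_Q(Ax)||^2
        x = proj_C(x - gamma * grad)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
proj_C = lambda x: project_ball(x, np.zeros(3), 1.0)   # C: unit ball of R^3
proj_Q = lambda y: project_ball(y, np.zeros(4), 0.5)   # Q: ball of radius 0.5 in R^4
gamma = 1.0 / np.linalg.norm(A, 2) ** 2                # step size in (0, 2/||A||^2)
x_sol = cq_algorithm(A, proj_C, proj_Q, np.ones(3), gamma)
print(np.linalg.norm(A @ x_sol - proj_Q(A @ x_sol)))   # SFP residual; 0 lies in C and solves the SFP here
```

Since 0 ∈ C and A0 = 0 ∈ Q, this instance of the SFP is consistent and the printed residual tends to zero. When PC or PQ lacks a closed form, every iteration would itself require an inner optimization, which is exactly the implementation difficulty mentioned above.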

Very recently, Xu [6] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also established a strong convergence result, which shows that the minimum-norm solution can be obtained.

Furthermore, Korpelevič [17] introduced the so-called extragradient method for finding a solution of a saddle point problem. He proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.

Throughout this paper, assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let f : ℋ1 → R be a continuously differentiable function. The minimization problem

min_{x∈C} f(x) := (1/2)∥Ax − PQAx∥²

is ill posed. Therefore, Xu [6] considered the following Tikhonov regularization problem:

min_{x∈C} fα(x) := (1/2)∥Ax − PQAx∥² + (1/2)α∥x∥²,

where α > 0 is the regularization parameter. The regularized minimization (4) has a unique solution, which is denoted by xα. The following results are easy to prove.

Proposition 1 (see [18], Proposition  3.1.)Given x*1, the following statements are equivalent:

  • (i)

    x* solves the SFP;

  • (ii)

    x* solves the fixed point equation

    PC(I − λ∇f)x* = x*,
    where λ > 0, ∇f = A*(I − PQ)A and A* is the adjoint of A;

  • (iii)

    x* solves the variational inequality problem (VIP) of finding x*C such that

    〈∇f(x*), x − x*〉 ≥ 0,   ∀ x ∈ C.

It is clear from Proposition 1 that

Γ = Fix (PC(I − λ∇f)) = VI (C, ∇f)

for all λ > 0, where Fix (PC(I − λ∇f)) and VI (C, ∇f) denote the set of fixed points of PC(I − λ∇f) and the solution set of VIP (6), respectively.

Proposition 2 (see [18].)There hold the following statements:

  • (i)

    the gradient

    ∇fα = ∇f + αI = A*(I − PQ)A + αI
    is (α + ∥A∥²)-Lipschitz continuous and α-strongly monotone;

  • (ii)

    the mapping PC(I − λ∇fα) is a contraction with coefficient

    ()
    where ;

  • (iii)

    if the SFP is consistent, then the strong limit lim α→0 xα exists and is the minimum-norm solution of the SFP. (A finite-dimensional illustration of this fact is given below.)
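As an illustration of Proposition 2 (and of the Tikhonov regularization above) in a toy finite-dimensional case, the sketch below minimizes fα(x) = (1/2)∥Ax − PQAx∥² + (α/2)∥x∥² over C by gradient projection for a decreasing sequence of regularization parameters and watches the regularized solutions settle near the minimum-norm solution of the SFP. The matrix A, the sets C and Q, the step-size rule λ = 1/(α + ∥A∥²), and the iteration counts are illustrative assumptions.

```python
import numpy as np

def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def solve_regularized(A, proj_C, proj_Q, alpha, iters=2000):
    """Gradient projection on f_alpha(x) = (1/2)||Ax - P_Q(Ax)||^2 + (alpha/2)||x||^2."""
    lam = 1.0 / (alpha + np.linalg.norm(A, 2) ** 2)   # step below the Lipschitz bound of grad f_alpha
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        Ax = A @ x
        grad = alpha * x + A.T @ (Ax - proj_Q(Ax))    # grad f_alpha = alpha*I + A^T(I - P_Q)A
        x = proj_C(x - lam * grad)
    return x

A = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
proj_C = lambda x: project_ball(x, np.array([2., 0., 0.]), 1.0)   # C: ball of radius 1 around (2,0,0)
proj_Q = lambda y: project_ball(y, np.zeros(4), 2.0)              # Q: ball of radius 2
for alpha in [1.0, 0.1, 0.01, 0.001]:
    print(alpha, np.round(solve_regularized(A, proj_C, proj_Q, alpha), 4))
# x_alpha stays near (1, 0, 0), the minimum-norm element of the solution set Gamma in this example
```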

Very recently, by combining the regularization method and extragradient method due to Nadezhkina and Takahashi [19], Ceng et al. [18] proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of Fix (S)∩Γ, where S : CC is a nonexpansive mapping.

Theorem 3 (see [18], Theorem 3.1.) Let S : C → C be a nonexpansive mapping such that Fix (S)∩Γ ≠ ∅. Let {xn} and {yn} be the sequences in C generated by the following extragradient algorithm:

()
where ∑_{n=0}^∞ αn < ∞, {λn} ⊂ [a, b] for some a, b ∈ (0, 1/∥A∥²), and {βn} ⊂ [c, d] for some c, d ∈ (0,1). Then, both the sequences {xn} and {yn} converge weakly to an element of Fix (S)∩Γ.
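The displayed scheme did not survive extraction here. As a reading aid only, the sketch below implements the usual shape such an extragradient iteration with regularization takes, namely yn = PC(xn − λn∇fαn(xn)) followed by x_{n+1} = βn xn + (1 − βn)S(PC(xn − λn∇fαn(yn))); this reconstruction, together with the parameter choices in the comments, should be treated as an assumption rather than as the exact scheme of [18].

```python
import numpy as np

def extragradient_regularized(A, proj_C, proj_Q, S, x0, lambdas, alphas, betas, iters=200):
    """Extragradient-type iteration with regularization (assumed shape of the scheme in Theorem 3):
       y_n     = P_C(x_n - lambda_n * grad f_{alpha_n}(x_n)),
       x_{n+1} = beta_n * x_n + (1 - beta_n) * S(P_C(x_n - lambda_n * grad f_{alpha_n}(y_n)))."""
    def grad(x, alpha):                       # grad f_alpha = alpha*I + A^T(I - P_Q)A
        Ax = A @ x
        return alpha * x + A.T @ (Ax - proj_Q(Ax))
    x = x0
    for n in range(iters):
        lam, alpha, beta = lambdas(n), alphas(n), betas(n)
        y = proj_C(x - lam * grad(x, alpha))
        t = proj_C(x - lam * grad(y, alpha))  # extragradient step: gradient re-evaluated at y
        x = beta * x + (1 - beta) * S(t)
    return x

# Illustrative parameter choices consistent with the stated conditions:
#   lambdas = lambda n: 0.5 / np.linalg.norm(A, 2) ** 2   # {lambda_n} in (0, 1/||A||^2)
#   alphas  = lambda n: 1.0 / (n + 1) ** 2                # summable regularization parameters
#   betas   = lambda n: 0.5                               # {beta_n} in [c, d], a subset of (0, 1)
#   S       = lambda x: x                                 # any nonexpansive self-mapping of C
```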

On the other hand, assume that C is a nonempty closed convex subset of ℋ and A : C → ℋ is a mapping. The classical variational inequality problem (VIP) is to find x* ∈ C such that

〈Ax*, x − x*〉 ≥ 0,   ∀ x ∈ C.

It is now well known that variational inequalities are equivalent to fixed point problems, the origin of which can be traced back to Lions and Stampacchia [20]. This alternative formulation has been used to suggest and analyze projection iterative methods for solving variational inequalities under the conditions that the involved operator must be strongly monotone and Lipschitz continuous. Related to the variational inequalities, we have the problem of finding the fixed points of nonexpansive mappings or strictly pseudocontractive mappings, which is a topic of current interest in functional analysis. Several authors have considered a unified approach to solving variational inequality problems and fixed point problems; see, for example, [21–28] and the references therein.
A mapping A : C → ℋ is said to be α-inverse strongly monotone if there exists α > 0 such that

〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥²,   ∀ x, y ∈ C.

A mapping S : C → C is said to be k-strictly pseudocontractive if there exists 0 ≤ k < 1 such that

∥Sx − Sy∥² ≤ ∥x − y∥² + k∥(I − S)x − (I − S)y∥²,   ∀ x, y ∈ C.

In this case, we also say that S is a k-strict pseudo-contraction. In particular, whenever k = 0, S becomes a nonexpansive mapping from C into itself. It is clear that every inverse strongly monotone mapping is a monotone and Lipschitz continuous mapping. We denote by VI (C, A) and Fix (S) the solution set of problem (11) and the set of all fixed points of S, respectively.

For finding an element of Fix (S)∩VI (C, A) under the assumption that a set C ⊂ ℋ is nonempty, closed, and convex, a mapping S : C → C is nonexpansive, and a mapping A : C → ℋ is α-inverse strongly monotone, Takahashi and Toyoda [29] introduced an iterative scheme and studied the weak convergence of the sequence generated by the proposed scheme to a point of Fix (S)∩VI (C, A). Recently, Iiduka and Takahashi [30] presented another iterative scheme for finding an element of Fix (S)∩VI (C, A) and showed that the sequence generated by the scheme converges strongly to P_{Fix (S)∩VI (C,A)}u, where u is the initially chosen point in the iterative scheme and PK denotes the metric projection of ℋ onto K.

Based on Korpelevič’s extragradient method [17], Nadezhkina and Takahashi [19] introduced an iterative process for finding an element of Fix (S)∩VI (C, A) and proved the weak convergence of the generated sequences to a point of Fix (S)∩VI (C, A). Zeng and Yao [27] presented an iterative scheme for finding an element of Fix (S)∩VI (C, A) and proved that the two sequences generated by the method converge strongly to an element of Fix (S)∩VI (C, A). Recently, Bnouhachem et al. [31] suggested and analyzed an iterative scheme for finding a common element of the fixed point set Fix (S) of a nonexpansive mapping S and the solution set VI (C, A) of the variational inequality (11) for an inverse strongly monotone mapping A : C → ℋ.

Furthermore, as a generalization of the classical variational inequality problem (11), Ceng et al. [23] introduced and considered the following problem of finding (x*, y*) ∈ C × C such that

〈μ1B1y* + x* − y*, x − x*〉 ≥ 0,   ∀ x ∈ C,
〈μ2B2x* + y* − x*, x − y*〉 ≥ 0,   ∀ x ∈ C,

which is called a general system of variational inequalities (GSVI), where μ1 > 0 and μ2 > 0 are two constants. The set of solutions of problem (14) is denoted by GSVI(C, B1, B2). In particular, if B1 = B2, then problem (14) reduces to the new system of variational inequalities introduced and studied by Verma [32]. Recently, Ceng et al. [23] transformed problem (14) into a fixed point problem in the following way.

Lemma 4 (see [23].) For given x̄, ȳ ∈ C, (x̄, ȳ) is a solution of problem (14) if and only if x̄ is a fixed point of the mapping G : C → C defined by

G(x) = PC[PC(x − μ2B2x) − μ1B1PC(x − μ2B2x)],   ∀ x ∈ C,

where ȳ = PC(x̄ − μ2B2x̄).

In particular, if the mapping Bi : C → ℋ is βi-inverse strongly monotone for i = 1,2, then the mapping G is nonexpansive provided μi ∈ (0,2βi) for i = 1,2.
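Lemma 4 reduces the GSVI (14) to a fixed point problem for the composed projection mapping G, which is easy to evaluate whenever PC has a closed form. The sketch below builds G and runs a Krasnoselskii–Mann iteration x_{k+1} = (1/2)x_k + (1/2)G(x_k) toward a fixed point; the set C and the affine mappings B1, B2 (which are 1-inverse strongly monotone) are illustrative assumptions, not data from the paper.

```python
import numpy as np

def make_G(proj_C, B1, B2, mu1, mu2):
    """Fixed-point mapping of Lemma 4:
       G(x) = P_C[P_C(x - mu2*B2(x)) - mu1*B1(P_C(x - mu2*B2(x)))].
       If x* = G(x*), then (x*, y*) with y* = P_C(x* - mu2*B2(x*)) solves the GSVI (14)."""
    def G(x):
        y = proj_C(x - mu2 * B2(x))
        return proj_C(y - mu1 * B1(y))
    return G

proj_C = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)  # C: closed unit ball
B1 = lambda x: x - np.array([2.0, 0.0])       # 1-inverse strongly monotone (beta_1 = 1)
B2 = lambda x: x - np.array([0.5, 0.0])       # 1-inverse strongly monotone (beta_2 = 1)
G = make_G(proj_C, B1, B2, mu1=0.5, mu2=0.5)  # mu_i in (0, 2*beta_i), so G is nonexpansive

x = np.zeros(2)
for _ in range(200):
    x = 0.5 * x + 0.5 * G(x)                  # Krasnoselskii-Mann averaging toward Fix(G) = Xi
print(x, np.linalg.norm(x - G(x)))            # the residual ||x - G(x)|| is (numerically) zero
```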

Utilizing Lemma 4, they introduced and studied a relaxed extragradient method for solving problem (14). Throughout this paper, the set of fixed points of the mapping G is denoted by Ξ. Based on the relaxed extragradient method and viscosity approximation method, Yao et al. [26] proposed and analyzed an iterative algorithm for finding a common solution of the GSVI (14) and the fixed point problem of a strictly pseudo-contractive mapping S : CC. Subsequently, Ceng et al. [33] further presented and analyzed an iterative scheme for finding a common element of the solution set of the VIP (11), the solution set of the GSVI (14), and the fixed point set of a strictly pseudo-contractive mapping S : CC.

Theorem 5 (see [33], Theorem  3.1.)Let C be a nonempty closed convex subset of a real Hilbert space . Let A : C be α-inverse strongly monotone and Bi : C be βi-inverse strongly monotone for i = 1,2. Let S : CC be a k-strictly pseudo-contractive mapping such that Fix (S)∩Ξ∩VI(C, A) ≠ . Let Q : CC be a ρ-contraction with ρ ∈ [0, 1/2). For given x0C arbitrarily, let the sequences be generated by the relaxed extragradient iterative scheme:

()
where μi ∈ (0,2βi) for i = 1,2, and the following conditions hold for five sequences {λn}⊂(0, α] and {αn}, {βn}, {γn}, {δn}⊂[0,1]:
  • (i)

    βn + γn + δn = 1 and (γn + δn)kγn for all n ≥ 0;

  • (ii)

    lim nαn = 0 and ;

  • (iii)

    0 < lim  inf nβn ≤ limsup nβn < 1 and liminf nδn > 0;

  • (iv)

    lim n(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;

  • (v)

    0 < lim  inf nλn ≤ limsup nλn < α and lim n|λn+1λn| = 0.

Then the sequences converge strongly to the same point if and only if lim nun+1un∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Motivated and inspired by the research going on in this area, we propose and analyze the following relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of the GSVI (14), the solution set of the SFP (1), and the fixed point set of a strictly pseudo-contractive mapping S : C → C.

Algorithm 6. Let μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {βn}, {γn}, {δn}⊂[0,1] such that βn + γn + δn = 1 for all n ≥ 0. For given x0C arbitrarily, let be the sequences generated by the following relaxed extragradient iterative scheme with regularization:

()

Under mild assumptions, it is proven that the sequences converge strongly to the same point if and only if lim nun+1un∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Algorithm 7. Let μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {τn}, {βn}, {γn}, {δn}⊂[0,1] such that σn + τn ≤ 1 and βn + γn + δn = 1 for all n ≥ 0. For given x0C arbitrarily, let {xn}, {yn}, {zn} be the sequences generated by the following relaxed extragradient iterative scheme with regularization:

()

Also, under appropriate conditions, it is shown that the sequences {xn}, {yn}, {zn} converge strongly to the same point if and only if lim nzn+1zn∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Note that both [6, Theorem  5.7] and [18, Theorem  3.1] are weak convergence results for solving the SFP (1). Beyond question our strong convergence results are very interesting and quite valuable. Because our relaxed extragradient iterative schemes (17) and (18) with regularization involve a contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [6, Theorem  5.7] and [18, Theorem  3.1], respectively. Furthermore, the relaxed extragradient iterative scheme (16) is extended to develop our relaxed extragradient iterative schemes (17) and (18) with regularization. All in all, our results represent the modification, supplementation, extension, and improvement of [6, Theorem  5.7], [18, Theorem  3.1], and [33, Theorem  3.1].

2. Preliminaries

Let K be a nonempty, closed, and convex subset of a real Hilbert space . Now we present some known results and definitions which will be used in the sequel.

The metric (or nearest point) projection from ℋ onto K is the mapping PK : ℋ → K which assigns to each point x ∈ ℋ the unique point PKx ∈ K satisfying the property

∥x − PKx∥ = min_{y∈K} ∥x − y∥.

The following properties of projections are useful and pertinent to our purpose.

Proposition 8 (see [34].) For given x ∈ ℋ and z ∈ K:

  • (i)

    z = PKx ⇔ 〈x − z, y − z〉 ≤ 0,   ∀ y ∈ K;

  • (ii)

    z = PKx ⇔ ∥x − z∥² ≤ ∥x − y∥² − ∥y − z∥²,   ∀ y ∈ K;

  • (iii)

    〈PKx − PKy, x − y〉 ≥ ∥PKx − PKy∥²,   ∀ x, y ∈ ℋ, which hence implies that PK is nonexpansive and monotone.

Properties (i) and (iii) are checked numerically below for a half-space, where PK has a closed form.
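The sketch below performs this check: it uses the standard closed-form projection onto a half-space and verifies property (i) and the firm nonexpansiveness in property (iii) on random samples. The particular vector a, offset b, and sampling scheme are illustrative assumptions.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form metric projection onto K = {x : <a, x> <= b}."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

rng = np.random.default_rng(1)
a, b = np.array([1.0, -2.0, 0.5]), 1.0
for _ in range(1000):
    x, w = 5 * rng.standard_normal(3), 5 * rng.standard_normal(3)
    z, u = project_halfspace(x, a, b), project_halfspace(w, a, b)
    y = project_halfspace(5 * rng.standard_normal(3), a, b)       # an arbitrary point of K
    assert np.dot(x - z, y - z) <= 1e-9                           # (i): <x - P_K x, y - P_K x> <= 0
    assert np.dot(z - u, x - w) >= np.dot(z - u, z - u) - 1e-9    # (iii): P_K is firmly nonexpansive
print("Proposition 8 (i) and (iii) verified on random samples")
```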

Definition 9. A mapping T : ℋ → ℋ is said to be

  • (a)

    nonexpansive if

    ∥Tx − Ty∥ ≤ ∥x − y∥,   ∀ x, y ∈ ℋ;

  • (b)

    firmly nonexpansive if 2T − I is nonexpansive, or equivalently,

    〈x − y, Tx − Ty〉 ≥ ∥Tx − Ty∥²,   ∀ x, y ∈ ℋ;

alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = (1/2)(I + S),

where S : ℋ → ℋ is nonexpansive; projections are firmly nonexpansive.

Definition 10. Let T be a nonlinear operator with domain D(T) ⊆ ℋ and range R(T) ⊆ ℋ.

  • (a)

    T is said to be monotone if

    〈x − y, Tx − Ty〉 ≥ 0,   ∀ x, y ∈ D(T);

  • (b)

    given a number β > 0, T is said to be β-strongly monotone if

    〈x − y, Tx − Ty〉 ≥ β∥x − y∥²,   ∀ x, y ∈ D(T);

  • (c)

    given a number ν > 0, T is said to be ν-inverse strongly monotone (ν-ism) if

    〈x − y, Tx − Ty〉 ≥ ν∥Tx − Ty∥²,   ∀ x, y ∈ D(T).

It can be easily seen that if S is nonexpansive, then I − S is monotone. It is also easy to see that a projection PK is 1-ism.

Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see, for example, [35, 36].

Definition 11. A mapping T : ℋ → ℋ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,

T = (1 − α)I + αS,

where α ∈ (0,1) and S : ℋ → ℋ is nonexpansive. More precisely, when the last equality holds, we say that T is α-averaged. Thus firmly nonexpansive mappings (in particular, projections) are 1/2-averaged maps.
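As a quick consistency check of this definition and of the remark that projections are 1/2-averaged, the sketch below writes the ball projection as PK = (1/2)(I + S) with S = 2PK − I and verifies numerically that this reflection S is nonexpansive. The ball K and the random samples are illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

reflect = lambda x: 2.0 * project_ball(x) - x   # S = 2*P_K - I, so P_K = (1/2)(I + S)

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = 3 * rng.standard_normal(3), 3 * rng.standard_normal(3)
    # S nonexpansive  <=>  P_K is 1/2-averaged (hence firmly nonexpansive)
    assert np.linalg.norm(reflect(x) - reflect(y)) <= np.linalg.norm(x - y) + 1e-9
print("the reflection 2*P_K - I is nonexpansive, so P_K is 1/2-averaged")
```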

Proposition 12 (see [7].) Let T : ℋ → ℋ be a given mapping.

  • (i)

    T is nonexpansive if and only if the complement I − T is 1/2-ism.

  • (ii)

    If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism.

  • (iii)

    T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α ∈ (0,1), T is α-averaged if and only if I − T is 1/(2α)-ism.

Proposition 13 (see [7], [37].) Let S, T, V : ℋ → ℋ be given operators.

  • (i)

    If T = (1 − α)S + αV for some α ∈ (0,1) and if S is averaged and V is nonexpansive, then T is averaged.

  • (ii)

    T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.

  • (iii)

    If T = (1 − α)S + αV for some α ∈ (0,1) and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  • (iv)

    The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {Ti}_{i=1}^N is averaged, then so is the composite T1∘T2∘⋯∘TN. In particular, if T1 is α1-averaged and T2 is α2-averaged, where α1, α2 ∈ (0,1), then the composite T1∘T2 is α-averaged, where α = α1 + α2 − α1α2.

  • (v)

    If the mappings {Ti}_{i=1}^N are averaged and have a common fixed point, then

    ⋂_{i=1}^N Fix (Ti) = Fix (T1∘⋯∘TN).

The notation Fix (T) denotes the set of all fixed points of the mapping T, that is, Fix (T) = {x : Tx = x}.

It is clear that, in a real Hilbert space ℋ, S : C → C is k-strictly pseudo-contractive if and only if the following inequality holds:

〈Sx − Sy, x − y〉 ≤ ∥x − y∥² − ((1 − k)/2)∥(I − S)x − (I − S)y∥²,   ∀ x, y ∈ C.

This immediately implies that if S is a k-strictly pseudo-contractive mapping, then I − S is ((1 − k)/2)-inverse strongly monotone; for further detail, we refer to [38] and the references therein. It is well known that the class of strict pseudo-contractions strictly includes the class of nonexpansive mappings.
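A simple concrete example: on R⁴, the map Sx = −2x is a k-strict pseudo-contraction with k = 1/3 (and it is not nonexpansive), and I − S = 3I is (1 − k)/2 = 1/3-inverse strongly monotone. The sketch below checks both inequalities on random pairs of points; the example mapping is an illustrative assumption, not one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 1.0 / 3.0
S = lambda x: -2.0 * x            # k-strict pseudocontraction with k = 1/3, not nonexpansive
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.linalg.norm(S(x) - S(y)) ** 2
    rhs = np.linalg.norm(x - y) ** 2 + k * np.linalg.norm((x - S(x)) - (y - S(y))) ** 2
    assert lhs <= rhs + 1e-9                          # strict pseudocontraction inequality
    u, v = x - S(x), y - S(y)                         # values of I - S
    assert np.dot(u - v, x - y) >= 0.5 * (1 - k) * np.linalg.norm(u - v) ** 2 - 1e-9
print("S is a 1/3-strict pseudocontraction and I - S is 1/3-inverse strongly monotone")
```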

In order to prove the main result of this paper, the following lemmas will be required.

Lemma 14 (see [39].)Let {xn} and {yn} be bounded sequences in a Banach space X and let {βn} be a sequence in [0,1] with 0 < liminf nβn ≤ limsup nβn < 1. Suppose xn+1 = (1 − βn)yn + βnxn for all integers n ≥ 0 and limsup n(∥yn+1yn∥−∥xn+1xn∥) ≤ 0. Then, lim nynxn∥ = 0.

Lemma 15 (see [38], Proposition  2.1.)Let C be a nonempty closed convex subset of a real Hilbert space and S : CC a mapping.

  • (i)

    If S is a k-strict pseudo-contractive mapping, then S satisfies the Lipschitz condition

    ∥Sx − Sy∥ ≤ ((1 + k)/(1 − k))∥x − y∥,   ∀ x, y ∈ C;

  • (ii)

    If S is a k-strict pseudo-contractive mapping, then the mapping I − S is semiclosed at 0, that is, if {xn} is a sequence in C such that xn ⇀ x̃ weakly and (I − S)xn → 0 strongly, then Sx̃ = x̃.

  • (iii)

    If S is k-(quasi-)strict pseudo-contraction, then the fixed point set Fix (S) of S is closed and convex so that the projection P Fix (S) is well defined.

The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.

Lemma 16 (see [34].) Let {an} be a sequence of nonnegative real numbers satisfying the property

a_{n+1} ≤ (1 − sn)an + sn tn + rn,   ∀ n ≥ 0,

where {sn}⊂(0,1] and {tn} are such that

  • (i)

    ∑_{n=0}^∞ sn = ∞;

  • (ii)

    either limsup n→∞ tn ≤ 0 or ∑_{n=0}^∞ sn|tn| < ∞;

  • (iii)

    ∑_{n=0}^∞ rn < ∞, where rn ≥ 0,   ∀ n ≥ 0.

Then, lim n→∞ an = 0.
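A toy numerical illustration of Lemma 16, with sn = 1/(n + 1) (not summable), tn → 0, and summable rn; these specific sequences, and the use of equality in the recursion, are illustrative choices.

```python
a = 1.0
for n in range(200000):
    s = 1.0 / (n + 1)            # sum s_n = infinity
    t = 1.0 / (n + 1)            # t_n -> 0, so limsup t_n <= 0
    r = 1.0 / (n + 1) ** 2       # sum r_n < infinity
    a = (1 - s) * a + s * t + r  # the recursion of Lemma 16, taken with equality
print(a)                         # prints a value close to 0, as the lemma asserts
```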

Lemma 17 (see [26].) Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let S : C → C be a k-strictly pseudo-contractive mapping. Let γ and δ be two nonnegative real numbers such that (γ + δ)k ≤ γ. Then

∥γ(x − y) + δ(Sx − Sy)∥ ≤ (γ + δ)∥x − y∥,   ∀ x, y ∈ C.
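Continuing the example Sx = −2x (so k = 1/3) from the previous sketch, the condition (γ + δ)k ≤ γ holds, for instance, for γ = 0.2 and δ = 0.4, and the inequality of Lemma 17 can be checked numerically; the specific γ, δ, and S are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
S = lambda x: -2.0 * x                     # the 1/3-strict pseudocontraction from the earlier sketch
gamma, delta, k = 0.2, 0.4, 1.0 / 3.0      # (gamma + delta) * k = 0.2 <= gamma
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.linalg.norm(gamma * (x - y) + delta * (S(x) - S(y)))
    assert lhs <= (gamma + delta) * np.linalg.norm(x - y) + 1e-9   # inequality of Lemma 17
print("Lemma 17 inequality verified on random samples")
```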

The following lemma is an immediate consequence of an inner product.

Lemma 18. In a real Hilbert space ℋ, there holds the inequality

∥x + y∥² ≤ ∥x∥² + 2〈y, x + y〉,   ∀ x, y ∈ ℋ.

Let K be a nonempty closed convex subset of a real Hilbert space ℋ and let F : K → ℋ be a monotone mapping. The variational inequality problem (VIP) is to find x ∈ K such that

〈Fx, y − x〉 ≥ 0,   ∀ y ∈ K.

The solution set of the VIP is denoted by VI (K, F). It is well known that

x ∈ VI (K, F) ⟺ x = PK(x − λFx)   for some (equivalently, for all) λ > 0.

A set-valued mapping T : ℋ → 2^ℋ is called monotone if for all x, y ∈ ℋ, f ∈ Tx and g ∈ Ty imply that 〈x − y, f − g〉 ≥ 0. A monotone set-valued mapping T : ℋ → 2^ℋ is called maximal if its graph Gph(T) is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping T : ℋ → 2^ℋ is maximal if and only if, for (x, f) ∈ ℋ × ℋ, 〈x − y, f − g〉 ≥ 0 for every (y, g) ∈ Gph(T) implies that f ∈ Tx. Let F : K → ℋ be a monotone and Lipschitz continuous mapping and let NKv be the normal cone to K at v ∈ K, that is,

NKv = {w ∈ ℋ : 〈v − u, w〉 ≥ 0,   ∀ u ∈ K}.

Define

Tv = Fv + NKv if v ∈ K,   Tv = ∅ if v ∉ K.

It is known that in this case the mapping T is maximal monotone, and 0 ∈ Tv if and only if v ∈ VI (K, F); for further details, we refer to [40] and the references therein.

3. Main Results

In this section, we first prove the strong convergence of the sequences generated by the relaxed extragradient iterative algorithm (17) with regularization.

Theorem 19. Let C be a nonempty closed convex subset of a real Hilbert space 1. Let AB(1, 2), and let Bi : C1 be βi-inverse strongly monotone for i = 1,2. Let S : CC be a k-strictly pseudo-contractive mapping such that Fix (S)∩Ξ∩Γ ≠ . Let Q : CC be a ρ-contraction with ρ ∈ [0, 1/2). For given x0C arbitrarily, let the sequences be generated by the relaxed extragradient iterative algorithm (17) with regularization, where μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {βn}, {γn}, {δn}⊂[0,1] such that

  • (i)

    ;

  • (ii)

    βn + γn + δn = 1 and (γn + δn)kγn for all n ≥ 0;

  • (iii)

    lim nσn = 0 and ;

  • (iv)

    0 < lim  inf nβn ≤ lim  sup nβn < 1 and liminf nδn > 0;

  • (v)

    lim n(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;

  • (vi)

    0 < lim  inf nλn ≤ lim  sup nλn < 1/∥A2 and lim n | λn+1λn | = 0.

Then the sequences converge strongly to the same point if and only if lim nun+1un∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Proof. First, taking into account 0 < liminf nλn ≤ limsup nλn < 1/∥A2, without loss of generality we may assume that {λn}⊂[a, b] for some a, b ∈ (0, 1/∥A2).

Now, let us show that PC(I − λ∇fα) is ζ-averaged for each λ ∈ (0, 2/(α + ∥A∥²)), where

ζ = (2 + λ(α + ∥A∥²))/4 ∈ (0, 1).

Indeed, it is easy to see that ∇f = A*(I − PQ)A is 1/∥A∥²-ism, that is,

〈∇f(x) − ∇f(y), x − y〉 ≥ (1/∥A∥²)∥∇f(x) − ∇f(y)∥².
Observe that
()
Hence, it follows that ∇fα = αI + A*(I − PQ)A is 1/(α + ∥A∥²)-ism. Thus, λ∇fα is 1/(λ(α + ∥A∥²))-ism according to Proposition 12(ii). By Proposition 12(iii), the complement I − λ∇fα is (λ(α + ∥A∥²)/2)-averaged. Therefore, noting that PC is 1/2-averaged and utilizing Proposition 13(iv), we know that for each λ ∈ (0, 2/(α + ∥A∥²)), PC(I − λ∇fα) is ζ-averaged with

ζ = 1/2 + λ(α + ∥A∥²)/2 − (1/2)·(λ(α + ∥A∥²)/2) = (2 + λ(α + ∥A∥²))/4 ∈ (0, 1).
This shows that PC(Iλfα) is nonexpansive. Furthermore, for {λn}⊂[a, b] with a, b ∈ (0, 1/∥A2), we have
()
Without loss of generality we may assume that
()
Consequently, it follows that for each integer n ≥ 0, PC(I − λn∇fαn) is ζn-averaged with

ζn = 1/2 + λn(αn + ∥A∥²)/2 − (1/2)·(λn(αn + ∥A∥²)/2) = (2 + λn(αn + ∥A∥²))/4 ∈ (0, 1).

This immediately implies that PC(I − λn∇fαn) is nonexpansive for all n ≥ 0.

Next we divide the remainder of the proof into several steps.

Step  1. {xn} is bounded.

Indeed, take an arbitrary p ∈ Fix (S)∩Ξ∩Γ. Then, we get Sp = p,  PC(Iλf)p = p for λ ∈ (0, 2/∥A2), and

()
From (17) it follows that
()
Utilizing Lemma 18 we also have
()
For simplicity, we write
()
for each n ≥ 0. Then for each n ≥ 0. Since Bi : C1 is βi-inverse strongly monotone for i = 1,2 and 0 < μi < 2βi for i = 1,  2, we know that for all n ≥ 0,
()
Furthermore, by Proposition 8(ii), we have
()
Further, by Proposition 8(i), we have
()
So, from (45) we obtain
()
Hence it follows from (48) and (51) that
()
Since (γn + δn)kγn for all n ≥ 0, utilizing Lemma 17 we obtain from (52)
()
Now, we claim that
()
As a matter of fact, if n = 0, then it is clear that (54) is valid, that is,
()
Assume that (54) holds for n ≥ 1, that is,
()
Then, we conclude from (53) and (56) that
()
By induction, we conclude that (54) is valid. Hence, {xn} is bounded. Since and B2 are Lipschitz continuous, it is easy to see that and are bounded, where for all n ≥ 0.

Step  2.   limnxn+1xn∥ = 0.

Indeed, define xn+1 = βnxn + (1 − βn)wn for all n ≥ 0. It follows that

()
Since (γn + δn)kγn for all n ≥ 0, utilizing Lemma 17 we have
()
Next, we estimate ∥yn+1yn∥. Observe that
()
()
and hence
()
Combining (60) with (61), we get
()
This together with (62) implies that
()
Hence it follows from (58), (59), and (64) that
()
From (60), we deduce from condition (vi) that . Since {xn}, {un}, and are bounded, it follows from conditions (i), (iii), (v), and (vi) that
()
Hence by Lemma 14 we get lim nwnxn∥ = 0. Thus,
()

Step  3. limnB2xnB2p∥ = 0,  , and , where q = PC(pμ2B2p).

Indeed, utilizing Lemma 17 and the convexity of ∥·∥2, we obtain from (17), (48), and (51) that

()
Therefore,
()
Since αn → 0, σn → 0,  ∥xnxn+1∥→0,  liminf n(γn + δn) > 0 and {λn}⊂[a, b] for some a, b ∈ (0, 1/∥A2), it follows that
()

Step  4. limnSynyn∥ = 0.

Indeed, observe that

()
This together with implies that and hence . By firm nonexpansiveness of PC, we have
()
that is,
()
Moreover, using the argument technique similar to the above one, we derive
()
that is,
()
Utilizing (46), (73), and (75), we have
()
Thus from (17) and (76) it follows that
()
which hence implies that
()
Since limsup nβn < 1,  {λn}⊂[a, b],  αn → 0,  σn → 0,  ∥B2xnB2p∥→0,  ,   and ∥xn+1xn∥→0, it follows from the boundedness of , and that
()
Consequently, it immediately follows that
()
This together with implies that
()
Since
()
it follows that
()

Step  5. where .

Indeed, since {xn} is bounded, there exists a subsequence of {xn} such that

()
Also, since H is reflexive and {yn} is bounded, without loss of generality we may assume that weakly for some . First, it is clear from Lemma 15 that . Now let us show that . We note that
()
where G : CC is defined as such that in Lemma 4. According to Lemma 15 we obtain . Further, let us show that . As a matter of fact, since and ∥xnyn∥→0, we deduce that weakly and weakly. Let
()
where NCv = {w1 : 〈vu, w〉≥0,   ∀ uC}. Then, T is maximal monotone and 0 ∈ Tv if and only if vVI (C, ∇f); see [40] for more details. Let (v, w) ∈ Gph(T). Then, we have
()
and hence
()
So, we have
()
On the other hand, from
()
we have
()
and, hence,
()
Therefore, from
()
we have
()
Hence, we get
()
Since T is maximal monotone, we have , and, hence, . Thus it is clear that . Therefore, . Consequently, in terms of Proposition 8(i) we obtain from (84) that
()

Step 6. .

Indeed, from (48) and (51) it follows that

()
Note that
()
Utilizing Lemmas 17 and 18, we obtain from (48) and the convexity of ∥·∥2
()
Note that liminf n(1 − 2ρ)(γn + δn) > 0. It follows that . It is clear that
()
because and lim nxnyn∥ = 0. In addition, note also that {λn}⊂[a, b],   and is bounded. Hence we get . Therefore, all conditions of Lemma 16 are satisfied. Consequently, we immediately deduce that as n. This completes the proof.

Corollary 20. Let C be a nonempty closed convex subset of a real Hilbert space 1. Let AB(1, 2) and Bi : C1 be βi-inverse strongly monotone for i = 1,2. Let S : CC be a k-strictly pseudo-contractive mapping such that Fix (S)∩Ξ∩Γ ≠ . For fixed uC and given x0C arbitrarily, let the sequences be generated iteratively by

()
where μi ∈ (0,2βi) for i = 1,2, and the following conditions hold for six sequences {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {βn}, {γn}, {δn}⊂[0,1]:
  • (i)

    ;

  • (ii)

    βn + γn + δn = 1 and (γn + δn)kγn for all n ≥ 0;

  • (iii)

    lim nσn = 0 and ;

  • (iv)

    0 < liminf nβn ≤ limsup nβn < 1 and liminf nδn > 0;

  • (v)

    lim n(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;

  • (vi)

    0 < liminf nλn ≤ limsup nλn < 1/∥A2  and lim n | λn+1λn | = 0.

Then the sequences converge strongly to the same point if and only if lim nun+1un∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Next, utilizing Corollary 20 we give the following improvement and extension of the main result in [18] (i.e., [18, Theorem  3.1]).

Corollary 21. Let C be a nonempty closed convex subset of a real Hilbert space 1. Let AB(1, 2) and S : CC be a nonexpansive mapping such that Fix (S)∩Γ ≠ . For fixed uC and given x0C arbitrarily, let the sequences be generated iteratively by

()
where the following conditions hold for four sequences {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {βn}⊂[0,1]:
  • (i)

    ;

  • (ii)

    lim nσn = 0 and ;

  • (iii)

    0 < liminf nβn ≤ limsup nβn < 1;

  • (iv)

    0 < liminf nλn ≤ limsup nλn < 1/∥A2 and lim n | λn+1λn | = 0.

Then the sequences converge strongly to the same point if and only if lim nun+1un∥ = 0.

Proof. In Corollary 20, put B1 = B2 = 0 and γn = 0. Then, Ξ = C,  βn + δn = 1 for all n ≥ 0, and the iterative scheme (101) is equivalent to

()
This is equivalent to (102). Since S is a nonexpansive mapping, S must be a k-strictly pseudo-contractive mapping with k = 0. In this case, it is easy to see that conditions (i)–(vi) in Corollary 20 are all satisfied. Therefore, in terms of Corollary 20, we obtain the desired result.

Now, we are in a position to prove the strong convergence of the sequences generated by the relaxed extragradient iterative algorithm (18) with regularization.

Theorem 22. Let C be a nonempty closed convex subset of a real Hilbert space 1. Let AB(1, 2) and Bi : C1 be βi-inverse strongly monotone for i = 1,2. Let S : CC be a k-strictly pseudocontractive mapping such that Fix (S)∩Ξ∩Γ ≠ . Let Q : CC be a ρ-contraction with ρ ∈ [0, 1/2). For given x0C arbitrarily, let the sequences {xn}, {yn}, {zn} be generated by the relaxed extragradient iterative algorithm (18) with regularization, where μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {τn}, {βn}, {γn}, {δn}⊂[0,1] such that

  • (i)

    ;

  • (ii)

    σn + τn ≤ 1, βn + γn + δn = 1 and (γn + δn)kγn for all n ≥ 0;

  • (iii)

    lim nσn = 0 and ;

  • (iv)

    0 < liminf nτn ≤ limsup nτn < 1 and lim n | τn+1τn | = 0;

  • (v)

    0 < liminf nβn ≤ limsup nβn < 1 and liminf nδn > 0;

  • (vi)

    lim n(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;

  • (vii)

    0 < liminf nλn ≤ limsup nλn < 1/∥A2 and lim n | λn+1λn | = 0.

Then the sequences {xn}, {yn}, {zn} converge strongly to the same point if and only if lim nzn+1zn∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Proof. First, taking into account 0 < liminf n→∞ λn ≤ limsup n→∞ λn < 1/∥A∥², without loss of generality we may assume that {λn}⊂[a, b] for some a, b ∈ (0, 1/∥A∥²). Repeating the same argument as that in the proof of Theorem 19, we can show that PC(I − λ∇fα) is ζ-averaged for each λ ∈ (0, 2/(α + ∥A∥²)), where ζ = (2 + λ(α + ∥A∥²))/4. Further, repeating the same argument as that in the proof of Theorem 19, we can also show that for each integer n ≥ 0, PC(I − λn∇fαn) is ζn-averaged with ζn = (2 + λn(αn + ∥A∥²))/4 ∈ (0,1).

Next we divide the remainder of the proof into several steps.

Step  1. {xn} is bounded.

Indeed, take p ∈ Fix (S)∩Ξ∩Γ arbitrarily. Then Sp = p, PC(Iλf)p = p for λ ∈ (0, 2/∥A2), and

()
Utilizing the arguments similar to those of (45) and (46) in the proof of Theorem 19, from (18) we can obtain
()
()
For simplicity, we write q = PC(pμ2B2p),  ,
()
for each n ≥ 0. Then for each n ≥ 0. Since Bi : C1 is βi-inverse strongly monotone and 0 < μi < 2βi for i = 1,2, utilizing the argument similar to that of (48) in the proof of Theorem 19, we can obtain that for all n ≥ 0,
()
Furthermore, utilizing Proposition 8(i)-(ii) and the argument similar to that of (51) in the proof of Theorem 19, from (105) we obtain
()
Hence it follows from (105), (108), and (109) that
()
Since (γn + δn)kγn for all n ≥ 0, utilizing Lemma 17 we obtain from (110)
()
Repeating the same argument as that of (54) in the proof of Theorem 19, by induction we can prove that
()
Thus, {xn} is bounded. Since and B2 are Lipschitz continuous, it is easy to see that , and are bounded, where for all n ≥ 0.

Step  2. lim nxn+1xn∥ = 0.

Indeed, define xn+1 = βnxn + (1 − βn)wn for all n ≥ 0. Then, utilizing the arguments similar to those of (58)–(61) in the proof of Theorem 19, we can obtain that

()
(due to Lemma 17)
()
()
So, we have
()
This together with (114) implies that
()
Hence it follows from (113), and (117) that
()
Since {xn}, {yn}, {zn}, {un}, and are bounded, it follows from conditions (i), (iii), (iv), (vi), and (vii) that
()
Hence by Lemma 14 we get lim nwnxn∥ = 0. Thus,
()
Step  3. lim nB2znB2p∥ = 0,  , and lim nxnzn∥ = 0, where q = PC(pμ2B2p).

Indeed, utilizing Lemma 17 and the convexity of ∥·∥2, we obtain from (18) and (106)–(109) that

()
Therefore,
()
Since αn → 0, σn → 0, ∥xnxn+1∥→0, liminf n(γn + δn) > 0, {λn}⊂[a, b], and 0 < liminf nτn ≤ limsup nτn < 1, it follows that
()

Step  4. lim nSynyn∥ = 0.

Indeed, observe that

()
This together with ∥znxn∥→0 implies that and hence . Utilizing the arguments similar to those of (73) and (75) in the proof of Theorem 19 we can prove that
()
Utilizing (106), (109), (125), we have
()

Thus from (18) and (126) it follows that

()
which hence implies that
()
Since limsup nβn < 1,  limsup nτn < 1,  {λn}⊂[a, b],  αn → 0,  σn → 0,  ∥B2znB2p∥→0,   and ∥xn+1xn∥→0, it follows from the boundedness of {xn}, {un}, {zn}, and that
()
Consequently, it immediately follows that
()
Also, note that
()
This together with implies that
()
Since
()
it follows that
()

Step  5. where .

Indeed, since {xn} is bounded, there exists a subsequence of {xn} such that

()
Also, since H is reflexive and {xn} is bounded, without loss of generality we may assume that weakly for some . Taking into account that ∥xnyn∥→0 and ∥xnzn∥→0 as n, we deduce that weakly and weakly.

First, it is clear from Lemma 15 and ∥Synyn∥→0 that . Now let us show that . Note that

()
where G : CC is defined as that in Lemma 4. According to Lemma 15 we get . Further, let us show that . As a matter of fact, define
()
where NCv = {w1 : 〈vu, w〉≥0,   ∀ uC}. Then, T is maximal monotone and 0 ∈ Tv if and only if vVI (C, ∇f); see [40] for more details. Let (v, w) ∈ Gph(T). Then, we have
()
Observe that
()
Utilizing the arguments similar to those of Step  5 in the proof of Theorem 19 we can prove that
()
Since T is maximal monotone, we have , and, hence, . Thus it is clear that . Therefore, . Consequently, in terms of Proposition 8 (i) we obtain from (135) that
()

Step  6. .

Indeed, observe that

()
Utilizing Lemmas 17 and 18, we obtain from (106)–(109) and the convexity of ∥·∥2 that
()
Note that liminf n(1 − 2ρ)(γn + δn) > 0. It follows that . It is clear that
()
because and lim nxnyn∥ = 0. In addition, note also that and {zn} is bounded. Hence we get . Therefore, all conditions of Lemma 16 are satisfied. Consequently, we immediately deduce that as n. In the meantime, taking into account that ∥xnyn∥→0 and ∥xnzn∥→0 as n, we infer that
()
This completes the proof.

Corollary 23. Let C be a nonempty closed convex subset of a real Hilbert space 1. Let AB(1, 2) and let Bi : C1 be βi-inverse strongly monotone for i = 1,2. Let S : CC be a k-strictly pseudocontractive mapping such that Fix (S)∩Ξ∩Γ ≠ . For fixed uC and given x0C arbitrarily, let the sequences {xn}, {yn}, {zn} be generated iteratively by

()
where μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {τn}, {βn}, {γn}, {δn}⊂[0,1] such that
  • (i)

    ;

  • (ii)

    σn + τn ≤ 1, βn + γn + δn = 1 and (γn + δn)kγn for all n ≥ 0;

  • (iii)

    lim nσn = 0 and ;

  • (iv)

    0 < liminf nτn ≤ limsup nτn < 1 and lim n | τn+1τn | = 0;

  • (v)

    0 < liminf nβn ≤ limsup nβn < 1 and liminf nδn > 0;

  • (vi)

    lim n(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;

  • (vii)

    0 < liminf nλn ≤ limsup nλn < 1/∥A2 and lim n | λn+1λn | = 0.

Then the sequences {xn}, {yn}, {zn} converge strongly to the same point if and only if lim nzn+1zn∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Corollary 24. Let C be a nonempty closed convex subset of a real Hilbert space 1. Let AB(1, 2) and let Bi : C1 be βi-inverse strongly monotone for i = 1,2. Let S : CC be a nonexpansive mapping such that Fix (S)∩Ξ∩Γ ≠ . Let Q : CC be a ρ-contraction with ρ ∈ [0, 1/2). For given x0C arbitrarily, let the sequences {xn}, {yn}, {zn} be generated iteratively by

()
where μi ∈ (0,2βi) for i = 1,2, {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {τn}, {βn}, {γn}, {δn}⊂[0,1] such that
  • (i)

    ;

  • (ii)

    σn + τn ≤ 1, βn + γn + δn = 1 and (γn + δn)kγn for all n ≥ 0;

  • (iii)

    lim nσn = 0 and ;

  • (iv)

    0 < liminf nτn ≤ limsup nτn < 1 and lim n | τn+1τn | = 0;

  • (v)

    0 < liminf nβn ≤ limsup nβn < 1 and liminf nδn > 0;

  • (vi)

    lim n(γn+1/(1 − βn+1) − γn/(1 − βn)) = 0;

  • (vii)

    0 < liminf nλn ≤ limsup nλn < 1/∥A2 and lim n | λn+1λn | = 0.

Then the sequences {xn}, {yn}, {zn} converge strongly to the same point if and only if lim nzn+1zn∥ = 0. Furthermore, is a solution of the GSVI (14), where .

Next, utilizing Corollary 23 one gives the following improvement and extension of the main result in [18] (i.e., [18, Theorem  3.1]).

Corollary 25. Let C be a nonempty closed convex subset of a real Hilbert space 1. Let AB(1, 2) and let S : CC be a nonexpansive mapping such that Fix (S)∩Γ ≠ . For fixed uC and given x0C arbitrarily, let the sequences {xn}, {zn} be generated iteratively by

()
where {αn}⊂(0, ), {λn}⊂(0, 1/∥A2) and {σn}, {τn}, {βn}⊂[0,1] such that
  • (i)

    ;

  • (ii)

    σn + τn ≤ 1 for all n ≥ 0;

  • (iii)

    lim nσn = 0 and ;

  • (iv)

    0 < liminf nτn ≤ limsup nτn < 1 and lim n | τn+1τn | = 0;

  • (v)

    0 < liminf nβn ≤ limsup nβn < 1;

  • (vi)

    0 < liminf nλn ≤ limsup nλn < 1/∥A2 and lim n | λn+1λn | = 0.

Then the sequences {xn}, {zn} converge strongly to the same point if and only if lim nzn+1zn∥ = 0.

Proof. In Corollary 23, put B1 = B2 = 0 and γn = 0. Then, Ξ = C, βn + δn = 1, PC[PC(znμ2B2zn) − μ1B1PC(znμ2B2zn)] = zn, and the iterative scheme (146) is equivalent to

()
This is equivalent to (148). Since S is a nonexpansive mapping, S must be a k-strictly pseudocontractive mapping with k = 0. In this case, it is easy to see that conditions (i)–(vii) in Corollary 23 are all satisfied. Therefore, in terms of Corollary 23, we obtain the desired result.

Remark 26. Our Theorems 19 and 22 improve, extend, and develop [6, Theorem  5.7], [18, Theorem  3.1], and [33, Theorem  3.1] in the following aspects.

  • (i)

    Because both [6, Theorem  5.7] and [18, Theorem  3.1] are weak convergence results for solving the SFP, beyond question, our Theorems 19 and 22 as strong convergence results are very interesting and quite valuable.

  • (ii)

    The problem of finding an element of Fix (S)∩Ξ∩Γ in our Theorems 19 and 22 is more general than the corresponding problems in [6, Theorem  5.7] and [18, Theorem  3.1], respectively.

  • (iii)

    The relaxed extragradient iterative method for finding an element of Fix (S)∩Ξ∩VI (C, A) in [33, Theorem  3.1] is extended to develop the relaxed extragradient method with regularization for finding an element of Fix (S)∩Ξ∩Γ in our Theorem 19.

  • (iv)

    The proof of our Theorems 19 and 22 is very different from that of [33, Theorem  3.1] because our argument technique depends on Lemma 16, the restriction on the regularization parameter sequence {αn}, and the properties of the averaged mappings to a great extent.

  • (v)

    Because our iterative schemes (17) and (18) involve a contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S, and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [6, Theorem  5.7] and [18, Theorem  3.1], respectively.

Acknowledgment

The work of L. C. Ceng was partially supported by the National Science Foundation of China (11071169) and the Ph.D. Program Foundation of the Ministry of Education of China (20123127110002). The work of J. C. Yao was partially supported by Grant NSC 99-2115-M-037-002-MY3 of Taiwan. For A. Petruşel, this work was possible with the financial support of a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0094.
