Volume 2011, Issue 1 619813
Research Article
Open Access

A General Iterative Algorithm for Generalized Mixed Equilibrium Problems and Variational Inclusions Approach to Variational Inequalities

Thanyarat Jitpeera


Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), Bangmod, Thrungkru, Bangkok 10140, Thailand

Poom Kumam

Corresponding Author


Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), Bangmod, Thrungkru, Bangkok 10140, Thailand

First published: 13 April 2011
Citations: 4
Academic Editor: Vittorio Colao

Abstract

We introduce a new general iterative method for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of a generalized mixed equilibrium problem, and the set of solutions of a variational inclusion for a β-inverse-strongly monotone mapping in a real Hilbert space. We prove that the sequence converges strongly to a common element of the above three sets under some mild conditions. Our results improve and extend the corresponding results of Marino and Xu (2006), Su et al. (2008), Klin-eam and Suantai (2009), Tan and Chang (2011), and some other authors.

1. Introduction

Let C be a closed convex subset of a real Hilbert space H with the inner product 〈·, ·〉 and the norm ∥·∥. Let F be a bifunction of C × C into ℝ, where ℝ is the set of real numbers, Ψ : C → H a mapping, and φ : C → ℝ a real-valued function. The generalized mixed equilibrium problem is to find x ∈ C such that
F(x, y) + 〈Ψx, y − x〉 + φ(y) − φ(x) ≥ 0, ∀y ∈ C. (1.1)
The set of solutions of (1.1) is denoted by GMEP(F, φ, Ψ), that is,
GMEP(F, φ, Ψ) = {x ∈ C : F(x, y) + 〈Ψx, y − x〉 + φ(y) − φ(x) ≥ 0, ∀y ∈ C}. (1.2)
If F ≡ 0, problem (1.1) reduces to the mixed variational inequality of Browder type [1]: find x ∈ C such that
〈Ψx, y − x〉 + φ(y) − φ(x) ≥ 0, ∀y ∈ C. (1.3)
The set of solutions of (1.3) is denoted by MVI(C, φ, Ψ).
If Ψ ≡ 0 and φ ≡ 0, problem (1.1) reduces to the equilibrium problem [2]: find x ∈ C such that
F(x, y) ≥ 0, ∀y ∈ C. (1.4)
The set of solutions of (1.4) is denoted by EP(F). This problem contains fixed point problems and includes as special cases numerous problems in physics, optimization, and economics. Some methods have been proposed to solve the equilibrium problem; see [3–5].
If F ≡ 0 and φ ≡ 0, problem (1.1) reduces to the Hartmann-Stampacchia variational inequality [6]: find x ∈ C such that
〈Ψx, y − x〉 ≥ 0, ∀y ∈ C. (1.5)
The set of solutions of (1.5) is denoted by VI(C, Ψ). The variational inequality has been extensively studied in the literature [7].
If F ≡ 0 and Ψ ≡ 0, problem (1.1) reduces to the minimization problem: find x ∈ C such that
φ(y) − φ(x) ≥ 0, ∀y ∈ C. (1.6)
The set of solutions of (1.6) is denoted by Arg min  (φ).
Iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems. Convex minimization problems have a great impact and influence on the development of almost all branches of pure and applied sciences. A typical problem is to minimize a quadratic function over the set of the fixed points of a nonexpansive mapping on a real Hilbert space H:
min_{x∈F(S)} (1/2)〈Ax, x〉 − 〈x, y〉, (1.7)
where A is a linear bounded operator, F(S) is the fixed point set of a nonexpansive mapping S, and y is a given point in H [8].
Recall that a mapping S : C → C is said to be nonexpansive if
∥Sx − Sy∥ ≤ ∥x − y∥ (1.8)
for all x, y ∈ C. If C is bounded, closed, and convex and S is a nonexpansive mapping of C into itself, then F(S) is nonempty [9]. We denote weak convergence and strong convergence by ⇀ and →, respectively. A mapping A of C into H is called monotone if
〈Ax − Ay, x − y〉 ≥ 0 (1.9)
for all x, y ∈ C. A mapping A of C into H is called α-inverse-strongly monotone if there exists a positive real number α such that
〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥² (1.10)
for all x, y ∈ C. It is obvious that any α-inverse-strongly monotone mapping A is monotone and Lipschitz continuous. A linear bounded operator A is strongly positive if there exists a constant γ̄ > 0 with the property
〈Ax, x〉 ≥ γ̄∥x∥² (1.11)
for all x ∈ H. A self-mapping f : C → C is a contraction on C if there exists a constant α ∈ (0,1) such that
∥f(x) − f(y)∥ ≤ α∥x − y∥ (1.12)
for all x, y ∈ C. We use ∏C to denote the collection of all contractions on C. Note that each f ∈ ∏C has a unique fixed point in C.
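As a minimal numerical sketch (not from the paper; the particular contraction is illustrative), the unique fixed point of a contraction can be located by Picard iteration x_{k+1} = f(x_k), which is the basic mechanism behind all of the iterative schemes recalled below:

```python
def picard(f, x0, n_iter=100):
    """Picard iteration x_{k+1} = f(x_k) for a contraction f."""
    x = x0
    for _ in range(n_iter):
        x = f(x)
    return x

# f(x) = 0.5*x + 1 is a contraction on R with coefficient alpha = 0.5;
# its unique fixed point is x = 2.
x_star = picard(lambda x: 0.5 * x + 1.0, x0=10.0)
```

The error contracts by the factor α at every step, so 100 iterations reduce the initial error 8 to roughly 8·0.5¹⁰⁰.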
Let B : H → H be a single-valued nonlinear mapping and M : H → 2^H a set-valued mapping. The variational inclusion problem is to find x ∈ H such that
θ ∈ Bx + Mx, (1.13)
where θ is the zero vector in H. The set of solutions of problem (1.13) is denoted by I(B, M). The variational inclusion has been extensively studied in the literature; see, for example, [10–13] and the references therein.

A set-valued mapping M : H → 2^H is called monotone if, for all x, y ∈ H, f ∈ M(x) and g ∈ M(y) imply 〈x − y, f − g〉 ≥ 0. A monotone mapping M is maximal if its graph G(M) := {(x, f) ∈ H × H : f ∈ M(x)} is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping M is maximal if and only if, for (x, f) ∈ H × H, 〈x − y, f − g〉 ≥ 0 for all (y, g) ∈ G(M) implies f ∈ M(x).

Let B be an inverse-strongly monotone mapping of C into H, let N_C v be the normal cone to C at v ∈ C, that is, N_C v = {w ∈ H : 〈v − u, w〉 ≥ 0, ∀u ∈ C}, and define
Tv = Bv + N_C v if v ∈ C,  Tv = ∅ if v ∉ C. (1.14)
Then, T is a maximal monotone and θTv if and only if v ∈ VI(C, B) [14].
Let M : H → 2^H be a set-valued maximal monotone mapping; then the single-valued mapping J_{M,λ} : H → H defined by
J_{M,λ}(x) = (I + λM)^{−1}(x), x ∈ H, (1.15)
is called the resolvent operator associated with M, where λ is any positive number and I is the identity mapping. It is worth mentioning that the resolvent operator is nonexpansive and 1-inverse-strongly monotone and that a solution of problem (1.13) is a fixed point of the operator J_{M,λ}(I − λB) for all λ > 0 [15].
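This fixed-point characterization can be tried numerically. In the following sketch (all concrete choices are illustrative, not from the paper), H = ℝ, B(x) = x − 2 is 1-inverse-strongly monotone, and M is the normal cone to C = [0, 5], whose resolvent J_{M,λ} is the metric projection onto C; iterating J_{M,λ}(I − λB) then converges to x = 2, the unique solution of θ ∈ Bx + Mx:

```python
def project_C(x, lo=0.0, hi=5.0):
    """Projection onto C = [lo, hi]; the resolvent of the normal cone to C."""
    return min(max(x, lo), hi)

def forward_backward(x0, lam=0.5, n_iter=200):
    """Iterate x <- J_{M,lam}(x - lam * B(x)) with B(x) = x - 2."""
    x = x0
    for _ in range(n_iter):
        x = project_C(x - lam * (x - 2.0))
    return x
```

With λ ∈ (0, 2β) = (0, 2) the inner map is a contraction here, so convergence is geometric.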
In 2000, Moudafi [16] introduced the viscosity approximation method for nonexpansive mappings and proved that, if H is a real Hilbert space, the sequence {x_n} defined by the iterative method below, with the initial guess x0 ∈ C chosen arbitrarily,
x_{n+1} = α_n f(x_n) + (1 − α_n)Sx_n, n ≥ 0, (1.16)
where {α_n} ⊂ (0,1) satisfies certain conditions, converges strongly to a fixed point of S (say x̃), which is the unique solution of the following variational inequality:
〈(I − f)x̃, x − x̃〉 ≥ 0, ∀x ∈ F(S). (1.17)
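A one-dimensional sketch of Moudafi's scheme (illustrative choices, not from the paper): take S to be the projection onto [1, 3], which is nonexpansive with F(S) = [1, 3], and f(x) = x/2. The unique solution of 〈(I − f)x̃, x − x̃〉 ≥ 0 over F(S) is x̃ = 1, and the iteration drifts there:

```python
def S_proj(x):
    """Projection onto [1, 3]: a nonexpansive mapping with F(S) = [1, 3]."""
    return min(max(x, 1.0), 3.0)

def viscosity(x0, n_iter=20000):
    """Viscosity iteration x_{n+1} = a_n f(x_n) + (1 - a_n) S x_n."""
    f = lambda x: 0.5 * x          # contraction with coefficient 1/2
    x = x0
    for n in range(n_iter):
        a = 1.0 / (n + 2)          # a_n -> 0 and sum a_n = infinity
        x = a * f(x) + (1 - a) * S_proj(x)
    return x
```

The viscosity term selects the particular fixed point solving the variational inequality; the residual decays only like α_n, so many iterations are needed for high accuracy.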
In 2006, Marino and Xu [8] introduced a general iterative method for nonexpansive mappings. They defined the sequence {x_n} generated by the algorithm, for x0 ∈ C,
x_{n+1} = α_n γf(x_n) + (I − α_n A)Sx_n, n ≥ 0, (1.18)
where {α_n} ⊂ (0,1) and A is a strongly positive linear bounded operator. They proved that, if C = H and the sequence {α_n} satisfies appropriate conditions, then the sequence {x_n} generated by (1.18) converges strongly to a fixed point of S (say x̃), which is the unique solution of the following variational inequality:
〈(A − γf)x̃, x − x̃〉 ≥ 0, ∀x ∈ F(S), (1.19)
which is the optimality condition for the minimization problem
min_{x∈F(S)} (1/2)〈Ax, x〉 − h(x), (1.20)
where h is a potential function for γf (i.e., h′(x) = γf(x) for x ∈ H).
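The same one-dimensional setting illustrates the Marino-Xu scheme (again an illustrative sketch, not from the paper): with A = 2I (strongly positive, γ̄ = 2), γ = 1, and f(x) = x/2 (so that γ < γ̄/α = 4), the limit is the solution q = 1 of 〈(A − γf)q, x − q〉 ≥ 0 over F(S) = [1, 3]:

```python
def general_iterative(x0, n_iter=20000):
    """x_{n+1} = a_n*gamma*f(x_n) + (I - a_n A) S x_n with A = 2I, gamma = 1."""
    S = lambda x: min(max(x, 1.0), 3.0)   # nonexpansive, F(S) = [1, 3]
    f = lambda x: 0.5 * x                 # contraction, alpha = 1/2
    x = x0
    for n in range(n_iter):
        a = 1.0 / (2 * (n + 2))           # keeps a_n < ||A||^{-1} = 1/2
        x = a * 1.0 * f(x) + (1.0 - 2.0 * a) * S(x)
    return x
```

Replacing A by I and γ by 1 recovers Moudafi's viscosity method as a special case.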
For finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of variational inequalities, let P_C be the projection of H onto C. In 2005, Iiduka and Takahashi [17] introduced the following iterative process for x0 ∈ C:
x_{n+1} = α_n u + (1 − α_n)S P_C(x_n − λ_n A x_n), n ≥ 0, (1.21)
where u ∈ C, {α_n} ⊂ (0,1), and {λ_n} ⊂ [a, b] for some a, b with 0 < a < b < 2β. They proved that, under certain appropriate conditions imposed on {α_n} and {λ_n}, the sequence {x_n} generated by (1.21) converges strongly to a common element of the set of fixed points of the nonexpansive mapping and the set of solutions of the variational inequality for an inverse-strongly monotone mapping (say x̃), which solves some variational inequality
(1.22)
In 2008, Su et al. [18] introduced the following iterative scheme by the viscosity approximation method in a real Hilbert space: x1, u_n ∈ H and
(1.23)
for all n, where {α_n} ⊂ [0,1) and {r_n} ⊂ (0, ∞) satisfy some appropriate conditions. Furthermore, they proved that {x_n} and {u_n} converge strongly to the same point z ∈ F(S)∩VI(C, A)∩EP(F), where z = P_{F(S)∩VI(C,A)∩EP(F)}f(z).
In 2011, Tan and Chang [12] introduced the following iterative process for a sequence of nonexpansive mappings {T_n : C → C}. Let {x_n} be the sequence defined by
(1.24)
where {α_n} ⊂ (0,1), λ ∈ (0,2α], and μ ∈ (0,2β]. The sequence {x_n} defined by (1.24) converges strongly to a common element of the set of common fixed points of the nonexpansive mappings, the set of solutions of the variational inequality, and the set of solutions of the generalized equilibrium problem.
In this paper, we modify the iterative methods (1.18), (1.23), and (1.24) by proposing the following new general viscosity iterative method: x0, u_n ∈ C and
F(u_n, y) + 〈Ψx_n, y − u_n〉 + φ(y) − φ(u_n) + (1/r_n)〈y − u_n, u_n − x_n〉 ≥ 0, ∀y ∈ C,
x_{n+1} = α_n γf(x_n) + (I − α_n A)S J_{M,λ}(u_n − λBu_n), n ≥ 0, (1.25)
for all n, where {α_n} ⊂ (0,1), {r_n} ⊂ (0,2σ), and λ ∈ (0,2β) satisfy some appropriate conditions. The purpose of this paper is to show that, under some control conditions, the sequence {x_n} converges strongly to a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of the generalized mixed equilibrium problem, and the set of solutions of the variational inclusion in a real Hilbert space.
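A one-dimensional sketch of the proposed scheme with deliberately simple instances (all choices illustrative, not from the paper): F ≡ 0 and φ ≡ 0, so the equilibrium step reduces to a projection; Ψ(x) = B(x) = x − 2 (both 1-inverse-strongly monotone); M the normal cone to C = [0, 5], so J_{M,λ} is the projection onto C; S = I, A = I, γ = 1, and f(x) = 0.1x. All three solution sets then meet at x = 2:

```python
def proposed_scheme(x0, n_iter=5000):
    P = lambda x: min(max(x, 0.0), 5.0)   # projection onto C = [0, 5]
    lam, r = 0.5, 0.5                     # lam in (0, 2*beta), r_n = r in (0, 2*sigma)
    x = x0
    for n in range(n_iter):
        a = 1.0 / (n + 2)                 # alpha_n -> 0, sum alpha_n = infinity
        u = P(x - r * (x - 2.0))          # u_n = K_{r_n}(x_n - r_n * Psi(x_n))
        y = P(u - lam * (u - 2.0))        # y_n = J_{M,lam}(u_n - lam * B(u_n))
        x = a * 0.1 * x + (1.0 - a) * y   # x_{n+1} = a*gamma*f(x_n) + (I - a*A) S y_n
    return x
```

Here the common solution set is the singleton {2}, so the iterates should approach 2 regardless of the starting point.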

2. Preliminaries

Let H be a real Hilbert space and C a nonempty closed convex subset of H. Recall that the (nearest point) projection P_C from H onto C assigns to each x ∈ H the unique point P_C x ∈ C satisfying the property
∥x − P_C x∥ = min_{y∈C} ∥x − y∥. (2.1)
The following characterizes the projection PC. We recall some lemmas which will be needed in the rest of this paper.

Lemma 2.1. A point u ∈ C is a solution of the variational inequality (1.5) if and only if u satisfies the relation u = P_C(u − λΨu) for all λ > 0.

Lemma 2.2. For a given z ∈ H and u ∈ C, u = P_C z ⇔ 〈u − z, v − u〉 ≥ 0 for all v ∈ C.

It is well known that PC is a firmly nonexpansive mapping of H onto C and satisfies

∥P_C x − P_C y∥² ≤ 〈P_C x − P_C y, x − y〉, ∀x, y ∈ H. (2.2)
Moreover, P_C x is characterized by the following properties: P_C x ∈ C and, for all x ∈ H, y ∈ C,
〈x − P_C x, y − P_C x〉 ≤ 0,  ∥x − y∥² ≥ ∥x − P_C x∥² + ∥y − P_C x∥². (2.3)
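The two characterizing properties of the projection can be checked numerically for the projection onto an interval (a sketch with illustrative data, not from the paper):

```python
def proj(x, lo=-1.0, hi=1.0):
    """Metric projection of R onto C = [lo, hi]."""
    return min(max(x, lo), hi)

x = 3.0
p = proj(x)                                # p = P_C x = 1
for y in [-1.0, -0.3, 0.0, 0.8, 1.0]:      # sample points of C
    # <x - P_C x, y - P_C x> <= 0
    assert (x - p) * (y - p) <= 1e-12
    # ||x - y||^2 >= ||x - P_C x||^2 + ||y - P_C x||^2
    assert (x - y) ** 2 >= (x - p) ** 2 + (y - p) ** 2 - 1e-12
```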

Lemma 2.3 (see [19].)Let M : H → 2^H be a maximal monotone mapping, and let B : H → H be a monotone and Lipschitz continuous mapping. Then the mapping L = M + B : H → 2^H is a maximal monotone mapping.

Lemma 2.4 (see [20].)Each Hilbert space H satisfies Opial's condition, that is, for any sequence {x_n} ⊂ H with x_n ⇀ x, the inequality liminf_{n→∞}∥x_n − x∥ < liminf_{n→∞}∥x_n − y∥ holds for each y ∈ H with y ≠ x.

Lemma 2.5 (see [21].)Assume that {an} is a sequence of nonnegative real numbers such that

a_{n+1} ≤ (1 − γ_n)a_n + δ_n, n ≥ 0, (2.4)
where {γ_n} ⊂ (0,1) and {δ_n} is a sequence in ℝ such that
  • (i)

Σ_{n=1}^∞ γ_n = ∞,

  • (ii)

limsup_{n→∞}(δ_n/γ_n) ≤ 0 or Σ_{n=1}^∞ |δ_n| < ∞.

Then lim_{n→∞} a_n = 0.

Lemma 2.6 (see [22].)Let C be a closed convex subset of a real Hilbert space H, and let T : CC be a nonexpansive mapping. Then IT is demiclosed at zero, that is,

x_n ⇀ x and (I − T)x_n → 0 (2.5)
implies x = Tx.

For solving the generalized mixed equilibrium problem, let us assume that the bifunction F : C × C → ℝ, the nonlinear mapping Ψ : C → H, which is continuous and monotone, and φ : C → ℝ satisfy the following conditions:
  • (A1)

    F(x, x) = 0 for all xC;

  • (A2)

    F is monotone, that is, F(x, y) + F(y, x) ≤ 0 for any x, yC;

  • (A3)

    for each fixed yC, xF(x, y) is weakly upper semicontinuous;

  • (A4)

    for each fixed xC, yF(x, y) is convex and lower semicontinuous;

  • (B1)

for each x ∈ C and r > 0, there exist a bounded subset D_x ⊆ C and y_x ∈ C such that, for any z ∈ C∖D_x,

F(z, y_x) + φ(y_x) − φ(z) + (1/r)〈y_x − z, z − x〉 < 0; (2.6)

  • (B2)

    C is a bounded set.

Lemma 2.7 (see [23].)Let C be a nonempty closed convex subset of a real Hilbert space H. Let F : C × C → ℝ be a bifunction satisfying (A1)–(A4), and let φ : C → ℝ be convex and lower semicontinuous such that C ∩ dom φ ≠ ∅. Assume that either (B1) or (B2) holds. For r > 0 and x ∈ H, there exists u ∈ C such that

F(u, y) + φ(y) − φ(u) + (1/r)〈y − u, u − x〉 ≥ 0, ∀y ∈ C. (2.7)
Define a mapping Kr : HC as follows:
K_r(x) = {u ∈ C : F(u, y) + φ(y) − φ(u) + (1/r)〈y − u, u − x〉 ≥ 0, ∀y ∈ C} (2.8)
for all xH. Then, the following hold:
  • (i)

    Kr is single valued;

  • (ii)

K_r is firmly nonexpansive, that is, for any x, y ∈ H, ∥K_r x − K_r y∥² ≤ 〈K_r x − K_r y, x − y〉;

  • (iii)

    F(Kr) = MEP(F, φ);

  • (iv)

    MEP(F, φ) is closed and convex.
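A concrete instance of Lemma 2.7 (an illustrative sketch, not from the paper): when F ≡ 0, the mapping K_r reduces to the proximal mapping of rφ. For φ(x) = |x| on H = ℝ this is the soft-thresholding operator, and u = K_r(x) can be checked against the auxiliary inequality φ(y) − φ(u) + (1/r)〈y − u, u − x〉 ≥ 0 on sample points:

```python
def soft_threshold(x, r):
    """K_r(x) = prox_{r|.|}(x) for F == 0 and phi(x) = |x|."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

x, r = 2.0, 0.5
u = soft_threshold(x, r)                        # u = 1.5
for y in [-2.0, -0.1, 0.0, 0.7, 1.5, 3.0]:
    # phi(y) - phi(u) + (1/r) * <y - u, u - x> >= 0
    assert abs(y) - abs(u) + (1.0 / r) * (y - u) * (u - x) >= -1e-12
```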

Lemma 2.8 (see [8].)Assume that A is a strongly positive linear bounded operator on a Hilbert space H with coefficient γ̄ > 0 and 0 < ρ ≤ ∥A∥^{−1}; then ∥I − ρA∥ ≤ 1 − ργ̄.

3. Strong Convergence Theorems

In this section, we prove a strong convergence theorem which solves the problem of finding a common element of F(S), GMEP(F, φ, Ψ), and I(B, M) for inverse-strongly monotone mappings in a real Hilbert space.

Theorem 3.1. Let H be a real Hilbert space, C a closed convex subset of H, and B, Ψ : C → H β- and σ-inverse-strongly monotone mappings, respectively. Let φ : C → ℝ be a convex and lower semicontinuous function, f : C → C a contraction with coefficient α (0 < α < 1), M : H → 2^H a maximal monotone mapping, and A a strongly positive linear bounded operator of H into itself with coefficient γ̄. Assume that 0 < γ < γ̄/α. Let S be a nonexpansive mapping of C into itself such that

Ω := F(S)∩GMEP(F, φ, Ψ)∩I(B, M) ≠ ∅. (3.1)
Suppose that {x_n} is a sequence generated by the following algorithm for x0 ∈ C chosen arbitrarily:
F(u_n, y) + 〈Ψx_n, y − u_n〉 + φ(y) − φ(u_n) + (1/r_n)〈y − u_n, u_n − x_n〉 ≥ 0, ∀y ∈ C,
x_{n+1} = α_n γf(x_n) + (I − α_n A)S J_{M,λ}(u_n − λBu_n) (3.2)
for all n = 0,1, 2, …, where
  • (C1)

{α_n} ⊂ (0,1), lim_{n→∞} α_n = 0, Σ_{n=0}^∞ α_n = ∞, and Σ_{n=0}^∞ |α_{n+1} − α_n| < ∞,

  • (C2)

{r_n} ⊂ [c, d] with c, d ∈ (0,2σ) and Σ_{n=0}^∞ |r_{n+1} − r_n| < ∞,

  • (C3)

    λ ∈ (0,2β).

Then {x_n} converges strongly to q ∈ Ω, where q = P_Ω(γf + I − A)(q), which solves the following variational inequality:

〈(A − γf)q, p − q〉 ≥ 0, ∀p ∈ Ω, (3.3)
which is the optimality condition for the minimization problem
min_{x∈Ω} (1/2)〈Ax, x〉 − h(x), (3.4)
where h is a potential function for γf (i.e., h′(q) = γf(q) for q ∈ H).

Proof. Due to condition (C1), we may assume, without loss of generality, that α_n ∈ (0, ∥A∥^{−1}) for all n. By Lemma 2.8, we have that ∥I − α_n A∥ ≤ 1 − α_n γ̄. Next, we will assume that ∥I − A∥ ≤ 1 − γ̄.

Next, we will divide the proof into six steps.

Step 1. We will show that {xn}, {un} are bounded.

Since B and Ψ are β- and σ-inverse-strongly monotone mappings, respectively, we have that

∥(I − λB)x − (I − λB)y∥² = ∥x − y∥² − 2λ〈x − y, Bx − By〉 + λ²∥Bx − By∥² ≤ ∥x − y∥² + λ(λ − 2β)∥Bx − By∥². (3.5)
In a similar way, we can obtain
(3.6)
It is clear that if 0 < λ < 2β and 0 < r_n < 2σ, then I − λB and I − r_nΨ are nonexpansive.

Put y_n = J_{M,λ}(u_n − λBu_n), n ≥ 0. It follows that

(3.7)
By Lemma 2.7, we have that u_n = K_{r_n}(x_n − r_nΨx_n) for all n ≥ 0. Then, we have that
(3.8)
Hence, we have that
(3.9)
From (3.2), we deduce that
(3.10)
It follows by induction that
(3.11)
Therefore {xn} is bounded, so are {yn}, {Syn}, {Bun}, {f(xn)}, and {ASyn}.

Step 2. We claim that lim_{n→∞}∥x_{n+1} − x_n∥ = 0. From (3.2), we have that

(3.12)

Since I − λB is nonexpansive, we also have that

(3.13)
On the other hand, from u_n = K_{r_n}(x_n − r_nΨx_n) and u_{n−1} = K_{r_{n−1}}(x_{n−1} − r_{n−1}Ψx_{n−1}), it follows that
(3.14)
(3.15)
Substituting y = un into (3.14) and y = un−1 into (3.15), we get
(3.16)
From (A2), we obtain
(3.17)
and then
(3.18)
So
(3.19)
It follows that
(3.20)
Without loss of generality, let us assume that there exists a real number c such that rn−1 > c > 0, for all n. Then, we have that
(3.21)
and hence
(3.22)
where M1 = sup{∥u_n − x_n∥ : n ∈ ℕ}. Substituting (3.22) into (3.13), we have that
(3.23)
Substituting (3.23) into (3.12), we get
(3.24)
where M2 = sup{max{∥ASy_{n−1}∥, ∥f(x_{n−1})∥} : n ∈ ℕ}. By conditions (C1)-(C2) and Lemma 2.5, we have that ∥x_{n+1} − x_n∥ → 0 as n → ∞. From (3.23), we also have that ∥y_{n+1} − y_n∥ → 0 as n → ∞.

Step 3. We show the following:

  • (i)

    lim nBunBq∥ = 0;

  • (ii)

lim_{n→∞}∥Ψx_n − Ψq∥ = 0.

For q ∈ Ω and q = JM,λ(q − λBq), by (3.5) and (3.8), we get
(3.25)
It follows that
(3.26)
So, we obtain
(3.27)
By conditions (C1) and (C3) and lim_{n→∞}∥x_{n+1} − x_n∥ = 0, we obtain that ∥Bu_n − Bq∥ → 0 as n → ∞.

Substituting (3.8) into (3.25), we get

(3.28)
From (3.26), we have that
(3.29)
So, we also have that
(3.30)
By conditions (C1)–(C3), lim_{n→∞}∥x_{n+1} − x_n∥ = 0, and lim_{n→∞}∥Bu_n − Bq∥ = 0, we obtain that ∥Ψx_n − Ψq∥ → 0 as n → ∞.

Step 4. We show the following:

  • (i)

lim_{n→∞}∥x_n − u_n∥ = 0;

  • (ii)

lim_{n→∞}∥u_n − y_n∥ = 0;

  • (iii)

lim_{n→∞}∥y_n − Sy_n∥ = 0.

Since K_{r_n} is firmly nonexpansive and by (2.2), we observe that
(3.31)
Hence, we have that
(3.32)
Since JM,λ is 1-inverse-strongly monotone and by (2.2), we compute
(3.33)
which implies that
(3.34)
Substituting (3.32) into (3.34), we have that
(3.35)
Substituting (3.35) into (3.26), we get
(3.36)
Then, we derive
(3.37)
By condition (C1), lim_{n→∞}∥x_n − x_{n+1}∥ = 0, lim_{n→∞}∥Ψx_n − Ψq∥ = 0, and lim_{n→∞}∥Bu_n − Bq∥ = 0. So, we have that ∥x_n − u_n∥ → 0 and ∥u_n − y_n∥ → 0 as n → ∞. It follows that
(3.38)
From (3.2), we have that
(3.39)
By condition (C1) and lim_{n→∞}∥y_{n−1} − y_n∥ = 0, we obtain that ∥x_n − Sy_n∥ → 0 as n → ∞. Next, we observe that
(3.40)
Since {ASy_n} is bounded and by condition (C1), we have that ∥x_{n+1} − Sy_n∥ → 0 as n → ∞, and
(3.41)
Since lim_{n→∞}∥x_n − x_{n+1}∥ = 0 and lim_{n→∞}∥x_{n+1} − Sy_n∥ = 0, it follows that ∥x_n − Sy_n∥ → 0 as n → ∞. Hence, we have that
(3.42)
By (3.38) and lim_{n→∞}∥x_n − Sy_n∥ = 0, we obtain ∥x_n − Sx_n∥ → 0 as n → ∞. Moreover, we also have that
(3.43)
By (3.38) and lim_{n→∞}∥x_n − Sy_n∥ = 0, we obtain ∥y_n − Sy_n∥ → 0 as n → ∞.

Step 5. We show that q ∈ Ω := F(S)∩GMEP(F, φ, Ψ)∩I(B, M) and limsup_{n→∞}〈(γf − A)q, Sy_n − q〉 ≤ 0. It is easy to see that P_Ω(γf + (I − A)) is a contraction of H into itself. Indeed, since 0 < γ < γ̄/α, we have that

(3.44)
Since H is complete, there exists a unique fixed point q ∈ H such that q = P_Ω(γf + (I − A))(q). By Lemma 2.2, we obtain that 〈(γf − A)q, w − q〉 ≤ 0 for all w ∈ Ω.

Next, we show that limsup_{n→∞}〈(γf − A)q, Sy_n − q〉 ≤ 0, where q = P_Ω(γf + I − A)(q) is the unique solution of the variational inequality 〈(A − γf)q, p − q〉 ≥ 0 for all p ∈ Ω. We can choose a subsequence {y_{n_i}} of {y_n} such that

(3.45)
As {y_{n_i}} is bounded, there exists a subsequence of {y_{n_i}} which converges weakly to w. We may assume without loss of generality that y_{n_i} ⇀ w.

We claim that w ∈ Ω. Since ∥y_n − Sy_n∥ → 0, ∥x_n − Sx_n∥ → 0, and ∥x_n − y_n∥ → 0 and by Lemma 2.6, we have that w ∈ F(S).

Next, we show that w ∈ GMEP(F, φ, Ψ). Since u_n = K_{r_n}(x_n − r_nΨx_n), we know that

(3.46)
It follows by (A2) that
(3.47)
Hence,
(3.48)
For t ∈ (0,1] and y ∈ H, let y_t = ty + (1 − t)w. From (3.48), we have that
(3.49)
Further, from (A4) and the weak lower semicontinuity of φ, we have that
(3.50)
From (A1), (A4), and (3.50), we have that
(3.51)
and hence
(3.52)
Letting t → 0, we have, for each yC, that
(3.53)
This implies that w ∈ GMEP(F, φ, Ψ).

Lastly, we show that w ∈ I(B, M). In fact, since B is β-inverse-strongly monotone, B is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.3 that M + B is maximal monotone. Let (v, g) ∈ G(M + B); then g − Bv ∈ M(v). Again, since y_n = J_{M,λ}(u_n − λBu_n), we have that u_n − λBu_n ∈ (I + λM)y_n, that is, (1/λ)(u_n − y_n − λBu_n) ∈ M(y_n). By virtue of the maximal monotonicity of M + B, we have that

(3.54)
and hence
(3.55)
It follows from lim_{n→∞}∥u_n − y_n∥ = 0, lim_{n→∞}∥Bu_n − By_n∥ = 0, and y_{n_i} ⇀ w that
(3.56)
It follows from the maximal monotonicity of B + M that θ ∈ (M + B)(w), that is, wI(B, M). Therefore, w ∈ Ω. It follows that
(3.57)

Step 6. We prove that x_n → q. Using (3.2) together with the Schwarz inequality, we have that

(3.58)

Since {x_n} is bounded, we can take a constant η > 0 such that, for all n ≥ 0, it follows that

(3.59)
where ς_n = 2〈Sy_n − q, γf(q) − Aq〉 + ηα_n. By limsup_{n→∞}〈(γf − A)q, Sy_n − q〉 ≤ 0, we get limsup_{n→∞} ς_n ≤ 0. Applying Lemma 2.5, we can conclude that x_n → q. This completes the proof.

Corollary 3.2. Let H be a real Hilbert space and C a closed convex subset of H. Let B, Ψ : C → H be β- and σ-inverse-strongly monotone mappings, respectively, and φ : C → ℝ a convex and lower semicontinuous function. Let f : C → C be a contraction with coefficient α (0 < α < 1), M : H → 2^H a maximal monotone mapping, and S a nonexpansive mapping of C into itself such that

Ω := F(S)∩GMEP(F, φ, Ψ)∩I(B, M) ≠ ∅. (3.60)
Suppose that {xn} is a sequence generated by the following algorithm for x0, unC arbitrarily:
F(u_n, y) + 〈Ψx_n, y − u_n〉 + φ(y) − φ(u_n) + (1/r_n)〈y − u_n, u_n − x_n〉 ≥ 0, ∀y ∈ C,
x_{n+1} = α_n f(x_n) + (1 − α_n)S J_{M,λ}(u_n − λBu_n) (3.61)
for all n = 0,1, 2, …, by (C1)–(C3) in Theorem 3.1.

Then {x_n} converges strongly to q ∈ Ω, where q = P_Ω f(q), which solves the following variational inequality:

〈(I − f)q, p − q〉 ≥ 0, ∀p ∈ Ω. (3.62)

Proof. Putting AI and γ ≡ 1 in Theorem 3.1, we can obtain the desired conclusion immediately.

Corollary 3.3. Let H be a real Hilbert space and C a closed convex subset of H. Let B, Ψ : C → H be β- and σ-inverse-strongly monotone mappings, respectively, φ : C → ℝ a convex and lower semicontinuous function, and M : H → 2^H a maximal monotone mapping. Let S be a nonexpansive mapping of C into itself such that

Ω := F(S)∩GMEP(F, φ, Ψ)∩I(B, M) ≠ ∅. (3.63)
Suppose that {xn} is a sequence generated by the following algorithm for x0, uC and unC:
F(u_n, y) + 〈Ψx_n, y − u_n〉 + φ(y) − φ(u_n) + (1/r_n)〈y − u_n, u_n − x_n〉 ≥ 0, ∀y ∈ C,
x_{n+1} = α_n u + (1 − α_n)S J_{M,λ}(u_n − λBu_n) (3.64)
for all n = 0,1, 2, …, by (C1)–(C3) in Theorem 3.1.

Then {x_n} converges strongly to q ∈ Ω, where q = P_Ω(u), which solves the following variational inequality:

〈q − u, p − q〉 ≥ 0, ∀p ∈ Ω. (3.65)

Proof. Putting f(x) ≡ u, for all xC, in Corollary 3.2, we can obtain the desired conclusion immediately.

Corollary 3.4. Let H be a real Hilbert space, C a closed convex subset of H, B : C → H a β-inverse-strongly monotone mapping, and A a strongly positive linear bounded operator of H into itself with coefficient γ̄. Assume that 0 < γ < γ̄/α. Let f : C → C be a contraction with coefficient α (0 < α < 1) and S a nonexpansive mapping of C into itself such that

Ω := F(S)∩VI(C, B) ≠ ∅. (3.66)
Suppose that {xn} is a sequence generated by the following algorithm for x0C arbitrarily:
x_{n+1} = α_n γf(x_n) + (I − α_n A)S P_C(x_n − λBx_n) (3.67)
for all n = 0,1, 2, …, by (C1)–(C3) in Theorem 3.1.

Then {x_n} converges strongly to q ∈ Ω, where q = P_Ω(γf + I − A)(q), which solves the following variational inequality:

〈(A − γf)q, p − q〉 ≥ 0, ∀p ∈ Ω. (3.68)

Proof. Taking F ≡ 0,   Ψ ≡ 0,   φ ≡ 0,   un = xn, and JM,λ = PC in Theorem 3.1, we can obtain the desired conclusion immediately.

Remark 3.5. Corollary 3.4 generalizes and improves the result of Klin-eam and Suantai [24].

4. Applications

In this section, we apply the iterative scheme (1.25) to finding a common fixed point of a nonexpansive mapping and a strictly pseudocontractive mapping, and also apply Theorem 3.1 to finding a common fixed point of nonexpansive mappings and inverse-strongly monotone mappings.

Definition 4.1. A mapping T : C → C is called a strict pseudocontraction if there exists a constant 0 ≤ κ < 1 such that

∥Tx − Ty∥² ≤ ∥x − y∥² + κ∥(I − T)x − (I − T)y∥², ∀x, y ∈ C. (4.1)
If κ = 0, then T is nonexpansive. In this case, we say that T : C → C is a κ-strict pseudocontraction. Put B = I − T. Then we have that
∥(I − B)x − (I − B)y∥² ≤ ∥x − y∥² + κ∥Bx − By∥², ∀x, y ∈ C. (4.2)
Observe that
∥(I − B)x − (I − B)y∥² = ∥x − y∥² − 2〈x − y, Bx − By〉 + ∥Bx − By∥². (4.3)
Hence, we obtain
〈x − y, Bx − By〉 ≥ ((1 − κ)/2)∥Bx − By∥². (4.4)
Then, B is a ((1 − κ)/2)-inverse-strongly monotone mapping.
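This relation can be sanity-checked numerically (an illustrative example, not from the paper): on H = ℝ, T(x) = −2x is a κ-strict pseudocontraction with κ = 1/3, since |Tx − Ty|² = 4|x − y|² ≤ |x − y|² + (1/3)|3(x − y)|². Then B = I − T, B(x) = 3x, satisfies (4.4) with β = (1 − κ)/2 = 1/3, and the relaxed iteration (1 − λ)x + λTx with λ ∈ (0, 2β) converges to the fixed point 0:

```python
kappa = 1.0 / 3.0
T = lambda x: -2.0 * x
B = lambda x: x - T(x)                     # B = I - T, here B(x) = 3x

# <x - y, Bx - By> >= ((1 - kappa)/2) * |Bx - By|^2  (holds with equality here)
for x, y in [(1.0, -2.0), (0.5, 3.0), (-4.0, 2.0)]:
    lhs = (x - y) * (B(x) - B(y))
    rhs = (1.0 - kappa) / 2.0 * (B(x) - B(y)) ** 2
    assert lhs >= rhs - 1e-9

# relaxed iteration z_{k+1} = (1 - lam) z_k + lam * T(z_k), lam in (0, 2/3)
lam, z = 0.4, 5.0
for _ in range(200):
    z = (1.0 - lam) * z + lam * T(z)       # = (1 - 3*lam) * z
```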

Using Theorem 3.1, we first prove a strong convergence theorem for finding a common fixed point of a nonexpansive mapping and a strict pseudocontraction.

Theorem 4.2. Let H be a real Hilbert space, C a closed convex subset of H, B, Ψ : C → H β- and σ-inverse-strongly monotone mappings, respectively, φ : C → ℝ a convex and lower semicontinuous function, f : C → C a contraction with coefficient α (0 < α < 1), and A a strongly positive linear bounded operator of H into itself with coefficient γ̄. Assume that 0 < γ < γ̄/α. Let S be a nonexpansive mapping of C into itself, and let T be a κ-strict pseudocontraction of C into itself such that

Ω := F(S)∩F(T)∩GMEP(F, φ, Ψ) ≠ ∅. (4.5)
Suppose that {xn} is a sequence generated by the following algorithm for x0, unC arbitrarily:
F(u_n, y) + 〈Ψx_n, y − u_n〉 + φ(y) − φ(u_n) + (1/r_n)〈y − u_n, u_n − x_n〉 ≥ 0, ∀y ∈ C,
x_{n+1} = α_n γf(x_n) + (I − α_n A)S((1 − λ)u_n + λTu_n) (4.6)
for all n = 0,1, 2, …, by (C1)–(C3) in Theorem 3.1.

Then {x_n} converges strongly to q ∈ Ω, where q = P_Ω(γf + I − A)(q), which solves the following variational inequality:

〈(A − γf)q, p − q〉 ≥ 0, ∀p ∈ Ω, (4.7)
which is the optimality condition for the minimization problem
min_{x∈Ω} (1/2)〈Ax, x〉 − h(x), (4.8)
where h is a potential function for γf (i.e., h′(q) = γf(q) for q ∈ H).

Proof. Put B ≡ I − T; then B is ((1 − κ)/2)-inverse-strongly monotone, F(T) = I(B, M), and J_{M,λ}(x_n − λBx_n) = (1 − λ)x_n + λTx_n. So by Theorem 3.1, we obtain the desired result.

Corollary 4.3. Let H be a real Hilbert space, C a closed convex subset of H, B, Ψ : C → H β- and σ-inverse-strongly monotone mappings, respectively, and φ : C → ℝ a convex and lower semicontinuous function. Let f : C → C be a contraction with coefficient α (0 < α < 1) and S a nonexpansive mapping of C into itself, and let T be a κ-strict pseudocontraction of C into itself such that

Ω := F(S)∩F(T)∩GMEP(F, φ, Ψ) ≠ ∅. (4.9)
Suppose that {xn} is a sequence generated by the following algorithm for x0C arbitrarily:
F(u_n, y) + 〈Ψx_n, y − u_n〉 + φ(y) − φ(u_n) + (1/r_n)〈y − u_n, u_n − x_n〉 ≥ 0, ∀y ∈ C,
x_{n+1} = α_n f(x_n) + (1 − α_n)S((1 − λ)u_n + λTu_n) (4.10)
for all n = 0,1, 2, …, by (C1)–(C3) in Theorem 3.1.

Then {x_n} converges strongly to q ∈ Ω, where q = P_Ω f(q), which solves the following variational inequality:

〈(I − f)q, p − q〉 ≥ 0, ∀p ∈ Ω, (4.11)
which is the optimality condition for the minimization problem
min_{x∈Ω} (1/2)∥x∥² − h(x), (4.12)
where h is a potential function for f (i.e., h′(q) = f(q) for q ∈ H).

Proof. Putting AI and γ ≡ 1 in Theorem 4.2, we obtain the desired result.

Acknowledgments

The authors would like to thank the National Research University Project of Thailand's Office of the Higher Education Commission for financial support under the project NRU-CSEC no. 54000267. Furthermore, they also would like to thank the Faculty of Science (KMUTT) and the National Research Council of Thailand. Finally, the authors would like to thank Professor Vittorio Colao and the referees for reading this paper carefully, providing valuable suggestions and comments, and pointing out a major error in the original version of this paper.
