1. Introduction
Let C be a closed convex subset of a real Hilbert space H with inner product 〈·, ·〉 and norm ∥·∥. Let F be a bifunction of C × C into ℛ, where ℛ is the set of real numbers, let Ψ : C → H be a mapping, and let φ : C → ℛ be a real-valued function. The generalized mixed equilibrium problem is to find x ∈ C such that
F(x, y) + 〈Ψx, y − x〉 + φ(y) − φ(x) ≥ 0, ∀y ∈ C. (1.1)
The set of solutions of (1.1) is denoted by GMEP(F, φ, Ψ), that is,
GMEP(F, φ, Ψ) = {x ∈ C : F(x, y) + 〈Ψx, y − x〉 + φ(y) − φ(x) ≥ 0, ∀y ∈ C}. (1.2)
If F ≡ 0, problem (1.1) reduces to the mixed variational inequality of Browder type [1]: find x ∈ C such that
〈Ψx, y − x〉 + φ(y) − φ(x) ≥ 0, ∀y ∈ C. (1.3)
The set of solutions of (1.3) is denoted by MVI(C, φ, Ψ).
If Ψ ≡ 0 and φ ≡ 0, problem (1.1) reduces to the equilibrium problem [2]: find x ∈ C such that
F(x, y) ≥ 0, ∀y ∈ C. (1.4)
The set of solutions of (1.4) is denoted by EP(F). This problem contains fixed point problems and includes as special cases numerous problems in physics, optimization, and economics. Several methods have been proposed to solve the equilibrium problem; see [3–5].
If F ≡ 0 and φ ≡ 0, problem (1.1) reduces to the Hartmann–Stampacchia variational inequality [6]: find x ∈ C such that
〈Ψx, y − x〉 ≥ 0, ∀y ∈ C. (1.5)
The set of solutions of (1.5) is denoted by VI(C, Ψ). The variational inequality has been extensively studied in the literature [7].
If F ≡ 0 and Ψ ≡ 0, problem (1.1) reduces to the minimization problem: find x ∈ C such that
φ(x) ≤ φ(y), ∀y ∈ C. (1.6)
The set of solutions of (1.6) is denoted by Arg min(φ).
Iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems, which have a great impact and influence on the development of almost all branches of pure and applied sciences. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space H:
min x∈F(S) (1/2)〈Ax, x〉 − 〈x, y〉, (1.7)
where A is a bounded linear operator, F(S) is the fixed point set of a nonexpansive mapping S, and y is a given point in H [8].
Recall that a mapping S : C → C is said to be nonexpansive if
∥Sx − Sy∥ ≤ ∥x − y∥ (1.8)
for all x, y ∈ C. If C is bounded, closed, and convex and S is a nonexpansive mapping of C into itself, then F(S) is nonempty [9]. We denote weak convergence and strong convergence by ⇀ and →, respectively. A mapping A of C into H is called monotone if
〈Ax − Ay, x − y〉 ≥ 0 (1.9)
for all x, y ∈ C. A mapping A of C into H is called α-inverse-strongly monotone if there exists a positive real number α such that
〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥² (1.10)
for all x, y ∈ C. It is obvious that any α-inverse-strongly monotone mapping A is monotone and Lipschitz continuous. A bounded linear operator A is strongly positive if there exists a constant γ̄ > 0 with the property
〈Ax, x〉 ≥ γ̄∥x∥² (1.11)
for all x ∈ H. A self-mapping f : C → C is a contraction on C if there exists a constant α ∈ (0,1) such that
∥f(x) − f(y)∥ ≤ α∥x − y∥ (1.12)
for all x, y ∈ C. We use ∏C to denote the collection of all contractions on C. Note that each f ∈ ∏C has a unique fixed point in C.
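The contraction property (1.12) is exactly the hypothesis of the Banach fixed point theorem, which is why each f ∈ ∏C has a unique fixed point and why Picard iteration reaches it. A minimal numerical sketch (a toy affine map on ℛ², not from the paper):

```python
import numpy as np

# Toy illustration (not from the paper): a contraction f on C = R^2
# with constant alpha = 0.5 < 1 has a unique fixed point, and Picard
# iteration x_{k+1} = f(x_k) converges to it at a geometric rate.

def f(x):
    # affine contraction: Lipschitz constant 0.5
    return 0.5 * x + np.array([1.0, -2.0])

x = np.zeros(2)
for _ in range(200):
    x = f(x)

# The fixed point solves x = 0.5 x + (1, -2), i.e. x = (2, -4).
assert np.allclose(x, [2.0, -4.0])
```

The error contracts by the factor α = 0.5 at every step, so 200 iterations are far more than enough here.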
Let B : H → H be a single-valued nonlinear mapping and M : H → 2^H a set-valued mapping. The variational inclusion problem is to find x ∈ H such that
θ ∈ B(x) + M(x), (1.13)
where θ is the zero vector in H. The set of solutions of problem (1.13) is denoted by I(B, M). The variational inclusion has been extensively studied in the literature; see, for example, [10–13] and the references therein.
A set-valued mapping M : H → 2^H is called monotone if, for all x, y ∈ H, f ∈ M(x) and g ∈ M(y) imply 〈x − y, f − g〉 ≥ 0. A monotone mapping M is maximal if its graph G(M) := {(x, f) ∈ H × H : f ∈ M(x)} is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping M is maximal if and only if, for (x, f) ∈ H × H, 〈x − y, f − g〉 ≥ 0 for all (y, g) ∈ G(M) implies f ∈ M(x).
Let B be an inverse-strongly monotone mapping of C into H, let NCv be the normal cone to C at v ∈ C, that is, NCv = {w ∈ H : 〈v − u, w〉 ≥ 0, ∀u ∈ C}, and define
Tv = Bv + NCv if v ∈ C, and Tv = ∅ if v ∉ C. (1.14)
Then T is maximal monotone, and θ ∈ Tv if and only if v ∈ VI(C, B) [14].
Let M : H → 2^H be a set-valued maximal monotone mapping; then the single-valued mapping JM,λ : H → H defined by
JM,λ(x) = (I + λM)⁻¹(x), x ∈ H, (1.15)
is called the resolvent operator associated with M, where λ is any positive number and I is the identity mapping. It is worth mentioning that the resolvent operator is nonexpansive and 1-inverse-strongly monotone, and that a solution of problem (1.13) is a fixed point of the operator JM,λ(I − λB) for all λ > 0 [15].
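As a concrete illustration of the resolvent (a hypothetical 1-D example, not from the paper), take M = ∂|·|, the subdifferential of the absolute value on ℛ, which is maximal monotone; its resolvent (I + λM)⁻¹ is the soft-thresholding map, and iterating x ↦ JM,λ(x − λBx) with a monotone Lipschitz B produces a point of I(B, M):

```python
import numpy as np

# Hypothetical 1-D example (not from the paper): M = ∂|·| is maximal
# monotone on R and its resolvent J_{M,λ} = (I + λM)^{-1} is soft
# thresholding.

def resolvent(x, lam):
    # (I + lam * ∂|.|)^{-1}(x) = sign(x) * max(|x| - lam, 0)
    return np.sign(x) * max(abs(x) - lam, 0.0)

# B(x) = x - a is monotone, Lipschitz, and 1-inverse-strongly monotone;
# a zero of B(x) + M(x) minimizes |x| + (x - a)^2 / 2.
a = 3.0
B = lambda x: x - a
lam = 0.9          # step λ ∈ (0, 2β) with β = 1

# Iterating x <- J_{M,λ}(x - λ B x) converges to a fixed point, which
# solves the inclusion 0 ∈ B(x) + M(x).
x = 0.0
for _ in range(500):
    x = resolvent(x - lam * B(x), lam)

# Optimality condition: x - a + sign(x) = 0 for x ≠ 0, so x = 2 here.
assert abs(x - 2.0) < 1e-6
```

This is the forward-backward splitting view of the fixed-point characterization stated above.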
In 2000, Moudafi [16] introduced the viscosity approximation method for nonexpansive mappings and proved that, if H is a real Hilbert space, the sequence {xn} defined by the iterative method below, with an initial guess x0 ∈ C chosen arbitrarily,
xn+1 = αnf(xn) + (1 − αn)Sxn, n ≥ 0, (1.16)
where {αn} ⊂ (0,1) satisfies certain conditions, converges strongly to a fixed point of S (say x̃), which is the unique solution of the following variational inequality:
〈(I − f)x̃, x − x̃〉 ≥ 0, ∀x ∈ F(S). (1.17)
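A numerical sketch of Moudafi's scheme (1.16) on toy data (the choice of S, f, and αn below is an illustrative assumption, not from the paper): with S a rotation of ℛ², which is nonexpansive with F(S) = {0}, the viscosity iterates converge strongly to the unique point of F(S):

```python
import numpy as np

# Toy viscosity iteration x_{n+1} = a_n f(x_n) + (1 - a_n) S x_n with
# a_n = 1/(n+1), so a_n -> 0 and sum a_n diverges (not from the paper).

R = np.array([[0.0, -1.0], [1.0, 0.0]])       # rotation by 90°: nonexpansive
S = lambda x: R @ x                            # F(S) = {0}
f = lambda x: 0.5 * x + np.array([1.0, 1.0])   # contraction, alpha = 0.5

x = np.array([5.0, -3.0])
for n in range(20000):
    a = 1.0 / (n + 1)
    x = a * f(x) + (1 - a) * S(x)

# F(S) = {0} is a singleton, so the limit in (1.17) is 0.
assert np.linalg.norm(x) < 0.05
```

Since F(S) is a singleton here, the variational inequality (1.17) is trivially satisfied by 0, which is the observed limit.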
In 2006, Marino and Xu [8] introduced a general iterative method for nonexpansive mappings. They defined the sequence {xn} generated by the algorithm, with x0 ∈ C,
xn+1 = αnγf(xn) + (I − αnA)Sxn, n ≥ 0, (1.18)
where {αn}⊂(0,1) and A is a strongly positive bounded linear operator. They proved that, if C = H and the sequence {αn} satisfies appropriate conditions, then the sequence {xn} generated by (1.18) converges strongly to a fixed point of S (say x̃) which is the unique solution of the following variational inequality:
〈(A − γf)x̃, x − x̃〉 ≥ 0, ∀x ∈ F(S), (1.19)
which is the optimality condition for the minimization problem
min x∈F(S) (1/2)〈Ax, x〉 − h(x), (1.20)
where h is a potential function for γf (i.e., h′(x) = γf(x) for x ∈ H).
For finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of variational inequalities, let PC be the projection of H onto C. In 2005, Iiduka and Takahashi [17] introduced the following iterative process for x0 ∈ C:
xn+1 = αnu + (1 − αn)SPC(xn − λnAxn), n ≥ 0, (1.21)
where u ∈ C, A is a β-inverse-strongly monotone mapping of C into H, {αn}⊂(0,1), and {λn}⊂[a, b] for some a, b with 0 < a < b < 2β. They proved that, under certain appropriate conditions imposed on {αn} and {λn}, the sequence {xn} generated by (1.21) converges strongly to a common element of the set of fixed points of the nonexpansive mapping and the set of solutions of the variational inequality for the inverse-strongly monotone mapping (say x̃), which solves the variational inequality
〈x̃ − u, x̃ − x〉 ≤ 0, ∀x ∈ F(S)∩VI(C, A). (1.22)
In 2008, Su et al. [18] introduced the following iterative scheme by the viscosity approximation method in a real Hilbert space: x1, un ∈ H,
F(un, y) + (1/rn)〈y − un, un − xn〉 ≥ 0, ∀y ∈ C,
xn+1 = αnf(xn) + (1 − αn)SPC(un − λnAun) (1.23)
for all n ∈ ℕ, where {αn}⊂[0,1) and {rn}⊂(0, ∞) satisfy some appropriate conditions. Furthermore, they proved that {xn} and {un} converge strongly to the same point z ∈ F(S)∩VI(C, A)∩EP(F), where z = PF(S)∩VI(C,A)∩EP(F)f(z).
In 2011, Tan and Chang [12] introduced the following iterative process for a sequence {Tn : C → C} of nonexpansive mappings. Let {xn} be the sequence defined by
(1.24)
where {αn}⊂(0,1), λ ∈ (0,2α], and μ ∈ (0,2β]. The sequence {xn} defined by (1.24) converges strongly to a common element of the set of fixed points of the nonexpansive mappings, the set of solutions of the variational inequality, and the generalized equilibrium problem.
In this paper, we modify the iterative methods (1.18), (1.23), and (1.24) by proposing the following new general viscosity iterative method: x0, un ∈ C,
F(un, y) + 〈Ψxn, y − un〉 + φ(y) − φ(un) + (1/rn)〈y − un, un − xn〉 ≥ 0, ∀y ∈ C,
xn+1 = αnγf(xn) + (I − αnA)SJM,λ(un − λBun) (1.25)
for all n ∈ ℕ, where {αn}⊂(0,1), {rn}⊂(0,2σ), and λ ∈ (0,2β) satisfy some appropriate conditions. The purpose of this paper is to show that, under some control conditions, the sequence {xn} converges strongly to a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of the generalized mixed equilibrium problem, and the set of solutions of the variational inclusion in a real Hilbert space.
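To make the structure of the proposed scheme concrete, here is a toy instance (all choices below are illustrative assumptions, not from the paper): taking F ≡ 0, φ ≡ 0, Ψ ≡ 0, and M ≡ {0}, the auxiliary step collapses to un = xn and JM,λ = I, so the scheme reduces to xn+1 = αnγf(xn) + (I − αnA)S(xn − λBxn):

```python
import numpy as np

# Degenerate instance of the proposed scheme (toy data, not from the
# paper). S = I, B(x) = x - c is 1-inverse-strongly monotone, so the
# common solution set is Ω = I(B, M) = {c}.

c = np.array([1.0, 2.0])
A = 2.0 * np.eye(2)          # strongly positive, coefficient 2
gamma = 1.0                  # 0 < γ < coefficient/α = 4
f = lambda x: 0.5 * x        # contraction, α = 0.5
B = lambda x: x - c
lam = 0.5                    # λ ∈ (0, 2β) = (0, 2)

x = np.array([10.0, -10.0])
for n in range(5000):
    a = 1.0 / (2.0 * (n + 2))          # a_n -> 0, Σ a_n = ∞, a_n < ‖A‖⁻¹
    y = x - lam * B(x)                 # u_n = x_n and J_{M,λ} = I here
    x = a * gamma * f(x) + (np.eye(2) - a * A) @ y

# The iterates approach the unique point of Ω.
assert np.linalg.norm(x - c) < 1e-2
```

Even in this degenerate setting one can see the interplay of the three building blocks: the resolvent/forward step, the nonexpansive map, and the viscosity regularization.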
2. Preliminaries
Let H be a real Hilbert space and C a nonempty closed convex subset of H. Recall that the (nearest point) projection PC from H onto C assigns to each x ∈ H the unique point PCx ∈ C satisfying the property
∥x − PCx∥ = min y∈C∥x − y∥. (2.1)
The following lemmas characterize the projection PC; we also recall some further lemmas which will be needed in the rest of this paper.
Lemma 2.1. The function u ∈ C is a solution of the variational inequality (1.5) if and only if u ∈ C satisfies the relation u = PC(u − λΨu) for all λ > 0.
Lemma 2.2. For a given z ∈ H and u ∈ C, u = PCz ⇔ 〈u − z, v − u〉 ≥ 0 for all v ∈ C.
It is well known that PC is a firmly nonexpansive mapping of H onto C, that is,
∥PCx − PCy∥² ≤ 〈PCx − PCy, x − y〉, ∀x, y ∈ H. (2.2)
Moreover, PCx is characterized by the following properties: PCx ∈ C and, for all x ∈ H, y ∈ C,
〈x − PCx, y − PCx〉 ≤ 0, ∥x − y∥² ≥ ∥x − PCx∥² + ∥y − PCx∥². (2.3)
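The first inequality in (2.3) can be checked numerically for a simple set (a toy box in ℛ², not from the paper), where PC is a componentwise clip:

```python
import numpy as np

# Toy check (not from the paper) of the projection characterization
# 〈x - P_C x, y - P_C x〉 ≤ 0 for all y ∈ C, with C = [0,1] x [0,1].

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
P = lambda x: np.clip(x, lo, hi)       # projection onto the box

rng = np.random.default_rng(0)
x = np.array([2.0, -3.0])
px = P(x)                              # px = (1, 0)
for _ in range(1000):
    y = rng.uniform(lo, hi)            # arbitrary point of C
    assert np.dot(x - px, y - px) <= 1e-12
```

Geometrically, x − PCx makes an obtuse (or right) angle with every direction pointing from PCx into C.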
Lemma 2.3 (see [19]). Let M : H → 2^H be a maximal monotone mapping, and let B : H → H be a monotone and Lipschitz continuous mapping. Then the mapping L = M + B : H → 2^H is a maximal monotone mapping.
Lemma 2.4 (see [20]). Each Hilbert space H satisfies Opial's condition, that is, for any sequence {xn} ⊂ H with xn ⇀ x, the inequality lim inf n→∞∥xn − x∥ < lim inf n→∞∥xn − y∥ holds for each y ∈ H with y ≠ x.
Lemma 2.5 (see [21]). Assume that {an} is a sequence of nonnegative real numbers such that
an+1 ≤ (1 − γn)an + δnγn, n ≥ 0, (2.4)
where {γn}⊂(0,1) and {δn} is a sequence in ℛ such that ∑n=1∞γn = ∞ and limsup n→∞δn ≤ 0 (or ∑n=1∞|δnγn| < ∞). Then lim n→∞an = 0.
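A quick numerical illustration of Lemma 2.5 (toy sequences, not from the paper): with γn = 1/(n + 1), so that ∑γn = ∞, and δn = 1/√(n + 1), so that limsup δn ≤ 0, the recursion drives an to 0:

```python
import math

# Toy instance of Lemma 2.5 (not from the paper): run the recursion
# a_{n+1} = (1 - γ_n) a_n + γ_n δ_n and watch a_n decay to 0.

a = 1.0
for n in range(1_000_000):
    g = 1.0 / (n + 1)              # γ_n, with Σ γ_n = ∞
    d = 1.0 / math.sqrt(n + 1)     # δ_n, with limsup δ_n = 0
    a = (1 - g) * a + g * d

assert a < 1e-2
```

The decay is slow (roughly like 1/√n for these choices), which is why both conditions on {γn} and {δn} are needed.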
Lemma 2.6 (see [22]). Let C be a closed convex subset of a real Hilbert space H, and let T : C → C be a nonexpansive mapping. Then I − T is demiclosed at zero, that is,
xn ⇀ x, xn − Txn → 0 (2.5)
implies x = Tx.
For solving the generalized mixed equilibrium problem, let us assume that the bifunction F : C × C → ℛ, the continuous monotone mapping Ψ : C → H, and the function φ : C → ℛ satisfy the following conditions:
- (A1)
F(x, x) = 0 for all x ∈ C;
- (A2)
F is monotone, that is, F(x, y) + F(y, x) ≤ 0 for any x, y ∈ C;
- (A3)
for each fixed y ∈ C, x ↦ F(x, y) is weakly upper semicontinuous;
- (A4)
for each fixed x ∈ C, y ↦ F(x, y) is convex and lower semicontinuous;
- (B1)
for each x ∈ C and r > 0, there exist a bounded subset Dx⊆C and yx ∈ C such that, for any z ∈ C∖Dx,
F(z, yx) + φ(yx) − φ(z) + (1/r)〈yx − z, z − x〉 < 0; (2.6)
- (B2)
C is a bounded set.
Lemma 2.7 (see [23]). Let C be a nonempty closed convex subset of a real Hilbert space H. Let F : C × C → ℛ be a bifunction satisfying (A1)–(A4), and let φ : C → ℛ be convex and lower semicontinuous such that C∩dom φ ≠ ∅. Assume that either (B1) or (B2) holds. For r > 0 and x ∈ H, there exists u ∈ C such that
F(u, y) + φ(y) − φ(u) + (1/r)〈y − u, u − x〉 ≥ 0, ∀y ∈ C. (2.7)
Define a mapping Kr : H → C as follows:
Kr(x) = {u ∈ C : F(u, y) + φ(y) − φ(u) + (1/r)〈y − u, u − x〉 ≥ 0, ∀y ∈ C} (2.8)
for all x ∈ H. Then, the following hold:
- (i)
Kr is single valued;
- (ii)
Kr is firmly nonexpansive, that is, for any x, y ∈ H, ∥Krx − Kry∥² ≤ 〈Krx − Kry, x − y〉;
- (iii)
F(Kr) = MEP(F, φ);
- (iv)
MEP(F, φ) is closed and convex.
Lemma 2.8 (see [8]). Assume that A is a strongly positive bounded linear operator on a Hilbert space H with coefficient γ̄ > 0 and 0 < ρ ≤ ∥A∥⁻¹; then ∥I − ρA∥ ≤ 1 − ργ̄.
3. Strong Convergence Theorems
In this section, we show a strong convergence theorem which solves the problem of finding a common element of F(S), GMEP(F, φ, Ψ), and I(B, M) for inverse-strongly monotone mappings in a Hilbert space.
Theorem 3.1. Let H be a real Hilbert space, C a closed convex subset of H, and B, Ψ : C → H β- and σ-inverse-strongly monotone mappings, respectively. Let φ : C → ℛ be a convex and lower semicontinuous function, f : C → C a contraction with coefficient α (0 < α < 1), M : H → 2^H a maximal monotone mapping, and A a strongly positive bounded linear operator on H with coefficient γ̄. Assume that 0 < γ < γ̄/α. Let S be a nonexpansive mapping of C into itself such that
Ω := F(S)∩GMEP(F, φ, Ψ)∩I(B, M) ≠ ∅. (3.1)
Suppose that {xn} is a sequence generated by the following algorithm for x0 ∈ C arbitrarily:
F(un, y) + 〈Ψxn, y − un〉 + φ(y) − φ(un) + (1/rn)〈y − un, un − xn〉 ≥ 0, ∀y ∈ C,
xn+1 = αnγf(xn) + (I − αnA)SJM,λ(un − λBun) (3.2)
for all
n = 0,1, 2, …, where
- (C1)
{αn} ⊂ (0,1), lim n→∞αn = 0, ∑n=0∞αn = ∞, and ∑n=0∞|αn+1 − αn| < ∞,
- (C2)
{rn} ⊂ [c, d] with c, d ∈ (0,2σ) and ∑n=0∞|rn+1 − rn| < ∞,
- (C3)
λ ∈ (0,2β).
Then {xn} converges strongly to q ∈ Ω, where q = PΩ(γf + I − A)(q), which solves the following variational inequality:
〈(γf − A)q, p − q〉 ≤ 0, ∀p ∈ Ω, (3.3)
which is the optimality condition for the minimization problem
min x∈Ω (1/2)〈Ax, x〉 − h(x), (3.4)
where h is a potential function for γf (i.e., h′(q) = γf(q) for q ∈ H).
Proof. By condition (C1), we may assume without loss of generality that αn ∈ (0, ∥A∥⁻¹) for all n. By Lemma 2.8, we have that ∥I − αnA∥ ≤ 1 − αnγ̄.
Next, we will divide the proof into six steps.
Step 1. We will show that {xn} and {un} are bounded.
Since B and Ψ are β- and σ-inverse-strongly monotone, we have that
∥(I − λB)x − (I − λB)y∥² = ∥x − y∥² − 2λ〈x − y, Bx − By〉 + λ²∥Bx − By∥² ≤ ∥x − y∥² + λ(λ − 2β)∥Bx − By∥². (3.5)
In a similar way, we can obtain
∥(I − rnΨ)x − (I − rnΨ)y∥² ≤ ∥x − y∥² + rn(rn − 2σ)∥Ψx − Ψy∥². (3.6)
It is clear that if 0 < λ < 2β and 0 < rn < 2σ, then I − λB and I − rnΨ are nonexpansive.
Put yn = JM,λ(un − λBun), n ≥ 0. It follows that
∥yn − q∥ = ∥JM,λ(un − λBun) − JM,λ(q − λBq)∥ ≤ ∥un − q∥. (3.7)
By Lemma 2.7, we have that un = Krn(xn − rnΨxn) for all n ≥ 0. Then, we have that
∥un − q∥ = ∥Krn(xn − rnΨxn) − Krn(q − rnΨq)∥ ≤ ∥xn − q∥. (3.8)
Hence, we have that
∥yn − q∥ ≤ ∥xn − q∥. (3.9)
From (3.2), we deduce that
(3.10)
It follows by induction that
∥xn − q∥ ≤ max{∥x0 − q∥, ∥γf(q) − Aq∥/(γ̄ − γα)}, n ≥ 0. (3.11)
Therefore, {xn} is bounded, and so are {yn}, {Syn}, {Bun}, {f(xn)}, and {ASyn}.
Step 2. We claim that lim n→∞∥xn+1 − xn∥ = 0. From (3.2), we have that
(3.12)
Since I − λB is nonexpansive, we also have that
(3.13)
On the other hand, from un = Krn(xn − rnΨxn) and un−1 = Krn−1(xn−1 − rn−1Ψxn−1), it follows that
F(un, y) + 〈Ψxn, y − un〉 + φ(y) − φ(un) + (1/rn)〈y − un, un − xn〉 ≥ 0, ∀y ∈ C, (3.14)
F(un−1, y) + 〈Ψxn−1, y − un−1〉 + φ(y) − φ(un−1) + (1/rn−1)〈y − un−1, un−1 − xn−1〉 ≥ 0, ∀y ∈ C. (3.15)
Substituting y = un−1 into (3.14) and y = un into (3.15), we get
(3.16)
From (A2), we obtain
(3.17)
and then
(3.18)
So
(3.19)
It follows that
(3.20)
Without loss of generality, let us assume that there exists a real number c such that rn−1 > c > 0 for all n ∈ ℕ. Then, we have that
(3.21)
and hence
(3.22)
where M1 = sup{∥un − xn∥ : n ∈ ℕ}. Substituting (3.22) into (3.13), we have that
(3.23)
Substituting (3.23) into (3.12), we get
(3.24)
where M2 = sup{max{∥ASyn−1∥, ∥f(xn−1)∥} : n ∈ ℕ}. By conditions (C1)-(C2) and Lemma 2.5, we have that ∥xn+1 − xn∥ → 0 as n → ∞. From (3.23), we also have that ∥yn+1 − yn∥ → 0 as n → ∞.
Step 3. We show the following:
- (i)
lim n→∞∥Bun − Bq∥ = 0;
- (ii)
lim n→∞∥Ψxn − Ψq∥ = 0.
For q ∈ Ω with q = JM,λ(q − λBq), by (3.5) and (3.8), we get
(3.25)
It follows that
(3.26)
So, we obtain
(3.27)
By conditions (C1) and (C3) and lim n→∞∥xn+1 − xn∥ = 0, we obtain that ∥Bun − Bq∥ → 0 as n → ∞.
Substituting (3.8) into (3.25), we get
(3.28)
From (3.26), we have that
(3.29)
So, we also have that
(3.30)
By conditions (C1)–(C3), lim n→∞∥xn+1 − xn∥ = 0, and lim n→∞∥Bun − Bq∥ = 0, we obtain that ∥Ψxn − Ψq∥ → 0 as n → ∞.
Step 4. We show the following:
- (i)
lim n→∞∥xn − un∥ = 0;
- (ii)
lim n→∞∥un − yn∥ = 0;
- (iii)
lim n→∞∥yn − Syn∥ = 0.
Since Krn is firmly nonexpansive, by (2.2) we observe that
(3.31)
Hence, we have that
(3.32)
Since JM,λ is 1-inverse-strongly monotone, by (2.2) we compute
(3.33)
which implies that
(3.34)
Substituting (3.32) into (3.34), we have that
(3.35)
Substituting (3.35) into (3.26), we get
(3.36)
Then, we derive
(3.37)
By condition (C1), lim n→∞∥xn − xn+1∥ = 0, lim n→∞∥Ψxn − Ψq∥ = 0, and lim n→∞∥Bun − Bq∥ = 0. So, we have that ∥xn − un∥ → 0 and ∥un − yn∥ → 0 as n → ∞. It follows that
(3.38)
From (3.2), we have that
(3.39)
By condition (C1) and lim n→∞∥yn−1 − yn∥ = 0, we obtain that ∥xn − Syn∥ → 0 as n → ∞. Next, we observe that
(3.40)
Since {ASyn} is bounded and by condition (C1), we have that ∥xn+1 − Syn∥ → 0 as n → ∞, and
(3.41)
Since lim n→∞∥xn − xn+1∥ = 0 and lim n→∞∥xn+1 − Syn∥ = 0, it follows that ∥xn − Syn∥ → 0 as n → ∞. Hence, we have that
(3.42)
By (3.38) and lim n→∞∥xn − Syn∥ = 0, we obtain ∥xn − Sxn∥ → 0 as n → ∞. Moreover, we also have that
(3.43)
By (3.38) and lim n→∞∥xn − Syn∥ = 0, we obtain ∥yn − Syn∥ → 0 as n → ∞.
Step 5. We show that q ∈ Ω := F(S)∩GMEP(F, φ, Ψ)∩I(B, M) and limsup n→∞〈(γf − A)q, Syn − q〉 ≤ 0. It is easy to see that PΩ(γf + (I − A)) is a contraction of H into itself. Indeed, since 0 < γ < γ̄/α, we have that
(3.44)
Since H is complete, there exists a unique fixed point q ∈ H such that q = PΩ(γf + (I − A))(q). By Lemma 2.2, we obtain that 〈(γf − A)q, w − q〉 ≤ 0 for all w ∈ Ω.
Next, we show that limsup n→∞〈(γf − A)q, Syn − q〉 ≤ 0, where q = PΩ(γf + I − A)(q) is the unique solution of the variational inequality 〈(γf − A)q, p − q〉 ≤ 0 for all p ∈ Ω. We can choose a subsequence of {yn} such that
(3.45)
Since {yn} is bounded, there exists a subsequence of {yn} which converges weakly to some w; we may assume without loss of generality that this subsequence is {yni} and that yni ⇀ w.
We claim that w ∈ Ω. Since ∥yn − Syn∥ → 0, ∥xn − Sxn∥→0, and ∥xn − yn∥→0 and by Lemma 2.6, we have that w ∈ F(S).
Next, we show that w ∈ GMEP(F, φ, Ψ). Since un = Krn(xn − rnΨxn), we know that
(3.46)
It follows by (A2) that
(3.47)
Hence,
(3.48)
For t ∈ (0,1] and y ∈ H, let yt = ty + (1 − t)w. From (3.48), we have that
(3.49)
Further, from (A4), the weak lower semicontinuity of φ, and the convergences established above, we have that
(3.50)
From (A1), (A4), and (3.50), we have that
(3.51)
and hence
(3.52)
Letting t → 0, we have, for each y ∈ C, that
(3.53)
This implies that w ∈ GMEP(F, φ, Ψ).
Lastly, we show that w ∈ I(B, M). In fact, since B is β-inverse-strongly monotone, B is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.3 that M + B is maximal monotone. Let (v, g) ∈ G(M + B); then g − Bv ∈ M(v). Since yni = JM,λ(uni − λBuni), we have that uni − λBuni ∈ (I + λM)(yni), that is, (1/λ)(uni − yni − λBuni) ∈ M(yni). By virtue of the maximal monotonicity of M + B, we have that
(3.54)
and hence
(3.55)
It follows from lim n→∞∥un − yn∥ = 0, lim n→∞∥Bun − Byn∥ = 0, and yni ⇀ w that
(3.56)
It follows from the maximal monotonicity of B + M that θ ∈ (M + B)(w), that is, w ∈ I(B, M). Therefore, w ∈ Ω. It follows that
(3.57)
Step 6. We prove that xn → q. By using (3.2) together with the Schwarz inequality, we have that
(3.58)
Since {xn} is bounded, there is a constant η > 0 that dominates the bounded terms for all n ≥ 0, and it follows that
(3.59)
where ςn = 2〈Syn − q, γf(q) − Aq〉 + ηαn. By limsup n→∞〈(γf − A)q, Syn − q〉 ≤ 0, we get limsup n→∞ςn ≤ 0. Applying Lemma 2.5, we can conclude that xn → q. This completes the proof.
Corollary 3.2. Let H be a real Hilbert space and C a closed convex subset of H. Let B, Ψ : C → H be β- and σ-inverse-strongly monotone mappings and φ : C → ℛ a convex and lower semicontinuous function. Let f : C → C be a contraction with coefficient α (0 < α < 1), M : H → 2^H a maximal monotone mapping, and S a nonexpansive mapping of C into itself such that
(3.60)
Suppose that {xn} is a sequence generated by the following algorithm for x0, un ∈ C arbitrarily:
(3.61)
for all n = 0,1, 2, …, where {αn}, {rn}, and λ satisfy (C1)–(C3) of Theorem 3.1.
Then {xn} converges strongly to q ∈ Ω, where q = PΩf(q), which solves the following variational inequality:
(3.62)
Proof. Putting A ≡ I and γ ≡ 1 in Theorem 3.1, we can obtain the desired conclusion immediately.
Corollary 3.3. Let H be a real Hilbert space and C a closed convex subset of H. Let B, Ψ : C → H be β- and σ-inverse-strongly monotone mappings, φ : C → ℛ a convex and lower semicontinuous function, and M : H → 2^H a maximal monotone mapping. Let S be a nonexpansive mapping of C into itself such that
(3.63)
Suppose that {xn} is a sequence generated by the following algorithm for x0, u ∈ C and un ∈ C:
(3.64)
for all n = 0,1, 2, …, where {αn}, {rn}, and λ satisfy (C1)–(C3) of Theorem 3.1.
Then {xn} converges strongly to q ∈ Ω, where q = PΩ(u), which solves the following variational inequality:
(3.65)
Proof. Putting f(x) ≡ u, for all x ∈ C, in Corollary 3.2, we can obtain the desired conclusion immediately.
Corollary 3.4. Let H be a real Hilbert space, C a closed convex subset of H, B : C → H a β-inverse-strongly monotone mapping, and A a strongly positive bounded linear operator on H with coefficient γ̄. Assume that 0 < γ < γ̄/α. Let f : C → C be a contraction with coefficient α (0 < α < 1) and S a nonexpansive mapping of C into itself such that
(3.66)
Suppose that {xn} is a sequence generated by the following algorithm for x0 ∈ C arbitrarily:
(3.67)
for all n = 0,1, 2, …, where {αn}, {rn}, and λ satisfy (C1)–(C3) of Theorem 3.1.
Then {xn} converges strongly to q ∈ Ω, where q = PΩ(γf + I − A)(q), which solves the following variational inequality:
(3.68)
Proof. Taking F ≡ 0, Ψ ≡ 0, φ ≡ 0, un = xn, and JM,λ = PC in Theorem 3.1, we can obtain the desired conclusion immediately.
Remark 3.5. In Corollary 3.4 we generalize and improve the result of Klin-eam and Suantai [24].
4. Applications
In this section, we apply the iterative scheme (1.25) for finding a common fixed point of nonexpansive mapping and strictly pseudocontractive mapping and also apply Theorem 3.1 for finding a common fixed point of nonexpansive mappings and inverse-strongly monotone mappings.
Definition 4.1. A mapping T : C → C is called a strict pseudocontraction if there exists a constant 0 ≤ κ < 1 such that
∥Tx − Ty∥² ≤ ∥x − y∥² + κ∥(I − T)x − (I − T)y∥², ∀x, y ∈ C. (4.1)
If κ = 0, then T is nonexpansive. In this case, we say that T : C → C is a κ-strict pseudocontraction. Put B = I − T. Then, we have that
∥(I − B)x − (I − B)y∥² ≤ ∥x − y∥² + κ∥Bx − By∥². (4.2)
Observe that
∥(I − B)x − (I − B)y∥² = ∥x − y∥² − 2〈x − y, Bx − By〉 + ∥Bx − By∥². (4.3)
Hence, we obtain
〈x − y, Bx − By〉 ≥ ((1 − κ)/2)∥Bx − By∥². (4.4)
Then, B is a ((1 − κ)/2)-inverse-strongly monotone mapping.
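This inverse-strong monotonicity bound can be checked numerically (toy mapping, not from the paper): T(x) = −3x is a κ-strict pseudocontraction with κ = (3 − 1)/(3 + 1)·2 = 1/2, and B = I − T = 4I attains the bound (4.4) with equality:

```python
import numpy as np

# Toy check (not from the paper): T(x) = -3x satisfies (4.1) with
# κ = 1/2, and B = I - T = 4I satisfies
#   〈x - y, Bx - By〉 ≥ ((1-κ)/2) ‖Bx - By‖².

kappa = 0.5
T = lambda x: -3.0 * x
B = lambda x: x - T(x)       # B = 4 I

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # strict pseudocontraction inequality (4.1)
    lhs41 = np.linalg.norm(T(x) - T(y)) ** 2
    rhs41 = np.linalg.norm(x - y) ** 2 + kappa * np.linalg.norm(B(x) - B(y)) ** 2
    assert lhs41 <= rhs41 + 1e-9
    # inverse-strong monotonicity bound (4.4)
    lhs = np.dot(x - y, B(x) - B(y))
    rhs = 0.5 * (1 - kappa) * np.linalg.norm(B(x) - B(y)) ** 2
    assert lhs >= rhs - 1e-9
```

For this linear example both inequalities hold with equality, which shows the constant ((1 − κ)/2) in (4.4) is sharp.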
Using Theorem 3.1, we first prove a strong convergence theorem for finding a common fixed point of a nonexpansive mapping and a strict pseudocontraction.
Theorem 4.2. Let H be a real Hilbert space, C a closed convex subset of H, B, Ψ : C → H β- and σ-inverse-strongly monotone mappings, φ : C → ℛ a convex and lower semicontinuous function, f : C → C a contraction with coefficient α (0 < α < 1), and A a strongly positive bounded linear operator on H with coefficient γ̄. Assume that 0 < γ < γ̄/α. Let S be a nonexpansive mapping of C into itself, and let T be a κ-strict pseudocontraction of C into itself such that
Ω := F(S)∩F(T)∩GMEP(F, φ, Ψ) ≠ ∅. (4.5)
Suppose that {xn} is a sequence generated by the following algorithm for x0, un ∈ C arbitrarily:
(4.6)
for all n = 0,1, 2, …, where {αn}, {rn}, and λ satisfy (C1)–(C3) of Theorem 3.1.
Then {xn} converges strongly to q ∈ Ω, where q = PΩ(γf + I − A)(q) which solves the following variational inequality:
(4.7)
which is the optimality condition for the minimization problem
(4.8)
where h is a potential function for γf (i.e., h′(q) = γf(q) for q ∈ H).
Proof. Put B ≡ I − T; then B is ((1 − κ)/2)-inverse-strongly monotone, F(T) = I(B, M) with M ≡ 0, and JM,λ(xn − λBxn) = (1 − λ)xn + λTxn. So, by Theorem 3.1, we obtain the desired result.
Corollary 4.3. Let H be a real Hilbert space, C a closed convex subset of H, B, Ψ : C → H β- and σ-inverse-strongly monotone mappings, and φ : C → ℛ a convex and lower semicontinuous function. Let f : C → C be a contraction with coefficient α (0 < α < 1), let S be a nonexpansive mapping of C into itself, and let T be a κ-strict pseudocontraction of C into itself such that
(4.9)
Suppose that {xn} is a sequence generated by the following algorithm for x0 ∈ C arbitrarily:
(4.10)
for all n = 0,1, 2, …, where {αn}, {rn}, and λ satisfy (C1)–(C3) of Theorem 3.1.
Then {xn} converges strongly to q ∈ Ω, where q = PΩf(q), which solves the following variational inequality:
(4.11)
which is the optimality condition for the minimization problem
(4.12)
where h is a potential function for f (i.e., h′(q) = f(q) for q ∈ H).
Proof. Putting A ≡ I and γ ≡ 1 in Theorem 4.2, we obtain the desired result.
Acknowledgments
The authors would like to thank the National Research University Project of Thailand's Office of the Higher Education Commission for financial support under project NRU-CSEC no. 54000267. Furthermore, they also would like to thank the Faculty of Science (KMUTT) and the National Research Council of Thailand. Finally, the authors would like to thank Professor Vittorio Colao and the referees for reading this paper carefully, providing valuable suggestions and comments, and pointing out a major error in the original version of this paper.