Volume 2013, Issue 1, Article ID 531912
Research Article
Open Access

An Extension of Subgradient Method for Variational Inequality Problems in Hilbert Space

Xueyong Wang (Corresponding Author), Shengjie Li, and Xipeng Kou

College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China

First published: 12 March 2013
Academic Editor: Guanglu Zhou

Abstract

An extension of the subgradient method for solving variational inequality problems is presented. A new iterative process, which relates to both the fixed point of a nonexpansive mapping and the current iterate, is generated. A weak convergence theorem is obtained for the three sequences generated by this process under some mild conditions.

1. Introduction

Let Ω be a nonempty closed convex subset of a real Hilbert space H, and let f : Ω → Ω be a continuous mapping. The variational inequality problem, denoted by VI(f, Ω), is to find a vector x* ∈ Ω such that
(1) ⟨f(x*), x − x*⟩ ≥ 0, ∀x ∈ Ω.
Throughout the paper, let Ω* be the solution set of VI(f, Ω), which is assumed to be nonempty. In the special case when Ω is the nonnegative orthant, (1) reduces to the nonlinear complementarity problem: find a vector x* ∈ Ω such that
(2) f(x*) ≥ 0, ⟨f(x*), x*⟩ = 0.
The variational inequality problem plays an important role in optimization theory and variational analysis. There are numerous applications of variational inequalities in mathematics as well as in equilibrium problems arising from engineering, economics, and other areas of real life; see [1–16] and the references therein. Many algorithms, which employ the projection onto the feasible set Ω of the variational inequality or onto some related sets in order to iteratively reach a solution, have been proposed to solve (1). Korpelevich [2] proposed an extragradient method for finding the saddle point of some special cases of the equilibrium problem. Solodov and Svaiter [3] extended the extragradient algorithm by replacing the set Ω with the intersection of two sets related to VI(f, Ω). In each iteration of their algorithm, the new vector x^{k+1} is calculated according to the following scheme. Given the current vector x^k and parameters δ, γ ∈ (0, 1), compute r(x^k) = x^k − P_Ω(x^k − f(x^k)); if r(x^k) = 0, stop; otherwise, compute
(3) z^k = x^k − η_k r(x^k),
where η_k = γ^{m_k} and m_k is the smallest nonnegative integer m satisfying
(4) ⟨f(x^k − γ^m r(x^k)), r(x^k)⟩ ≥ δ∥r(x^k)∥²,
and then compute
(5) x^{k+1} = P_{Ω∩H_k}(x^k),
where H_k = {x ∈ R^n ∣ ⟨x − z^k, f(z^k)⟩ ≤ 0}.
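To make the scheme (3)–(5) concrete, here is a minimal numerical sketch in Python. All problem data are our own toy choices (a box Ω, so P_Ω is componentwise clipping, and an affine monotone f); since P_{Ω∩H_k} has no closed form for a box, we approximate it with Dykstra's alternating-projection routine. Nothing below comes from the paper itself.

```python
import numpy as np

lo, hi = 0.0, 10.0
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite, so f is monotone
b = np.array([-4.0, -2.0])
f = lambda x: A @ x + b
P_Omega = lambda x: np.clip(x, lo, hi)   # projection onto the box [lo, hi]^2

def project_box_halfspace(x, z, g, iters=200):
    """Dykstra's algorithm for P_{Omega ∩ H}(x), with H = {y : <y - z, g> <= 0}."""
    y, p, q = x.copy(), np.zeros_like(x), np.zeros_like(x)
    for _ in range(iters):
        u = P_Omega(y + p)                  # project corrected point onto the box
        p = y + p - u
        w = u + q
        gg = g @ g
        viol = max(0.0, (w - z) @ g) / gg if gg > 0 else 0.0
        y = w - viol * g                    # project onto the halfspace
        q = w - y
    return y

def solodov_svaiter_step(x, delta=0.5, gamma=0.5):
    r = x - P_Omega(x - f(x))               # projected residual r(x^k)
    if np.linalg.norm(r) < 1e-12:
        return x                            # r(x^k) = 0: x already solves VI(f, Omega)
    m = 0
    while (f(x - gamma**m * r) @ r) < delta * (r @ r):
        m += 1                              # Armijo-type search, condition (4)
    z = x - gamma**m * r                    # z^k as in (3), with eta_k = gamma**m
    return project_box_halfspace(x, z, f(z))   # x^{k+1} = P_{Omega ∩ H_k}(x^k), as in (5)

x = np.array([5.0, 5.0])
for _ in range(50):
    x = solodov_svaiter_step(x)
print(x)   # approaches x* = (2, 0), the solution of this toy VI
```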
On the other hand, Nadezhkina and Takahashi [11] obtained x^{k+1} by the following iterative formula:
(6) y^k = P_Ω(x^k − λ_k f(x^k)),  x^{k+1} = α_k x^k + (1 − α_k) S P_Ω(x^k − λ_k f(y^k)),
where {λ_k} is a sequence in [a, b] for some a, b ∈ (0, 1/L), {α_k} is a sequence in [c, d] for some c, d ∈ (0, 1), and S : Ω → Ω is a nonexpansive mapping. Denoting the fixed point set of S by F(S) and assuming F(S)∩Ω* ≠ ∅, they proved that the sequence {x^k} converges weakly to some x* ∈ F(S)∩Ω*.

Motivated and inspired by the extragradient methods in [2, 3], in this paper we study further extragradient methods and analyze the weak convergence properties of the three sequences generated by our method.

The rest of this paper is organized as follows. In Section 2, we give some preliminaries and basic results. In Section 3, we present an extragradient algorithm and then discuss the weak convergence of the sequences generated by the algorithm. In Section 4, we modify the extragradient algorithm and give its convergence analysis. Section 5 concludes the paper.

2. Preliminaries and Basic Results

Let H be a real Hilbert space with ⟨x, y⟩ denoting the inner product of the vectors x, y. Weak and strong convergence of a sequence {x^k} to a point x are denoted by x^k ⇀ x and x^k → x, respectively. The identity mapping from Ω to itself is denoted by I.

For a vector x ∈ H, the orthogonal projection of x onto Ω, denoted by P_Ω(x), is defined as
(7) P_Ω(x) = arg min{∥y − x∥ ∣ y ∈ Ω}.
The following lemma states some well-known properties of the orthogonal projection operator.

Lemma 1. One has

(8) ⟨x − P_Ω(x), y − P_Ω(x)⟩ ≤ 0, ∀x ∈ H, ∀y ∈ Ω;
(9) ∥P_Ω(x) − P_Ω(y)∥ ≤ ∥x − y∥, ∀x, y ∈ H;
(10) ∥P_Ω(x) − y∥² ≤ ∥x − y∥² − ∥x − P_Ω(x)∥², ∀x ∈ H, ∀y ∈ Ω;
(11) ∥P_Ω(x) − P_Ω(y)∥² ≤ ⟨x − y, P_Ω(x) − P_Ω(y)⟩, ∀x, y ∈ H.
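As a quick sanity check of Lemma 1 (ours, not part of the paper), the following Python snippet verifies (8)–(11) numerically for Ω the closed unit ball, whose projection has the closed form P_Ω(x) = x/max(1, ∥x∥).

```python
import numpy as np

# Numerical check of (8)-(11) for Omega = closed unit ball in R^3.
P = lambda x: x / max(1.0, np.linalg.norm(x))
rng = np.random.default_rng(0)

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    v = P(rng.normal(size=3))                 # an arbitrary point of Omega
    px, py = P(x), P(y)
    assert (x - px) @ (v - px) <= 1e-9                                  # (8)
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-9      # (9)
    assert (np.linalg.norm(px - v)**2
            <= np.linalg.norm(x - v)**2 - np.linalg.norm(x - px)**2 + 1e-9)  # (10)
    assert np.linalg.norm(px - py)**2 <= (x - y) @ (px - py) + 1e-9     # (11)
```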

A mapping f is called monotone if
(12) ⟨f(x) − f(y), x − y⟩ ≥ 0, ∀x, y ∈ Ω.
A mapping f is called Lipschitz continuous if there exists an L ≥ 0 such that
(13) ∥f(x) − f(y)∥ ≤ L∥x − y∥, ∀x, y ∈ Ω.
The graph of a (possibly set-valued) mapping f, denoted by G(f), is defined by
(14) G(f) = {(x, y) ∣ y ∈ f(x)}.
A mapping S : Ω → Ω is called nonexpansive if
(15) ∥S(x) − S(y)∥ ≤ ∥x − y∥, ∀x, y ∈ Ω,
and the fixed point set of a mapping S, denoted by F(S), is defined by
(16) F(S) = {x ∈ Ω ∣ S(x) = x}.
We denote the normal cone of Ω at v ∈ Ω by
(17) N_Ω(v) = {w ∈ H ∣ ⟨w, u − v⟩ ≤ 0, ∀u ∈ Ω}
and define the operator T as
(18) T(v) = f(v) + N_Ω(v) if v ∈ Ω, and T(v) = ∅ otherwise.
Then T is maximal monotone, and it is well known that 0 ∈ T(v) if and only if v ∈ Ω*. For more details, see, for example, [9] and the references therein. The following lemma is established in Hilbert spaces and is well known as the Opial condition.

Lemma 2. For any sequence {x^k} that converges weakly to x (x^k ⇀ x), one has

(19) liminf_{k→∞}∥x^k − x∥ < liminf_{k→∞}∥x^k − y∥, ∀y ≠ x.

The next lemma was proposed in [10].

Lemma 3 (Demiclosedness principle). Let Ω be a closed, convex subset of a real Hilbert space H, and let S : Ω → H be a nonexpansive mapping. Then I − S is demiclosed at y ∈ H; that is, for any sequence {x^k} ⊂ Ω such that x^k ⇀ x ∈ Ω and (I − S)x^k → y, one has (I − S)x = y.

3. An Algorithm and Its Convergence Analysis

In this section, we give our algorithm, and then discuss its convergence. First, we need the following definition.

Definition 4. For a vector x ∈ Ω, the projected residual function is defined as

(20) r(x) := x − P_Ω(x − f(x)).
Obviously, we have that x ∈ Ω* if and only if r(x) = 0. Now we describe our algorithm.

Algorithm A. Step 0. Take δ ∈ (0,1), γ ∈ (0,1), x^0 ∈ Ω, and set k = 0.

Step 1. For the current iterate x^k ∈ Ω, compute

(21) y^k = P_Ω(x^k − f(x^k)),
(22) z^k = x^k − η_k r(x^k),
where η_k = γ^{n_k} and n_k is the smallest nonnegative integer n satisfying
(23) ⟨f(x^k − γ^n r(x^k)), r(x^k)⟩ ≥ δ∥r(x^k)∥².
Compute
(24) t^k = P_{H_k}(x^k),
(25) x^{k+1} = α_k x^k + (1 − α_k) S t^k,
where {α_k} ⊂ (a, b) (a, b ∈ (0,1)), H_k = {x ∈ Ω ∣ ⟨x − z^k, f(z^k)⟩ ≤ 0}, and S : Ω → Ω is a nonexpansive mapping.

Step 2. If ∥r(x^{k+1})∥ = 0, stop; otherwise, set k := k + 1 and go to Step 1.

Remark 5. The iterate t^k in Algorithm A is well defined according to [3] and can be interpreted as follows: whenever (23) holds, t^k is the projection of x^k onto the hyperplane bounding H_k; that is,

(26) t^k = x^k − (⟨f(z^k), x^k − z^k⟩ / ∥f(z^k)∥²) f(z^k).
For more details, see [3, 4].
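Putting Steps 0–2 together, the following Python sketch illustrates one possible implementation of Algorithm A under our own toy assumptions: Ω is a box, f is affine and monotone, α_k is held constant in (a, b), and the nonexpansive mapping S is taken to be the averaged map x ↦ (x + P_Ω(x))/2, for which F(S) = Ω. The iterate t^k is computed via the explicit form (26); none of the problem data comes from the paper.

```python
import numpy as np

lo, hi = 0.0, 10.0
A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-4.0, -2.0])
f = lambda x: A @ x + b                      # monotone and Lipschitz continuous
P_Omega = lambda x: np.clip(x, lo, hi)
S = lambda x: 0.5 * (x + P_Omega(x))         # averaged map: nonexpansive, F(S) = Omega

delta, gamma, alpha = 0.5, 0.5, 0.5          # alpha_k held constant in (a, b)
x = np.array([5.0, 5.0])
for k in range(200):
    y = P_Omega(x - f(x))                    # (21)
    r = x - y                                # projected residual (20)
    if np.linalg.norm(r) < 1e-10:
        break                                # Step 2: r = 0, stop
    n = 0
    while (f(x - gamma**n * r) @ r) < delta * (r @ r):
        n += 1                               # line search (23)
    z = x - gamma**n * r                     # (22), with eta_k = gamma**n
    fz = f(z)
    if fz @ fz < 1e-20:                      # f(z) = 0: z itself solves the VI
        x = z
        break
    t = x - ((fz @ (x - z)) / (fz @ fz)) * fz    # t^k via the explicit form (26)
    x = alpha * x + (1 - alpha) * S(t)       # (25)
print(x)                                     # tends to the solution (2, 0)
```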

Now we investigate the weak convergence property of our algorithm. First we recall the following result, which was proposed by Schu [17].

Lemma 6. Let H be a real Hilbert space, let {α_k} be a sequence of real numbers with 0 < a ≤ α_k ≤ b < 1 for all k, and let {v^k} and {w^k} be sequences in H with limsup_{k→∞}∥v^k∥ ≤ c, limsup_{k→∞}∥w^k∥ ≤ c, and

(27) lim_{k→∞}∥α_k v^k + (1 − α_k)w^k∥ = c
for some c ≥ 0. Then one has
(28) lim_{k→∞}∥v^k − w^k∥ = 0.

The following theorem is crucial in proving the boundedness of the sequence {x^k}.

Theorem 7. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, let F(S)∩Ω* ≠ ∅, and let x* ∈ Ω*. Then for any sequence {x^k} generated by Algorithm A, one has

(29) ∥t^k − x*∥² ≤ ∥x^k − x*∥² − ∥x^k − t^k∥².

Proof. Letting x = x^k and y = x*, it follows from Lemma 1 (10), applied to the projection onto H_k, that

(30)
that is,
(31)
From (20)–(23) in Algorithm A, we get ⟨x^k − z^k, f(z^k)⟩ > 0, which means x^k ∉ H_k. So, by the definition of the projection operator and [3], we obtain
(32)
Substituting (32) into (31), we have
(33)
Since f is monotone, combining this with (1), we obtain
(34)
Thus
(35)
which completes the proof.

Theorem 8. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then for any sequence {x^k} generated by Algorithm A, one has

(36) lim_{k→∞}∥x^k − y^k∥ = 0.
Furthermore,
(37) lim_{k→∞}∥x^k − t^k∥ = 0.

Proof. Using (22), we have

(38)
By the Cauchy-Schwarz inequality,
(39)
Hence, by (23)
(40)
Then we have
(41)
where the first inequality follows from the nonexpansiveness of S.

This means that {x^k} is bounded, and so are {y^k} and {z^k}. Since f is Lipschitz continuous, there exists a constant M > 0 such that ∥f(z^k)∥ ≤ M for all k, and we also have

(42)
So there exists ξ ≥ 0 such that lim_{k→∞}∥x^k − x*∥ = ξ, and hence
(43)
which implies that lim_{k→∞}∥x^k − y^k∥ = 0 or lim_{k→∞}η_k = 0.

If lim_{k→∞}∥x^k − y^k∥ = 0, we get the conclusion.

If lim_{k→∞}η_k = 0, we can deduce that the inequality (23) in Algorithm A is not satisfied for n_k − 1; that is, there exists k_0 such that, for all k ≥ k_0,

(44)
Applying (8) by setting x = x^k − f(x^k) and y = x^k leads to
(45)
Therefore
(46)
Passing to the limit in (44) and (46), we get δ lim_{k→∞}∥x^k − y^k∥ ≥ lim_{k→∞}∥x^k − y^k∥; since δ ∈ (0,1), we obtain lim_{k→∞}∥x^k − y^k∥ = 0.

On the other hand, using the Cauchy–Schwarz inequality again, we have

(47)
Therefore,
(48)
Then we have
(49)
Noting that α_k ∈ (a, b) (a, b ∈ (0,1)), it easily follows that 0 < (1 − b)/2 < (1 − α_k)/2 < (1 − a)/2 < 1, which implies that
(50)
By the triangle inequality, we have
(51)
Passing to the limit in (51), we conclude
(52)
The proof is complete.

Theorem 9. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then the sequences {x^k}, {y^k}, and {t^k} generated by Algorithm A converge weakly to the same point x* ∈ F(S)∩Ω*, where x* = lim_{k→∞} P_{F(S)∩Ω*}(x^k).

Proof. By Theorem 8, we know that {x^k} is bounded, which implies that there exists a subsequence {x^{k_i}} of {x^k} that converges weakly to some point x*; we show that x* ∈ F(S)∩Ω*.

First, we show that x* ∈ F(S).

Letting x ∈ F(S)∩Ω*, since S is a nonexpansive mapping, from (29) we have

(53)
Passing to the limit in (53), we obtain
(54)
Then by (25) we have
(55)
From Lemma 6, it follows that
(56)
By the triangle inequality, we have
(57)
and then passing to the limit in (57), we deduce that
(58)
which implies that x* ∈ F(S) by Lemma 3.

Second, we show that x* ∈ Ω*.

Since lim_{k→∞}∥x^k − y^k∥ = lim_{k→∞}∥x^k − t^k∥ = 0 by Theorem 8, we conclude that y^{k_i} ⇀ x* and t^{k_i} ⇀ x*.

Letting (v, u) ∈ G(T), we have

(59)
thus,
(60)
Applying (8) by letting x = x^k − f(x^k) and y = v, we have
(61)
that is,
(62)
Note that u − f(v) ∈ N_Ω(v) and t^k ∈ Ω; then
(63)
where the last inequality follows from the monotonicity of f.

Since f is continuous, by (37) we have

(64)
Passing to the limit in (63), we obtain
(65)
As T is maximal monotone, we have 0 ∈ T(x*), which implies that x* ∈ Ω*.

Finally, we show that such an x* is unique.

Let {x^{k_j}} be another subsequence of {x^k} such that x^{k_j} ⇀ x̄. Then we conclude that x̄ ∈ F(S)∩Ω*. Suppose x* ≠ x̄; by Lemma 2 we have

(66)
which implies that lim_{k→∞}∥x^k − x*∥ < lim_{k→∞}∥x^k − x̄∥ < lim_{k→∞}∥x^k − x*∥, and this is a contradiction. Thus x* = x̄, and the proof is complete.

4. Further Study

In this section we propose an extension of Algorithm A, which is effective in practice. Similar to the investigation in Section 3, for a constant τ > 0, we define a new projected residual function as follows:
(67) r(x, τ) := x − P_Ω(x − τf(x)).
It is clear that the new projected residual function (67) reduces to (20) by setting τ = 1.

Algorithm B. Step 0. Take δ ∈ (0,1), γ ∈ (0,1), η_{−1} > 0, θ > 1, x^0 ∈ Ω, and set k = 0.

Step 1. For the current iterate x^k ∈ Ω, compute

(68)
where n_k is the smallest nonnegative integer n satisfying
(69)
Compute
(70)
where {α_k} ⊂ (a, b) (a, b ∈ (0,1)) and H_k = {x ∈ Ω ∣ ⟨x − z^k, f(z^k)⟩ ≤ 0}.

Step 2. If ∥r(x^{k+1}, τ_k)∥ = 0, stop; otherwise, set k := k + 1 and go to Step 1.
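A Python sketch of an Algorithm B iteration follows. The rule producing τ_k from η_{k−1} and θ did not survive extraction, so the update τ_k = min(θη_{k−1}, 1) and the scaled line-search condition below are our own illustrative assumptions (marked as such in the comments), not the paper's; S is taken to be the identity for brevity, and the toy data match the Algorithm A sketch.

```python
import numpy as np

lo, hi = 0.0, 10.0
A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-4.0, -2.0])
f = lambda x: A @ x + b
P_Omega = lambda x: np.clip(x, lo, hi)
r = lambda x, tau: x - P_Omega(x - tau * f(x))   # scaled residual (67)

delta, gamma, theta, alpha = 0.5, 0.5, 1.5, 0.5
eta_prev = 1.0                                   # plays the role of eta_{-1} > 0
x = np.array([5.0, 5.0])
for k in range(200):
    tau = min(theta * eta_prev, 1.0)             # HYPOTHETICAL adaptive rule for tau_k
    rk = r(x, tau)
    if np.linalg.norm(rk) < 1e-10:
        break
    n = 0
    while (f(x - gamma**n * rk) @ rk) < (delta / tau) * (rk @ rk):
        n += 1                                   # line search in the spirit of (69)
    eta_prev = gamma**n
    z = x - eta_prev * rk
    fz = f(z)
    if fz @ fz < 1e-20:                          # f(z) = 0: z solves the VI
        x = z
        break
    t = x - ((fz @ (x - z)) / (fz @ fz)) * fz    # projection onto hyperplane bounding H_k
    x = alpha * x + (1 - alpha) * t              # S = I taken for simplicity
print(x)
```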

In the rest of this section, we discuss the weak convergence properties of Algorithm B.

Lemma 10. For any τ > 0, one has

(71) x ∈ Ω* if and only if r(x, τ) = 0.

Therefore, solving the variational inequality is equivalent to finding a zero of the projected residual function r(·, τ). Meanwhile, we know that r(x, τ) is a continuous function of x, as the projection mapping is nonexpansive.

Lemma 11. For any x ∈ H and τ1 ≥ τ2 > 0, it holds that

(72) ∥r(x, τ1)∥ ≥ ∥r(x, τ2)∥,  ∥r(x, τ1)∥/τ1 ≤ ∥r(x, τ2)∥/τ2.
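Lemma 11 is easy to check numerically; the snippet below (ours, on the same toy box example) verifies that τ ↦ ∥r(x, τ)∥ is nondecreasing while τ ↦ ∥r(x, τ)∥/τ is nonincreasing.

```python
import numpy as np

lo, hi = 0.0, 10.0
A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-4.0, -2.0])
f = lambda x: A @ x + b
P_Omega = lambda x: np.clip(x, lo, hi)
r = lambda x, tau: x - P_Omega(x - tau * f(x))

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(lo, hi, size=2)
    t2, t1 = sorted(rng.uniform(0.1, 2.0, size=2))   # t1 >= t2 > 0
    n1, n2 = np.linalg.norm(r(x, t1)), np.linalg.norm(r(x, t2))
    assert n1 >= n2 - 1e-9              # ||r(x, tau)|| nondecreasing in tau
    assert n1 / t1 <= n2 / t2 + 1e-9    # ||r(x, tau)|| / tau nonincreasing in tau
```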

Theorem 12. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then for any sequence {x^k} generated by Algorithm B and any x* ∈ Ω*, one has

(73) ∥t^k − x*∥² ≤ ∥x^k − x*∥² − ∥x^k − t^k∥².

Proof. The proof of this theorem is similar to that of Theorem 7, so we omit it.

Theorem 13. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then for the sequences generated by Algorithm B, one has

(74) lim_{k→∞}∥x^k − y^k∥ = 0.
Furthermore,
(75) lim_{k→∞}∥x^k − t^k∥ = 0.

Proof. The proof of this theorem is similar to that of Theorem 8. The only difference is that (44) is replaced by

(76)
where (76) follows from Lemma 11 with τ1 = 1 and τ2 = τ_k.

Theorem 14. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then the sequences generated by Algorithm B converge weakly to the same point x* ∈ F(S)∩Ω*, where x* = lim_{k→∞} P_{F(S)∩Ω*}(x^k).

5. Conclusions

In this paper, we proposed an extension of the extragradient algorithm for solving monotone variational inequalities and established a weak convergence theorem for it. Algorithm B is effective in practice. Meanwhile, we pointed out that the limit point produced by our algorithm is also a fixed point of a given nonexpansive mapping.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 11171362) and the Fundamental Research Funds for the Central Universities (Grant no. CDJXS12101103). The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped improve the paper.
