An Extension of the Subgradient Method for Variational Inequality Problems in Hilbert Space
Abstract
An extension of the subgradient method for solving variational inequality problems is presented. A new iterative process, which involves both the fixed point of a nonexpansive mapping and the current iterate, is constructed. A weak convergence theorem is obtained for the three sequences generated by this iterative process under mild conditions.
1. Introduction
Motivated and inspired by the extragradient methods in [2, 3], in this paper we study further extragradient methods and analyze the weak convergence of the three sequences generated by our method.
The rest of this paper is organized as follows. In Section 2, we give some preliminaries and basic results. In Section 3, we present an extragradient algorithm and then discuss the weak convergence of the sequences generated by the algorithm. In Section 4, we modify the extragradient algorithm and give its convergence analysis.
2. Preliminaries and Basic Results
Let H be a real Hilbert space with ⟨x, y⟩ denoting the inner product of the vectors x, y. Weak and strong convergence of a sequence {xk} to a point x are denoted by xk ⇀ x and xk → x, respectively. The identity mapping from Ω to itself is denoted by I.
Lemma 1. Let PΩ denote the projection onto Ω. For any x ∈ H and y ∈ Ω, one has
(i) ⟨x − PΩ(x), y − PΩ(x)⟩ ≤ 0;
(ii) ∥PΩ(x) − y∥² ≤ ∥x − y∥² − ∥x − PΩ(x)∥².
Lemma 2. For any sequence {xk} that converges weakly to x (xk ⇀ x), one has
liminf k→∞ ∥xk − x∥ < liminf k→∞ ∥xk − y∥ for every y ≠ x.
The next lemma can be found in [10].
Lemma 3 (Demiclosedness principle). Let Ω be a closed, convex subset of a real Hilbert space H, and let S : Ω → H be a nonexpansive mapping. Then I − S is demiclosed at y ∈ H; that is, for any sequence {xk} ⊂ Ω such that xk ⇀ x and (I − S)xk → y, one has (I − S)x = y.
3. An Algorithm and Its Convergence Analysis
In this section, we state our algorithm and then discuss its convergence. First, we need the following definition.
Definition 4. For a vector x ∈ Ω, the projected residual function is defined as
r(x) := x − PΩ(x − f(x)).
Algorithm A. Step 0. Take δ ∈ (0,1), γ ∈ (0,1), x0 ∈ Ω, and set k = 0.
Step 1. For the current iterate xk ∈ Ω, compute
Step 2. If ∥r(xk+1)∥ = 0, stop; otherwise, set k := k + 1 and go to Step 1.
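As an illustration of how such a scheme operates, the following Python sketch implements one standard reading in the spirit of the extragradient methods of [2, 3]: an Armijo-type line search ηk = γ^nk satisfying ηk∥f(xk) − f(yk)∥ ≤ δ∥xk − yk∥ (the role inequality (23) plays in the proof of Theorem 8), an extragradient step zk = PΩ(xk − ηkf(yk)), and an averaged step xk+1 = αkxk + (1 − αk)S(zk) with the nonexpansive mapping S. All names and the precise update rule here are assumptions, not Algorithm A verbatim.

```python
import numpy as np

def algorithm_a_sketch(f, project, S, x0, delta=0.5, gamma=0.5,
                       alpha=0.5, tol=1e-8, max_iter=1000):
    """Hedged sketch of an Armijo-type extragradient step combined with
    a nonexpansive mapping S; the exact Step 1 of Algorithm A may differ."""
    x = x0
    for _ in range(max_iter):
        # Line search: largest eta = gamma**n with
        # eta * ||f(x) - f(y)|| <= delta * ||x - y||  (cf. inequality (23))
        eta = 1.0
        y = project(x - eta * f(x))
        while eta * np.linalg.norm(f(x) - f(y)) > delta * np.linalg.norm(x - y):
            eta *= gamma
            y = project(x - eta * f(x))
        # Extragradient step, then averaging with the nonexpansive mapping S
        z = project(x - eta * f(y))
        x = alpha * x + (1 - alpha) * S(z)
        # Step 2: stop once the projected residual (Definition 4) vanishes
        if np.linalg.norm(x - project(x - f(x))) <= tol:
            break
    return x
```

For instance, with Ω = [0, 1]^n one can take project = lambda v: np.clip(v, 0.0, 1.0) and S = lambda v: v (the identity mapping, which is trivially nonexpansive).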
Remark 5. The iterate tk is well defined in Algorithm A according to [3] and can be interpreted as follows: if (23) is well defined, then tk can be derived by the following iterative scheme: compute
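One standard construction of this type (an assumption on our part, not necessarily the scheme of [3]) obtains tk as the fixed point of the map t ↦ PΩ(xk − ηkf(t)), which is a contraction whenever ηkL < 1 because PΩ is nonexpansive and f is L-Lipschitz:

```python
import numpy as np

def compute_t(x, eta, f, project, tol=1e-10, max_iter=100):
    """Hypothetical fixed-point scheme for t_k: iterate the map
    t -> P_Omega(x - eta * f(t)), a contraction when eta * L < 1."""
    t = x
    for _ in range(max_iter):
        t_new = project(x - eta * f(t))
        if np.linalg.norm(t_new - t) <= tol:
            return t_new
        t = t_new
    return t
```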
Now we investigate the weak convergence property of our algorithm. First we recall the following result, which was proposed by Schu [17].
Lemma 6. Let H be a real Hilbert space, let {αk} be a sequence of real numbers with 0 < a ≤ αk ≤ b < 1 for all k, and let {vk} and {wk} be sequences in H satisfying limsup k→∞ ∥vk∥ ≤ c, limsup k→∞ ∥wk∥ ≤ c, and lim k→∞ ∥αkvk + (1 − αk)wk∥ = c for some c ≥ 0. Then lim k→∞ ∥vk − wk∥ = 0.
The following theorem is crucial in proving the boundedness of the sequence {xk}.
Theorem 7. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, let F(S)∩Ω* ≠ ∅, and let x* ∈ Ω*. Then for any sequence {xk} generated by Algorithm A, one has
Proof. Letting y = x* in Lemma 1(ii), it follows that
Theorem 8. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then for any sequence {xk} generated by Algorithm A, one has
Proof. Using (22), we have
That means {xk} is bounded, and so is {zk}. Since f is Lipschitz continuous, it is bounded on bounded sets; namely, there exists a constant M > 0 such that ∥f(zk)∥ ≤ M for all k. We further have
If lim k→∞ ∥xk − yk∥ = 0, we get the conclusion.
If lim k→∞ ηk = 0, we can deduce that the line-search inequality (23) in Algorithm A is not satisfied for nk − 1; that is, there exists k0 such that for all k ≥ k0,
On the other hand, using the Cauchy–Schwarz inequality again, we have
Theorem 9. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then the sequences {xk}, {yk}, and {zk} generated by Algorithm A converge weakly to the same point x* ∈ F(S)∩Ω*.
Proof. By Theorem 8, we know that {xk} is bounded, which implies that there exists a subsequence {xki} of {xk} that converges weakly to some point x*; we show that x* ∈ F(S)∩Ω*.
First, we show that x* ∈ F(S).
Let x′ ∈ F(S)∩Ω*. Since S is a nonexpansive mapping, from (29) we have
Second, we show that x* ∈ Ω*.
Since xki ⇀ x*, using Theorem 8 we deduce that yki ⇀ x* and zki ⇀ x*.
Define the operator T by Tv = f(v) + NΩ(v), where NΩ(v) is the normal cone to Ω at v; then T is maximal monotone, and 0 ∈ T(v) if and only if v ∈ Ω*. Letting (v, u) ∈ G(T), we have
Since f is continuous, by (37) we have
Finally, we show that such an x* is unique.
Let {xkj} be another subsequence of {xk} such that xkj ⇀ x̄. Then we conclude that x̄ ∈ F(S)∩Ω*. Suppose x̄ ≠ x*; by Lemma 2 we have
4. Further Study
Algorithm B. Step 0. Take δ ∈ (0,1), γ ∈ (0,1), η−1 > 0, θ > 1, x0 ∈ Ω, and set k = 0.
Step 1. For the current iterate xk ∈ Ω, compute
Step 2. If ∥r(xk, τk)∥ = 0, stop; otherwise, set k := k + 1 and go to Step 1.
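Compared with Algorithm A, Step 0 of Algorithm B introduces the extra parameters η−1 > 0 and θ > 1, which points to a self-adaptive choice of the trial step size. The following hedged sketch shows one plausible such rule (an assumption, not Algorithm B verbatim): start each line search from the enlarged value τk = θηk−1 and backtrack by γ; termination of the backtracking follows from the Lipschitz continuity of f.

```python
import numpy as np

def adaptive_line_search(x, eta_prev, f, project,
                         theta=2.0, gamma=0.5, delta=0.5):
    """Hypothetical self-adaptive line search for Algorithm B: start from
    tau = theta * eta_prev (instead of a fixed constant) and backtrack
    until tau * ||f(x) - f(y)|| <= delta * ||x - y||."""
    tau = theta * eta_prev              # enlarged trial step (assumption)
    y = project(x - tau * f(x))
    while tau * np.linalg.norm(f(x) - f(y)) > delta * np.linalg.norm(x - y):
        tau *= gamma                    # backtrack; stops once tau <= delta/L
        y = project(x - tau * f(x))
    return tau, y
```

Reusing the previous step size spares most of the projection evaluations that restarting the search from a fixed constant would repeat at every iteration.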
In the rest of this section, we discuss the weak convergence properties of Algorithm B.
Lemma 10. For any τ > 0, one has
x ∈ Ω* if and only if r(x, τ) = 0, where r(x, τ) := x − PΩ(x − τf(x)).
Therefore, solving the variational inequality is equivalent to finding a zero point of the projected residual function r(·, τ). Meanwhile, we know that r(x, τ) is a continuous function of x, since the projection mapping is nonexpansive.
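As a concrete illustration of this equivalence (not taken from the paper), consider the monotone mapping f(x) = x − b on the box Ω = [0, 1]²: the unique solution of the variational inequality is PΩ(b), and the projected residual vanishes exactly there.

```python
import numpy as np

b = np.array([2.0, -0.5])
project = lambda v: np.clip(v, 0.0, 1.0)   # projection onto the box [0,1]^2
f = lambda x: x - b                        # monotone (in fact strongly so)

def r(x, tau):
    """Projected residual r(x, tau) = x - P_Omega(x - tau * f(x))."""
    return x - project(x - tau * f(x))

x_star = project(b)                          # solution of VI(f, Omega): [1., 0.]
print(np.linalg.norm(r(x_star, 0.7)))        # 0.0 at the solution
print(np.linalg.norm(r(np.zeros(2), 0.7)))   # 1.0 away from it
```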
Lemma 11. For any x ∈ H and τ1 ≥ τ2 > 0, it holds that
∥r(x, τ1)∥ ≥ ∥r(x, τ2)∥, ∥r(x, τ1)∥/τ1 ≤ ∥r(x, τ2)∥/τ2.
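A quick numerical sanity check of these two inequalities on the box example above (again an illustration; the displayed inequalities are our reconstruction of the standard statement):

```python
import numpy as np

b = np.array([2.0, -0.5])
project = lambda v: np.clip(v, 0.0, 1.0)
r = lambda x, tau: x - project(x - tau * (x - b))   # residual for f(x) = x - b

x = np.array([0.3, 0.9])
tau1, tau2 = 1.5, 0.4                               # tau1 >= tau2 > 0
n1, n2 = np.linalg.norm(r(x, tau1)), np.linalg.norm(r(x, tau2))
assert n1 >= n2                  # ||r(x, tau)|| is nondecreasing in tau
assert n1 / tau1 <= n2 / tau2    # ||r(x, tau)|| / tau is nonincreasing in tau
```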
Theorem 12. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then for any sequence {xk} generated by Algorithm B, one has
Proof. The proof is similar to that of Theorem 7, so we omit it.
Theorem 13. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then for the sequences generated by Algorithm B, one has
Proof. The proof is similar to that of Theorem 8. The only difference is that (44) is replaced by
Theorem 14. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S)∩Ω* ≠ ∅. Then the sequences {xk}, {yk}, and {zk} generated by Algorithm B converge weakly to the same point x* ∈ F(S)∩Ω*.
5. Conclusions
In this paper, we proposed an extension of the extragradient algorithm for solving monotone variational inequalities and established its weak convergence theorem. Algorithm B is effective in practice. Meanwhile, we pointed out that the weak limit of the sequences generated by our algorithms is also a fixed point of the given nonexpansive mapping.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (Grant no. 11171362) and the Fundamental Research Funds for the Central Universities (Grant no. CDJXS12101103). The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped improve the paper.