The Strong Convergence of Prediction-Correction and Relaxed Hybrid Steepest-Descent Method for Variational Inequalities
Abstract
We establish the strong convergence of the prediction-correction and relaxed hybrid steepest-descent method (PRH method) for variational inequalities under suitable conditions that simplify the proof; the proof also differs from those of previous results. More importantly, we design a set of practical numerical experiments. The results demonstrate that the PRH method under some descent directions is slightly more efficient than the modified and relaxed hybrid steepest-descent (MRHSD) method, and that the PRH method under the new conditions is more efficient than under the old ones.
1. Introduction
In fact, the projection PK in contraction methods may not be easy to compute, and considerable effort is required to compute PK in each iteration. In order to reduce the difficulty and complexity of computing the projection PK, Yamada and Deutsch provided a hybrid steepest-descent method for solving VI(F, K) [2, 3]. Subsequently, the convergence of hybrid steepest-descent methods was established by Xu and Kim [4] and Zeng et al. [5]. Naturally, by analyzing several three-step iterative schemes through the fixed-point equation, one obtains the Noor iterations. Recently, Ding et al. [7] proposed a three-step relaxed hybrid steepest-descent method for variational inequalities, and a simple proof for three-step relaxed hybrid steepest-descent methods under different conditions was introduced by Yao et al. [24]. The literature [14, 16] described a modified and relaxed hybrid steepest-descent (MRHSD) method and the convergence of the MRHSD method under different conditions. A set of practical numerical experiments in [16] demonstrated that the MRHSD method has different efficiency under different conditions. Subsequently, the prediction-correction and relaxed hybrid steepest-descent method (PRH method) [15] was proposed; it makes fuller use of history information and loses less of it than the methods of [7, 14]. The PRH method introduces more descent directions than the MRHSD method [14, 16], and computing these descent directions requires only history information.
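For reference, VI(F, K) denotes the classical variational inequality problem: find x* ∈ K such that

〈F(x*), x − x*〉 ≥ 0 for all x ∈ K,

where K is a nonempty closed convex subset of a real Hilbert space H and F : H → H is a given operator.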
In this paper, we prove the strong convergence of the PRH method under different and suitable restrictions imposed on the parameters (Condition 12), which differ from those of [15]. Moreover, the proof of strong convergence differs from the previous proof in [15]; in particular, Step 2 is not similar to that in [7]. More importantly, numerical experiments verify that the PRH method under Condition 12 is more efficient than under Condition 10, and that the PRH method under some descent directions is slightly more efficient than the MRHSD method [14, 16]. Furthermore, these descent directions are easy to obtain.
The remainder of the paper is organized as follows. In Section 2, we review several lemmas and preliminaries. We prove the convergence theorem under Condition 12 in Section 3. In Section 4, we present a series of numerical experiments, which demonstrate that the PRH method under Condition 12 is more efficient than under Condition 10. Section 5 concludes the paper.
2. Preliminaries
In order to prove the convergence theorem later, we introduce several lemmas and the main supporting results below.
Lemma 1. In a real Hilbert space H, there holds the inequality

∥x + y∥² ≤ ∥x∥² + 2〈y, x + y〉 for all x, y ∈ H.

The lemma is a basic consequence of the inner product structure of a Hilbert space.
Lemma 2 (demiclosedness principle). Assume that T is a nonexpansive self-mapping on a nonempty closed convex subset K of a Hilbert space H. If T has a fixed point, then (I − T) is demiclosed. That is, whenever xn is a sequence in K weakly converging to some x ∈ K and the sequence (I − T)xn strongly converges to some y ∈ H, it follows that (I − T)x = y. Here I is the identity operator of H.
The following lemma is an immediate result of a projection mapping onto a closed convex subset of a Hilbert space.
Lemma 3. Let K be a nonempty closed convex subset of H. Then, for all x, y ∈ H and z ∈ K,
- (1)
〈PK(x) − x, z − PK(x)〉 ≥ 0,
- (2)
∥PK(x) − PK(y)∥² ≤ ∥x − y∥² − ∥(x − PK(x)) − (y − PK(y))∥².
Lemma 4 (see [13]). Let {xn} and {yn} be bounded sequences in a Banach space X and let {ζn} be a sequence in [0, 1] with 0 < lim inf n→∞ζn ≤ lim sup n→∞ζn < 1. Suppose that xn+1 = (1 − ζn)yn + ζnxn for all integers n ≥ 0 and lim sup n→∞(∥yn+1 − yn∥ − ∥xn+1 − xn∥) ≤ 0. Then lim n→∞∥yn − xn∥ = 0.
Lemma 5 (see [5, 7]). Let {sn} be a sequence of nonnegative real numbers satisfying the inequality

sn+1 ≤ (1 − λn)sn + λnτn + σn, n ≥ 0,

where {λn} ⊂ [0, 1], {τn}, and {σn} satisfy the conditions
- (1)
Σ n=0∞ λn = ∞,
- (2)
lim sup n→∞ τn ≤ 0,
- (3)
Σ n=0∞ |σn| < ∞.
Then lim n→∞ sn = 0.
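As a quick numerical illustration of Lemma 5 (not part of any proof), the following MATLAB sketch iterates the recursion with the sample choices λn = 1/(n + 1), τn = 1/(n + 1), and σn = 1/(n + 1)², all of which are our own illustrative assumptions; the iterate sn visibly tends to zero.

Matlab code:
% Illustration of Lemma 5:
%   s_{n+1} <= (1 - lambda_n) s_n + lambda_n tau_n + sigma_n,
% with sum(lambda_n) = inf, limsup tau_n <= 0, sum|sigma_n| < inf.
N = 10000;
s = 1;                          % arbitrary nonnegative starting value
for n = 1 : N
    lambda = 1/(n + 1);         % sum diverges
    tau    = 1/(n + 1);         % tends to 0
    sigma  = 1/(n + 1)^2;       % absolutely summable
    s = (1 - lambda)*s + lambda*tau + sigma;
end
fprintf('s_N = %.3e after %d iterations\n', s, N);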
Since F is η-strongly monotone and κ-Lipschitzian, VI(F, K) has a unique solution x* ∈ K [5]. Assume that T : H → H is a nonexpansive mapping with fixed-point set Fix(T) = K. Obviously, Fix(PK) = K.
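Here, recall the standard meanings of the constants η and κ: F is η-strongly monotone and κ-Lipschitzian on H, that is, for all x, y ∈ H,

〈F(x) − F(y), x − y〉 ≥ η∥x − y∥², ∥F(x) − F(y)∥ ≤ κ∥x − y∥.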
Lemma 6 (see [5]). If 0 < μ < 2η/κ² and 0 < λ < 1, then the mapping T^λ : H → H defined by T^λx := Tx − λμF(Tx) is a contraction. In fact,

∥T^λx − T^λy∥ ≤ (1 − λτ)∥x − y∥ for all x, y ∈ H,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].
Lemma 7 (see [7]). Let {αn} be a sequence of nonnegative numbers with lim sup n→∞αn < ∞ and let {βn} be a sequence of real numbers with lim sup n→∞βn ≤ 0. Then lim sup n→∞ αnβn ≤ 0.
3. Convergence Theorem
Before analyzing the convergence theorem, we first review the PRH method and related results [15].
Algorithm 8 (see [15]). Take three fixed numbers t, ρ, γ ∈ (0, 2η/κ²). Starting with an arbitrarily chosen initial point x0 ∈ H, compute the sequences as follows.
-
Prediction
-
Step 1: compute yn from xn by a relaxed hybrid steepest-descent step with step size t;
-
Step 2: compute zn from yn by a relaxed hybrid steepest-descent step with step size ρ;
-
Step 3: compute the predictor from zn by a relaxed hybrid steepest-descent step with step size γ;
-
Correction
-
Step 4: compute xn+1 by combining the predictor with the descent directions generated in Steps 1-3, weighted by θn (see [15] for the explicit formulas),
where T : H → H is a nonexpansive mapping.
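For intuition, the following MATLAB sketch performs one basic relaxed hybrid steepest-descent update Tx − λμF(Tx) from Lemma 6, the building block that the prediction steps compose. The operator F, the mapping T, and all parameter values below are illustrative assumptions, not the exact PRH iteration of [15].

Matlab code:
% One relaxed hybrid steepest-descent step (cf. Lemma 6):
%   x_new = T(x) - lambda*mu*F(T(x)),  with 0 < mu < 2*eta/kappa^2.
% F, T, and the parameters are placeholders for illustration only.
n  = 5;
M  = randn(n);
A  = M'*M + eye(n);             % positive definite => F is strongly monotone
F  = @(x) A*x;                  % eta = lambda_min(A), kappa = norm(A)
T  = @(x) min(max(x, -1), 1);   % projection onto [-1,1]^n: nonexpansive
eta = min(eig(A)); kappa = norm(A);
mu = eta/kappa^2;               % admissible: 0 < mu < 2*eta/kappa^2
lambda = 0.1;
x  = randn(n, 1);
Tx = T(x);
x_new = Tx - lambda*mu*F(Tx);   % one hybrid steepest-descent update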
Let {αn} ⊂ [0, 1), {βn} ⊂ [0, 1], and {γn} ⊂ [0, 1] satisfy the following conditions.
Remark 9. In fact, the PRH method reduces to the MRHSD method when θn ≡ 0 for all n.
Condition 10 (see [15]). One has
Theorem 11 (see [15]). Under Condition 10, the sequence {xn} converges strongly to x* ∈ K, and x* is the unique solution of VI(F, K).
We obtain the strong convergence theorem of the PRH method for variational inequalities under different assumptions.
Condition 12. One has
Theorem 13. Assume that {αn}, {βn}, and {γn} satisfy Condition 12. Then the sequence {xn} converges strongly to x* ∈ K, and x* is the unique solution of VI(F, K).
Proof. We divide the proof into several steps.
Step 1. The sequences {xn}, {yn}, and {zn} are bounded. Since F is η-strongly monotone, VI(F, K) (1) has a unique solution x* ∈ K; moreover, since Fix(T) = K, we have Tx* = x* and PK(x*) = x*.
A series of computations yields
Moreover, we also obtain
Step 2. Consider ∥xn+1 − xn∥ → 0.
Indeed, by a series of computations, we have
Let
so we get
Step 3. Consider ∥xn+1 − Txn∥ → 0.
Indeed, by the prediction step of Algorithm 8, we have
By a series of computations, we can get
Corollary 14. Consider ∥xn − Txn∥ → 0.
Indeed, by the triangle inequality, ∥xn − Txn∥ ≤ ∥xn − xn+1∥ + ∥xn+1 − Txn∥, so applying Steps 2 and 3 one gets ∥xn − Txn∥ → 0.
Step 4. Consider lim sup n→∞〈−F(x*), xn − x*〉 ≤ 0.
Take a subsequence of {xn} along which the limit superior is attained; since {xn} is bounded, we may assume that this subsequence converges weakly to some point of H. By Corollary 14 and Lemma 2 (the demiclosedness principle), the weak limit belongs to Fix(T) = K, and therefore the variational inequality yields the claim.
Step 5. By Step 1 and Lemma 1, we have
Denote
Furthermore, by (43), (47), and (48), it is easy to verify the hypotheses of Lemma 5; applying Lemma 5 gives ∥xn − x*∥ → 0, that is, xn → x* strongly. This completes the proof.
4. Numerical Experiments
The computation begins with ones(n, n) in MATLAB and stops as soon as ∥xk+1 − xk∥ ≤ 10−3 or 10−2. All codes were implemented in MATLAB 7.1 and run on an Acer notebook with a Pentium(R) 1.70 GHz processor and 2 GB of RAM.
We test the problems with n = 100, 200, 300, 400, 500, 1000, and 2000. The results of the PRH method under the different conditions are reported in Tables 1, 2, 3, and 4; CPU time is in seconds. Note that the results of the extended contraction (EC) method are reported only when the iteration count (It) is at most 100.
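The experiment driver can be sketched as follows; update_step is a hypothetical placeholder for one full PRH sweep (here a simple contraction so that the snippet runs), and only the starting point ones(n, n) and the stopping rule come from the description above.

Matlab code:
% Schematic experiment driver. update_step stands in for one full
% PRH iteration of Algorithm 8 and is a placeholder, not the method.
n = 100;
update_step = @(x) 0.5*x;          % placeholder: any contractive update
x   = ones(n, n);                  % starting point used in the experiments
tol = 1e-3;                        % stopping tolerance, per the text
for k = 1 : 20000
    x_new = update_step(x);
    if norm(x_new - x, 'fro') <= tol   % stop when ||x_{k+1} - x_k|| <= tol
        break
    end
    x = x_new;
end
fprintf('stopped after %d iterations\n', k);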
Table 1: Asymmetric matrix, c0 = 0.1, θn = 0.8, tolerance = 10−4.

| n | Condition 10 It | Condition 10 cpu | Condition 12 It | Condition 12 cpu | EC It | EC cpu | EC tolerance |
|---|---|---|---|---|---|---|---|
| 100 | 201 | 8.34 | 130 | 5.35 | 100 | 14.46 | 8.289e+000 |
| 200 | 333 | 75.44 | 208 | 47.14 | 100 | 94.30 | 1.010e+002 |
| 300 | 443 | 318.02 | 272 | 174.70 | 100 | 302.29 | 4.899e+002 |
| 400 | 543 | 789.16 | 330 | 446.00 | 100 | 686.83 | 9.628e+002 |
| 500 | 647 | 1747.70 | 388 | 972.18 | 100 | 1287.36 | 1.756e+003 |
| 1000 | 1082 | 19884.30 | 634 | 11502.13 | 100 | 9220.50 | 9.826e+003 |
| 2000 | >2000 | >150000 | 1052 | 128504.67 | 100 | >74640.41 | >5.597e+003 |
Table 2: Asymmetric matrix, γ = 0.1, ρ = 0.3, t = 0.1.

| n | θn = 0 It | θn = 0 cpu | θn = 0.2 It | θn = 0.2 cpu | θn = 0.4 It | θn = 0.4 cpu | θn = 0.6 It | θn = 0.6 cpu | θn = 0.8 It | θn = 0.8 cpu |
|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 132 | 5.52 | 134 | 5.60 | 128 | 5.50 | 134 | 5.67 | 132 | 5.54 |
| 200 | 210 | 48.04 | 206 | 47.22 | 208 | 48.04 | 204 | 47.15 | 214 | 48.58 |
| 300 | 274 | 177.49 | 268 | 176.08 | 276 | 178.80 | 274 | 177.68 | 276 | 178.84 |
| 400 | 336 | 468.28 | 328 | 445.93 | 336 | 468.20 | 334 | 454.24 | 330 | 453.79 |
| 500 | 392 | 977.79 | 394 | 1012.57 | 378 | 948.44 | 386 | 953.91 | 390 | 971.10 |
Algorithm 1
Matlab code:
% Generate the n-by-n asymmetric test matrix C with a linear
% congruential generator. Assumes n is already set; t serves as the
% generator seed (initial value assumed here, e.g., t = 0).
t = 0;
C = zeros(n, n); HU = ones(n, n)*0.1; HL = -HU;
for i = 1 : n
    for j = 1 : n
        t = mod(t*42108 + 13846, 46273);
        C(i, j) = t*2/46273 - 1;   % entry scaled into (-1, 1)
    end;
end;
for i = 1 : n
    C(i, i) = abs(C(i, i))*2;      % strengthen the diagonal
    HU(i, i) = 1; HL(i, i) = 1;
end;
Algorithm 2
Matlab code:
% Generate the n-by-n asymmetric test matrix C with uniform random
% entries in (-1, 1). Assumes n is already set.
C = zeros(n, n); HU = ones(n, n)*0.1; HL = -HU;
for i = 1 : n
    for j = 1 : n
        C(i, j) = -1 + 2*rand;     % one uniform entry in (-1, 1)
    end;
end;
for i = 1 : n
    C(i, i) = abs(C(i, i))*2;      % strengthen the diagonal
    HU(i, i) = 1; HL(i, i) = 1;
end;
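With its seed fixed, Algorithm 1 produces the same test matrix on every run, whereas Algorithm 2 draws fresh uniform entries from rand each time.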
From Tables 1 to 3, we find that the PRH method under Condition 12 requires fewer iterations and less CPU time than under Condition 10. The results in Table 4 show that the PRH method under some descent directions is slightly more efficient than the MRHSD method [14, 16], and these descent directions are easy to obtain. Furthermore, Tables 2 and 4 indicate that it is important to choose γ, ρ, and t appropriately.
5. Conclusions
We have proved the strong convergence of the PRH method under Condition 12, which differs from Condition 10. The result can be considered an improvement and refinement of the previous results [14]. More importantly, numerical experiments demonstrated that the PRH method under Condition 12 is more efficient than under Condition 10, and that the PRH method under some descent directions is slightly more efficient than the MRHSD method. How to select the parameters of the PRH method for solving variational inequalities is worthy of further investigation.
Acknowledgments
This research was supported by National Science and Technology Support Program (Grant no. 2011BAH24B06), Joint Fund of National Natural Science Foundation of China and Civil Aviation Administration of China (Grant no. U1233105), and Science Foundation of the Civil Aviation Flight University of China (Grant no. J2010-45).