Volume 2013, Article ID 515902
Research Article
Open Access

The Strong Convergence of Prediction-Correction and Relaxed Hybrid Steepest-Descent Method for Variational Inequalities

Haiwen Xu (Corresponding Author)

School of Computer Science, Civil Aviation Flight University of China, Guanghan 618307, China

School of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
First published: 23 October 2013
Academic Editor: Xu Minghua

Abstract

We establish the strong convergence of the prediction-correction and relaxed hybrid steepest-descent method (PRH method) for variational inequalities under suitable conditions that simplify the proof; the proof differs from those of previous results. More importantly, we design a set of practical numerical experiments. The results demonstrate that the PRH method under some descent directions is slightly more efficient than the modified and relaxed hybrid steepest-descent method, and that the PRH method under the new conditions is more efficient than under the old ones.

1. Introduction

Let H be a real Hilbert space with inner product 〈·, ·〉 and norm ∥·∥, let K be a nonempty closed convex subset of H, and let F : H → H be an operator. Then the variational inequality problem VI (F, K) [1] is to find x* ∈ K such that
〈F(x*), x − x*〉 ≥ 0, ∀x ∈ K. (1)
The literature contains many methods for solving variational inequality problems; see [2–25] and the references therein. From the relationship between the variational inequality problem and a fixed point problem, we obtain
x* = PK[x* − βF(x*)], β > 0, (2)
where the projection operator PK is the projection from H onto K, that is,
PK(x) = argmin{∥x − y∥ : y ∈ K}, ∀x ∈ H. (3)
In this paper, F : H → H is a κ-Lipschitz and η-strongly monotone operator; that is, F satisfies the following conditions:
∥F(x) − F(y)∥ ≤ κ∥x − y∥,  〈F(x) − F(y), x − y〉 ≥ η∥x − y∥², ∀x, y ∈ H. (4)
If β is small enough, then the mapping x ↦ PK[x − βF(x)] is a contraction. Naturally, the convergence of the Picard iterates generated by the right-hand side of (2) follows from Banach’s fixed point theorem. Such a method is called the projection method; for more results about the projection method, see [6, 8, 20] and so forth.
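As an illustration, the Picard iteration generated by the right-hand side of (2) takes only a few lines of MATLAB. This is a minimal sketch, not code from the paper: the handles projK and F and the name projection_method are hypothetical, and beta must lie in the contraction range.

    % Picard iteration x_{k+1} = P_K(x_k - beta*F(x_k)); a minimal sketch.
    % projK, F: assumed function handles for P_K and F (hypothetical names).
    function x = projection_method(projK, F, x0, beta, tol, maxit)
        x = x0;
        for k = 1:maxit
            xnew = projK(x - beta*F(x));   % one projected step
            if norm(xnew - x) <= tol       % successive iterates are close:
                x = xnew; return;          % accept and stop
            end
            x = xnew;
        end
    end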

In fact, the projection PK in contraction methods may not be easy to compute, and a great effort may be required to compute PK in each iteration. In order to reduce the difficulty and complexity of computing the projection PK, Yamada and Deutsch provided a hybrid steepest-descent method for solving VI (F, K) [2, 3]. Subsequently, the convergence of hybrid steepest-descent methods was given by Xu and Kim [4] and Zeng et al. [5]. Naturally, by analyzing several three-step iterative methods through the fixed point equation in each iteration, one obtains the Noor iterations. Recently, Ding et al. [7] proposed a three-step relaxed hybrid steepest-descent method for variational inequalities, and a simple proof of three-step relaxed hybrid steepest-descent methods under different conditions was introduced by Yao et al. [24]. The literature [14, 16] described a modified and relaxed hybrid steepest-descent (MRHSD) method and the convergence of the MRHSD method under different conditions. A set of practical numerical experiments in [16] demonstrated that the MRHSD method has different efficiency under different conditions. Subsequently, the prediction-correction and relaxed hybrid steepest-descent method (PRH method) [15] makes more use of the history information and loses less information than the methods of [7, 14]. The PRH method introduces more descent directions than the MRHSD method [14, 16], and computing these descent directions requires only the history information.

In this paper, we prove the strong convergence of the PRH method under different and suitable restrictions imposed on the parameters (Condition 12), which differ from those of [15]. Moreover, the proof of strong convergence differs from the previous proof in [15] and, in Step 2, is not similar to that in [7]. More importantly, numerical experiments verify that the PRH method under Condition 12 is more efficient than under Condition 10, and that the PRH method under some descent directions is slightly more efficient than the MRHSD method [14, 16]. Furthermore, these descent directions are easy to obtain.

The remainder of the paper is organized as follows. In Section 2, we review several lemmas and preliminaries. We prove the convergence theorem under Condition 12 in Section 3. In Section 4, we report a series of numerical experiments, which demonstrate that the PRH method under Condition 12 is more efficient than under Condition 10. Section 5 concludes the paper.

2. Preliminaries

In order to prove the convergence theorem later, we introduce several lemmas and the main results in the following.

Lemma 1. In a real Hilbert space H, the following inequality holds:

∥x + y∥² ≤ ∥x∥² + 2〈y, x + y〉, ∀x, y ∈ H. (5)

The lemma is a basic consequence of the inner product structure of a Hilbert space.

Lemma 2 (demiclosedness principle). Assume that T is a nonexpansive self-mapping on a nonempty closed convex subset K of a Hilbert space H. If T has a fixed point, then (I − T) is demiclosed. That is, whenever {xn} is a sequence in K converging weakly to some x ∈ K and the sequence {(I − T)xn} converges strongly to some y ∈ H, it follows that (I − T)x = y. Here I is the identity operator of H.

The following lemma is an immediate consequence of the properties of the projection mapping onto a closed convex subset of a Hilbert space.

Lemma 3. Let K be a nonempty closed convex subset of H. For all x, y ∈ H and z ∈ K,

  • (1)

    〈PK(x) − x, z − PK(x)〉 ≥ 0,

  • (2)

    ∥PK(x) − PK(y)∥² ≤ 〈x − y, PK(x) − PK(y)〉.
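When K is a box, PK is a componentwise clipping, and property (1) of Lemma 3 is easy to check numerically. The snippet below is purely illustrative; the box bounds are arbitrary and not from the paper.

    % Check Lemma 3(1) for the box K = [0,1]^n in R^n.
    n = 5;
    projK = @(x) min(max(x, 0), 1);   % componentwise clipping onto K
    x = randn(n, 1);                  % an arbitrary point of H = R^n
    z = rand(n, 1);                   % a point inside K
    px = projK(x);
    disp(dot(px - x, z - px))         % nonnegative, as Lemma 3(1) asserts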

Lemma 4 (see [13]). Let {xn} and {yn} be bounded sequences in a Banach space X and let {ζn} be a sequence in [0,1] with 0 < lim inf n ζn ≤ lim sup n ζn < 1. Suppose xn+1 = (1 − ζn)yn + ζnxn for all integers n ≥ 0 and lim sup n(∥yn+1 − yn∥ − ∥xn+1 − xn∥) ≤ 0. Then lim n∥yn − xn∥ = 0.

Lemma 5 (see [5, 7]). Let {sn} be a sequence of nonnegative real numbers satisfying the inequality

sn+1 ≤ (1 − αn)sn + αnτn + γn, n ≥ 0, (6)

where {αn}, {τn}, and {γn} satisfy the following conditions:
  • (1)

    {αn} ⊂ [0,1] and ∑n αn = ∞,

  • (2)

    lim supn τn ≤ 0,

  • (3)

    γn ≥ 0 for all n and ∑n γn < ∞.

Then limn sn = 0.
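A quick numerical illustration of Lemma 5, with sample sequences chosen to satisfy the three conditions (αn = 1/n, so ∑αn = ∞; τn = 1/n, so lim sup τn ≤ 0; γn = 1/n², summable). The recursion is run with equality, the worst case allowed by (6); the sequences are illustrative choices, not from the paper.

    % Simulate s_{n+1} = (1 - a_n)*s_n + a_n*t_n + g_n and watch s_n -> 0.
    N = 1e5; s = 1;
    for n = 1:N
        a = 1/n; t = 1/n; g = 1/n^2;   % illustrative parameter sequences
        s = (1 - a)*s + a*t + g;
    end
    fprintf('s_N = %.3e\n', s)         % tends to 0 as N grows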

Since F is η-strongly monotone, VI (F, K) has a unique solution x* ∈ K [5]. Assume that T : H → H is a nonexpansive mapping with the fixed point set Fix (T) = K. Obviously Fix (PK) = K.

For any given numbers λ ∈ (0,1) and μ ∈ (0, 2η/κ²), we define the mapping T^λ : H → H by
T^λx := Tx − λμF(Tx), ∀x ∈ H. (7)

Lemma 6 (see [5]). If 0 < μ < 2η/κ² and 0 < λ < 1, then T^λ is a contraction. In fact,

∥T^λx − T^λy∥ ≤ (1 − λτ)∥x − y∥, ∀x, y ∈ H, (8)

where τ = 1 − √(1 − μ(2η − μκ²)).
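For completeness, the computation behind Lemma 6 is standard (see, e.g., [4, 5]): expand the square, apply the bounds (4) to the pair Tx, Ty, and use the nonexpansivity of T:

\[
\begin{aligned}
\|T^{\lambda}x - T^{\lambda}y\|^{2}
  &= \|Tx - Ty\|^{2}
     - 2\lambda\mu\,\langle F(Tx) - F(Ty),\, Tx - Ty\rangle
     + \lambda^{2}\mu^{2}\,\|F(Tx) - F(Ty)\|^{2} \\
  &\le \bigl(1 - 2\lambda\mu\eta + \lambda^{2}\mu^{2}\kappa^{2}\bigr)\,\|Tx - Ty\|^{2}
   \le \bigl(1 - 2\lambda\mu\eta + \lambda^{2}\mu^{2}\kappa^{2}\bigr)\,\|x - y\|^{2}
   \le (1 - \lambda\tau)^{2}\,\|x - y\|^{2},
\end{aligned}
\]

where the last inequality holds for λ ∈ (0,1) because η ≤ κ.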

Lemma 7 (see [7]). Let {αn} be a sequence of nonnegative numbers with lim supn αn < ∞ and let {βn} be a sequence of real numbers with lim supn βn ≤ 0. Then

lim supn αnβn ≤ 0. (9)

3. Convergence Theorem

Before analyzing the convergence theorem, we first review the PRH method and related results [15].

Algorithm 8 (see [15]). Take three fixed numbers t, ρ, γ ∈ (0, 2η/κ²). Starting with an arbitrarily chosen initial point x0 ∈ H, compute the sequences as follows:

  •  Prediction

    • Step  1: ,

    • Step  2: ,

    • Step  3: ,

  • Correction

    • Step  4:

    •   

  • where T : H → H is a nonexpansive mapping.

Let {αn} ⊂ [0,1), {βn} ⊂ [0,1], and {γn} ⊂ [0,1] satisfy the following conditions.

Remark 9. In fact, the PRH method reduces to the MRHSD method when θn ≡ 0 for all n.

Condition 10. One has

()

Theorem 11 (see [15]). Under Condition 10, the sequence {xn} converges strongly to x* ∈ K, and x* is the unique solution of the VI(F, K).

We now obtain the strong convergence theorem of the PRH method for variational inequalities under different assumptions.

Condition 12. One has

()

Theorem 13. Assume that {αn}, {βn}, and {γn} satisfy Condition 12. Then the sequence {xn} converges strongly to x* ∈ K, and x* is the unique solution of the VI(F, K).

Proof. We divide the proof into several steps.

Step 1. The sequences generated by Algorithm 8 are bounded. Since F is η-strongly monotone, VI (F, K) (1) has a unique solution x* ∈ K, and x* = Tx*.

A series of computations yields

()
where ,
()
where .

Moreover, we also obtain

()
where , and substituting (14) into (13) and (14) into (12), we immediately obtain
()
Furthermore,
()
It is easy to obtain the following by induction:
()
where M0 = max {3∥x0 − x*∥, 3(ρ + γ + t)∥F(x*)∥/τ},
()
Hence
()
are also bounded.

Step 2. Consider ∥xn+1 − xn∥ → 0.

Indeed, by a series of computations, we have

()
According to (20) and the prediction step of Algorithm 8, we also obtain
()
Also by the prediction step of Algorithm 8 and (20), (21), we have
()

Let

()

so we get

()
Furthermore,
()
Applying limn βn = 1, limn λn = 0, and limn γn = 1 together with (22) and (25), we get
()
According to Lemma 4, we obtain
()
Furthermore, by limn γn = 1, we also get
()
By (27), (28) and the correction step of Algorithm 8, we immediately conclude that
()
so we get
()

Step 3. Consider ∥xn+1 − Txn∥ → 0.

Indeed, by the prediction step of Algorithm 8, we have

()
According to the assumptions limn βn = 1 and limn λn = 0, we have
()
By (32), we immediately obtain
()

By a series of computations, we can get

()
Hence, by (28), (33), and (34), we also obtain
()
Using Steps 2 and 3, it is easy to obtain the following corollary.

Corollary 14. Consider ∥xn − Txn∥ → 0.

Applying Steps 2 and 3, one gets

()
so it is easy to see that
()

Step 4. Consider .

For some subsequence of {xn}, there exists a weak limit such that

()
According to , we have
()
Since x* is the unique solution of VI (F, K), we can obtain
()
Since , we immediately conclude that
()

Step 5. By Step 1 and Lemma 1, we have

()
where
()
and M0 ≤ M < ∞.

Denote

()
We can rewrite (42) as
()
In fact, this sequence satisfies the conditions of Lemma 5; according to
()
we obtain
()
Moreover, by Step 4, we also obtain
()

Furthermore, by (43), (47), and (48), it is easy to obtain

()
Consequently, applying Lemma 5, we obtain
()

4. Numerical Experiments

The problem considered in this section is
min {(1/2)∥X − C∥F² : X ∈ K}, (51)
where ∥·∥F is the matrix Frobenius norm; that is,
∥A∥F = (∑i=1..n ∑j=1..n aij²)^(1/2). (52)
Note that the matrix Frobenius norm is induced by the inner product
〈A, B〉 = tr(A^T B) = ∑i=1..n ∑j=1..n aij bij. (53)
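These identities are easy to confirm numerically in MATLAB; the snippet below is purely illustrative.

    % The Frobenius norm is induced by <A,B> = tr(A'*B), which equals
    % the sum of entrywise products; verify on random matrices.
    A = randn(4); B = randn(4);
    assert(abs(trace(A'*B) - sum(sum(A.*B))) < 1e-10)
    assert(abs(norm(A, 'fro')^2 - trace(A'*A)) < 1e-10)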
The problems arise from finance and statistics, and we form the test problems in a similar manner as in [9, 21].
Let K = {X ∈ Sn : HL ≤ X ≤ HU}, where
Sn = {X ∈ R^(n×n) : X^T = X}. (54)
Let HL, HU be given n × n symmetric matrices, and let C be asymmetric, which differs from previous approaches [9, 21]; it is to be noted that the extended contraction method (EC method) [9] has much difficulty in computing the examples when C is asymmetric. Here HL ≤ HU element-wise:
(HL)ij ≤ (HU)ij, i, j = 1, …, n. (55)
Then (51) is equivalent to the following variational inequality: find X* ∈ K such that
〈X* − C, X − X*〉 ≥ 0, ∀X ∈ K. (56)
So we get
F(X) = X − C. (57)
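To see the equivalence, note that (51) minimizes f(X) = (1/2)∥X − C∥F² over the convex set K, and the first-order optimality condition over K is exactly (56):

\[
\nabla f(X) = X - C, \qquad
\langle \nabla f(X^{*}),\, X - X^{*} \rangle \ge 0 \quad \forall X \in K.
\]

Moreover, 〈F(X) − F(Y), X − Y〉 = ∥X − Y∥F² and ∥F(X) − F(Y)∥F = ∥X − Y∥F, so F is η-strongly monotone and κ-Lipschitz with η = κ = 1, and the theory of Sections 2 and 3 applies.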
According to Condition 10, we take the following parameter sequences, which are referred to as Condition 10 in the tables:
()
According to Condition 12, we take the following parameter sequences, which are referred to as Condition 12 in the tables:
()
Obviously, we have much difficulty in computing the projection PK[X] for X ∈ Sn. In order to reduce the difficulty and complexity of computing the projection PK, we define TX by
()
where
()
which can be computed without difficulty, and the fixed point set satisfies Fix (T) = K. According to Theorems 11 and 13, the sequences generated by Algorithm 8 under Conditions 10 and 12 are convergent.
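One concrete choice of T with these properties (easy to compute, nonexpansive in the Frobenius norm, and with fixed point set exactly K) is to symmetrize and then clip entrywise to [HL, HU]. The one-line MATLAB sketch below assumes this choice; it is an illustration, not necessarily the paper's exact operator.

    % T(X): symmetrize, then clip entrywise to the box [HL, HU].
    % A matrix is fixed by this composition exactly when it is
    % symmetric and lies inside the box, i.e., Fix(T) = K.
    T = @(X) min(max((X + X')/2, HL), HU);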

The computation begins with ones(n, n) in MATLAB and stops as soon as ∥xk+1 − xk∥ ≤ 10−4 or 10−3. All codes were implemented in MATLAB 7.1 and run on an Acer notebook with a Pentium(R) 1.70 GHz processor and 2 GB of RAM.

We test the problems with n = 100, 200, 300, 400, 500, 1000, and 2000. The test results for the PRH method under different conditions are reported in Tables 1, 2, 3, and 4; CPU times are in seconds. It is to be noted that the results of the extended contraction method are given only when the iteration count (It) is less than or equal to 100.

Table 1. Numerical results for the PRH method and the EC method.
Asymmetric matrix, c0 = 0.1, θn = 0.8, tolerance = 10−4.
        Condition 10       Condition 12       EC method
n       It     cpu         It     cpu         It    cpu         tolerance
100     201    8.34        130    5.35        100   14.46       8.289e+000
200     333    75.44       208    47.14       100   94.30       1.010e+002
300     443    318.02      272    174.70      100   302.29      4.899e+002
400     543    789.16      330    446.00      100   686.83      9.628e+002
500     647    1747.70     388    972.18      100   1287.36     1.756e+003
1000    1082   19884.30    634    11502.13    100   9220.50     9.826e+003
2000    >2000  >150000     1052   128504.67   100   >74640.41   >5.597e+003
Table 2. Numerical results for tolerance 10−4.
Asymmetric matrix, c0 = 0.1, θn = 0.8.
        Condition 10       Condition 12
n       It     cpu         It     cpu
100     204    8.78        130    5.45
200     330    76.08       208    47.72
300     445    323.20      272    175.89
400     548    867.56      330    450.59
500     663    1916.90     388    994.18
Table 3. Numerical results for tolerance 10−3.
Asymmetric matrix, c0 = 0.1, θn = 0.8.
        Condition 10       Condition 12
n       It     cpu         It     cpu
1000    193    3893.63     126    2280.74
2000    318    42981.02    200    28737.65
Table 4. Numerical results for tolerance 10−4.
Asymmetric matrix, γ = 0.1, ρ = 0.3, t = 0.1.
        θn = 0        θn = 0.2      θn = 0.4      θn = 0.6      θn = 0.8
n       It    cpu     It    cpu     It    cpu     It    cpu     It    cpu
100     132   5.52    134   5.60    128   5.50    134   5.67    132   5.54
200     210   48.04   206   47.22   208   48.04   204   47.15   214   48.58
300     274   177.49  268   176.08  276   178.80  274   177.68  276   178.84
400     336   468.28  328   445.93  336   468.20  334   454.24  330   453.79
500     392   977.79  394   1012.57 378   948.44  386   953.91  390   971.10
Test Example 1. In this example we generate the data in a similar manner as in [9]. The diagonal entries of C are randomly generated in the interval (0, 2), and the off-diagonal entries of C are randomly generated in the interval (−1, 1) (Algorithm 1):
()
When n ≥ 1000 and the tolerance is 10−4, the computation time of the proposed method is too long, so for n ≥ 1000 the PRH method reports approximate solutions with tolerance 10−3 in the following. The extended contraction method (EC method) has much difficulty in computing the examples when C is asymmetric. Furthermore, by introducing an auxiliary variable, certain projection methods or the relaxed-PPA method [10] can also be applied to these tests.
    Algorithm 1

    Matlab code:

        % Generate the test matrix C and the bounds HU, HL.
        % t is the seed of the linear congruential generator; its
        % initial value is an assumption here (e.g., t = 1).
        t = 1;
        C = zeros(n, n); HU = ones(n, n)*0.1; HL = -HU;
        for i = 1 : n
            for j = 1 : n
                t = mod(t*42108 + 13846, 46273);   % pseudo-random integer
                C(i, j) = t*2/46273 - 1;           % entries in (-1, 1)
            end
        end
        for i = 1 : n
            C(i, i) = abs(C(i, i))*2;              % diagonal entries in (0, 2)
            HU(i, i) = 1; HL(i, i) = 1;            % fix the diagonal bounds to 1
        end

Test Example 2. We form the data of the second problem similarly to the first test example. The diagonal entries of C are randomly generated in the interval (0, 2), and the off-diagonal entries of C are generated from a uniform distribution on the same interval (Algorithm 2):
()
    Algorithm 2

    Matlab code:

        % Generate the test matrix C and the bounds HU, HL.
        HU = ones(n, n)*0.1; HL = -HU;
        C = -1 + 2*rand(n);              % entries uniform in (-1, 1)
        for i = 1 : n
            C(i, i) = abs(C(i, i))*2;    % diagonal entries in (0, 2)
            HU(i, i) = 1; HL(i, i) = 1;  % fix the diagonal bounds to 1
        end

From Tables 1 to 3, we find that, in both iteration numbers and CPU time, the PRH method under Condition 12 is more efficient than under Condition 10. Table 4 shows that the PRH method under some descent directions is slightly more efficient than the MRHSD method [14, 16], and these descent directions are easy to obtain. Furthermore, Tables 2 and 4 show that a careful choice of γ, ρ, and t is important.

5. Conclusions

We have proved the strong convergence of the PRH method under Condition 12, which differs from Condition 10. The result can be considered an improvement and refinement of the previous results [14]. More importantly, numerical experiments demonstrate that the PRH method under Condition 12 is more efficient than under Condition 10, and that the PRH method under some descent directions is slightly more efficient than the MRHSD method. How to select the parameters of the PRH method for solving variational inequalities is worthy of further investigation.

Acknowledgments

This research was supported by the National Science and Technology Support Program (Grant no. 2011BAH24B06), the Joint Fund of the National Natural Science Foundation of China and the Civil Aviation Administration of China (Grant no. U1233105), and the Science Foundation of the Civil Aviation Flight University of China (Grant no. J2010-45).
