Volume 2012, Issue 1 274636
Research Article
Open Access

Improving Results on Convergence of AOR Method

Guangbin Wang (Corresponding Author), Ting Wang, and Yanli Du

Department of Mathematics, Qingdao University of Science and Technology, Qingdao 266061, China
First published: 30 October 2012
Academic Editor: Carlos J. S. Alves

Abstract

We present some sufficient conditions on convergence of the AOR method for solving Ax = b with A a strictly doubly α diagonally dominant matrix. Moreover, we give two numerical examples to show the advantage of the new results.

1. Introduction

Let C^{n×n} denote the set of all n × n complex matrices and C^n the set of all complex vectors of length n. For A = (aij) ∈ C^{n×n}, we denote by ρ(A) the spectral radius of A.

Let us consider the linear system Ax = b, where b ∈ C^n is a given vector and x ∈ C^n is an unknown vector. Let A = D − T − S, where D is the diagonal matrix of A and −T and −S are the strictly lower and strictly upper triangular parts of A, respectively, and denote
L = D^{-1}T,  U = D^{-1}S,
where det (D) ≠ 0.
Then the AOR method [1] can be written as
x^{(k+1)} = Mσ,ω x^{(k)} + ω(I − σL)^{-1}D^{-1}b,  k = 0, 1, 2, …,
where
Mσ,ω = (I − σL)^{-1}[(1 − ω)I + (ω − σ)L + ωU].
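The iteration above is easy to prototype. The following is a minimal sketch (not the authors' code), using NumPy and an arbitrary strictly diagonally dominant test matrix:

```python
import numpy as np

def aor_iteration_matrix(A, sigma, omega):
    """Build M_{sigma,omega} = (I - sigma L)^{-1}[(1-omega)I + (omega-sigma)L + omega U]
    from the splitting A = D - T - S, with L = D^{-1}T and U = D^{-1}S."""
    D = np.diag(np.diag(A))
    T = -np.tril(A, -1)          # strictly lower triangular part of A is -T
    S = -np.triu(A, 1)           # strictly upper triangular part of A is -S
    L = np.linalg.solve(D, T)    # L = D^{-1} T
    U = np.linalg.solve(D, S)    # U = D^{-1} S
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I - sigma * L,
                           (1 - omega) * I + (omega - sigma) * L + omega * U)

def aor_solve(A, b, sigma, omega, tol=1e-10, maxit=1000):
    """Iterate x <- M x + omega (I - sigma L)^{-1} D^{-1} b until convergence."""
    D = np.diag(np.diag(A))
    L = np.linalg.solve(D, -np.tril(A, -1))
    n = A.shape[0]
    M = aor_iteration_matrix(A, sigma, omega)
    c = omega * np.linalg.solve(np.eye(n) - sigma * L, np.linalg.solve(D, b))
    x = np.zeros(n)
    for _ in range(maxit):
        x_new = M @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

Note that σ = ω recovers the SOR method and σ = ω = 1 recovers Gauss-Seidel, so the sketch can be checked against those classical special cases.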

2. Preliminaries

We denote
Ri(A) = ∑j≠i |aij|,  Si(A) = ∑j≠i |aji|,  i ∈ N = {1, 2, …, n}.
For any matrix A = (aij) ∈ C^{n×n}, the comparison matrix M(A) = (mij) ∈ R^{n×n} is defined by
mii = |aii|,  mij = −|aij|  (i ≠ j).
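Forming the comparison matrix is mechanical; a small illustrative sketch in NumPy:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix M(A): m_ii = |a_ii|, m_ij = -|a_ij| for i != j."""
    M = -np.abs(A)                               # off-diagonal entries -|a_ij|
    np.fill_diagonal(M, np.abs(np.diag(A)))      # diagonal entries |a_ii|
    return M
```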

Definition 2.1 (see [2].)A matrix A ∈ C^{n×n} is called a strictly diagonally dominant matrix (A ∈ SD) if

|aii| > Ri(A),  for all i ∈ N.
A matrix A ∈ C^{n×n} is called a strictly doubly diagonally dominant matrix (A ∈ DD) if
|aii||ajj| > Ri(A)Rj(A),  for all i, j ∈ N, i ≠ j.

Definition 2.2 (see [3].)A matrix A ∈ C^{n×n} is called a strictly α diagonally dominant matrix (A ∈ D(α)) if there exists α ∈ [0,1] such that

|aii| > αRi(A) + (1 − α)Si(A),  for all i ∈ N.

Definition 2.3 (see [4].)Let A = (aij) ∈ C^{n×n}. If there exists α ∈ [0,1] such that

|aii||ajj| > αRi(A)Rj(A) + (1 − α)Si(A)Sj(A),  for all i, j ∈ N, i ≠ j,
then A is called a strictly doubly α diagonally dominant matrix (A ∈ DD(α)).
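These dominance conditions can be checked directly. The sketch below assumes the standard deleted row/column sums Ri(A) = ∑j≠i|aij| and Si(A) = ∑j≠i|aji| and is illustrative only:

```python
import numpy as np

def row_sums(A):
    """R_i(A) = sum over j != i of |a_ij| (deleted row sums)."""
    return np.abs(A).sum(axis=1) - np.abs(np.diag(A))

def col_sums(A):
    """S_i(A) = sum over j != i of |a_ji| (deleted column sums)."""
    return np.abs(A).sum(axis=0) - np.abs(np.diag(A))

def is_strictly_doubly_alpha_dd(A, alpha):
    """Check |a_ii||a_jj| > alpha R_i R_j + (1-alpha) S_i S_j for all i != j."""
    d = np.abs(np.diag(A))
    R, S = row_sums(A), col_sums(A)
    n = A.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j and not (d[i] * d[j] >
                               alpha * R[i] * R[j] + (1 - alpha) * S[i] * S[j]):
                return False
    return True
```

Setting alpha = 1 reduces the test to the strictly doubly diagonally dominant (DD) condition of Definition 2.1.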

In [3, 5, 6], several authors studied the convergence of the AOR method for solving the linear system Ax = b and gave areas of convergence. In [5], Cvetković and Herceg studied the convergence of the AOR method for strictly diagonally dominant matrices. In [3], Huang and Wang studied the convergence of the AOR method for strictly α diagonally dominant matrices. In [6], Gao and Huang studied the convergence of the AOR method for strictly doubly diagonally dominant matrices.

Theorem 2.4 (see [3].)Let A ∈ D(α). Then the AOR method converges for

()

Theorem 2.5 (see [6].)Let A ∈ DD. Then the AOR method converges for

()
where
()

3. Upper Bound for the Spectral Radius of Mσ,ω

In the following, we present an upper bound for the spectral radius of the AOR iteration matrix Mσ,ω when the coefficient matrix is strictly doubly α diagonally dominant.

Lemma 3.1 (see [4].)If A ∈ DD(α), then A is a nonsingular H-matrix.

Theorem 3.2. Let A ∈ DD(α). If 1 − σ2[αRi(L)Rj(L) + (1 − α)Si(L)Sj(L)] > 0 for all i, j ∈ N, i ≠ j, then

()
where
()

Proof. Let λ be an eigenvalue of Mσ,ω, so that

det(λI − Mσ,ω) = 0,
that is,
det(λ(I − σL) − ((1 − ω)I + (ω − σ)L + ωU)) = 0.
If λ(I − σL) − ((1 − ω)I + (ω − σ)L + ωU) ∈ DD(α), then by Lemma 3.1, λ(I − σL) − ((1 − ω)I + (ω − σ)L + ωU) is nonsingular and λ is not an eigenvalue of the iteration matrix Mσ,ω; that is, if
()
then λ is not an eigenvalue of Mσ,ω. In particular, if
()
then λ is not an eigenvalue of Mσ,ω.

If λ is an eigenvalue of Mσ,ω, then there exists at least one pair i, j (i ≠ j) such that

()
that is,
()
Since A1 = 1 − σ2[αRi(L)Rj(L) + (1 − α)Si(L)Sj(L)] > 0 and the discriminant Δ of the quadratic in |λ| satisfies Δ ≥ 0, the solution of (3.8) satisfies
()
So
()
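Although the theorem gives an analytic bound, ρ(Mσ,ω) can also be computed numerically as a sanity check. The following sketch (using an arbitrary test matrix, not one from the paper) does so via the eigenvalues of the iteration matrix:

```python
import numpy as np

def spectral_radius_aor(A, sigma, omega):
    """Numerically compute rho(M_{sigma,omega}) for the splitting A = D - T - S."""
    D = np.diag(np.diag(A))
    n = A.shape[0]
    L = np.linalg.solve(D, -np.tril(A, -1))   # L = D^{-1} T
    U = np.linalg.solve(D, -np.triu(A, 1))    # U = D^{-1} S
    M = np.linalg.solve(np.eye(n) - sigma * L,
                        (1 - omega) * np.eye(n) + (omega - sigma) * L + omega * U)
    return max(abs(np.linalg.eigvals(M)))
```

Any parameter pair (σ, ω) inside a claimed convergence area should yield a value strictly below 1.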

4. Improving Results on Convergence of AOR Method

In this section, we present new results on convergence of AOR method.

Theorem 4.1. Let A ∈ DD(α). Then the AOR method converges if ω, σ satisfy one of the following conditions (I)–(III):

()
where
()

Proof. It is easy to verify that for each σ satisfying one of the conditions (I)–(III), we have

()

Firstly, we consider case (I). Since A is a strictly doubly α diagonally dominant matrix, by Lemma 3.1 we know that A is a nonsingular H-matrix; therefore, M(A) is a nonsingular M-matrix, and it follows from [7] that ρ(Mσ,σ) < 1 holds for 0 ≤ σ < s and, for σ ≠ 0,

()
If 0 < ω < max {2σ/(1 + ρ(Mσ,σ)), t} = 2σ/(1 + ρ(Mσ,σ)), then
()
by the extrapolation theorem [6], we have ρ(Mσ,ω) < 1.

If 0 < ω < max {2σ/(1 + ρ(Mσ,σ)), t} = t, then it remains to analyze the case

()
Since when ρ(Mσ,σ) < 1,
()
then 0 ≤ σ < ω. From
()
we obtain (4.8).
  • (1)

When ω ≤ 1, it is easy to verify that (4.8) holds.

  • (2)

    When ω > 1, since

    ()

then by A2A3 < A1 and 1 − σ2[αRi(L)Rj(L) + (1 − α)Si(L)Sj(L)] > 0, for  all  i, jN,   ij, we have
()
It is easy to verify that the discriminant Δ of the quadratic in ω satisfies Δ > 0, and so there holds
()
or
()
For ω1, we have A2 > 2, which contradicts (4.8b). Therefore, ω1 should be excluded.

Secondly, we prove (II).

  • (1)

    When 0 < ω ≤ 1,  σ < 0,

    ()
    By A2A3 < A1, we have
    ()
    It is easy to verify that the discriminant Δ of the quadratic in σ satisfies Δ > 0, and so there holds
    ()
    By σ < 0 and 1 − σ2[αRi(L)Rj(L) + (1 − α)Si(L)Sj(L)] > 0, for  all  i, jN, ij, we obtain
    ()

  • (2)

    When 1 < ω < t, σ < 0,

    ()
    By A2A3 < A1, we have
    ()
    It is easy to verify that the discriminant Δ of the quadratic in σ satisfies Δ > 0, and so there holds
    ()
    By σ < 0 and 1 − σ2[αRi(L)Rj(L) + (1 − α)Si(L)Sj(L)] > 0, for  all  i, jN,  ij, we obtain
    ()
    Therefore, by (4.16) and (4.20), we get
    ()

Finally, we prove (III).

  • (1)

    When 0 < ω ≤ 1, σt,

    ()
    By A2A3 < A1, we have
    ()
    It is easy to verify that the discriminant Δ of the quadratic in σ satisfies Δ > 0, and so there holds
    ()
    By σt, we obtain
    ()

  • (2)

    When 1 < ω < t,  σt,

    ()
    By A2A3 < A1, we have
    ()
    It is easy to verify that the discriminant Δ of the quadratic in σ satisfies Δ > 0, and so there holds
    ()
    By σt, we obtain
    ()
    Therefore, by (4.25) and (4.29), we obtain
    ()

We can obtain the following results easily.

Theorem 4.2. Let A ∈ DD(α). If Ri(L + U)Rj(L + U) − Si(L + U)Sj(L + U) > 0 and, when 0 < ω ≤ 1, the following conditions hold:

()
or when 1 < ω < t, the following conditions hold:
()
then the area of convergence of AOR method obtained by Theorem 4.1 is larger than that obtained by Theorem 2.5.

Theorem 4.3. Let A ∈ DD(α). If Ri(L + U)Rj(L + U) − Si(L + U)Sj(L + U) < 0 and, when 0 < ω ≤ 1, the following conditions hold:

()
or when 1 < ω < t, the following conditions hold:
()
then the area of convergence of AOR method obtained by Theorem 4.1 is larger than that obtained by Theorem 2.5.

5. Examples

In the following examples, we give the areas of convergence of the AOR method to show that our results are better than those obtained by Theorems 2.4 and 2.5.

Example 5.1 (see [6].)Let

()
where
()
Obviously, A ∉ SD, but A ∈ DD(1/2).

By Theorem 4.1, we have the following area of convergence:
()
Obviously, A ∈ DD.
By Theorem 2.5, we have the following area of convergence:
()
In addition, A ∈ D(1/2).
By Theorem 2.4, we have the following area of convergence:
()

Now we give two figures. In Figure 1, we can see that the area of convergence obtained by Theorem 4.1 (solid line) is larger than that obtained by Theorem 2.5 (dashed line). In Figure 2, we can see that the area of convergence obtained by Theorem 4.1 (solid line) is larger than that obtained by Theorem 2.4 (dashed line). From the above, we know that the area of convergence obtained by Theorem 4.1 is larger than those obtained by Theorems 2.4 and 2.5.

[Figure 1: areas of convergence obtained by Theorem 4.1 and Theorem 2.5.]
[Figure 2: areas of convergence obtained by Theorem 4.1 and Theorem 2.4.]

Example 5.2. Let

()
Obviously, A ∈ DD(1/3), A ∉ D(α), and A ∉ DD. So we cannot use Theorems 2.4 and 2.5. By Theorem 4.1, we have the following area of convergence:
()

Acknowledgments

The authors would like to thank the referees for their valuable comments and suggestions, which greatly improved the original version of this paper. This work was supported by the National Natural Science Foundation of China (Grant no. 11001144) and the Science and Technology Program of Shandong Universities of China (J10LA06).
