Volume 2013, Issue 1, Article ID 487062
Research Article
Open Access

A New Second-Order Iteration Method for Solving Nonlinear Equations

Shin Min Kang
Department of Mathematics and RINS, Gyeongsang National University, Jinju 660-701, Republic of Korea

Arif Rafiq
School of Computer Science and Mathematics, Hajvery University, 43-52 Industrial Area, Gulberg III, Lahore 54660, Pakistan

Young Chel Kwun (Corresponding Author)
Department of Mathematics, Dong-A University, Busan 614-714, Republic of Korea

First published: 09 April 2013
Citations: 13
Academic Editor: Gue Lee

Abstract

We establish a new second-order iteration method for solving nonlinear equations. The efficiency index of the method is 1.4142, the same as that of the Newton-Raphson method. The efficiency of the method is also illustrated by several examples. It is worth noting that (i) our method performs very well in comparison with the fixed point method and the method discussed in Babolian and Biazar (2002), and (ii) our method is simpler to apply than the method of Babolian and Biazar (2002): it involves only the first-order derivative while exhibiting second-order convergence, whereas the method of Babolian and Biazar (2002) requires the computation of higher-order derivatives of the nonlinear operator involved in the functional equation.

1. Introduction

Our problem, to recall, is solving equations in one variable. We are given a function f and would like to find at least one solution of the equation f(x) = 0. Note that, a priori, we place no restrictions on the function f; we only need to be able to evaluate it; otherwise, we cannot even check that a candidate solution x = α is correct, that is, that f(α) = 0. In reality, the mere ability to evaluate the function does not suffice. We need to assume some kind of “good behavior.” The more we assume, the more potential we have, on the one hand, to develop fast algorithms for finding the root. At the same time, the more we assume, the fewer functions are going to satisfy our assumptions! This is a fundamental paradigm in numerical analysis.

We know that one of the fundamental algorithms for solving nonlinear equations is the so-called fixed point iteration method [1].

In the fixed point iteration method for solving the nonlinear equation f(x) = 0, the equation is usually rewritten as

x = g(x), (1)

where
  • (i) there exists an interval [a, b] such that g(x) ∈ [a, b] for all x ∈ [a, b],
  • (ii) there exists a constant L such that |g′(x)| ≤ L < 1 for all x ∈ [a, b].

Considering the following iteration scheme:

x_{n+1} = g(x_n),  n = 0, 1, 2, …, (2)

and starting with a suitable initial approximation x0, we build up a sequence of approximations, say {xn}, for the solution of the nonlinear equation, say α. The scheme will converge to the root α provided that
  • (i) the initial approximation x0 is chosen in the interval [a, b],
  • (ii) g has a continuous derivative on (a, b),
  • (iii) |g′(x)| < 1 for all x ∈ [a, b],
  • (iv) a ≤ g(x) ≤ b for all x ∈ [a, b] (see [1]).
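Under conditions (i)–(iv), the scheme is a few lines of code. The sketch below is a minimal illustration (the function name, tolerance, and iteration cap are our choices, not the paper's), applied to Example 6 of Section 4.

```python
# A minimal sketch of fixed-point iteration.
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)                 # x_{n+1} = g(x_n)
        if abs(x_new - x) < tol:     # stop when the iterates settle
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Example 6 of Section 4: x^3 + 4x^2 + 8x + 8 = 0 rewritten as
# x = g(x) with g(x) = -(1 + x^2/2 + x^3/8); the root is -2.
root = fixed_point(lambda x: -(1 + x**2 / 2 + x**3 / 8), -1.9)
```

Since |g′(−2)| = 1/2 < 1 here, the conditions above hold near the root and the iteration converges linearly.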

The order of convergence for the sequence of approximations derived from an iteration method is defined in the literature as follows.

Definition 1. Let {xn} converge to α. If there exist an integer constant p and a real positive constant C such that

lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = C, (3)

then p is called the order and C the constant of convergence.

To determine the order of convergence of the sequence {xn}, let us consider the Taylor expansion of g(xn) about α:

g(x_n) = g(α) + g′(α)(x_n − α) + (g″(α)/2!)(x_n − α)² + ⋯. (4)

Using (1) and (2) in (4), we have

x_{n+1} − α = g′(α)(x_n − α) + (g″(α)/2!)(x_n − α)² + ⋯, (5)

and we can state the following result [1].

Theorem 2 (see [2]). Suppose that g ∈ C^p[a, b]. If g^{(k)}(α) = 0 for k = 1, 2, …, p − 1 and g^{(p)}(α) ≠ 0, then the sequence {xn} is of order p.

It is well known that the fixed point method has first-order convergence.

Over the years, numerical techniques for solving nonlinear equations have been applied successfully (see, e.g., [2–4] and the references therein).

In [4], Babolian and Biazar modified the standard Adomian decomposition method for solving the nonlinear equation f(x) = 0 to derive a sequence of approximations to the solution, with nearly superlinear convergence. However, their method requires the computation of higher-order derivatives of the nonlinear operator involved in the functional equation.

In this paper, a new iteration method derived from the fixed point method is proposed for solving nonlinear equations. The proposed method has second-order convergence and is applied to several problems in order to assess its validity and accuracy. It is worth mentioning that our method involves only the first-order derivative while exhibiting second-order convergence.

2. New Iteration Method

Consider the nonlinear equation

f(x) = 0. (6)

We assume that α is a simple zero of f(x) and that x0 is an initial guess sufficiently close to α. Equation (6) is usually rewritten as

x = g(x). (7)

Following the approach of [4], if g′(x) ≠ 1, we can modify (7) by adding θx, with θ ≠ −1, to both sides:

x + θx = g(x) + θx, (8)

which implies that

x = (g(x) + θx) / (1 + θ). (9)

In order for (9) to be efficient, we choose θ so that the derivative of the right-hand side of (9) vanishes, that is, g′(x) + θ = 0, which yields

θ = −g′(x), (10)

so that (9) takes the form

x = (g(x) − x g′(x)) / (1 − g′(x)). (11)

This formulation allows us to suggest the following iteration method for solving the nonlinear equation (6).

Algorithm 3. For a given x0, we calculate the approximate solution x_{n+1} by the iteration scheme

x_{n+1} = (g(x_n) − x_n g′(x_n)) / (1 − g′(x_n)),  g′(x_n) ≠ 1,  n = 0, 1, 2, …. (12)
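The iteration scheme above translates directly into code. A minimal sketch follows (the function names, tolerance, and iteration cap are our choices, not the paper's); the caller supplies g and its derivative.

```python
# A minimal sketch of Algorithm 3.
def algorithm3(g, dg, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = (g(x_n) - x_n g'(x_n)) / (1 - g'(x_n))."""
    x = x0
    for _ in range(max_iter):
        d = dg(x)
        if d == 1:
            raise ZeroDivisionError("g'(x) = 1: the step is undefined")
        x_new = (g(x) - x * d) / (1 - d)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Example 6 of Section 4: g(x) = -(1 + x^2/2 + x^3/8),
# g'(x) = -x - (3/8)x^2; the root is -2.
root = algorithm3(lambda x: -(1 + x**2 / 2 + x**3 / 8),
                  lambda x: -x - 3 * x**2 / 8,
                  -1.9)
```

Note that only g and g′ are needed; no higher-order derivatives are computed.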

3. Convergence Analysis

Now we discuss the convergence analysis of Algorithm 3.

Theorem 4. Let f : D ⊂ ℝ → ℝ for an open interval D, and suppose that the nonlinear equation f(x) = 0 (or x = g(x)) has a simple root α ∈ D, where g : D → ℝ is sufficiently smooth in a neighborhood of the root α. Then the order of convergence of Algorithm 3 is at least 2.

Proof. The iteration scheme is given by

x_{n+1} = (g(x_n) − x_n g′(x_n)) / (1 − g′(x_n)),  g′(x_n) ≠ 1. (13)

Let α be a simple zero of f, and let e_n = x_n − α, where e_n is the error involved at the nth step of Algorithm 3. Write the scheme as x_{n+1} = G(x_n) with

G(x) = (g(x) − x g′(x)) / (1 − g′(x)). (14)

Since g(α) = α, we have G(α) = α and G′(α) = g″(α)(g(α) − α)/(1 − g′(α))² = 0. By Taylor's expansion of G about α, we have

x_{n+1} = G(α) + G′(α) e_n + (G″(α)/2!) e_n² + O(e_n³) = α − (g″(α) / (2(1 − g′(α)))) e_n² + O(e_n³); (15)

this shows that

e_{n+1} = −(g″(α) / (2(1 − g′(α)))) e_n² + O(e_n³) = O(e_n²). (16)

This completes the proof.
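The error equation can be checked numerically. The script below (ours, not the paper's) uses Example 6 of Section 4, where α = −2, g′(−2) = 1/2, and g″(−2) = 1/2, so the predicted asymptotic constant −g″(α)/(2(1 − g′(α))) is −0.5.

```python
# Numerical check of e_{n+1} ≈ -g''(α)/(2(1-g'(α))) * e_n^2 for
# Example 6 of Section 4; the predicted constant is -0.5.
g    = lambda x: -(1 + x**2 / 2 + x**3 / 8)
dg   = lambda x: -x - 3 * x**2 / 8
step = lambda x: (g(x) - x * dg(x)) / (1 - dg(x))   # one step of Algorithm 3

alpha = -2.0
x0 = -1.9
x1 = step(x0)
x2 = step(x1)
e0, e1, e2 = x0 - alpha, x1 - alpha, x2 - alpha
ratio1 = e1 / e0**2                 # ≈ -0.496
ratio2 = e2 / e1**2                 # ≈ -0.503
```

Both ratios are already close to the predicted −0.5, confirming the quadratic error law.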

Remark 5. For

G(x) = (g(x) − x g′(x)) / (1 − g′(x)), (17)

using the software Maple, we can easily deduce that

G′(x) = g″(x)(g(x) − x) / (1 − g′(x))², (18)

G″(x) = (g‴(x)(g(x) − x) + g″(x)(g′(x) − 1)) / (1 − g′(x))² + 2(g″(x))²(g(x) − x) / (1 − g′(x))³. (19)

Now, since g(α) = α, it can easily be seen from (17)–(19) that G(α) = α, G′(α) = 0, and G″(α) = −g″(α)/(1 − g′(α)) ≠ 0 (provided g″(α) ≠ 0). Hence, according to Theorem 2, Algorithm 3 has second-order convergence.

4. Applications

Now we present some examples [4] to illustrate the efficiency of the developed method, namely, Algorithm 3. We compare the fixed point method (FPM) with Algorithm 3.

Example 6. Consider the equation x³ + 4x² + 8x + 8 = 0. We have g(x) = −(1 + (1/2)x² + (1/8)x³) and g′(x) = −x − (3/8)x². The exact solution of this equation is −2. Take x0 = −1.9; then the comparison of the two methods is shown in Table 1, correct up to four decimal places.

FPM                 Algorithm 3
x1 = −1.9476        x1 = −2.0050
x2 = −1.9731        x2 = −2.0000
x3 = −1.9864
x4 = −1.9932
x5 = −1.9966
x6 = −1.9983
x7 = −1.9991
x8 = −1.9995
x9 = −1.9997
x10 = −1.9998
x11 = −1.9999

Example 7. Consider the equation x + ln(x − 2) = 0. We have g(x) = 2 + e^{−x} and g′(x) = −e^{−x}. The graphical solution of this equation is 2.1200 (4D). Take x0 = 2.1; then the comparison of the two methods is shown in Table 2, correct up to four decimal places.

FPM                 Algorithm 3
x1 = 2.1225         x1 = 2.1200
x2 = 2.1197
x3 = 2.1201
x4 = 2.1200
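The first row of Table 2 can be reproduced with a short check (ours, not the paper's): rewriting x + ln(x − 2) = 0 as x = 2 + e^{−x} gives g(x) = 2 + e^{−x} and g′(x) = −e^{−x}.

```python
# Check of the first row of Table 2 for x + ln(x - 2) = 0.
import math

g  = lambda x: 2 + math.exp(-x)
dg = lambda x: -math.exp(-x)
x0 = 2.1

fpm_x1  = g(x0)                                   # ≈ 2.1225
alg3_x1 = (g(x0) - x0 * dg(x0)) / (1 - dg(x0))    # ≈ 2.1200
```

One step of Algorithm 3 already reaches the four-decimal solution that FPM needs four steps to attain.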

Example 8. Consider the equation x⁵ − 5x⁴ + 10x³ − 9x² + 5x − 1 = 0. We have g(x) = 0.2 + 1.8x² − 2x³ + x⁴ − 0.2x⁵ and g′(x) = 3.6x − 6x² + 4x³ − x⁴. The graphical solution of this equation is 0.34595 (5D). Take x0 = 0.28; then the comparison of the two methods is shown in Table 3, correct up to five decimal places.

FPM                 Algorithm 3
x1 = 0.30302        x1 = 0.34046
x2 = 0.31755        x2 = 0.34592
x3 = 0.32699        x3 = 0.34595
x4 = 0.33322
x5 = 0.33737
x6 = 0.34016
x7 = 0.34203
x8 = 0.34330
x9 = 0.34416
x10 = 0.34474
x11 = 0.34513
x12 = 0.34540
x13 = 0.34558
x14 = 0.34570
x15 = 0.34578
x16 = 0.34584
x17 = 0.34588
x18 = 0.34590
x19 = 0.34592
x20 = 0.34593
x21 = 0.34594

Example 9. Consider the equation e^x − 3x² = 0. We have g(x) = e^{x/2}/√3 and g′(x) = e^{x/2}/(2√3). The graphical solution of this equation is 0.91 (2D). Take x0 = 0.8; then the comparison of the two methods is shown in Table 4, correct up to five decimal places.

FPM                 Algorithm 3
x1 = 0.86131        x1 = 0.90768
x2 = 0.88812        x2 = 0.91001
x3 = 0.90010
x4 = 0.90551
x5 = 0.90796
x6 = 0.90908
x7 = 0.90959
x8 = 0.90982
x9 = 0.90992
x10 = 0.90997
x11 = 0.90999
x12 = 0.91000
x13 = 0.91000
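The first row of Table 4 can likewise be reproduced (our check, not the paper's): rewriting e^x − 3x² = 0, for the positive root near 0.91, as x = √(e^x/3) gives g(x) = e^{x/2}/√3 and g′(x) = e^{x/2}/(2√3) = g(x)/2.

```python
# Check of the first row of Table 4 for e^x - 3x^2 = 0.
import math

g  = lambda x: math.exp(x / 2) / math.sqrt(3)
dg = lambda x: g(x) / 2
x0 = 0.8

fpm_x1  = g(x0)                                   # ≈ 0.86131
alg3_x1 = (g(x0) - x0 * dg(x0)) / (1 - dg(x0))    # ≈ 0.90768
```

Again, a single step of Algorithm 3 lands within about 0.003 of the root, while FPM needs roughly a dozen steps for comparable accuracy.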

5. Conclusions

A new iteration method for solving nonlinear equations has been established, and its performance has been illustrated through several examples. The method performs very well in comparison with the fixed point method and the method discussed in [4]. The method can be studied for functional equations and can be extended to systems of nonlinear equations.

Acknowledgments

The authors would like to thank the editor and referees for useful comments and suggestions. This study was supported by research funds from Dong-A University.
