Volume 2011, Issue 1, Article ID 972537
Research Article
Open Access

Geometrically Constructed Families of Newton's Method for Unconstrained Optimization and Nonlinear Equations

Sanjeev Kumar (Corresponding Author)
Department of Mathematics, Maharishi Markandeshwar University, Sadopur, Ambala, Haryana 134007, India

Vinay Kanwar (Corresponding Author)
University Institute of Engineering and Technology, Panjab University, Chandigarh 160 014, India

Sushil Kumar Tomar
Department of Mathematics, Panjab University, Chandigarh 160 014, India

Sukhjit Singh
Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Punjab 148106, India
First published: 07 August 2011
Academic Editor: Naseer Shahzad

Abstract

One-parameter families of Newton's iterative method for the solution of nonlinear equations, and their extension to unconstrained optimization problems, are presented in this paper. These methods are derived by implementing approximations through a straight line and through a parabolic curve in the vicinity of the root. The presented variants are found to yield better performance than Newton's method and, in addition, to overcome its limitations.

1. Introduction

Newton's method is one of the most fundamental tools in computational mathematics, operations research, optimization, and control theory. It has many applications in management science, industrial and financial research, chaos and fractals, dynamical systems, variational inequalities and equilibrium-type problems, stability analysis, data mining, and even random operator equations. Its role in optimization theory cannot be overstated, as the method is the basis for the most effective procedures in linear and nonlinear programming. For a more detailed survey, one can refer to [1] and the references cited therein.

Let f(x) be a sufficiently smooth, continuously differentiable function of a single variable x. One of the most basic problems in numerical analysis is to find a solution of the frequently occurring nonlinear equation of the form

(1.1) f(x) = 0.

Let

(1.2) y = f(x)

represent the graph of the function f(x).

A large number of iterative methods have been developed for finding the solution of single-variable nonlinear equations as well as for solving systems of nonlinear equations. One important reason for this multiplicity of methods is that none of them works for all types of problems. For a more detailed survey of the most important methods, many excellent textbooks are available in the literature [2–4].

Newton's method is probably the simplest, most flexible, best-known, and most widely used numerical method. However, as is well known, a major difficulty in its application is the selection of the initial guess, which must be chosen sufficiently close to the true solution in order to guarantee convergence. Finding a criterion for choosing the initial guess is quite cumbersome, and the method may fail miserably if, at any stage of the computation, the derivative of the function f(x) is either zero or sufficiently small. For this reason, it exhibits poor convergence and runs into stability problems.

Also, for solving nonlinear, univariate, unconstrained optimization problems, Newton's method [1] is an important and basic method which converges quadratically: x_{n+1} = x_n − f′(x_n)/f″(x_n). The idea behind Newton's method is to approximate the objective function locally by a quadratic function which agrees with f(x) at x = x_n up to second derivatives. Again, the condition f″(x) ≠ 0 in a neighborhood of the optimum is required for the success of Newton's method.

The purpose of this paper is to eliminate these defects of Newton's method by simple modifications of the iteration process. Numerical results indicate that the proposed iterative formulae are effective and comparable to the well-known Newton's method. Furthermore, the presented techniques, unlike Newton's method, have guaranteed convergence, and they are as simple as that well-known technique.

2. Proposed Methods for Single-Variable Nonlinear Equations

In this section, we shall derive two families of methods by applying approximations via a straight line and via a parabolic curve.

(a) Approximation by a Straight Line. Consider the equation of a straight line having slope equal to p f(x_0) and passing through the point (x_0, 0), in the form

(2.1) y = p f(x_0)(x − x_0),

where x_0 is an initial guess to an exact root r of (1.1) and p ∈ ℝ with |p| < ∞. Let

(2.2) x_1 = x_0 + h

be the better approximation to the root. Assume that the straight line (2.1) intersects the graph (1.2) of the function at the point (x_1, f(x_0 + h)). The straight line (2.1), passing through the point (x_1, f(x_0 + h)), then takes the form

(2.3) f(x_0 + h) = p f(x_0) h.

Expanding the left-hand side by means of Taylor's expansion about the point x = x_0 and simplifying (retaining the terms up to O(h)), we get an iteration formula given by

(2.4) x_1 = x_0 − f(x_0) / (f′(x_0) − p f(x_0)).

The general formula for successive approximations is given by

(2.5) x_{n+1} = x_n − f(x_n) / (f′(x_n) − p f(x_n)), n = 0, 1, 2, ….

This is the one-parameter family of Newton's method. This formula was derived independently by Mamta et al. [5] and Kanwar and Tomar [6] using different approaches. In order to obtain quadratic convergence of the method, the sign of the parameter p should be chosen so that the denominator is largest in magnitude, that is, so that p f(x_n) and f′(x_n) have opposite signs. Unlike Newton's method, this formula is well defined even if f′(x) is zero.
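
For illustration, here is a minimal C++ sketch of family (2.5); the test function (example (1) of Table 1), the starting guess x_0 = 1, and the stopping tolerance are assumptions chosen for this example. The guess x_0 = 1 makes f′(x_0) = 0, so Newton's method breaks down there, while (2.5) remains well defined and converges to the second, larger root of the equation.

    #include <cmath>
    #include <cstdio>

    // One step of family (2.5): x <- x - f(x) / (f'(x) - p f(x)), with the sign
    // of p chosen so that the denominator is largest in magnitude.
    double step25(double x, double p, double (*f)(double), double (*df)(double)) {
        double fx = f(x), dfx = df(x), q = std::fabs(p);
        double d1 = dfx - q * fx, d2 = dfx + q * fx;
        return x - fx / ((std::fabs(d1) >= std::fabs(d2)) ? d1 : d2);
    }

    int main() {
        // f(x) = x e^{-x} - 0.1 (example (1) of Table 1); f'(x) vanishes at x0 = 1.
        auto f  = [](double x) { return x * std::exp(-x) - 0.1; };
        auto df = [](double x) { return (1.0 - x) * std::exp(-x); };
        double x = 1.0;
        for (int n = 0; n < 100 && std::fabs(f(x)) > 1e-12; ++n)   // tolerance is an assumption
            x = step25(x, 1.0, f, df);
        std::printf("root ~ %.15f\n", x);   // second root of x e^{-x} = 0.1, near 3.577
    }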

(b) Approximation by a Parabola. Consider a parabola of the form

(2.6) y = p² f(x_0)(x − x_0)².

Adopting the same procedure as in the previous case (a), one obtains the following iterative formula:

(2.7) x_{n+1} = x_n − 2 f(x_n) / (f′(x_n) ± √([f′(x_n)]² + 4 p² f²(x_n))).

In (2.7), the sign in the denominator should be chosen so that the denominator is largest in magnitude. This is a parabolic version of Newton's method [6] and does not fail if f′(x_n) = 0. Note that for p = 0, the classical Newton formula can be recovered from both (2.5) and (2.7).
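
A corresponding C++ sketch of family (2.7) follows; again the test function (example (5) of Table 1), the starting guess, and the tolerance are assumptions chosen for this example.

    #include <cmath>
    #include <cstdio>

    // One step of family (2.7):
    // x <- x - 2 f(x) / (f'(x) +/- sqrt(f'(x)^2 + 4 p^2 f(x)^2)),
    // taking the sign that makes the denominator largest in magnitude.
    double step27(double x, double p, double (*f)(double), double (*df)(double)) {
        double fx = f(x), dfx = df(x);
        double s = std::sqrt(dfx * dfx + 4.0 * p * p * fx * fx);
        return x - 2.0 * fx / ((dfx >= 0.0) ? dfx + s : dfx - s);
    }

    int main() {
        // f(x) = (x - 1)^6 - 1 (example (5) of Table 1), x0 = 1.5, |p| = 1.
        auto f  = [](double x) { return std::pow(x - 1.0, 6) - 1.0; };
        auto df = [](double x) { return 6.0 * std::pow(x - 1.0, 5); };
        double x = 1.5;
        for (int n = 0; n < 100 && std::fabs(f(x)) > 1e-12; ++n)   // tolerance is an assumption
            x = step27(x, 1.0, f, df);
        std::printf("root ~ %.15f\n", x);   // expected root: 2.000000000000000
    }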

(c) Exponential Iteration Formulae. Exploiting the main idea of Mamta et al. [5], Chen and Li [7] derived new classes of quadratically convergent exponential iterative methods. Along similar lines, we can also derive exponential iterative formulae for solving nonlinear equations.

Letting x_{n+1} = x_n exp(−h/x_n) be the better approximation to the exact root r, then from (2.5) and (2.7) we obtain the following exponential iteration formulae:

(2.8) x_{n+1} = x_n exp(−f(x_n) / (x_n(f′(x_n) − p f(x_n)))), x_{n+1} = x_n exp(−2 f(x_n) / (x_n(f′(x_n) ± √([f′(x_n)]² + 4 p² f²(x_n))))),

respectively. Letting p = 0 in these formulae, we obtain another exponential iterative formula given by

(2.9) x_{n+1} = x_n exp(−f(x_n) / (x_n f′(x_n))).

Note that by taking the first-order Taylor expansion of exp(−f(x_n)/(x_n f′(x_n))) in (2.9), Newton's formula can be recovered. The idea can be generalized further (along the lines of Mir and Rafiq [8, 9]) to the case of multiple zeros of nonlinear equations.
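
A small C++ sketch of the exponential formula (2.9) is given below; the test function f(x) = x² − 2, the starting guess, and the tolerance are assumptions chosen for this example. Note that the formula requires x_n ≠ 0.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Exponential formula (2.9): x <- x * exp(-f(x) / (x f'(x))), valid for x != 0.
        auto f  = [](double x) { return x * x - 2.0; };   // assumed test function, root sqrt(2)
        auto df = [](double x) { return 2.0 * x; };
        double x = 1.5;                                   // assumed starting guess
        for (int n = 0; n < 100 && std::fabs(f(x)) > 1e-12; ++n)
            x = x * std::exp(-f(x) / (x * df(x)));
        std::printf("root ~ %.15f\n", x);                 // expected: 1.414213562373095
    }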

3. Extension to Unconstrained Optimization Problems

In this section, we shall extend the formulae (2.5) and (2.7) to solve nonlinear, univariate, unconstrained optimization problems.

Consider the nonlinear optimization problem: minimize {f(x) : x ∈ ℝ}, where f : ℝ → ℝ is a nonlinear, twice-differentiable function.

(a) Extension of Formula (2.5). Assume that f(x) is a sufficiently smooth function and has an extremum (maximum or minimum) at a point x = α. Following (2.3), with f′(x) now playing the role of f(x), consider the auxiliary relation with parameter p,

(3.1) f′(x_n + h) = p f′(x_n) h.
It is possible to construct a quadratic function q(x) which agrees with f(x) up to second derivatives in the neighborhood of the point x = x_n, that is,

(3.2) q(x) = f(x_n) + (x − x_n) f′(x_n) + (1/2)(x − x_n)²(f″(x_n) − p f′(x_n)).

We may estimate the extremum of f(x) at x = x_{n+1} by finding the point where the derivative of q(x) vanishes [1, 10], that is, q′(x_{n+1}) = 0, which gives

(3.3) f′(x_n) + (x_{n+1} − x_n)(f″(x_n) − p f′(x_n)) = 0.

This gives the following iterative formula:

(3.4) x_{n+1} = x_n − f′(x_n) / (f″(x_n) − p f′(x_n)).

This is a one-parameter family of Newton's method for the unconstrained optimization problem, and it does not fail even if f″(x_n) = 0, unlike Newton's method.

(b) Extension of Formula (2.7). Similarly, from the parabolic approximation (2.6) applied to f′(x), one obtains in the neighborhood of x = x_n the relation

(3.5) f′(x_n) + (x_{n+1} − x_n) f″(x_n) = p² f′(x_n)(x_{n+1} − x_n)².

Solving for x_{n+1}, we get

(3.6) x_{n+1} = x_n − 2 f′(x_n) / (f″(x_n) ± √([f″(x_n)]² + 4 p² [f′(x_n)]²)).

This is the modification of formula (2.7) for the unconstrained optimization problem, which again does not fail even if f″(x_n) = 0. In (3.4) and (3.6), the sign in the denominator should be chosen so that the denominator is largest in magnitude. If we let p = 0 in (3.4) and (3.6), we obtain Newton's iteration formula for the unconstrained optimization problem [1].
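
The following C++ sketch implements one step of each of (3.4) and (3.6) under this sign rule and runs both on example (1) of Table 3 (the tolerance and iteration cap are assumptions). Since f″(x_0) = 0 at x_0 = 2, Newton's method fails there, while both families reach the optimum x = 3 in a single step, consistent with Table 4.

    #include <cmath>
    #include <cstdio>

    // One step of family (3.4): x <- x - f'(x) / (f''(x) - p f'(x)).
    double step34(double x, double p, double (*df)(double), double (*d2f)(double)) {
        double g = df(x), h = d2f(x);
        double d1 = h - p * g, d2 = h + p * g;          // candidate denominators
        return x - g / ((std::fabs(d1) >= std::fabs(d2)) ? d1 : d2);
    }

    // One step of family (3.6): x <- x - 2 f'(x) / (f''(x) +/- sqrt(f''(x)^2 + 4 p^2 f'(x)^2)).
    double step36(double x, double p, double (*df)(double), double (*d2f)(double)) {
        double g = df(x), h = d2f(x);
        double s = std::sqrt(h * h + 4.0 * p * p * g * g);
        return x - 2.0 * g / ((h >= 0.0) ? h + s : h - s);
    }

    int main() {
        // f(x) = x^3 - 6x^2 + 9x - 8 (example (1) of Table 3), x0 = 2, |p| = 1.
        auto df  = [](double x) { return 3.0 * x * x - 12.0 * x + 9.0; };   // f'(x)
        auto d2f = [](double x) { return 6.0 * x - 12.0; };                 // f''(x)
        double a = 2.0, b = 2.0;
        for (int n = 0; n < 100 && std::fabs(df(a)) > 1e-12; ++n) a = step34(a, 1.0, df, d2f);
        for (int n = 0; n < 100 && std::fabs(df(b)) > 1e-12; ++n) b = step36(b, 1.0, df, d2f);
        std::printf("optimum ~ %.15f (3.4), %.15f (3.6)\n", a, b);          // expected: 3
    }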

Adopting the same procedure as for the exponential iteration formulae of Section 2, we can also derive exponential, quadratically convergent iterative formulae for unconstrained optimization. Recently, Kahya [10] also derived formulae similar to (3.4) and (3.6) by using a different approach based on the ideas of Mamta et al. [5].

4. Convergence Analysis

Here, we present mathematical proofs of the order of convergence of the iterative formulae (3.4) and (3.6), respectively.

Theorem 4.1. Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function defined on an interval I, and let x = α ∈ I be an optimum point of f(x). Assume that the initial guess x = x_n is sufficiently close to α and f″(x_n) − p f′(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (3.4) is quadratically convergent and satisfies the following error equation:

(4.1) e_{n+1} = (A₃ − p) e_n² + O(e_n³),

where e_n = x_n − α and

(4.2) A₃ = f‴(α) / (2 f″(α)).

Proof. Since x = α is an optimum point of f(x), that is, f′(α) = 0 and f″(α) ≠ 0, expanding f′(x_n) and f″(x_n) about x = α by Taylor's expansion, we obtain

(4.3) f′(x_n) = f″(α)[e_n + A₃ e_n² + O(e_n³)]

and

(4.4) f″(x_n) = f″(α)[1 + 2 A₃ e_n + O(e_n²)].

Furthermore,

(4.5) f′(x_n) / (f″(x_n) − p f′(x_n)) = e_n − (A₃ − p) e_n² + O(e_n³).

Using (4.5) in (3.4), we get

(4.6) e_{n+1} = (A₃ − p) e_n² + O(e_n³).

This proves the quadratic convergence of the formula (3.4).

Theorem 4.2. Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function defined on an interval I, and let x = α ∈ I be an optimum point of f(x). Assume that the initial guess x = x_n is sufficiently close to α in I. Then the iteration scheme defined by formula (3.6) is quadratically convergent and satisfies the following error equation:

(4.7) e_{n+1} = A₃ e_n² + O(e_n³).

Proof. Using (4.3) and (4.4), and choosing the sign in the denominator of (3.6) in accordance with the sign of f″(x_n), we have

(4.8) 2 f′(x_n) / (f″(x_n) ± √([f″(x_n)]² + 4 p² [f′(x_n)]²)) = e_n − A₃ e_n² + O(e_n³).

Using (4.8) in (3.6), we get

(4.9) e_{n+1} = A₃ e_n² + O(e_n³).

It is interesting to note that this error equation is the same as that of Newton's method. This completes the proof of the theorem.
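
As a numerical check of the error equation (4.1), the sketch below runs family (3.4) on f(x) = x³ − 6x² + 9x − 8 (example (1) of Table 3; the starting guess x_0 = 3.5 is an assumption). Here α = 3 and A₃ = f‴(α)/(2 f″(α)) = 6/12 = 0.5; with |p| = 1 the sign rule selects the denominator f″(x_n) + f′(x_n) along this trajectory, effectively p = −1, so (4.1) predicts e_{n+1}/e_n² → A₃ − p = 1.5.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Check e_{n+1}/e_n^2 -> A3 - p = 1.5 for family (3.4) on
        // f(x) = x^3 - 6x^2 + 9x - 8, optimum alpha = 3, x0 = 3.5, |p| = 1.
        auto df  = [](double x) { return 3.0 * x * x - 12.0 * x + 9.0; };   // f'(x)
        auto d2f = [](double x) { return 6.0 * x - 12.0; };                 // f''(x)
        const double alpha = 3.0;
        double x = 3.5;
        for (int n = 0; n < 5; ++n) {
            double e = x - alpha;
            double g = df(x), h = d2f(x);
            double dm = h - g, dp = h + g;              // candidate denominators, |p| = 1
            x -= g / ((std::fabs(dm) >= std::fabs(dp)) ? dm : dp);
            std::printf("n = %d   e_{n+1}/e_n^2 = %.6f\n", n, (x - alpha) / (e * e));
        }
    }

The printed ratios increase toward 1.5, in agreement with Theorem 4.1.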

5. Numerical Examples

Here we consider some examples to compare the number of iterations needed by the traditional Newton's method and by its modifications (2.5), (2.7), (3.4), and (3.6), respectively, for solving nonlinear equations (Tables 1 and 2) as well as unconstrained optimization problems (Tables 3 and 4). In the nonlinear-equation problems, we are to find a root of the equation in the given interval [a, b]. For simplicity, the formulae are tested for |p| = 1. Computations have been performed using C++ in double-precision arithmetic.
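
The sketch below shows how such a comparison can be run in C++ for example (1) of Table 1. The stopping rule |f(x)| < 10⁻¹² and the iteration cap are assumptions (the paper does not state its stopping criterion), so the printed counts need not reproduce Table 2 exactly.

    #include <cmath>
    #include <cstdio>

    static double f(double x)  { return x * std::exp(-x) - 0.1; }   // example (1) of Table 1
    static double df(double x) { return (1.0 - x) * std::exp(-x); } // f'(x)

    int main() {
        // Newton's method from x0 = 0.8.
        double x = 0.8; int n = 0;
        while (std::fabs(f(x)) > 1e-12 && n < 100) { x -= f(x) / df(x); ++n; }
        std::printf("Newton:       %2d iterations, root ~ %.15f\n", n, x);

        // Family (2.5) from x0 = 0.8 with |p| = 1.
        x = 0.8; n = 0;
        while (std::fabs(f(x)) > 1e-12 && n < 100) {
            double d1 = df(x) - f(x), d2 = df(x) + f(x);
            x -= f(x) / ((std::fabs(d1) >= std::fabs(d2)) ? d1 : d2);
            ++n;
        }
        std::printf("Family (2.5): %2d iterations, root ~ %.15f\n", n, x);
    }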

Table 1. Test problems (nonlinear equations).

No. | Example | [a, b] | Initial guesses | Root (r)
(1) | x e^{−x} − 0.1 = 0 | [0, 2] | 0.8 | 0.111832559108734
(2) | sin x = 0 | [−1, 1.6] | 1.0, 1.51, 1.52 | 0.000000000000000
(3) | e^{−x} − sin x = 0 | [5, 7] | 5.0 | 6.285049438476562
(4) | — | [2, 3.5] | 2.0, 2.8 | 3.000000000000000
(5) | (x − 1)^6 − 1 = 0 | [1, 3] | 1.0, 1.5, 2.5 | 2.000000000000000
Table 2. Comparison for nonlinear equations (number of iterations; one row per initial guess).

Examples | NM | Method (2.5) | Method (2.7)
(1) | 9 | 5 | 5
(1) | Fails | 5 | 5
(2) | Converges to undesired root (−5π) | 7 | 5
(2) | Converges to undesired root (−6π) | 7 | 5
(3) | Converges to undesired root (9.424696922302246) | 6 | 4
(3) | 3 | 5 | 3
(4) | Divergent | 2 | 2
(4) | 11 | 11 | 11
(4) | Fails | 1 | 1
(5) | 14 | 7 | 7
(5) | 6 | 7 | 6
Table 3. Unconstrained optimization problems.

No. | Example | Initial guesses | Optimum point (β)
(1) | x³ − 6x² + 9x − 8 | 2, 3.5 | 3
(2) | — | 32, 45 | 40.777259826660156
(3) | (x − 2)² + cos x | 1, 3 | 2.3542480166260
(4) | e^x − 3x² | −1, 1 | 0.204481452703476
(5) | — | 0.5, 2.0 | 0.860541462898254
Table 4. Comparison for unconstrained optimization problems (number of iterations; one row per initial guess).

Examples | NM | Method (3.4) | Method (3.6) | Optimum value
(1) | Fails | 1 | 1 | −8
(1) | 4 | 5 | 4 |
(2) | 4 | 15 | 13 | 3.599765349958602
(2) | 4 | 10 | 8 |
(3) | 4 | 6 | 4 | −0.580237420585759
(3) | 3 | 5 | 4 |
(4) | 4 | 6 | 5 | 1.1014507066670358
(4) | 4 | 5 | 4 |
(5) | 5 | 5 | 5 | 15.804002928482971
(5) | 4 | 6 | 5 |


6. Discussion and Conclusions

This study presents several iterative formulae of second order for solving scalar nonlinear equations and unconstrained optimization problems. The numerical examples in Tables 1–4 show that in many cases our methods are an efficient alternative to Newton's method, which may converge slowly or even fail. They are simple extensions of Newton's formula and have well-known geometric derivations. These methods remove the severe conditions f′(x) ≠ 0 and f″(x) ≠ 0 of Newton's method for the cases of nonlinear equations and of nonlinear unconstrained optimization problems, respectively. The behaviors of Newton's method and the proposed modifications can be compared through their correction factors. For example, Newton's correction factor f(x_n)/f′(x_n) is modified to f(x_n)/(f′(x_n) − p f(x_n)), where the parameter p is chosen such that f′(x_n) and p f(x_n) have opposite signs. For p = 0, if the derivative of the function is zero or almost zero, Newton's method will either fail or diverge. Therefore, these modifications have two remarkable advantages over Newton's method, namely: (i) if p ≠ 0, the modified denominator is well defined and never zero, provided x_n is not already accepted as an approximation to the required root or optimum, so the methods remain well defined even if f′(x_n) = 0 or f″(x_n) = 0 happens to occur; (ii) the absolute value of the modified denominator is always greater than that of Newton's method, that is, |f′(x_n)|, provided x_n is not already accepted as an approximation to the required root or optimum. This means that the proposed methods are numerically more stable than Newton's method. Finally, the numerical experiments demonstrate that the parabolic methods outperform both Newton's method and the one-parameter family of Newton's method.

Acknowledgment

The authors would like to thank the reviewers and the academic editor for many valuable comments and suggestions.
