Volume 2013, Issue 1, Article ID 586708
Research Article
Open Access

Fourth- and Fifth-Order Methods for Solving Nonlinear Systems of Equations: An Application to the Global Positioning System

Manuel F. Abad, Alicia Cordero (corresponding author), and Juan R. Torregrosa

Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain

First published: 13 May 2013
Academic Editor: Changsen Yang

Abstract

Two iterative methods of order four and five, respectively, are presented for solving nonlinear systems of equations. Numerical comparisons are made with other existing second- and fourth-order schemes to solve the nonlinear system of equations of the Global Positioning System and some academic nonlinear systems.

1. Introduction

The search for solutions of nonlinear systems of equations is an old and difficult problem with wide applications in science and engineering. The best known method, because of its simplicity and effectiveness, is Newton's method. Its generalization to systems of equations was proposed by Ostrowski [1] and to Banach spaces by Kantorovič [2]. In the literature, several modifications of the classical methods have been made in order to accelerate the convergence or to reduce the number of operations and functional evaluations in each step of the iterative process. The extension to functions of several variables of the variants of Newton's method described by Weerakoon and Fernando in [3], by Özban in [4], and by Gerlach in [5] has been developed in [6–9]. In [6, 7], third-order families of variants of Newton's method have been designed by using open and closed quadrature formulas, including the families of methods defined by Frontini and Sormani in [8]. Using the generic formula of interpolatory quadrature, a family of methods with order of convergence 2d + 1 is obtained in [9], under certain conditions, where d is the order up to which the partial derivatives of each coordinate function vanish at the solution. Indeed, Darvishi and Barati improved in [10] the method of Frontini and Sormani, obtaining a fourth-order scheme. In addition to multistep methods based on interpolatory quadrature, other schemes have been developed by using different techniques, such as the extension of one-dimensional schemes to several variables (see [11]), Adomian decomposition (see, e.g., [12, 13]), the method proposed by Darvishi and Barati in [14, 15] with supercubic convergence, and the methods proposed by Cordero et al. in [16] with orders of convergence four and five. Another procedure for developing iterative methods for nonlinear systems is the replacement of the second derivative by some approximation. In [17], Traub presented a family of multipoint methods based on approximating the second derivative that appears in the iterative formula of Chebyshev's scheme, and more recently, Babajee et al. in [18] designed two Chebyshev-like methods free from second derivatives. Recently, Sharma et al. [19] designed a fourth-order scheme by using a weight-function technique. Another well-known acceleration technique is the composition of two iterative methods of orders p1 and p2, respectively, obtaining a method of order p1·p2 (see [17]). New evaluations of the Jacobian matrix and of the nonlinear function are usually needed in order to increase the order of convergence.

Now, we are going to introduce the problem and some necessary concepts in order to develop the modified methods and to analyze their convergence. Let us consider the problem of finding a real zero of a function F : D ⊆ ℝⁿ → ℝⁿ, that is, a solution of the nonlinear system F(x) = 0 of n equations with n unknowns. The best known iterative method is the classical Newton method, given by
x(k+1) = x(k) − [F′(x(k))]⁻¹ F(x(k)),  k = 0, 1, 2, …, (1)
where F′(x(k)) is the Jacobian matrix of the function F evaluated at the kth iterate x(k). Traub, in [17], introduced a variant of Newton's method with third-order convergence. We describe it here because our methods combine Traub's and Newton's schemes. Traub's method consists of the composition of Newton's method with itself, but with a frozen Jacobian matrix; its iterative expression is
x(k+1) = y(k) − [F′(x(k))]⁻¹ F(y(k)), (2)
where y(k) is the kth iteration of Newton’s method.
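For readers who want to experiment with these two schemes, they translate directly into a few lines of linear-algebra code. The following Python/NumPy sketch is given only for illustration (it is not the authors' MATLAB implementation), and the test function F and its Jacobian JF are hypothetical examples:

import numpy as np

def newton_step(F, JF, x):
    # One Newton step: x_new = x - [F'(x)]^(-1) F(x), computed via a linear solve.
    return x - np.linalg.solve(JF(x), F(x))

def traub_step(F, JF, x):
    # One Traub step: Newton predictor followed by a correction that reuses
    # (freezes) the Jacobian evaluated at x.
    J = JF(x)
    y = x - np.linalg.solve(J, F(x))     # y(k): Newton predictor
    return y - np.linalg.solve(J, F(y))  # x(k+1) = y(k) - [F'(x(k))]^(-1) F(y(k))

# Hypothetical test system: x1^2 + x2^2 = 1, x1 = x2, with solution (1/sqrt(2), 1/sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
JF = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([2.0, 2.0])
for _ in range(8):
    x = traub_step(F, JF, x)
print(x)  # approximately [0.70710678, 0.70710678]

In practice the Jacobian would be factorized once per iteration (e.g., by LU) and the two solves would reuse that factorization, which is what makes the frozen-Jacobian correction cheap.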
On the other hand, recently Sharma et al. in [19] have developed a fourth-order method for solving nonlinear systems of equations. The algorithm is composed of two weighted Newton steps, and it is given by
()
In the following, we recall some known notions and results that we need in order to analyze the convergence of the new methods. Let F : D ⊆ ℝⁿ → ℝⁿ be sufficiently Fréchet differentiable in D. By using the notation introduced in [20], the qth derivative of F at u ∈ ℝⁿ, q ≥ 1, is the q-linear function F(q)(u) : ℝⁿ × ℝⁿ × ⋯ × ℝⁿ → ℝⁿ such that F(q)(u)(v1, v2, …, vq) ∈ ℝⁿ. It is easy to observe that
  • (1)

    F(q)(u)(v1, v2, …, vq−1, ·) ∈ ℒ(ℝⁿ),

  • (2)

    F(q)(u)(vσ(1), vσ(2), …, vσ(q)) = F(q)(u)(v1, v2, …, vq) for all permutation σ of {1,2, …, q},

where ℒ(ℝⁿ) is the set of linear operators from ℝⁿ into ℝⁿ.
From the above properties, we can use the following notation:
  • (1)

    F(q)(u)(v1, v2, …, vq) = F(q)(u)v1 · ⋯·vq,

  • (2)

    F(q)(u)v^(q−1) F(p)(u)v^p = F(q)(u)F(p)(u)v^(q+p−1).

On the other hand, for x̄ + h lying in a neighborhood of a solution x̄ of F(x) = 0, we can apply Taylor's expansion and, assuming that the Jacobian matrix F′(x̄) is nonsingular, we have
F(x̄ + h) = F′(x̄)[h + C2h² + C3h³ + ⋯ + Cp−1h^(p−1)] + O(h^p), (4)
where Cq = (1/q!)[F′(x̄)]⁻¹F(q)(x̄), q ≥ 2. We observe that Cqh^q ∈ ℝⁿ since F(q)(x̄) ∈ ℒ(ℝⁿ × ⋯ × ℝⁿ, ℝⁿ) and [F′(x̄)]⁻¹ ∈ ℒ(ℝⁿ).
In addition, we can express F′ as
F′(x̄ + h) = F′(x̄)[I + 2C2h + 3C3h² + ⋯ + (p − 1)Cp−1h^(p−2)] + O(h^(p−1)), (5)
where I is the identity matrix. Therefore, qCqh^(q−1) ∈ ℒ(ℝⁿ). From the previous equation, we obtain
()
Then, if the inverse of D(h) is
()
provided that Xi, i = 2,3, … verify
()
Solving the system involved in (8), we have that
()
We denote by e(k) = x(k) − x̄ the error in the kth iteration. The equation
e(k+1) = L(e(k))^p + O((e(k))^(p+1)), (10)
where L is a p-linear function, L ∈ ℒ(ℝⁿ × ⋯ × ℝⁿ, ℝⁿ), is called the error equation, and p is the order of convergence. Observe that (e(k))^p stands for (e(k), e(k), …, e(k)).
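As a concrete illustration of this notation (a standard result quoted here only as an example), Newton's method (1) satisfies the error equation

e(k+1) = C2 (e(k))² + O((e(k))³),  with C2 = (1/2)[F′(x̄)]⁻¹F″(x̄),

so in this case L = C2 is a bilinear operator and the order of convergence is p = 2.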

In [7], the concept of computational order of convergence was introduced as follows.

Definition 1. Let x̄ be a zero of a function F, and suppose that x(k−1), x(k), and x(k+1) are three consecutive iterations close to x̄. Then, the computational order of convergence p can be approximated using the formula

p ≈ ln(‖x(k+1) − x̄‖/‖x(k) − x̄‖) / ln(‖x(k) − x̄‖/‖x(k−1) − x̄‖). (11)
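In the numerical experiments of Section 4, the zero x̄ is not always available, so the approximated computational order of convergence (ACOC) replaces x̄ by differences of consecutive iterates. The following double-precision helpers are a sketch (not the authors' code) of both quantities:

import numpy as np

def coc(x_prev, x_curr, x_next, x_bar):
    # Computational order of convergence (Definition 1): three consecutive
    # iterates close to the known zero x_bar.
    e0 = np.linalg.norm(np.asarray(x_prev) - np.asarray(x_bar))
    e1 = np.linalg.norm(np.asarray(x_curr) - np.asarray(x_bar))
    e2 = np.linalg.norm(np.asarray(x_next) - np.asarray(x_bar))
    return np.log(e2 / e1) / np.log(e1 / e0)

def acoc(iterates):
    # Approximated COC: the (unknown) zero is replaced by differences of
    # consecutive iterates, so the last four iterates are needed.
    x0, x1, x2, x3 = (np.asarray(x, dtype=float) for x in iterates[-4:])
    d1 = np.linalg.norm(x3 - x2)
    d2 = np.linalg.norm(x2 - x1)
    d3 = np.linalg.norm(x1 - x0)
    return np.log(d1 / d2) / np.log(d2 / d3)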

In addition, in order to compare different methods, we use the efficiency index I = p^(1/d), where p is the order of convergence and d is the total number of new functional evaluations (per iteration) required by the method. This is the most widely used index, but not the only one. In [17], Traub uses a computational index defined as CI = p^(1/op), where op is the number of products/quotients per iteration. We recall that the number of products and quotients needed for solving m linear systems with the same coefficient matrix, by using the LU factorization, is
(1/3)n³ + mn² − (1/3)n, (12)
where n is the size of each system. We will use these indices in order to compare the different iterative methods. Kung and Traub conjectured in [21] that the order of convergence of any multipoint method without memory for solving nonlinear equations cannot exceed the bound 2^(d−1) (called the optimal order). Ostrowski's method [1], Jarratt's scheme [22], and King's procedure [23] are some of the optimal one-dimensional fourth-order methods. We have adapted the definition of optimal order of convergence to the case of iterative methods for solving nonlinear systems. The extension to several variables of the conjecture of Kung and Traub could be done in the following way [24].
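These indices are easy to tabulate for a given scheme. The following short helper is a sketch; the per-method values of d and op must be supplied by the reader, and the Newton counts used in the example (d = n² + n evaluations and one LU-based linear solve per step) are assumptions given only for illustration:

def lu_products(n, m):
    # Products/quotients for solving m linear systems sharing one coefficient
    # matrix: one LU factorization plus m forward/backward substitutions,
    # following the count (12) quoted above.
    return n**3 / 3.0 + m * n**2 - n / 3.0

def efficiency_index(p, d):
    # Classical efficiency index I = p^(1/d).
    return p ** (1.0 / d)

def computational_index(p, op):
    # Traub's computational index CI = p^(1/op).
    return p ** (1.0 / op)

# Illustrative example for Newton's method on a 4 x 4 system:
n = 4
print(efficiency_index(2, n**2 + n))              # I  = 2^(1/20)
print(computational_index(2, lu_products(n, 1)))  # CI = 2^(1/36)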

Conjecture 2. Given a multipoint iterative method for solving nonlinear systems of equations which requires d = k1 + k2 functional evaluations per step, such that k1 of them correspond to evaluations of the Jacobian matrix and k2 to evaluations of the nonlinear function, we conjecture that the optimal order for this method is ( ) if k1 ≤ k2.

In this paper, we propose two new and competitive iterative methods of orders four and five, respectively, that improve other known methods.

The rest of this paper is organized as follows: in Section 2, we give an introduction to the Global Positioning System (GPS), focusing on the way the receiver calculates the user position using the ephemeris data of the artificial satellites. In Section 3, we present our new iterative methods and analyze their convergence order; by using the idea of a technique presented in [25], it is also proved that, in general, if we combine two methods of orders p and q, respectively, with p ≥ q, in the same way as we do in our method of order five, the order of convergence of the resulting method is p + q. In Section 4, we show an application of this analysis to solving the nonlinear system of the GPS and several academic nonlinear systems of equations. A comparison is established among the new methods and Newton's and Sharma's methods in terms of convergence order, approximated computational convergence order (ACOC), and computational and efficiency indices, CI and I, respectively.

2. Basics on Global Positioning System

This section introduces the basic concept of how a GPS receiver determines its position. From the satellite constellation, the equations required for computing the user position form a nonlinear system of equations. In addition, some practical considerations (i.e., the inaccuracy of the user clock) will be included in these equations. These equations are usually solved through a linearization and a fixed-point iteration method. The obtained solution is in a Cartesian coordinate system, and after that the result is converted into a spherical coordinate system. However, the Earth is not a perfect sphere; therefore, once the user position is estimated, the shape of the Earth must be taken into consideration, and the user position is then translated into the Earth-based coordinate system. In this paper, we focus our attention on solving the nonlinear system of equations of the GPS, giving the results in a Cartesian coordinate system. Further information about GPS can be found in [26].

2.1. Basic GPS Concepts

The position of a point in space can be found by using the distances measured from this point to some known position in space. We are going to use an example to illustrate this point.

Figure 1 shows a two-dimensional case. In order to determine the user position U, three satellites S1, S2, and S3 and three distances are required. The trace of the points at a constant distance from a fixed point is a circle in the two-dimensional case. Two satellites and two distances give two possible solutions because two circles intersect at two points; a third circle is needed to uniquely determine the user position. For similar reasons, in a three-dimensional case four satellites and four distances are needed. The equal-distance trace to a fixed point is a sphere in the three-dimensional case. Two spheres intersect in a circle; this circle intersects another sphere at two points, so in order to determine which point is the user position, one more satellite would be needed. In GPS, the position of the satellite is known from the ephemeris data transmitted by the satellite. By measuring the distance from the receiver to the satellite, the position of the receiver can be determined. In the above discussion, the distance measured from the user to the satellite is assumed to be very accurate, with no bias error. However, the distance measured between the receiver and the satellite has a constant unknown bias, because the user clock usually differs from the GPS clock. In order to resolve this bias error, one more satellite is required; therefore, in order to find the user position, five satellites are needed. If one uses four satellites and the distances measured with bias error to determine the user position, two possible solutions can be obtained, so, theoretically, one cannot determine the user position. However, one of the solutions is close to the Earth's surface, and the other one is out in space. In fact, as we will see in Section 4, in this work we have used four satellites, and sometimes we have found the solution in space. Since the user position is usually close to the surface of the Earth, it can be uniquely determined. Therefore, the general statement is that four satellites can be used to determine a user position, even though the distance measured has a bias error. The method for solving the user position discussed in the next subsections is iterative; the initial position is often selected at the center of the Earth. In the following discussion, four satellites are considered as the minimum number required for finding the user position.

Figure 1. Two-dimensional user position.

2.2. Basic Equations for Finding User Position

In this section, the basic equations for determining the user position will be presented. Assume that the measured distances are accurate; under this condition, three satellites should be sufficient. Let us suppose that there are three known points at locations r1 or (x1, y1, z1), r2 or (x2, y2, z2), and r3 or (x3, y3, z3) and an unknown point at ru or (xu, yu, zu). If the distances between the three known points and the unknown point can be measured as ρ1, ρ2, and ρ3, these distances can be written as
ρi = √((xi − xu)² + (yi − yu)² + (zi − zu)²),  i = 1, 2, 3. (13)

Because there are three unknowns and three equations, the values of xu, yu, and zu can be determined from these equations. Theoretically, there should be two sets of solutions, as they are second-order equations. These equations can be solved by linearizing them and applying an iterative approach; the solution of these equations will be discussed later, in Section 2.4. In GPS operation, the positions of the satellites are given. This information can be obtained from the data transmitted by the satellites. The distances from the user (the unknown position) to the satellites must be measured simultaneously at a certain time instant. Each satellite transmits a signal with a time reference associated with it. By measuring the travel time of the signal from the satellite to the user, the distance between the user and the satellite can be found. The distance measurement is discussed in the next section.
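In the notation above, (13) can also be written as a nonlinear system F(ru) = 0 whose ith component is ‖ri − ru‖ − ρi, which can then be handed to any of the iterative schemes of Section 1. A minimal sketch of such a residual function (a hypothetical helper, assuming exact distances):

import numpy as np

def distance_residuals(sat_pos, rho):
    # Return F such that F(r_u)[i] = ||r_i - r_u|| - rho_i, i = 1, 2, 3 (cf. (13)).
    sat_pos = np.asarray(sat_pos, dtype=float)   # shape (3, 3): rows are r_1, r_2, r_3
    rho = np.asarray(rho, dtype=float)
    def F(r_u):
        return np.linalg.norm(sat_pos - r_u, axis=1) - rho
    return F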

2.3. Measurement of Pseudorange

Every satellite sends a signal at a certain time tsi. The receiver receives the signal at a later time tu. The distance between the user and satellite i can then be determined as
ρiT = c (tu − tsi), (14)
where c is the speed of light, ρiT is often referred to as the true value of pseudorange from user to satellite i, tsi is referred to as the true time of transmission from satellite i, and tu is the true time of reception. From a practical point of view, it is difficult, if not impossible, to obtain the correct time from the satellite or the user. The actual satellite clock time and actual user clock time are related to the true time as
t′si = tsi + Δbi,  t′u = tu + b_ut, (15)
where Δbi is the satellite clock error and b_ut is the user clock bias error. Besides the clock errors, there are other factors affecting the pseudorange measurement. The measured pseudorange ρi can be written as
()
where ΔDi is the satellite position error effect on the range, ΔTi is the tropospheric delay error, ΔIi is the ionospheric delay error, vi is the receiver measurement noise error, and Δvi is the relativistic time correction. Some of these errors can be corrected; for example, the tropospheric delay can be modeled, and the ionospheric error can be corrected in a two-frequency receiver. These errors cause inaccuracy in the computed user position. However, the user clock error cannot be corrected through receiver information; thus, it remains as an unknown. Therefore, the system (13) must be modified as
ρi = √((xi − xu)² + (yi − yu)² + (zi − zu)²) + bu,  i = 1, 2, 3, 4, (17)
where bu is the user clock bias error expressed in distance, which is related to the quantity b_ut by bu = c·b_ut. In the system (17), four equations are needed to solve the four unknowns xu, yu, zu, and bu. Thus, in a GPS receiver, a minimum of four satellites is required to solve the user position.
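As a rough numerical illustration of (14) with hypothetical values, a signal travel time of tu − tsi = 0.075 s gives

ρiT = c (tu − tsi) ≈ 2.99792458 × 10⁸ m/s × 0.075 s ≈ 2.248 × 10⁷ m ≈ 22 484 km,

which is the order of magnitude of the distance between a GPS satellite and a receiver on the Earth's surface.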

2.4. Solution of User Position from Pseudoranges

One common way to solve the system (17) is to linearize it. The system can be written in a simplified form as
ρi = √((xi − xu)² + (yi − yu)² + (zi − zu)²) + bu,  i = 1, 2, 3, 4, (18)
where i = 1, 2, 3, 4, and xu, yu, zu, and bu are the unknowns. The pseudorange ρi and the positions of the satellites, xi, yi, zi, are known. By differentiating (18),
δρi = (−(xi − xu)δxu − (yi − yu)δyu − (zi − zu)δzu)/(ρi − bu) + δbu,  i = 1, 2, 3, 4. (19)
In (19), δxu, δyu, δzu, and δbu can be considered as the only unknowns. The quantities xu, yu, zu, and bu are treated as known values because one can assume some initial values for these quantities. From these initial values, a new set of δxu, δyu, δzu, and δbu can be calculated. These values are used to modify the original xu, yu, zu, and bu in order to find another new set of solutions. This new set of xu, yu, zu, and bu can be considered again as known quantities. This process continues until the absolute values of δxu, δyu, δzu, and δbu are very small and within a certain predetermined limit. The final values of xu, yu, zu, and bu are the desired solution. This method is often referred to as a fixed-point iteration method. With δxu, δyu, δzu, and δbu as unknowns, the above equation becomes a set of linear equations. This procedure is often referred to as linearization. The expression (19) can be written in matrix form as
δρ = A δx, (20)
where
δρ = (δρ1, δρ2, δρ3, δρ4)^T, δx = (δxu, δyu, δzu, δbu)^T, and A is the 4 × 4 matrix whose ith row is (αi1, αi2, αi3, 1), with αi1 = −(xi − xu)/(ρi − bu), αi2 = −(yi − yu)/(ρi − bu), and αi3 = −(zi − zu)/(ρi − bu). (21)
The solution of (20) is
δx = A⁻¹ δρ. (22)
This process obviously does not provide the needed solutions directly. However, the desired solutions can be obtained from it: in order to find the desired position solution, this procedure must be applied iteratively. A quantity is often used to determine whether the desired result is reached, and it can be defined as
δv = √(δxu² + δyu² + δzu² + δbu²). (23)

When δv is lower than a certain predetermined threshold, the iteration stops. Sometimes, the clock bias bu is not included in (23). In this paper, we use as stopping criterion the quantity ‖x(k+1) − x(k)‖ + ‖F(x(k+1))‖ because it is stronger than (23). As we can verify in [27], the iterative method described above, which the receiver software uses to calculate the receiver position in the GPS, is Newton's method, a well-known method with second order of convergence. In this work, we improve the GPS software by means of two methods, of orders four and five, respectively, that converge to the solution in fewer iterations and with better I or CI than Newton's scheme.
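The loop described in this subsection is straightforward to prototype. The following Python/NumPy sketch (given only for illustration; the experiments in Section 4 were run in MATLAB with variable precision arithmetic) implements the linearized iteration (19)–(22) for four satellites and uses the quantity (23) as stopping test:

import numpy as np

def gps_user_position(sat_pos, pseudoranges, x0=(0.0, 0.0, 0.0, 0.0),
                      tol=1e-10, max_iter=50):
    # Linearized fixed-point iteration of Section 2.4.
    # sat_pos: (4, 3) array with the satellite coordinates (xi, yi, zi).
    # pseudoranges: length-4 array with the measured rho_i.
    # Returns x = (xu, yu, zu, bu).
    sat_pos = np.asarray(sat_pos, dtype=float)
    rho = np.asarray(pseudoranges, dtype=float)
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        diff = sat_pos - x[:3]                  # (xi - xu, yi - yu, zi - zu)
        dists = np.linalg.norm(diff, axis=1)    # geometric ranges, rho_i - bu in (18)
        delta_rho = rho - (dists + x[3])        # pseudorange residuals
        A = np.hstack([-diff / dists[:, None],  # alpha entries of (20)-(21)
                       np.ones((4, 1))])
        delta = np.linalg.solve(A, delta_rho)   # (22): delta = A^(-1) delta_rho
        x += delta
        if np.linalg.norm(delta) < tol:         # delta_v of (23)
            break
    return x

This loop is exactly Newton's method applied to (18); Sections 3 and 4 study what is gained by replacing it with the higher-order schemes M4 and M5.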

3. Description of the Methods and Convergence Analysis

3.1. A Fourth-Order Method

In this section, we present a new method for solving nonlinear systems, which we call M4, obtained by combining Newton's and Traub's methods. Its iterative expression is
()
where y(k) is the kth iteration of Newton’s method. In the next result, we are going to prove that the convergence order of the method is four.

Theorem 3. Let F : D ⊆ ℝⁿ → ℝⁿ be sufficiently differentiable at each point of an open neighborhood D of x̄ ∈ ℝⁿ that is a solution of the nonlinear system F(x) = 0. Suppose that F′(x) is continuous and nonsingular at x̄. Then, the sequence {x(k)}k≥0 obtained by using the iterative expression (24) converges to x̄ with order four. The error equation is

()
where Cq = (1/q!)[F′(x̄)]⁻¹F(q)(x̄), q = 2, 3, …, and e(k) = x(k) − x̄.

Proof. Taylor expansions of F(x(k)) and F′(x(k)) around x̄ give

()
where Cq = (1/q!)[F′(x̄)]⁻¹F(q)(x̄) and e(k) = x(k) − x̄. From (26), we obtain
()
where X2 = −2C2, X3 = 4C2² − 3C3, and X4 = −8C2³ + 6C2C3 + 6C3C2 − 4C4. Taylor's expansion of y(k) is
()

On the other hand, we have that

()
and by operating, we get
()

Analogously, we obtain the expression of . Given that the kth iteration of Traub′s scheme is

()
then
()

Besides, the expression of F(z(k)) is

()
or, equivalently from (32),
()

So,

()
and provided that [F′(z(k))]⁻¹F′(z(k)) = I, solving the linear system of equations involved, we have W2 = W3 = 0 and the corresponding expressions for the remaining coefficients, so
()

Then,

()

Finally, by replacing (28) and (37) in the iterative expression (24), we obtain the error equation

()
and the proof of the theorem is completed.

3.2. A Fifth-Order Method

In this section, we present a new method for solving nonlinear systems, which we call M5, obtained by combining again Newton's and Traub's methods, but in a different way. Its iterative expression is
()
where y(k) is the kth iteration of Newton′s method. We prove in the next result that the convergence order of this method is five.

Theorem 4. Let F : D ⊆ ℝⁿ → ℝⁿ be sufficiently differentiable at each point of an open neighborhood D of x̄ ∈ ℝⁿ that is a solution of the nonlinear system F(x) = 0. Suppose that F′(x) is continuous and nonsingular at x̄. Then, the sequence {x(k)}k≥0 obtained by using the iterative expression (39) converges to x̄ with order five. The error equation is

()
where Cq = (1/q!)[F′(x̄)]⁻¹F(q)(x̄), q = 2, 3, …, and e(k) = x(k) − x̄.

Proof. Following the procedure used in Theorem 3, we have that

()
()

On the other hand,

()
()

Finally, by replacing (42) and (44) in the iterative expression (39), we obtain the error equation

()
and the proof of the theorem is completed.

3.3. Pseudocomposition

In [25], a technique called pseudocomposition, which uses a known method as a predictor and Gaussian quadrature as a corrector, was introduced. The order of convergence of the resulting scheme depends, among other factors, on the order of the last two steps of the predictor. Following this idea, we generalize the procedure used to design method M5.

Then, we can establish the next result.

Theorem 5. Let F : D ⊆ ℝⁿ → ℝⁿ be sufficiently differentiable at each point of an open neighborhood D of x̄ ∈ ℝⁿ that is a solution of the nonlinear system F(x) = 0. Suppose that F′(x) is continuous and nonsingular at x̄. Let y(k) be the kth iteration of an iterative method of order q and z(k) the kth iteration of an iterative method of order p, with p ≥ q. The sequence {x(k)}k≥0 obtained by the iterative expression

()
converges to x̄ with order of convergence p + q.

Proof. Taylor’s expansions of y(k) and z(k) are

()

Taylor's expansion of F(z(k)) around x̄ gives

()
where Cq = (1/q!)[F′(x̄)]⁻¹F(q)(x̄), q ≥ 2. On the other hand, we have that
()
where m = min{p + q + 1, 2p + 1} = p + q + 1. Finally, by replacing (47), (48), and (49) in the iterative expression (46), we obtain the error equation
()

Then, the convergence order of the method resulting from this combination of a method of order q with another of order p, with p ≥ q, is p + q.

4. Numerical Results

Numerical computations have been carried out using variable precision arithmetic, with 2000 digits of mantissa, in MATLAB 7.1. The stopping criterion has been ‖x(k+1) − x(k)‖ + ‖F(x(k+1))‖ < 10⁻²⁵⁰; therefore, we check that the iterate sequence converges to an approximation of a solution of the nonlinear system. For every method, we count the number of iterations needed to reach the desired tolerance, and we calculate the approximated computational order of convergence (ACOC), the efficiency index I, the computational index CI, and an error estimation made with the last values of ‖x(k+1) − x(k)‖ and ‖F(x(k+1))‖.
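For reference, the iteration-and-stopping logic just described can be sketched as follows in double precision (an illustration only: reproducing the tables below requires 2000-digit variable precision arithmetic, e.g. MATLAB's vpa or Python's mpmath, and the tolerance 10⁻²⁵⁰):

import numpy as np

def run_method(step, F, x0, tol=1e-12, max_iter=100):
    # Iterate x_{k+1} = step(x_k) until ||x_{k+1} - x_k|| + ||F(x_{k+1})|| < tol.
    # Returns the list of iterates, from which Iter and the ACOC can be read off.
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(max_iter):
        x_new = step(xs[-1])
        increment = np.linalg.norm(x_new - xs[-1])
        xs.append(x_new)
        if increment + np.linalg.norm(F(x_new)) < tol:
            break
    return xs

Combined with the newton_step/traub_step and acoc helpers sketched earlier, this reproduces the structure (though not the precision) of the experiments reported in Tables 1 and 2.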

4.1. Numerical Results Obtained with Academic Nonlinear Systems

Now, we compare M4, M5, Newton's (N), and Sharma's (S) schemes on some academic nonlinear systems in order to show the effectiveness and the computational order of convergence of the methods developed in this work. The test systems used are as follows:
  • (a)

    ,

  • (b)

    ,

  • (c)

    F(x) = (f1(x), f2(x), …, fn(x)), where x = (x1, x2, …, xn)^T and fi : ℝⁿ → ℝ, i = 1, 2, …, n, such that

    ()
    When n is odd, the exact zeros of F are and .

In Table 1, we can find a comparison among the different numerical methods for the nonlinear systems (a), (b), and (c). As we can see, the approximated computational orders of convergence are the expected ones, and methods M4 and M5 are clearly very competitive in terms of error estimation.

Table 1. Comparison of the iterative methods on the nonlinear systems (a) to (c).

Function      x(0)              Method   Iter   ACOC   ‖x(k+1) − x(k)‖   ‖F(x(k+1))‖
(a)           (2, 2)^T          N        13     2      0.163e−338        0
                                S        7      4      0.962e−270        0
                                M4       7      4      0.140e−805        0
                                M5       7      5      0.893e−831        0
(b)           (−0.1, −0.1)^T    N        9      2      0.94e−307         0.28e−921
                                S        5      4      0.53e−276         0.83e−1106
                                M4       5      4      0.98e−300         0.32e−1801
                                M5       5      5      0.435e−647        0.1e−3941
(c), n = 99   (2, 2, …, 2)^T    N        11     2      0.535e−488        0.286e−976
                                S        6      4      0.113e−408        0.960e−1636
                                M4       6      4      0.380e−704        0
                                M5       6      5      0.122e−1201       0

The efficiency index, I, and the computational index, CI, of the different methods are as follows:
()

In Figures 2 and 3, we show these indices for n = 2, 3, …, 10. It can be concluded that our methods improve on Sharma's scheme in terms of CI, although the classical efficiency index of Sharma's procedure is better for n ≤ 6. M4 and M5 are competitive, obtaining better error estimations than N and S with the same number of iterations.

Figure 2. Comparison of the efficiency indices of the methods.
Figure 3. Comparison of the computational indices of the methods.

4.2. Numerical Results for the GPS Problem

In order to test the proposed schemes on the problem of computing the user position of a GPS device, we asked the Cartographic Institute of Valencia to provide us with data of known geocentric coordinates.

Concretely, the Cartographic Institute of Valencia provided us with the following:

With these data, we obtain the positions of the visible satellites at the instant corresponding to the provided data. With these coordinates, we calculate the approximate pseudoranges for every satellite, and then we are able to build the nonlinear system of equations of the GPS (18) using four of the satellites, on which we test the iterative methods of Newton, Sharma, M4, and M5.

In Table 2, we can find a comparison among the iterative methods N, S, M4, and M5 for the nonlinear system of the GPS. We recall that the coordinates of the center of the Earth and bu = 0, that is, x(0) = (0, 0, 0, 0)^T, are usually used as the initial estimate. Despite this, we have also tested the methods with some other initial conditions. We denote by x* ≈ (4984687.426, −41199.155, 3966605.952, 0.116e−8)^T the Earth solution and by xs ≈ (−39720114.893, −16748760.539, −23938190.113, −0.159)^T the exterior space solution.

Table 2. Comparison of the iterative methods on the nonlinear system of equations of the GPS.

Method   x(0)                              Iter   ACOC     ‖x(k+1) − x(k)‖   ‖F(x(k+1))‖   Solution
N        (0, 0, 0, 0)^T                    12     2        0.6e−374          0.830e−739    x*
S        (0, 0, 0, 0)^T                    7      4        0.173e−767        0.2e−1991     x*
M4       (0, 0, 0, 0)^T                    7      4        0.823e−512        0.1e−1991     x*
M5       (0, 0, 0, 0)^T                    7      5        0.222e−814        0.3e−1991     xs
N        (10^6, 10^6, 10^6, 10^6)^T        15     2        0.421e−317        0.537e−627    xs
S        (10^6, 10^6, 10^6, 10^6)^T        8      4        0.739e−799        0.1e−1991     x*
M4       (10^6, 10^6, 10^6, 10^6)^T        7      4        0.191e−735        0             x*
M5       (10^6, 10^6, 10^6, 10^6)^T        8      5        0.104e−563        0.1e−1991     xs
N        (−10^4, −10^4, −10^4, −10^4)^T    12     2        0.371e−381        0.317e−753    x*
S        (−10^4, −10^4, −10^4, −10^4)^T    7      4        0.138e−784        0.3e−1991     x*
M4       (−10^4, −10^4, −10^4, −10^4)^T    7      4        0.704e−520        0             x*
M5       (−10^4, −10^4, −10^4, −10^4)^T    7      4.9968   0.510e−692        0.1e−1991     xs

As we can see, for this particular system of equations, Newton's method does not converge to the user position for all the initial estimates, and neither does M5, but M4 is a good method in every sense, very competitive with respect to the known methods.

5. Conclusions

In this paper, we have gone in depth into an emerging line of research, the improvement of GPS receiver software. Concretely, GPS receivers currently use Newton's method to solve the nonlinear system (18) and to calculate their exact position with the information obtained from signals received from the GPS constellation of satellites. We propose two different combinations of Newton's and Traub's methods, obtaining methods of fourth (M4) and fifth (M5) order. Using the idea presented in [25], called pseudocomposition, it is proved that, combining in a particular way two methods of orders p and q, respectively, with p ≥ q, the order of convergence of the resulting scheme is p + q. We have numerically compared the different methods, and we have concluded that M4 and M5 are very competitive in terms of the error estimation.

Acknowledgments

This research was supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02 and FONDOCYT 2011-1-B1-33, República Dominicana. The authors would also like to thank the anonymous referee for his or her work.
