Volume 2014, Issue 1, Article ID 912796
Research Article
Open Access

Efficient Iterative Methods with and without Memory Possessing High Efficiency Indices

T. Lotfi, Department of Mathematics, Islamic Azad University, Hamedan Branch, Hamedan, Iran

F. Soleymani, Department of Mathematics, Islamic Azad University, Shahrekord Branch, Shahrekord, Iran

Z. Noori, Department of Mathematics, Islamic Azad University, Hamedan Branch, Hamedan, Iran

A. Kılıçman (Corresponding Author), Department of Mathematics and Institute for Mathematical Research, Universiti Putra Malaysia, 43400 Serdang, Malaysia

F. Khaksar Haghani, Department of Mathematics, Islamic Azad University, Shahrekord Branch, Shahrekord, Iran
First published: 03 September 2014
Academic Editor: Krzysztof Ciepliński

Abstract

Two families of derivative-free methods without memory for approximating a simple zero of a nonlinear equation are presented. The proposed schemes have an accelerator parameter with the property that it can increase the convergence rate without any new functional evaluations. In this way, we construct a method with memory that considerably increases the efficiency index from 8^(1/4) ≈ 1.682 to 12^(1/4) ≈ 1.861. Numerical examples and comparisons with existing methods are included to confirm the theoretical results and the high computational efficiency.

1. Preliminaries

The main goal and motivation in constructing iterative methods for solving nonlinear equations is to obtain an order of convergence as high as possible with minimal computational cost (see, e.g., [1–3]). Hence, many researchers (see, e.g., [4, 5]) have paid much attention to constructing optimal multipoint methods without memory based on the unproved conjecture of Kung and Traub [6], which states that any multipoint iterative method without memory using n + 1 functional evaluations per iteration can reach at most the optimal order 2^n.

Let α be a simple real zero of a real function f and let x0 be an initial approximation to α. In many practical situations, it is preferable to avoid the calculation of the derivatives of f. This makes the construction of derivative-free iterative methods important [7].

This paper pursues two main goals: first, developing some optimal two- and three-step derivative-free families of methods without memory; and second, extending the proposed families to methods with memory that reach R-orders of convergence 6 and 12 without any new functional evaluations.

The main idea in methods with memory is based on the use of suitable two-valued functions and the variation of a free parameter in each iterative step. This parameter is calculated using information from the current and previous iterations, so that the developed methods may be regarded as methods with memory following Traub's classification [8]. A further motivation for studying methods with memory arises from the surprising fact that such classes of iterative methods have been considered very rarely in the literature despite their high efficiency indices.

The paper is organized as follows. First, two families of optimal methods with orders four and eight, consuming three and four function evaluations per iteration, respectively, are proposed in Section 2. Then, in Section 3, we derive methods with memory of very high computational efficiency. The increase of convergence speed is achieved without additional function evaluations, which is the main advantage of methods with memory. Section 4 is devoted to numerical results for the methods with and without memory. Concluding remarks are given in Section 5.

2. Construction of New Families

This section deals with constructing new multipoint methods for solving nonlinear equations. The discussion is divided into two subsections. Let f : D ⊆ ℝ → ℝ be a scalar function with f(α) = 0 ≠ f′(α); that is, α is a simple zero of f(x) = 0.

2.1. New Two-Step Methods without Memory

In this subsection, we start with the two-step scheme

y = x − f(x)/f′(x),
x̂ = y − f(y)/f′(y). (1)

Note that we omit the iteration index k for the sake of simplicity only and write x̂ for the new iterate. The order of convergence of scheme (1) is four, but, since it consumes four (functional and derivative) evaluations per iteration, it offers no contribution as it stands. We substitute the derivatives in the first and second steps by suitable approximations that use only the available data; thus, we introduce the approximations as follows.

Using the points x and w = x + γf(x) (γ is a nonzero real constant), we can apply Lagrange's interpolatory polynomial for approximating f′(x):

P1(t) = f(x)(t − w)/(x − w) + f(w)(t − x)/(w − x). (2)
Differentiating P1(t) and setting t = x, we have

f′(x) ≈ P1′(x) = (f(x) − f(w))/(x − w) = f[x, w]. (3)
Also, Lagrange's interpolatory polynomial at the points x, w, and y for approximating f′(y) can be given as follows:

P2(t) = f(x)(t − w)(t − y)/((x − w)(x − y)) + f(w)(t − x)(t − y)/((w − x)(w − y)) + f(y)(t − x)(t − w)/((y − x)(y − w)). (4)
Differentiating P2(t) and setting t = y, we obtain

f′(y) ≈ P2′(y) = f[y, x] + f[y, w] − f[x, w]. (5)
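As a quick numerical check of (5), the derivative of the quadratic interpolant at the node y can be compared with the divided-difference expression; in the following sketch, the function f and the nodes are illustrative choices only, not taken from the experiments of this paper.

f[u_] := Exp[u] - 3 u;  (*any smooth test function*)
{x, w, y} = {0.40, 0.45, 0.42};  (*arbitrary sample nodes*)
dd[a_, b_] := (f[a] - f[b])/(a - b);  (*first-order divided difference*)
p2 = InterpolatingPolynomial[{{x, f[x]}, {w, f[w]}, {y, f[y]}}, t];
{D[p2, t] /. t -> y, dd[y, x] + dd[y, w] - dd[x, w]}  (*the two values coincide*)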
Finally, we set the above approximations in the denominators of (1), and so our derivative-free two-step iterative method is derived in what follows:

w = x + γf(x),
y = x − f(x)/f[x, w], (6)
x̂ = y − f(y)/(f[y, x] + f[y, w] − f[x, w]).

Now, to check the convergence order of (6), we avoid retyping the approach widely practiced in the literature and put forward the following self-explanatory Mathematica code:

f[e_] := f1a (e + c2 e^2 + c3 e^3 + c4 e^4)  (*Taylor expansion about α; f1a = f′(α), ck = f^(k)(α)/(k! f′(α))*);

ew = e + γ f[e]  (*ew = w − α*);

p1[t_] := (f[e] - f[t])/(e - t)  (*p1[ew] is the divided difference f[x, w], cf. (3)*);

ey = e − Series[f[e]/p1[ew],

{e, 0,3}]//FullSimplify  (*ey = y − α*)

Out[a]: (1 + f1a γ)c2 e^2 + O[e]^3.

p2 = (f[ey] - f[e])/(ey - e) + (f[ey] - f[ew])/(ey - ew) - (f[e] - f[ew])/(e - ew)  (*cf. (5)*);

e2 = ey − Series[f[ey]/p2, {e, 0,4}]//FullSimplify  (*e2 = x̂ − α*)

Out[b]: (1 + f1a γ)^2 c2 (c2^2 − c3) e^4 + O[e]^5.

Considering Out[b] of the above Mathematica program, we observe that the order of convergence of the family (6) is four, and so we can state the following convergence theorem.

Theorem 1. If an initial approximation x0 is sufficiently close to a simple zero α of f, then the convergence order of the two-step family (6) is equal to four, and its error equation is given by

ek+1 = (1 + γ f′(α))^2 c2 (c2^2 − c3) ek^4 + O(ek^5), (7)

where ek = xk − α and ck = f^(k)(α)/(k! f′(α)).
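To see the family in action, the following minimal Mathematica sketch runs scheme (6); the test function, the value of γ, and the starting point are illustrative assumptions only.

f[u_] := u^3 + 4 u^2 - 10;  (*sample equation with a simple zero near 1.3652*)
dd[a_, b_] := (f[a] - f[b])/(a - b);
step[x_, γ_] := Module[{w, y},
  w = x + γ f[x];  (*Steffensen-type auxiliary point*)
  y = x - f[x]/dd[x, w];  (*first step, using (3)*)
  y - f[y]/(dd[y, x] + dd[y, w] - dd[x, w])];  (*second step, using (5)*)
NestList[step[#, -0.01] &, 1.5`100, 4]  (*errors decrease with order four*)

Note that each iteration consumes only the three functional evaluations f(x), f(w), and f(y), in agreement with the optimality of (6).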

2.2. New Three-Step Family

Now, we construct a three-step uniparametric family of methods based on the two-step method (6). We start from a three-step scheme whose first two steps are given by (6) and whose third step is Newton's method; that is,

w = x + γf(x),
y = x − f(x)/f[x, w],
z = y − f(y)/(f[y, x] + f[y, w] − f[x, w]), (8)
x̂ = z − f(z)/f′(z).
The derivative f′(z) in the third step of (8) should be substituted by a suitable approximation in order to obtain an order of convergence as high as possible and to make the scheme optimal. To provide this approximation, we apply Lagrange's interpolatory polynomial at the points w = x + γf(x), x, y, and z; that is,
P3(t) = ∑j=0..3 f(tj) ∏i≠j (t − ti)/(tj − ti), with (t0, t1, t2, t3) = (w, x, y, z). (9)
It is obvious that P3(z) = f(z). By differentiating (9) and setting t = z, we obtain

f′(z) ≈ P3′(z) = f[z, y] + f[z, y, x](z − y) + f[z, y, x, w](z − y)(z − x). (10)
By substituting this approximation in (8), we have

w = x + γf(x),
y = x − f(x)/f[x, w],
z = y − f(y)/(f[y, x] + f[y, w] − f[x, w]), (11)
x̂ = z − f(z)/P3′(z),

where the denominator of the second step is defined by (5) and P3′(z) is given by (10).
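Analogously, one iteration of (11) can be sketched in Mathematica with P3′(z) obtained by differentiating InterpolatingPolynomial; the test function, γ, and the starting point below are again illustrative assumptions.

f[u_] := u^3 + 4 u^2 - 10;
dd[a_, b_] := (f[a] - f[b])/(a - b);
step3[x_, γ_] := Module[{w, y, z, p3, t},
  w = x + γ f[x];
  y = x - f[x]/dd[x, w];
  z = y - f[y]/(dd[y, x] + dd[y, w] - dd[x, w]);
  p3 = InterpolatingPolynomial[{{w, f[w]}, {x, f[x]}, {y, f[y]}, {z, f[z]}}, t];
  z - f[z]/(D[p3, t] /. t -> z)];  (*Newton-like step with P3′(z), cf. (10)*)
NestList[step3[#, -0.01] &, 1.5`300, 3]  (*four evaluations per step, order eight*)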

In the following theorem, we state suitable conditions for deriving an optimal three-step scheme without memory.

Theorem 2. Let f : D ⊆ ℝ → ℝ be a scalar function which has a simple root α in the open interval D, and let the initial approximation x0 be sufficiently close to α. Then the three-step iterative method (11) has eighth order of convergence and satisfies the following error equation:

ek+1 = (1 + γ f′(α))^4 c2^2 (c2^2 − c3)(c2^3 − c2 c3 + c4) ek^8 + O(ek^9). (12)

Proof. We are going to employ symbolic computation in the computational software package Mathematica. We write the Taylor expansion of f about the simple zero α,

f(x) = f′(α)(e + c2 e^2 + c3 e^3 + ⋯ + c8 e^8) + O(e^9), e = x − α. (13)
Now, by introducing the abbreviations e = x − α, ew = w − α, ey = y − α, ez = z − α, and ck = f^(k)(α)/(k! f′(α)), we provide the following Mathematica program in order to obtain the convergence order of (11).

Program Written in Mathematica.  Consider the following.

f[e_] := f1a (e + c2 e^2 + c3 e^3 + c4 e^4 + c5 e^5 + c6 e^6 + c7 e^7 + c8 e^8);

ew = e + γ f[e];

p1[t_] := (f[e] - f[t])/(e - t);

ey = e − Series[f[e]/p1[ew],

{e, 0,8}]//FullSimplify

Out[a] : (1 + γ f1a)c2 e^2 + O[e]^3.

p2 = (f[ey] - f[e])/(ey - e) + (f[ey] - f[ew])/(ey - ew) - (f[e] - f[ew])/(e - ew);

ez = ey − Series[f[ey]/p2, {e, 0,8}]//FullSimplify

Out[b] : (1 + γ f1a)^2 c2 (c2^2 − c3) e^4 + O[e]^5.

p3[t_] := InterpolatingPolynomial[{{ew, f[ew]}, {e, f[e]}, {ey, f[ey]}, {ez, f[ez]}}, t];

dp3[t_] := D[p3[s], s] /. s -> t  (*P3′, cf. (10)*);

e2 = ez − Series[f[ez]/dp3[ez], {e, 0,8}]//FullSimplify

Out[c] : (1 + γ f1a)^4 c2^2 (c2^2 − c3)(c2^3 − c2 c3 + c4) e^8 + O[e]^9.

According to Out[c], the scheme possesses eighth order; as a result, the proof of the theorem is finished.

Error equations (7) and (12) indicate that the orders of methods (6) and (11) are four and eight, respectively.

In the next section, we will modify the proposed methods and introduce new methods with memory. With the use of accelerator parameters, the order of convergence will significantly increase.

3. Extension to Methods with Memory

This section is concerned with the extraction of efficient methods with memory from (6) and (11), through a careful inspection of their error equations containing the parameter γ, which can be approximated in such a way that the local order of convergence increases.

Toward this goal, we set γ = γk, recalculated as the iteration proceeds by the formula γk = −1/f̃′(α) for k = 1, 2, …, where f̃′(α) is an approximation of f′(α). We therefore use the approximation

γk = −1/N3′(xk) (14)

for (6) and the following one

γk = −1/N4′(xk) (15)
for (11). Here, Newton's interpolating polynomial N3(t) of third degree, set through the four available approximations xk, wk−1, yk−1, xk−1, is used for approximating f′(α) in the two-step method (6), and Newton's interpolating polynomial N4(t) of fourth degree, set through the five available approximations xk, zk−1, yk−1, wk−1, xk−1, is used in the three-step method (11). Consider

N3(t) = f(xk) + f[xk, wk−1](t − xk) + f[xk, wk−1, yk−1](t − xk)(t − wk−1) + f[xk, wk−1, yk−1, xk−1](t − xk)(t − wk−1)(t − yk−1), (16)

N4(t) = f(xk) + f[xk, zk−1](t − xk) + f[xk, zk−1, yk−1](t − xk)(t − zk−1) + f[xk, zk−1, yk−1, wk−1](t − xk)(t − zk−1)(t − yk−1) + f[xk, zk−1, yk−1, wk−1, xk−1](t − xk)(t − zk−1)(t − yk−1)(t − wk−1). (17)
Note that a divided difference of order m, defined recursively as

f[t0, t1, …, tm] = (f[t1, …, tm] − f[t0, …, tm−1])/(tm − t0), with f[ti] = f(ti), (18)

has been used throughout this paper. Hence, the with-memory developments of (6) and (11) can be presented as follows (a short Mathematica transcription is given right after the two schemes):

γk = −1/N3′(xk), wk = xk + γk f(xk),
yk = xk − f(xk)/f[xk, wk], (19)
xk+1 = yk − f(yk)/(f[yk, xk] + f[yk, wk] − f[xk, wk]),

γk = −1/N4′(xk), wk = xk + γk f(xk),
yk = xk − f(xk)/f[xk, wk],
zk = yk − f(yk)/(f[yk, xk] + f[yk, wk] − f[xk, wk]), (20)
xk+1 = zk − f(zk)/P3′(zk),

where P3′(zk) is computed as in (10) with the nodes wk, xk, yk, zk.
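The pieces above can be assembled into a short Mathematica sketch of the with-memory scheme (19): the divided-difference recursion (18), the accelerator (14) built from N3 via InterpolatingPolynomial, and the two steps of (6). The test function, γ0, and x0 are illustrative assumptions only.

dd[g_, {t0_}] := g[t0];  (*recursion (18) for divided differences of any order*)
dd[g_, ts_List] := (dd[g, Rest[ts]] - dd[g, Most[ts]])/(Last[ts] - First[ts]);
f[u_] := u^3 + 4 u^2 - 10;
withMemory[x0_, γ0_, n_] := Module[{x = x0, γ = γ0, w, y, xn, N3, t},
  Do[
    w = x + γ f[x];
    y = x - f[x]/dd[f, {x, w}];
    xn = y - f[y]/(dd[f, {y, x}] + dd[f, {y, w}] - dd[f, {x, w}]);
    N3 = InterpolatingPolynomial[{{xn, f[xn]}, {w, f[w]}, {y, f[y]}, {x, f[x]}}, t];
    γ = -1/(D[N3, t] /. t -> xn);  (*accelerator (14): γk = −1/N3′(xk)*)
    x = xn,
    {n}];
  x];
withMemory[1.5`300, -0.01, 4]  (*R-order six without new functional evaluations*)

Observe that the nodes of N3 use only values of f that the step already required, so in a careful implementation updating γ costs no additional evaluations of f.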

Remark 3. Accelerating methods obtained via a recursively calculated free parameter may also be called self-accelerating methods. The initial value γ0 should be chosen before starting the iterative process, for example, using one of the ways proposed in [8].

Here, we attempt to prove that the methods with memory (19) and (20) have convergence orders six and twelve, respectively, provided that we use the accelerators γk as in (14) and (15). For ease of the continuing analysis, we introduce the following convenient notation: if the sequence {xk} converges to the zero α of f with the R-order p, then we write ek+1 ∼ ek^p, where ek = xk − α. The following lemma plays a crucial role in improving the convergence order of the methods with memory proposed in this paper.

Lemma 4. If γk = −1/N3′(xk), ek = xk − α, ek,y = yk − α, and ek,w = wk − α, then the following relation holds:

1 + γk f′(α) ∼ c4 ek−1 ek−1,w ek−1,y. (21)

Proof. Following the same terminology as in Theorem 2 and using the symbolic software Mathematica, it is easy to obtain (21) by writing the following code:

ClearAll["Global`*"]

A[t_] := InterpolatingPolynomial[{{e, fx},

{ew, fw}, {ey, fy}, {e1, fx1}}, t]//Simplify

Approximation = -1/A'[e1]//Simplify;

fx = f1a (e + c2 e^2 + c3 e^3 + c4 e^4)  (*f(xk−1); e = xk−1 − α*);

fw = f1a (ew + c2 ew^2 + c3 ew^3 + c4 ew^4)  (*f(wk−1)*);

fy = f1a (ey + c2 ey^2 + c3 ey^3 + c4 ey^4)  (*f(yk−1)*);

fx1 = f1a (e1 + c2 e1^2 + c3 e1^3 + c4 e1^4)  (*f(xk); e1 = xk − α*);

b = Series[Approximation, {e, 0,2}, {ew, 0,2},

{ey, 0,2}, {e1,0, 2}]//Simplify;

Collect[Series[1 + b*f1a, {e, 0,1}, {ew, 0,1},

{ey, 0,1}, {e1,0, 0}],

{e, ew, ey, e1}, Simplify]

The proof is complete.

In order to obtain the R-order of convergence [9] of the method with memory (19), we establish the following theorem.

Theorem 5. If an initial approximation x0 is sufficiently close to the zero α of f(x) and the parameter γk in the iterative scheme (19) is recursively calculated by the forms given in (14), then the R-order of convergence for (19) is at least six.

Proof. Let {xk} be a sequence of approximations generated by the iterative method with memory (19). If this sequence converges to the zero α of f with the R-order p, then we write

ek+1 ∼ Dk,p ek^p, (22)

where Dk,p tends to the asymptotic error constant of the method as k → ∞. Thus

ek+1 ∼ Dk,p (Dk−1,p ek−1^p)^p ∼ ek−1^(p^2). (23)
Moreover, assume that the iterative sequences {wk} and {yk} have the R-orders p1 and p2, respectively. Then, (22) gives

ek,w ∼ ek^p1 ∼ ek−1^(p·p1), (24)

ek,y ∼ ek^p2 ∼ ek−1^(p·p2). (25)
Since the error relations of the without-memory family (6) remain valid with γ replaced by γk,

ek,w ∼ (1 + γk f′(α)) ek, (26)

ek,y ∼ (1 + γk f′(α)) c2 ek^2, (27)

ek+1 ∼ (1 + γk f′(α))^2 c2 (c2^2 − c3) ek^4, (28)

using Lemma 4 together with ek−1,w ∼ ek−1^p1 and ek−1,y ∼ ek−1^p2, which yield 1 + γk f′(α) ∼ ek−1^(1+p1+p2), relations (26)–(28) induce

ek,w ∼ ek−1^(p+p1+p2+1), (29)

ek,y ∼ ek−1^(2p+p1+p2+1), (30)

ek+1 ∼ ek−1^(4p+2p1+2p2+2). (31)
Matching the powers of ek−1 on the right-hand sides of (24)-(29), (25)-(30), and (23)-(31), one can obtain the system

p·p1 = p + p1 + p2 + 1,
p·p2 = 2p + p1 + p2 + 1, (32)
p^2 = 4p + 2p1 + 2p2 + 2.
The nontrivial solution of this system is p1 = 2, p2 = 3, and p = 6. This completes the proof.
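The quoted solution can also be checked mechanically; the following Mathematica call solves the system (32):

Solve[{p p1 == p + p1 + p2 + 1,
  p p2 == 2 p + p1 + p2 + 1,
  p^2 == 4 p + 2 p1 + 2 p2 + 2, p > 1}, {p, p1, p2}]
(*returns p -> 6, p1 -> 2, p2 -> 3*)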

Using symbolic computations and Taylor expansions, it is easy to derive the following lemma.

Lemma 6. Assuming (15) and (17), we have

1 + γk f′(α) ∼ −c5 ek−1 ek−1,w ek−1,y ek−1,z, (33)

where c5 = f^(5)(α)/(5! f′(α)), ek = xk − α, ek,z = zk − α, ek,y = yk − α, and ek,w = wk − α.

Proof. The proof of this lemma is similar to that of Lemma 4 and is hence omitted.

Similarly for the three-step method with memory (20), we have the following theorem.

Theorem 7. If an initial approximation x0 is sufficiently close to the zero α of f(x) and the parameter γk in the iterative scheme (20) is recursively calculated by the forms given in (15), then the order of convergence for (20) is at least twelve.

Proof. Let {xk} be a sequence of approximations generated by the iterative method with memory (20). If this sequence converges to the zero α of f with the R-order p, then we write

ek+1 ∼ Dk,p ek^p. (34)

So

ek+1 ∼ ek−1^(p^2). (35)
Moreover, assume that the iterative sequences {wk}, {yk}, and {zk} have the R-orders p1, p2, and p3, respectively. Then, (34) gives

ek,w ∼ ek^p1 ∼ ek−1^(p·p1), (36)

ek,y ∼ ek^p2 ∼ ek−1^(p·p2), (37)

ek,z ∼ ek^p3 ∼ ek−1^(p·p3). (38)
Since the error relations of the without-memory family (11) remain valid with γ replaced by γk,

ek,w ∼ (1 + γk f′(α)) ek, (39)

ek,y ∼ (1 + γk f′(α)) c2 ek^2, (40)

ek,z ∼ (1 + γk f′(α))^2 c2 (c2^2 − c3) ek^4, (41)

ek+1 ∼ (1 + γk f′(α))^4 c2^2 (c2^2 − c3)(c2^3 − c2 c3 + c4) ek^8, (42)

by Lemma 6, which yields 1 + γk f′(α) ∼ ek−1^(1+p1+p2+p3), we obtain

ek,w ∼ ek−1^(p+p1+p2+p3+1), (43)

ek,y ∼ ek−1^(2p+p1+p2+p3+1), (44)

ek,z ∼ ek−1^(4p+2(p1+p2+p3+1)), (45)

ek+1 ∼ ek−1^(8p+4(p1+p2+p3+1)). (46)
Matching the powers of ek−1 on the right-hand sides of (36)-(43), (37)-(44), (38)-(45), and (35)-(46), one can obtain the system

p·p1 = p + p1 + p2 + p3 + 1,
p·p2 = 2p + p1 + p2 + p3 + 1,
p·p3 = 4p + 2(p1 + p2 + p3 + 1), (47)
p^2 = 8p + 4(p1 + p2 + p3 + 1).
The nontrivial solution of this system is p1 = 2, p2 = 3, p3 = 6, and p = 12. The proof is complete.
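As with Theorem 5, the system (47) can be solved mechanically:

Solve[{p p1 == p + p1 + p2 + p3 + 1,
  p p2 == 2 p + p1 + p2 + p3 + 1,
  p p3 == 4 p + 2 (p1 + p2 + p3 + 1),
  p^2 == 8 p + 4 (p1 + p2 + p3 + 1), p > 1}, {p, p1, p2, p3}]
(*returns p -> 12, p1 -> 2, p2 -> 3, p3 -> 6*)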

Remark 8. The advantage of the proposed methods lies in their higher computational efficiency indices. We emphasize that the increase of the R-order of convergence has been obtained without any additional function evaluations, which points to very high computational efficiency. Indeed, the efficiency index 12^(1/4) ≈ 1.861 of the proposed three-step twelfth-order method with memory (20) is higher than the efficiency indices 6^(1/3) ≈ 1.817 of (19), 8^(1/4) ≈ 1.682 of the optimal three-step method (11), and 4^(1/3) ≈ 1.587 of (6).

Remark 9. We observe that the methods (19) and (20) with memory are considerably accelerated (by up to 50%) in comparison with the corresponding methods (6) and (11) without memory.

4. Numerical Experiments

In this section, we test our proposed methods and compare their results with some other methods of the same order of convergence. The results were produced with the programming package Mathematica 8 in a multiple-precision arithmetic environment. We have considered 1000-digit floating-point arithmetic so as to minimize round-off errors as much as possible. The errors |xk − α| of the approximations to the sought zeros are displayed, where a(−b) stands for a × 10^(−b). Moreover, coc indicates the computational order of convergence, computed by

coc ≈ ln(|xk+1 − α| / |xk − α|) / ln(|xk − α| / |xk−1 − α|). (48)
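For instance, feeding the first row of Table 1 (the new method (6)) into (48) reproduces the reported computational order:

coc[{e1_, e2_, e3_}] := Log[e3/e2]/Log[e2/e1];  (*three consecutive errors |xk − α|*)
coc[{0.6742*10^-2, 0.1587*10^-10, 0.4307*10^-45}]  (*≈ 4.0, cf. Table 1*)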

It is assumed that the initial estimate γ0 is chosen before starting the iterative process and that x0 is given suitably.

Several iterative methods of optimal orders four and eight have been chosen for comparison with our proposed methods, as listed next.

Kung and Traub's derivative-free two-step method (KT4) [6] is as follows:
()
Two-step method by Zheng et al. (ZLH4) [10] is as follows:
()
Two-step method by Lotfi and Tavakoli (LT4) [11] is as follows:
()
where H(tk, uk) = 1 + tk.
Kung and Traub's derivative-free three-step method (KT8) [6] is as follows:
()
The three-step method developed by Zheng et al. (ZLH8) [10] is as follows:
()
Three-step method by Lotfi and Tavakoli (LT8) [11] is as follows:
()
where ϕk = 1/(1 + γk f[xk, wk]) and H(tk, uk) = 1 + tk are the weight functions.

In Tables 1, 2, and 3, our proposed two-step classes (6) and (19) are compared with the optimal two-step methods KT4, ZLH4, and LT4. We observe that all these methods behave very well in practice and confirm the theoretical results.

Table 1. f1(x), and x0 = 0.6.
Methods |x1 − α| |x2 − α| |x3 − α| coc
Without memory
 New method (6), γ = −1 0.6742(−2) 0.1587(−10) 0.4307(−45) 4.0049
 KT4, γ = −0.01 0.6012(−1) 0.1486(−4) 0.6094(−19) 4.0171
 ZLH4, γ = −0.01 0.5882(−1) 0.1320(−5) 0.8291(−24) 3.9365
 LT4, γ = −0.01 0.6178(−1) 0.3334(−4) 0.2739(−17) 4.0368
With memory
 New method (19), γ = −1 0.6742(−2) 0.3177(−11) 0.8372(−62) 5.4213
 KT4, γ = −1 0.2102(−1) 0.9935(−9) 0.2326(−48) 5.4031
 ZLH4, γ = −0.01 0.5882(−1) 0.2204(−8) 0.1503(−45) 5.0216
 LT4, γ = −1 0.1752(−1) 0.4934(−9) 0.5220(−50) 5.4214
Table 2. f2(x) = e^(−5x)(x − 2)(x^10 + x + 2), α = 2, and x0 = 2.2.
Methods |x1 − α| |x2 − α| |x3 − α| coc
Without memory
 New method (6), γ = −0.01 0.9445(−3) 0.1550(−13) 0.1289(−56) 3.9945
 KT4, γ = −0.01 0.1125(−2) 0.3033(−13) 0.1891(−55) 3.9932
 ZLH4, γ = −0.1 0.9349(−3) 0.1478(−13) 0.1074(−56) 3.9939
 LT4, γ = −0.01 0.1343(−2) 0.5965(−13) 0.2821(−54) 3.9918
With memory
 New method (19), γ = 1 0.1057(−2) 0.1102(−19) 0.8335(−109) 5.2480
 KT4, γ = 1 0.1267(−2) 0.2180(−19) 0.3580(−107) 5.2364
 ZLH4, γ = −0.01 0.9445(−3) 0.2868(−20) 0.8014(−112) 5.2264
 LT4, γ = −1 0.1343(−2) 0.1072(−19) 0.1150(−108) 5.2035
Table 3. f3(x), and x0 = −1.65.
Methods |x1 − α| |x2 − α| |x3 − α| coc
Without memory
 New method (6), γ = −0.01 0.6849(−3) 0.8725(−13) 0.2284(−52) 4.0003
 KT4, γ = −0.01 0.6610(−3) 0.8682(−13) 0.2564(−52) 4.0004
 ZLH4, γ = 0.1 0.1214(−2) 0.2388(−12) 0.3560(−51) 4.0004
 LT4, γ = −0.01 0.6398(−3) 0.8549(−13) 0.2714(−52) 4.0002
With memory
 New method (19), γ = −1 0.4838(−2) 0.1348(−11) 0.1230(−64) 5.5506
 KT4, γ = 1 0.1170(−1) 0.3517(−7) 0.8936(−41) 6.0860
 ZLH4, γ = −0.01 0.6849(−3) 0.7630(−17) 0.1662(−92) 5.4226
 LT4, γ = −0.01 0.6398(−3) 0.6666(−17) 0.7373(−93) 5.4324

Also, Tables 4, 5, and 6 present numerical results for our three-step classes (11) and (20) and the methods KT8, ZLH8, and LT8. It is clear that all these methods behave very well in practice and confirm the relevant theory.

Table 4. f1(x), and x0 = 0.6.
Methods |x1 − α| |x2 − α| |x3 − α| coc
Without memory
 New method (11), γ = −1 0.2345(−3) 0.1042(−32) 0.1593(−267) 7.9999
 KT8, γ = −1 0.3101(−3) 0.2671(−31) 0.8196(−256) 7.9999
 ZLH8, γ = −0.01 0.1742(−2) 0.1558(−22) 0.6471(−183) 7.9999
 LT8, γ = −1 0.5758(−3) 0.7106(−29) 0.3880(−236) 7.9998
With memory
 New method (20), γ = −0.1 0.3440(−3) 0.1220(−41) 0.8636(−503) 11.9935
 KT8, γ = −0.01 0.9773(−3) 0.2960(−33) 0.1505(−399) 12.0021
 ZLH8, γ = −0.01 0.1742(−2) 0.3012(−33) 0.4438(−402) 11.9900
 LT8, γ = −0.1 0.7107(−4) 0.2040(−49) 0.4971(−596) 12.0024
Table 5. f2(x) = e^(−5x)(x − 2)(x^10 + x + 2), α = 2, and x0 = 2.2.
Methods |x1 − α| |x2 − α| |x3 − α| coc
Without memory
 New method (11), γ = −1 0.3124(−6) 0.9972(−56) 0.1074(−451) 8.0000
 KT8, γ = −1 0.8055(−6) 0.1413(−52) 0.1268(−426) 8.0000
 ZLH8, γ = 1 0.4955(−6) 0.5798(−54) 0.2038(−437) 8.0000
 LT8, γ = −1 0.4052(−6) 0.5819(−55) 0.1053(−445) 8.0000
With memory
 New method (20), γ = −1 0.3124(−6) 0.1213(−81) 0.3093(−985) 11.9822
 KT8, γ = −1 0.8055(−6) 0.2641(−78) 0.9143(−945) 11.9538
 ZLH8, γ = −0.01 0.3943(−6) 0.1889(−80) 0.6308(−971) 11.9817
 LT8, γ = −0.01 0.4532(−6) 0.4774(−76) 0.5848(−853) 11.1023
Table 6. f3(x), and x0 = −1.65.
Methods |x1 − α| |x2 − α| |x3 − α| coc
Without memory
 New method (11), γ = −1 0.2768(−3) 0.3670(−27) 0.3499(−218) 8.0000
 KT8, γ = −1 0.2915(−4) 0.5324(−34) 0.6583(−272) 8.0000
 ZLH8, γ = 1 0.2322(−1) 0.5044(−11) 0.2253(−88) 8.0081
 LT8, γ = −1 0.1094(−2) 0.1454(−22) 0.1323(−181) 8.0014
With memory
 New method (20), γ = −1 0.2768(−3) 0.2660(−42) 0.1805(−509) 11.9734
 KT8, γ = −1 0.2915(−4) 0.5214(−53) 0.1157(−642) 12.0961
 ZLH8, γ = −0.01 0.6052(−5) 0.2355(−65) 0.4206(−786) 11.9310
 LT8, γ = −1 0.1094(−2) 0.3522(−34) 0.1717(−415) 12.1081

We remark on the importance of the choice of initial guesses. If they are chosen sufficiently close to the sought roots, then the expected (theoretical) convergence speed is reached in practice; otherwise, all iterative root-finding methods show slower convergence, especially at the beginning of the iterative process. Hence, special attention should be paid to finding good initial approximations. We note that efficient ways for determining initial approximations of great accuracy were discussed thoroughly in [12–14].

5. Conclusions

We have constructed in this paper two families of iterative methods without memory which are optimal in the sense of the Kung and Traub conjecture. Our proposed methods do not need any derivative evaluations.

In addition, they contain an accelerator parameter which raises the convergence order without any new functional evaluations. In other words, the efficiency index of the three-step method with memory reaches 12^(1/4) ≈ 1.861.

We finalize this work by suggesting some points for future research: first, developing the proposed methods for some matrix functions, such as the ones in [15, 16]; second, exploring their dynamics and basins of attraction; and lastly, extending the developed methods with memory using two or three accelerators.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The first author thanks Islamic Azad University, Hamedan Branch, for the financial support of the present research. The authors are also thankful for the insightful comments of the three referees, who helped with the readability and reliability of the present paper. The fourth author gratefully acknowledges that this research was partially supported by Universiti Putra Malaysia under the GP-IBT Grant Scheme, project number GP-IBT/2013/9420100.
