Volume 3, Issue 6 e1184
RESEARCH ARTICLE

On the local convergence of efficient Newton-type solvers with frozen derivatives for nonlinear equations

Ramandeep Behl
Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia

Ioannis K. Argyros (Corresponding Author)
Department of Mathematical Sciences, Cameron University, Lawton, Oklahoma, USA
Correspondence: Ioannis K. Argyros, Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA. Email: [email protected]

Christopher I. Argyros
Department of Computer Science, University of Oklahoma, Norman, Oklahoma, USA
First published: 02 August 2021

Abstract

The aim of this article is to study the local convergence of a generalized (m+2)-step solver with nondecreasing order of convergence 3m+3. Sharma and Kumar obtained the order of convergence using Taylor series expansions and hypotheses on derivatives up to order 3m+4 that do not appear in the method; hence its applicability is very limited. The novelty of our article is that our local convergence analysis uses only the first derivative, which is the only derivative appearing in the proposed method. Error bounds and uniqueness results not given earlier are also provided, based on q-continuity functions. Moreover, we work with Banach space valued operators instead of Euclidean space valued ones, which further extends the applicability of the solver. Applications in which the convergence criteria are tested complete this article.

1 INTRODUCTION

Let H : Ω ⊂ 𝕋 → 𝕐 be a Fréchet differentiable operator, where 𝕋, 𝕐 are Banach spaces and Ω ⊂ 𝕋 is open, convex, and nonvoid. The task of finding a solution x* of
\[
H(x) = 0, \qquad (1)
\]
is very challenging and of extreme importance in analysis, because such equations originate, via mathematical modeling, from different disciplines such as mathematics, biology, chemistry, economics, physics, and engineering (see the numerical examples).1-19 Suitable closed-form solutions can rarely be found.
Recently, Sharma and Kumar20 proposed the following scheme in order to find a solution of (1): for k = 0, 1, 2, …,
\[
\begin{aligned}
y_k &= x_k - H'(x_k)^{-1}H(x_k),\\
y_k^{(0)} &= y_k - \tfrac{1}{2}\,(I - Q_k)\,H'(x_k)^{-1}H(x_k),\\
y_k^{(1)} &= y_k^{(0)} - q(x_k,y_k)\,H'(x_k)^{-1}H\bigl(y_k^{(0)}\bigr),\\
y_k^{(2)} &= y_k^{(1)} - q(x_k,y_k)\,H'(x_k)^{-1}H\bigl(y_k^{(1)}\bigr),\\
&\;\;\vdots\\
y_k^{(m-1)} &= y_k^{(m-2)} - q(x_k,y_k)\,H'(x_k)^{-1}H\bigl(y_k^{(m-2)}\bigr),\\
x_{k+1} &= y_k^{(m-1)} - q(x_k,y_k)\,H'(x_k)^{-1}H\bigl(y_k^{(m-1)}\bigr),
\end{aligned} \qquad (2)
\]
where y_k^{(-1)} = y_k, Q_k = H'(x_k)^{-1}H'(y_k), and q(x_k, y_k) = (7/2)I + Q_k(-4I + (3/2)Q_k). Notice that for m = 1 the efficiency of method (2) was compared favorably in Reference 20 to many other methods of the same order using similar information.
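To make the structure of (2) concrete, here is a minimal Python sketch of one outer iteration, written by us for illustration under the reading of (2) given above; it is not the authors' code, and the callables H and dH (returning a NumPy residual vector and Jacobian matrix) as well as the helper names frozen_newton_step and solve are our own assumptions.

```python
# A minimal sketch of one outer iteration of scheme (2): the Jacobian H'(x_k) is
# "frozen" and reused in every substep; m >= 1 is the method parameter.
import numpy as np

def frozen_newton_step(H, dH, x, m):
    """One outer iteration of (2) with m + 2 substeps; returns x_{k+1}."""
    J = dH(x)                                   # H'(x_k), evaluated only once
    lu_solve = np.linalg.solve                  # stands in for reusing one LU factorization of J
    y = x - lu_solve(J, H(x))                   # Newton substep y_k
    Q = lu_solve(J, dH(y))                      # Q_k = H'(x_k)^{-1} H'(y_k)
    I = np.eye(len(x))
    z = y - 0.5 * lu_solve(J, (I - Q) @ H(x))   # y_k^{(0)}
    q = 3.5 * I + Q @ (-4.0 * I + 1.5 * Q)      # q(x_k, y_k)
    for _ in range(m - 1):                      # y_k^{(1)}, ..., y_k^{(m-1)}
        z = z - q @ lu_solve(J, H(z))
    return z - q @ lu_solve(J, H(z))            # x_{k+1}

def solve(H, dH, x0, m=1, tol=1e-12, kmax=50):
    """Iterate (2) until the stopping criteria on the step and residual are met."""
    x = np.asarray(x0, dtype=float)
    for k in range(kmax):
        x_new = frozen_newton_step(H, dH, x, m)
        if np.linalg.norm(x_new - x) < tol and np.linalg.norm(H(x_new)) < tol:
            return x_new, k + 1
        x = x_new
    return x, kmax
```

The frozen-derivative design is visible in the sketch: H' is evaluated only at x_k and y_k, and every linear solve reuses the same matrix H'(x_k); a production implementation would factor it once and back-substitute in each substep.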

Taylor series expansions and derivatives of high order are used in the convergence analysis of Reference 20, although these higher-order derivatives do not occur in the method itself. Moreover, no computable error bounds or any information on the uniqueness of the solution are provided using Lipschitz-type functions. Furthermore, the convergence criteria cannot be compared, since they are based on different hypotheses. These problems limit the applicability of the method.

As a motivational example, consider the following function H on 𝕋 = 𝕐 = ℝ, with Ω = [−1/2, 3/2]:
\[
H(\zeta)=\begin{cases} \zeta^{3}\ln \zeta^{2}+\zeta^{5}-\zeta^{4}, & \zeta \neq 0,\\ 0, & \zeta = 0. \end{cases} \qquad (3)
\]
We obtain
\[
\begin{aligned}
H'(\zeta) &= 3\zeta^{2}\ln \zeta^{2}+5\zeta^{4}-4\zeta^{3}+2\zeta^{2},\\
H''(\zeta) &= 6\zeta \ln \zeta^{2}+20\zeta^{3}-12\zeta^{2}+10\zeta,\\
H'''(\zeta) &= 6\ln \zeta^{2}+60\zeta^{2}-24\zeta+22.
\end{aligned}
\]
We see that H'''(ζ) is not bounded on Ω. Therefore, results requiring the existence of H'''(ζ) or higher derivatives cannot be applied to study the convergence of (2).
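A quick numerical check of this claim (our own snippet, not part of the paper) simply evaluates the third derivative near ζ = 0, where the 6 ln ζ² term dominates:

```python
# Evaluate H'''(z) = 6*ln(z^2) + 60*z^2 - 24*z + 22 for small z; the values
# decrease without bound, so H''' is unbounded on Omega = [-1/2, 3/2].
import math

H3 = lambda z: 6 * math.log(z * z) + 60 * z * z - 24 * z + 22
for z in (1e-2, 1e-4, 1e-8, 1e-16):
    print(f"z = {z:.0e},  H'''(z) = {H3(z):.2f}")
```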

Notice that a large number of iteration functions4-6, 8, 9, 11-17, 19 have been used for the solution of such equations. It is mentioned in these articles that x_0 should be close enough to x* for convergence to be achieved, but nothing is said about how close x_0 must be to x*. Hence, the radius of the convergence ball is required. The counterexample (3) also applies to the other methods proposed in References 4-6, 8, 9, 11-17, 19. Local convergence results are important, since they show how difficult it is to choose the starting guesses.

We use only hypotheses on the continuity of the first derivative H' employed in method (2). We also develop a scheme based on Lipschitz-type functions to find convergence radii, error estimates, and uniqueness-of-solution results. In particular, we describe a range of starting guesses x_0 that guarantees the convergence of (2).

2 LOCAL CONVERGENCE

The analysis utilizes real constants and functions. Let ψ0 : [0, ∞) → [0, ∞) be a continuous and nondecreasing function.

Suppose that the equation
\[
\psi_0(\zeta) - 1 = 0 \qquad (4)
\]
has a minimal positive solution ρ0. Set M0 = [0, ρ0). Consider continuous, nondecreasing functions ψ : M0 → [0, ∞) and ψ1 : M0 → [0, ∞). Define the functions G_{−1}, Ḡ_{−1}, G_0, and Ḡ_0 on the interval M0 by
\[
\begin{aligned}
G_{-1}(\zeta) &= \frac{\int_0^1 \psi\bigl((1-\theta)\zeta\bigr)\,d\theta}{1-\psi_0(\zeta)}, \qquad \bar G_{-1}(\zeta) = G_{-1}(\zeta)-1,\\
G_{0}(\zeta) &= G_{-1}(\zeta) + \frac{\bigl(\psi_0(\zeta)+\psi_0(G_{-1}(\zeta)\zeta)\bigr)\int_0^1 \psi_1(\theta\zeta)\,d\theta}{2\,(1-\psi_0(\zeta))^{2}}, \qquad \bar G_{0}(\zeta) = G_{0}(\zeta)-1.
\end{aligned}
\]
Suppose that the equations Ḡ_{−1}(ζ) = 0 and Ḡ_0(ζ) = 0 have solutions in (0, ρ0), and denote by r_{−1} and r_0 their minimal solutions in (0, ρ0), respectively. Moreover, suppose that the equations
\[
\psi_0\bigl(G_{i-1}(\zeta)\zeta\bigr) - 1 = 0, \qquad i = 1, 2, 3, \ldots, m, \qquad (5)
\]
have minimal positive solutions ρ_i, where
\[
\begin{aligned}
G_i(\zeta) = \Biggl[ & \frac{\int_0^1 \psi\bigl((1-\theta)G_{i-1}(\zeta)\zeta\bigr)\,d\theta}{1-\psi_0(G_{i-1}(\zeta)\zeta)}
+ \frac{\bigl(\psi_0(\zeta)+\psi_0(G_{i-1}(\zeta)\zeta)\bigr)\int_0^1 \psi_1\bigl(\theta G_{i-1}(\zeta)\zeta\bigr)\,d\theta}{(1-\psi_0(\zeta))\,(1-\psi_0(G_{i-1}(\zeta)\zeta))}\\
&+ \frac{\tfrac{3}{2}\bigl(\psi_0(\zeta)+\psi_0(G_{-1}(\zeta)\zeta)\bigr)^{2}\int_0^1 \psi_1\bigl(\theta G_{i-1}(\zeta)\zeta\bigr)\,d\theta}{(1-\psi_0(\zeta))^{3}}
+ \frac{\bigl(\psi_0(\zeta)+\psi_0(G_{-1}(\zeta)\zeta)\bigr)\int_0^1 \psi_1\bigl(\theta G_{i-1}(\zeta)\zeta\bigr)\,d\theta}{(1-\psi_0(\zeta))^{2}} \Biggr] G_{i-1}(\zeta).
\end{aligned}
\]
Furthermore, suppose that the equations Ḡ_i(ζ) = G_i(ζ) − 1 = 0 have minimal solutions r_i in (0, ρ_i), respectively. Then, we shall show that
\[
r = \min\{r_i\}, \qquad i = -1, 0, 1, \ldots, m, \qquad (6)
\]
is a radius of convergence.
By this definition, for all ζ ∈ M = [0, r) we have
\[
0 \le \psi_0(\zeta) < 1, \qquad (7)
\]
\[
0 \le \psi_0\bigl(G_i(\zeta)\zeta\bigr) < 1, \qquad i = 0, 1, 2, \ldots, m, \qquad (8)
\]
and
\[
0 \le G_i(\zeta) < 1, \qquad i = -1, 0, 1, \ldots, m. \qquad (9)
\]
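The radius r can be obtained numerically once ψ0, ψ, and ψ1 are known. The sketch below is our own construction following the definitions (4)–(6); the quadrature rule, the scan-and-bisection root finder, and the function names are assumptions, not the authors' code. It builds G_{−1}, G_0, …, G_m and returns their smallest positive roots together with r.

```python
# A rough numerical sketch of the radius computation: given psi0, psi, psi1 and m,
# build G_{-1}, G_0, ..., G_m and find the smallest positive roots of G_i(t) - 1 = 0.
import math

def integral(f, n=64):
    """Composite trapezoidal approximation of the integral of f over [0, 1]."""
    h = 1.0 / n
    return h * (0.5 * (f(0.0) + f(1.0)) + sum(f(j * h) for j in range(1, n)))

def make_G(psi0, psi, psi1, m):
    """Return [G_{-1}, G_0, G_1, ..., G_m] as callables, following (4)-(6)."""
    def G_m1(t):
        return integral(lambda th: psi((1.0 - th) * t)) / (1.0 - psi0(t))

    def G_0(t):
        a = G_m1(t)
        num = (psi0(t) + psi0(a * t)) * integral(lambda th: psi1(th * t))
        return a + num / (2.0 * (1.0 - psi0(t)) ** 2)

    Gs = [G_m1, G_0]
    for _ in range(m):
        def G_i(t, G_prev=Gs[-1]):
            s = G_prev(t) * t                        # bound for the previous substep
            w = integral(lambda th: psi1(th * s))
            A = integral(lambda th: psi((1.0 - th) * s)) / (1.0 - psi0(s))
            B = (psi0(t) + psi0(s)) * w / ((1.0 - psi0(t)) * (1.0 - psi0(s)))
            c = psi0(t) + psi0(G_m1(t) * t)          # comes from the Q_k - I terms
            C = 1.5 * c ** 2 * w / (1.0 - psi0(t)) ** 3
            D = c * w / (1.0 - psi0(t)) ** 2
            return (A + B + C + D) * G_prev(t)
        Gs.append(G_i)
    return Gs

def smallest_root(g, hi, n=2000, tol=1e-10):
    """Smallest t in (0, hi) with g(t) = 1, or None if no sign change is found."""
    t_prev, v_prev = 1e-12, g(1e-12) - 1.0
    for j in range(1, n + 1):
        t = j * hi / n
        try:
            v = g(t) - 1.0
        except (ZeroDivisionError, OverflowError, ValueError):
            break
        if v_prev < 0.0 <= v:                        # root bracketed: bisect
            lo, up = t_prev, t
            while up - lo > tol:
                mid = 0.5 * (lo + up)
                lo, up = (mid, up) if g(mid) - 1.0 < 0.0 else (lo, mid)
            return 0.5 * (lo + up)
        t_prev, v_prev = t, v
    return None

def radius(psi0, psi, psi1, m, hi=1.0):
    roots = [smallest_root(G, hi) for G in make_G(psi0, psi, psi1, m)]
    return roots, min(r for r in roots if r is not None)
```

The nested definition of G_i mirrors the recursion above: each G_i is evaluated at the bound G_{i−1}(t)·t of the previous substep, while the terms coming from Q_k − I always involve the Newton bound G_{−1}(t)·t.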

Finally, we define B(v, α) = {x ∈ 𝕋 : ‖x − v‖ < α}, α > 0, and B̄(v, α) = {x ∈ 𝕋 : ‖x − v‖ ≤ α}.

Let us introduce some hypotheses denoted by ( A ) :
  • (A1)

    H : Ω ⊂ 𝕋 → 𝕐 is differentiable, for some x* ∈ Ω, H(x*) = 0, and H'(x*)^{-1} exists.

  • (A2)

    Function ψ0 : [0, ∞) → [0, ∞) is continuous, nondecreasing, and such that

    ‖H'(x*)^{-1}(H'(x) − H'(x*))‖ ≤ ψ0(‖x − x*‖)
    for each x ∈ Ω. Set 𝕋0 = Ω ∩ B(x*, ρ0).

  • (A3)

    Functions ψ : M0 → [0, ∞) and ψ1 : M0 → [0, ∞) are continuous, nondecreasing, and for each x, y ∈ 𝕋0

    ‖H'(x*)^{-1}(H'(y) − H'(x))‖ ≤ ψ(‖y − x‖)
    and
    ‖H'(x*)^{-1}H'(x)‖ ≤ ψ1(‖x − x*‖).

  • (A4)

    B̄(x*, r) ⊂ Ω, where r is defined in (6) and the ρ_j given in (4) and (5) exist.

  • (A5)

    There exists r̃ ≥ r such that

    ∫₀¹ ψ0(θ r̃) dθ < 1.

Define 𝕋1 = Ω ∩ B̄(x*, r̃).

Next, the preceding notation and the hypotheses (A) are used to establish the local convergence analysis of method (2).

Theorem 1. Suppose that the hypotheses (A) hold and choose x_0 ∈ B(x*, r) \ {x*}. Then the sequence {x_k} obtained by method (2) is well defined, remains in B(x*, r) for each k = 0, 1, 2, 3, …, and converges to x*. Moreover, the following items hold:

\[
\|y_k - x^*\| \le G_{-1}(\|x_k - x^*\|)\,\|x_k - x^*\| \le \|x_k - x^*\| < r, \qquad (10)
\]
\[
\|y_k^{(0)} - x^*\| \le G_{0}(\|x_k - x^*\|)\,\|x_k - x^*\| \le \|x_k - x^*\|, \qquad (11)
\]
\[
\|y_k^{(i)} - x^*\| \le G_{i}(\|x_k - x^*\|)\,\|x_k - x^*\| \le \|x_k - x^*\|, \qquad i = 1, \ldots, m-1, \qquad (12)
\]
\[
\|x_{k+1} - x^*\| = \|y_k^{(m)} - x^*\| \le G_{m}(\|x_k - x^*\|)\,\|x_k - x^*\| \le \|x_k - x^*\|. \qquad (13)
\]

Furthermore, the only solution of the equation H(x) = 0 in the set 𝕋1 is x*.

Proof. The sequence {x_k} shall be shown to be well defined, to remain in B(x*, r), and to converge to x* using mathematical induction; in doing so we shall also establish the estimates (10)–(13). Let x ∈ B(x*, r) \ {x*}. Using (A2), (6), and (7), we have

\[
\|H'(x^*)^{-1}(H'(x) - H'(x^*))\| \le \psi_0(\|x - x^*\|) \le \psi_0(r) < 1. \qquad (14)
\]

The Banach perturbation lemma on invertible operators4, 5, 13 together with expression (14) ensures the existence of H'(x)^{-1} and

\[
\|H'(x)^{-1}H'(x^*)\| \le \frac{1}{1-\psi_0(\|x - x^*\|)}. \qquad (15)
\]

In particular, (15) holds for x = x_0, and y_0, y_0^{(0)} exist by the first and second substeps of method (2) for k = 0. Using (6), (9) (for i = −1), (A3), and (15) for x = x_0, we have in turn that

\[
\begin{aligned}
\|y_0 - x^*\| &= \bigl\|x_0 - x^* - H'(x_0)^{-1}H(x_0)\bigr\|\\
&\le \|H'(x_0)^{-1}H'(x^*)\|\,\Bigl\|\int_0^1 H'(x^*)^{-1}\bigl(H'(x^* + \theta(x_0 - x^*)) - H'(x_0)\bigr)\,d\theta\,(x_0 - x^*)\Bigr\|\\
&\le \frac{\int_0^1 \psi\bigl((1-\theta)\|x_0 - x^*\|\bigr)\,d\theta\,\|x_0 - x^*\|}{1-\psi_0(\|x_0 - x^*\|)}
= G_{-1}(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\| < r, \qquad (16)
\end{aligned}
\]
showing (10) for k = 0 and y_0 ∈ B(x*, r). Then, similarly, from the second substep of method (2) for k = 0, the definition of Q_0, and (16), we get in turn that
\[
\begin{aligned}
\|y_0^{(0)} - x^*\| &= \bigl\|y_0 - x^* + \tfrac{1}{2}\,H'(x_0)^{-1}\bigl(H'(x_0) - H'(y_0)\bigr)H'(x_0)^{-1}H(x_0)\bigr\|\\
&\le \|y_0 - x^*\| + \tfrac{1}{2}\,\bigl\|H'(x_0)^{-1}\bigl((H'(x_0) - H'(x^*)) + (H'(x^*) - H'(y_0))\bigr)\bigr\|\,\bigl\|H'(x_0)^{-1}H(x_0)\bigr\|\\
&\le \Bigl[G_{-1}(\|x_0 - x^*\|) + \frac{\bigl(\psi_0(\|x_0 - x^*\|) + \psi_0(G_{-1}(\|x_0 - x^*\|)\|x_0 - x^*\|)\bigr)\int_0^1 \psi_1(\theta\|x_0 - x^*\|)\,d\theta}{2\,(1-\psi_0(\|x_0 - x^*\|))^{2}}\Bigr]\,\|x_0 - x^*\|\\
&= G_{0}(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\| < r, \qquad (17)
\end{aligned}
\]
showing (11) for k = 0 and y_0^{(0)} ∈ B(x*, r).

Next, the existence of H'(y_0^{(0)})^{-1} follows as in (15), for x = y_0^{(0)}, by (8) and (17), so that

\[
\|H'(y_0^{(0)})^{-1}H'(x^*)\| \le \frac{1}{1-\psi_0\bigl(G_0(\|x_0 - x^*\|)\,\|x_0 - x^*\|\bigr)}. \qquad (18)
\]

Then, we notice that y_0^{(1)} also exists, and we obtain

\[
\begin{aligned}
y_0^{(1)} - x^* &= y_0^{(0)} - x^* - H'(y_0^{(0)})^{-1}H(y_0^{(0)}) + H'(y_0^{(0)})^{-1}H(y_0^{(0)}) - \Bigl(\tfrac{7}{2}I + Q_0\bigl(-4I + \tfrac{3}{2}Q_0\bigr)\Bigr)H'(x_0)^{-1}H(y_0^{(0)})\\
&= \bigl(y_0^{(0)} - x^* - H'(y_0^{(0)})^{-1}H(y_0^{(0)})\bigr) + \bigl(H'(y_0^{(0)})^{-1} - H'(x_0)^{-1}\bigr)H(y_0^{(0)})\\
&\quad - \tfrac{3}{2}(Q_0 - I)^{2}H'(x_0)^{-1}H(y_0^{(0)}) + (Q_0 - I)\,H'(x_0)^{-1}H(y_0^{(0)}). \qquad (19)
\end{aligned}
\]

Then, using (6), (9), (15), (16), (18), and (19), we obtain

\[
\begin{aligned}
\|y_0^{(1)} - x^*\| &\le \bigl\|y_0^{(0)} - x^* - H'(y_0^{(0)})^{-1}H(y_0^{(0)})\bigr\|\\
&\quad + \|H'(y_0^{(0)})^{-1}H'(x^*)\|\,\bigl\|H'(x^*)^{-1}(H'(x_0) - H'(x^*)) + H'(x^*)^{-1}(H'(x^*) - H'(y_0^{(0)}))\bigr\|\,\|H'(x_0)^{-1}H'(x^*)\|\,\|H'(x^*)^{-1}H(y_0^{(0)})\|\\
&\quad + \tfrac{3}{2}\,\|Q_0 - I\|^{2}\,\|H'(x_0)^{-1}H'(x^*)\|\,\|H'(x^*)^{-1}H(y_0^{(0)})\| + \|Q_0 - I\|\,\|H'(x_0)^{-1}H'(x^*)\|\,\|H'(x^*)^{-1}H(y_0^{(0)})\|\\
&\le \Biggl[\frac{\int_0^1 \psi\bigl((1-\theta)\|y_0^{(0)} - x^*\|\bigr)\,d\theta}{1-\psi_0(\|y_0^{(0)} - x^*\|)}
+ \frac{\bigl(\psi_0(\|x_0 - x^*\|) + \psi_0(\|y_0^{(0)} - x^*\|)\bigr)\int_0^1 \psi_1(\theta\|y_0^{(0)} - x^*\|)\,d\theta}{(1-\psi_0(\|y_0^{(0)} - x^*\|))\,(1-\psi_0(\|x_0 - x^*\|))}\\
&\qquad + \frac{\tfrac{3}{2}\bigl(\psi_0(\|x_0 - x^*\|) + \psi_0(\|y_0 - x^*\|)\bigr)^{2}\int_0^1 \psi_1(\theta\|y_0^{(0)} - x^*\|)\,d\theta}{(1-\psi_0(\|x_0 - x^*\|))^{3}}
+ \frac{\bigl(\psi_0(\|x_0 - x^*\|) + \psi_0(\|y_0 - x^*\|)\bigr)\int_0^1 \psi_1(\theta\|y_0^{(0)} - x^*\|)\,d\theta}{(1-\psi_0(\|x_0 - x^*\|))^{2}}\Biggr]\,\|y_0^{(0)} - x^*\|\\
&\le G_{1}(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\|, \qquad (20)
\end{aligned}
\]
showing (12) for k = 0, i = 1, and y_0^{(1)} ∈ B(x*, r). The remaining estimates in (12) and (13) follow in the same way as (20). Then, from the estimate
\[
\|x_{k+1} - x^*\| \le \gamma\,\|x_k - x^*\| < r, \qquad \gamma = G_m(\|x_0 - x^*\|) \in [0, 1), \qquad (21)
\]
we get lim_{k→∞} x_k = x*, with x_{k+1} ∈ B(x*, r).

For the uniqueness part, let y ∈ 𝕋1 with H(y) = 0 and set A = ∫₀¹ H'(x* + θ(y − x*)) dθ.

Then, by hypothesis (A2), we get

\[
\|H'(x^*)^{-1}(A - H'(x^*))\| \le \int_0^1 \psi_0(\theta\|y - x^*\|)\,d\theta \le \int_0^1 \psi_0(\theta \tilde r)\,d\theta < 1,
\]
so A^{-1} exists.

Finally, from 0 = H(y) − H(x*) = A(y − x*), we conclude that x* = y.

3 NUMERICAL EXAMPLES

The theoretical results developed in the previous sections are illustrated numerically in this section. We denote the methods (2) for m = 1, 2, 3 by PM1-3, PM2-6, and PM3-9, of third, sixth, and ninth order, respectively. We consider three real-life problems and three standard nonlinear problems (including the motivational problem), illustrated in Examples 1–6. The results are listed in Tables 1, 2, and 4–7. Additionally, we report the computational order of convergence (COC), approximated by means of
\[
\lambda = \frac{\ln\bigl(\|x_{k+1} - x^*\| / \|x_k - x^*\|\bigr)}{\ln\bigl(\|x_k - x^*\| / \|x_{k-1} - x^*\|\bigr)}, \qquad \text{for } k = 1, 2, \ldots, \qquad (22)
\]
or the approximated computational order of convergence (ACOC)19, given by
\[
\lambda^* = \frac{\ln\bigl(\|x_{k+1} - x_k\| / \|x_k - x_{k-1}\|\bigr)}{\ln\bigl(\|x_k - x_{k-1}\| / \|x_{k-1} - x_{k-2}\|\bigr)}, \qquad \text{for } k = 2, 3, \ldots. \qquad (23)
\]
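Both quantities are straightforward to compute from stored iterates; the following small sketch (ours, not the authors' Mathematica code) implements (22) and (23) directly.

```python
# COC needs the known solution x_star; ACOC only uses consecutive iterates.
import numpy as np

def coc(iterates, x_star):
    """Lambda from (22), computed from the last three stored iterates."""
    e = [np.linalg.norm(np.asarray(x) - np.asarray(x_star)) for x in iterates[-3:]]
    return np.log(e[2] / e[1]) / np.log(e[1] / e[0])

def acoc(iterates):
    """Lambda* from (23), computed from the last four stored iterates."""
    d = [np.linalg.norm(np.asarray(iterates[j + 1]) - np.asarray(iterates[j]))
         for j in range(len(iterates) - 4, len(iterates) - 1)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])
```

COC requires the exact solution x*, whereas ACOC uses only differences of consecutive iterates, which is why it is preferred when x* is unknown.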
TABLE 1. Radii for Example 1
Cases | m | r_{-1}     | r_0        | r_1        | r_2       | r_3        | r          | x_0    | k | λ
PM1-3 | 1 | 0.00689631 | 0.00317202 | 0.00147108 | —         | —          | 0.00147108 | 1.001  | 4 | 3.0000
PM2-6 | 2 | 0.00689631 | 0.00317202 | 0.00147108 | 0.0012523 | —          | 0.0012523  | 1.0009 | 3 | 6.0000
PM3-9 | 3 | 0.00689631 | 0.00317202 | 0.00147108 | 0.0012523 | 0.00119542 | 0.00119542 | 1.0008 | 2 | 9.0000
TABLE 2. Radii for Example 2
Cases | m | r_{-1}   | r_0       | r_1        | r_2        | r_3        | r          | x_0                | k | λ
PM1-3 | 1 | 0.037037 | 0.0170355 | 0.00790049 | —          | —          | 0.00790049 | (1.047, −0.0347)^t | 5 | 3.0040
PM2-6 | 2 | 0.037037 | 0.0170355 | 0.00790049 | 0.00672553 | —          | 0.00672553 | (1.046, −0.0347)^t | 4 | 6.0038
PM3-9 | 3 | 0.037037 | 0.0170355 | 0.00790049 | 0.00672553 | 0.00642004 | 0.00642004 | (1.046, −0.0347)^t | 4 | 9.0018

Notice that the computation of λ and λ* requires only the first-order derivative of the involved function H in the case of method (2), and not the evaluation of derivatives of order higher than one as in Reference 20 or the earlier studies.1-19, 21-23 We adopt ϵ = 10^{-100} as the error tolerance, and the terminating criteria for solving the nonlinear systems or scalar equations are: (i) ‖x_{k+1} − x_k‖ < ϵ, and (ii) ‖H(x_k)‖ < ϵ.

The computations are performed with the package Mathematica 11 using multiple-precision arithmetic.

Example 1. Following the example presented in the introduction, for x* = 1 we can set

ψ0(ζ) = ψ(ζ) = 96.67ζ and ψ1(ζ) = 2.
In Table 1, we present the radii for Example 1.
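As a usage illustration of the radius sketch from Section 2 (the function radius and its signature are our own assumption there, not part of the paper), the data of Example 1 would be passed as follows:

```python
# Hypothetical call with the functions of Example 1; the bracket hi = 0.01 is an
# arbitrary upper bound for the root scan, chosen larger than the expected radii.
psi0 = lambda t: 96.67 * t
psi  = lambda t: 96.67 * t
psi1 = lambda t: 2.0

roots, r = radius(psi0, psi, psi1, m=3, hi=0.01)
print("r_{-1}, r_0, ..., r_3:", roots)
print("radius of convergence r:", r)
```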

Example 2. Let 𝕋 = 𝕐 = ℝ² and Ω = B(x*, 1), where x* = (1.0407, −0.0407)^t. Define H on Ω, for x = (x_1, x_2)^t, by

\[
H(x) = H(x_1, x_2) = \bigl(e^{x_1^{2}} + 8x_1\sin(x_2),\; x_1 + x_2 - 1\bigr)^{t}, \qquad (24)
\]
with Fréchet derivative
\[
H'(x) = \begin{pmatrix} 2x_1 e^{x_1^{2}} + 8\sin(x_2) & 8x_1\cos(x_2)\\ 1 & 1 \end{pmatrix}.
\]
Then we have
ψ0(ζ) = ψ(ζ) = 18ζ and ψ1(ζ) = 2.
We obtain the convergence radii depicted in Table 2.

Example 3. The kinematic synthesis problem for steering10, 18 is given, for i = 1, 2, 3, by

\[
\bigl[E_i\,(x_2\sin\eta_i - x_3) - H_i\,(x_2\sin\varphi_i - x_3)\bigr]^{2}
+ \bigl[H_i\,(x_2\cos\varphi_i + 1) - E_i\,(x_2\cos\eta_i - 1)\bigr]^{2}
- \bigl[x_1(x_2\sin\eta_i - x_3)(x_2\cos\varphi_i + 1) - x_1(x_2\cos\eta_i - 1)(x_2\sin\varphi_i - x_3)\bigr]^{2} = 0,
\]
where
\[
E_i = x_3 x_2(\sin\varphi_i - \sin\varphi_0) - x_1(x_2\sin\varphi_i - x_3) + x_2(\cos\varphi_i - \cos\varphi_0), \qquad i = 1, 2, 3,
\]
and
\[
H_i = -x_3 x_2\sin\eta_i + x_2\cos\eta_i + x_3 x_1 - x_2\sin\eta_0 + x_2\cos\eta_0 + x_1 x_3, \qquad i = 1, 2, 3.
\]
In Table 3, we present the values of η_i and φ_i (in radians).

The approximate solution, for Ω = B(x*, 1), is

x* = (0.9051567, 0.6977417, 0.6508335)^t.
Then we get
ψ0(ζ) = ψ(ζ) = 2ζ, and ψ1(ζ) = 2.
We provide the radii of convergence for Example 3 in Table 4.

TABLE 3. Values of η_i and φ_i (in radians) for Example 3
i | η_i                   | φ_i
0 | 1.3954170041747090114 | 1.7461756494150842271
1 | 1.7444828545735749268 | 2.0364691127919609051
2 | 2.0656234369405315689 | 2.2390977868265978920
3 | 2.4600678478912500533 | 2.4600678409809344550
TABLE 4. Radii for Example 3
Cases | m | r_{-1}   | r_0      | r_1       | r_2       | r_3       | r         | x_0                  | k | λ
PM1-3 | 1 | 0.333333 | 0.153319 | 0.0711044 | —         | —         | 0.0711044 | (0.97, 0.76, 0.72)^t | 7 | 3.0069
PM2-6 | 2 | 0.333333 | 0.153319 | 0.0711044 | 0.0605298 | —         | 0.0605298 | (0.96, 0.75, 0.71)^t | 5 | 5.0129
PM3-9 | 3 | 0.333333 | 0.153319 | 0.0711044 | 0.0605298 | 0.0577804 | 0.0577804 | (0.95, 0.74, 0.70)^t | 4 | 6.9999

Example 4. Consider the following boundary value problem (which can be found in Reference 13)

\[
u'' + \mu^{2}\,u'^{2} + 1 = 0, \qquad (25)
\]
with boundary conditions u(0) = 0, u(1) = 1. Partition the interval [0, 1] into k parts,
\[
\zeta_0 = 0 < \zeta_1 < \zeta_2 < \cdots < \zeta_{k-1} < \zeta_k = 1, \qquad \zeta_{j+1} = \zeta_j + h, \qquad h = \tfrac{1}{k},
\]
and set u_0 = u(ζ_0) = 0, u_1 = u(ζ_1), …, u_{k−1} = u(ζ_{k−1}), u_k = u(ζ_k) = 1. We adopt the following discretization of the derivatives:
\[
u_n' \approx \frac{u_{n+1} - u_{n-1}}{2h}, \qquad u_n'' \approx \frac{u_{n-1} - 2u_n + u_{n+1}}{h^{2}}, \qquad n = 1, 2, \ldots, k-1.
\]
Substituting these approximations into (25) gives the following system of k − 1 equations in k − 1 unknowns:
\[
u_{n-1} - 2u_n + u_{n+1} + \frac{\mu^{2}}{4}\,(u_{n+1} - u_{n-1})^{2} + h^{2} = 0, \qquad n = 1, 2, \ldots, k-1.
\]
We choose the particular values k = 61 and μ = 1/2, which produce a nonlinear system of order 60 × 60 whose solution is
x* = (0.02756, 0.05467, 0.08133, 0.1075, 0.1333, 0.1586, 0.1836, 0.2081, 0.2321, 0.2558, 0.2791, 0.3019, 0.3244, 0.3465, 0.3682, 0.3895, 0.4104, 0.4309769, 0.4511, 0.4709, 0.4903, 0.5094, 0.5281, 0.5465, 0.5645, 0.5822, 0.5995, 0.6165, 0.6331, 0.6494, 0.6654, 0.6810, 0.6963, 0.7113, 0.7260, 0.7403, 0.7543, 0.7680, 0.7814, 0.7945, 0.8072, 0.8197, 0.8319, 0.8437, 0.8552, 0.8665, 0.8774, 0.8881, 0.8984, 0.9085, 0.9182, 0.9277, 0.9369, 0.9457, 0.9543, 0.9626, 0.9707, 0.9784, 0.9859, 0.9931)^t.
Then, for Ω = B(x*, 1/4), we have
ψ0(ζ) = ψ(ζ) = 60ζ and ψ1(ζ) = 2.
We depict the radii of convergence for Example 4 in Table 5.
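For completeness, here is a sketch of how the discretized system of Example 4 can be assembled in code (our own construction under the reading of (25) above; the names F and dF and the padding of the boundary values are assumptions). The resulting residual and tridiagonal Jacobian can then be fed to the solver sketch of Section 1 or to any Newton-type routine.

```python
# Residual F(u) and Jacobian dF(u) of the discretized BVP with k = 61, mu = 1/2,
# acting on the 60 interior unknowns u_1, ..., u_{k-1}.
import numpy as np

k, mu = 61, 0.5
h = 1.0 / k
uL, uR = 0.0, 1.0                                # boundary values u(0) = 0, u(1) = 1

def F(u):
    v = np.concatenate(([uL], u, [uR]))          # pad with the boundary values
    return (v[:-2] - 2 * v[1:-1] + v[2:]
            + (mu ** 2 / 4) * (v[2:] - v[:-2]) ** 2 + h ** 2)

def dF(u):
    v = np.concatenate(([uL], u, [uR]))
    J = np.zeros((k - 1, k - 1))
    for n in range(k - 1):                       # row n corresponds to equation n + 1
        d = (mu ** 2 / 2) * (v[n + 2] - v[n])    # derivative of the squared term
        if n > 0:
            J[n, n - 1] = 1.0 - d                # partial derivative w.r.t. u_{n-1}
        J[n, n] = -2.0                           # partial derivative w.r.t. u_n
        if n < k - 2:
            J[n, n + 1] = 1.0 + d                # partial derivative w.r.t. u_{n+1}
    return J
```

With these two callables, a call such as solve(F, dF, np.full(k - 1, 1.002), m=1) would mimic the first row of Table 5 under our reading of its starting vector.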

TABLE 5. Radii of convergence for Example 4
Cases | m | r_{-1}   | r_0        | r_1        | r_2        | r_3        | r          | x_0                                 | k | λ
PM1-3 | 1 | 0.010929 | 0.00502686 | 0.00233129 | —          | —          | 0.00233129 | (1.002, …, 1.002)^t (60 components) | 6 | 3.0098
PM2-6 | 2 | 0.010929 | 0.00502686 | 0.00233129 | 0.00198458 | —          | 0.00198458 | (1.001, …, 1.001)^t                 | 4 | 5.0849
PM3-9 | 3 | 0.010929 | 0.00502686 | 0.00233129 | 0.00198458 | 0.00189444 | 0.00189444 | (1.001, …, 1.001)^t                 | 4 | 7.0495

Example 5. Let 𝕋 = 𝕐 = C[0, 1], the space of functions continuous on [0, 1] equipped with the max norm, and let Ω = B(0, 1). Consider the operator Ψ on Ω defined by

\[
\Psi(\phi)(x) = \phi(x) - \int_0^1 x\tau\,\phi(\tau)^{3}\,d\tau, \qquad (26)
\]
whose Fréchet derivative is
\[
\Psi'(\phi)(\mu)(x) = \mu(x) - 3\int_0^1 x\tau\,\phi(\tau)^{2}\mu(\tau)\,d\tau, \qquad \text{for } \mu \in \Omega.
\]
We have x* = 0 and
ψ0(ζ) = 1.5ζ, ψ(ζ) = 3ζ and ψ1(ζ) = 2.
We list the radii of convergence for Example 5 in Table 6.

TABLE 6. Radii of convergence for Example 5
Cases | m | r_{-1}   | r_0      | r_1       | r_2       | r_3       | r
PM1-3 | 1 | 0.444444 | 0.204426 | 0.0935918 | —         | —         | 0.0935918
PM2-6 | 2 | 0.444444 | 0.204426 | 0.0935918 | 0.0783369 | —         | 0.0783369
PM3-9 | 3 | 0.444444 | 0.204426 | 0.0935918 | 0.0783369 | 0.0741664 | 0.0741664

Example 6. Consider the following nonlinear system involving logarithmic functions, whose components are

\[
H_i(x) = \ln(x_i + 1) - \frac{x_i}{20}, \qquad i = 1, 2, 3, \ldots, N, \qquad (27)
\]
where x = (x_1, x_2, x_3, …, x_N)^t. For N = 100, the required zero is x* = (0, 0, …, 0)^t. Then, for Ω = B(x*, 1/10), we have
ψ0(ζ) = ψ(ζ) = 4ζ and ψ1(ζ) = 2.
We report the radii of convergence for Example 6 in Table 7.
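System (27) is a convenient test case because its Jacobian is diagonal. Assuming the solver sketch solve from Section 1 is available, a hypothetical run in double precision (tolerance 1e-13 instead of the 1e-100 used with multiple-precision arithmetic in the paper) could look as follows:

```python
# Example 6 with N = 100: H_i(x) = ln(x_i + 1) - x_i/20 has the diagonal Jacobian
# with entries 1/(x_i + 1) - 1/20; x0 mimics the starting guess of Table 7 (m = 1).
import numpy as np

N = 100
H  = lambda x: np.log(x + 1.0) - x / 20.0
dH = lambda x: np.diag(1.0 / (x + 1.0) - 1.0 / 20.0)

x0 = np.full(N, 0.034)
x, iters = solve(H, dH, x0, m=1, tol=1e-13)
print(iters, np.linalg.norm(x, np.inf))   # should approach the zero vector x* = 0
```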

TABLE 7. Radii of convergence for Example 6
Cases | m | r_{-1}   | r_0       | r_1       | r_2       | r_3       | r         | x_0                                  | k | λ
PM1-3 | 1 | 0.166667 | 0.0766597 | 0.0355522 | —         | —         | 0.0355522 | (0.034, …, 0.034)^t (100 components) | 5 | 3.0158
PM2-6 | 2 | 0.166667 | 0.0766597 | 0.0355522 | 0.0302649 | —         | 0.0302649 | (0.030, …, 0.030)^t                  | 4 | 6.0122
PM3-9 | 3 | 0.166667 | 0.0766597 | 0.0355522 | 0.0302649 | 0.0288902 | 0.0288902 | (0.027, …, 0.027)^t                  | 3 | 9.0488

4 CONCLUSION

We proposed a new technique that provides convergence results based on assumptions involving only the first derivative (the derivative actually used in these schemes), in contrast to earlier studies where the authors used hypotheses on derivatives up to the seventh order together with Taylor series expansions. In addition, our approach produces a usable error analysis for the operators involved, in the setting of Banach spaces. We adopted the COC and ACOC to estimate the convergence order instead of using Taylor series, because these definitions need only the first-order derivative. From the computational point of view, we picked six numerical examples and compared the radii of the convergence balls for these methods. Moreover, we demonstrated with an example that our results can be applied in cases where the earlier studies are not applicable. Finally, we conclude that our approach can also be used to extend the applicability of other iterative methods involving inverses in an analogous way.

Biographies

  • Ramandeep Behl obtained his Ph.D. degree from Panjab University, Chandigarh, India. He has been working as an assistant professor in the Department of Mathematics, King Abdulaziz University, since August 2017. He has published over 90 papers in his field and is a prolific researcher in leading journals of numerical analysis, focused principally on the solution of nonlinear problems by iterative processes. His teaching and research have been in the areas of iterative methods for nonlinear problems, numerical functional analysis, calculus, scientific computing, engineering mathematics, and optimization.

  • Ioannis K. Argyros obtained his Ph.D. degree from the University of Georgia, Athens. He has published over 1000 papers, 35 books, and 21 book chapters. He is an editor of numerous journals in computational mathematics, an active reviewer, and a grant evaluator. His research is at the border of image processing, numerical solution of PDEs, iterative methods, and matrix equations. His teaching and research have been in the areas of iterative methods, numerical functional analysis, scientific computing, optimization, and Banach spaces.

  • Christopher I. Argyros is a research scientist with interests in algorithmic analysis, scientific computing, artificial intelligence, computational and applied mathematics. He has published 12 papers and one book in these areas. He is currently working at the Department of Computer Science, Cameron University, Lawton, OK, USA.
