Volume 3, Issue 6 e1214
SPECIAL ISSUE PAPER

A novel two-parameter class of optimized hybrid block methods for integrating differential systems numerically

Gurjinder Singh, Department of Mathematical Sciences, I. K. Gujral Punjab Technical University Jalandhar, Main Campus, Kapurthala, India

Arvind Garg, Department of Mathematics, D. A. V. College, Chandigarh, India

Rajat Singla (Corresponding Author), Department of Mathematical Sciences, I. K. Gujral Punjab Technical University Jalandhar, Main Campus, Kapurthala, India; Department of Mathematics, Akal University, Talwandi Sabo, Bathinda, Punjab, India

Correspondence: Rajat Singla, Department of Mathematical Sciences, I. K. Gujral Punjab Technical University, Main Campus, Kapurthala, Punjab, India. Email: [email protected]

Vinay Kanwar, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
First published: 29 November 2021

Abstract

In this article, a two-parameter class of hybrid block methods for integrating first-order initial value ordinary differential systems is proposed. The methods are hybrid in nature, which helps in bypassing the first Dahlquist barrier that applies to linear multistep methods. The class of methods is developed using a purely interpolation and collocation approach and is based on four intra-step points, two of which are optimized by minimizing the local truncation errors of the formulas at the points x_{n+1/2} and x_{n+1}. The order of accuracy of the proposed methods is six. A method from this class is considered as a special case and developed into a block form, which produces approximate numerical solutions at several points simultaneously. Further, the method is formulated into an adaptive step-size algorithm using an embedded-type procedure. This special case of the class has been tested on six well-known first-order differential systems.

1 INTRODUCTION

Numerous numerical integrators are available for solving first-order initial value ordinary differential systems. Runge–Kutta and linear multistep methods are the classes of methods most commonly used for solving these systems numerically, and many built-in computer codes based on them are available in software such as Matlab and Mathematica. The development of more accurate and efficient codes for solving differential systems is an ongoing area of research. This article is concerned with the development of a class of optimized methods for solving the following system:
Y'(x) = F(x, Y(x)),  Y(x_0) = Y_0, (1)
where x ∈ [x_0, x_N], Y : [x_0, x_N] → ℝ^m, and F : [x_0, x_N] × ℝ^m → ℝ^m. To proceed further, we assume that the system (1) has a unique continuously differentiable solution, denoted by Y(x). As a first step, the interval of interest [x_0, x_N] is discretized as
x_n = x_0 + nh,  n = 0, 1, 2, 3, …, N;  h = x_{n+1} − x_n,
and the approximate numerical solution of the system (1) at x_n is denoted by Y_n ≈ Y(x_n).

Besides the classes of Runge–Kutta and linear multistep methods, many other classes of methods are available in the scientific literature for finding approximate numerical solutions of the system (1), for instance, hybrid methods, block methods, exponentially fitted methods, and so forth. For a good overview of other types of existing numerical methods for solving initial value ordinary differential problems, one can consult References 1-25 and the references therein.

Initially, in the development of methods for solving differential systems numerically, there was a gap between extrapolation and substitution methods. In order to fill this gap, Gragg and Stetter,26 Butcher,27 and Gear28 independently proposed the concept of hybrid methods, which use solution information at points other than the endpoints of the step, called off-step points. Along with the properties of linear multistep methods, hybrid methods also possess characteristics of Runge–Kutta methods.29 In the development of zero-stable linear multistep methods, the Dahlquist barrier restricts the attainable order, but hybrid methods bypass this barrier; hence they are sometimes called modified linear multistep methods. To obtain the numerical solution of the differential system (1) at the endpoints of the interval using hybrid methods, the solution at the off-step points must first be obtained, which makes them somewhat costly in terms of computation. This limitation can be removed by combining the concept of block methods with hybrid methods. The main advantage of block methods is that they produce the numerical solution of system (1) at several points simultaneously, which makes them more efficient in terms of computational cost. Milne22 first proposed the concept of block methods to make predictor-corrector methods more efficient; later, Rosser30 used them for general purposes. A second advantage of block methods is that they are self-starting. In the literature, many researchers have proposed methods that combine the hybrid and block approaches.6, 7, 11-13, 17, 18, 20, 31-35 Many authors assigned arbitrary values to the off-step points (e.g., see References 36-38), whereas Ramos et al.35 first proposed a strategy for choosing the values of the off-step points that improves the accuracy of the resulting method.

In this article, we derive a two-parameter class of methods based on the hybrid and block approaches. The development of the class rests on a purely interpolation and collocation approach based on four intra-step (off-step) points.39 Two of these four off-step points are optimized using the optimization strategy proposed in Reference 35, while the other two are left free. In this way, we obtain an optimized two-parameter class of methods. In the present article, a special case of this class is considered by taking simple values of the free parameters. To improve the performance of this method, an adaptive step-size formulation has been developed (see, e.g., Reference 40), and the resulting adaptive step-size method has been tested on well-known problems in comparison with some other methods existing in the literature.

2 DEVELOPMENT OF A TWO-PARAMETER CLASS OF METHODS

For the sake of convenience, we develop the class of methods for the scalar case of (1), that is, for m = 1. Consider the following interpolating polynomial as an approximation of the true solution of the problem:
y(x) ≈ ψ(x) = Σ_{j=0}^{6} ψ_j (x − x_n)^j, (2)
on the generic one-step block [x_n, x_{n+1}]. Here the ψ_j are unknown constant coefficients and the (x − x_n)^j are the polynomial basis functions. In order to determine the coefficients ψ_j, the following interpolation and collocation conditions on y(x) and ψ(x) are imposed:
y(x_n) = ψ(x_n),  y'(x_{n+i}) = ψ'(x_{n+i}),  i = 0, u, r, s, t, 1, (3)
where x_{n+u}, x_{n+r}, x_{n+s}, x_{n+t} are intra-step points such that x_n < x_{n+u} < x_{n+r} < x_{n+s} < x_{n+t} < x_{n+1}. Using (3), we obtain a system of seven equations in seven unknowns that can be solved with a computer algebra system such as Mathematica in order to get the values of the unknown coefficients ψ_j of the interpolating polynomial (2). Solving this system gives the coefficients ψ_j in terms of the intra-step points x_{n+u}, x_{n+r}, x_{n+s}, x_{n+t}. Our objective now is to find appropriate values for some of these intra-step points. Introduce the change of variable x = x_n + kh in the interpolating polynomial. Thus we have
y(x_n + kh) ≈ ψ(x_n + kh) = Σ_{j=0}^{6} ψ_j (kh)^j. (4)
By putting k = s, 1 in (4), we get two formulas at the points x_{n+s} and x_{n+1}, respectively. Note that these formulas are in terms of the intra-step points x_{n+u}, x_{n+r}, x_{n+s}, x_{n+t}. In order to find appropriate values of these intra-step points, we expand the formulas at x_{n+s} and x_{n+1} in Taylor series about the point x_n, obtaining
ℒ(y(x_{n+s}), h) = s^3 (A − sB) y^(7)(x_n) h^7 / 302400 + O(h^8),
ℒ(y(x_{n+1}), h) = −(C + 7rD) y^(7)(x_n) h^7 / 302400 + O(h^8), (5)
where
A = 7r(2s^3 − 10tu − 3s^2(1 + t + u) + 5s(t + u + tu)),
B = 10s^3 − 35tu − 14s^2(1 + t + u) + 21s(t + u + tu),
C = 10 − 14t − 14u + 21tu − 7s(2 − 3t − 3u + 5tu),
D = −2 + 3t + 3u − 5tu + s(3 − 5t − 5u + 10tu).
Setting the principal terms of these local truncation errors equal to zero, we obtain the nonlinear system
A − sB = 0,  C + 7rD = 0.
By solving the above nonlinear system for u and t, we find that u and t are the two roots of the quadratic equation Q w^2 − P w + R = 0, namely
u = (P − √(P^2 − 4QR)) / (2Q),  t = (P + √(P^2 − 4QR)) / (2Q),
where
P = 10r − 14r^2 − 5s + 18rs − 14r^2 s − 5s^2 + 10rs^2,
Q = 14r − 21r^2 − 7s + 14rs,
R = 5rs − 7r^2 s − 3s^2 + 5rs^2.
From the above expressions, one observes that the optimized values of u and t are obtained in terms of the two free parameters r and s. Thus, we have a two-parameter class of optimized methods for solving (1) numerically. Now, using the simple values
r = 1/3,  s = 1/2,
the corresponding optimized values of u and t are
u = (39 − √849)/84,  t = (39 + √849)/84.
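These optimized values can be checked numerically. The sketch below (Python, written for this exposition; the paper's own computations use Mathematica) computes u and t as the two roots of the quadratic built from the polynomials P, Q, R in the free parameters r and s, with the signs as reconstructed above, and reproduces the stated values for r = 1/3, s = 1/2:

```python
import math

def optimized_offsteps(r, s):
    """Optimized intra-step points u, t as the two roots of Q*w**2 - P*w + R = 0.

    P, Q, R are the polynomials in the free parameters r, s given in the text
    (signs as reconstructed from the derivation)."""
    P = 10*r - 14*r**2 - 5*s + 18*r*s - 14*r**2*s - 5*s**2 + 10*r*s**2
    Q = 14*r - 21*r**2 - 7*s + 14*r*s
    R = 5*r*s - 7*r**2*s - 3*s**2 + 5*r*s**2
    disc = math.sqrt(P**2 - 4*Q*R)
    return (P - disc)/(2*Q), (P + disc)/(2*Q)

u, t = optimized_offsteps(1/3, 1/2)
# reproduces u = (39 - sqrt(849))/84, t = (39 + sqrt(849))/84
```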
Evaluating (4) at x_{n+u}, x_{n+r}, x_{n+s}, x_{n+t}, and x_{n+1}, we get the complete hybrid block method, whose general form can be written as
y_{n+i} = λ y_n + h (μ_0 f_n + μ_u f_{n+u} + μ_r f_{n+r} + μ_s f_{n+s} + μ_t f_{n+t} + μ_1 f_{n+1}),  i = u, r, s, t, 1,
where f_{n+i} = f(x_{n+i}, y_{n+i}) and the values of the coefficients λ, μ_0, μ_u, μ_r, μ_s, μ_t, μ_1 for each formula are listed in Tables 1 and 2.
TABLE 1. Coefficients of the method
Method  λ  μ_0  μ_1  μ_u
y n + u 1 35082237 + 565717 849 1161699840 21177633 + 789287 849 4065949440 479 4368 1669 3 283 7280
y n + r 1 599 19440 41 68040 33285611 + 1108909 849 333758880
y n + s 1 131 3840 1 13440 3 2016941 + 69691 849 65927680
y n + t 1 35082237 565717 849 1161699840 21177633 789287 849 4065949440 400794220019 + 13979636581 849 3324139553280
y n + 1 1 7 240 47 840 717 2912 6147 3 283 14560
  • Note: λ, μ_0, μ_1, and μ_u denote the coefficients of y_n, f_n, f_{n+1}, and f_{n+u}, respectively.
TABLE 2. Coefficients of the method
μ_r  μ_s  μ_t
3 2213083 + 146877 849 279668480 6727821 217627 849 36303120 400794220019 13979636581 849 3324139553280
593 4680 28 1215 33285611 1108909 849 333758880
1863 8320 7 120 3 2016941 + 69691 849 65927680
3 2213083 + 146877 849 279668480 6727821 + 217627 849 36303120 479 4368 + 1669 3 283 7280
81 520 4 15 3 338185 + 2049 849 4120480
  • Note: μ_r, μ_s, and μ_t denote the coefficients of f_{n+r}, f_{n+s}, and f_{n+t}, respectively.

3 ANALYSIS OF THE METHOD

In this section, we discuss the basic characteristics of the proposed method.

3.1 Order of accuracy and consistency

Let 𝕃 be the difference operator associated with the proposed method,
𝕃[z(x), h] = Σ_j [ρ_j z(x + jh) − h δ_j z'(x + jh)],  j = 0, u, r, s, t, 1, (6)
where ρ_j and δ_j are the corresponding coefficients of the method. Assuming that z(x) has continuous derivatives of sufficiently high order and expanding z(x + jh) and z'(x + jh) in Taylor series about x, we get
𝕃[z(x), h] = κ_0 z(x) + κ_1 h z'(x) + κ_2 h^2 z''(x) + ⋯ + κ_p h^p z^(p)(x) + ⋯. (7)
Definition. The linear difference operator (6) and the associated block method are said to be of order p if, in (7),
κ_0 = κ_1 = κ_2 = ⋯ = κ_p = 0 and κ_{p+1} ≠ 0.
Here the κ_i are vectors; κ_{p+1} is called the vector of error constants, and
𝕃[z(x), h] = κ_{p+1} h^{p+1} z^(p+1)(x) + 𝒪(h^{p+2}).
For the proposed method we get κ_0 = κ_1 = κ_2 = ⋯ = κ_6 = 0 and
κ_7 = ( −(39 − √849)^3 (154467 + 5569√849)/9295345916559360, 1/33067440, 0, −(39 + √849)^3 (154467 − 5569√849)/9295345916559360, 0 )^T.

Thus, the proposed method has at least sixth order of accuracy. Since its order of accuracy exceeds one, the method is also consistent with (1).

3.2 Zero-stability

Considering h → 0, the proposed method reduces to y_{n+u} = y_n, y_{n+r} = y_n, y_{n+s} = y_n, y_{n+t} = y_n, y_{n+1} = y_n, which can be rewritten as
M̂_ζ = N M̂_{ζ−1}, (8)
where
N =
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
and
M̂_ζ = (y_{n+u}, y_{n+r}, y_{n+s}, y_{n+t}, y_{n+1})^T,  M̂_{ζ−1} = (y_{n+u−1}, y_{n+r−1}, y_{n+s−1}, y_{n+t−1}, y_n)^T. (9)
In this case, the first characteristic polynomial of the method is
ρ(κ) = det[κ I_5 − N] = κ^4 (κ − 1), (10)
where I_5 is the identity matrix of order 5. Its roots satisfy |κ_j| ≤ 1, and the only root of modulus one is simple; thus the proposed method is zero-stable.
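The root condition can also be checked mechanically: the propagation matrix of the block recurrence has four zero eigenvalues and a simple eigenvalue 1.

```python
import numpy as np

# Propagation matrix of the block recurrence: each new block starts from y_n,
# the last entry of the previous block, so only the last column of N is nonzero.
N = np.zeros((5, 5))
N[:, 4] = 1.0

roots = np.sort(np.abs(np.linalg.eigvals(N)))
# roots -> [0, 0, 0, 0, 1]: all roots lie in the closed unit disk and the
# root of modulus one is simple, so the method is zero-stable.
```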

3.3 Linear stability

Consider the Dahlquist test equation given by
y' = λy,  Re(λ) < 0. (11)

The exact solution of the test equation is y = exp(λx), which approaches zero as x → ∞. The absolute stability region is determined by applying the proposed method to the test equation (11), which results in a system of equations.

After eliminating y_{n+u}, y_{n+r}, y_{n+s}, and y_{n+t} from the obtained system and substituting λh → H, y_{n+1} → z, we get the characteristic equation
(90720 − 48960H + 12060H^2 − 1740H^3 + 153H^4 − 7H^5) z = 90720 + 41760H + 8460H^2 + 960H^3 + 63H^4 + 2H^5.
The stability characteristics of the proposed method depend upon the magnitude of the root of the above characteristic equation. The absolute stability region S of a method is defined as (see Reference 41)
S = {H ∈ ℂ : |ϕ(H)| < 1},
where ϕ denotes the stability function. If S ⊂ ℂ⁻, that is, if the absolute stability region lies entirely in the left-half complex plane, the method is said to be conditionally stable. For the proposed method, the stability function is given by
ϕ(H) = (90720 + 41760H + 8460H^2 + 960H^3 + 63H^4 + 2H^5) / (90720 − 48960H + 12060H^2 − 1740H^3 + 153H^4 − 7H^5).
Figure 1 shows the stability region of the method, that is, the region in which |ϕ(H)| < 1. From the figure it can be seen that the method is conditionally stable.
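The stability function can be sampled directly. The coefficients below are those of the characteristic equation with the missing signs restored (a reconstruction: with these signs the rational function agrees with e^H to high order, as expected for a convergent one-step scheme); the check is illustrative only:

```python
import numpy as np

# Stability function phi(H) = num(H)/den(H), coefficients in increasing
# powers of H (signs restored as discussed in the text).
num = np.polynomial.Polynomial([90720.0, 41760.0, 8460.0, 960.0, 63.0, 2.0])
den = np.polynomial.Polynomial([90720.0, -48960.0, 12060.0, -1740.0, 153.0, -7.0])

def phi(H):
    return num(H)/den(H)

# Near the origin phi behaves like exp(H): |phi| < 1 just left of the
# imaginary axis and |phi| > 1 to the right, so the region of absolute
# stability lies in the left half-plane.
inside = abs(phi(-1.0))    # close to exp(-1) < 1
outside = abs(phi(1.0))    # close to exp(1) > 1
```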
Figure 1. Absolute stability region of the method

4 FORMULATION AS A VARIABLE STEP-SIZE SOLVER

In this section, the given method is formulated in variable step-size mode using an embedded-type procedure, in which two methods of different orders are implemented simultaneously. More precisely, let the first method be of order p and the other of order q, with q < p. The method of order p is used to integrate the problem, whereas the method of order q is used to estimate the local error at each integration step. The pair of methods is selected in such a way that both share the same function evaluations,42 so that using the pair does not increase the number of function evaluations.

For the proposed method, the following fifth-order method has been used to estimate the local error. To construct a method using the same function evaluations as the main proposed method, the technique of undetermined coefficients has been adopted to derive the following lower-order method (for more details, one can see Reference 4):
y_{n+1} = 44y_n + 405y_{n+r} − 448y_{n+s} + h(4f_n + 54f_{n+r} + 32f_{n+s}), (12)
whose local truncation error (LTE) is
LTE = (1/6480) h^6 y^(6)(x_n) + O(h^7).
Note that the method (12) shares all function evaluations with the proposed method.
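The order of the embedded formula can be confirmed with exact rational arithmetic: with x_n = 0 and h = 1, the formula (12) (coefficients as reconstructed above, noting 44 + 405 − 448 = 1) reproduces y = x^k exactly for k ≤ 5, and for k = 6 leaves the defect 1/9, which equals 6! times the stated error constant 1/6480:

```python
from fractions import Fraction as F

r, s = F(1, 3), F(1, 2)

def embedded(y, dy):
    # y_{n+1} = 44 y_n + 405 y_{n+r} - 448 y_{n+s}
    #           + h (4 f_n + 54 f_{n+r} + 32 f_{n+s}),  with x_n = 0, h = 1.
    return 44*y(F(0)) + 405*y(r) - 448*y(s) + 4*dy(F(0)) + 54*dy(r) + 32*dy(s)

def defect(k):
    # exact value 1**k minus the formula applied to y = x**k
    y = lambda x: x**k
    dy = lambda x: k*x**(k - 1) if k else F(0)
    return 1 - embedded(y, dy)

defects = [defect(k) for k in range(7)]
# defects[0..5] are all zero (fifth order); defects[6] = 1/9, that is,
# an error constant of (1/9)/720 = 1/6480 in front of h^6 y^(6)(x_n).
```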
Let y_{n+1} be the approximate solution obtained by the method of order p and ŷ_{n+1} that obtained by the method of order q; then the error estimate EST is given by |y_{n+1} − ŷ_{n+1}|. This estimate helps in selecting an appropriate value for the new step-size. The following expression gives the new step-size for the next integration step:
h_new = η h_old (Tol/EST)^{1/(q+1)}, (13)
where q denotes the order of the lower-order method. The safety factor η, with 0 < η < 1, is chosen to avoid large variations in the new step, and Tol denotes the predefined user tolerance.
In the execution of the method, the following conditional structure may be imposed: if h_min ≤ h_new ≤ h_max, then h_old = h_new.
The whole procedure can be summarized in the following steps:
  1. If EST < Tol, the values obtained at this step are accepted, and the step-size is doubled for the next integration step.
  2. If EST ≥ Tol, the current step-size is redefined according to the above expression and the step is recomputed.
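In code, the controller above reads as the following sketch (the values of η, h_min, and h_max are illustrative choices, not taken from the paper):

```python
def new_step(h_old, est, tol, q=5, eta=0.9, h_min=1e-12, h_max=1.0):
    """Step-size update h_new = eta * h_old * (tol/est)**(1/(q+1)),
    clamped to [h_min, h_max]; q = 5 is the order of the error-estimating
    method used by the proposed scheme."""
    h_new = eta * h_old * (tol / est) ** (1.0 / (q + 1))
    return min(max(h_new, h_min), h_max)

# Accepted step (est < tol): the factor exceeds 1 and the step grows.
grown = new_step(0.1, est=1e-9, tol=1e-6)
# Rejected step (est >= tol): the step shrinks and the step is recomputed.
shrunk = new_step(0.1, est=1e-3, tol=1e-6)
```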

Different strategies are available in the scientific literature for selecting a suitable initial step-size h_ini (see, e.g., Reference 24). Alternatively, one can choose a small step-size h_ini (see Reference 25) and let the algorithm defined above adjust it to an appropriate value as required.

5 NUMERICAL EXPERIMENTATION

This section is concerned with the implementation of the proposed method, in constant step-size mode (named C-OHBM) and in variable step-size mode (named EMOHB), on a variety of well-known differential systems. The first example compares the C-OHBM integrator with the adaptive step-size integrator EMOHB. In the second example, a numerical comparison is made between the proposed constant step-size method and some other well-known constant step-size solvers. Further numerical experiments compare the proposed method in adaptive step-size mode with other existing variable step-size methods.

Some abbreviations used in the tables below are designated as follows. TOL: tolerance; FEVALs: number of function evaluations, including derivatives; h_ini: initial step-size; MaxErr: maximum absolute error along the integration interval, given by
MaxErr = max_{1≤k≤m} max_{0≤n≤N} |Y_k(x_n) − Y_{k,n}|;
MaxErr(Y_k(x)): maximum absolute error in computing the kth component of the true solution of a given system, given by
MaxErr(Y_k(x)) = max_{0≤n≤N} |Y_k(x_n) − Y_{k,n}|.
Further, the notation E(Y_{kN}) is used for the absolute error at the final point in approximating the kth component of the true solution of a system, and MaxErr(E(Y_{kN})) represents the maximum absolute error among all components of the true solution at the final point of the block. Here N is the number of integration steps, and Y_k(x_n) and Y_{k,n} represent the analytical and computed kth components of the solution of the differential system (1) at the point x_n. For comparison purposes, the following methods have been considered:
  1. C-IMBLOCK: an implicit Simpson's-type second-derivative 𝒜-stable two-step block method of order 6 (see Reference 5).
  2. IMBLOCK: the variable step-size version of the above Simpson's-type second-derivative method, described in Reference 6. At each integration step the local error is estimated using the third-order Kutta method.
  3. RKGauss: an 𝒜-stable implicit Runge–Kutta method based on Gaussian quadrature; it is a 5-stage method of order 10 (see Reference 1). The constant step-size version is named C-RKGauss and the variable step-size version RKGauss.
  4. RADAU: a variable-order (1, 5, 9, 13) integrator with adaptive step-size control, based on the implicit Runge–Kutta (Radau IIA) methods. The experiments use the Matlab implementation developed by Hairer (see the MatlabStiff package at http://www.unige.ch/∼hairer/software.html).
  5. EMOHB: the new optimized hybrid scheme using the embedded variable step-size strategy explained in Section 4. Owing to its implicit nature, a system of linear/nonlinear equations must be solved at each integration step. The code of the new method was developed in Mathematica; the resulting nonlinear systems are solved with the command FindRoot, while linear systems are solved with NSolve. The proposed method with the constant step-size algorithm is named C-OHBM.

Note: The RADAU solver and the proposed scheme use different step-size strategies, since RADAU employs variable-step as well as variable-order methods, whereas the proposed scheme and RKGauss have been used in variable step-size mode with the strategy described in Section 4. In solving the test problems we have taken TOL = AbsTOL = RelTOL, where AbsTOL and RelTOL stand for absolute and relative error tolerance, respectively; AbsTOL and RelTOL are used by RADAU, and TOL is used by the proposed scheme to estimate the error at each integration step. The RADAU solver was run in Matlab R2009b, whereas the proposed method, RKGauss, and IMBLOCK were implemented in Wolfram Mathematica 11, version 11.0.1.0. The runs were done on a laptop with an i3-4030U CPU @ 1.90 GHz running Windows 10.

5.1 A nonlinear problem

Consider the nonlinear problem given by
Y_1'(x) = Y_2(x),  Y_1(0) = 1,
Y_2'(x) = x Y_2(x)^2,  Y_2(0) = 1/2.

The problem is solved on the interval x ∈ [0, 1.99] over various numbers of steps. The numerical computations have been done with the block method in both constant step-size (C-OHBM) and adaptive step-size (EMOHB) modes, whose absolute errors with respect to the exact solution
Y_1(x) = 1 + (1/2) ln((2 + x)/(2 − x)),  Y_2(x) = 2/(4 − x^2)
are presented in Table 3. All computations for this problem have been done by varying the step-size h_ini and the tolerance TOL of the adaptive step-size code so as to equalize the number of steps with those of the fixed step-size block method. The data in Table 3 reveal the better performance of the proposed adaptive step-size code compared with its fixed step-size counterpart. It is clear from Figures 2 and 3 that the adaptive step-size code adjusts the step-size to capture the behavior of the true solution more appropriately than its fixed step-size version for the same number of nodal points.
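As a cross-check of the problem statement and its exact solution, the system can be integrated with an off-the-shelf stiff solver (SciPy's Radau here, purely for illustration; it is not one of the comparison codes used in the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, Y):
    # Y1' = Y2, Y2' = x * Y2**2, with Y1(0) = 1, Y2(0) = 1/2
    return [Y[1], x * Y[1]**2]

sol = solve_ivp(f, (0.0, 1.99), [1.0, 0.5], method="Radau",
                rtol=1e-10, atol=1e-12, dense_output=True)

x = np.linspace(0.0, 1.99, 200)
exact = np.vstack([1 + 0.5*np.log((2 + x)/(2 - x)), 2/(4 - x**2)])
max_rel_err = np.max(np.abs(sol.sol(x) - exact)/np.abs(exact))
# The true solution blows up at x = 2, which is why errors grow sharply near
# the right end of [0, 1.99] for a fixed step-size method.
```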

Figure 2. Exact and discrete solutions Y_1 of the problem in Section 5.1 using constant step-size (C-OHBM) with N = 36 (left) and variable step-size (EMOHB) with N = 36 (right)
Figure 3. Exact and discrete solutions Y_2 of the problem in Section 5.1 using constant step-size (C-OHBM) with N = 36 (left) and variable step-size (EMOHB) with N = 36 (right)
TABLE 3. Data for problem in Section 5.1
N  FEVALs  MaxErr in C-OHBM  MaxErr in EMOHB
19  114  1.48022 × 10^1  3.01815 × 10^-4
36  216  3.82325 × 10^0  3.56729 × 10^-6
51  306  1.13584 × 10^0  1.78101 × 10^-7
106  636  3.91262 × 10^-2  7.01569 × 10^-10
153  918  5.48769 × 10^-3  5.92806 × 10^-11
332  1992  5.37355 × 10^-5  5.42855 × 10^-12

5.2 Example

In this example, the first-order stiff system considered in References 5 and 10 is
Y_1'(x) = −Y_1(x) + 95Y_2(x),  Y_1(0) = 1,
Y_2'(x) = −Y_1(x) − 97Y_2(x),  Y_2(0) = 1, (14)
where x ∈ [0, 1], whose analytical solution is given by
Y_1(x) = (1/47)(95e^{−2x} − 48e^{−96x}),  Y_2(x) = (1/47)(48e^{−96x} − e^{−2x}). (15)

The numerical results were computed with h = 2^{−j}, j = 6, 7, 8. The data in Table 4 demonstrate that the proposed method in constant step-size mode performs better in comparison with other fixed step-size methods available in the literature.
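The closed-form solution of this stiff system can be checked against its right-hand side directly (signs as reconstructed above); the residuals vanish to rounding error:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
Y1 = (95*np.exp(-2*x) - 48*np.exp(-96*x))/47
Y2 = (48*np.exp(-96*x) - np.exp(-2*x))/47

# Analytic derivatives of the stated solution ...
dY1 = (-190*np.exp(-2*x) + 4608*np.exp(-96*x))/47
dY2 = (-4608*np.exp(-96*x) + 2*np.exp(-2*x))/47

# ... must match the right-hand side Y1' = -Y1 + 95 Y2, Y2' = -Y1 - 97 Y2.
res1 = np.max(np.abs(dY1 - (-Y1 + 95*Y2)))
res2 = np.max(np.abs(dY2 - (-Y1 - 97*Y2)))
```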

TABLE 4. Numerical results for problem in Section 5.2
h  N  Method  MaxErr(Y_1(x))  MaxErr(Y_2(x))
2^-6  64  C-OHBM  6.54616 × 10^-7  6.54616 × 10^-7
C-RKGauss  1.50956 × 10^-4  1.58864 × 10^-4
C-IMBLOCK  2.99956 × 10^-4  2.99956 × 10^-4
2^-7  128  C-OHBM  4.11283 × 10^-9  4.11283 × 10^-9
C-RKGauss  1.49823 × 10^-4  1.06961 × 10^-4
C-IMBLOCK  7.32242 × 10^-6  7.32242 × 10^-6
2^-8  256  C-OHBM  2.90306 × 10^-11  2.90306 × 10^-11
C-RKGauss  1.49261 × 10^-4  8.99157 × 10^-5
C-IMBLOCK  1.10007 × 10^-7  1.10007 × 10^-7

5.3 The Brusselator system

Consider the diffusion-free Brusselator system, also studied in Reference 7, consisting of the two equations
Y_1'(x) = L + Y_1^2(x)Y_2(x) − (M + 1)Y_1(x),  Y_1(0) = Y_1^0,
Y_2'(x) = M Y_1(x) − Y_1^2(x)Y_2(x),  Y_2(0) = Y_2^0, (16)
where L and M are positive real constants. For the numerical experimentation, we take M = 3, L = 1, with initial values Y_1(0) = 1.5, Y_2(0) = 3, over the integration interval [0, 20], as suggested in Reference 8. With the help of Wolfram Mathematica, the reference solution at the end point x_N = 20 was obtained with a 12th-order implicit Runge–Kutta method using the command NDSolve. The reference solution is
Y_1(x_N) = 0.498637071268347848635481287883,  Y_2(x_N) = 4.596780349452011183183066998636.
The numerical results were computed by considering
(h_ini, TOL) = (10^{−ξ}, 10^{−(ξ+3)}),  ξ = 1, 2, 3.
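The reference endpoint values can be reproduced independently: a tight-tolerance run of SciPy's Radau solver (illustrative only; not the paper's setup) lands on the same endpoint well within the accuracies discussed in Table 5:

```python
from scipy.integrate import solve_ivp

L, M = 1.0, 3.0

def brusselator(x, Y):
    # Diffusion-free Brusselator: Y1' = L + Y1^2 Y2 - (M+1) Y1, Y2' = M Y1 - Y1^2 Y2
    return [L + Y[0]**2*Y[1] - (M + 1)*Y[0], M*Y[0] - Y[0]**2*Y[1]]

sol = solve_ivp(brusselator, (0.0, 20.0), [1.5, 3.0], method="Radau",
                rtol=1e-12, atol=1e-12)
Y1_end, Y2_end = sol.y[0, -1], sol.y[1, -1]
# Y1_end is close to 0.49863707..., Y2_end close to 4.59678034...
```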
The data in Table 5 demonstrate that the proposed algorithm performs better in comparison with the other methods. Also, the efficiency curves in Figure 4 depict the superior performance of the new method.
Figure 4. Efficiency curves for the problem in Section 5.3 with h_ini = 10^{−ξ}, TOL = 10^{−(ξ+3)}, ξ = 1(1)6
TABLE 5. Numerical results for problem in Section 5.3
h_ini  Tol  Method  E(Y_1N)  E(Y_2N)  N  FEVALs
10^-1  10^-4  RADAU  8.2435 × 10^-6  5.418 × 10^-6  113  957
RKGauss  4.03571 × 10^-5  3.96638 × 10^-5  64  384
IMBLOCK  4.92369 × 10^-6  9.47049 × 10^-6  156  624
EMOHB  6.52057 × 10^-8  6.04199 × 10^-8  63  378
10^-2  10^-5  RADAU  1.8202 × 10^-7  6.7623 × 10^-8  156  1296
RKGauss  2.37942 × 10^-5  2.05997 × 10^-5  101  606
IMBLOCK  2.22909 × 10^-7  4.66775 × 10^-7  262  1048
EMOHB  5.64853 × 10^-9  6.52808 × 10^-9  89  534
10^-3  10^-6  RADAU  4.0993 × 10^-7  7.4 × 10^-9  173  1692
RKGauss  1.41206 × 10^-5  1.43751 × 10^-5  197  1182
IMBLOCK  8.04798 × 10^-9  1.58845 × 10^-8  452  1808
EMOHB  4.34532 × 10^-10  3.91933 × 10^-10  128  768

5.4 Jacobi elliptic functions

Here we consider a well-known system involving the Jacobi elliptic functions Sn, Cn, and Dn, which is frequently used in numerical experiments:9
Sn'(x) = Cn(x) Dn(x),  Sn(0) = 0,
Cn'(x) = −Sn(x) Dn(x),  Cn(0) = 1,
Dn'(x) = −l Sn(x) Cn(x),  Dn(0) = 1, (17)
where l = 1/2. The analytical solution of the system in terms of series is as follows:
Sn(x) = (2π/(√(1/2) M)) Σ_{i=0}^{∞} [k^{i+1/2}/(1 − k^{2i+1})] sin((2i + 1)g),
Cn(x) = (2π/(√(1/2) M)) Σ_{i=0}^{∞} [k^{i+1/2}/(1 + k^{2i+1})] cos((2i + 1)g),
Dn(x) = π/(2M) + (2π/M) Σ_{i=1}^{∞} [k^i/(1 + k^{2i})] cos(2ig). (18)
Here k = e^{−π}, g = πx/(2M), and M = ∫_0^{π/2} dφ/√(1 − (1/2)sin^2 φ) ≈ 1.85. The problem is solved over the integration interval [0, 50]. In solving the problem, we have considered
(h_ini, TOL) = (10^{−ξ}, 10^{−(ξ+2)}),  ξ = 1, 2, 3.
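The series solution above can be validated against an independent implementation of the Jacobi elliptic functions; with the constants as reconstructed (nome k = e^{−π}, M = K(1/2) ≈ 1.8541), a dozen terms already agree with scipy.special.ellipj to near machine precision:

```python
import numpy as np
from scipy.special import ellipj, ellipk

M = ellipk(0.5)        # complete elliptic integral for parameter m = 1/2, ~1.8541
k = np.exp(-np.pi)     # nome; K'(1/2) = K(1/2), hence k = exp(-pi)

def series(x, terms=12):
    g = np.pi*x/(2*M)
    i = np.arange(terms)
    pref = 2*np.pi/(np.sqrt(0.5)*M)
    Sn = pref*np.sum(k**(i + 0.5)/(1 - k**(2*i + 1))*np.sin((2*i + 1)*g))
    Cn = pref*np.sum(k**(i + 0.5)/(1 + k**(2*i + 1))*np.cos((2*i + 1)*g))
    j = np.arange(1, terms)
    Dn = np.pi/(2*M) + (2*np.pi/M)*np.sum(k**j/(1 + k**(2*j))*np.cos(2*j*g))
    return Sn, Cn, Dn

sn, cn, dn, _ = ellipj(1.0, 0.5)
approx = series(1.0)
```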
The numerical results presented in Table 6 exhibit the superior performance of the new scheme, and the efficiency curves in Figure 5 also show its better performance. The discrete solutions of the problem over the interval [0, 50] are plotted in Figure 6 for the parameters h_ini = 10^{−6}, TOL = 10^{−6}. From the figure it is easily seen that the adaptive step-size strategy works well, choosing larger step-sizes in the smooth portions of the curves.
Figure 5. Efficiency curves for the problem in Section 5.4 with h_ini = 10^{−ξ}, TOL = 10^{−(ξ+2)}, ξ = 1(1)8
Figure 6. Discrete solutions of the problem in Section 5.4 with h_ini = 10^{−6}, TOL = 10^{−8}
TABLE 6. Computation data for problem in Section 5.4
h_ini  Tol  Method  MaxErr(E(Y_iN))  N  FEVALs
10^-1  10^-3  RADAU  3.7 × 10^-3  130  1106
RKGauss  2.09624 × 10^-3  58  348
IMBLOCK  1.71992 × 10^-4  132  528
EMOHB  2.19936 × 10^-6  61  366
10^-2  10^-4  RADAU  9.8342 × 10^-4  152  1114
RKGauss  1.7235 × 10^-3  92  552
IMBLOCK  1.34579 × 10^-5  226  904
EMOHB  3.39734 × 10^-7  89  534
10^-3  10^-5  RADAU  2.5953 × 10^-4  188  1340
RKGauss  1.08544 × 10^-3  147  882
IMBLOCK  1.46663 × 10^-6  388  1552
EMOHB  5.20869 × 10^-8  129  774

5.5 Example

In this example, the following nonlinear system of first-order ODEs, taken from Reference 10, is considered:
Y_1'(x) = −Y_1^2(x) Y_2(x)/(1 + x),  Y_1(0) = 1,
Y_2'(x) = 1 − 20 (Y_2^2(x) − (1 + x)^2),  Y_2(0) = 1, (19)
the integration interval being x ∈ [0, 10]. The true solution of the system is
Y_1(x) = 1/(1 + x),  Y_2(x) = 1 + x. (20)

The characteristic values of the Jacobian matrix are −2/(1 + x) and −40(1 + x). The numerical computations were done with initial step-sizes and tolerances (h_ini, TOL) = (10^{−ξ}, 10^{−(ξ+1)}), ξ = 1, 2, 3, 4. The results given in Table 7 support the superiority of the proposed block method in terms of accuracy and number of function evaluations.
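The stated solution and Jacobian spectrum are easy to verify for the system as reconstructed above:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 101)
Y1, Y2 = 1/(1 + x), 1 + x

# Right-hand sides evaluated on the exact solution ...
rhs1 = -Y1**2*Y2/(1 + x)
rhs2 = 1 - 20*(Y2**2 - (1 + x)**2)

# ... must match the analytic derivatives Y1' = -1/(1+x)^2 and Y2' = 1.
res1 = np.max(np.abs(-1/(1 + x)**2 - rhs1))
res2 = np.max(np.abs(1.0 - rhs2))

# Jacobian diagonal on the solution: d(rhs1)/dY1 = -2 Y1 Y2/(1+x) = -2/(1+x),
# d(rhs2)/dY2 = -40 Y2 = -40 (1+x), the characteristic values quoted above.
```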

TABLE 7. Numerical results for problem in Section 5.5
h_ini  Tol  Method  E(Y_1N)  E(Y_2N)  N  FEVALs
10^-1  10^-2  RADAU  1.9886 × 10^-4  1.971 × 10^-4  18  174
RKGauss  1.14371 × 10^-5  1.63414 × 10^-4  6  36
IMBLOCK  4.30389 × 10^-5  0  10  40
EMOHB  1.69927 × 10^-7  0  6  36
10^-2  10^-3  RADAU  8.0475 × 10^-4  4.67 × 10^-5  15  136
RKGauss  5.27568 × 10^-6  1.87886 × 10^-5  13  78
IMBLOCK  5.976 × 10^-6  0  14  56
EMOHB  2.32306 × 10^-8  0  8  48
10^-3  10^-4  RADAU  8.0443 × 10^-7  0  14  117
RKGauss  2.36662 × 10^-6  1.48129 × 10^-5  25  150
IMBLOCK  6.56549 × 10^-7  0  20  80
EMOHB  3.77153 × 10^-9  0  11  66
10^-4  10^-5  RADAU  1.6836 × 10^-7  0  17  136
RKGauss  8.0497 × 10^-7  2.02722 × 10^-6  63  378
IMBLOCK  5.32162 × 10^-8  0  32  128
EMOHB  5.95103 × 10^-10  0  15  90

5.6 Example

Consider the following system of first-order ODEs:
Y_1'(x) = −200 Y_2^2(x),  Y_1(0) = 1,
Y_2'(x) = −100 Y_2(x),  Y_2(0) = 1, (21)
with x ∈ [0, 20]. The analytical solution of the system is
Y_1(x) = e^{−200x},  Y_2(x) = e^{−100x}. (22)

The numerical computations for this problem have been done over various initial step-sizes and tolerances, (h_ini, TOL) = (10^{−ξ}, 10^{−(ξ+2)}), ξ = 1, 2, 3. The numerical results exhibited in Table 8 confirm the finer performance of the proposed block integrator in terms of accuracy and number of function evaluations.

TABLE 8. Computation data for problem in Section 5.6
h_ini  Tol  Method  MaxErr(E(Y_iN))  N  FEVALs
10^-1  10^-3  RADAU  5.8170 × 10^-4  28  178
RKGauss  7.1478 × 10^-5  14  84
IMBLOCK  4.22379 × 10^-5  40  160
EMOHB  2.42453 × 10^-7  14  84
10^-2  10^-4  RADAU  1.2 × 10^-3  24  163
RKGauss  4.26683 × 10^-5  18  108
IMBLOCK  1.3207 × 10^-6  48  192
EMOHB  1.70072 × 10^-8  16  96
10^-3  10^-5  RADAU  2.4768 × 10^-6  23  153
RKGauss  1.45338 × 10^-5  28  168
IMBLOCK  4.23714 × 10^-8  64  256
EMOHB  1.64273 × 10^-9  22  132

6 CONCLUSIONS

A new two-parameter class of optimized hybrid block methods for solving first-order initial value ordinary differential systems has been proposed, and a theoretical analysis of the class has been carried out. An adaptive step-size formulation of a method belonging to the class has been considered. The numerical experiments on well-known problems support the superior performance of the proposed method.

ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their constructive comments, which have greatly contributed to improving the article.

CONFLICT OF INTEREST

The authors declare that there is no conflict of interest regarding the publication of this article.

    Biographies

    • Gurjinder Singh is working at Department of Mathematical Sciences, I.K. Gujral Punjab Technical University Jalandhar, Kapurthala, Punjab, India. He completed his Ph.D. in Mathematics from Panjab University, Chandigarh in 2016. He has more than 8 years of teaching experience and his research interests include numerical analysis of differential equations and numerical linear algebra. He has published several research articles in well-esteemed journals.

    • Arvind Garg is working as an Asst. Professor in Post Graduate Department of Mathematics, D.A.V. College, Sector-10, Chandigarh, India. He has more than 8 years of teaching experience and his area of research is numerical analysis. He has published several research articles in this field.

    • Rajat Singla has completed his B.Sc. (Hons. School) and M.Sc. (Hons. School) from Panjab University, Chandigarh. Presently, he is pursuing his Ph.D. from the Department of Mathematical Sciences, I.K. Gujral Punjab Technical University, Kapurthala, Jalandhar, India and working as an Assistant Professor in Department of Mathematics, Akal University, Bathinda, India. He has more than 6 years of teaching experience. His scientific interests include numerical treatment of initial-value problems.

    • Vinay Kanwar completed his Ph.D. degree from Department of Mathematics, Himachal Pradesh University, Shimla, India. He is currently a Professor in Mathematics at University Institute of Engineering and Technology, Panjab University, Chandigarh. His doctoral thesis is in the area of fluid dynamics. Presently, he is working in numerical analysis, fixed point theory, specifically in iterative techniques for solving nonlinear equations and nonlinear systems, numerical solution of differential equations, and their applications in solving real-life problems.
