Volume 2025, Issue 1 8813408
Research Article
Open Access

A Comparative Study of Linearization Techniques With Adaptive Block Hybrid Method for Solving First-Order Initial Value Problems

Salma Ahmedai (Corresponding Author)
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Pietermaritzburg, South Africa
Department of Modeling and Computing for Mathematics, Al Neelain University, Khartoum, Sudan

Precious Sibanda
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Pietermaritzburg, South Africa

Sicelo Goqo
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Pietermaritzburg, South Africa

Osman Noreldin
Department of Mathematical Sciences, University of Zululand, Zululand, South Africa
First published: 01 July 2025
Academic Editor: Jaume Giné

Abstract

This paper investigates linearization methods used in the development of an adaptive block hybrid method for solving first-order initial value problems. The study focuses on the Picard, linear partition, simple iteration, and quasi-linearization methods, emphasizing their role in enhancing the performance of the adaptive block hybrid method. The efficiency and accuracy of these techniques are evaluated by solving nonlinear differential equations, and a comparative analysis is provided that focuses on convergence properties, computational cost, and ease of implementation. Nonlinear differential equations are solved using the adaptive block hybrid method, and for each linearization method, we determine the absolute error, maximum absolute error, and number of iterations per block for different initial step-sizes and tolerance values. The findings indicate that the four techniques produced absolute errors ranging from O(10−12) to O(10−20). Both the Picard and quasi-linearization methods consistently achieve the highest accuracy in minimizing absolute errors and enhancing computational efficiency. The quasi-linearization method required the fewest iterations per block to achieve its accuracy, and the simple iteration method required fewer iterations than the linear partition method. In terms of CPU time, the Picard method required the least. These results address a critical gap in optimizing linearization techniques for the adaptive block hybrid method, offering valuable insights that enhance the precision and efficiency of methods for solving nonlinear differential equations.

1. Introduction

Nonlinear differential equations arise in various fields of science and engineering, and obtaining accurate solutions to these problems remains a challenging task for researchers. Numerical methods are essential for solving nonlinear differential equations that lack analytical solutions [1]. Many numerical methods allow for iterative refinement and visualization of solutions, making them valuable for practical applications in science and engineering [2].

Block hybrid methods (BHMs) have emerged as promising numerical methods, combining the strengths of multistep and one-step methods to provide efficient and stable numerical schemes [3]. It is widely recognized that BHMs are capable of efficiently solving initial value problems (IVPs) and boundary value problems (BVPs) [4, 5]. The theoretical properties of BHMs, such as convergence order, zero-stability, and consistency, have been investigated to establish their convergence [6]. Many researchers are focusing on improving BHMs to enhance their efficiency and convergence when solving differential equations [7, 8]. The goal is to develop more accurate and robust BHMs that can handle a wider range of problems, including nonlinear IVPs. For example, key areas being explored include deriving new k-step hybrid block methods with higher orders of accuracy and better stability properties [9]. Other innovations include incorporating off-grid points within the block to improve the approximate solutions [10] and employing different basis functions, such as Hermite polynomials, to construct the continuous linear multistep methods that form the foundation of the BHM [11]. By focusing on these aspects, researchers aim to develop BHMs that are not only more accurate but also computationally efficient and reliable for solving differential equations.

One technique used to improve the BHM is to utilize variable step-sizes instead of a fixed step-size scheme [12]. This adaptive technique offers several advantages, such as improving the accuracy of the solution by adjusting the step-size to achieve the desired level of precision without wasting computational time or introducing unnecessary errors [13]. Adaptive techniques can enhance the efficiency of the method by avoiding unnecessary evaluations of the function or its derivatives, thereby saving time and memory [14]. The variable step-size scheme can improve the stability of the method by using smaller step-sizes when needed to avoid numerical instability or divergence that may occur with a fixed step-size approach [15]. There are several existing methods for introducing adaptivity in numerical methods [16]. One such technique is the use of embedded methods, which compute two approximations of different orders at each step and take the difference as an estimate of the local error [17, 18]. Examples of these include the Runge–Kutta–Fehlberg method, the Dormand–Prince method, and the Cash–Karp method [19–21].

Several researchers have investigated different strategies for implementing the adaptive block hybrid method (ABHM) for solving nonlinear differential equations [22]. Some ABHMs employ two different approximations, each of a different order of accuracy, at each step [23]. The difference between these two approximations is then used as a local error estimate at that step. This error estimate is the key that allows the method to adaptively adjust the step-size to maintain a desired accuracy in the computation [24]. Singh et al. [23] presented an efficient optimized adaptive hybrid block method for IVPs. The authors utilized three off-step points within the one-step block. Two of the off-step points were optimized to minimize the local truncation errors (LTEs) of the main method. The adaptive method was tested on several well-known systems, and it compared favorably with other numerical methods in the literature. Rufai et al. [25] proposed an adaptive hybrid block method for solving first-order IVPs of ordinary differential equations and partial differential equation problems. The method was formulated in a variable step-size and implemented with an adaptive step-size strategy to maintain the error within a specified tolerance. It was demonstrated that the method was more efficient than some other existing numerical methods used in solving the differential equations in the study.

A key aspect of enhancing the accuracy of a numerical approximation is the choice of the linearization technique [26]. Linearization is necessary to transform the nonlinear problem into a form that can be easily solved [27]. Linearization methods are therefore important for the purpose of transforming nonlinear differential equations into linear equations that are easier to solve using numerical techniques [28]. The choice of the linearization method can impact the stability of the numerical scheme, its accuracy, and computational efficiency [29]. Some well-known linearization techniques include the Taylor series linearization which involves expanding the nonlinear function in a Taylor series around a point of interest and truncating higher-order terms. This is straightforward but may require small step-sizes for accuracy [30]. Newton’s method uses an iterative technique where the nonlinear equation is approximated by a linear one at each iteration step [31]. It is highly accurate but can be computationally intensive [32].

ABHMs have gained prominence due to their flexibility in adjusting step-sizes and their ability to handle stiff equations. However, the performance of numerical methods is significantly influenced by the linearization techniques used [33]. Despite extensive work on linearization techniques, there remains a need for a comprehensive survey that evaluates these methods in the context of ABHMs for first-order IVPs. This paper aims to fill this gap by presenting an evaluation of linearization methods when used with the ABHM, assessing their strengths and weaknesses. We address the following questions.
  • What are the existing linearization methods used with ABHM, and what are their strengths and weaknesses?

  • How can linearization techniques be implemented with the ABHM, and how does the performance compare with other numerical methods?

The Picard method (PM) is based on the Picard–Lindelöf theorem, which ensures the existence and uniqueness of solutions for IVPs [34–36]. The PM starts with an initial guess, which is then refined iteratively by integrating the nonlinear function over the interval of interest [37, 38]. The PM is a straightforward method that may, however, not always converge, as it relies on the assumption that the nonlinear function is locally Lipschitz continuous, which is not always the case [39]. The PM converges linearly, meaning that the error in the solution decreases at a constant rate [40]. This can lead to a slow convergence rate, especially for problems with complex nonlinearities. The PM was utilized with the BHM to solve second-order IVPs by Rufai et al. [9].

There are several local and piece-wise linearization schemes for IVPs, the use of which depends on the algorithm employed in the numerical implementation [41–44]. The piece-wise linearization approach is useful for handling nonlinearities locally, making the overall problem more tractable [45]. The linear partition method (LPM) linearization scheme is a modified piece-wise linearization approach that allows for flexibility in handling the nonlinearities [46]. The LPM is a direct scheme and is particularly useful for solving stiff differential equations. The LPM was used to linearize first-order IVPs and was combined with the BHM by Shateyi [5]. Similarly, the simple iteration method (SIM) technique is a piece-wise linearization approach that is easy to implement and is very effective for IVPs [47]. Ahmedai Abd Allah et al. [47] proposed a SIM and used it with the BHM to solve third-order IVPs. The SIM works similarly to fixed-point iteration [48].

The quasi-linearization method (QLM) transforms the nonlinear equations into a sequence of linear problems [49]. This method is similar to Newton’s method for root-finding but applied to differential equations [50]. At each iteration, the method linearizes the nonlinear terms around the current solution estimate, resulting in a linear differential equation that is solved to update the solution. The QLM converges quadratically, making it very efficient for problems where a good initial guess is available [51]. It is particularly effective for strongly nonlinear problems where other linearization methods might not converge [52]. Motsa [53] implemented the BHM with the QLM to solve first-order IVPs.

2. Implementation of the BHM

We describe the derivation of the ABHM for solving first-order IVPs of the following form:
() y'(t) = f(t, y(t)), y(0) = y0,
where the prime denotes the derivative with respect to the independent variable t. We divide the interval [0, tF] into N blocks with variable step-size h as
()
The intrastep points are introduced as equally spaced grid points defined as
()
Here, m is the number of intrastep points and
()
Approximating the solution in each block Ir using Lagrange interpolating polynomials, we obtain
()
Similarly, we approximate the first derivative by
()
where ar,k are m + 2 unknown coefficients. We collocate at the grid nodes and assume that the approximate solutions (5)–(6) satisfy equation (1). The coefficients ar,k are obtained from a system of m + 2 equations with m + 2 unknowns generated from
()
()
where , fr = f(tr, yr), and yr = y(tr). Equations (6)–(7) can be written in matrix notation as
()
We solve equation (8) to obtain ar,k. Substituting ar,k into equation (5) and evaluating the result at collocation points (3), we obtain the approximate solution as
()
where αi,j are known constants. Equation (10) can be written in the matrix form as
()
where A and B are matrices of size m × m, and Yr, Yr+p, Fr, and Fr+p are vectors of size m × 1.
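As a concrete illustration of the construction in equations (5)–(9), the sketch below (Python for illustration; the paper's experiments used MATLAB, so this is a re-implementation of the idea, not the authors' code) fixes a polynomial with m + 2 coefficients through the initial condition and collocation of the linear test problem y' = λy at the equally spaced grid points of one block:

```python
import numpy as np

def block_collocation(lam, y0, h, m):
    """Solve y' = lam*y on one block [t_r, t_r + h] with a polynomial
    y(t_r + h*tau) = sum_k a_k * tau**k having m + 2 coefficients, fixed
    by the initial condition and collocation at the grid points."""
    p = np.linspace(0.0, 1.0, m + 1)      # equally spaced grid points in [0, 1]
    n = m + 2                             # number of unknown coefficients a_k
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0                         # y(t_r) = y0 at tau = 0
    b[0] = y0
    for i, pi in enumerate(p):            # collocate y'(t_i) = lam * y(t_i)
        for k in range(n):
            dterm = k * pi ** (k - 1) / h if k >= 1 else 0.0
            A[i + 1, k] = dterm - lam * pi ** k
    a = np.linalg.solve(A, b)
    return np.polyval(a[::-1], p)         # nodal values of the block solution

Y = block_collocation(-1.0, 1.0, 0.1, 2)  # m = 2: grid points 0, 1/2, 1
err = abs(Y[-1] - np.exp(-0.1))
```

For m = 2 (grid points 0, 1/2, 1, as in Table 1) the end-of-block value agrees with e^{λh} to well below 10−6 at h = 0.1.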

3. Theoretical Analysis

In this section, we investigate the order, error constant, linear stability, order star, and zero stability analysis of the BHM.

3.1. LTE and Consistency

To determine the accuracy of the BHM, we evaluate the LTE. To analyze the LTE within each block, we introduce a linear operator that represents the difference between the exact solution and the numerical approximation. Using equation (10), the linear operator is defined as
()
We assume that y(t) is sufficiently differentiable. Expanding the terms y(tr + hpi) and y′(tr + hpi) using a Taylor series about tr, we obtain
()
where Ci,0, Ci,1, …, Ci,q are constants. The method is said to be of order q if
()
where . The vector is the error constant of the method [54]. Applying a Taylor series about tr in equation (12) leads to
()
()
where K is a positive integer. Expanding equation (16) gives
()
Following [55], equation (17) may be written as
()
From equation (18), we deduce that the error constant vector is defined as
()
Also from equation (18), the leading term in the LTE is proportional to O(hm+2). A method has order q if the LTE satisfies
()

Comparing equations (18) and (20) yields q = m + 1 as the order of the BHM. Following [56, 57], a block method is consistent if its order q ≥ 1. This condition is satisfied since m ≥ 2.

3.2. Zero Stability and Convergence

Zero stability ensures that the numerical solution remains bounded as the number of steps increases [58]. For the BHM, writing equation (10) in the matrix form and setting h⟶0, the system reduces to
()
Equation (21) gives the following characteristic equation:
()

The roots of the characteristic equation, called the characteristic roots, must satisfy |λ| ≤ 1. Furthermore, any root with |λ| = 1 must be simple. This ensures that the method is zero-stable, which is a prerequisite for convergence. Hence, the BHMs developed in this study are zero-stable. Since these methods exhibit both zero-stability and consistency, Dahlquist’s theorem implies that they are convergent [54].
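Since the matrices in equation (21) are not reproduced here, the following is only a schematic check (an assumed canonical form, not the paper's matrices): for a one-step block method, the h → 0 limit typically reduces the update to Yr+p = E Yr, where every nodal value of the new block collapses onto the last nodal value of the previous block. Such an E has eigenvalue 1 once and eigenvalue 0 with multiplicity m − 1, so the root condition holds:

```python
import numpy as np

m = 5
# schematic h -> 0 propagation matrix: each new nodal value equals the
# last nodal value of the previous block (assumed canonical form)
E = np.zeros((m, m))
E[:, -1] = 1.0

eigs = np.sort(np.abs(np.linalg.eigvals(E)))
# root condition: all |lambda| <= 1, and the root on the unit circle is simple
assert np.allclose(eigs[:-1], 0.0) and abs(eigs[-1] - 1.0) < 1e-12
```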

3.3. Absolute Stability

We applied the method to the scalar linear test equation y′ = λy given in [59]. Following this procedure, we define
()
Substituting equation (23) into equation (10) yields
()
Equation (24) can be reduced to the matrix form as presented in [60]. Thus, we have
()
where is the stability function, and I is the identity matrix of size m × m. Table 1 presents the stability functions of the BHM for different values of m, specifically m = 2, 3, 4, 5.
Table 1. Stability functions for the BHM with varying values of m and corresponding equally spaced grid points.
m pj Stability function
2 0, (1/2), 1 (24 − z2)/(2(z2 − 6z + 12))
3 0, (1/3), (2/3), 1 (−z3 + 3z2 + 54z − 324)/(3(z3 − 11z2 + 54z − 108))
4 0, (1/4), (1/2), (3/4), 1 (−3z4 + 20z3 + 240z2 − 3840z + 15360)/(4(3z4 − 50z3 + 420z2 − 1920z + 3840))
5 0, (1/5), (2/5), (3/5), (4/5), 1 (−12z5 + 130z4 + 1125z3 − 37500z2 + 337500z − 1125000)/(5(12z5 − 274z4 + 3375z3 − 25500z2 + 112500z − 225000))
Following [61, 62], we define the region of absolute stability R as
()

Figure 1 shows the absolute stability regions of the BHM for different values of (m = 2, 3, 4, 5). The shaded areas represent the set of complex values z = hλ where the stability condition is satisfied.

Figure 1. The absolute stability regions of the BHM for various values of m. (a) m = 2. (b) m = 3. (c) m = 4. (d) m = 5.

Definition: Following [63], a numerical method for solving ordinary differential equations is A-stable if its region of absolute stability contains the entire left half of the complex plane.

Figure 1 shows the absolute stability regions for m = 2, 3, 4, 5, confirming the A-stability of the BHM as the entire left-half complex plane is covered.
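This can be spot-checked from Table 1. For m = 2, the poles of the stability function are the roots of z² − 6z + 12, namely 3 ± i√3, which lie in the right half-plane; the function is therefore analytic on the closed left half-plane, and by the maximum modulus principle it suffices to verify |R(z)| ≤ 1 on the imaginary axis. An illustrative numerical check (Python, not from the paper):

```python
import numpy as np

def R2(z):
    # stability function of the BHM for m = 2, taken from Table 1
    return (24 - z ** 2) / (2 * (z ** 2 - 6 * z + 12))

poles = np.roots([1, -6, 12])          # 3 +/- i*sqrt(3)
assert np.all(poles.real > 0)          # no poles in the left half-plane

ts = np.linspace(-1e3, 1e3, 200001)
vals = np.abs(R2(1j * ts))             # |R2| along the imaginary axis
assert np.all(vals <= 1.0 + 1e-12)     # consistent with A-stability
```

The maximum of |R2| on the axis is attained at z = 0, where R2(0) = 1.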

Definition: Following [64], a numerical method is L-stable if it satisfies both of the following:
  • 1.

    Its region of absolute stability contains the entire left half-plane.

  • 2.

    Its stability function satisfies

    ()

  • From Table 1, we compute the asymptotic behavior of each stability function as z → ∞, which yields

    ()

  • Since none of the stability functions tends to zero as z → ∞, the BHM is not L-stable for any m ≥ 2, despite being A-stable (as shown in Figure 1).
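The nonzero limits follow directly from the leading coefficients in Table 1 and evaluate to −1/m for each listed m. A numerical confirmation (coefficients transcribed from Table 1, with the denominator prefactors multiplied through):

```python
import numpy as np

# numerator / denominator coefficients (highest degree first) of the
# stability functions in Table 1
stab = {
    2: ([-1, 0, 24], [2, -12, 24]),
    3: ([-1, 3, 54, -324], [3, -33, 162, -324]),
    4: ([-3, 20, 240, -3840, 15360], [12, -200, 1680, -7680, 15360]),
    5: ([-12, 130, 1125, -37500, 337500, -1125000],
        [60, -1370, 16875, -127500, 562500, -1125000]),
}

for m, (num, den) in stab.items():
    limit = np.polyval(num, -1e9) / np.polyval(den, -1e9)
    # lim R(z) as z -> infinity is -1/m, which is nonzero for every m,
    # so none of the methods is L-stable
    assert abs(limit + 1.0 / m) < 1e-6
```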

3.4. Order Stars

Order star theory is a crucial tool for assessing the stability and accuracy of numerical methods for ordinary differential equations [65]. Order stars provide a means to evaluate the stability of numerical methods in relative terms [66]. Consider a numerical method whose stability function can be written as a ratio of two polynomials. To analyze the stability properties, order stars partition the complex plane into three distinct regions by comparing the stability function of the method with the exact solution, as in [67]. We consider the first type of order stars, which defines the three regions as
()

The zeros of the stability function appear as points where stable regions contract, indicating strong damping properties, while its poles act as sources of instability where unstable regions diverge. Figures 2 and 3 show the structure of the three regions and the zeros and poles of the BHM, respectively.

Figure 2. The plot of the order stars regions of the BHM for various values of m. (a) m = 2. (b) m = 3. (c) m = 4. (d) m = 5.

Figure 3. The poles and zeros of the stability function of the BHM for various values of m. (a) m = 2. (b) m = 3. (c) m = 4. (d) m = 5.
Definition: Following [68], a numerical method is said to be A-acceptable if its stability function satisfies the following conditions:
  • The growth region R+ does not intersect the imaginary axis.

  • There are no poles of the stability function in the left half of the complex plane, i.e., Re(z) < 0.

These conditions ensure that the numerical method exhibits stable behavior. Figure 2 shows the order stars stability region for the scheme. It is observed that the region contains the entire left half of the complex plane for all values of m (m = 2, 3, 4, 5). It is also noted that R+ does not intersect the imaginary axis, and no poles are found in the left half of the complex plane (as shown in Figure 3). Thus, we conclude that the method is A-acceptable.
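For m = 2, these conditions can be checked numerically with the stability function from Table 1 (an illustrative sketch; the classification tolerance is an arbitrary choice): points on the imaginary axis satisfy |R(z)| ≤ 1 = |e^z| and hence never fall in the growth region R+, while far out on the negative real axis |R(z)| → 1/2 > |e^z|, producing the unbounded R+ fingers typical of A-stable, non-L-stable methods.

```python
import numpy as np

R = lambda z: (24 - z ** 2) / (2 * (z ** 2 - 6 * z + 12))  # m = 2, Table 1

def region(z):
    """First-kind order star membership: '+' growth, '-' decay, '0' boundary."""
    d = abs(R(z)) - abs(np.exp(z))
    return '+' if d > 1e-14 else ('-' if d < -1e-14 else '0')

# R+ does not touch the imaginary axis ...
assert all(region(1j * t) != '+' for t in np.linspace(-50.0, 50.0, 5001))
# ... but it does appear deep in the left half-plane, where |R| -> 1/2
assert region(-10.0 + 0j) == '+'
```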

4. The Adaptive Scheme

An important factor to take into account is the derivation of the adaptive scheme, which is crucial for error estimation [69]. To achieve this, we calculate the error between two different versions of the BHM in equation (10) evaluated at specific points. One version is a higher order method with an LTE of order O(hq+1), while the other is a lower order method with an LTE of order O(hq). To demonstrate the integration of adaptivity into the BHM, we consider a method derived from the evaluated points. We use the equally spaced intrastep points when m = 5, given as
()
We evaluate equation (10) at the collocation points in equation (30). The resulting equations can be written in the matrix form:
()
where the matrices A and B of size 5 × 5 are defined as
()
whereas the vectors Yr+p, Yr, Fr+p, and Fr of size (5 × 1) are
()

In this case, the LTEs from equation (18) are given in Table 2.

Table 2. Truncation errors for the block hybrid method when m = 5.
pj p1 p2 p3 p4 p5
Local truncation error −(863y(7)(tr)h7)/4725000000 −(37y(7)(tr)h7)/295312500 −(29y(7)(tr)h7)/175000000 −(8y(7)(tr)h7)/73828125 −(11y(7)(tr)h7)/37800000

An adaptive counterpart of the main method yr+1 can be constructed by applying the method of undetermined coefficients to an equation with identical grid points, except at p5 = 1. The LTEs of this expression are given in Table 3.

Table 3. Truncation errors for the block hybrid method when m = 5 excluding p5.
pj p1 p2 p3 p4 p5
Local truncation error (3y(6)(tr)h6)/2500000 (y(6)(tr)h6)/1406250 (3y(6)(tr)h6)/25000000 −(8y(7)(tr)h7)/73828125 (19y(6)(tr)h6)/900000
The adaptive pair is given as
()
()
where yr+1 and correspond to the higher order and lower order methods, respectively. The error estimation function, denoted as EST, is expressed as
()
The local error, subsequently utilized in ascertaining the optimal step-size for the ensuing step, is thus approximated as
()
For the step-size of the successive time step, we use the formula proposed by the authors in [23].
()
where σ and Tol1 are the safety factor and the first user-defined tolerance, respectively, with the condition σ ∈ (0, 1). Furthermore, we evaluate the subsequent conditional structure in the implementation of the ABHM as
  • If ‖EST‖ < Tol1, we accept the procured values and increase the step-size by hnew = 2hold, to forward the integration process.

  • If ‖EST‖ ≥ Tol1, we recalibrate the current step-size employing the criterion in equation (38) and recalculate.

By the above methodology, we ensure a reliable error estimation and optimize the step-size for each integration block in the ABHM.
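The accept/reject logic above can be sketched in a few lines (Python for illustration; the reject-branch formula below is a standard embedded-pair controller used as an assumed stand-in for equation (38), whose exact form from [23] is not reproduced above, and the default q = 6 reflects the m = 5 pair of this section):

```python
def next_step_size(h_old, est_norm, tol1, sigma=0.9, q=6):
    """One ABHM step-size decision: returns (h_new, accepted).

    The accept branch doubles the step, as stated in the text; the reject
    branch uses the standard controller sigma*h*(tol1/est)**(1/(q+1)),
    an assumed stand-in for the formula of [23].
    """
    if est_norm < tol1:
        return 2.0 * h_old, True
    return sigma * h_old * (tol1 / est_norm) ** (1.0 / (q + 1)), False

h_acc, ok_acc = next_step_size(0.1, 1e-10, 1e-8)   # accepted: step doubled
h_rej, ok_rej = next_step_size(0.1, 1e-6, 1e-8)    # rejected: step shrunk
```

With Tol1 = 10−8, an estimate of 10−10 doubles the step, while an estimate of 10−6 shrinks it and signals that the current step must be recomputed.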

4.1. Error Estimation

To assess the convergence of the method, we evaluated the absolute error (AE) and absolute error estimate (AEE) between two consecutive iterations. Suppose Y(t) is the approximate solution. The AE is defined as
()
We compute the AEE between two consecutive iterations using the following:
()

This provides a measure of how much the solution changes between iterations, indicating the convergence behavior of the method.

We evaluate in Algorithm 1 the following conditional stopping procedure in the ABHM, where Tol2 represents the second user-defined tolerance. Within each block, the method iterates until the error AEE falls below Tol2. Once the AEE converges to within the acceptance criterion, the procedure advances to the next block or concludes the computation. By applying this conditional structure, we ensure that the accuracy of our solutions is systematically improved within each block, thereby enhancing control and precision in the numerical computations. The pseudo-code loops through the steps until convergence or until the maximum number of iterations is reached.

    Algorithm 1: ABHM algorithm pseudo-code.
  • Step 1: Input IVP f(t, y), tolerance levels Tol1, Tol2, initial step-size h0, safety factor σ, and max iterations.

  • Step 2: Initialize Yr+p, Fr+p, Yr, Fr.

  • Step 3: While not converged (AEE ≥ Tol2) and max iterations not reached, do Steps 4–11:

  • Step 4: Compute using the ABHM equation.

  • Step 5: Compute yr+1 and using adaptive pair equations.

  • Step 6: Compute the error estimate EST,

  • if‖EST‖ < Tol1then

  • (a) Accept the current .

  • (b) Set hnew = 2 × hold.

  • else

  • (c) Set .

  • end if

  • Step 10: Update Yr+p and Fr+p with hnew.

  • Step 11: Update hold to hnew.

  • Step 12: Return Yr+p.
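To show the control flow of Algorithm 1 end-to-end, the driver below substitutes a simple embedded Heun/Euler pair for the BHM adaptive pair; the pair, the test problem y' = −y with y(0) = 1, and the exponent 1/2 in the shrink factor are all illustrative stand-ins, not the authors' implementation:

```python
import math

def adaptive_solve(f, t0, y0, tF, h0=0.1, tol1=1e-8, sigma=0.9):
    """Accept/reject driver mirroring Algorithm 1, with an embedded
    Heun (order 2) / Euler (order 1) pair standing in for the BHM pair."""
    t, y, h = t0, y0, h0
    while tF - t > 1e-12:
        h = min(h, tF - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_hi = y + 0.5 * h * (k1 + k2)        # higher order solution
        y_lo = y + h * k1                     # lower order solution
        est = abs(y_hi - y_lo)                # local error estimate EST
        if est < tol1:
            t, y = t + h, y_hi                # accept the step
            h *= 2.0                          # h_new = 2 * h_old
        else:
            h *= sigma * (tol1 / est) ** 0.5  # reject: shrink and redo
    return y

y_end = adaptive_solve(lambda t, y: -y, 0.0, 1.0, 1.0)
err = abs(y_end - math.exp(-1.0))
```

Running the driver reproduces y(1) = e^{−1} to roughly the accumulated per-step accuracy implied by the tolerance.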

5. Linearization Techniques

In this section, we discuss various linearization approaches, namely, the PM, LPM, SIM, and QLM, which must be applied to linearize equation (1) before using ABHM equation (10). ABHM equation (10) is a powerful numerical approach for solving differential equations, offering accuracy and efficiency by adapting to the problem’s behavior. However, the nonlinear nature of many IVPs necessitates an initial linearization step to facilitate the application of the ABHM. These techniques transform nonlinear IVPs into linear equations, which are easier to solve. Consider that f is a nonlinear function and F is the linearized form of f. Known terms are evaluated at iteration s, and unknown terms are evaluated at iteration s + 1.

5.1. PM

Applying the PM scheme to equation (1), the linearization of f is presented as follows:
()
where . Substituting equation (41) into equation (10) gives
()
Equation (42) may be written in the matrix notation as
()

Equation (43) represents the Picard adaptive block hybrid method (PM-ABHM) equations.

5.2. LPM

For the LPM scheme, we define
()
where and are the linear and nonlinear terms, respectively. Substituting equation (44) into equation (10) yields
()
where and . Equation (45) can be written as
()
Equation (46) has the matrix form
()
where Lr+p is a matrix of size m × m, and Nr+p is a vector of size m × 1. The approximate solutions are found using the linear partition adaptive block hybrid method (LPM-ABHM)
()
where inv denotes the inverse of the matrix, and is a nonsingular matrix.

5.3. SIM

In the SIM implementation, we express the nonlinear term as a combination of known (at iteration s) and unknown (at iteration s + 1) values of y as
()
where . Thus, by substituting equation (49) into generalized BHM equation (10), and solving for , we have
()
where , and . The simple iteration adaptive block hybrid method (SIM-ABHM) can be written in the matrix form
()
where the matrix is of size m × m, and the vector is of size m × 1. Then, the approximate solutions are
()
where is a nonsingular matrix.

5.4. QLM

In QLM, we define the nonlinear function F(y, y) as
()
Expanding equation (53) using Taylor series approximation around and , we obtain
()
Assuming that the differences between the current and previous iterations and are small and that the higher-order terms are negligible gives
()
Upon the application of QLM to equation (1), we derive the linearized form
()
where and . With this approximation, equation (10) becomes
()
where , and .
The quasi-linearization adaptive block hybrid method (QLM-ABHM) equations in the matrix notation are given by
()
The matrix Φr+p is of size m × m, and the vector Ψr+p is of size m × 1. The approximate solutions are found by solving the equation
()
where is a nonsingular matrix.

6. Numerical Results and Discussion

In this section, we present and analyze the numerical results obtained by implementing the ABHM on various first-order IVPs, with particular focus on demonstrating the effectiveness and accuracy of the adaptive BHM when utilizing the different linearization schemes in the case m = 5 and σ = 0.9. All computational experiments were performed using MATLAB R2021b.

6.1. Example 1

Consider the Riccati equation
()
where the exact solution is given by
()
We define the function f = −y2 + 2y + 1. The ABHM is implemented using the four linearization techniques:
  • PM: .

  • LPM: .

  • SIM: .

  • QLM: .
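For this f, the four linearized forms can be written out and compared directly. The sketch below is illustrative Python (a single implicit Euler step y_new = yn + h·F stands in for the full block equations, and the linearized forms are the standard PM, LPM, SIM, and QLM constructions of Section 5 applied to f = −y² + 2y + 1); it iterates each scheme to convergence and counts the iterations:

```python
f = lambda y: -y ** 2 + 2 * y + 1         # right-hand side of Example 1

def iterate(update, y_start, tol=1e-12, max_it=200):
    """Fixed-point loop: returns (converged value, iteration count)."""
    y, s = y_start, 0
    while s < max_it:
        y_new = update(y)
        s += 1
        if abs(y_new - y) < tol:
            return y_new, s
        y = y_new
    return y, s

yn, h = 0.0, 0.1   # one implicit Euler step from y_n = 0

# PM : lag the whole of f at iteration s
pm  = lambda y: yn + h * f(y)
# LPM: linear part 2y + 1 implicit, nonlinear part -y^2 lagged
lpm = lambda y: (yn + h * (1.0 - y ** 2)) / (1.0 - 2.0 * h)
# SIM: split y^2 as y^(s) * y^(s+1)
sim = lambda y: (yn + h) / (1.0 - 2.0 * h + h * y)
# QLM: f(y^s) + f'(y^s) * (y^(s+1) - y^s), i.e., Newton-like
qlm = lambda y: (yn + h * (y ** 2 + 1.0)) / (1.0 - 2.0 * h + 2.0 * h * y)

results = {name: iterate(g, yn) for name, g in
           [('PM', pm), ('LPM', lpm), ('SIM', sim), ('QLM', qlm)]}
```

In this example, all four iterations converge to the same root of y = yn + h·f(y), with the QLM needing the fewest iterations, consistent with its quadratic convergence.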

Figure 4 shows the numerical and exact solutions for equation (60). It can be observed that the approximate and exact solutions coincide.

Figure 4. Comparison of numerical and exact solutions for Example 1 with h0 = 10−1, Tol1 = 10−8, and Tol2 = 10−13 at tF = 5.

Figure 5 shows the AE for different linearization techniques with varying initial step-sizes and the first user-defined tolerance. As shown in Figure 5(a), the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM achieve an AE ranging from O(10−13) to O(10−17) for h0 = 10−1 and Tol1 = 10−8. Figure 5(b) presents the AE for these methods with h0 = 10−2 and Tol1 = 10−10, where all methods maintain an AE between O(10−13) and O(10−19).

Figure 5. Comparison of absolute errors for the PM, LPM, SIM, and QLM with Tol2 = 10−13 at tF = 5 for Example 1. (a) When h0 = 10−1 and Tol1 = 10−8. (b) When h0 = 10−2 and Tol1 = 10−10.

Table 4 compares the maximum AE for different linearization schemes of the ABHM at Tol2 = 10−13 with varying h0 and Tol1. From Table 4, it is evident that the PM-ABHM, SIM-ABHM, and QLM-ABHM achieve a maximum AE of order O(10−15), while the LPM-ABHM records a maximum AE of order O(10−14) for h0 = 10−1 and Tol1 = 10−8. For h0 = 10−2 and Tol1 = 10−10, QLM-ABHM maintains a maximum AE of order O(10−15), while the PM-ABHM, LPM-ABHM, and SIM-ABHM achieve a maximum AE of order O(10−14). The results show that QLM-ABHM outperforms the other methods, with LPM-ABHM having the highest maximum AE. Table 4 reveals that PM-ABHM is the fastest for both h0 = 10−1 and h0 = 10−2.

Table 4. Comparison of maximum absolute errors for different linearization schemes of the ABHM at Tol2 = 10−13 for various initial step-sizes and first user-defined tolerances for Example 1.
tF h0 Tol2 PM-ABHM LPM-ABHM SIM-ABHM QLM-ABHM
5 10−1 10−8 6.2172 × 10−15 1.5543 × 10−14 6.6613 × 10−15 5.3291 × 10−15
CPU time in seconds 0.012192 0.082196 0.088391 0.052143
  
5 10−2 10−10 1.0658 × 10−14 1.8652 × 10−14 1.2879 × 10−14 6.6613 × 10−15
CPU time in seconds 0.061403 0.255753 0.263201 0.189779

Figure 6 shows the number of iterations per block for different linearization schemes with varying initial step-sizes and first user-defined tolerances. Figure 6(a) illustrates the performance of the PM, LPM, SIM, and QLM schemes for h0 = 10−1 and Tol1 = 10−8. We observe that the maximum number of iterations required is 10 for the LPM-ABHM, while the PM-ABHM and SIM-ABHM schemes require a maximum of eight iterations, and the QLM-ABHM requires between three and four iterations. As shown in Figure 6(b), for a smaller initial step-size and a tighter first user-defined tolerance, the number of iterations per block is reduced to seven for the LPM-ABHM. The PM-ABHM and SIM-ABHM schemes decrease to six iterations, while the QLM-ABHM decreases to between two and three iterations. This indicates that smaller initial step-sizes and tighter first user-defined tolerances decrease the number of iterations per block, leading to rapid convergence of the ABHM.

Details are in the caption following the image
Number of iterations per block for the Picard, linear partition, simple iteration, and quasi-linearization methods with Tol2 = 10−10 for Example 1. (a) When h0 = 10−1 and Tol1 = 10−8. (b) When h0 = 10−2 and Tol1 = 10−10.
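The effect of the step size on the iteration count can be illustrated with a minimal fixed-point sketch; this is not the ABHM itself, whose block equations are not reproduced here. For an implicit stage of the form y = yn + h f(t, y), the Picard map contracts with a factor of roughly h|∂f/∂y|, so a smaller step reaches a fixed tolerance in fewer sweeps. The right-hand side f(t, y) = −y² below is purely illustrative and is not the f of Example 1.

```python
def picard_iteration_count(f, t1, y0, h, tol, max_iter=100):
    """Count Picard (fixed-point) iterations for one implicit Euler stage
    y = y0 + h*f(t1, y), stopping when successive iterates differ by < tol."""
    y_prev = y0
    for r in range(1, max_iter + 1):
        y_next = y0 + h * f(t1, y_prev)
        if abs(y_next - y_prev) < tol:
            return r
        y_prev = y_next
    return max_iter

# Illustrative nonlinear right-hand side (not Example 1's f).
f = lambda t, y: -y * y

# Same tolerance, two step sizes: the contraction factor scales with h,
# so the finer step converges in fewer sweeps.
iters_coarse = picard_iteration_count(f, 0.1, 1.0, 1e-1, 1e-13)
iters_fine = picard_iteration_count(f, 0.01, 1.0, 1e-2, 1e-13)
print(iters_coarse, iters_fine)
```

This reproduces the qualitative trend seen in Figure 6: shrinking h0 (and hence each block's sub-steps) lowers the iteration count per block.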

6.2. Example 2

Consider a special case of the Riccati equation [5]
()
The exact solution for this case is given by
()
We define f = (1/t²)y² + (1/t)y + 1, and its linearized forms are expressed as follows:
  • PM: .

  • LPM: .

  • SIM: .

  • QLM: .
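To make the contrast between these updates concrete, the sketch below resolves a single implicit Euler stage, y = yn + h f(t, y), for this f using the Picard and quasi-linearization updates. It is a hedged stand-in for the hybrid block equations, which are not reproduced here; the starting point (tn, yn) = (1, 0) and step h = 0.1 are illustrative.

```python
def f(t, y):
    # Right-hand side of Example 2: f = y^2/t^2 + y/t + 1
    return y * y / t**2 + y / t + 1.0

def fy(t, y):
    # Partial derivative df/dy, used by the quasi-linearization (QLM) update
    return 2.0 * y / t**2 + 1.0 / t

def picard_step(tn, yn, h, tol=1e-14, max_iter=100):
    """Solve y = yn + h*f(tn+h, y) by Picard iteration; return (y, iterations)."""
    t1, y = tn + h, yn
    for r in range(1, max_iter + 1):
        y_new = yn + h * f(t1, y)
        if abs(y_new - y) < tol:
            return y_new, r
        y = y_new
    return y, max_iter

def qlm_step(tn, yn, h, tol=1e-14, max_iter=100):
    """Solve the same implicit stage with the Newton-type QLM update:
    linearize f about the current iterate and solve the linear equation."""
    t1, y = tn + h, yn
    for r in range(1, max_iter + 1):
        J = fy(t1, y)
        y_new = (yn + h * (f(t1, y) - J * y)) / (1.0 - h * J)
        if abs(y_new - y) < tol:
            return y_new, r
        y = y_new
    return y, max_iter

y_pm, it_pm = picard_step(1.0, 0.0, 0.1)
y_ql, it_ql = qlm_step(1.0, 0.0, 0.1)
print(it_pm, it_ql, abs(y_pm - y_ql))
```

Because QLM linearizes f about the current iterate, it converges quadratically near the solution and needs far fewer sweeps than the linearly convergent Picard update, mirroring the iteration counts reported for QLM-ABHM below.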

Figure 7 depicts the numerical solutions obtained using PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM alongside the exact solution. The numerical solutions match the exact solution.

Comparison of numerical and exact solutions for the implemented Picard, linear partition, simple iteration, and quasi-linearization methods with h0 = 10−1, Tol1 = 10−13, Tol2 = 10−12, and tF = 4 for Example 2.

Figure 8 and Table 5 present a comparison of the AE and the maximum AE achieved using the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM, respectively, for different initial step-sizes and second user-defined tolerances. Figure 8(a) shows that for h0 = 10−1 and Tol2 = 10−12, all methods achieve an AE between O(10−11) and O(10−17). Similarly, Figure 8(b) shows that for h0 = 10−3 and Tol2 = 10−14, the AE ranges from O(10−12) to O(10−17). These results indicate that smaller initial step-sizes and second user-defined tolerances lead to higher precision.

Comparison of absolute errors for the PM, LPM, SIM, and QLM with Tol1 = 10−10 and tF = 4 for Example 2. (a) For h0 = 10−1 and Tol2 = 10−12. (b) For h0 = 10−3 and Tol2 = 10−14.
Table 5. Comparison of the maximum absolute error for the ABHM with Tol1 = 10−13 using the Picard, linear partition, simple iteration, and quasi-linearization methods for Example 2.
tF h0 Tol2 PM-ABHM LPM-ABHM SIM-ABHM QLM-ABHM
4 10−1 10−12 8.8143 × 10−12 1.4779 × 10−12 3.0198 × 10−13 2.1672 × 10−13
CPU time in seconds 0.079389 0.408207 0.362927 0.309272
  
4 10−3 10−14 8.5620 × 10−13 5.0449 × 10−13 4.2633 × 10−13 5.4001 × 10−13
CPU time in seconds 0.121848 0.467590 0.447170 0.297296

From Table 5, it is evident that the maximum AE improves with smaller h0 and Tol2 for all methods, although the improvement is marginal. Regarding CPU time, the PM-ABHM and QLM-ABHM are faster compared to the LPM-ABHM and SIM-ABHM.

Figure 9 compares the number of iterations per block required for all methods with varying h0 and Tol2. Figure 9(a) shows the performance of the PM, LPM, SIM, and QLM for h0 = 10−1 and Tol2 = 10−12. We observe that the maximum number of iterations required is four for PM-ABHM, while LPM-ABHM and SIM-ABHM require between three and four iterations, and the QLM-ABHM scheme requires three iterations. As shown in Figure 9(b), when h0 = 10−3 and Tol2 = 10−14, the number of iterations increases by one for all methods.

Number of iterations per block for the Picard, linear partition, simple iteration, and quasi-linearization methods with Tol1 = 10−13 and tF = 4 for Example 2. (a) For h0 = 10−1 and Tol2 = 10−12. (b) For h0 = 10−3 and Tol2 = 10−14.

6.3. Example 3

We examine the logistic differential equation presented in [70].
()
where ωe represents the growth rate, ωf is a factor that modulates the effect of the carrying capacity, and y0 is the initial population size. The exact solution is given by
()
By applying the ABHM with α = 0.98, ωe = 2, and ωf = 1, we define f = ωe y − ωf y ln(y). The linearized forms of f are as follows:
  • PM: .

  • LPM: .

  • SIM: .

  • QLM: .
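As a check on the model (for the classical integer-order case only; the fractional parameter α = 0.98 used above is not treated here), substituting u = ln(y) into y′ = ωe y − ωf y ln(y) gives the linear equation u′ = ωe − ωf u, whose solution is the Gompertz curve. The snippet below verifies this closed form against the ODE by a centered difference; the initial value y0 = 0.5 is illustrative.

```python
import math

def gompertz_exact(t, y0, we, wf):
    """Exact solution of y' = we*y - wf*y*ln(y): substituting u = ln(y)
    gives the linear ODE u' = we - wf*u, hence a Gompertz curve."""
    A = math.log(y0) - we / wf
    return math.exp(we / wf + A * math.exp(-wf * t))

def f(y, we, wf):
    # Right-hand side: f = we*y - wf*y*ln(y)
    return we * y - wf * y * math.log(y)

# Check that the closed form satisfies the ODE via a centered difference.
we, wf, y0 = 2.0, 1.0, 0.5   # we, wf as in the text; y0 = 0.5 is illustrative
eps = 1e-6
max_resid = 0.0
for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    dydt = (gompertz_exact(t + eps, y0, we, wf)
            - gompertz_exact(t - eps, y0, we, wf)) / (2 * eps)
    resid = abs(dydt - f(gompertz_exact(t, y0, we, wf), we, wf))
    max_resid = max(max_resid, resid)
print(max_resid)
```

The residual is limited only by the finite-difference and rounding error, confirming the closed form used to compute the AEs below.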

Figure 10 presents the numerical solutions obtained using PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM together with the exact solution for Example 3. It is evident that the approximate solutions match the exact solution.

Comparison of numerical and exact solutions for Example 3 with h0 = 10−2, Tol1 = 10−12, Tol2 = 10−12, and tF = 10.

Figure 11 presents the AE for the four techniques for different values of the second user-defined tolerance. Figures 11(a) and 11(b) show that the AE for PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM is less than or equal to O(10−13) for Tol2 = 10−12 and Tol2 = 10−14, respectively. It can be observed that for the same initial step-size but different Tol2, the AE remains within the same order.

Comparison of the absolute errors for the implemented methods PM, LPM, SIM, and QLM with h0 = 10−2, Tol1 = 10−12, and tF = 10 for Example 3. (a) For Tol2 = 10−12. (b) For Tol2 = 10−14.

Table 6 compares the maximum AEs for the ABHMs. For Tol2 = 10−12, PM-ABHM and QLM-ABHM achieve O(10−14) error, while LPM-ABHM and SIM-ABHM have O(10−13). For Tol2 = 10−14, all methods except SIM-ABHM achieve O(10−14). The PM-ABHM and QLM-ABHM are faster than LPM-ABHM and SIM-ABHM.

Table 6. Maximum absolute error for Example 3 using the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM with Tol1 = 10−12.
tF h0 Tol2 PM-ABHM LPM-ABHM SIM-ABHM QLM-ABHM
10 10−2 10−12 2.4869 × 10−14 1.7408 × 10−13 1.4477 × 10−13 9.5923 × 10−14
CPU time in seconds 0.303831 1.071535 1.089501 0.917177
  
10 10−2 10−14 2.3093 × 10−14 4.1744 × 10−14 1.2434 × 10−13 9.5923 × 10−14
CPU time in seconds 0.324150 1.252072 1.112535 0.979220

Figure 12 shows the number of iterations per block using PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM for different values of Tol2. In Figure 12(a), the PM requires between three and four iterations, while LPM-ABHM and SIM-ABHM require between four and five iterations, and QLM-ABHM requires between two and three iterations for Tol2 = 10−12. In Figure 12(b), for Tol2 = 10−14, the number of iterations for PM-ABHM, LPM-ABHM, and SIM-ABHM increases by one, while QLM-ABHM maintains the same number of iterations.

Number of iterations per block for the implemented PM, LPM, SIM, and QLM with h0 = 10−2, Tol1 = 10−12, and tF = 5 for Example 3. (a) Tol2 = 10−12. (b) Tol2 = 10−14.

6.4. Example 4

We analyze the nonlinear IVP
()
The exact solution is
()
Defining f = −(1/3)yety4, we apply the ABHM with the following linearization techniques:
  • PM: .

  • LPM: .

  • SIM: .

  • QLM: .

The exact and numerical solutions for Example 4 are depicted in Figure 13; the numerical solutions closely align with the exact solution.

Comparison of numerical and exact solutions for h0 = 10−1, Tol1 = 10−8, Tol2 = 10−14, and tF = 15 for Example 4.

Figure 14 presents the AE for the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM with varying initial step-sizes and first user-defined tolerances. Figure 14(a) shows that the AE for these methods ranges from O(10−14) to O(10−18) for h0 = 10−1 and Tol1 = 10−8. In Figure 14(b), the AE for all four methods ranges from O(10−14) to O(10−19). Smaller initial step-sizes and first user-defined tolerances thus improve the AE.

Comparison of absolute errors for the implemented methods PM, LPM, SIM, and QLM with Tol2 = 10−12 and tF = 15 for Example 4. (a) For h0 = 10−1 and Tol1 = 10−8. (b) For h0 = 10−3 and Tol1 = 10−10.

A comparison of the maximum AEs is shown in Table 7. The PM-ABHM achieves a maximum AE of O(10−16), while the LPM-ABHM, SIM-ABHM, and QLM-ABHM achieve maximum errors of O(10−15) for h0 = 10−1 and Tol1 = 10−8. For h0 = 10−3 and Tol1 = 10−10, all methods reach a maximum AE of O(10−15). The PM-ABHM consistently provides the smallest maximum AE across varying initial step-sizes and first user-defined tolerances, demonstrating its superior accuracy. The CPU times for each method differ, with PM-ABHM being the fastest.

Table 7. Comparison of maximum absolute errors and CPU times for the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM with Tol2 = 10−14 for Example 4.
tF h0 Tol1 PM-ABHM LPM-ABHM SIM-ABHM QLM-ABHM
15 10−1 10−8 8.8818 × 10−16 1.2768 × 10−15 1.3323 × 10−15 2.2204 × 10−15
CPU time in seconds 0.048587 0.129061 0.114315 0.071829
  
15 10−3 10−10 1.3323 × 10−15 3.3307 × 10−15 2.6645 × 10−15 3.7748 × 10−15
CPU time in seconds 0.131735 0.388458 0.396925 0.297469

Figure 15 shows the number of iterations per block for the four methods with different initial step-sizes and first user-defined tolerance values. In Figure 15(a), the PM-ABHM requires six to seven iterations, while LPM-ABHM and SIM-ABHM need five to six, and QLM-ABHM only three iterations per block for h0 = 10−1 and Tol1 = 10−8. For h0 = 10−3 and Tol1 = 10−10, as shown in Figure 15(b), the number of iterations decreases by one for PM-ABHM, LPM-ABHM, and SIM-ABHM, while QLM-ABHM requires only three iterations.

Number of iterations per block for the implemented PM, LPM, SIM, and QLM with Tol2 = 10−14 and tF = 15 for Example 4. (a) For h0 = 10−1 and Tol1 = 10−8. (b) For h0 = 10−3 and Tol1 = 10−10.

6.5. Example 5

We consider Krogh's nonlinear problem [22].
()
where β1 = 0.2, β2 = 0.2, β3 = 0.3, and β4 = 0.4. The exact solution is
()
We define fi and consider the following linearizations:
  • PM: .

  • LPM: .

  • SIM: .

  • QLM: .

Figure 16 presents the numerical and exact solutions; the graphs of the approximate and exact solutions coincide.

Comparison of numerical and exact solutions for h0 = 10−2, Tol1 = 10−12, Tol2 = 10−12, and tF = 20 for Example 5.

Figure 17 presents a comparative analysis of the AEs for the four implemented methods, using h0 = 10−2, Tol1 = 10−12, Tol2 = 10−12, and tF = 20. Figure 17(a) illustrates the AE for the PM-ABHM: the initial error of O(10−14) decreases consistently, reaching values as low as O(10−20). Figure 17(b) depicts the AE for the LPM-ABHM, which shows a similar trend of error reduction over time, achieving errors in the range of O(10−13) to O(10−17). Figure 17(c) shows the AE for the SIM-ABHM, which reduces the errors to O(10−18). Figure 17(d) presents the AE for the QLM-ABHM, which shows a more consistent reduction in errors than the SIM, with errors decreasing to between O(10−13) and O(10−19). It is evident that all four methods are capable of achieving low AEs.

Comparison of the absolute errors for the implemented PM, LPM, SIM, and QLM with h0 = 10−2, Tol1 = 10−12, and tF = 20 for Example 5. (a) Picard method. (b) Linear partition method. (c) Simple iteration method. (d) Quasi-linearization method.

Figure 18 illustrates the number of iterations per block for the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM at various Tol1 values in Example 5. In Figure 18(a), for Tol1 = 10−10, the PM-ABHM required five to six iterations per block, while the LPM-ABHM and SIM-ABHM maintained around four and five iterations, respectively. The QLM-ABHM consistently performs better, requiring only three iterations. In Figure 18(b), for Tol1 = 10−14, the PM-ABHM, LPM-ABHM, and SIM-ABHM were reduced to between three and four iterations per block. The QLM-ABHM remains the most efficient, maintaining around two to three iterations.

Number of iterations per block for the implemented PM, LPM, SIM, and QLM methods with h0 = 10−2, Tol2 = 10−12, and tF = 20 for Example 5. (a) For Tol1 = 10−10. (b) For Tol1 = 10−14.

6.6. Example 6

The Brusselator is an oscillating system that models an autocatalytic chemical reaction, described by the following system of nonlinear differential equations [23, 71].
()
()
where μ and ν are constants. We define the functions f1 = μ + Y²Z − (ν + 1)Y and f2 = νY − Y²Z. The corresponding linearization techniques are implemented as follows:
  • PM:

    ()

  • LPM:

    ()

  • SIM:

    ()

  • QLM:

    ()
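A minimal sketch of how such a coupled system is iterated in practice is given below: one implicit trapezoidal step (a stand-in for the hybrid block equations, which are not reproduced here) whose stage values are resolved by Picard iteration, using the f1 and f2 defined above. The parameter values μ = 1 and ν = 3 and the initial state (1.5, 3) are illustrative; the text does not fix them.

```python
def f1(Y, Z, mu, nu):
    # Brusselator: f1 = mu + Y^2*Z - (nu + 1)*Y
    return mu + Y * Y * Z - (nu + 1.0) * Y

def f2(Y, Z, mu, nu):
    # Brusselator: f2 = nu*Y - Y^2*Z
    return nu * Y - Y * Y * Z

def trapezoid_picard(Y, Z, h, mu, nu, tol=1e-12, max_iter=50):
    """One implicit trapezoidal step for the coupled system, with the
    nonlinear stage equations resolved by Picard (fixed-point) iteration."""
    gY, gZ = f1(Y, Z, mu, nu), f2(Y, Z, mu, nu)
    Yn, Zn = Y, Z
    for _ in range(max_iter):
        Y_new = Y + 0.5 * h * (gY + f1(Yn, Zn, mu, nu))
        Z_new = Z + 0.5 * h * (gZ + f2(Yn, Zn, mu, nu))
        if abs(Y_new - Yn) + abs(Z_new - Zn) < tol:
            return Y_new, Z_new
        Yn, Zn = Y_new, Z_new
    return Yn, Zn

def integrate(h, T, mu=1.0, nu=3.0, Y0=1.5, Z0=3.0):
    Y, Z = Y0, Z0
    for _ in range(int(round(T / h))):
        Y, Z = trapezoid_picard(Y, Z, h, mu, nu)
    return Y, Z

# Halving the step size should shrink the difference between runs,
# consistent with the trapezoidal rule's second-order accuracy.
Y1, Z1 = integrate(0.01, 2.0)
Y2, Z2 = integrate(0.005, 2.0)
print(abs(Y1 - Y2), abs(Z1 - Z2))
```

The Picard loop here plays the role of the PM update; swapping in a Newton-type update for the stage equations would give the QLM analog.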

Figure 19 shows that the numerical solutions obtained by PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM agree closely with the reference solution computed using the ode45 MATLAB solver, since no exact solution is available. This close agreement validates the effectiveness of the proposed methods in accurately solving Example 6.

Comparison of numerical solutions from PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM with the ode45 solver in MATLAB for h0 = 10−1, Tol1 = 10−10, Tol2 = 10−14, and tF = 50 in Example 6.

Table 8 compares the maximum AE of the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM at tF = 20 with existing block hybrid methods, Opthbm [23] and EMOHM [71]. The results indicate that PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM consistently achieve the lowest errors across different h0 and Tol1, outperforming Opthbm and EMOHM. Additionally, PM-ABHM demonstrates the best computational efficiency in terms of CPU time.

Table 8. Comparison of the maximum absolute error using the PM-ABHM, LPM-ABHM, SIM-ABHM, QLM-ABHM, Opthbm [23], and EMOHM [71] at tF = 20 with Tol2 = 10−14 for Example 6.
h0 Tol1 PM-ABHM LPM-ABHM SIM-ABHM QLM-ABHM EMOHM [71] Opthbm [23]
10−3 10−6 1.3412 × 10−11 1.3421 × 10−11 1.3413 × 10−11 1.3408 × 10−11 1.53089 × 10−9 1.2513 × 10−8
CPU 0.059026 0.300385 0.314092 0.325632 0.077
  
10−4 10−7 1.7031 × 10−13 1.6931 × 10−13 1.7025 × 10−13 1.6875 × 10−13 9.6196 × 10−10
CPU 0.102599 0.450135 0.597188 0.515766 0.107
  
10−1 10−10 5.8287 × 10−15 1.1324 × 10−14 1.2434 × 10−14 7.6050 × 10−15
CPU 0.698620 2.923950 3.880332 3.542907

7. On the Application of the ABHM

We consider convection in a fluid layer that is heated from below [72, 73].
()
()
()
where Y, Z, and W are the convective amplitudes, Pr is the Prandtl number, Ra is the Rayleigh number, and a is the wave number. We used Pr = 10, Ra = 27π⁴/4, and . We implemented the ode45 solver in MATLAB to solve equations (76)–(78) and compared the results with the numerical solutions obtained using the ABHM. Figure 20 shows that the ode45 solutions align with the PM-ABHM, LPM-ABHM, SIM-ABHM, and QLM-ABHM solutions.
ode45 solution (cyan circle) and the ABHM solutions: PM (black solid line), LPM (red dash-dot line), SIM (blue dashed line), and QLM (magenta dotted line) with h0 = 10−1, Tol1 = 10−12, Tol2 = 10−8, and tF = 5.

8. Conclusion

In this study, we conducted a comprehensive investigation of the performance of the ABHM with four linearization techniques for solving first-order IVPs under various initial step-sizes and user-defined tolerance values. We used the PM, LPM, SIM, and QLM to linearize the differential equations and evaluated the AE, maximum AE, and the number of iterations per block for each method. This represents a significant step forward in optimizing linearization techniques for the ABHM, addressing a critical gap in the numerical analysis of nonlinear differential equations. The key findings of our study are as follows.
  • For the same initial step-size, all methods exhibit comparable effectiveness, indicating their suitability for high precision.

  • Choosing the initial step-size and first user-defined tolerance sufficiently small reduces the number of iterations per block in all ABHMs.

  • The PM-ABHM and QLM-ABHM consistently achieve the highest accuracy and efficiency, making them highly effective for applications demanding high precision.

  • The QLM-ABHM requires the fewest iterations per block while achieving high accuracy and efficiency.

  • The SIM-ABHM is slightly better than the LPM-ABHM in terms of accuracy and the number of iterations required per block.

  • In terms of CPU time required, we noted that the PM-ABHM and QLM-ABHM are faster than SIM-ABHM and LPM-ABHM.

The insights gained from this work not only enhance the theoretical understanding of the ABHM but also provide practical guidelines for selecting effective linearization methods. Future research could implement rational optimal grid points within ABHMs combined with linearization techniques, apply these methods to higher-order initial and boundary value problems, and enhance their performance through more efficient tolerance settings. Additionally, developing new linearization techniques that surpass existing methods in efficiency would represent a significant contribution.

Nomenclature

  • EST: Error estimation function
  • Tol1: First user-defined tolerance
  • Tol2: Second user-defined tolerance

Greek symbols

  • σ: Safety factor

Abbreviations

  • AE: Absolute error
  • AEE: Absolute error estimate
  • ABHM: Adaptive block hybrid method
  • BHM: Block hybrid method
  • IVP: Initial value problem
  • PM: Picard method
  • SIM: Simple iteration method
  • LPM: Linear partition method
  • LTE: Local truncation error
  • QLM: Quasi-linearization method
Conflicts of Interest

The authors declare no conflicts of interest.

Funding

No funding was received for this manuscript.

Acknowledgments

The authors are grateful to the University of KwaZulu-Natal and the University of Zululand. The authors sincerely thank the editors and reviewers for their valuable and constructive feedback.

Data Availability Statement

The data that support the findings of this study are available within the article.
