An efficient family of Steffensen-type methods with memory for solving systems of nonlinear equations
Abstract
The article discusses derivative-free algorithms, with and without memory, for the numerical solution of nonlinear systems. We propose a family of fifth- and sixth-order schemes and extend them to algorithms with memory. We further discuss the convergence and computational efficiency of these algorithms. Numerical examples involving a mixed Hammerstein integral equation, a discretized nonlinear ordinary differential equation, and Fisher's partial differential equation with Neumann boundary conditions are presented to demonstrate the convergence and efficiency of these schemes. Finally, some numerical results are included to examine the performance of the developed methods.
1 INTRODUCTION
This is a linear operator and satisfies (3). For , the algorithm in (2) reduces to the famous Steffensen method.13
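For readers who prefer a computational view, the following is a minimal Python sketch of the classical Steffensen iteration for a system. The divided difference below uses the standard componentwise first-order definition, which may differ in detail from the operator defined in (3), and the test function at the end is purely illustrative.

```python
import numpy as np

def divided_difference(F, x, y):
    """Standard componentwise first-order divided difference [x, y; F]."""
    n = x.size
    DD = np.empty((n, n))
    for j in range(n):
        u = np.concatenate((x[:j + 1], y[j + 1:]))  # (x_1,...,x_j, y_{j+1},...,y_n)
        v = np.concatenate((x[:j], y[j:]))          # (x_1,...,x_{j-1}, y_j,...,y_n)
        DD[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return DD

def steffensen(F, x0, tol=1e-12, maxit=100):
    """Classical Steffensen step: x_{k+1} = x_k - [x_k, x_k + F(x_k); F]^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        M = divided_difference(F, x, x + Fx)
        x = x - np.linalg.solve(M, Fx)  # solve a linear system instead of inverting M
    return x

# Illustrative test: F(x, y) = (x^2 + y - 2, x + y^2 - 2), with root (1, 1).
F = lambda v: np.array([v[0]**2 + v[1] - 2.0, v[0] + v[1]**2 - 2.0])
print(steffensen(F, [0.8, 1.2]))
```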
Different approaches have been used by researchers to construct computationally efficient root-finding algorithms, both for scalar nonlinear equations and for nonlinear systems. Several iterative methods have been constructed using the composition approach, quadrature rules, the Adomian decomposition approach, and the weight-function approach, to name a few.
Several higher-order derivative-free schemes have been proposed in the literature. Liu et al.14 and Zheng et al.15 suggested fourth-order derivative-free methods. Grau et al.16 suggested fourth- and sixth-order methods, and Sharma and Gupta17 proposed a family of sixth-order derivative-free methods. A derivative-free family of seventh-order methods was proposed by Wang and Zhang.18 Recently, Qiu et al.19 proposed a hybrid genetic algorithm that combines the advantages of genetic algorithms and derivative-free iterative methods to solve a class of complex nonlinear functions with discontinuity, nondifferentiability, and multiple roots. This suggests that derivative-free algorithms can be extended to the area of soft computing.
Recently, one technique used to achieve higher efficiency is to develop memory-based methods. This concept goes back to Traub,6 who first introduced the classification of algorithms with and without memory. Although this area has been explored extensively for scalar nonlinear equations, very little progress has been made for nonlinear systems. Some of the existing memory-based derivative-free methods can be found in the works of Ren and Argyros,20 Chicharro et al.,21 and Ahmad et al.22 Here, we propose new multipoint schemes, with and without memory, that are derivative-free and designed for the numerical solution of nonlinear systems.
2 DEVELOPMENT AND CONVERGENCE INVESTIGATION
Here, , and r are arbitrary constants, I denotes an identity matrix of order n, , where , , and , where and . Here, and d are arbitrary nonzero parameters such that either or and either or and represents the inverse of .
To discuss the convergence of proposed schemes, we recall the following Taylor series expansion of vector functions.3
Lemma 1. Let F: be a t-times Fréchet differentiable function in an open convex set . Then, for any , the following expression holds:
where and .
We now discuss the convergence of the algorithm (6) in the following theorem.
Theorem 1. Let : be sufficiently Fréchet differentiable in an open convex set S containing the solution "" of F. Assume that is continuous and nonsingular at x = and that is an initial approximation sufficiently close to "." Then, for all , such that either or and either or , the sequence of approximations generated by scheme (6) has local order of convergence five for and six for .
Proof. Let and .
Using from (8), one obtains
Let , and in (7); then, with the help of (9)–(11) and the results in Reference 12, one gets
On premultiplying (8) by (15) and simplifying with the help of (12) and (13), one obtains
Using (16) in the first step of algorithm (6), one gets on simplification
Taylor series expansion of about “” gives
Premultiplying (18) by (15) and substituting in the second step of scheme (6), one obtains
Now, the Taylor series of about “” gives
Let , and . Using (20), one gets
Using , and in (7), and (9)–(11), one gets
Using (15), (20), and (23) in the third step of scheme (6), one can write
In order to attain a higher order of convergence for the proposed scheme (6), the parameters , and r should be chosen such that the coefficients , are zero. Solving these equations simultaneously results in . Substituting the values of into the error equation (24) gives
It can easily be seen that the convergence order of the proposed algorithm is five for and six for . For , the error equation (25) can be simplified with the help of (12), (13), (17), and (19), and one gets
Using , the error equation, simplified with the help of (12), (13), (17), (19), (21), and (22), becomes
Remark 1. The proposed algorithms require the evaluation of only two divided differences, at most seven evaluations of F, and the inversion of a single matrix per iteration.
3 METHODS WITH MEMORY
In the literature, several techniques are used to increase the order of convergence. One such methodology is to convert a method without memory into a method with memory. As the name suggests, this is done with the help of current and previously computed data stored in memory.
Now, if we are allowed to choose or in the error equations (26) and (27), one can observe that the order of convergence of the scheme becomes six and seven, respectively. This acceleration requires knowledge of "" in advance, and in the absence of this information it cannot be attained exactly. However, the acceleration can be achieved partially by using an appropriate approximation of constructed from the current and previously acquired information.
Here, , , and , where and are arbitrary constants which satisfy the condition , and is given by (28). Note that the value of depends on quantities from preceding steps, and hence (29) is a scheme with memory. In the following theorem, we establish the R-order of convergence of the scheme (29).
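As a rough illustration of how the memory is used in practice, the snippet below recomputes a free parameter matrix at every iteration from the two most recent iterates (reusing the divided_difference helper from the Steffensen sketch in the Introduction). The particular update B_k = -[x_k, x_{k-1}; F]^{-1} is a common choice in the literature on methods with memory and is only a stand-in for the actual formula (28) of this paper.

```python
import numpy as np

def update_parameter_matrix(F, x_curr, x_prev):
    """Self-accelerating parameter: B_k = -[x_k, x_{k-1}; F]^{-1}.

    This approximates -F'(alpha)^{-1} increasingly well as the iterates
    approach the root, which is what produces the extra R-order; it stands
    in for the paper's actual update (28)."""
    DD = divided_difference(F, x_curr, x_prev)  # [x_k, x_{k-1}; F]
    return -np.linalg.inv(DD)
```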
Theorem 2. Let F: be a sufficiently Fréchet differentiable function in a convex set that contains the zero "" of . Assume is continuous and nonsingular at "" and that the parameter matrix of the algorithm is recursively calculated using (28). Then, the R-order of convergence of the scheme (29), for and , is 5.19, and for and , is 5.54, where , such that either or . Further, for and , the algorithm converges with R-order of convergence 6.16, and for , the R-order of convergence is 6.31, where , such that either or . Furthermore, for , and , the R-order of convergence is 6.46, and for and , the R-order of convergence is 6.60.
Proof. Assume that the scheme (29) generates a sequence of approximations which converges to the zero "" of F and that its R-order is ; thus, one can write
Here, the sequence approaches the asymptotic error constant of the scheme when , and symbolizes that and have magnitudes of the same order. Thus, one can write
Replacing j by in (15), one obtains
Similarly, on replacing j by in (12) and (13), one gets
Using the value of , and rewriting the error equation (26), one can write
Using (33) in (34) and letting , with the help of (31), one can write
Comparing the exponents of in (31) and (35), one gets the quadratic equation
Solving (36), one gets the positive root 5.19, which gives an estimate of the lower bound of the R-order of convergence of the algorithm (29) for all such that either or and .
Next, if the matrix then, replacing j by j-1 in (15), one obtains
Substituting in (12) and (13), one obtains
Using (34) in (38), as explained earlier, one can write
On comparing exponents of using (31) and (39), one gets
The positive root of this quadratic equation (40) is 5.54. This gives the lower bound of the R-order of convergence of scheme (29) for all such that either or and .
Next, to explore the convergence behavior of the algorithm (29) for , we use (33) in the error equation (27) to obtain
From (33) and (41), one can write
Comparing the exponents of in (31) and (42), one can write the quadratic equation
A positive solution of (43) is 6.16, which gives the lower bound of the R-order of convergence of the iterative scheme (29) for all such that either or .
Next, on substituting (38) in (41), we can write
From (31) and (44), we compare the exponents of . This yields the quadratic equation
The positive root of the equation is 6.31, which is the lower bound of the R-order of convergence of the scheme (29) for all such that either or .
Now, for and using (15) for , one can write
Using (12) and (13) for , one gets
Using (46), the error equation (41) can be written as
Using (38) and (47) in (48), one can write
On comparing exponents of in (31) and (49), one gets the quadratic equation
Solving Equation (50), one gets the positive root as 6.46. This provides a lower bound of the R-order of convergence of the scheme (29).
Further, we consider and using (15), one can write
Simplifying (51) and using (12) and (13), one gets
The error equation (41) with the help of (51) can be simplified as
Using (38) and (52) in (53), one can write
Comparing the exponents of from (31) and (54) yields the quadratic equation
The positive root of (55) is 6.60, which estimates the lower bound of the R-order of convergence of the scheme (29).
3.1 Particular cases of family (29)
Here, let the ith scheme of order be represented by .
For and , where and are arbitrary constants such that , and is given by (28), , , where c and d are arbitrary constants such that either or , the method is referred to as . For , we denote the method by .
Subcase (i) For and , where and are arbitrary constants that satisfy the condition , and is given by (28), , , where c and d are arbitrary constants such that either or , the method is referred to as . For , the method is denoted by .
Subcase (ii) For , , and , where , , and are arbitrary constants satisfying the condition and , and and are given by (28), the method is denoted by . For , the method is referred to as .
Subcase (iii) For , , and , where , , , and are arbitrary constants which satisfy the condition and , and and are given by (28), the method is referred to as . For , we denote the method by .
4 COMPUTATIONAL EFFICIENCY
Among the various parameters that determine the quality of an algorithm, one is its computational efficiency, which is measured using the efficiency index.4 The computational efficiency index is defined by , where is the convergence order and C denotes the computational cost per step. In the case of a nonlinear system of n equations in n unknowns, the computational cost12 per step has been modified and is calculated using
Here, is the number of scalar function evaluations required for F and the divided difference , and is the number of products and quotients required in each step. To evaluate , we require the ratios and . Here, represents the ratio of the number of products used to evaluate the function F to the number of scalar functions in F, whereas is the ratio of the time taken by one quotient to the time taken by one product.10
We recall that n scalar functions are evaluated for F and for the divided difference of F (see (5)), and and are computed additionally. Also, we add quotients for each new divided difference; products are used for the multiplication of a matrix by a vector, whereas n products are needed for the multiplication of a vector by a scalar.
Another aspect that contributes to the computational cost is the number of products and quotients used to evaluate an inverse operator. Rather than computing an inverse operator explicitly, we solve a linear system: the LU decomposition requires products and quotients, and the subsequent solution of two triangular linear systems requires products and n quotients.
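As a small illustration, the helper below assembles a cost of this form and evaluates the efficiency index. The operation counts for the LU factorization and the two triangular solves are the ones commonly used in this line of work and are stated here as assumptions, since the exact counts appear in the formulas above; l is the quotient-to-product time ratio.

```python
def lu_solve_cost(n, l):
    """Cost, in product units, of solving one n x n linear system via LU.

    Commonly used counts: n(n-1)(2n-1)/6 products and n(n-1)/2 quotients for
    the LU factorization, plus n(n-1) products and n quotients for the two
    triangular solves."""
    products = n * (n - 1) * (2 * n - 1) / 6 + n * (n - 1)
    quotients = n * (n - 1) / 2 + n
    return products + l * quotients

def efficiency_index(rho, cost):
    """Computational efficiency index E = rho**(1/C)."""
    return rho ** (1.0 / cost)

# Example: a sixth-order scheme whose total cost per step is 7154 product
# units (cf. Table 5) has E = 6**(1/7154) ~ 1.00025.
print(efficiency_index(6, 7154))
```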
The computational efficiency of the proposed methods is discussed by comparing them with some existing derivative-free schemes with and without memory. The derivative-free algorithms without and with memory considered are SA, SA, SA, and SA, proposed by Sharma and Arora,24 which require the evaluation of five functions, two divided differences, and one matrix inversion per step. Another derivative-free method without memory, GGN, is due to Grau et al.12 and requires the evaluation of five functions, two divided differences, and two matrix inversions per step. These schemes are given as follows:
Sharma and Arora method (SA):
,
,
where, , , and are arbitrary constants.
Grau et al. method (GGN):
,
,
where .
Sharma and Arora method (SA):
,
,
Sharma and Arora method (SA):
,
Sharma and Arora method (SA):
,
In the methods SA, SA, and SA mentioned above,
, , and is an arbitrary constant for SA and SA, and , for SA.
For and , the methods in (6) are referred to as , respectively.
Using the definition and taking into account all aspects of the computational cost discussed above, the values of and (the computational cost and efficiency index of scheme , respectively) are given in Table 1.
Methods | Cost C | Index E
---|---|---
SA | ||
GGN | ||
SA | ||
SA | ||
SA |
4.1 Comparison between efficiencies
The comparison of the efficiencies of the various new schemes with the existing schemes is tabulated in Table 2, whereas the comparison of the new methods among themselves is tabulated in Table 3. These results can be easily verified using the ratio given in (58).
SA | |||||
---|---|---|---|---|---|
– | * | * | * | * | * | |
– | * | * | * | * | ||
– | * | * | * | |||
– | * | * | ||||
– | * | |||||
– |
Note 1. In Table 3 “*” indicates that the comparison of method has been done with , so the comparison of with is not shown.
Note 2. The entries in Table 3 indicate the values of “n” for which one method is more efficient than the other method, for example, the method is more efficient than the method for all .
Note 3. For the values of “n” other than the ones mentioned in table, the comparison depends on and “n.”
5 NUMERICAL RESULTS
In this section, we discuss some numerical examples to examine the convergence behavior of the proposed schemes. The new methods , , , , , and , for particular values of the parameters, have been compared with the existing schemes , , , and . The values of the parameters used for the existing schemes , , and have been kept the same as those used by the authors in Reference 24.
To compute the values and l for the numerical examples, we express the evaluation cost of the elementary functions in terms of products. This depends on the arithmetic used, the computer, and the software27 (www.mpfr.org/mpfr-2.1.0/timings.html). These programs have been run on a processor with the following specifications: Intel(R) Core (TM) i5-3210M CPU @2.50 GHz (64-bit machine), Microsoft Windows 7 (2009). The results are given in Table 4, which shows estimates of the cost of the elementary functions in product units, where the running time of one product is measured in milliseconds (ms). From Table 4, one can observe that the time taken by one quotient is twice the time taken by one product, and hence . The following numerical examples are considered to verify the results.
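The sketch below shows one way such cost estimates can be produced: each elementary operation is timed and then normalized by the running time of one product. The list of operations and the operand values are illustrative only; the actual functions, hardware, and figures behind Table 4 are those reported by the authors.

```python
import timeit

ops = {
    "product":  "x * y",
    "quotient": "x / y",
    "sqrt":     "math.sqrt(x)",
    "exp":      "math.exp(x)",
    "log":      "math.log(x)",
}
setup = "import math; x, y = 1.234567, 7.654321"

# Time each operation, then express its cost in product units.
times = {name: timeit.timeit(stmt, setup=setup, number=10**6) for name, stmt in ops.items()}
unit = times["product"]                                 # running time of one product
cost = {name: round(t / unit, 2) for name, t in times.items()}
print(cost)                                             # e.g. quotient ~ 2 product units
```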
Function | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
CPU-time | 0.04212 | 0.08424 | 0.03822 | 2.7924 | 2.4383 | 3.0093 | 3.0061 | 5.1387 | 4.9951 | 4.9702 | 5.6972 |
Cost | 1 | 2 | 0.91 | 66.30 | 57.89 | 71.45 | 71.44 | 122 | 118.59 | 118.00 | 135.26 |
Problem 1. Here, we study the mixed Hammerstein integral equation:3
The above integral equation is discretized into a finite-dimensional problem with the help of the Gauss–Legendre quadrature formula, with abscissas and weights determined for . Let denote the approximation of . This results in the nonlinear system of equations given below:
The solution of the problem is
, which is correct up to 16 decimal places. The methods are tested using the parameters and the initial approximation , and the results are displayed in Table 5.
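For illustration, the sketch below carries out the Gauss–Legendre discretization for a classical version of the mixed Hammerstein equation, x(s) = 1 + (1/5) * integral over [0,1] of G(s,t) x(t)^3 dt with the Green's-function kernel G; the kernel, nonlinearity, and number of nodes are assumptions standing in for the exact equation and parameters of Problem 1.

```python
import numpy as np

m = 8                                          # assumed number of quadrature nodes
t, w = np.polynomial.legendre.leggauss(m)      # nodes/weights on [-1, 1]
t, w = 0.5 * (t + 1.0), 0.5 * w                # shift to [0, 1]

# Green's-function kernel G(s, t) of the classical Hammerstein test problem.
G = np.where(t[:, None] <= t[None, :],
             t[:, None] * (1.0 - t[None, :]),
             t[None, :] * (1.0 - t[:, None]))

def F(x):
    """Residual of the discretized system: F_i(x) = x_i - 1 - (1/5) * sum_j G_ij w_j x_j^3."""
    return x - 1.0 - 0.2 * (G * w) @ x**3

x0 = np.ones(m)                                # an illustrative initial approximation
print(np.linalg.norm(F(x0)))                   # residual at the initial guess
```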
Methods | Order | Error | Itr | Cost C | Index E | COC | e-Time
---|---|---|---|---|---|---|---
5 | 3.116 (181) | 5 | 6854 | 1.000234844 | 5.000 | 7.190 | |
5.54 | 6.859 (226) | 5 | 7022 | 1.00024383 | 5.645 | 7.206 | |
6 | 2.482 (283) | 5 | 7154 | 1.00025049 | 6.000 | 6.806 | |
SA | 6 | 1.779 (245) | 5 | 7166 | 1.00025007 | 6.000 | 7.610 |
GGN | 6 | 1.428 (7) | 7 | 7324 | 1.00024467 | 6.000 | 10.364 |
6.16 | 5.490 (310) | 4 | 7310 | 1.00024874 | 6.163 | 5.884 | |
SA | 6.16 | 3.190 (272) | 5 | 7322 | 1.00024833 | 6.162 | 6.625 |
6.46 | 3.881 (349) | 4 | 7478 | 1.00024951 | 6.521 | 6.104 | |
SA | 6.46 | 1.800 (299) | 5 | 7922 | 1.00023553 | 6.464 | 6.407 |
6.60 | 1.254 (363) | 4 | 7322 | 1.00025776 | 6.701 | 5.203 | |
SA | 6.60 | 2.054 (318) | 4 | 8078 | 1.00023363 | 6.615 | 6.121 |
Problem 2. Now, we discuss a system of fifty equations given by:
Methods | Order | Error | Itr | Cost C | Index E | COC | e-Time
---|---|---|---|---|---|---|---
5 | 2.828 (28) | 6 | 683,625 | 1.0000023543 | 5.000 | 481.21 | |
5.54 | 5.996 (55) | 5 | 686,225 | 1.0000024948 | 5.646 | 509.44 | |
6 | 1.975 (45) | 6 | 688,675 | 1.0000026018 | 6.000 | 540.02 | |
SA | 6 | 1.887 (43) | 6 | 688,725 | 1.0000026016 | 6.000 | 596.58 |
GGN | 6 | 2.866 (48) | 6 | 723,900 | 1.0000024752 | 6.000 | 640.85 |
6.16 | 4.650 (49) | 5 | 691,225 | 1.0000026302 | 6.162 | 346.76 | |
SA | 6.16 | 3.511 (47) | 6 | 691,275 | 1.0000026300 | 6.162 | 481.42 |
6.46 | 6.862 (85) | 5 | 693,825 | 1.0000026889 | 6.541 | 417.29 | |
SA | 6.46 | 2.310 (65) | 5 | 701,375 | 1.0000026600 | 6.463 | 452.35 |
6.60 | 1.855 (86) | 5 | 691,275 | 1.0000027298 | 6.707 | 443.78 | |
SA | 6.60 | 1.271 (67) | 5 | 703,925 | 1.0000026808 | 6.619 | 489.79 |
Problem 3. Now, we examine the following boundary value problem:
Assuming that the interval is partitioned as , , we define . The problem is discretized using the numerical formulas for the first and second derivatives given by:
This yields a system of nonlinear equations in variables:
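A generic sketch of this discretization is given below for a second-order boundary value problem written as y'' = f(t, y, y') with Dirichlet data; the central-difference formulas are the ones quoted above, while the particular right-hand side, interval, and boundary values of Problem 3 are those stated in the paper.

```python
import numpy as np

def bvp_residual(y_inner, f, a, b, ya, yb, m):
    """Residual of the central-difference discretization of y'' = f(t, y, y')
    on [a, b] with y(a) = ya, y(b) = yb, using m subintervals.

    y_inner holds the m - 1 unknown interior values; a root of this residual
    is the discrete solution of the BVP."""
    h = (b - a) / m
    t = a + h * np.arange(1, m)                      # interior grid points
    y = np.concatenate(([ya], y_inner, [yb]))        # attach boundary values
    d1 = (y[2:] - y[:-2]) / (2.0 * h)                # y'(t_i)  ~ (y_{i+1} - y_{i-1}) / (2h)
    d2 = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2     # y''(t_i) ~ (y_{i-1} - 2y_i + y_{i+1}) / h^2
    return d2 - f(t, y[1:-1], d1)
```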
Methods | Order | Error | Itr | Cost C | Index E | COC | e-Time
---|---|---|---|---|---|---|---
5 | 2.826 (250) | 5 | 3,402,900 | 1.000000472961 | 5.000 | 76.48 | |
5.54 | 7.837 (308) | 4 | 3,442,899 | 1.000000497253 | 5.648 | 72.48 | |
6 | 1.872 (412) | 4 | 3,482,301 | 1.000000514533 | 6.001 | 76.15 | |
SA | 6 | 9.243 (385) | 4 | 3,482,500 | 1.000000514504 | 6.000 | 84.03 |
GGN | 6 | 4.117 (325) | 4 | 6,009,402 | 1.000000298159 | 6.003 | 86.31 |
6.16 | 1.411 (448) | 4 | 3,522,101 | 1.000000516191 | 6.164 | 72.61 | |
SA | 6.16 | 5.067 (423) | 4 | 3,522,300 | 1.000000516161 | 6.163 | 78.15 |
6.46 | 4.356 (504) | 4 | 3,562,100 | 1.000000523744 | 6.539 | 73.74 | |
SA | 6.46 | 2.743 (488) | 4 | 3,681,102 | 1.000000506812 | 6.466 | 85.24 |
6.60 | 1.453 (528) | 4 | 3,522,300 | 1.000000535749 | 6.699 | 72.48 | |
SA | 6.60 | 4.591 (514) | 4 | 3,720,902 | 1.000000507153 | 6.610 | 89.17 |
Problem 4. Now, we focus on the Van der Pol equation,28, 29 which governs the flow of current in a vacuum tube and is defined by:
This yields a system of nonlinear equations of size :
The values of the parameters are . Numerical results for the problem can be seen in Table 8.
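The same finite-difference template sketched for Problem 3 applies here once the Van der Pol equation y'' - mu (1 - y^2) y' + y = 0 is written as y'' = f(t, y, y'); the value of mu, the interval, the boundary data, and the mesh below are illustrative assumptions, not the exact settings of Problem 4.

```python
import numpy as np

mu = 0.5                                                  # assumed damping parameter
f_vdp = lambda t, y, dy: mu * (1.0 - y**2) * dy - y       # right-hand side of y'' = f(t, y, y')

m = 100                                                   # assumed number of subintervals
y0 = np.linspace(0.0, 1.0, m + 1)[1:-1]                   # assumed initial guess at interior nodes
r = bvp_residual(y0, f_vdp, a=0.0, b=2.0, ya=0.0, yb=1.0, m=m)  # reuses the Problem 3 sketch
print(np.linalg.norm(r))
```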
Methods | Order | Error | Itr | Cost C | Index E | COC | e-Time
---|---|---|---|---|---|---|---
5 | 4.269 (104) | 5 | 111,975 | 1.000014373 | 4.998 | 5.807 | |
5.54 | 2.898 (114) | 5 | 114,575 | 1.00001494224 | 5.012 | 6.306 | |
6 | 4.419 (164) | 5 | 117,025 | 1.0000153110 | 6.011 | 6.686 | |
SA | 6 | 2.712 (149) | 5 | 117,075 | 1.0000153045 | 6.009 | 7.451 |
GGN | 6 | 1.692 (61) | 6 | 152,250 | 1.00001176860 | 4.014 | 8.123 |
6.16 | 6.569 (185) | 6 | 119,575 | 1.00001520460 | 6.165 | 6.258 | |
SA | 6.16 | 4.881 (174) | 5 | 119,625 | 1.0000151983 | 6.164 | 7.125 |
6.46 | 7.672 (187) | 5 | 122,175 | 1.00001527026 | 6.003 | 6.231 | |
SA | 6.46 | 1.495 (201) | 5 | 129,725 | 1.0000143815 | 6.470 | 7.896 |
6.60 | 3.752 (194) | 5 | 119,625 | 1.00001577500 | 5.999 | 6.208 | |
SA | 6.60 | 2.235 (213) | 5 | 132,275 | 1.00001426636 | 6.614 | 7.458 |
Problem 5. A nonlinear partial differential equation can be solved with the help of a finite difference discretization, which yields a system of nonlinear algebraic equations. Here, we consider the nonlinear PDE known as Fisher's equation30 with homogeneous Neumann boundary conditions and diffusion coefficient D, given below:
Assume M and N to be the numbers of partitions in the x and t directions, with respective step sizes h and k. Let denote the exact solution of the given system and its approximate solution at the grid points of the mesh. Applying a backward difference to and central differences to the other terms, that is, , and , to solve the PDE with the values and , gives rise to a nonlinear system of equations of size , where , . Here, we take two different initial guesses and . This results in a system of size 400, which is solved using the schemes mentioned earlier.
In the case of PDEs, since the required precision of the solution is low, the stopping criterion used is . We count the iterations and the mean CPU time (e-Time) used by the schemes to reach the stopping criterion; the e-Time is averaged over 25 runs of the program. The results are displayed in Table 9, and the approximate solution of the system can be seen in Figure 1.
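A sketch of the resulting nonlinear residual for one implicit time step is given below, using the standard form u_t = D u_xx + u(1 - u) of Fisher's equation and imposing the Neumann conditions through reflected ghost points; the grid sizes, diffusion coefficient, and initial data of Problem 5 are those stated in the paper.

```python
import numpy as np

def fisher_residual(u_new, u_old, D, h, k):
    """Residual of one backward-in-time, central-in-space step for Fisher's
    equation u_t = D u_xx + u(1 - u) with homogeneous Neumann conditions.

    Each time step requires solving fisher_residual(u_new, ...) = 0 for u_new,
    which is the nonlinear system handed to the iterative schemes."""
    # Homogeneous Neumann BCs: ghost values mirror the first interior neighbours.
    u_ext = np.concatenate(([u_new[1]], u_new, [u_new[-2]]))
    u_xx = (u_ext[:-2] - 2.0 * u_new + u_ext[2:]) / h**2
    return (u_new - u_old) / k - D * u_xx - u_new * (1.0 - u_new)
```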
[Figure 1. Approximate solution of the system for Problem 5.]
Methods | Itr (first initial guess) | e-Time (first initial guess) | Itr (second initial guess) | e-Time (second initial guess)
---|---|---|---|---
3 | 9.81 | 7 | 89.25 | |
3 | 8.52 | 4 | 80.24 | |
3 | 8.06 | Diverges | – | |
SA | 6 | 11.65 | Diverges | – |
GGN | 3 | 10.64 | 4 | 100.25 |
3 | 7.1 | Diverges | – | |
SA | Diverges | – | 4 | 82.17 |
3 | 7.27 | 4 | 72.14 | |
SA | Diverges | – | 4 | 86.12 |
3 | 7.17 | 4 | 70.16 | |
SA | Diverges | – | 3 | 85.75 |
6 BASINS OF ATTRACTION
The dynamical behavior of the proposed algorithms is examined by generating their basins of attraction and comparing them with those of the existing methods mentioned earlier. We examine the following test problems, which are systems of polynomial equations.
Example 1.
Example 2.
Example 3.
To generate the basins of attraction, we take a square of points containing all solutions of the system of equations. The iterative algorithm is then applied with every point of the square as an initial point. We assign a color to each point depending on the root to which the corresponding orbit of the scheme, starting from that point, converges. If the orbit does not approach any root of the polynomial system within the tolerance in a maximum of 50 iterations, or tends to infinity, the point is colored black. The number of iterations required to attain the solution with the given stopping criterion is indicated by brighter or darker shades: a brighter color means fewer iterations are required.
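A compact sketch of this procedure is shown below: each grid point of the square is used as the initial guess, the orbit of a given one-step map is followed for at most 50 iterations, and the point is classified by the root it approaches (index -1 marks non-convergent points, which are painted black). The grid size, tolerance, and the particular iteration map are inputs; the settings used for Figures 2-4 are those described above.

```python
import numpy as np

def basins(step, roots, xlim, ylim, npts=400, tol=1e-3, maxit=50):
    """Classify every initial point of an npts x npts grid by the root its
    orbit converges to, and record the number of iterations used."""
    xs, ys = np.linspace(*xlim, npts), np.linspace(*ylim, npts)
    root_idx = -np.ones((npts, npts), dtype=int)     # -1 = non-convergent (black)
    iters = np.full((npts, npts), maxit, dtype=int)
    for i, x0 in enumerate(xs):
        for j, y0 in enumerate(ys):
            z = np.array([x0, y0], dtype=float)
            for k in range(maxit):
                z = step(z)                          # one iteration of the scheme
                if not np.all(np.isfinite(z)) or np.linalg.norm(z) > 1e8:
                    break                            # orbit escapes to infinity
                d = np.linalg.norm(roots - z, axis=1)
                if d.min() < tol:                    # converged to some root
                    root_idx[j, i], iters[j, i] = int(d.argmin()), k + 1
                    break
    return root_idx, iters                           # color by root_idx, shade by iters
```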
The basins of attraction of the examples, generated using the respective schemes, are displayed in Figures 2, 3, and 4.
[Figure 2. Basins of attraction for Example 1.]
[Figure 3. Basins of attraction for Example 2.]
[Figure 4. Basins of attraction for Example 3.]
To compare the basins quantitatively, we calculate the total number of convergent initial points for all test problems and then find the average number of iterations required per point. The average performance results of the schemes under comparison are displayed in Table 10. Considering the averages over the three examples in Table 10, we can draw the following conclusions:
Average convergent points | Average iterations | ||||||||
---|---|---|---|---|---|---|---|---|---|
Methods | eg1 | eg2 | eg3 | Average | % | eg1 | eg2 | eg3 | Average |
120,404 | 129,295 | 123,128 | 124,275.67 | 77.29 | 15.46 | 12.80 | 14.64 | 14.3 |
123,222 | 132,716 | 127,998 | 127,978.67 | 79.59 | 14.74 | 11.85 | 13.38 | 13.32 | |
115,285 | 122,779 | 118,233 | 118,765.67 | 73.86 | 16.68 | 14.48 | 15.79 | 15.65 | |
SA | 132,490 | 141,946 | 138,360 | 137,598.67 | 85.57 | 12.84 | 9.78 | 10.97 | 11.20 |
GGN | 159,998 | 160,668 | 160,000 | 160,222 | 99.63 | 3.38 | 3.48 | 3.16 | 3.34 |
112,973 | 124,303 | 116,878 | 118,051.33 | 73.41 | 17.23 | 9.25 | 16.09 | 14.19 | |
SA | 105,756 | 122,873 | 114,825 | 114,484.67 | 71.20 | 19.49 | 14.40 | 17.06 | 16.98 |
149,241 | 151,659 | 151,223 | 150,707.67 | 93.72 | 9.49 | 7.99 | 8.83 | 8.77 | |
SA | 144,509 | 146,957 | 145,159 | 145,541.67 | 90.51 | 14.22 | 12.56 | 13.84 | 13.54 |
148,918 | 151,597 | 151,255 | 150,704 | 93.72 | 9.54 | 7.99 | 8.84 | 8.79 | |
SA | 144,509 | 146,957 | 145,160 | 145,542 | 90.51 | 14.22 | 12.56 | 13.84 | 13.54 |
(i) Both the average number of convergent points and the average number of iterations of schemes and indicate that the new algorithm with memory is better than the method without memory .
(ii) Considering the convergent points and the average number of iterations for schemes of order six and above, the methods with memory and perform best, followed by SA and SA, then by , and finally by SA.
Therefore, from Table 10 one can say that the new schemes and are the best among all the methods.
7 CONCLUSIONS
In this article, we proposed two families of fifth- and sixth-order derivative-free schemes for the numerical solution of nonlinear systems of equations. Further, from these families we derived several derivative-free schemes with memory. The proposed algorithms involve the evaluation of only two divided differences and the inversion of only one matrix per step. Consequently, the schemes have a low computational cost and a high efficiency index. Some special members of the family have been compared with existing schemes using numerical examples, and a comparison of efficiencies has been presented. Various measures, such as the error in the approximation of the roots at the end of the fourth iteration, the number of iterations required, and the e-Time (CPU time) needed by a scheme to satisfy the stopping criterion, verify that the new schemes give faster and closer approximations of the roots than the existing schemes. These results confirm that the proposed schemes with memory and are the most efficient among the schemes considered. Finally, these results have been revalidated with the help of their basins of attraction. Thus, it can be concluded that the new schemes are highly efficient, particularly for large-scale systems.
ACKNOWLEDGMENT
The authors would like to thank all the five anonymous reviewers for their useful comments and suggestions, which have improved the final version of this manuscript.
CONFLICT OF INTEREST
The authors declare no potential conflict of interest.
Biographies
Mona Narang received her B.Sc.(Hons. School), M.Sc.(Hons. School), and Ph.D. in numerical analysis from Department of Mathematics, Panjab University, Chandigarh. Currently, she is working as an Associate Professor in the Department of Mathematics, D.A.V. College, Chandigarh. Her research interests include iterative schemes for numerical solution of system of nonlinear equations and its applications in solving real-life problems, dynamical study of iterative methods.
Saurabh Bhatia received his B.Sc.(Hons. School), M.Sc.(Hons. School), and Ph.D. in valuation theory from Department of Mathematics, Panjab University, Chandigarh. Currently, he is working in the Department of Mathematics, University Institute of Engineering and Technology, Panjab University, Chandigarh. His research interests include iterative schemes for numerical solution of nonlinear equations and system of nonlinear equations and its applications in solving real-life problems.
Vinay Kanwar did his Ph.D. in fluid dynamics in the year 1996 from Department of Mathematics, Himachal Pradesh University, Shimla, India. Currently, he is working as a Professor in Mathematics at University Institute of Engineering and Technology, Panjab University, Chandigarh. Presently, he is working in numerical analysis, fixed point theory, specifically in iterative techniques for solving nonlinear equations and nonlinear systems, numerical solution of differential equations, and their applications in solving real-life problems.