On the One-Leg Methods for Solving Nonlinear Neutral Differential Equations with Variable Delay
Abstract
Based on A-stable one-leg methods and linear interpolation, we introduce four algorithms for solving neutral differential equations with variable delay. A natural question is which algorithm is best. To answer this question, we analyze the error behavior of the four algorithms and obtain their error bounds under a one-sided Lipschitz condition and some classical Lipschitz conditions. After extensive numerical experiments, we give a positive conclusion.
1. Introduction
The initial-value problems for ordinary differential equations (ODEs) and delay differential equations (DDEs) are special cases of (1.1)-(1.2).
Neutral delay differential equations (NDDEs) have found applications in many areas of science (see, e.g., [1–4]). The stability and convergence of numerical methods for NDDEs have drawn much attention in recent decades. A number of authors have investigated the linear stability of numerical methods for NDDEs (see, e.g., [5–12]). In 2000, based on a one-sided Lipschitz condition and some classical Lipschitz conditions, Bellen et al. [13] discussed the contractivity and asymptotic stability of Runge-Kutta methods for nonlinear NDDEs of a special form. Following this paper, the nonlinear stability of numerical methods for NDDEs of Hale's form [14, 15] and for the NDDEs (1.1) [16–22] has been extensively examined.
On the other hand, as far back as the 1970s, Castleton and Grimm [23] investigated the convergence of a first-order numerical method for state-dependent delay NDDEs. Jackiewicz [24–28] and Jackiewicz and Lo [29, 30] investigated the convergence of numerical methods for more general neutral functional differential equations (NFDEs), which contain problem (1.1) as a particular case. Enright and Hayashi [31] investigated the convergence of continuous Runge-Kutta methods for state-dependent delay NDDEs. Baker [32] and Zennaro [33] discussed the convergence of numerical methods for NDDEs. Jackiewicz et al. [34] and Bartoszewski and Kwapisz [35] gave convergence results for waveform relaxation methods for NFDEs. Observe that these convergence results were based on the assumption that the right-hand side function f in (1.1) satisfies a classical Lipschitz condition with respect to the second variable, and the error bounds were obtained directly in terms of the classical Lipschitz constant. Recently, Wang et al. [36] gave some new error bounds for a class of linear multistep methods for the NDDEs (1.1) by means of the one-sided Lipschitz constant; Wang and Li [37] investigated the convergence of waveform relaxation methods for the NDDEs (1.1) in which the right-hand side function f satisfies a one-sided Lipschitz condition with respect to the second variable.
The main contribution of this paper is that we introduce four algorithms for solving the NDDEs (1.1) based on a one-leg method and present their convergence results. To this end, in Section 2 we apply a one-leg method to the NDDEs (1.1) and introduce two numerical algorithms based on direct evaluation. The convergence of these two algorithms is then analyzed in Section 3. Since this class of algorithms may create implementation problems, we introduce another two algorithms based on interpolation and analyze their convergence in Section 4. The application of the four algorithms to NDDEs is illustrated by means of two examples in Section 5; these numerical results confirm our theoretical analysis. We end with some concluding remarks in Section 6.
2. One-Leg Methods Discretization
Therefore, the NDDEs method (2.4) is determined completely by the method (ρ, σ), the interpolation procedure for yh(η(σ(E)tn)) and the approximation procedure for Yh(η(σ(E)tn)).
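To make the discretization concrete, the following is a minimal sketch (our own illustration, not the paper's code) of the one-leg midpoint rule applied to a scalar linear NDDE with a constant delay, where the delayed solution and derivative values are obtained by linear interpolation on the grid. All function and variable names are assumptions, and the linear test equation is chosen so that the implicit step can be solved in closed form.

```python
import math

def interp(grid_t, vals, s, hist):
    """Linear interpolation of stored grid values; fall back to the
    history function hist(s) for s <= 0 (illustrative helper)."""
    if s <= 0.0:
        return hist(s)
    h = grid_t[1] - grid_t[0]
    i = min(int(s / h), len(vals) - 2)   # clamp to the available data
    theta = (s - grid_t[i]) / h
    return (1.0 - theta) * vals[i] + theta * vals[i + 1]

def midpoint_nddes(a, b, c, tau, phi, dphi, T, h):
    """One-leg midpoint rule for y'(t) = a*y(t) + b*y(t-tau) + c*y'(t-tau),
    y(t) = phi(t) for t <= 0.  A hedged sketch, not the paper's algorithm."""
    n = int(round(T / h))
    ts = [j * h for j in range(n + 1)]
    ys = [phi(0.0)]
    dys = [dphi(0.0)]                    # stored derivative approximations
    for j in range(n):
        t_mid = ts[j] + 0.5 * h
        s = t_mid - tau
        yd = phi(s) if s <= 0.0 else interp(ts, ys, s, phi)
        dyd = dphi(s) if s <= 0.0 else interp(ts, dys, s, dphi)
        # one-leg step: y_{j+1} = y_j + h*f(t_mid, (y_j+y_{j+1})/2, yd, dyd);
        # the step is linear in y_{j+1}, so it is solved in closed form
        rhs = ys[j] * (1.0 + 0.5 * a * h) + h * (b * yd + c * dyd)
        y_next = rhs / (1.0 - 0.5 * a * h)
        ys.append(y_next)
        # store a derivative approximation at t_{j+1} for future neutral terms
        s2 = ts[j + 1] - tau
        yd2 = phi(s2) if s2 <= 0.0 else interp(ts, ys, s2, phi)
        dyd2 = dphi(s2) if s2 <= 0.0 else interp(ts, dys, s2, dphi)
        dys.append(a * y_next + b * yd2 + c * dyd2)
    return ts, ys
```

For a nonlinear right-hand side f, the closed-form division would be replaced by a Newton iteration for the implicit stage; the interpolation machinery is unchanged.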
2.1. The First Class of Algorithms Derived from the Direct Evaluation: OLIDE
3. Convergence Analysis of Algorithms OLIDE
3.1. A General Assumption
- (𝒜1) There exist constants α, ϱ, β, and γ for which the following inequalities hold: ()()() where 〈·, ·〉 is the inner product in a complex Euclidean space with the corresponding norm ∥·∥.
- (𝒜2) α0 = max{α, 0} and ϱ/(1 − γ) are of moderate size.
- (𝒜3) We assume the existence, uniqueness, and stability of solutions to the mathematical problems under consideration. For example, it is required that γ < 1 (see, e.g., [31]). This restriction ensures that when η(t) = t, the matrix () is of full rank, so that y′(t) is uniquely determined as a function of t and y(t) at this point.
- (𝒜4) The unique true solution y(t) of problem (1.1)-(1.2) is sufficiently differentiable on the interval [−τ, T], and all its derivatives used later are continuous and satisfy ()
Remarks (1) In general, the regularity assumption (𝒜4) does not hold for the NDDEs (1.1)-(1.2). Even if the functions f and ϕ are sufficiently smooth, the solution has low regularity at the discontinuity points (see, e.g., [11]). This problem has attracted some researchers' attention, and several approaches to handling discontinuities have been proposed, for example, discontinuity tracking [39, 40], discontinuity detection [27, 30, 31, 41, 42], and perturbation of the initial function [43]. Since the main purpose of this paper is to present convergence results rather than to discuss the treatment of discontinuities, we will assume that the regularity assumption (𝒜4) holds.
(2) The regularity assumption (𝒜4) does not hold for general NDDEs (1.1)-(1.2), but it holds for NDDEs (1.1)-(1.2) with proportional delay, that is, η(t) = qt with q ∈ (0, 1), and also for NDDEs (1.1)-(1.2) with η(t) = t, that is, implicit ODEs.
(3) Consider the case that
(𝒞1) the function η(t) satisfies conditions under which we can divide the time interval I0 into subintervals Ik = [ξk−1, ξk] for k ≥ 1, where ξ0 = 0 and ξk+1 is the unique solution of η(ξ) = ξk.
Then the error behavior of the solutions can be analyzed interval-by-interval, since the regularity assumption (𝒜4) generally holds on each subinterval Ik. For example, the function η(t) satisfies (𝒞1) if it is continuous and increasing and there exists a positive constant τ0 such that τ(t) = t − η(t) ≥ τ0 for all t ∈ I0.
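The interval-by-interval construction in (𝒞1) can be sketched numerically: starting from ξ0 = 0, each ξk+1 is the unique root of η(ξ) = ξk, which the hypotheses (η continuous and increasing, τ(t) = t − η(t) ≥ τ0 > 0) allow us to locate by bisection. The helper below is an illustration only; the bracketing step size and tolerances are our assumptions.

```python
def breaking_points(eta, num, xi0=0.0, tol=1e-12):
    """Compute xi_1, ..., xi_num with eta(xi_{k+1}) = xi_k, assuming eta is
    continuous and increasing with t - eta(t) >= tau0 > 0 (condition (C1))."""
    xis = [xi0]
    for _ in range(num):
        target = xis[-1]
        lo = target                    # eta(lo) = lo - tau(lo) < target
        hi = lo + 1.0
        while eta(hi) < target:        # expand until the root is bracketed
            hi += 1.0
        for _ in range(200):           # plain bisection
            mid = 0.5 * (lo + hi)
            if eta(mid) < target:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        xis.append(0.5 * (lo + hi))
    return xis[1:]
```

For a constant delay, η(t) = t − τ, this reproduces the familiar breaking points τ, 2τ, 3τ, ….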
The aim of this section is to derive theoretical estimates for the errors generated by the algorithms OLIDE(I) and OLIDE(II). To obtain the theoretical results, we need the following definition.
Definition 3.1. The one-leg method (ρ, σ) with two approximation procedures is said to be EB-convergent of order p if this method when applied to any given problem (1.1) satisfying the assumptions (𝒜1)–(𝒜4) with initial values y0, y1, …, yk−1, produces an approximation {yn}, and the global error satisfies a bound of the form
Proposition 3.2. EB-convergence of the method (2.4) implies B-convergence of the method (ρ, σ).
Remark 3.3. For nonneutral DDEs, Huang et al. [44] introduced the D-convergence of one-leg methods. Obviously, when an EB-convergent NDDEs method is applied to nonneutral DDEs, it is D-convergent.
3.2. Convergence Analysis of OLIDE(I)
Lemma 3.4. If the method (ρ, σ) is A-stable, then the numerical solution produced by the algorithm OLIDE(I) satisfies
Proof. Since A-stability is equivalent to G-stability (see [45]), it follows from the definition of G-stability that there exists a k × k real symmetric positive definite matrix G such that for any real sequence
Case 1. tn+k−1 ≤ η(i)(σ(E)tn) ≤ tn+k. In this case, from (2.7) and (3.9) we have
Compared to the nonneutral DDEs with a constant delay considered in [38, 44], the problems considered in this paper are more complex, so that a series of new difficulties must be overcome. Nevertheless, we note that some basic proof ideas are related to those used in [38, 44].
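The G-stability ingredient invoked in the proof of Lemma 3.4 can be checked on a concrete method. For the two-step BDF formula, ρ(ζ) = (3/2)ζ² − 2ζ + 1/2 and σ(ζ) = ζ², one admissible choice of the matrix (our own computation, not stated in the paper) is G = [[1/4, −1/2], [−1/2, 5/4]]: it is symmetric positive definite and satisfies 〈σ(E)y0, ρ(E)y0〉 = ∥Y1∥G² − ∥Y0∥G² + ((y0 − 2y1 + y2)/2)² with Yn = (yn, yn+1)ᵀ. The snippet below verifies this identity numerically.

```python
# Candidate G-stability matrix for the 2-step BDF one-leg method (assumption)
G_BDF2 = [[0.25, -0.5], [-0.5, 1.25]]

def g_norm_sq(G, v):
    """v^T G v for a symmetric 2x2 matrix G (illustrative helper)."""
    return G[0][0] * v[0] * v[0] + 2.0 * G[0][1] * v[0] * v[1] + G[1][1] * v[1] * v[1]

def bdf2_g_identity(y0, y1, y2):
    """Return both sides of the G-stability identity for BDF2:
    y2*(1.5*y2 - 2*y1 + 0.5*y0) = |Y1|_G^2 - |Y0|_G^2 + ((y0 - 2*y1 + y2)/2)^2."""
    lhs = y2 * (1.5 * y2 - 2.0 * y1 + 0.5 * y0)   # <sigma(E)y, rho(E)y>
    rhs = (g_norm_sq(G_BDF2, (y1, y2)) - g_norm_sq(G_BDF2, (y0, y1))
           + (0.5 * y0 - y1 + 0.5 * y2) ** 2)
    return lhs, rhs
```

Since the remainder term is a perfect square, the identity immediately gives the telescoping bound ∥Y_{n+1}∥G² − ∥Y_n∥G² ≤ 〈σ(E)y_n, ρ(E)y_n〉 used in proofs of this type.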
Lemma 3.5. If the method (ρ, σ) is A-stable, then there exist two constants d6 and h2, which depend only on the method, some of the bounds Mi, and the parameters α and ϱ/(1 − γ), such that
Proof. The idea is a generalization of that used in [38, 44]. The A-stability of the method (ρ, σ) implies that βk/αk > 0 and that the method is consistent of order p = 1 or 2 in the classical sense (see [45, 46]). Consider the following scheme:
Lemma 3.4, together with Lemma 3.5, implies the following theorem.
Theorem 3.6. The algorithm OLIDE(I) extended by the method (ρ, σ) is EB-convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.
Proof. Firstly, we observe that the EB-convergence of order p of the method OLIDE(I) implies that the method (ρ, σ) is B-convergent of order p, and it is A-stable and consistent of order p in the classical sense for ODEs, p = 1 or 2 (see, e.g., [38]).
On the other hand, it follows from Lemmas 3.4 and 3.5 that
In [44], Huang et al. proved that A-stable one-leg methods with the interpolation (2.7) for constant-delay DDEs are D-convergent under the condition that η(i)(σ(E)tn) ≤ tn+k−1. As a special case, we have the following corollary.
Corollary 3.7. A one-leg method (ρ, σ) with the interpolation (2.7) for DDEs with any variable delay is D-convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.
3.3. Convergence Analysis of OLIDE(II)
Theorem 3.8. The algorithm OLIDE(II) extended by the method (ρ, σ) is EB-convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.
It should be pointed out that Huang et al. [38] proved that A-stable one-leg methods with the interpolation (2.8) for constant-delay DDEs are D-convergent. As a special case, in this paper we obtain the convergence result of A-stable one-leg methods with the interpolation (2.8) for DDEs with any variable delay.
Since the algorithm OLIDE(II) has better stability properties than the algorithm OLIDE(I) [18] while the two have the same convergence properties, we believe that the algorithm OLIDE(II) is more effective than the algorithm OLIDE(I) for solving delay problems.
Although we can show theoretically the convergence of the OLIDE algorithms for NDDEs with any variable delay, in practical implementations these algorithms are generally expensive and can create a serious storage problem, since they always require tracing the recursion back to the initial interval. For general variable-delay NDDEs we therefore need other algorithms, which are based on interpolation.
4. The Second Class of Algorithms Derived from the Interpolation: OLIIT
4.1. Convergence Analysis of OLIIT(I)
Lemma 4.1. If the method (ρ, σ) is A-stable, then the numerical solution produced by the algorithm OLIIT(I) satisfies
Proof. Similar to the proof of Lemma 3.4, we have
Similarly, we can give the same lemma as Lemma 3.5 and obtain the following theorem.
Theorem 4.2. The algorithm OLIIT(I) extended by the method (ρ, σ) is convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.
4.2. Convergence Analysis of OLIIT(II)
In this subsection, we establish the convergence result of the algorithm OLIIT(II).
Theorem 4.3. The algorithm OLIIT(II) extended by the method (ρ, σ) is convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.
The proof of this theorem does not differ essentially from the proofs of the previous three theorems, and we therefore omit it here.
We conclude this section with a few remarks about our results.
Our first remark is that if we allow the error bound to depend on (β + γL)/(1 − γ), then the algorithms OLIDE(I) and OLIDE(II) are also convergent. Moreover, since ϱ ≤ β + γL, we have reason to believe that the algorithms OLIDE(I) and OLIDE(II), whose error bounds depend on α0 and ϱ/(1 − γ), are more efficient than the algorithms OLIIT(I) and OLIIT(II), whose error bounds depend on α0 and (β + γL)/(1 − γ).
5. Numerical Experiments
In this section, in order to illustrate the one-leg methods (2.4) for solving the NDDEs (1.1)-(1.2), we consider two examples.
5.1. Example 1: A Test Equation
h | OLIDE(I)-MP | OLIDE(I)-BDF2
---|---|---
0.1 | 6.807355e−004 | 2.922906e−011
0.01 | 6.800335e−006 | 2.811085e−013
0.001 | 6.800264e−008 | 2.775558e−015
Note that for this problem both algorithms are convergent of order 2, and the algorithm OLIDE(I)-BDF2 is more effective than the algorithm OLIDE(I)-MP.
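The observed order can be read directly off such a table: when h is reduced by a factor of 10, the error should shrink by about 10^p. A minimal sanity check on the OLIDE(I)-MP column above (the formula is standard; the error values are copied from the table):

```python
import math

def observed_order(err_coarse, err_fine, ratio=10.0):
    """Estimated convergence order from errors at step sizes h and h/ratio."""
    return math.log(err_coarse / err_fine) / math.log(ratio)

# OLIDE(I)-MP errors at h = 0.1, 0.01, 0.001 (from the table above)
errs = [6.807355e-4, 6.800335e-6, 6.800264e-8]
orders = [observed_order(errs[i], errs[i + 1]) for i in range(2)]
```

Both estimates come out very close to 2, consistent with second-order convergence.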
5.2. Example 2: A Partial Functional Differential Equation
We take Δx = 0.1 or Δx = 0.01 for the numerical method of lines and use the midpoint rule, combined with the different interpolation approximations, for the numerical integration of the problem (5.6).
- Coefficients I: a(t) = sin 2t, b(t) = 0, c(t) = −0.0001;
- Coefficients II: a(t) = sin 2t, b(t) = 0, c(t) = −0.9;
- Coefficients III: a(t) = 100 sin 2t, b(t) = 0, c(t) = −0.9.
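Since the problem (5.6) itself is not reproduced in this excerpt, the following sketch only illustrates the numerical method of lines for an assumed model problem of the form u_t = u_xx + a(t)u + b(t)u(x, η(t)) + c(t)u_t(x, η(t)) with homogeneous Dirichlet boundary conditions (our illustrative assumption, not necessarily (5.6)): a central-difference Laplacian on Nx interior points turns the delay PDE into an NDDE system of type (1.1), to which the OLIDE/OLIIT time-stepping algorithms then apply.

```python
import numpy as np

def laplacian_1d(nx, dx):
    """Second-order central-difference Laplacian on nx interior points with
    homogeneous Dirichlet boundary conditions (an assumed model setting)."""
    A = (np.diag(-2.0 * np.ones(nx))
         + np.diag(np.ones(nx - 1), 1)
         + np.diag(np.ones(nx - 1), -1)) / dx**2
    return A

def mol_rhs(A, a, b, c):
    """Right-hand side f(t, U, Ud, dUd) of the semi-discrete NDDE system
    U'(t) = A U + a(t) U + b(t) Ud + c(t) dUd, where Ud, dUd approximate
    U(eta(t)) and U'(eta(t))."""
    def f(t, U, Ud, dUd):
        return A @ U + a(t) * U + b(t) * Ud + c(t) * dUd
    return f
```

With, say, coefficients II one would take a = lambda t: np.sin(2 * t), b = lambda t: 0.0, c = lambda t: -0.9, and note that refining Δx (larger Nx) makes the eigenvalues of A more negative, i.e., the semi-discrete system stiffer, which matches the grid dependence seen in the tables below.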
(a) Constant Delay Problem. First of all, we consider the problem (5.6) with a constant delay τ(t) = t − η(t) = 1. When the step sizes h = 0.1, h = 0.01, and h = 0.001 are chosen, the two algorithms OLIDE(I) and OLIDE(II) extended by the midpoint rule are identical (OLIDE-MP), and the two algorithms OLIIT(I) and OLIIT(II) extended by the midpoint rule are identical (OLIIT-MP). But since (5.6) is a variable-coefficient system, the algorithm OLIDE-MP differs from the algorithm OLIIT-MP. The errors ϵ at t = 10 are listed in Tables 2, 3, and 4 for the problem (5.6) with the three groups of coefficients, respectively.
Observe that for the constant delay problem with coefficients I, both algorithms, OLIDE-MP and OLIIT-MP, are convergent of order 2. But for the problem with coefficients II, the numerical results of OLIIT-MP are not ideal when Nx = 100 and h = 0.1, and the situation deteriorates further for coefficients III. On the one hand, this implies that OLIDE-MP is more robust than OLIIT-MP, which confirms our theoretical analysis. On the other hand, it shows that the coefficients, the spatial step size Δx, and the time step size h all affect the efficiency of the algorithm OLIIT-MP. It is well known that the midpoint rule is A-stable and convergent for stiff ODEs, yet here the numerical results become worse when the time step size is larger and the grid is finer, which further confirms our theoretical results.
h | OLIDE-MP (Nx = 10) | OLIDE-MP (Nx = 100) | OLIIT-MP (Nx = 10) | OLIIT-MP (Nx = 100)
---|---|---|---|---
0.1 | 2.804687e−008 | 2.795105e−008 | 2.808233e−008 | 2.798658e−008
0.01 | 2.802893e−010 | 2.793375e−010 | 2.806421e−010 | 2.796910e−010
0.001 | 2.802876e−012 | 2.793359e−012 | 2.806405e−012 | 2.796895e−012
h | OLIDE-MP (Nx = 10) | OLIDE-MP (Nx = 100) | OLIIT-MP (Nx = 10) | OLIIT-MP (Nx = 100)
---|---|---|---|---
0.1 | 3.500122e−007 | 3.433799e−007 | 3.345386e−006 | 5.307093e−002
0.01 | 2.658399e−009 | 2.575017e−009 | 3.215464e−008 | 3.198151e−008
0.001 | 2.649853e−011 | 2.566321e−011 | 3.214185e−010 | 3.196886e−010
h | OLIDE-MP (Nx = 10) | OLIDE-MP (Nx = 100) | OLIIT-MP (Nx = 10) | OLIIT-MP (Nx = 100)
---|---|---|---|---
0.1 | 1.932937e−006 | 2.085638e−006 | 5.303484e+000 | 5.282569e+001
0.01 | 1.432539e−010 | 1.432852e−010 | 2.926272e−010 | 9.189176e−004
0.001 | 1.429940e−012 | 1.429875e−012 | 2.938189e−012 | 2.941581e−012
(b) Bounded Variable Delay Problem. Next we consider the problem (5.6) with a bounded variable delay τ(t) = t − η(t) = sin t + 1. Observe that the function η(t) = t − sin t − 1 satisfies the condition (𝒞1), so that both the convergence analysis and the numerical solution of this equation can proceed interval-by-interval. Since the true solution is known to be sufficiently differentiable on the whole interval, the step sizes h = 0.1, h = 0.01, and h = 0.001 are chosen.
In this case, we do not consider OLIDE(I) and OLIDE(II), since these two algorithms would lead to implementation and computational complexity issues. We explore only the algorithm OLIIT(I) extended by the midpoint rule (OLIIT(I)-MP) and the algorithm OLIIT(II) extended by the midpoint rule (OLIIT(II)-MP). The errors ϵ at t = 10 are listed in Tables 5, 6, and 7 for the problem (5.6) with the three groups of coefficients, respectively.
h | OLIIT(I)-MP (Nx = 10) | OLIIT(I)-MP (Nx = 100) | OLIIT(II)-MP (Nx = 10) | OLIIT(II)-MP (Nx = 100)
---|---|---|---|---
0.1 | 2.809033e−008 | 2.799451e−008 | 2.808259e−008 | 2.798684e−008
0.01 | 2.807181e−010 | 2.797665e−010 | 2.806578e−010 | 2.797068e−010
0.001 | 2.807207e−012 | 2.797692e−012 | 2.806491e−012 | 2.796982e−012
h | OLIIT(I)-MP (Nx = 10) | OLIIT(I)-MP (Nx = 100) | OLIIT(II)-MP (Nx = 10) | OLIIT(II)-MP (Nx = 100)
---|---|---|---|---
0.1 | 1.126902e−006 | 9.145590e−004 | 4.938563e−006 | 1.656698e−002
0.01 | 1.106792e−008 | 1.102020e−008 | 6.263681e−009 | 6.274124e−009
0.001 | 1.142587e−010 | 1.136783e−010 | 6.251279e−011 | 6.255985e−011
h | OLIIT(I)-MP (Nx = 10) | OLIIT(I)-MP (Nx = 100) | OLIIT(II)-MP (Nx = 10) | OLIIT(II)-MP (Nx = 100)
---|---|---|---|---
0.1 | 9.066809e−002 | 1.199248e+000 | 1.651731e+000 | 3.112600e+001
0.01 | 2.328686e−009 | 6.123243e−005 | 2.292876e−009 | 2.828239e−004
0.001 | 2.261214e−011 | 4.619492e−009 | 2.239975e−011 | 2.251171e−011
From these numerical data, we again see that both algorithms, OLIIT(I)-MP and OLIIT(II)-MP, are convergent of order 2 for this equation with coefficients I, but this good behavior is lost when the coefficients become II or III.
(c) Proportional Delay Problem. Finally, we consider the problem (5.6) with a proportional delay τ(t) = 0.5t. We again choose the step sizes h = 0.1, h = 0.01, and h = 0.001. As in the case of the bounded variable delay, we use only the algorithms OLIIT(I)-MP and OLIIT(II)-MP to solve the problem (5.6). Tables 8, 9, and 10 show the errors ϵ at t = 10.
h | OLIIT(I)-MP (Nx = 10) | OLIIT(I)-MP (Nx = 100) | OLIIT(II)-MP (Nx = 10) | OLIIT(II)-MP (Nx = 100)
---|---|---|---|---
0.1 | 3.007486e−008 | 2.998332e−008 | 2.985691e−008 | 2.976663e−008
0.01 | 3.005176e−010 | 2.996115e−010 | 2.983553e−010 | 2.974621e−010
0.001 | 3.005155e−012 | 2.996096e−012 | 2.983534e−012 | 2.974603e−012
h | OLIIT(I)-MP (Nx = 10) | OLIIT(I)-MP (Nx = 100) | OLIIT(II)-MP (Nx = 10) | OLIIT(II)-MP (Nx = 100)
---|---|---|---|---
0.1 | 3.794966e−005 | 2.539557e−001 | 1.627824e−005 | 2.019670e−004
0.01 | 4.905939e−007 | 4.870288e−007 | 1.643376e−007 | 1.649717e−007
0.001 | 5.026856e−009 | 4.977278e−009 | 1.770753e−009 | 1.770849e−009
h | OLIIT(I)-MP (Nx = 10) | OLIIT(I)-MP (Nx = 100) | OLIIT(II)-MP (Nx = 10) | OLIIT(II)-MP (Nx = 100)
---|---|---|---|---
0.1 | 4.279370e+001 | 3.962658e+001 | 2.324247e−002 | 6.345608e−002
0.01 | 5.989558e−008 | 3.347081e−004 | 6.024260e−008 | 6.023323e−008
0.001 | 5.951239e−010 | 5.951353e−010 | 5.842018e−011 | 5.842442e−010
From these numerical data, we again observe that both algorithms, OLIIT(I)-MP and OLIIT(II)-MP, are convergent of order 2 for this equation with coefficients I; but for the same equation with coefficients II or III, the numerical results become worse as (β + γL)/(1 − γ) becomes larger.
6. Concluding Remarks
In this paper we introduced four algorithms, built on an ODE method, for solving nonlinear NDDEs with general variable delay, established their convergence properties, and compared their numerical results by means of extensive numerical data. This paper can be regarded as an extension from nonneutral DDEs with a constant delay [38, 44] to neutral DDEs with general variable delay. Although some basic proof ideas are related to those used in [38, 44], the problems considered in this paper are more complex, so that some new proof techniques had to be introduced to overcome a series of new difficulties encountered in the theoretical analysis. Our main conclusions are the following.
- (1) If α0 and ϱ/(1 − γ) are of moderate size, the algorithms OLIDE(I) and OLIDE(II) based on an A-stable one-leg method (ρ, σ) that is consistent of order p in the classical sense are convergent of order p, where p = 1, 2. When α0 and (β + γL)/(1 − γ) are of moderate size, all four algorithms introduced in this paper are convergent of order p if the one-leg method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1, 2. But if (β + γL)/(1 − γ) is very large, the algorithms OLIIT(I) and OLIIT(II) may produce bad numerical results when the time step size is large, even if the ODE method is A-stable. This reveals a difference between numerically solving ODEs and NDDEs.
- (2) If using the direct evaluation (2.10) does not create implementation or computational complexity problems, we prefer the OLIDE algorithms to the OLIIT algorithms. Furthermore, considering that the algorithm OLIDE(II) has better stability properties than the algorithm OLIDE(I) (see [18] and the numerical Example 2 in Section 5.2), it is our belief that the algorithm OLIDE(II) can become an effective numerical method for solving this class of problems, for example, NDDEs with constant delay, whenever the algorithm is easy to implement. Of course, for general NDDEs with variable delay, we have to use the algorithms OLIIT(I) or OLIIT(II).
- (3) The results established in this paper can be extended in a straightforward way to the case of NDDEs with multiple time delays (1.3)-(1.2).
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant no. 11001033), the Natural Science Foundation of Hunan Province (Grant no. 10JJ4003), Chinese Society for Electrical Engineering, and the Open Fund Project of Key Research Institute of Philosophies and Social Sciences in Hunan Universities.