Volume 2012, Issue 1, Article ID 989867
Research Article
Open Access

On the One-Leg Methods for Solving Nonlinear Neutral Differential Equations with Variable Delay

Wansheng Wang (Corresponding Author)
School of Mathematics and Computational Sciences, Changsha University of Science & Technology, Yuntang Campus, Changsha 410114, China

Shoufu Li
School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China

First published: 18 September 2012
Academic Editor: Yuantong Gu

Abstract

Based on A-stable one-leg methods and linear interpolations, we introduce four algorithms for solving neutral differential equations with variable delay. A natural question is which algorithm is better. To answer it, we analyse the error behaviour of the four algorithms and obtain their error bounds under a one-sided Lipschitz condition and some classical Lipschitz conditions. Extensive numerical experiments then allow us to give a positive answer.

1. Introduction

In this paper, we focus on establishing convergence properties of one-leg methods for the numerical solution of neutral differential equations with variable time delay (NDDEs). These are equations of the form
()
()
where , τ > 0, T > 0, f and ϕ are complex smooth vector functions, and η is a scalar function with η(t) ≤ t for t ∈ I0. Variations of (1.1) include equations with multiple time delays as follows:
()
and equations with state-dependent time delay as follows:
()

The initial-value problems for ordinary differential equations (ODEs) and delay differential equations (DDEs) are special cases of (1.1)-(1.2).

Neutral delay differential equations have found applications in many areas of science (see, e.g., [1–4]). The stability and convergence of numerical methods for NDDEs have drawn much attention in recent decades. A number of authors have investigated the linear stability of numerical methods for NDDEs (see, e.g., [5–12]). In 2000, based on a one-sided Lipschitz condition and some classical Lipschitz conditions, Bellen et al. [13] discussed the contractivity and asymptotic stability of Runge-Kutta methods for nonlinear NDDEs of a special form. Following this paper, the nonlinear stability of numerical methods for NDDEs of “Hale's form” [14, 15] and for the NDDEs (1.1) [16–22] has been extensively examined.

On the other hand, as far back as the 1970s, Castleton and Grimm [23] investigated the convergence of a first-order numerical method for state-dependent delay NDDEs. Jackiewicz [24–28] and Jackiewicz and Lo [29, 30] investigated the convergence of numerical methods for more general neutral functional differential equations (NFDEs), which contain problem (1.1) as a particular case. Enright and Hayashi [31] investigated the convergence of continuous Runge-Kutta methods for state-dependent delay NDDEs. Baker [32] and Zennaro [33] discussed the convergence of numerical methods for NDDEs. Jackiewicz et al. [34] and Bartoszewski and Kwapisz [35] gave convergence results for waveform relaxation methods for NFDEs. Observe that these convergence results were based on the assumption that the right-hand function f in (1.1) satisfies a classical Lipschitz condition with respect to the second variable, and the error bounds were obtained by directly using the classical Lipschitz constant. Recently, Wang et al. [36] gave some new error bounds for a class of linear multistep methods for the NDDEs (1.1) by means of the one-sided Lipschitz constant; Wang and Li [37] investigated the convergence of waveform relaxation methods for the NDDEs (1.1) in which the right-hand function f satisfies a one-sided Lipschitz condition with respect to the second variable.

The main contribution of this paper is that we introduce four algorithms for solving the NDDEs (1.1) based on a one-leg method and present their convergence results. To accomplish this, in Section 2 we apply a one-leg method to the NDDEs (1.1) and introduce two numerical algorithms based on direct estimation. The convergence of these two algorithms is then analysed in Section 3. Noting that this class of algorithms may create some implementation problems, we introduce another two algorithms based on interpolation and analyse their convergence in Section 4. The application of the four algorithms to NDDEs is illustrated by means of two examples in Section 5. These numerical results confirm our theoretical analysis. We end with some concluding remarks in Section 6.

2. One-Leg Methods Discretization

Let h > 0 be a fixed step size, and let E be the translation operator defined by Eyn = yn+1. The one-leg k-step method for ODEs,
()
is said to be determined by “the method (ρ, σ),” where ρ and σ are defined by
()
with the additional requirements that ρ(x) and σ(x) are coprime polynomials and
()
The application of the method (ρ, σ) to (1.1) yields
()
where yh(η(σ(E)tn)) and Yh(η(σ(E)tn)) are approximations to y(η(σ(E)tn)) and y′(η(σ(E)tn)), respectively. For −m ≤ n ≤ 0, yn = ϕ(tn). When −τ ≤ η(σ(E)tn) ≤ 0, yh(η(σ(E)tn)) = ϕ(η(σ(E)tn)) and Yh(η(σ(E)tn)) = ϕ′(η(σ(E)tn)); when 0 ≤ η(σ(E)tn) ≤ T, yh(η(σ(E)tn)) is obtained by a specific interpolation, and Yh(η(σ(E)tn)) is obtained by a specific approximation at the point t = η(σ(E)tn).

Therefore, the NDDEs method (2.4) is determined completely by the method (ρ, σ), the interpolation procedure for yh(η(σ(E)tn)) and the approximation procedure for Yh(η(σ(E)tn)).

2.1. The First Class of Algorithms Derived from the Direct Evaluation: OLIDE

Now we consider the different interpolation procedures for yh(η(σ(E)tn)) and the different approximation procedures for Yh(η(σ(E)tn)). It is well known that any A-stable one-leg method for ODEs has order at most 2, so we can use a linear interpolation procedure for yh(η(σ(E)tn)). Let us define
()
and suppose that
()
where is an integer such that , . Then define
()
where yj = ϕ(jh) for j ≤ 0.
In approximating nonneutral DDEs with constant delay, another interpolation procedure is used to approximate yh(η(σ(E)tn)) (see, e.g., [38]). Slightly modifying this interpolation procedure, we have
()
provided
()
where is an integer, .
For Yh(η(σ(E)tn)), we first consider the direct evaluation:
()
Then we obtain two algorithms for solving NDDEs: one is (2.4)–(2.7)–(2.10), denoted for short by OLIDE(I); the other is (2.4)–(2.8)–(2.10), denoted for short by OLIDE(II). In [18], Wang et al. have shown that the algorithm OLIDE(II) has better stability properties than the algorithm OLIDE(I) for DDEs with constant delay. It is therefore of interest to prove that the methods OLIDE(I) and OLIDE(II) have the same convergence properties.

3. Convergence Analysis of Algorithms OLIDE

3.1. A General Assumption

In the current section, we shall consider the convergence of the algorithms OLIDE for NDDEs (1.1)-(1.2). To do this, we always make the following assumptions unless otherwise stated:
  • (𝒜1) There exist constants α, ϱ, β, and γ for which the following inequalities hold:

    ()
    ()
    ()
    where 〈·, ·〉 is the inner product in a complex Euclidean space with the corresponding norm ∥·∥.

  • (𝒜2)  α0 = max {α, 0} and ϱ/(1 − γ) are of moderate size.

  • (𝒜3) We assume the existence, uniqueness, and stability of solutions to the mathematical problems under consideration. For example, it is required that γ < 1 (see, e.g., [31]). This restriction ensures that when η(t) = t, the matrix

    ()
    is full rank, so that y′(t) is uniquely determined as a function of t and y(t) at this point.

  • (𝒜4) The unique true solution y(t) of problem (1.1)-(1.2) is also sufficiently differentiable on the interval [−τ, T], and all its derivatives used later are continuous and satisfy

    ()

Remarks. (1) In general, the regularity assumption (𝒜4) does not hold for the NDDEs (1.1)-(1.2). Even if the functions f and ϕ are sufficiently smooth, the solution has low regularity at the discontinuity points (see, e.g., [11]). This problem has attracted researchers' attention, and several approaches to handling discontinuities have been proposed, for example, discontinuity tracking [39, 40], discontinuity detection [27, 30, 31, 41, 42], and perturbing the initial function [43]. Since the main purpose of this paper is to present some convergence results rather than to discuss the treatment of discontinuities, we will assume that the regularity assumption (𝒜4) holds.

(2) The regularity assumption (𝒜4) does not hold for general NDDEs (1.1)-(1.2), but it holds for the NDDEs (1.1)-(1.2) with proportional delay, that is, η(t) = qt, q ∈ (0, 1), and also for the NDDEs (1.1)-(1.2) with η(t) = t, that is, implicit ODEs.

(3) Consider the case in which

𝒞1: the function η(t) is such that the time interval I0 can be divided into subintervals Ik = [ξk−1, ξk] for k ≥ 1, where ξ0 = 0 and ξk+1 is the unique solution of η(ξ) = ξk.

Then the analysis of the error behaviour of the solutions can be done interval by interval, since the regularity assumption (𝒜4) generally holds on each subinterval Ik. For example, the function η(t) satisfies (𝒞1) if it is continuous and increasing and there exists a positive constant τ0 such that τ(t) = t − η(t) ≥ τ0 for all t ∈ I0.
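Under (𝒞1) the endpoints ξk can be computed numerically. The sketch below (our illustration, assuming η is continuous and increasing) finds each ξk+1 from η(ξ) = ξk by bisection:

```python
def breaking_points(eta, t_end, xi0=0.0):
    """Compute the subinterval endpoints xi_k of condition (C1):
    xi_{k+1} solves eta(xi) = xi_k.  Assumes eta is continuous and
    increasing; each root is located by bracketing plus bisection."""
    xis = [xi0]
    while xis[-1] < t_end:
        target = xis[-1]
        lo, hi = target, target + 1.0
        while eta(hi) < target:        # widen until the root is bracketed
            hi += 1.0
        for _ in range(80):            # bisection to near machine precision
            mid = 0.5 * (lo + hi)
            if eta(mid) < target:
                lo = mid
            else:
                hi = mid
        xis.append(0.5 * (lo + hi))
    return xis

# Example: eta(t) = t - 1 yields the equally spaced points 0, 1, 2, ...
```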

The aim of this section is to derive theoretical estimates for the errors generated by the algorithms OLIDE(I) and OLIDE(II). To obtain the theoretical results, we need the following definition.

Definition 3.1. The one-leg method (ρ, σ) with two approximation procedures is said to be EB-convergent of order p if this method, when applied to any given problem (1.1) satisfying the assumptions (𝒜1)–(𝒜4) with initial values y0, y1, …, yk−1, produces an approximation {yn} whose global error satisfies a bound of the form

()
where the function C(t) and the maximum step size h0 depend only on the method, some of the bounds Mi, and the parameters α and ϱ/(1 − γ).

Proposition 3.2. EB-convergence of the method (2.4) implies B-convergence of the method (ρ, σ).

Remark 3.3. For nonneutral DDEs, Huang et al. [44] introduced the D-convergence of one-leg methods. Obviously, when an EB-convergent NDDEs method is applied to nonneutral DDEs, it is D-convergent.

3.2. Convergence Analysis of OLIDE(I)

First, we consider the convergence of the algorithm OLIDE(I). For this purpose, let us consider
()
where
()
()
()
()
It can be seen that for n ≥ 0, when the step-size h satisfies certain conditions, en is defined completely by (3.7).
For any given k × k real symmetric positive definite matrix G = [gij], the norm ∥·∥G is defined as
()

Lemma 3.4. If the method (ρ, σ) is A-stable, then the numerical solution produced by the algorithm OLIDE(I) satisfies

()
where .

Proof. Since A-stability is equivalent to G-stability (see [45]), it follows from the definition of G-stability that there exists a k × k real symmetric positive definite matrix G such that for any real sequence

()
where Ai = (ai, ai+1, …, ai+k−1) T(i = 0,1). Therefore, we can easily obtain (see [45, 46])
()
Write , and note the difference between it and εn+1. Then use the argument technique above to obtain
()
By means of condition (3.1), it follows from (3.16) that
()
If there exists a positive integer l such that η(l)(σ(E)tn) ≤ 0, then in view of , we have
()
Conversely, if for all l, η(l)(σ(E)tn) > 0, we similarly have (3.18). In short, it follows from (3.16) that
()
As an important step toward the proof of this lemma, we show that
()
where
()
For this purpose, we consider the following two cases successively.

Case 1.  tn+k−1η(i)(σ(E)tn) ≤ tn+k. In this case, from (2.7) and (3.9) we have

()
Case 2.  η(i)(σ(E)tn) < tn+k−1. For this case, similarly, it follows from (2.7) and (3.9) that
()
Substituting (3.22) and (3.23) into (3.19) and using the Cauchy inequality yields (3.20). From (3.20) one further gets
()
where denotes the minimum eigenvalue of the matrix G. On the other hand, it is easy to verify that there exists a constant d4 such that
()
Thus, we get
()
Substituting the above inequality into (3.24) yields
()
For any given c0 ∈ (0,1), we let . Then for , the above inequality leads to
()
where . Noting that
()
where denotes the maximum eigenvalue of the matrix G, we have
()
Now for any given , when , the above inequality leads to (3.13) with
()
This completes the proof of this lemma.

Compared with the nonneutral DDEs with a constant delay considered in [38, 44], the problems considered in this paper are more complex, so a series of new difficulties needs to be overcome. Nevertheless, we note that some basic proof ideas are related to those used in [38, 44].

Lemma 3.5. If the method (ρ, σ) is A-stable, then there exist two constants d6 and h2, which depend only on the method, some of the bounds Mi, and the parameters α and ϱ/(1 − γ), such that

()
where p is the consistent order in the classical sense, p = 1,2.

Proof. The idea is a generalization of that used in [38, 44]. The A-stability of the method (ρ, σ) implies that βk/αk > 0, and it is consistent of order p = 1,2 in the classical sense (see [45, 46]). Consider the following scheme:

()
()
By Taylor expansion, we can assert that there exists a constant c3 such that
()
From (3.7) and (3.33), it is easy to obtain
()
If η(i)(σ(E)tn)∈(tn+k−1, tn+k], by Taylor expansion, we have
()
This inequality and (3.36) imply that
()
For any given , we let . Then for , the above inequality leads to
()
where
()
Substitute (3.39) into (3.36) to obtain
()
On the other hand, since
()
it follows from (3.41) that
()
For any given , let
()
Then the required inequality (3.32) follows from (3.43), where
()
This completes the proof of the lemma.

Lemma 3.4, together with Lemma 3.5, implies the following theorem.

Theorem 3.6. The algorithm OLIDE(I) extended by the method (ρ, σ) is EB-convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.

Proof. Firstly, we observe that the EB-convergence of order p of the method OLIDE(I) implies that the method (ρ, σ) is B-convergent of order p, and it is A-stable and consistent of order p in the classical sense for ODEs, p = 1 or 2 (see, e.g., [38]).

On the other hand, it follows from Lemmas 3.4 and 3.5 that

()
which implies that
()
By induction, we have
()
and therefore
()
Consequently, the following holds:
()
which implies that the algorithm OLIDE(I) is EB-convergent of order p, p = 1 or 2. The proof of Theorem 3.6 is completed.

In [44], Huang et al. proved that A-stable one-leg methods with the interpolation (2.7) for constant-delay DDEs are D-convergent under a condition that η(i)(σ(E)tn) ≤ tn+k−1. As a special case, we have the following corollary.

Corollary 3.7. A one-leg method (ρ, σ) with the interpolation (2.7) for DDEs with any variable delay is D-convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.

3.3. Convergence Analysis of OLIDE(II)

In [18], Wang et al. have shown that the algorithm OLIDE(II) has better stability properties than the algorithm OLIDE(I) for DDEs with constant delay. Here we will prove that the algorithms OLIDE(II) and OLIDE(I) have the same convergence properties. To obtain the convergence result of the algorithm OLIDE(II), we also consider (3.7). But in this case,   (i = 1,2, …) are determined by
()
When η(i)(σ(E)tn) ≤ tn+k−1, we have
()
where
()
By Taylor expansion, there exist constants and h3 such that
()
Noting that for tn+k−1 ≤ η(σ(E)tn) ≤ tn+k we have a similar inequality, we find that the proof of the following theorem is similar to that of Theorem 3.6.

Theorem 3.8. The algorithm OLIDE(II) extended by the method (ρ, σ) is EB-convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.

It should be pointed out that Huang et al. [38] proved that A-stable one-leg methods with the interpolation (2.8) for constant-delay DDEs are D-convergent. As a special case, in this paper we obtain the convergence result of A-stable one-leg methods with the interpolation (2.8) for DDEs with any variable delay.

Since the algorithm OLIDE(II) has better stability properties than the algorithm OLIDE(I) [18] and they have the same convergence properties, we believe that the algorithm OLIDE(II) is more effective than the algorithm OLIDE(I) for solving delay problems.

Although we can show theoretically the convergence of the algorithms OLIDE for NDDEs with any variable delay, in practical implementations these algorithms are generally expensive and can create a serious storage problem, since they always require tracing the recursion back to the initial interval. So for general variable-delay NDDEs we need other algorithms, which are based on interpolation.

4. The Second Class of Algorithms Derived from the Interpolation: OLIIT

In this section, for Yh(η(σ(E)tn)) we consider interpolation schemes consistent with those for yh(η(σ(E)tn)). Firstly, if (2.6) holds, the interpolation consistent with (2.7) is
()
where Yj is computed by the following formula:
()
Of course, when j ≤ 0, Yj is determined by Yj = ϕ′(jh). The algorithm (2.4)–(2.7)–(4.1) is denoted for short by OLIIT(I). Obviously, when the methods OLIDE(I) and OLIIT(I) are applied to nonneutral DDEs, they are identical and have been used in [11].
Given (2.9), the corresponding interpolation scheme with (2.8) is
()
Similarly, Yj is produced by (4.2), and Yj = ϕ′(jh) for j ≤ 0. The algorithm (2.4)–(2.8)–(4.3) is denoted by OLIIT(II).
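To make the construction concrete, here is a minimal scalar sketch (our illustration, not the paper's code) of an OLIIT-type scheme built on the one-leg midpoint rule: delayed y-values come from linear interpolation of the grid values, in the spirit of (2.7), and delayed derivatives from linear interpolation of the stored stage derivatives Yj. The manufactured test problem, its coefficients b and c, and the variable delay are our illustrative choices.

```python
import math

def oliit_midpoint(f, phi, dphi, eta, T, h):
    """One-leg midpoint rule for y'(t) = f(t, y(t), y(eta(t)), y'(eta(t))).
    Delayed y: linear interpolation of grid values; delayed y': linear
    interpolation of stored stage derivatives Y_j ~ y'(t_j + h/2).
    Assumes t - eta(t) >= tau0 > h, so only already-computed data is used."""
    n_steps = round(T / h)
    y = [phi(0.0)]                       # grid values y_j ~ y(j*h)
    Y = []                               # stage derivatives at t_j + h/2

    def y_at(s):                         # linear interpolation on the grid
        if s <= 0.0:
            return phi(s)
        j = min(int(s / h), len(y) - 2)
        th = s / h - j
        return (1.0 - th) * y[j] + th * y[j + 1]

    def dy_at(s):                        # interpolation of stage derivatives
        if s <= 0.5 * h:
            return dphi(min(s, 0.0))
        j = min(int(s / h - 0.5), len(Y) - 2)
        th = s / h - 0.5 - j
        return (1.0 - th) * Y[j] + th * Y[j + 1]

    for n in range(n_steps):
        t_mid = (n + 0.5) * h
        yd, zd = y_at(eta(t_mid)), dy_at(eta(t_mid))
        y_next = y[n]
        for _ in range(50):              # fixed-point iteration (non-stiff f)
            y_next = y[n] + h * f(t_mid, 0.5 * (y[n] + y_next), yd, zd)
        Y.append((y_next - y[n]) / h)
        y.append(y_next)
    return y

# Manufactured problem (hypothetical): exact solution y(t) = exp(-t),
# bounded variable delay tau(t) = 1 + 0.5*sin(t), forcing chosen so that
# the exact solution satisfies the equation.
b, c = 0.3, 0.5
eta = lambda t: t - 1.0 - 0.5 * math.sin(t)
f = lambda t, y, yd, zd: -y + b * yd + c * zd + (c - b) * math.exp(-eta(t))
phi = lambda t: math.exp(-t)
dphi = lambda t: -math.exp(-t)
err = lambda h: abs(oliit_midpoint(f, phi, dphi, eta, 2.0, h)[-1] - math.exp(-2.0))
```

Halving h should roughly quarter the error, matching the second-order convergence established for the OLIIT algorithms.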
Now we discuss the convergence of OLIIT(I) and OLIIT(II). In order to do this, the condition (3.3) will be replaced by the following:
()
If we allow the error bound to depend on (β + γL)/(1 − γ), we can still obtain convergence results.

4.1. Convergence Analysis of OLIIT(I)

Below, we establish the convergence result of the algorithm OLIIT(I). To accomplish this, we need to consider (3.7) with (σ(E)tn))   defined by the following formula:
()
where
()
Then we have the following lemma.

Lemma 4.1. If the method (ρ, σ) is A-stable, then the numerical solution produced by the algorithm OLIIT(I) satisfies

()
Here the constants d1, d2, d3, and h1 depend on the method (ρ, σ), α, (β + γL)/(1 − γ), and bounds Mi for certain derivatives of the true solution y(t).

Proof. Similar to the proof of Lemma 3.4, we have

()
If η(σ(E)tn) ≤ tn+k−1, it follows from (2.7) and (4.5) that
()
If tn+k−1η(σ(E)tn) ≤ tn+k, then from (2.7) and (4.5) we have
()
Consequently, we have an inequality similar to (3.20). The remaining part of this proof is analogous to that of Lemma 3.4, and we omit it here. This completes the proof.

Similarly, we can give the same lemma as Lemma 3.5 and obtain the following theorem.

Theorem 4.2. The algorithm OLIIT(I) extended by the method (ρ, σ) is convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.

4.2. Convergence Analysis of OLIIT(II)

In this subsection, we establish the convergence result of the algorithm OLIIT(II).

Theorem 4.3. The algorithm OLIIT(II) extended by the method (ρ, σ) is convergent of order p if and only if the method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1,2.

There is no essential difference between the proof of this theorem and the proofs of the previous three theorems, and we therefore omit it here.

We conclude this section with a few remarks about our results.

Our first remark is that if we allow the error bound to depend on (β + γL)/(1 − γ), the algorithms OLIDE(I) and OLIDE(II) are also convergent. Moreover, since ϱ ≤ β + γL, we have reason to believe that the algorithms OLIDE(I) and OLIDE(II), whose error bounds depend on α0 and ϱ/(1 − γ), are more efficient than the algorithms OLIIT(I) and OLIIT(II), whose error bounds depend on α0 and (β + γL)/(1 − γ).

Second, a key property of NDDEs is that their solutions do not possess the smoothing property, that is, the solution does not become smoother with increasing values of t (see [1]); if
()
does not hold, the solution of an NFDE will in general have a discontinuous derivative at a finite set of points of discontinuity of the first kind. Our results cannot be applied to this case, but they should still provide useful insight for researchers in the field of numerical methods for NDDEs. On the other hand, for a problem satisfying the condition (𝒞1), the convergence analysis of the four algorithms can be carried out interval by interval if the true solution is sufficiently differentiable on each subinterval Ik.

5. Numerical Experiments

In this section, in order to illustrate the one-leg methods (2.4) for solving the NDDEs (1.1)-(1.2), we consider two examples.

5.1. Example 1: A Test Equation

First of all, we consider a simple test equation
()
which is a modification of a test equation for stiff ODEs (see, e.g., [47, 48]), where τ = 1, g : R → R is a given smooth function, and a, b, and c are given real constants. Its exact solution is y(t) = g(t). Now let us take a = −10^8, b = 0.9 · 10^8, c = 0.9, and g(t) = sin(t). Then α = −10^8, β = 0.9 · 10^8, γ = 0.9, and ϱ = 0. Observe that when the constant-coefficient linear NDDE (5.1) is discretized on a constrained mesh, that is, h = 0.1, h = 0.01, and h = 0.001, the four algorithms, OLIDE(I), OLIIT(I), OLIDE(II), and OLIIT(II), extended by the midpoint rule are identical. The same conclusion holds for the algorithms extended by the second-order backward differentiation formula (BDF2). So we consider only applying the algorithm OLIDE(I) extended by the midpoint rule (OLIDE(I)-MP)
()
and by second-order backward differentiation formula (OLIDE(I)-BDF2)
()
to the problem (5.1). Table 1 shows the numerical errors at t = 10.
Table 1. The numerical errors at t = 10 when the algorithm OLIDE(I) is applied to the problem (5.1).
h OLIDE(I)-MP OLIDE(I)-BDF2
0.1 6.807355e − 004 2.922906e − 011
0.01 6.800335e − 006 2.811085e − 013
0.001 6.800264e − 008 2.775558e − 015

Note that for this problem both algorithms are convergent of order 2 and the algorithm OLIDE(I)-BDF2 is more effective than the algorithm OLIDE(I)-MP.
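This experiment can be reproduced in a few lines once a concrete form of (5.1) is fixed. The sketch below assumes the Prothero–Robinson-like form y′(t) = a(y(t) − g(t)) + b(y(t − 1) − g(t − 1)) + c(y′(t − 1) − g′(t − 1)) + g′(t), which has exact solution y(t) = g(t) and is consistent with the stated constants; both this form and the implementation are our assumptions, not the paper's code.

```python
import math

def olide_mp_error(h, T=10.0):
    """OLIDE(I)-MP for the assumed test equation on a constrained mesh
    h = 1/m: the delayed stage point t_n + h/2 - 1 is itself a past stage
    point, so the stored stage derivative serves as the delayed derivative.
    The midpoint step is linear in y_{n+1} and is solved in closed form."""
    a, b, c = -1e8, 0.9e8, 0.9
    g, dg = math.sin, math.cos
    m = round(1.0 / h)
    y, Y = [g(0.0)], []                  # grid values and stage derivatives
    for n in range(round(T / h)):
        t_mid = (n + 0.5) * h
        s = t_mid - 1.0                  # delayed stage point
        if s <= 0.0:
            yd, zd = g(s), dg(s)         # initial function phi = g
        else:
            yd = 0.5 * (y[n - m] + y[n - m + 1])   # linear interpolation
            zd = Y[n - m]                # past stage derivative, hit exactly
        rhs = a * (0.5 * y[n] - g(t_mid)) + b * (yd - g(s)) \
            + c * (zd - dg(s)) + dg(t_mid)
        y_next = (y[n] + h * rhs) / (1.0 - 0.5 * h * a)
        Y.append((y_next - y[n]) / h)
        y.append(y_next)
    return abs(y[-1] - g(T))             # error at t = T
```

With this assumed form the computed error should decrease roughly as h², in line with the second-order behaviour reported in Table 1.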

5.2. Example 2: A Partial Functional Differential Equation

The next problem is defined for t ∈ [0,10] and x ∈ [0,1], as follows:
()
The function g(t, x) and the initial and boundary values are selected in such a way that the exact solution becomes
()
After application of the numerical method of lines, we obtain a system of neutral delay differential equations of the form
()
where Δx is the spatial step, Nx is a natural number such that ΔxNx = 1, xi = iΔx,   i = 0,1, 2, …, Nx, gi(t) = g(t, xi), and ui(t) is meant to approximate the solution of (5.4) at the point (t, xi).

We take Δx = 0.1 or Δx = 0.01 for the numerical method of lines and use the midpoint rule combined with the different interpolation approximations for the numerical integration of the problem (5.6).

For the purpose of comparison, we consider three groups of coefficients as follows:
  • Coefficient I: a(t) = sin 2t, b(t) = 0, c(t) = −0.0001;

  • Coefficient II: a(t) = sin 2t, b(t) = 0, c(t) = −0.9;

  • Coefficient III: a(t) = 100sin 2t, b(t) = 0, c(t) = −0.9.

It is easy to verify that α = −π² min t∈[0,10] a(t), , β = max t∈[0,10] b(t), γ = max t∈[0,10] |c(t)|, and . Then only for the coefficient group I is ϱ/(1 − γ) or (β + γL)/(1 − γ) of moderate size.
Let
()
denote the error of an algorithm when applied to problem (5.6).

(a) Constant Delay Problem. First of all, we consider the problem (5.6) with a constant delay τ(t) = t − η(t) = 1. When we choose the step sizes h = 0.1, h = 0.01, and h = 0.001, the two algorithms OLIDE(I) and OLIDE(II) extended by the midpoint rule are identical (OLIDE-MP), and the two algorithms OLIIT(I) and OLIIT(II) extended by the midpoint rule are identical (OLIIT-MP). But since (5.6) is a variable-coefficient system, the algorithm OLIDE-MP differs from the algorithm OLIIT-MP. The errors ϵ at t = 10 are listed in Tables 2, 3, and 4 when the methods are applied to the problem (5.6) with the three groups of coefficients, respectively.

Observe that for the constant delay problem with the coefficient group I, both algorithms, OLIDE-MP and OLIIT-MP, are convergent of order 2. But for the problem with the coefficient group II, the numerical results of OLIIT-MP are not ideal when Nx = 100 and h = 0.1, and the situation deteriorates further for the coefficient group III. On the one hand, this implies that OLIDE-MP is stronger than OLIIT-MP, which confirms our theoretical analysis. On the other hand, it implies that the coefficients, the spatial step size Δx, and the time step size h affect the efficiency of the algorithm OLIIT-MP. It is well known that the midpoint rule is A-stable and is convergent for stiff ODEs. But here the numerical results become worse when the time step size is larger and the grid is finer, which further confirms our theoretical results.

Table 2. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group I, where τ(t) = 1.
h OLIDE-MP OLIIT-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
  0.1 2.804687e − 008 2.795105e − 008 2.808233e − 008 2.798658e − 008
0.01 2.802893e − 010 2.793375e − 010 2.806421e − 010 2.796910e − 010
0.001 2.802876e − 012 2.793359e − 012 2.806405e − 012 2.796895e − 012
Table 3. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group II, where τ(t) = 1.
h OLIDE-MP OLIIT-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
0.1 3.500122e − 007 3.433799e − 007 3.345386e − 006 5.307093e − 002
0.01 2.658399e − 009 2.575017e − 009 3.215464e − 008 3.198151e − 008
0.001 2.649853e − 011 2.566321e − 011 3.214185e − 010 3.196886e − 010
Table 4. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group III, where τ(t) = 1.
h OLIDE-MP OLIIT-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
0.1 1.932937e − 006 2.085638e − 006 5.303484e + 000 5.282569e + 001
0.01 1.432539e − 010 1.432852e − 010 2.926272e − 010 9.189176e − 004
0.001 1.429940e − 012 1.429875e − 012 2.938189e − 012 2.941581e − 012

(b) Bounded Variable Delay Problem. Next we consider the problem (5.6) with a bounded variable delay τ(t) = t − η(t) = sin t + 1. Observe that the function η(t) = t − sin(t) − 1 satisfies the condition (𝒞1), so that both the convergence analysis and the numerical solution of this equation can be done interval by interval. Because the true solution is known to be sufficiently differentiable on the whole interval, the step sizes h = 0.1, h = 0.01, and h = 0.001 are chosen.

In this case, we do not consider OLIDE(I) and OLIDE(II), since these two algorithms would create implementation and computational-complexity issues. We explore only the algorithm OLIIT(I) extended by the midpoint rule (OLIIT(I)-MP) and the algorithm OLIIT(II) extended by the midpoint rule (OLIIT(II)-MP). The errors ϵ at t = 10 are listed in Tables 5, 6, and 7 when the algorithms are applied to the problem (5.6) with the three groups of coefficients, respectively.

Table 5. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group I, where τ(t) = sin  t + 1.
h OLIIT(I)-MP OLIIT(II)-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
  0.1 2.809033e − 008 2.799451e − 008 2.808259e − 008 2.798684e − 008
0.01 2.807181e − 010 2.797665e − 010 2.806578e − 010 2.797068e − 010
0.001 2.807207e − 012 2.797692e − 012 2.806491e − 012 2.796982e − 012
Table 6. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group II, where τ(t) = sin  t + 1.
h OLIIT(I)-MP OLIIT(II)-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
0.1 1.126902e − 006 9.145590e − 004 4.938563e − 006 1.656698e − 002
0.01 1.106792e − 008 1.102020e − 008 6.263681e − 009 6.274124e − 009
0.001 1.142587e − 010 1.136783e − 010 6.251279e − 011 6.255985e − 011
Table 7. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group III, where τ(t) = sin  t + 1.
h OLIIT(I)-MP OLIIT(II)-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
0.1 9.066809e − 002 1.199248e + 000 1.651731e + 000 3.112600e + 001
0.01 2.328686e − 009 6.123243e − 005 2.292876e − 009 2.828239e − 004
0.001 2.261214e − 011 4.619492e − 009 2.239975e − 011 2.251171e − 011

From these numerical data, we also see that both algorithms, OLIIT(I)-MP and OLIIT(II)-MP, are convergent of order 2 for this equation with the coefficient group I, but this good behaviour is lost when the coefficients become II or III.

(c) Proportional Delay Problem. Finally, we consider the problem (5.6) with a proportional variable delay τ(t) = 0.5t. We still choose the step sizes h = 0.1, h = 0.01, and h = 0.001. Similar to the case of the bounded variable delay, we use only the algorithms OLIIT(I)-MP and OLIIT(II)-MP to solve the problem (5.6). Tables 8, 9, and 10 show the errors ϵ at t = 10.

Table 8. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group I, where τ(t) = 0.5t.
h OLIIT(I)-MP OLIIT(II)-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
0.1 3.007486e − 008 2.998332e − 008 2.985691e − 008 2.976663e − 008
0.01 3.005176e − 010 2.996115e − 010 2.983553e − 010 2.974621e − 010
0.001 3.005155e − 012 2.996096e − 012 2.983534e − 012 2.974603e − 012
Table 9. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group II, where τ(t) = 0.5t.
h OLIIT(I)-MP OLIIT(II)-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
0.1 3.794966e − 005 2.539557e − 001 1.627824e − 005 2.019670e − 004
0.01 4.905939e − 007 4.870288e − 007 1.643376e − 007 1.649717e − 007
0.001 5.026856e − 009 4.977278e − 009 1.770753e − 009 1.770849e − 009
Table 10. The errors ϵ at t = 10 when the algorithms are applied to the problem (5.6) with the coefficients group III, where τ(t) = 0.5t.
h OLIIT(I)-MP OLIIT(II)-MP
Nx = 10 Nx = 100 Nx = 10 Nx = 100
0.1 4.279370e + 001 3.962658e + 001 2.324247e − 002 6.345608e − 002
0.01 5.989558e − 008 3.347081e − 004 6.024260e − 008 6.023323e − 008
0.001 5.951239e − 010 5.951353e − 010 5.842018e − 011 5.842442e − 010

From these numerical data, we again observe that both algorithms, OLIIT(I)-MP and OLIIT(II)-MP, are convergent of order 2 for this equation with the coefficient group I; but for the same equation with the coefficient group II or III, the numerical results become worse as (β + γL)/(1 − γ) becomes larger.

6. Concluding Remarks

In this paper we introduced four algorithms, built on an ODE method, to solve nonlinear NDDEs with general variable delay, established their convergence properties, and compared their numerical results by means of extensive numerical data. This paper can be regarded as an extension from nonneutral DDEs with a constant delay [38, 44] to neutral DDEs with general variable delay. Although some basic proof ideas are related to those used in [38, 44], the problems considered in this paper are more complex, so some new proof techniques were introduced to overcome a series of new difficulties encountered in the theoretical analysis.

From theoretical analysis given in Sections 3 and 4 and numerical results shown in Section 5, we come to the following remarks:
  • (1)

    If α0 and ϱ/(1 − γ) are of moderate size, the algorithms OLIDE(I) and OLIDE(II) based on an A-stable one-leg method (ρ, σ) are convergent of order p, where p = 1, 2 is the order of consistency in the classical sense. When α0 and (β + γL)/(1 − γ) are of moderate size, the four algorithms introduced in this paper are convergent of order p if the one-leg method (ρ, σ) is A-stable and consistent of order p in the classical sense, where p = 1, 2. But if (β + γL)/(1 − γ) is very large, the algorithms OLIIT(I) and OLIIT(II) may produce bad numerical results when the time step size is large, even if the ODE method is A-stable. This reveals a difference between numerically solving ODEs and NDDEs.

  • (2)

    If using the direct estimation (2.10) does not create implementation or computational-complexity problems, we prefer the algorithms OLIDE to the algorithms OLIIT. Furthermore, considering that the algorithm OLIDE(II) has better stability properties than the algorithm OLIDE(I) (see [18] and the numerical Example 2 in Section 5.2), it is our belief that the algorithm OLIDE(II) could become an effective numerical method for this class of problems, for example, NDDEs with constant delay, whenever the algorithm is easy to implement. Of course, for general NDDEs with variable delay, we have to use the algorithms OLIIT(I) or OLIIT(II).

  • (3)

    The results established in this paper can be extended in a straightforward way to the case of NDDEs with multiple time delays (1.3)-(1.2).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 11001033), the Natural Science Foundation of Hunan Province (Grant no. 10JJ4003), Chinese Society for Electrical Engineering, and the Open Fund Project of Key Research Institute of Philosophies and Social Sciences in Hunan Universities.
