Volume 2012, Issue 1 120792
Research Article
Open Access

Collocation for High-Order Differential Equations with Lidstone Boundary Conditions

Francesco Costabile
Department of Mathematics, University of Calabria, 87036 Rende, Italy

Anna Napoli (Corresponding Author)
Department of Mathematics, University of Calabria, 87036 Rende, Italy
First published: 07 August 2012
Academic Editor: Roberto Barrio

Abstract

A class of methods for the numerical solution of high-order differential equations with Lidstone and complementary Lidstone boundary conditions is presented. The methods are collocation methods which provide globally continuously differentiable approximate solutions. The integrals appearing in the coefficients are computed by means of a recurrence formula. Numerical experiments support the theoretical results.

1. Introduction

We consider the general even-order boundary value problem

$$y^{(2n)}(x) = f(x, \mathbf{y}(x)), \quad x \in [0,1], \qquad y^{(2h)}(0) = \alpha_h, \quad y^{(2h)}(1) = \beta_h, \quad h = 0, \ldots, n-1, \qquad (1.1)$$

where $\mathbf{y}(x) = (y(x), y'(x), \ldots, y^{(q)}(x))$, $0 \le q \le 2n-1$, and the general odd-order boundary value problem

$$y^{(2n+1)}(x) = f(x, \mathbf{y}(x)), \quad x \in [0,1], \qquad y(0) = \alpha_0, \quad y^{(2h-1)}(0) = \alpha_h, \quad y^{(2h-1)}(1) = \beta_h, \quad h = 1, \ldots, n, \qquad (1.2)$$

where $\mathbf{y}(x) = (y(x), y'(x), \ldots, y^{(q)}(x))$, $0 \le q \le 2n$. The $\alpha_h$, $\beta_h$ are real constants and $f : [0,1] \times \mathbb{R}^{q+1} \to \mathbb{R}$ is continuous at least in the interior of the domain of interest.
We assume that $f$ satisfies a uniform Lipschitz condition in $\mathbf{y}$, that is, there exist nonnegative constants $L_k$, $k = 0, \ldots, q$, such that, whenever $(x, y_0, y_1, \ldots, y_q)$ and $(x, \bar{y}_0, \bar{y}_1, \ldots, \bar{y}_q)$ are in the domain of $f$, the inequality

$$|f(x, y_0, \ldots, y_q) - f(x, \bar{y}_0, \ldots, \bar{y}_q)| \le \sum_{k=0}^{q} L_k\, |y_k - \bar{y}_k| \qquad (1.3)$$

holds. Under these hypotheses problems (1.1) and (1.2) have a unique solution $y(x)$ in a certain appropriate domain of $[0,1] \times \mathbb{R}^{q+1}$ [1, 2].

The boundary conditions in (1.1) and (1.2) are known, respectively, as Lidstone and complementary Lidstone boundary conditions [1, 3].

Problems of these kinds model a wide spectrum of nonlinear phenomena. For this reason they have attracted considerable attention from many authors, who have studied the existence of solutions using different methods. Often special boundary conditions are considered, under some restrictions on f. Some authors treat particular cases of problem (1.1) as nonlinear eigenvalue problems [4, 5] or apply finite difference methods, shooting techniques, spline approximation, or the method of upper and lower solutions [6–11]. In [12] a collocation method for the numerical solution of second-order nonlinear two-point boundary value problems has been derived. Problems of kind (1.2) were introduced in [3] and have subsequently been studied in [2].

In this paper, for the numerical solution of problems (1.1) and (1.2), as an alternative to existing numerical methods, we propose collocation methods which produce smooth, global approximations to the solution y(x) in the form of polynomial functions.

In Section 2 we consider the even-order BVP (1.1) and give an a priori error estimate. For n = 1 we derive a method to solve second-order BVPs, such as the classical Bratu problem. Then, for n = 2, we construct a method for the solution of BVPs of the following kind:

$$y^{(4)}(x) = f(x, \mathbf{y}(x)), \quad x \in [0,1], \qquad y(0) = \alpha_0, \quad y(1) = \beta_0, \quad y''(0) = \alpha_1, \quad y''(1) = \beta_1. \qquad (1.4)$$

This problem describes the deformation of an elastic beam whose two ends are simply supported.
In Section 3 we propose a class of methods for the numerical solution of odd-order BVPs and, in particular, we construct an approximating polynomial for the solution of the fifth-order problem

$$y^{(5)}(x) = f(x, \mathbf{y}(x)), \quad x \in [0,1], \qquad y(0) = \alpha_0, \quad y'(0) = \alpha_1, \quad y'(1) = \beta_1, \quad y'''(0) = \alpha_2, \quad y'''(1) = \beta_2. \qquad (1.5)$$
Fifth-order boundary value problems generally arise in the mathematical modeling of viscoelastic flows [13, 14].

In order to implement the proposed methods, in Section 4 we give an algorithm to compute the numerical solution of (1.1) and (1.2) at a set of nodes. Finally, in Section 5, we present some numerical examples to demonstrate the efficiency of the proposed procedure.

2. The Even-Order BVP

Let us consider the even-order BVP (1.1).

2.1. Preliminaries

Let $y(x)$ be the solution of (1.1). If $y(x) \in C^{(2n)}[0,1]$, then, from Lidstone interpolation [1], we have

$$y(x) = Q_{2n-1}(x) + T(x), \qquad (2.1)$$

where $Q_{2n-1}$ is the Lidstone interpolating polynomial [1] of degree $2n - 1$,

$$Q_{2n-1}(x) = \sum_{k=0}^{n-1} \left[ y^{(2k)}(0)\, \Lambda_k(1-x) + y^{(2k)}(1)\, \Lambda_k(x) \right], \qquad (2.2)$$

satisfying the conditions

$$Q_{2n-1}^{(2k)}(0) = y^{(2k)}(0), \quad Q_{2n-1}^{(2k)}(1) = y^{(2k)}(1), \quad k = 0, \ldots, n-1. \qquad (2.3)$$

The error term $T(x)$ is given by

$$T(x) = \int_0^1 g_n(x,t)\, y^{(2n)}(t)\, dt, \qquad (2.4)$$

where $g_n(x,t)$ is the Green's function

$$g_1(x,t) = \begin{cases} (x-1)\,t, & t \le x, \\ (t-1)\,x, & x \le t, \end{cases} \qquad g_n(x,t) = \int_0^1 g_1(x,s)\, g_{n-1}(s,t)\, ds, \quad n \ge 2, \qquad (2.5)$$

and $\{\Lambda_k(x)\}$ is the sequence of Lidstone polynomials in the interval $[0,1]$, which can be defined by the following recursive relations:

$$\Lambda_0(x) = x, \qquad \Lambda_k''(x) = \Lambda_{k-1}(x), \quad \Lambda_k(0) = \Lambda_k(1) = 0, \quad k \ge 1. \qquad (2.6)$$
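The recursion (2.6) determines each $\Lambda_k$ by two integrations followed by a linear correction enforcing the boundary values. As a minimal sketch (not the authors' code), the following snippet generates the first few Lidstone polynomials directly from (2.6):

```python
# Build the Lidstone polynomials Lambda_k on [0,1] from the recursion (2.6):
# Lambda_0(x) = x, Lambda_k'' = Lambda_{k-1}, Lambda_k(0) = Lambda_k(1) = 0.
import sympy as sp

x = sp.symbols('x')

def lidstone_polynomials(n):
    """Return [Lambda_0, ..., Lambda_{n-1}] as sympy expressions."""
    polys = [x]                      # Lambda_0(x) = x
    for _ in range(1, n):
        p = sp.integrate(sp.integrate(polys[-1], x), x)   # integrate twice
        c0 = -p.subs(x, 0)           # linear correction a*x + b so that the
        c1 = -p.subs(x, 1)           # result vanishes at x = 0 and x = 1
        polys.append(sp.expand(p + (c1 - c0)*x + c0))
    return polys

if __name__ == "__main__":
    for k, L in enumerate(lidstone_polynomials(4)):
        print(k, L)                  # e.g. Lambda_1(x) = x**3/6 - x/6
```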

2.2. Derivation of the New Method

If $x_i$, $i = 1, \ldots, m$, are $m$ distinct points in $[0,1]$, then, using Lagrange interpolation, we get

$$y^{(2n)}(x) = \sum_{i=1}^{m} l_i(x)\, y^{(2n)}(x_i) + R_m(y^{(2n)}, x), \qquad (2.7)$$

where $l_i(x)$, $i = 1, \ldots, m$, are the fundamental Lagrange polynomials and $R_m(y^{(2n)}, x)$ is the remainder term. Inserting (2.7) into (2.4) and then substituting into (2.1), in view of the first equation in (1.1), we obtain

$$y(x) = Q_{2n-1}(x) + \sum_{i=1}^{m} f(x_i, \mathbf{y}(x_i)) \int_0^1 g_n(x,t)\, l_i(t)\, dt + \int_0^1 g_n(x,t)\, R_m(y^{(2n)}, t)\, dt. \qquad (2.8)$$

This suggests defining the polynomial $y_{n,m}(x)$, implicitly, by

$$y_{n,m}(x) = Q_{2n-1}(x) + \sum_{i=1}^{m} f(x_i, \mathbf{y}_{n,m}(x_i))\, q_{n,i}(x), \qquad (2.9)$$

where $\mathbf{y}_{n,m}(x_i) = \bigl(y_{n,m}(x_i), y_{n,m}'(x_i), \ldots, y_{n,m}^{(q)}(x_i)\bigr)$ and

$$q_{n,i}(x) = \int_0^1 g_n(x,t)\, l_i(t)\, dt. \qquad (2.10)$$

The following theorem holds.

Theorem 2.1. The polynomial $y_{n,m}(x)$ of degree $2n + m - 1$ implicitly defined by (2.9) satisfies the following relations:

$$y_{n,m}^{(2k)}(0) = \alpha_k, \quad y_{n,m}^{(2k)}(1) = \beta_k, \quad k = 0, \ldots, n-1, \qquad (2.11)$$
$$y_{n,m}^{(2n)}(x_j) = f(x_j, \mathbf{y}_{n,m}(x_j)), \quad j = 1, \ldots, m, \qquad (2.12)$$
that is, $y_{n,m}(x)$ is a collocation polynomial for (1.1).

Proof. From the properties (2.6) it follows that

$$q_{n,i}^{(2k)}(0) = q_{n,i}^{(2k)}(1) = 0, \quad k = 0, \ldots, n-1. \qquad (2.13)$$
This and (2.3) prove (2.11). Furthermore, from (2.10), we have
$$q_{n,i}^{(2n)}(x) = l_i(x); \qquad (2.14)$$
hence, since $l_i(x_j) = \delta_{ij}$, (2.12) follows by differentiating (2.9) $2n$ times.
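For a concrete check of the two properties used in this proof in the simplest case $n = 1$, the following sketch (an illustration under the stated assumptions, not the authors' code) verifies symbolically that the coefficients $q_{1,i}(x) = \int_0^1 g_1(x,t)\, l_i(t)\, dt$ vanish at the endpoints and satisfy $q_{1,i}''(x) = l_i(x)$, which is exactly what (2.11) and (2.12) require of (2.9):

```python
# Symbolic check, for n = 1, of the two properties used in the proof of
# Theorem 2.1: q_{1,i}(0) = q_{1,i}(1) = 0 and q_{1,i}''(x) = l_i(x), with
# g_1(x,t) = t(x-1) for t <= x and x(t-1) for t >= x.
import sympy as sp

x, t = sp.symbols('x t')
nodes = [sp.Rational(1, 4), sp.Rational(1, 2), sp.Rational(3, 4)]   # sample nodes (m = 3)

def lagrange(i, var):
    """Fundamental Lagrange polynomial l_i on `nodes` in the variable `var`."""
    p = sp.Integer(1)
    for j, xj in enumerate(nodes):
        if j != i:
            p *= (var - xj) / (nodes[i] - xj)
    return sp.expand(p)

def q1(i):
    """q_{1,i}(x) = int_0^1 g_1(x,t) l_i(t) dt, split at t = x."""
    li = lagrange(i, t)
    return sp.expand(sp.integrate(t*(x - 1)*li, (t, 0, x))
                     + sp.integrate(x*(t - 1)*li, (t, x, 1)))

for i in range(len(nodes)):
    qi = q1(i)
    assert qi.subs(x, 0) == 0 and qi.subs(x, 1) == 0            # boundary values vanish
    assert sp.expand(sp.diff(qi, x, 2) - lagrange(i, x)) == 0   # second derivative is l_i
print("properties verified for n = 1, m = 3")
```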

2.3. The Error

In what follows, for all $y \in C^{(2n+m)}[a,b]$, we define the norm [15]
()
and the constants
()
where $L_k$, $k = 0, \ldots, q$, are the Lipschitz constants of $f$ and $R_m(y^{(2n)}, t)$ is the remainder term in the Lagrange interpolation of $y^{(2n)}(t)$ on the nodes $\{x_i\}$. Further, we denote by $B_r(x)$ the Bernoulli polynomial of degree $r$ [16] and define
()

For the global error the following theorem holds.

Theorem 2.2. With the above notation, suppose that $L\, Q_{n,m} < 1$. Then

()

Proof. By differentiating (2.8) and (2.9) $s$ times, $s = 0, \ldots, q$, we get

()
Now, since [1]
()
we have $\Lambda_k(1-x) = \dfrac{2^{2k+1}}{(2k+1)!}\, B_{2k+1}\!\left(1 - \dfrac{x}{2}\right)$ and, from the property $B_{2k+1}(1 - x/2) = -B_{2k+1}(x/2)$, we get
()
Hence
()
Thus
()
It is known [16] that the Bernoulli polynomials may be expressed as $B_k(x) = \sum_{i=0}^{k} \binom{k}{i} B_i\, x^{k-i}$, where $B_k = B_k(0)$ are the Bernoulli numbers [16]. Hence
()
It follows that
()
and inequality (2.18) follows.
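As a quick sanity check of the Bernoulli expansion used in the proof (a small illustration, not part of the original paper), the identity can be verified symbolically:

```python
# Verify B_k(x) = sum_{i=0}^{k} C(k,i) * B_i * x^(k-i), with B_i = B_i(0)
# (so B_1 = -1/2), for the first few degrees.
import sympy as sp

x = sp.symbols('x')

for k in range(6):
    expansion = sum(sp.binomial(k, i) * sp.bernoulli(i, 0) * x**(k - i)
                    for i in range(k + 1))
    assert sp.expand(expansion - sp.bernoulli(k, x)) == 0
print("Bernoulli expansion verified for k = 0, ..., 5")
```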

2.4. Example 1: The Second-Order Case

Now let us consider the case of second-order BVPs,

$$y''(x) = f(x, \mathbf{y}(x)), \quad x \in [0,1], \qquad y(0) = \alpha_0, \quad y(1) = \beta_0. \qquad (2.26)$$

In this case the Lidstone interpolating polynomial (2.2) has the following expression

$$Q_1(x) = (1-x)\, y(0) + x\, y(1) = (1-x)\, \alpha_0 + x\, \beta_0, \qquad (2.27)$$

and the Green's function is

$$g_1(x,t) = \begin{cases} t\,(x-1), & t \le x, \\ x\,(t-1), & x \le t. \end{cases} \qquad (2.28)$$
Putting
()
and integrating by parts, we have
()
Hence
()
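Since the explicit formulas for the coefficients of the second-order method are not reproduced above, the following self-contained sketch illustrates the method on a linear test problem, assuming (as in (2.10)) that the coefficients are $q_{1,i}(x) = \int_0^1 g_1(x,t)\, l_i(t)\, dt$. For a linear right-hand side the collocation conditions reduce to a linear system in the unknowns $c_i \approx y''(x_i)$. The test problem $y'' = y$, $y(0) = 0$, $y(1) = 1$ (exact solution $\sinh x / \sinh 1$) and the node choice are illustrative assumptions, not taken from the paper.

```python
# Second-order collocation sketch for y'' = y, y(0) = 0, y(1) = 1.
import sympy as sp

x, t = sp.symbols('x t')
nodes = [sp.Rational(i, 5) for i in range(1, 5)]          # m = 4 equidistant interior nodes

def lagrange(i, var):
    p = sp.Integer(1)
    for j, xj in enumerate(nodes):
        if j != i:
            p *= (var - xj) / (nodes[i] - xj)
    return sp.expand(p)

def q1(i):
    """q_{1,i}(x) = int_0^1 g_1(x,t) l_i(t) dt."""
    li = lagrange(i, t)
    return sp.expand(sp.integrate(t*(x - 1)*li, (t, 0, x))
                     + sp.integrate(x*(t - 1)*li, (t, x, 1)))

m = len(nodes)
c = sp.symbols('c0:%d' % m)                               # unknowns c_i ~ y''(x_i)
p = x + sum(c[i]*q1(i) for i in range(m))                 # boundary part: (1-x)*0 + x*1 = x

# collocation: p''(x_j) = p(x_j), and p''(x_j) = c_j because q_{1,i}'' = l_i
eqs = [sp.Eq(c[j], p.subs(x, nodes[j])) for j in range(m)]
p_num = p.subs(sp.solve(eqs, c))

errors = [abs(float(p_num.subs(x, xv)) - float(sp.sinh(xv) / sp.sinh(1)))
          for xv in (0.25, 0.5, 0.75)]
print(max(errors))                                        # the degree-5 polynomial is quite accurate
```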

2.5. Example 2: The Fourth-Order Case

Now let us consider the case of fourth-order BVPs, that is problem (1.4).

In this case the Lidstone interpolating polynomial (2.2) is

$$Q_3(x) = (1-x)\, \alpha_0 + x\, \beta_0 + \Lambda_1(1-x)\, \alpha_1 + \Lambda_1(x)\, \beta_1, \qquad \Lambda_1(x) = \frac{x^3 - x}{6}, \qquad (2.32)$$

and the Green's function is

$$g_2(x,t) = \int_0^1 g_1(x,s)\, g_1(s,t)\, ds. \qquad (2.33)$$
Using relations (2.29) and integrating by parts, we have
()
Hence
()
By differentiating (2.35) we have
()
where the derivatives of $q_{2,i}(x)$ can be easily computed using the same technique as for $q_{2,i}(x)$ itself.
For the error we have that C2 = 1/3, while Q2,m depends on the nodes xi. If, for example, we consider equidistant points in [0,1], we get Q2,m = r2,m < 1 for m = 2, …, 10. In this case, if Lr2,m < 1, we have
()

3. The Odd-Order BVP

Let $y(x)$ be the solution of the odd-order BVP (1.2). If $y(x) \in C^{(2n+1)}[0,1]$, then, from complementary Lidstone interpolation [2, 3], it is well known that $y(x) = P_{2n}(x) + R(x)$, where $P_{2n}$ is the complementary Lidstone interpolating polynomial [3] of degree $2n$,
()
satisfying the conditions
$$P_{2n}(0) = y(0), \quad P_{2n}^{(2k-1)}(0) = y^{(2k-1)}(0), \quad P_{2n}^{(2k-1)}(1) = y^{(2k-1)}(1), \quad k = 1, \ldots, n. \qquad (3.2)$$
The remainder term $R(x)$ [3] is given by $R(x) = \int_0^1 h_n(x,t)\, y^{(2n+1)}(t)\, dt$, where
()
By proceeding as in Section 2.2, given m distinct points in [0,1], xi, i = 1, …, m, we get the following polynomial of degree 2n + m
()
where
$$p_{n,i}(x) = \int_0^1 h_n(x,t)\, l_i(t)\, dt. \qquad (3.5)$$
Polynomial (3.4) satisfies the relations
()
that is, the polynomial (3.4) is a collocation polynomial for (1.2).
Let us define
()
with $R_m(y^{(2n+1)}, t)$ the remainder term in the Lagrange interpolation of $y^{(2n+1)}$ on the nodes $\{x_i\}$.
If $y \in C^{(2n+m+1)}[0,1]$ and $L\, H_{n,m} < 1$, where $L$ is defined as in (2.16), then, proceeding as in the proof of Theorem 2.2, it is easy to prove that
()

3.1. Example: The Fifth-Order Case

Now let us consider the case of a fifth-order BVP, that is problem (1.5).

In this case the complementary Lidstone interpolating polynomial (3.1) has the following expression
()
The Green's function is
()
where $F_{ik}$ and $M_{ik}$ are defined as in (2.29). Hence
()

4. Algorithms and Implementation

To calculate the approximate solution of problems (1.1) and (1.2) by means of (2.9) and (3.4), respectively, at $x \in [0,1]$, we need the values $y_{n,m}^{(s)}(x_i)$, $i = 1, \ldots, m$, $s = 0, \ldots, q$. To do this we can solve the following system
$$y_{n,m}^{(s)}(x_k) = V_n^{(s)}(x_k) + \sum_{i=1}^{m} f(x_i, \mathbf{y}_{n,m}(x_i))\, a_{n,i}^{(s)}(x_k), \quad k = 1, \ldots, m, \quad s = 0, \ldots, q, \qquad (4.1)$$
where $V_n(x) = Q_{2n-1}(x)$ and $a_{n,i}(x) = q_{n,i}(x)$ in the case of even-order BVPs, while $V_n(x) = P_{2n}(x)$ and $a_{n,i}(x) = p_{n,i}(x)$ in the case of odd-order BVPs.
We observe that, putting
()
with
()
the system (4.1) can be written in the form
()

For the existence and uniqueness of the solution of (4.4) we have the following result.

Theorem 4.1. Let $L$ be defined as in (2.16), $k = m(q+1)$, and let $M$ be a positive constant such that $\|g\| \le M\, \|g\|_\infty$ for all $g \in \mathbb{R}^k$. If $T = M\, L\, \|A\| < 1$, the system (4.4) has a unique solution, which can be calculated by the iterative method

$$Y^{(\nu+1)} = G\bigl(Y^{(\nu)}\bigr), \quad \nu = 0, 1, \ldots, \qquad (4.5)$$
with $Y^{(0)}$ fixed and
()
Moreover, if Y is the exact solution of the system,
()

Proof. If $V, W \in \mathbb{R}^k$, then $\|G(V) - G(W)\| \le M\, L\, \|A\|\, \|V - W\|$.

If $M\, L\, \|A\| < 1$, then $G$ is contractive, and the proof is completed by standard arguments.

Remark 4.2. Iteration (4.5) is equivalent to Picard iteration. In fact, the boundary value problems (1.1) and (1.2) are equivalent to the following nonlinear Fredholm integral equation:

$$y(x) = V_n(x) + \int_0^1 s_n(x,t)\, f(t, \mathbf{y}(t))\, dt, \qquad (4.8)$$
where Vn(x) = Q2n−1(x), sn(x, t) = gn(x, t) in the case of problem (1.1) and Vn(x) = P2n(x), sn(x, t) = hn(x, t) in the case of problem (1.2).

Picard iteration for problem (4.8) is

$$y^{(\nu+1)}(x) = V_n(x) + \int_0^1 s_n(x,t)\, f\bigl(t, \mathbf{y}^{(\nu)}(t)\bigr)\, dt, \quad \nu = 0, 1, \ldots. \qquad (4.9)$$
If we use Lagrange interpolation of $f(t, \mathbf{y}^{(\nu)}(t))$ on the nodes $x_i$, $i = 1, \ldots, m$,

$$f\bigl(t, \mathbf{y}^{(\nu)}(t)\bigr) \simeq \sum_{i=1}^{m} l_i(t)\, f\bigl(x_i, \mathbf{y}^{(\nu)}(x_i)\bigr), \qquad (4.10)$$

we have

$$y^{(\nu+1)}(x) = V_n(x) + \sum_{i=1}^{m} f\bigl(x_i, \mathbf{y}^{(\nu)}(x_i)\bigr)\, a_{n,i}(x), \qquad (4.11)$$

with $a_{n,i}(x) = \int_0^1 s_n(x,t)\, l_i(t)\, dt$. For $x = x_k$, $k = 1, \ldots, m$, (4.11) coincides with (4.5).
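The following self-contained sketch illustrates this fixed-point view in the simplest case $n = 1$ on a Bratu-type problem $y'' = -\lambda e^{y}$, $y(0) = y(1) = 0$ (cf. Example 5.8). It is an illustration under the assumptions already stated (standard Green's function $g_1$, coefficients $a_{1,i}(x_k) = \int_0^1 g_1(x_k,t)\, l_i(t)\, dt$), with the integrals computed here by Gauss–Legendre quadrature rather than by the algorithm of Section 4.1; it is not the authors' implementation.

```python
# Fixed-point (Picard-type) iteration for the n = 1 collocation scheme applied to
#   y'' = -lam * exp(y),  y(0) = y(1) = 0   (Bratu-type problem, cf. Example 5.8).
import numpy as np

def g1(x, t):
    """Standard second-order Green's function on [0,1]."""
    return np.where(t <= x, t * (x - 1.0), x * (t - 1.0))

def lagrange(nodes, j, t):
    """Fundamental Lagrange polynomial l_j(t) on the collocation nodes."""
    lj = np.ones_like(t)
    for i, xi in enumerate(nodes):
        if i != j:
            lj *= (t - xi) / (nodes[j] - xi)
    return lj

def collocation_matrix(nodes, nquad=64):
    """A[k, j] = int_0^1 g_1(x_k, t) l_j(t) dt via Gauss-Legendre quadrature."""
    s, w = np.polynomial.legendre.leggauss(nquad)
    t, w = 0.5 * (s + 1.0), 0.5 * w              # map the rule from [-1,1] to [0,1]
    m = len(nodes)
    A = np.empty((m, m))
    for k, xk in enumerate(nodes):
        for j in range(m):
            A[k, j] = np.sum(w * g1(xk, t) * lagrange(nodes, j, t))
    return A

def solve_bratu(lam=1.0, m=6, iters=200, tol=1e-13):
    nodes = np.linspace(0.0, 1.0, m + 2)[1:-1]   # m interior equidistant nodes (one choice)
    A = collocation_matrix(nodes)
    Y = np.zeros(m)                              # unknowns Y_j ~ y''(x_j)
    for _ in range(iters):                       # a contraction for moderate lam (cf. Theorem 4.1)
        Ynew = -lam * np.exp(A @ Y)              # Y_k <- f(x_k, p(x_k)); boundary part is 0 here
        done = np.max(np.abs(Ynew - Y)) < tol
        Y = Ynew
        if done:
            break
    return nodes, A @ Y                          # nodes and approximate y at the nodes

if __name__ == "__main__":
    lam = 1.0
    nodes, y_approx = solve_bratu(lam=lam)
    theta = 1.0                                  # solve theta = sqrt(2*lam)*cosh(theta/4)
    for _ in range(100):
        theta = np.sqrt(2.0 * lam) * np.cosh(theta / 4.0)
    y_exact = -2.0 * np.log(np.cosh((nodes - 0.5) * theta / 2.0) / np.cosh(theta / 4.0))
    print(np.max(np.abs(y_approx - y_exact)))    # small error, in the spirit of Table 8
```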

4.1. Numerical Computation of the Entries of Matrix A

To calculate the elements $a_{n,i}^{(s)}(x_k)$, $i, k = 1, \ldots, m$, $s = 0, \ldots, q$, of the matrix $A$ we have to compute
()
$i, j = 1, \ldots, m$. To this end it suffices to compute
()
where $a = 0$ or $a = 1$, $r_{0,0}(t) = 1$,
()
For the computation of the integrals (4.13) we use the recursive algorithm proposed in [17]: for each $i = 1, \ldots, m$, let us consider the new points
()
Moreover, let us define and, for s = 1, …, m − 1,
()
We can easily compute
()
For s > 0 the following recurrence formula holds [17]
()
Thus, if , then
()
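The recurrence of [17] is not reproduced above. As a hedged alternative sketch, the coefficients can also be obtained symbolically from the identity $q_{n,i}(x) = \int_0^1 g_1(x,s)\, q_{n-1,i}(s)\, ds$, which follows from the recursive definition of the Green's function $g_n$ in terms of $g_1$, assuming $q_{n,i}(x) = \int_0^1 g_n(x,t)\, l_i(t)\, dt$ as in (2.10); each step is then a plain polynomial integration split at the point $s = x$:

```python
# Symbolic computation of q_{n,i}(x) by applying g_1 repeatedly to l_i.
import sympy as sp

x, s, t = sp.symbols('x s t')

def lagrange_basis(nodes, i, var):
    li = sp.Integer(1)
    for j, xj in enumerate(nodes):
        if j != i:
            li *= (var - xj) / (nodes[i] - xj)
    return sp.expand(li)

def apply_g1(p, var):
    """Return int_0^1 g_1(x, var) p(var) d(var) for a polynomial p."""
    left  = sp.integrate(var * (x - 1) * p, (var, 0, x))   # branch var <= x
    right = sp.integrate(x * (var - 1) * p, (var, x, 1))   # branch var >= x
    return sp.expand(left + right)

def q(n, nodes, i):
    """q_{n,i}(x), obtained by applying g_1 to l_i a total of n times."""
    expr = apply_g1(lagrange_basis(nodes, i, t), t)        # q_{1,i}(x)
    for _ in range(n - 1):
        expr = apply_g1(expr.subs(x, s), s)                # q_{k,i} -> q_{k+1,i}
    return expr

if __name__ == "__main__":
    nodes = [sp.Rational(1, 4), sp.Rational(1, 2), sp.Rational(3, 4)]   # sample nodes
    print(q(2, nodes, 0))          # coefficient q_{2,1}(x) for the fourth-order case
```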

5. Numerical Examples

Now we present some numerical results obtained by applying the proposed methods to some test problems. For the solution of the nonlinear system (4.4) the so-called modified Newton method [18] is applied (the same Jacobian matrix is used for more than one iteration). The initial value for the iterations is $y^{(0)}$. The iteration is stopped when the correction $\delta y_k$ is sufficiently small, where $\delta y_k$ is the solution of the system
()
with $(J_F(y))_{ij} = \partial F_i / \partial y_j\,(y)$ the Jacobian matrix associated with $F$. The coefficients of the system are calculated by the algorithm (4.18).
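For illustration only (a generic sketch with assumed details such as a finite-difference Jacobian, not the authors' code), a modified Newton iteration of the kind just described, reusing the same Jacobian at every step and stopping on a small correction, can be organized as follows:

```python
# Generic modified Newton iteration for a nonlinear system F(Y) = 0:
# the Jacobian is approximated once (finite differences) and reused at every step.
import numpy as np

def modified_newton(F, Y0, tol=1e-12, max_iter=100, fd_eps=1e-7):
    Y = np.asarray(Y0, dtype=float)
    n = Y.size
    F0 = F(Y)
    J = np.empty((n, n))
    for j in range(n):                       # finite-difference Jacobian at Y0
        Yp = Y.copy()
        Yp[j] += fd_eps
        J[:, j] = (F(Yp) - F0) / fd_eps
    for _ in range(max_iter):
        delta = np.linalg.solve(J, -F(Y))    # same Jacobian matrix at every iteration
        Y = Y + delta
        if np.max(np.abs(delta)) < tol:      # stop when the correction is small
            break
    return Y

if __name__ == "__main__":
    # toy system: Y0 = cos(Y1), Y1 = 0.5*sin(Y0)
    F = lambda Y: np.array([Y[0] - np.cos(Y[1]), Y[1] - 0.5 * np.sin(Y[0])])
    print(modified_newton(F, np.zeros(2)))
```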

As the true solutions are known, we consider the error function $e(x) = |y(x) - y_{n,m}(x)|$. In Examples 5.1 and 5.2 fourth-order problems are considered, and for each problem the solution is approximated by polynomials of degree 6 and 9, respectively. Examples 5.3, 5.4, and 5.5 concern fifth-order problems, and the approximating polynomials have degree 7 and 10, respectively. In Example 5.6 a sixth-order BVP is considered, and the solution is approximated by polynomials of degree 8 and 11. The last two examples compare the proposed methods with other existing procedures. In all the considered examples equidistant points are used; very similar results are obtained using as nodes the zeros of Chebyshev polynomials of the first and second kind.

Example 5.1. Consider

()
with solution y(t) = log (1 + t).

Table 1 shows the error in some points of the interval (0,1) for m = 3 (polynomial of degree 6) and m = 6 (polynomial of degree 9).

Table 1. Problem (5.2)—Example 5.1.
t Error (m = 3) Error (m = 6)
0.1 5.34 · 10−4 3.84 · 10−6
0.2 9.43 · 10−4 6.51 · 10−6
0.3 1.15 · 10−3 8.11 · 10−6
0.4 1.16 · 10−3 8.96 · 10−6
0.5 1.01 · 10−3 9.07 · 10−6
0.6 7.76 · 10−4 8.41 · 10−6
0.7 5.21 · 10−4 7.12 · 10−6
0.8 3.00 · 10−4 5.33 · 10−6
0.9 1.31 · 10−4 2.95 · 10−6

Example 5.2. Consider

()
with solution y(t) = (1 − t2)et. Errors are shown in Table 2.

Table 2. Problem (5.3)—Example 5.2.
t Error (m = 3) Error (m = 6)
0.1 1.22 · 10−4 5.39 · 10−8
0.2 1.62 · 10−4 9.45 · 10−8
0.3 7.32 · 10−5 1.22 · 10−7
0.4 1.33 · 10−4 1.40 · 10−7
0.5 3.96 · 10−4 1.48 · 10−7
0.6 6.24 · 10−4 1.42 · 10−7
0.7 7.26 · 10−4 1.25 · 10−7
0.8 6.45 · 10−4 9.81 · 10−8
0.9 3.81 · 10−4 5.67 · 10−8

Example 5.3. Consider

()
with solution y(t) = t(1 − t)et.

Table 3 shows the error for m = 3 (polynomial of degree 7) and m = 6 (polynomial of degree 10).

Table 3. Problem (5.4)—Example 5.3.
t Error (m = 3) Error (m = 6)
0.1 7.69 · 10−6 3.07 · 10−9
0.2 2.57 · 10−5 1.14 · 10−8
0.3 4.13 · 10−5 2.35 · 10−8
0.4 3.97 · 10−5 3.81 · 10−8
0.5 1.03 · 10−5 5.42 · 10−8
0.6 4.84 · 10−5 7.04 · 10−8
0.7 1.27 · 10−4 8.52 · 10−8
0.8 2.07 · 10−4 9.77 · 10−8
0.9 2.67 · 10−4 1.06 · 10−7
1.0 2.90 · 10−4 1.09 · 10−7

Example 5.4. Consider

()
with solution y(t) = et.

The results are displayed in Table 4.

Table 4. Problem (5.5)—Example 5.4.
t Error (m = 3) Error (m = 6)
0.1 1.66 · 10−7 2.80 · 10−11
0.2 5.69 · 10−7 1.04 · 10−10
0.3 9.73 · 10−7 2.14 · 10−10
0.4 1.10 · 10−6 3.48 · 10−10
0.5 7.53 · 10−7 4.94 · 10−10
0.6 1.17 · 10−7 6.41 · 10−10
0.7 1.36 · 10−6 7.76 · 10−10
0.8 2.66 · 10−6 8.89 · 10−10
0.9 3.65 · 10−6 9.69 · 10−10
1.0 4.01 · 10−6 9.98 · 10−10

Example 5.5. Consider

()
with solution y(t) = (1 − t)sin (t) (Table 5).

Table 5. Problem (5.6)—Example 5.5.
t Error (m = 3) Error (m = 6)
0.1 1.37 · 10−6 7.74 · 10−11
0.2 4.99 · 10−6 2.89 · 10−10
0.3 9.63 · 10−6 5.96 · 10−10
0.4 1.38 · 10−5 9.71 · 10−10
0.5 1.63 · 10−5 1.38 · 10−9
0.6 1.66 · 10−5 1.80 · 10−9
0.7 1.49 · 10−5 2.19 · 10−9
0.8 1.23 · 10−5 2.51 · 10−9
0.9 1.01 · 10−5 2.74 · 10−9
1.0 9.16 · 10−5 2.83 · 10−9

Example 5.6. Consider

()
with solution y(t) = et.

Table 6 shows the errors for m = 3 (polynomial of degree 8) and m = 6 (polynomial of degree 11).

Table 6. Problem (5.7)—Example 5.6.
t Error (m = 3) Error (m = 6)
0.1 6.76 · 10−8 4.82 · 10−11
0.2 1.65 · 10−7 9.12 · 10−11
0.3 3.06 · 10−7 1.25 · 10−10
0.4 4.75 · 10−7 1.46 · 10−10
0.5 6.34 · 10−7 1.54 · 10−10
0.6 7.32 · 10−7 1.47 · 10−10
0.7 7.23 · 10−7 1.25 · 10−10
0.8 5.83 · 10−7 9.19 · 10−11
0.9 3.27 · 10−7 4.87 · 10−11

Example 5.7 (see [9]). Consider

()
where σi(x) are continuous functions,
()
and p, r, s are some positive integers ≥1. Then y(x) = sin (πx) is a solution of (5.8).

Let p = 1,   r = 2,   s = 3 and σi(x) = 0.1cos (πx), i = 1,2, 3.

In Table 7 we compare the results obtained with the presented method (2.9) for several values of m, with the results obtained with the method in [9] for different values of the step h. To demonstrate the accuracy of our method we show the maximum absolute error in the two cases. Further, method (2.9) provides an explicit expression of the approximate solution.

Table 7. Problem (5.8)—Example 5.7.
m Method (2.9) h Method in [9]
5 1.97 · 10−4 1/10 1.22 · 10−4
7 1.65 · 10−6 1/20 7.62 · 10−6
8 8.16 · 10−7 1/40 4.76 · 10−7
9 1.17 · 10−8 1/80 2.97 · 10−8
10 6.45 · 10−9 1/160 1.86 · 10−9
12 3.67 · 10−11 1/320 1.16 · 10−10

Example 5.8. Consider the classical Bratu problem [19]

()
with solution $y(t) = -2\log\bigl(\cosh((t - 1/2)\,\theta/2)/\cosh(\theta/4)\bigr)$, where $\theta$ is the solution of $\theta = \sqrt{2\lambda}\,\cosh(\theta/4)$.

Tables 8 and 9 show the error, for λ = 1 and λ = 2 respectively, at some points of the interval (0,1). Observe that the degree of the approximating polynomials is m − 1. The last column shows the error when the method proposed in [19] is applied.

Table 8. Problem (5.10)—Example 5.8, λ = 1.
t Error (m = 6) Error (m = 8) Method [19]
0.1 5.09 · 10−7 8.34 · 10−9 9.26 · 10−8
0.2 4.39 · 10−7 6.19 · 10−9 1.75 · 10−7
0.3 3.21 · 10−7 6.87 · 10−9 2.39 · 10−7
0.4 3.89 · 10−7 7.26 · 10−9 2.81 · 10−7
0.5 4.06 · 10−7 6.76 · 10−9 2.95 · 10−7
0.6 3.89 · 10−7 7.26 · 10−9 2.81 · 10−7
0.7 3.21 · 10−7 6.87 · 10−9 2.39 · 10−7
0.8 4.39 · 10−7 6.19 · 10−9 1.75 · 10−7
0.9 5.09 · 10−7 8.34 · 10−9 9.26 · 10−8
Table 9. Problem (5.10)—Example 5.8, λ = 2.
t Error (m = 7) Error (m = 8) Method [19]
0.1 1.14 · 10−6 5.77 · 10−7 9.26 · 10−6
0.2 1.06 · 10−6 4.64 · 10−7 1.75 · 10−6
0.3 1.07 · 10−6 5.34 · 10−7 2.39 · 10−6
0.4 1.17 · 10−6 5.76 · 10−7 2.81 · 10−6
0.5 1.19 · 10−6 5.46 · 10−7 2.95 · 10−6
0.6 1.17 · 10−6 5.76 · 10−7 2.81 · 10−6
0.7 1.07 · 10−6 5.34 · 10−7 2.39 · 10−6
0.8 1.06 · 10−6 4.64 · 10−7 1.75 · 10−6
0.9 1.14 · 10−6 5.77 · 10−7 9.26 · 10−6

6. Conclusions

This paper presents a general procedure to determine collocation methods for boundary value problems of 2n-th order with Lidstone boundary conditions and of (2n + 1)-th order with complementary Lidstone boundary conditions. In both cases, starting, respectively, from the Lidstone and the complementary Lidstone interpolation formula and using Lagrange interpolation, a polynomial approximating the solution is given explicitly. It is a collocation polynomial for the considered boundary value problem. Numerical examples support theoretical results and show that the proposed methods compare favorably with other existing methods.

A future direction of study is to investigate the convergence and the numerical stability of the procedure. Furthermore, we will apply the technique used here to boundary value problems with different boundary conditions. Of course, for each type of conditions, a suitable class of approximating polynomials must be chosen.
