Volume 2011, Issue 1, Article ID 673085
Research Article
Open Access

Collocation Method via Jacobi Polynomials for Solving Nonlinear Ordinary Differential Equations

Ahmad Imani
Department of Mathematics, Tarbiat Modares University, P.O. Box 14115-175, Tehran, Iran

Azim Aminataei (Corresponding Author)
Department of Mathematics, K. N. Toosi University of Technology, P.O. Box 16315-1618, Tehran 1541849611, Iran

Ali Imani
Department of Mathematics, K. N. Toosi University of Technology, P.O. Box 16315-1618, Tehran 1541849611, Iran

First published: 26 May 2011
Citations: 14
Academic Editor: Andrei Volodin

Abstract

We extend a collocation method for solving nonlinear ordinary differential equations (ODEs) via Jacobi polynomials. To date, researchers have usually used the Chebyshev or Legendre collocation method for solving problems in chemistry, physics, and so forth; see the works of Doha and Bhrawy (2006), Guo (2000), and Guo et al. (2002). Choosing the optimal polynomial for a given ODE problem depends on many factors, for example, the smoothness and other properties of the solution. In this paper, we show numerically that for some problems choosing other members of the Jacobi family gives better results than the Chebyshev or Legendre polynomials.

1. Introduction

The Jacobi polynomials P_n^{(\alpha,\beta)}(x) with respect to the parameters α > −1, β > −1 (see, e.g., [1, 2]) are sequences of polynomials satisfying the following recurrence relation:
(1.1) P_{k+1}^{(\alpha,\beta)}(x) = \left(a_k^{(\alpha,\beta)}x - b_k^{(\alpha,\beta)}\right)P_k^{(\alpha,\beta)}(x) - c_k^{(\alpha,\beta)}P_{k-1}^{(\alpha,\beta)}(x), \quad k \geq 1,
where
(1.2) a_k^{(\alpha,\beta)} = \frac{(2k+\alpha+\beta+1)(2k+\alpha+\beta+2)}{2(k+1)(k+\alpha+\beta+1)}, \quad b_k^{(\alpha,\beta)} = \frac{(\beta^2-\alpha^2)(2k+\alpha+\beta+1)}{2(k+1)(k+\alpha+\beta+1)(2k+\alpha+\beta)}, \quad c_k^{(\alpha,\beta)} = \frac{(k+\alpha)(k+\beta)(2k+\alpha+\beta+2)}{(k+1)(k+\alpha+\beta+1)(2k+\alpha+\beta)}.
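As an illustration (ours, not part of the original paper), the following Python sketch evaluates P_n^{(α,β)}(x) by the recurrence (1.1) with the coefficients (1.2) and checks the result against SciPy's eval_jacobi as well as the parity relation (1.8) given below.

```python
import numpy as np
from scipy.special import eval_jacobi  # SciPy's reference implementation

def jacobi(n, a, b, x):
    """Evaluate P_n^(a,b)(x) via the recurrence (1.1) with coefficients (1.2)."""
    x = np.asarray(x, dtype=float)
    p_prev = np.ones_like(x)                         # P_0 = 1
    if n == 0:
        return p_prev
    p = 0.5 * ((a + b + 2.0) * x + (a - b))          # P_1, cf. (1.9)
    for k in range(1, n):
        ak = (2*k + a + b + 1) * (2*k + a + b + 2) / (2.0 * (k + 1) * (k + a + b + 1))
        bk = (b**2 - a**2) * (2*k + a + b + 1) / (2.0 * (k + 1) * (k + a + b + 1) * (2*k + a + b))
        ck = (k + a) * (k + b) * (2*k + a + b + 2) / ((k + 1.0) * (k + a + b + 1) * (2*k + a + b))
        p, p_prev = (ak * x - bk) * p - ck * p_prev, p
    return p

# Sanity checks: agreement with SciPy, and the parity relation (1.8)
xs = np.linspace(-1.0, 1.0, 7)
assert np.allclose(jacobi(5, -0.4, 0.6, xs), eval_jacobi(5, -0.4, 0.6, xs))
assert np.allclose(jacobi(5, -0.4, 0.6, -xs), (-1)**5 * jacobi(5, 0.6, -0.4, xs))
```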
These polynomials are eigenfunctions of the following singular Sturm-Liouville equation:
(1.3) \frac{d}{dx}\left[(1-x)^{\alpha+1}(1+x)^{\beta+1}\,\frac{d}{dx}P_n^{(\alpha,\beta)}(x)\right] + n(n+\alpha+\beta+1)(1-x)^{\alpha}(1+x)^{\beta}P_n^{(\alpha,\beta)}(x) = 0.
A consequence of this is that spectral accuracy can be achieved for expansions in Jacobi polynomials, so that for a function u with m weighted Sobolev derivatives the truncated expansion P_N u satisfies
(1.4) \| u - P_N u \|_{L^2_w} \leq C\,N^{-m}\,\| u \|_{H^m_w};
that is, the error decays faster than any algebraic power of N when u is smooth.
The Jacobi polynomials can be obtained from Rodrigues' formula as
(1.5) P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n\, n!}\,(1-x)^{-\alpha}(1+x)^{-\beta}\,\frac{d^n}{dx^n}\left[(1-x)^{n+\alpha}(1+x)^{n+\beta}\right].
Furthermore, we have the derivative relation
(1.6) \frac{d}{dx}P_n^{(\alpha,\beta)}(x) = \frac{1}{2}(n+\alpha+\beta+1)\,P_{n-1}^{(\alpha+1,\beta+1)}(x).
The Jacobi polynomials are normalized such that
(1.7) P_n^{(\alpha,\beta)}(1) = \binom{n+\alpha}{n} = \frac{\Gamma(n+\alpha+1)}{n!\,\Gamma(\alpha+1)}.
An important consequence of the symmetry of the weight function w(x) (for α = β) and the orthogonality of the Jacobi polynomials is the symmetric relation
(1.8) P_n^{(\alpha,\beta)}(-x) = (-1)^n P_n^{(\beta,\alpha)}(x);
that is, for α = β the Jacobi polynomials are even or odd according to the parity of n. In this form the polynomials may be generated using the starting form
(1.9) P_0^{(\alpha,\beta)}(x) = 1, \qquad P_1^{(\alpha,\beta)}(x) = \frac{1}{2}\left[(\alpha-\beta) + (\alpha+\beta+2)x\right],
such that
(1.10)
which is obtained from Rodrigues' formula as follows:
(1.11) P_n^{(\alpha,\beta)}(x) = \frac{1}{2^n}\sum_{k=0}^{n}\binom{n+\alpha}{k}\binom{n+\beta}{n-k}(x-1)^{n-k}(x+1)^{k}.
Following the two seminal papers of Doha [3, 4], let f(x) be an infinitely differentiable function defined on [−1, 1]; then we can write
(1.12) f(x) = \sum_{n=0}^{\infty} a_n P_n^{(\alpha,\beta)}(x),
and, for the qth derivative of f(x),
(1.13) f^{(q)}(x) = \sum_{n=0}^{\infty} a_n^{(q)} P_n^{(\alpha,\beta)}(x), \qquad a_n^{(0)} = a_n.
Then,
(1.14)
where
(1.15)
For the proof of the above, see [3]. The formula for the expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of those of the function is available for expansions in ultraspherical and Jacobi polynomials in Doha [5]. Another interesting formula is
(1.16)
with
(1.17)
where
(1.18)
For the proof see [4]. Chebyshev, Legendre, and ultraspherical polynomials are particular cases of the Jacobi polynomials. These polynomials have been used both in the solution of boundary value problems [6] and in computational fluid dynamics [7, 8]. For the ultraspherical coefficients of the moments of a general-order derivative of an infinitely differentiable function, see [5]. The collocation method is a spectral method that uses delta functions as test functions. Test functions play an important role because they are used, via the inner product, to minimize the residual. This point deserves attention, since the tau and collocation methods do not yield the best approximate solution of an ODE: imposing the boundary conditions forces us to discard several orthogonality conditions, which decreases the accuracy of the solution. This does not mean, however, that the collocation method is less accurate than the Galerkin method. Many studies on the collocation method have recently appeared in the literature on the numerical solution of boundary value problems [9–12]. The collocation solution is a piecewise continuous polynomial function. Its error has been analyzed under different conditions by different authors. Frank and Ueberhuber [13] showed the equivalence between solutions of the collocation method and fixed points of the Iterated Defect Correction (IDeC) method; they proved that the IDeC method can be regarded as an efficient scheme for solving collocation equations. Çelik [14, 15] investigated the corrected collocation method for the approximate computation of eigenvalues of Sturm-Liouville and periodic Sturm-Liouville problems by a truncated Chebyshev series (TCS). On the other hand, some results on Jacobi approximation have been used for analyzing the p-version of the finite element method, the boundary element method, spectral methods for axisymmetric domains, and various rational spectral methods [16–21].
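To make the expansion (1.12) concrete, the following sketch (our illustration, assuming the standard formula for the weighted norm of P_n^{(α,β)}) computes the expansion coefficients a_n of a smooth function by Gauss-Jacobi quadrature; their rapid decay reflects the spectral accuracy noted in (1.4).

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi, gammaln

def jacobi_coeffs(f, n_max, alpha, beta, quad_order=64):
    """Expansion coefficients a_n of f in P_n^(alpha,beta), cf. (1.12),
    computed with Gauss-Jacobi quadrature."""
    # nodes/weights for the weight (1-x)^alpha (1+x)^beta on [-1, 1]
    x, w = roots_jacobi(quad_order, alpha, beta)
    coeffs = []
    for n in range(n_max + 1):
        # squared weighted norm h_n of P_n^(alpha,beta)
        # (for n = 0 this form assumes alpha + beta != -1)
        log_h = ((alpha + beta + 1) * np.log(2.0)
                 + gammaln(n + alpha + 1) + gammaln(n + beta + 1)
                 - np.log(2 * n + alpha + beta + 1)
                 - gammaln(n + 1) - gammaln(n + alpha + beta + 1))
        coeffs.append(np.sum(w * f(x) * eval_jacobi(n, alpha, beta, x)) / np.exp(log_h))
    return np.array(coeffs)

# For an analytic function the coefficients decay faster than any power of n:
a = jacobi_coeffs(np.sin, 15, -0.4, 0.6)
print(a)   # |a_n| falls to roundoff level within roughly 15 terms
```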

The rest of this paper is organized as follows. In Section 2, the method of solution is developed. In Section 3, we report our numerical findings and demonstrate the accuracy of the proposed scheme by considering a numerical example. Finally, a brief conclusion is presented in Section 4.

2. Jacobi Collocation Method

We consider the class of ODEs of the following form:
(2.1)
where s, l, m ≥ 0 and Pi(x), Qi(x), Ri(x), and F(x) are defined on the interval a ≤ x ≤ b. The most general boundary conditions are
(2.2) \sum_{i} \gamma_i\, y^{(i)}(a) = \delta, \qquad \sum_{i} \alpha_i\, y^{(i)}(b) = \mu,
where the real coefficients γi, αi, δ, and μ are appropriate constants, and the solution is expressed in the form
(2.3) y(x) = \sum_{r=0}^{n} a_r P_r^{(\alpha,\beta)}(x)
under certain conditions on the coefficients a_r (r = 0, 1, 2, …).
Any finite range a ≤ w ≤ b can be transformed to the basic range −1 ≤ x ≤ 1 with the change of variable
(2.4) x = \frac{2w - (a + b)}{b - a}.
Hence, there is no loss of generality in taking a = −1 and b = 1. To obtain such a solution, the Jacobi collocation method can be used. The collocation points can be taken as the roots of the Jacobi polynomial P_{n+1}^{(\alpha,\beta)}(x) [22]. No explicit formulas for these roots are available in general, so we determine them numerically, for example, by Newton's method (a sketch is given at the end of this section). Substituting the Jacobi collocation points into (2.1), the following expression is obtained:
(2.5)
The above system consists of n + 1 nonlinear equations. We impose the boundary conditions through the following equations:
(2.6)
Equations (2.5) and (2.6) together constitute n + 3 nonlinear equations, so we eliminate two of the collocation equations (2.5). Thus we have n + 1 equations, and we solve this system with the Newton iteration method.
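The root-finding step mentioned above can be sketched as follows (our illustration; the initial guesses and tolerance are our choices). Newton's method is applied to P_n^{(α,β)} using the derivative identity (1.6); production codes often use the Golub-Welsch eigenvalue method instead.

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

def jacobi_roots_newton(n, alpha, beta, tol=1e-14, max_iter=100):
    """Roots of P_n^(alpha,beta) by Newton's method, using (1.6):
    d/dx P_n^(a,b)(x) = (n+a+b+1)/2 * P_{n-1}^(a+1,b+1)(x)."""
    # Chebyshev-Gauss points serve as initial guesses for moderate n
    x = -np.cos(np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n))
    for _ in range(max_iter):
        p = eval_jacobi(n, alpha, beta, x)
        dp = 0.5 * (n + alpha + beta + 1) * eval_jacobi(n - 1, alpha + 1, beta + 1, x)
        dx = p / dp
        x -= dx
        if np.max(np.abs(dx)) < tol:
            break
    return np.sort(x)

# For alpha = beta = 0 these are the Gauss-Legendre nodes of Table 1:
print(jacobi_roots_newton(4, 0.0, 0.0))        # approx [-0.8611 -0.3400  0.3400  0.8611]
print(np.sort(roots_jacobi(4, 0.0, 0.0)[0]))   # SciPy agrees
```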

3. Illustrative Numerical Example

Consider the following nonlinear boundary value problem:
(3.1)
subject to boundary conditions
(3.2)

The exact solution is y = sin(x). We solve the problem with the above-mentioned method for different values of α and β and report the results. The numerical solution computed with 4 terms is shown in Figure 1.

Figure 1. Numerical solution of our example for different values of α and β. (a1) α = 0, β = 0; (a2) its error. (b1) α = −0.4, β = −0.4; (b2) its error. (c1) α = −0.5, β = −0.5; (c2) its error. (d1) α = −0.4, β = 0.6; (d2) its error. (e1) α = 0.6, β = 0.6; (e2) its error. (f1) α = 2, β = 2; (f2) its error.

First we obtain the roots of P_4^{(\alpha,\beta)}(x). To this end we must fix the parameters α and β; Table 1 shows the resulting roots.

Table 1. The roots of P_4^{(\alpha,\beta)}(x) for different values of α and β.

α = 0,     α = −0.4,   α = −0.5,   α = −0.4,   α = 0.6,   α = 2,
β = 0      β = −0.4    β = −0.5    β = 0.6     β = 0.6    β = 2
−0.8611    −0.9103     −0.6947      0.9284     −0.7996    −0.6947
 0.8611     0.9103      0.6947     −0.7562      0.7996     0.6947
−0.3400    −0.3729     −0.2505      0.4879     −0.3038    −0.2506
 0.3400     0.3729      0.2505     −0.4879      0.3038     0.2506
To apply the method we assume that the solution is of the form
(3.3) y(x) = \sum_{r=0}^{n} a_r P_r^{(\alpha,\beta)}(x).
By substituting y into (3.1) and collocating at the Gauss-Jacobi nodes of Table 1, we obtain a set of nonlinear equations. The boundary conditions are imposed by eliminating two of these equations and adding the following two:
(3.4)
The above system can be solved with the Newton iteration method. We report our results for each pair of parameters in Table 2. The behavior of the error as the number of terms in the polynomial solution increases is shown in Table 3: as the number of terms grows, the absolute error decreases.
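Since the statement of (3.1) is not reproduced above, the following sketch solves an assumed stand-in problem with the same exact solution y = sin(x), namely y'' + y y' = −sin x + sin x cos x on [−1, 1] with y(±1) = ±sin 1 (our choice of equation, not necessarily the paper's). It assembles the collocation system of Section 2 and solves it with SciPy's fsolve in place of a hand-written Newton iteration.

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi
from scipy.optimize import fsolve

alpha, beta, N = -0.4, 0.6, 8            # N+1 unknown coefficients a_0..a_N

def P(r, q, x):
    """q-th derivative of P_r^(alpha,beta) at x, via (1.6) applied q times."""
    if q == 0:
        return eval_jacobi(r, alpha, beta, x)
    if r < q:
        return np.zeros_like(np.asarray(x, dtype=float))
    scale = np.prod([0.5 * (r + alpha + beta + 1 + j) for j in range(q)])
    return scale * eval_jacobi(r - q, alpha + q, beta + q, x)

xc = roots_jacobi(N - 1, alpha, beta)[0]     # N-1 interior collocation points

def residual(a):
    y   = sum(a[r] * P(r, 0, xc) for r in range(N + 1))
    dy  = sum(a[r] * P(r, 1, xc) for r in range(N + 1))
    d2y = sum(a[r] * P(r, 2, xc) for r in range(N + 1))
    f = -np.sin(xc) + np.sin(xc) * np.cos(xc)
    interior = d2y + y * dy - f                       # collocated ODE residual
    bc1 = sum(a[r] * P(r, 0, -1.0) for r in range(N + 1)) + np.sin(1.0)
    bc2 = sum(a[r] * P(r, 0,  1.0) for r in range(N + 1)) - np.sin(1.0)
    return np.concatenate([interior, [bc1, bc2]])

a = fsolve(residual, np.zeros(N + 1))
xs = np.linspace(-1, 1, 201)
y_num = sum(a[r] * P(r, 0, xs) for r in range(N + 1))
print(np.max(np.abs(y_num - np.sin(xs))))   # max error, roughly 1e-8 for N = 8
```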
Table 2. The solution of our example for different values of α and β: (α, β) = (0, 0), (−0.4, −0.4), (−0.5, −0.5), (−0.4, 0.6), (0.6, 0.6), and (2, 2).
Table 3. Representation of the error as the number of terms N in the polynomial solution increases.

(α, β)           N = 10         N = 12         N = 14         N = 16         N = 18         N = 20
(0, 0)           1.02 × 10^−9   2.13 × 10^−10  2.65 × 10^−12  1.31 × 10^−15  1.21 × 10^−18  2.09 × 10^−22
(−0.4, −0.4)     2.26 × 10^−10  3.23 × 10^−11  2.12 × 10^−13  4.23 × 10^−16  7.45 × 10^−19  9.76 × 10^−22
(−0.5, −0.5)     2.33 × 10^−10  1.18 × 10^−10  1.05 × 10^−13  1.62 × 10^−15  3.28 × 10^−20  1.12 × 10^−23
(−0.4, 0.6)      7.16 × 10^−10  1.01 × 10^−10  3.55 × 10^−12  5.60 × 10^−14  4.08 × 10^−19  5.35 × 10^−22
(0.6, 0.6)       6.37 × 10^−8   2.24 × 10^−9   3.70 × 10^−11  2.36 × 10^−13  1.85 × 10^−17  3.19 × 10^−20
(2, 2)           4.25 × 10^−6   1.66 × 10^−7   2.39 × 10^−9   4.17 × 10^−11  3.06 × 10^−13  7.11 × 10^−16

4. Conclusion

In this paper, we have shown that this method is very accurate even for a small number of terms (n = 4). We see that the Chebyshev polynomials (cases c1 and c2) and the Legendre polynomials (cases a1 and a2) do not always yield the best approximation (see the case α = −0.4, β = 0.6). We also observe that the error is symmetric when α = β (cases a2, b2, c2, e2, and f2). Finally, as the number of terms in the computation increases, the error decreases, and the computed results improve accordingly.
