
Error Analysis of Galerkin's Method for Semilinear Equations

Tadashi Kawanago

Department of Mathematics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551, Japan

First published: 24 October 2012
Academic Editor: Hui-Shen Shen

Abstract

We establish a general existence result for Galerkin's approximate solutions of abstract semilinear equations and conduct an error analysis. Our results may be regarded as an extension of an earlier work (Schultz 1969). The derivation of our results is, however, different from the discussion in his paper and is essentially based on the convergence theorem of Newton's method and some techniques for deriving it. Some of our results may be applicable to investigating the quality of numerical verification methods for solutions of ordinary and partial differential equations.

1. Introduction

Let X be a real Hilbert space and let Xh ⊂ X be a closed subspace. Here, h is a positive parameter (which will tend to zero). We denote by Ph the orthogonal projection onto Xh. We assume that
(H1) ∥(Ph − I)u∥ → 0 as h → 0 for every u ∈ X,
where I : X → X is the identity operator. We are interested in studying error analysis of Galerkin’s method for the following equation:
(1.1)
Here, φ : U → X is a nonlinear map and U is a subset of X. We define the Galerkin approximation of (1.1) by
(1.2)
The equation (1.2) is the Galerkin approximate equation of (1.1). An earlier work [1] by Schultz contains the following result.

Theorem 1.1 (see [1], Theorems 3.1 and 3.2). One assumes (H1). Let R ∈ (0, ∞) be a constant and U = {u ∈ X; ∥u∥ ≤ R}. One assumes that φ : U → X is a completely continuous map such that φ(U) ⊂ U. Then the following holds.

  • (i)

    The Galerkin approximate equation (1.2) has a solution uh in U ∩ Xh for any h, and there exist a monotone decreasing sequence {hk} with limk→∞ hk = 0 and u ∈ U such that uhk → u in X as k → ∞ and u is a solution of (1.1). Moreover, if u is the unique solution of (1.1) in U, then one has limh→0 uh = u in X.

  • (ii)

    Let u* ∈ U be a solution of (1.1). If φ has a Fréchet derivative φ′ in a neighborhood 𝒩 of u* and 0 is not in the spectrum of f′(u*), then u* is the unique solution of (1.1) in 𝒩, the Galerkin approximate equation has a solution uh ∈ Xh for any h, which is unique for sufficiently small h, and

    (1.3)
    which means that ∥u* − uh∥ and ∥(I − Ph)u*∥ are equivalent infinitesimals as h → 0.

In what follows, we always assume (H1) and the following condition (H2), in which U ⊂ X denotes an open set. Under the conditions (H1) and (H2), we obtain results similar to Theorem 1.1 (see Proposition 2.1 and Corollary 2.3). We also establish other new results on error analysis (see Theorems 2.4 and 2.5). Our results may be regarded as an extension of Theorem 1.1. The derivation of our results is, however, different from the proof of Theorem 1.1, which is based on the Brouwer fixed point theorem and the equality (4.10). Our proofs are essentially based on the convergence theorem of Newton’s method (Theorem 3.2) and some techniques for deriving it. We remark that a version of the same theorem is applied in [2] to a periodic system of ordinary differential equations for a purpose similar to ours.

Various ordinary and partial differential equations appearing in mathematical physics can be written in the form (1.1) with (H2) under an appropriate setting of the functional spaces. See Section 5 for some concrete examples.

We define fh : U → X by
(1.4)
The map fh is a natural extension of the map in (1.2) and is very useful in our analysis below. Obviously, u is a solution of the Galerkin approximate equation (1.2) if and only if u is a solution of fh(u) = 0. We can treat the equation fh(u) = 0 more easily than (1.2) since fh is defined globally.
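
For orientation, one natural reading of (1.1), (1.2), and (1.4), consistent with Section 5 (where (5.5) is rewritten as f(u) := u − φ(u) = 0) and with the identity fh(u*) = (I − Ph)u* used in the proof of Proposition 2.1, is the following; we record it merely as a reading aid, not as a quotation of the displays: (1.1) is the fixed-point-type equation f(u) := u − φ(u) = 0 for u ∈ U; its Galerkin approximation (1.2) asks for u ∈ U ∩ Xh with u − Phφ(u) = 0; and (1.4) then reads fh(u) := u − Phφ(u) for u ∈ U. With this reading, f(u*) = 0 gives u* = φ(u*) and hence fh(u*) = u* − Phφ(u*) = (I − Ph)u*, and every zero of fh automatically lies in Xh, so the two formulations agree on U.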

One of our motivations for this study is to investigate the quality of a numerical verification method for solutions of differential equations. Some of our results in this paper may be applicable for such a purpose. See Remark 2.7 for further information.

The paper is organized as follows. In Section 2 we describe our main results. We prepare some preliminary abstract results in Section 3 and apply them to prove our main results in Section 4. In Section 5 we present some concrete examples on semilinear elliptic partial differential equations.

Notations. Let 𝒳 and 𝒴 be Banach spaces.

  • (1)

    We denote by ∥·∥𝒳 the norm of 𝒳. If 𝒳 is a Hilbert space, then ∥·∥𝒳 stands for the norm induced by the inner product of 𝒳. For u ∈ 𝒳 and r ∈ (0, ∞), we write B𝒳(u; r) := {v ∈ 𝒳; ∥v − u∥ < r}. The subscript will often be omitted if no confusion arises.

  • (2)

    For an open set V𝒳, C1(V, 𝒴) denotes the space of continuously differentiable functions from V to 𝒴.

  • (3)

    We denote by ℒ(𝒳, 𝒴) the space of bounded linear operators from 𝒳 to 𝒴, and ℒ(𝒳) stands for ℒ(𝒳, 𝒳). For T ∈ ℒ(𝒳, 𝒴), ∥T∥𝒳→𝒴 denotes the operator norm of T. The subscript will be omitted if no possible confusion arises.

  • (4)

    Let ϕ(h) and ψ(h) be nonnegative functions. We write ϕ(h) ~ ψ(h) if ϕ(h) and ψ(h) are infinitesimals of the same order as h → 0, that is, ϕ(h) = O(1)ψ(h) and ψ(h) = O(1)ϕ(h) as h → 0. We write ϕ(h)≃ψ(h) if ϕ(h) and ψ(h) are equivalent infinitesimals as h → 0, that is, ϕ(h) = {1 + o(1)}ψ(h) as h → 0.

  • (5)

    Let Ω be a bounded domain of Rn. We denote Lebesgue spaces by Lp(Ω) (1 ≤ p ≤ ∞) with the norms ∥·∥Lp(Ω). We denote by H₀¹(Ω) the completion of C₀∞(Ω) (the space of C∞ functions with compact support in Ω) in the Sobolev norm. We denote by H−1(Ω) the Sobolev space of order −1 on Ω, regarded as a subspace of 𝒟′(Ω). Here, 𝒟′(Ω) stands for the set of distributions on Ω.

2. Main Results

In this section we describe our main results. We assume (H1) and (H2). Let u* ∈ U be an isolated solution of (1.1), that is, u* is a solution of (1.1) such that f′(u*) : X → X is bijective. We set
(2.1)
for simplicity. The operator Ah is an almost diagonal operator introduced in [3]. First we have an existence theorem for Galerkin’s approximate solutions of (1.1).
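
For orientation, the way A, T, and Ah are used below suggests the reading T := φ′(u*), A := f′(u*) = I − T, and Ah := PhAPh + (I − Ph) = I − PhTPh; this is only an inference from the later formulas (the identity A−1 = I + T(I − T)−1 in Section 4, the fact that Ah commutes with Ph, and the identity Ahf(uh) = f(uh) for f(uh) ∈ (I − Ph)X), and we record it merely as a reading aid for (2.1), not as a statement taken from it.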

Proposition 2.1. There exist h* > 0 and elements uh ∈ U ∩ Xh (0 < h < h*) such that the following (i)–(iii) hold.

  • (i)

    There exists R* > 0  such that u = uh  is the only solution of fh(u) = 0  in B(u*;   R*)  for any h ∈ (0, h*).

  • (ii)

    u = uh is an isolated solution of fh(u) = 0 for any h ∈ (0, h*).

  • (iii)

    uh → u* in X as h → 0 with the estimate

    (2.2)
    where Ch → 1 as h → 0.

Remark 2.2. (i) Proposition 2.1(ii) is useful in our analysis below. Moreover, we immediately obtain from it that u = uh is an isolated solution of the Galerkin approximate equation (1.2) for any h ∈ (0, h*). This guarantees that we can always construct a Galerkin approximate solution uh by Newton’s method for small h > 0.

(ii) In various contexts in applications, Xh is finite-dimensional for any h. In such contexts the assumption (H1) implies that X is separable.

(iii) We do not assume dim Xh < ∞. We briefly explain that this has some practical benefits. The case dim Xh = ∞ appears, for example, in the following context. We are interested in the semi-discrete approximation to a periodic system described by a partial differential equation with a periodic forcing term. We may apply a Galerkin method only in space to the original system in order to construct a simpler approximate system described by ordinary differential equations. Then, for an isolated periodic solution of the original system, our Proposition 2.1 may guarantee that the approximate system has a periodic solution in a small neighborhood of it. For example, we can actually apply Proposition 2.1 to a semi-discrete approximation to a periodic system treated in [3]. See [4, Remark 3.4] for how to rewrite the system in [3] as (1.1).

In what follows in this section, uh always denotes the approximate solution described in Proposition 2.1. Since u* − uh is decomposed into the Xh-component Phu* − uh and the Xh⊥-component (I − Ph)u*, we have ∥u* − uh∥² = ∥Phu* − uh∥² + ∥(I − Ph)u*∥² and ∥(I − Ph)u*∥ ≤ ∥u* − uh∥. So, the last inequality and (2.2) immediately imply (2.3) below.

Corollary 2.3. We have

(2.3)
(2.4)
(2.5)

Actually, we easily verify that (2.3), (2.4), and (2.5) are mutually equivalent. They are very general features of the Galerkin method. The estimate (2.5) means that the Xh-component of the error, ∥Phu* − uh∥, is an infinitesimal of higher order of smallness with respect to the whole error ∥u* − uh∥ as h → 0.
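
For the reader’s convenience, here is the elementary computation behind this equivalence, reading (2.3)–(2.5), as the surrounding text indicates, as the three asymptotic relations ∥u* − uh∥ ≃ ∥(I − Ph)u*∥, ∥Phu* − uh∥ = o(∥u* − uh∥) and ∥Phu* − uh∥ = o(∥(I − Ph)u*∥) as h → 0 (in some order). By the orthogonal decomposition noted above,

∥u* − uh∥² = ∥Phu* − uh∥² + ∥(I − Ph)u*∥²,

so, dividing by ∥u* − uh∥², we see that ∥(I − Ph)u*∥/∥u* − uh∥ → 1 holds exactly when ∥Phu* − uh∥/∥u* − uh∥ → 0. Moreover, since ∥(I − Ph)u*∥ ≤ ∥u* − uh∥, we have ∥Phu* − uh∥/∥u* − uh∥ ≤ ∥Phu* − uh∥/∥(I − Ph)u*∥; conversely, when ∥Phu* − uh∥/∥u* − uh∥ → 0, the displayed identity gives ∥(I − Ph)u*∥ ≃ ∥u* − uh∥, whence ∥Phu* − uh∥/∥(I − Ph)u*∥ → 0 as well. Hence the three relations are mutually equivalent.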

The following two results are useful for applications (see Remark 2.7 below).

Theorem 2.4. We have the following:

(2.6)
(2.7)

Theorem 2.5. (i) We have

(2.8)

(ii) Let ɛh be a positive constant for h ∈ (0, h*) such that

(2.9)
for any h ∈ (0, h*). Then, there exist constants h1 ∈ (0, h*) and C1 > 0 such that
(2.10)

In view of Theorem 2.5 (i) and (ii), we can always take εh in (2.10) such that εh → 0 as h → 0. Remarks 2.6 and 5.3 below show that our estimate (2.10) is in general sharper than an estimate which can be derived directly from the discussion in [1].

Remark 2.6. (i) In the same way as in the proof of [1, Theorem 3.2] we can obtain an estimate related to (2.10). We set ηh := (2ph + qh + rh)/(1 − (ph + qh + rh)), ph := ∥A−1(I − Ph)T∥, qh := ∥A−1PhT(I − Ph)∥ and rh := ∥A−1∥ · ∥φ(uh) − φ(u*) − T(uh − u*)∥/∥uh − u*∥. It follows from Proposition 2.1 (iii) and Proposition 3.1 below that ph, qh, and rh converge to 0 as h → 0. So, ηh → 0 as h → 0. Let be a positive constant for h ∈ (0, h*) such that . Then we have

(2.11)
We can verify that
(2.12)
Indeed, we immediately obtain (2.12) from
(2.13)
We derive (2.11) and (2.13) at the end of Section 4.

(ii) When we compute these quantities for concrete examples (e.g., the examples in Section 5 below), it seems reasonable to estimate qh as qh ≤ C∥T(I − Ph)∥. Here, C represents some positive constant independent of h. Then, it is actually necessary to take such that for small h > 0. On the other hand, roughly speaking, (2.9) means that we can take εh ≈ ∥T(I − Ph)∥ for small h > 0 (see Remark 5.3 below). (We note that Proposition 3.1 below implies that ∥T(I − Ph)∥ → 0 as h → 0.)

(iii) We consider the case where T is self-adjoint (e.g., Example 5.2 below). In this case, we have ∥(I − Ph)T∥ = ∥T(I − Ph)∥. So, by (2.12) is larger than for small h > 0.

Remark 2.7. We mention applications of our results. Some of our results may be applicable for testing the quality of a numerical verification algorithm for solutions of differential equations. In general we obtain an upper bound of ∥u* − uh∥ as output data from a numerical verification algorithm (see, e.g., [5] and the references therein). By our Theorem 2.4, ∥u* − uh∥ is sufficiently close to ∥f(uh)∥ for sufficiently small h. So, Theorem 2.4 shows that we can check the accuracy of the output upper bound of ∥u* − uh∥ by finding the value of ∥f(uh)∥ when h is small. In [5] we proposed a numerical verification algorithm which also gives upper bounds of ∥Phu* − uh∥ as output data. Our Theorem 2.5 may be applicable for testing the accuracy of such upper bounds. See Remark 5.4 for more detailed information.

3. Preliminary Abstract Results

In this section, we prepare some abstract results in order to prove our main results in Section 2.

Proposition 3.1. We assume (H1). Let K : X → X be a compact operator. Then we have the following:

(3.1)
(3.2)

Proof. Though this result was proved in [6, Section 78], we give a simpler proof for the convenience of the reader. First we show that

(3.3)
We proceed by contradiction. We assume that (3.3) does not hold. Then we have δ := limsuph→0 ∥K(I − Ph)∥ > 0. Therefore, there exist sequences {hn} and {un} such that hn ↘ 0 as n → ∞, ∥un∥ = 1 for every n and
(3.4)
Since K is compact and (I − Phn)un converges weakly to 0 in X, we have ∥K(I − Phn)un∥ → 0 as n → ∞. This contradicts (3.4). So, (3.3) holds. Since K* is also compact, we obtain
(3.5)
So, we have (3.1), which implies (3.2).
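
A small finite-section illustration of Proposition 3.1 may help (this is not taken from the paper; the compact operator, the projections, and the truncation dimension below are ad hoc choices): for a compact diagonal operator, ∥K(I − Pn)∥ tends to 0 even though ∥I − Pn∥ stays equal to 1.

```python
# Toy illustration of Proposition 3.1 (not from the paper): K compact,
# P_n the orthogonal projection onto the first n coordinates of l^2
# (truncated here to dimension N for the computation).
import numpy as np

N = 400
K = np.diag(1.0 / np.arange(1, N + 1) ** 2)   # compact: eigenvalues 1/k^2 -> 0
I = np.eye(N)
for n in (5, 10, 20, 40, 80):
    P = np.zeros((N, N))
    P[:n, :n] = np.eye(n)                     # projection onto span{e_1, ..., e_n}
    print(n,
          np.linalg.norm(K @ (I - P), 2),     # equals 1/(n+1)^2, tends to 0
          np.linalg.norm(I - P, 2))           # stays equal to 1
```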

Next, we describe some results in a more general setting. In what follows in this section, let 𝒳 and 𝒴 be Banach spaces and let U ⊂ 𝒳 be an open set. We assume F ∈ C1(U, 𝒴).

Theorem 3.2. Let u0 ∈ U and let L ∈ ℒ(𝒳, 𝒴) be bijective. We define a map g : U → 𝒳 by

(3.6)
Let R > 0 be a constant satisfying {u ∈ 𝒳; ∥u − u0∥ ≤ R} ⊂ U and let b : [0, R] → [0, ∞) be a non-decreasing function such that
(3.7)
Let ε0 ≥ 0 be a constant such that
(3.8)
We assume that there exist constants r0 and r1 such that 0 < r0 ≤ r1 ≤ R,
(3.9)
(3.10)
Then the equation F(u) = 0 has an isolated solution u* with ∥u* − u0∥ ≤ r0. Moreover, the solution of F(u) = 0 is unique in B(u0; r1).

Remark 3.3. (i) Theorem 3.2 is a new version of the convergence theorem of simplified Newton’s method, which is a refinement of the classical versions such as [5, Theorem  0.1]. Actually, the former implies the latter.

(ii) The convergence theorem of simplified Newton’s method is a very strong and general principle to verify the existence of isolated solutions. The reason is, roughly speaking, that the condition of the theorem is not only a sufficient condition to guarantee an isolated solution but also virtually a necessary condition for an isolated solution to exist. See [4, Remark 1.3] for details.

Proof of Theorem 3.2. Though we may consider Theorem 3.2 as a corollary of [5, Theorem 1.1], we describe the proof for completeness. We easily verify that u is a solution of F(u) = 0 if and only if u is a fixed point of g. Let u, v ∈ U. We obtain

(3.11)
By (3.7) and (3.11) we have
(3.12)
We set B0 := {u ∈ 𝒳; ∥u − u0∥ ≤ r0}. Let u ∈ B0. In view of (3.7), (3.8), and (3.11) with v := u0, we have
(3.13)
(3.14)
Combining (3.9), (3.13), and (3.14), we have ∥g(u) − u0∥ ≤ r0, which implies g(B0) ⊂ B0. Therefore, in view of (3.10) and (3.12), g is a contraction on B0. By the contraction mapping principle there exists a unique solution u = u* in B0 of the equation F(u) = 0. We immediately obtain from (3.10) and (3.12) that the solution of F(u) = 0 is unique in B(u0; r1). Finally, it suffices to show that
(3.15)
in order to prove that u* is isolated. We denote by I the identity operator on 𝒳. Let u ∈ B(u0; r1). Then, by (3.7) and (3.10) we have ∥g′(u)∥ ≤ b(r1) < 1. This implies that I − g′(u) : 𝒳 → 𝒳 is bijective. Since L is also bijective and F′(u) = L{I − g′(u)}, (3.15) holds.
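
The mechanism of Theorem 3.2 can be illustrated with a toy computation (not from the paper; the map F, the initial guess u0, and the tolerance below are ad hoc choices): freezing L := F′(u0) once and iterating the map g(u) = u − L−1F(u), which is the natural reading of (3.6), yields, under smallness conditions of the kind (3.7)–(3.10), a contraction on a small ball around u0 and hence converges to the isolated zero of F.

```python
# Toy sketch of the simplified Newton map g(u) = u - L^{-1} F(u), with L
# frozen at F'(u0) (the classical "simplified Newton" iteration behind
# Theorem 3.2).  F, u0 and the tolerance are ad hoc illustration choices.
import numpy as np

def F(u):
    x, y = u
    return np.array([x**2 + y - 1.0, x - y**2])

def dF(u):
    x, y = u
    return np.array([[2.0 * x, 1.0],
                     [1.0, -2.0 * y]])

u0 = np.array([0.7, 0.7])          # initial guess
L = dF(u0)                         # L is fixed once and never updated
u = u0.copy()
for k in range(50):
    step = np.linalg.solve(L, F(u))
    u = u - step                   # u_{k+1} = g(u_k) = u_k - L^{-1} F(u_k)
    if np.linalg.norm(step) < 1e-14:
        break
print(u, F(u))                     # fixed point of g = approximate zero of F
```

Because L is kept fixed, each step costs only one solve with the same linear operator, which is exactly why the contraction-mapping argument in the proof above applies.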

The next result may be considered as a refinement of [7, Theorem  3.1 (3.14)] and [8, Theorem  3.1 (3.23)].

Proposition 3.4. Let u, v ∈ U with (1 − s)u + sv ∈ U for any s ∈ (0, 1) and let L ∈ ℒ(𝒳, 𝒴) be bijective. We set m := maxs∈[0,1] ∥L − F′((1 − s)u + sv)∥. Then we have

(3.16)
Moreover, if m∥L−1∥ < 1, then we also obtain
(3.17)

Proof. The proof is similar to that of Theorem 3.2. Let g  : U → 𝒳 be a map defined by (3.6). We have

(3.18)
It follows from (3.11) that ∥g(u) − g(v)∥ ≤ m∥L−1∥ ∥u − v∥. Combining this inequality and (3.18), we obtain (3.16) and (3.17).

Theorem 3.5. Let u = u* ∈ U be an isolated solution of the equation F(u) = 0. Let h0 > 0 be a positive constant, Fh ∈ C1(U, 𝒴) and Hh ∈ ℒ(𝒳, 𝒴) (0 < h < h0). We set H := F′(u*). We assume that

(3.19)
(3.20)
(3.21)
Here, . Then, there exist a constant h* ∈ (0, h0) and sequences , such that the following (a)–(f) hold:
  • (a)

    (3.22)

  • (b)

    u = uh  is an isolated solution of  Fh(u) = 0 for any  h ∈ (0, h*),

  • (c)

    (3.23)

  • (d)

    Hh is bijective with ,

  • (e)

    the solution of   Fh(u) = 0  is unique in  B(u*; Rh)  for  any  h ∈ (0, h*), where

    (3.24)

  • (f)

    (3.25)

Proof. By (3.20) and the stability property of linear operators (e.g., [3, Corollary 2.4.1]), H and Hh are bijective for sufficiently small h > 0 and Hh−1 → H−1 in ℒ(𝒴, 𝒳) as h → 0. Let and . We set d(r) := d(r, u*) for r > 0 and define . Let ch := 1/{1 − bh(2ηh)}, rh := chηh and . Then, we easily verify that, as h → 0,

(3.26)
Therefore, there exist h* ∈ (0, h0) and ε ∈ (0, 1) such that for any h ∈ (0, h*), is bijective with (d), 1 ≤ ch < 2, d(rh + ε) < δh and . It follows that for any h ∈ (0, h*), r > 0, and . We also have rh + ε ≤ Rh and
(3.27)
Let   h ∈ (0, h*)   and   R ∈ (rh, Rh). We apply Theorem 3.2 by setting F : = Fh, u0 : = u*, L : = Hh, b : = bh, r0 : = rh, r1 : = R and ε0 : = ηh. Then, we obtain the desired conclusions.

Remark 3.6. Theorem 3.5 is related to [7, Theorem 3.1] and [8, Theorem 3.1]. Actually, their proofs are similar to ours. Our proof is based on the convergence theorem of simplified Newton’s method, from which they may be derived similarly.

4. Proofs of Main Theorems

We prove the results in Section 2. We use the notation (2.1).

Proof of Proposition 2.1. We apply Theorem 3.5 by putting 𝒳 = 𝒴 := X, F := f, Fh := fh, H := A and Hh := Ah. We show (3.19)–(3.21). By (H1) we have fh(u*) = (I − Ph)u* → 0 in X as h → 0. Therefore, (3.19) holds. It follows from (H2) and Proposition 3.1 that

(4.1)
So, (3.20) holds. Let r > 0, u ∈ U and . Since φ′ is continuous, we have as r ↘ 0, which implies (3.21). Therefore, by Theorem 3.5, there exist a small constant h* > 0 and uh (0 < h < h*) such that (a)–(f) with ch := Ch hold. So, we immediately obtain (ii) and uh → u* in X as h → 0. Since , (a) and (c) imply (2.2). So, (iii) holds. In view of (d) and (e), we have (i) with
(4.2)
where . The proof is complete.

Proof of Theorem 2.4. We set u(s, h) := (1 − s)uh + su* for simplicity. Proposition 2.1 (iii) implies maxs∈[0,1] ∥u* − u(s, h)∥ = ∥u* − uh∥ → 0 as h → 0. First we show (2.6). We have f(uh) = −(I − Ph)φ(uh) = (I − Ph)f(uh), Ah → A in ℒ(X) as h → 0 and

(4.3)
Since Ahf(uh) = f(uh), we have . We apply Proposition 3.4 with L : = Ah, u : = uh, v : = u* and F : = f to obtain
(4.4)
which implies ∥uh − u*∥ ≃ ∥f(uh)∥. We also have ∥uh − u*∥ ≃ ∥A−1f(uh)∥ by the above discussion with L := Ah replaced by L := A. Next, we show (2.7). In the same way as above we apply Proposition 3.4 with L := A (resp., L := Ah), u := uh, v := Phu* and F := fh to have
(4.5)
Since fh(Phu*) = Phf(Phu*) and Ah commutes with Ph, we have . Combining (4.5) and fh(Phu*) = Ph{φ(u*) − φ(Phu*)}, we obtain ∥Phu* − uh∥ ~ ∥Ph{φ(u*) − φ(Phu*)}∥.

Proof of Theorem 2.5. We set u*(s, h): = (1 − s)u* + sPhu* for simplicity.

  • (i)

    It follows from (H2) and Proposition 3.1 that

    (4.6)

By (H1) and the continuity of φ′(u) at u = u* we have
(4.7)
We obtain (2.8) from (4.6) and (4.7).
  • (ii)

    In the same way as (3.11) we have

    (4.8)

By this equality, (2.7) and (2.9), we have (2.10).

Finally we derive (2.11) and (2.13).

Proof of (2.11) and (2.13). Without loss of generality we assume . First we derive (2.11). This proof is essentially the same as that of [1, Theorem 3.2]. It suffices to prove

(4.9)
which implies (2.11) in view of . We have
(4.10)
It follows that
(4.11)
which implies (4.9). Next we derive (2.13). Since A−1 = I + K with K := T(I − T)−1, we obtain from Proposition 3.1 that
(4.12)
So, (2.13) holds.

5. Concrete Examples

In this section we consider the following semilinear elliptic boundary value problem:
(5.1)
where Ω is a bounded convex domain in RN (N ≤ 3) with piecewise smooth boundary ∂Ω. We will rewrite (5.1) in the form (1.1) under an appropriate setting of function spaces. We simply denote G(u) := G(·, u, ∇u). Let L be the operator defined by Lu := −Δu under the homogeneous Dirichlet boundary condition. We set X := H₀¹(Ω) and φ(u) := L−1G(u). Under suitable smoothness and growth assumptions on G, we have φ, f ∈ C1(X, X). We can rewrite (5.1) as f(u) = 0. We choose Xh as a finite element subspace of X with mesh size h.
In what follows, we concentrate on the cases Ω = (0, 1) ⊂ R and Ω = (0, 1)×(0, 1) ⊂ R2. We use finite element methods with piecewise linear and bilinear elements, respectively, on the uniform (rectangular) mesh with mesh size h = 1/n (n ∈ N). Then, we have dim Xh = n − 1 in the 1-dimensional case and dim Xh = (n − 1)² in the 2-dimensional case. In this context the following basic estimates hold:
(5.2a)
(5.2b)
(5.2c)
where Ca, Cb, and Cc are some positive constants independent of h and u. As in the previous sections, we denote by u* an isolated solution of f(u) = 0 and by uh a finite element solution of f(u) = 0 (i.e., a solution of fh(u) = 0) in a small neighborhood of u*. In view of Proposition 2.1, uh exists uniquely in a small neighborhood of u* for sufficiently small h > 0. In our examples below we show that the following error estimate holds:
(5.3)
For simplicity we denote u*(s, h)∶ = (1 − s)u* + sPhu*. We will derive (5.3) from Theorem 2.5 and the duality
(5.4)

We now present two examples.

Example 5.1. We consider the following Burgers equation:

(5.5)
Here, g(x, y) is a given function with g ∈ L2(Ω). As mentioned above, we rewrite (5.5) as f(u) := u − φ(u) = 0. In the present case φ : X → X is a nonlinear map defined by φ(u) := L−1(−uux + g). By the elliptic regularity property we have u* ∈ H2(Ω) (see, e.g., [9]). We will derive (5.3). Let u, v ∈ X. We easily verify that
(5.6)
By (5.2b) we have
(5.7)
It follows that
(5.8)
We obtain from (5.2c) that
(5.9)
It follows from (5.4), (5.8), and (5.9) that
(5.10)
By (5.10), (5.2b), and Theorem 2.5 we have (5.3).
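
The behaviour predicted by (5.3) can be checked numerically on a one-dimensional analogue of (5.5); the following sketch is not from the paper and makes several ad hoc assumptions: it treats −u″ + uu′ = g on (0, 1) with zero boundary values, a manufactured solution u*(x) = sin(πx), piecewise linear elements on the uniform mesh h = 1/n, X = H₀¹(0, 1) equipped with the inner product ∫u′v′ dx (so that in one dimension Ph coincides with nodal interpolation), and a plain Newton iteration with a finite-difference Jacobian. Under these assumptions one expects the whole error ∥u* − uh∥ to decay like h, while ∥Phu* − uh∥ decays roughly like h², in line with Remark 5.4.

```python
# 1-D illustration (not from the paper): -u'' + u u' = g on (0,1), u(0)=u(1)=0,
# piecewise-linear finite elements on the uniform mesh h = 1/n, Galerkin
# solution computed by Newton's method with a finite-difference Jacobian.
import numpy as np

def galerkin_errors(n):
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)
    u_ex  = lambda x: np.sin(np.pi * x)                  # manufactured solution
    du_ex = lambda x: np.pi * np.cos(np.pi * x)
    g = lambda x: np.pi**2 * np.sin(np.pi*x) + np.pi*np.sin(np.pi*x)*np.cos(np.pi*x)
    qp = np.array([0.5 - 0.5/np.sqrt(3.0), 0.5 + 0.5/np.sqrt(3.0)])  # 2-pt Gauss
    qw = np.array([0.5, 0.5])

    def residual(U):
        # F(U)_i = integral of u_h' phi_i' + (u_h u_h' - g) phi_i, interior nodes i
        Uf = np.zeros(n + 1); Uf[1:n] = U
        F = np.zeros(n + 1)
        for e in range(n):                               # element [x_e, x_{e+1}]
            ul, ur = Uf[e], Uf[e + 1]
            duh = (ur - ul) / h
            for t, w in zip(qp, qw):
                x = nodes[e] + t * h
                uh = (1 - t) * ul + t * ur
                phi, dphi = np.array([1 - t, t]), np.array([-1.0, 1.0]) / h
                F[e:e + 2] += w * h * (duh * dphi + (uh * duh - g(x)) * phi)
        return F[1:n]

    U = np.zeros(n - 1)                                  # Newton iteration
    for _ in range(20):
        F = residual(U)
        if np.linalg.norm(F) < 1e-11:
            break
        J = np.zeros((n - 1, n - 1)); eps = 1e-7
        for k in range(n - 1):
            d = np.zeros(n - 1); d[k] = eps
            J[:, k] = (residual(U + d) - F) / eps
        U -= np.linalg.solve(J, F)

    Uf = np.zeros(n + 1); Uf[1:n] = U
    # |u* - u_h| in the H^1_0 seminorm, by elementwise quadrature
    e1 = sum(w * h * (du_ex(nodes[e] + t * h) - (Uf[e + 1] - Uf[e]) / h) ** 2
             for e in range(n) for t, w in zip(qp, qw))
    # with the inner product of u'v', P_h u* is the nodal interpolant of u*,
    # so |P_h u* - u_h| is computed exactly from the nodal values
    D = (u_ex(nodes[1:]) - u_ex(nodes[:-1])) / h - (Uf[1:] - Uf[:-1]) / h
    e2 = np.sum(h * D ** 2)
    return np.sqrt(e1), np.sqrt(e2)

for n in (8, 16, 32, 64):
    print(n, galerkin_errors(n))   # expected: first column ~ O(h), second ~ O(h^2)
```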

Example 5.2. We consider the Emden equation

(5.11)
We omit the one-dimensional case since it is easier. We can treat the present case in a similar way to Example 5.1. We rewrite (5.11) as (1.1). In the present case φ : X → X is defined by φ(u) := L−1(u²). Let u, v ∈ X. We verify that φ′(v)u = 2L−1(vu) and that φ′(v) is self-adjoint. By (5.2b) and Sobolev’s inequality we have
(5.12)
for any s ∈ [0, 1] and u ∈ X. Here, C > 0 is a constant independent of s, h, and u.
(5.13)
By (5.13), (5.2b), and Theorem 2.5 we have (5.3).

Remark 5.3. This remark is related to Remark 2.6.

(i) As mentioned in Remark 2.6 (ii), sups∈[0,1] ∥φ′((1 − s)u* + sPhu*)(I − Ph)∥ ≈ ∥T(I − Ph)∥ holds in general. Actually, in Example 5.2 (resp., Example 5.1) our best possible upper bound of sups∈[0,1] ∥φ′(u*(s, h))(I − Ph)∥ is the right-hand side of (5.13) (resp., (5.10)), which is just the same (resp., has the same order) as that of ∥T(I − Ph)∥.

(ii) We pointed out that our estimate (2.10) is in general sharper than (2.11), which is directly derived from the discussion in [1]. In order to show it concretely, we apply (2.11) to the equations in Examples 5.1 and 5.2. In both cases our best possible error estimate is the following:

(5.14)
Compare (5.14) with (5.3), which is based on (2.10). Though we omit the detailed derivation of (5.14), we show here that we cannot obtain a better estimate than (5.14) if we use (2.11) as a basic estimate. By the same discussion as in Examples 5.1 and 5.2 we have
(5.15)
which is our best possible upper estimate of ∥(I − Ph)T∥. So, in view of (2.13), it is necessary to take such that for small h. Here, C > 0 is a constant independent of h. (Compare this estimate with (5.10) and (5.13).) Therefore, we cannot improve (5.14) if we use (2.11) and (5.2b) as basic estimates.

Remark 5.4. Various numerical verification algorithms for solutions of differential equations have been proposed (see, e.g., [10]). Some of them give upper bounds of ∥Phu* − uh∥ as output data (see [5]). Theorem 2.5 may be applicable for checking the accuracy of such output upper bounds since we can apply it to given problems in order to compute the concrete order of ∥Phu* − uh∥ as h → 0. For example, we treated problems (5.5) and (5.11) as concrete numerical examples in [5], where we proposed a numerical verification algorithm based on a convergence theorem of Newton’s method. In these problems (5.3) is the theoretical estimate of ∥Phu* − uh∥ derived from our Theorem 2.5. The output upper bounds of ∥Phu* − uh∥ in [5, Section 3] seem to have just the order of h² as h → 0. So, the accuracy of such output upper bounds in [5, Section 3] is satisfactory as long as we judge it by the theoretical estimate (5.3).

Acknowledgments

The author would like to express his sincere gratitude to Professor Takuya Tsuchiya and Professor Atsushi Yagi for their valuable comments and encouragement. He is grateful to the referee for constructive comments.
