Volume 2013, Article ID 474089
Research Article
Open Access

Analysis of Numerical Measure and Numerical Integration Based on Measure

Yinkun Wang, Jianshu Luo, and Xiangling Chen

Yinkun Wang (corresponding author): Department of Mathematics and System Science, National University of Defense Technology, Changsha 410073, China; Department of Mathematics, Syracuse University, Syracuse, NY 13244, USA

Jianshu Luo: Department of Mathematics and System Science, National University of Defense Technology, Changsha 410073, China

Xiangling Chen: College of Mathematics and Computer Science, Hunan Normal University, Changsha 410081, China
First published: 10 December 2013
Academic Editor: Abdelghani Bellouquid

Abstract

We present a convergence analysis for a general numerical method of estimating the measure function. By combining it with Lagrange interpolation, we propose a specific method for approximating the measure function and analyze its convergence order. Further, we analyze the error bound of numerical measure integration and prove that numerical measure integration can decrease the singularity of singular integrals. Numerical examples are presented to confirm the theoretical results.

1. Introduction

A numerical integration based on the definition of the Lebesgue integral was proposed in the paper "A New Approach to Numerical Integration" [1]. The method was said to be particularly useful for integrands which are highly oscillatory in character or singular. It provides an exact reduction of multidimensional integrals to one-dimensional integrals and appeared to be a more economical way of treating certain multidimensional integrals. For brevity, we call the numerical integration proposed by B. L. Burrows the numerical measure integration (NMI).

Consider the integral ∫_I f(x) dx, where f(x) ∈ L(I) is a nonnegative function and I = [a, b]. First, define E(f(x) ⩾ y) := {x ∈ I : f(x) ⩾ y} and μ_f(y) := m(E(f(x) ⩾ y)), y ∈ ℝ. It is easy to see that μ_f(y) is a monotonically decreasing and bounded function with 0 ⩽ μ_f(y) ⩽ b − a for all y. Define y_0 := min_{x∈I} f(x) and y_N := max_{x∈I} f(x). Denote the range of f by R(f) := [y_0, y_N], where y_0 ⩾ 0 and y_N can be infinite, which means that f(x) is singular. The fundamental equation of NMI proposed by [1] is

∫_I f(x) dx = y_0(b − a) + ∫_{y_0}^{y_N} μ_f(y) dy. (1)
When calculating the Lebesgue integral numerically through (1), there are three kinds of errors [1]:
  • (i)

    the error in the method used to estimate the integral ∫_{y_0}^{y_N} μ_f(y) dy;

  • (ii)

    the error in the estimate of μf(y) for some y when necessary;

  • (iii)

    the error in estimates of y0 and yN whose values are not known.

If the values of μf(y), y0, and yN can be obtained exactly, the accuracy of the Lebesgue integral of f(x) is determined by the properties of μf(y) such as continuity and differentiability. One advantage of this method is that even though f(x) may be oscillatory or singular on its bounded domain, its associated measure function μf(y) retains good properties: it is bounded and monotonically decreasing. This offers the opportunity to obtain better results than conventional numerical integration methods.
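
To make the role of the measure function concrete, the following minimal Python sketch (ours, not taken from [1]; the function names and the uniform-sampling strategy are our own choices) estimates μ_f(y) by sampling and evaluates the right-hand side of (1) for f(x) = ln x on [1, 5], the integrand of Example 1 below.

    import numpy as np

    def measure_bruteforce(f, a, b, y, n_samples=200_000):
        """Crude estimate of mu_f(y) = m({x in [a, b] : f(x) >= y}) by uniform sampling."""
        x = np.linspace(a, b, n_samples)
        return (b - a) * np.mean(f(x) >= y)

    def integral_via_measure(f, a, b, y0, yN, n_y=2000):
        """Evaluate y0*(b - a) + integral of mu_f over [y0, yN] with the trapezoidal rule, cf. (1)."""
        ys = np.linspace(y0, yN, n_y)
        mus = np.array([measure_bruteforce(f, a, b, y) for y in ys])
        return y0 * (b - a) + float(np.sum(0.5 * (mus[1:] + mus[:-1]) * np.diff(ys)))

    if __name__ == "__main__":
        approx = integral_via_measure(np.log, 1.0, 5.0, y0=0.0, yN=np.log(5.0))
        exact = 5.0 * np.log(5.0) - 4.0       # integral of ln x over [1, 5]
        print(approx, exact, abs(approx - exact))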

However, it is often difficult, or perhaps impossible, to find an explicit formula for μf(y) and the exact bounds y0 and yN. The required values need to be estimated, which leads to the additional errors (ii) and (iii) in the numerical results. What we desire is to reduce the effect of these additional errors on the final error of the Lebesgue integration as much as possible and to quantify the extent of their influence. We therefore need to develop methods to estimate the unknown values and to analyze the rate of convergence.

In this paper, we mainly discuss the second kind of error, caused by the numerical measure, and its effect on the NMI. The paper is organized in 6 sections. In Section 2, we recall a general estimation of the measure function and present its convergence analysis. Further, by combining it with Lagrange interpolation, we propose a specific method for approximating the measure function and analyze its rate of convergence. A highly accurate algorithm is presented especially for strictly monotonic functions. In Section 3, we present two examples of estimating the measure, one for a monotonic function and one for an oscillatory function. In Section 4, we present the error bound of NMI, which is controlled by the error of the numerical measure and the error of the integral of the exact measure. Further, we prove that the singularity can be reduced by NMI for singular integrals. In Section 5, we give three examples of numerical integrals calculated by NMI based on the numerical measure. Finally, we draw conclusions in Section 6.

2. Estimation of the Measure Function and Its Convergence Analysis

To calculate Lebesgue integrals by the NMI method, it is essential to obtain the values of the measure function at the necessary points. However, in most cases the values of the measure function cannot be obtained exactly through an explicit formula; they need to be estimated numerically. Moreover, it will be seen that the accuracy of the values of the measure function greatly affects the final accuracy of the Lebesgue integral.

To estimate the measure of a one-dimensional function f(x), we divide I into n subintervals {I_i := [x_i, x_{i+1}] : i = 1, 2, …, n}. Define p_i := m(I_i) = x_{i+1} − x_i; obviously,

∑_{i=1}^{n} p_i = b − a. (2)
On each interval I_i, we choose x_i ⩽ x_{i,1} < x_{i,2} < ⋯ < x_{i,k} ⩽ x_{i+1}, k ∈ ℕ, as test points. Then the values {f(x_{i,j}) : x_{i,j} ∈ I_i, i = 1, …, n, j = 1, …, k} are calculated. Given any y ∈ R(f), the approximation of μ_f(y) is taken as

μ̂_f(y) = ∑_{i=1}^{n} ϵ_i p_i, (3)
where 0 ⩽ ϵi ⩽ 1.
Different methods of choosing {ϵ_i : i = 1, 2, …, n} determine different approximations of μ_f(y). A general assignment for {ϵ_i : i = 1, 2, …, n}, proposed by [1], is

ϵ_i = 1 if f(x_{i,j}) ⩾ y for all j = 1, …, k;  ϵ_i = 0 if f(x_{i,j}) < y for all j = 1, …, k;  otherwise ϵ_i is an estimate in [0, 1] of the fraction of I_i belonging to E(f(x) ⩾ y), for instance the proportion of the k test points at which f(x_{i,j}) ⩾ y. (4)
Let χ_E(x) be the characteristic function of a set E, defined by

χ_E(x) = 1 if x ∈ E,  χ_E(x) = 0 if x ∉ E. (5)
Set g(x) := χ_{E(f(x)⩾y)}(x). We have

μ_f(y) = ∫_I g(x) dx = ∑_{i=1}^{n} ∫_{I_i} g(x) dx. (6)
Define μ_i(y) := ∫_{I_i} g(x) dx, i = 1, 2, …, n. Then μ_f(y) = ∑_{i=1}^{n} μ_i(y).

Set M_y := the number of segments (maximal subintervals) of E(f(x) ⩾ y). For example, consider f(x) = |sin 2x| on [0, π]; then M_y = 2 for y ∈ (0, 1). The following theorem gives the error analysis of the measure approximation (3).
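
As an illustration of (3) with one concrete choice of the assignment (4) (ours: ϵ_i is taken as the proportion of the k test points of I_i lying in E(f(x) ⩾ y)), the following sketch estimates the measure of f(x) = |sin 2x| on [0, π]; the reference value μ_f(y) = π − 2 arcsin y is derived by hand and is only used for comparison.

    import numpy as np

    def measure_general(f, a, b, y, n=100, k=4):
        """Estimate mu_f(y) by sum_i eps_i * p_i, where eps_i is the fraction of the
        k test points of subinterval I_i at which f >= y (one choice allowed by (4))."""
        edges = np.linspace(a, b, n + 1)
        total = 0.0
        for xi, xi1 in zip(edges[:-1], edges[1:]):
            pts = np.linspace(xi, xi1, k)        # test points x_{i,1} <= ... <= x_{i,k}
            eps = np.mean(f(pts) >= y)           # 0 <= eps_i <= 1
            total += eps * (xi1 - xi)            # eps_i * p_i
        return total

    if __name__ == "__main__":
        f = lambda x: np.abs(np.sin(2.0 * x))
        y = 0.5
        exact = np.pi - 2.0 * np.arcsin(y)       # hand-derived measure of {|sin 2x| >= y} on [0, pi]
        for n in (50, 100, 200, 400):
            print(n, abs(measure_general(f, 0.0, np.pi, y, n=n) - exact))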

Theorem 1. Let f(x) ∈ C[a, b] and f(x) ⩾ 0 for x ∈ [a, b], and let {I_i : i = 1, …, n}, n ∈ ℕ, be a partition of [a, b]. If μ̂_f(y) and {ϵ_i : i = 1, 2, …, n} are determined by (3) and (4), respectively, then

|μ̂_f(y) − μ_f(y)| ⩽ 2 M_y max_{i∈𝕍} p_i, (7)

where 𝕍 = {i : ∃ x′, x″ ∈ I_i s.t. f(x′) ⩽ y < f(x″)}.

Proof. According to the approximation formula (3) and equation (6), we have

|μ̂_f(y) − μ_f(y)| = |∑_{i=1}^{n} ϵ_i p_i − ∑_{i=1}^{n} μ_i(y)| ⩽ ∑_{i=1}^{n} |μ_i(y) − ϵ_i p_i|. (8)

Set 𝕍 = {i : ∃ x′, x″ ∈ I_i s.t. f(x′) ⩽ y < f(x″)}. According to the definition of ϵ_i, we have μ_i(y) − ϵ_i p_i = 0 for i ∉ 𝕍. Then the inequality (8) can be simplified to

|μ̂_f(y) − μ_f(y)| ⩽ ∑_{i∈𝕍} |μ_i(y) − ϵ_i p_i|. (9)

Here it is apparent that

|μ_i(y) − ϵ_i p_i| ⩽ p_i, (10)

since 0 ⩽ μ_i(y) ⩽ p_i and 0 ⩽ ϵ_i p_i ⩽ p_i.

By the intermediate value theorem, each I_i with i ∈ 𝕍 contains at least one endpoint of a segment of the domain E(f(x) ⩾ y). Since E(f(x) ⩾ y) has only M_y segments, the number of subintervals I_i with i ∈ 𝕍 is not more than 2M_y; that is,

Card(𝕍) ⩽ 2 M_y, (11)
where Card (𝕍) denotes the cardinality of the set 𝕍.

Combining inequalities (9), (10), and (11), we obtain

|μ̂_f(y) − μ_f(y)| ⩽ ∑_{i∈𝕍} p_i ⩽ Card(𝕍) max_{i∈𝕍} p_i ⩽ 2 M_y max_{i∈𝕍} p_i. (12)

We note that when My is bounded and the domain I is equally divided into n segments, the numerical measure is uniformly convergent and the order of convergence is linear according to Theorem 1.

When the integrand function has additional properties, the measure can be approximated more accurately. Suppose that the nonnegative function f(x) is invertible in each subinterval I_i, i = 1, 2, …, n. Define f_i := f|_{I_i} and μ_i(y) := m(E(f_i(x) ⩾ y)). Denote by R(f_i) the range of f_i in I_i and by f_i^{−1} the inverse function of f_i in I_i. Then we obtain, for y ∈ R(f_i),

μ_i(y) = x_{i+1} − f_i^{−1}(y) if f_i is increasing,  μ_i(y) = f_i^{−1}(y) − x_i if f_i is decreasing, (13)

while μ_i(y) = p_i if y lies below R(f_i) and μ_i(y) = 0 if y lies above R(f_i).
Construct the (k − 1)th-degree Lagrange interpolation polynomial P_i(y) on R(f_i) which interpolates the data {(f(x_{i,j}), x_{i,j}) : j = 1, 2, …, k}. Then the approximations of μ_i(y) and μ_f(y) are, respectively,

μ̂_i(y) = x_{i+1} − P_i(y) if f_i is increasing,  μ̂_i(y) = P_i(y) − x_i if f_i is decreasing,  for y ∈ R(f_i), with μ̂_i(y) = p_i below R(f_i) and μ̂_i(y) = 0 above R(f_i), (14)

μ̂_f(y) = ∑_{i=1}^{n} μ̂_i(y). (15)

Set ϵ_i := μ̂_i(y)/p_i, which reveals that the estimation (15) is a specific case of the general assignment (4). That means that Theorem 1 is valid for the estimation (15).
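
A sketch (ours) of one way the interpolation-based estimate (14)-(15) might be implemented, under the assumption that f is strictly monotone on every subinterval; approximating the extrema of f on I_i by the test-point values and the handling of y outside R(f_i) are our own simplifications.

    import numpy as np

    def lagrange_eval(nodes, values, t):
        """Evaluate the Lagrange polynomial through the points (nodes[j], values[j]) at t."""
        result = 0.0
        for j, (nj, vj) in enumerate(zip(nodes, values)):
            basis = 1.0
            for m, nm in enumerate(nodes):
                if m != j:
                    basis *= (t - nm) / (nj - nm)
            result += vj * basis
        return result

    def measure_lagrange(f, a, b, y, n=32, k=4):
        """Estimate mu_f(y) in the spirit of (14)-(15): on each subinterval the inverse
        f_i^{-1} is replaced by the polynomial interpolating (f(x_ij), x_ij)."""
        edges = np.linspace(a, b, n + 1)
        total = 0.0
        for xi, xi1 in zip(edges[:-1], edges[1:]):
            pts = np.linspace(xi, xi1, k)
            vals = f(pts)
            lo, hi = vals.min(), vals.max()
            if y <= lo:                    # I_i lies entirely inside E(f >= y)
                total += xi1 - xi
            elif y > hi:                   # I_i lies entirely outside E(f >= y)
                continue
            else:                          # boundary point f_i^{-1}(y) approximated by P_i(y)
                x_star = lagrange_eval(vals, pts, y)
                increasing = vals[-1] >= vals[0]
                total += (xi1 - x_star) if increasing else (x_star - xi)
        return total

    if __name__ == "__main__":
        y = 1.0
        exact = 5.0 - np.exp(y)            # mu_f(y) for f = ln x on [1, 5]
        for n in (8, 16, 32, 64):
            print(n, abs(measure_lagrange(np.log, 1.0, 5.0, y, n=n, k=4) - exact))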

Theorem 2. Suppose that the nonnegative function f(x) has a partition {I_i : i = 1, 2, …, n} of I and that f has an inverse function f_i^{−1} ∈ C^k(R(f_i)) in each subinterval. If μ̂_i(y) and μ̂_f(y) are determined by (14) and (15), then

|μ̂_f(y) − μ_f(y)| ⩽ 2 M_y max_{i∈𝕍} [ (1/k!) max_{η∈R(f_i)} |(f_i^{−1})^{(k)}(η)| · max_{t∈R(f_i)} ∏_{j=1}^{k} |t − f(x_{i,j})| ], (16)

where 𝕍 = {i : ∃ x′, x″ ∈ I_i s.t. f(x′) ⩽ y < f(x″)}.

Proof. According to Theorem 1 and (13), (14), and (15), we have

|μ̂_f(y) − μ_f(y)| ⩽ ∑_{i∈𝕍} |μ_i(y) − μ̂_i(y)| ⩽ ∑_{i∈𝕍} max_{t∈R(f_i)} |f_i^{−1}(t) − P_i(t)| ⩽ 2 M_y max_{i∈𝕍} max_{t∈R(f_i)} |f_i^{−1}(t) − P_i(t)|. (17)

The proof is finished by combining this with the well-known error of the Lagrange interpolation polynomials.

According to the proof of Theorem 2, the accuracy of the numerical measure depends on the accuracy of the Lagrange interpolation polynomials of the inverse functions in each subinterval. Numerical experiments will be carried out to confirm the validity of the method. At the end of this section, we introduce a highly accurate algorithm for calculating the measure, especially suited to strictly monotonic functions, called the split-half algorithm.

Algorithm 3 (split-half algorithm). Consider the following steps.

Step  1. Given y ∈ R(f) and N ∈ ℕ, set i := 0, Δ := b − a, and x_m := (a + b)/2. Determine the monotonicity of the function f, increasing or decreasing.

Step  2. Evaluate the function at x_m and set

T_1 := sign(f(x_m) − y). (18)

Step  3. If T_1 = 0, we get the exact measure

μ_f(y) = b − x_m for increasing f,  μ_f(y) = x_m − a for decreasing f. (19)

Otherwise, take

μ̂_f(y) = b − x_m for increasing f,  μ̂_f(y) = x_m − a for decreasing f. (20)
Set i ← i + 1, Δ ← Δ/2, and x_m ← x_m − T_1Δ/2 for increasing functions or x_m ← x_m + T_1Δ/2 for decreasing functions, and go back to Step  2 until i = N.

The error of the split-half algorithm satisfies

|μ̂_f(y) − μ_f(y)| ⩽ (b − a)/2^{N+1}. (21)
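
A sketch (ours) of the split-half algorithm as we read Steps 1-3: N bisection steps locate the level-crossing point of a strictly monotone f, and the measure is the length of the part of [a, b] on which f ⩾ y.

    import numpy as np

    def measure_split_half(f, a, b, y, N=100, increasing=True):
        """Split-half (bisection) estimate of mu_f(y) for strictly monotone f on [a, b]."""
        delta = b - a
        xm = 0.5 * (a + b)
        for _ in range(N):
            t1 = np.sign(f(xm) - y)
            if t1 == 0:                     # exact hit: the measure is known exactly
                break
            delta *= 0.5
            xm = xm - t1 * delta / 2 if increasing else xm + t1 * delta / 2
        return (b - xm) if increasing else (xm - a)

    if __name__ == "__main__":
        y = 1.0
        exact = 5.0 - np.exp(y)             # mu_f(y) for f = ln x on [1, 5]
        print(abs(measure_split_half(np.log, 1.0, 5.0, y, N=60) - exact))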

3. Numerical Examples of Measure

Example 1. Consider f(x) = ln x, x ∈ [1,5].

Since ln x is monotonically increasing, y_0 = 0, y_N = ln 5, and μ_f(y) = 5 − e^y for y ∈ R(f). The inverse function f^{−1}(y) = e^y belongs to C^∞(R(f)).

Table 1 lists the respective measure errors and their convergence orders (briefly, C.O.). The error of the approximation for given values of n and k is denoted by e_n := |μ̂_f(y) − μ_f(y)|, where μ̂_f(y) is determined by (14) and (15), and each convergence order is computed as ln(e_n/e_m)/ln(m/n).

Table 1. Errors of μ(y).
n Errors (k = 2) C.O. Errors (k = 4) C.O.
8 2.5284e − 002 2.2050e − 005
16 6.9679e − 003 1.8594 1.6494e − 006 3.7408
32 1.8272e − 003 1.9311 1.1349e − 007 3.8614
64 4.7207e − 004 1.9526 5.4007e − 009 4.3932
128 1.1997e − 004 1.9763 3.9920e − 010 3.7580
256 2.7692e − 005 2.1151 2.6089e − 011 3.9356
512 7.1502e − 006 1.9534 1.5805e − 012 4.0450
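
The C.O. columns are obtained from consecutive error pairs as ln(e_n/e_m)/ln(m/n); a small helper (ours) that reproduces this bookkeeping from the k = 2 column of Table 1:

    import math

    def convergence_orders(errors):
        """Given (n, e_n) pairs, return ln(e_n/e_m) / ln(m/n) for consecutive entries."""
        orders = []
        for (m, em), (n, en) in zip(errors[:-1], errors[1:]):
            orders.append(math.log(en / em) / math.log(m / n))
        return orders

    errs = [(8, 2.5284e-2), (16, 6.9679e-3), (32, 1.8272e-3), (64, 4.7207e-4)]
    print(convergence_orders(errs))         # approximately [1.86, 1.93, 1.95]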

Example 2. Consider an oscillatory function f(x) = |sin30x|, x ∈ [0, π/3].

The minimum and maximum of f(x) are y_0 = 0 and y_N = 1, respectively. Since f(x) is periodic, we can obtain the explicit formula of the measure function:

μ_f(y) = π/3 − (2/3) arcsin y,  0 ⩽ y ⩽ 1. (22)

Numerical results for the errors are shown in Table 2. Since the measure function does not possess good differentiability near y = 0, the errors do not exhibit a stable convergence order of 𝒪(n^{−k}).

Table 2. Errors of μ(y).
n Errors (k = 2) C.O. Errors (k = 4) C.O.
20 2.2045e − 001 1.0791e − 001
40 1.2578e − 001 .80948 8.3284e − 002 .37371
80 6.4771e − 002 .95754 4.6044e − 002 .85501
160 2.6235e − 002 1.3039 6.8435e − 003 2.7502
320 5.3436e − 003 2.2956 7.4735e − 005 6.5168
640 6.7018e − 004 2.9952 1.7822e − 005 2.0682
1280 2.6427e − 004 1.3425 4.8946e − 007 5.1863

4. Error Analysis of NMI

In this section, we analyze the overall error of the integral ∫_I f(x) dx. Since we are mainly concerned with the influence of the errors caused by the numerical measure, we assume that the maximum and minimum of the function f(x) are known; that is, we do not consider the errors of y_0 and y_N here. To analyze the error of the integral, it is necessary to choose an integration rule for the Riemann integral of the measure function. An appropriate rule for this one-dimensional integral depends on the character of μ_f(y).

Let U be an interval and assume g ∈ C(U). To approximate the integral Q(g) := ∫_U g(t) dt, we consider a numerical quadrature rule of the form

Q_n(g) := ∑_{j=0}^{n} w_{n,j} g(t_{n,j}), (23)

where t_{n,j} ∈ U, j = 0, 1, …, n, are quadrature points and w_{n,j}, j = 0, 1, …, n, are real quadrature weights.
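
For instance, a rule of the form (23) can be built from Gauss-Legendre nodes and weights mapped affinely to U; a brief sketch (ours), applied for illustration to the exact measure function of Example 1 in Section 3:

    import numpy as np

    def gauss_legendre_rule(c, d, npts):
        """Nodes t_j in U = [c, d] and weights w_j of the npts-point Gauss-Legendre rule."""
        nodes, weights = np.polynomial.legendre.leggauss(npts)   # rule on [-1, 1]
        t = 0.5 * (d - c) * nodes + 0.5 * (c + d)                # affine map to [c, d]
        w = 0.5 * (d - c) * weights
        return t, w

    def quadrature(g, t, w):
        """Q_n(g) = sum_j w_j * g(t_j), cf. (23)."""
        return float(np.dot(w, g(t)))

    if __name__ == "__main__":
        t, w = gauss_legendre_rule(0.0, np.log(5.0), 5)
        mu = lambda y: 5.0 - np.exp(y)      # exact measure function of f = ln x on [1, 5]
        print(quadrature(mu, t, w), 5.0 * np.log(5.0) - 4.0)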

Define ℰ_n(g) := Q(g) − Q_n(g), the integration error of g under the given quadrature rule.

Integrating the measure function in the fundamental equation (1) by the numerical quadrature rule (23), we have

∫_I f(x) dx = y_0(b − a) + ∫_{y_0}^{y_N} μ_f(y) dy ≈ y_0(b − a) + ∑_{j=0}^{n} w_{n,j} μ̂_f(t_{n,j}), (24)

where μ_f and μ̂_f are, respectively, the measure function and the numerical measure function of f(x). So the numerical quadrature rule of NMI corresponding to f is

Q_n^{NMI}(f) := y_0(b − a) + ∑_{j=0}^{n} w_{n,j} μ̂_f(t_{n,j}). (25)
Define ℰ_n^{NMI}(f) := ∫_I f(x) dx − Q_n^{NMI}(f). We have the following fundamental theorem for the error of NMI.

Theorem 4. Let f(x) ∈ C[a, b] and f(x) ⩾ 0 for x ∈ [a, b]. When ∫_I f(x) dx is estimated by the numerical quadrature rule of NMI (25), the error satisfies

|ℰ_n^{NMI}(f)| ⩽ |ℰ_n(μ_f)| + ∑_{j=0}^{n} |w_{n,j}| |μ_f(t_{n,j}) − μ̂_f(t_{n,j})|. (26)

According to Theorem 4, the accuracy of the numerical quadrature rule of NMI depends on the accuracy of the numerical measure and on the character of the measure function μ_f. A necessary condition for NMI to achieve better accuracy is that the measure function μ_f possesses better properties. For bounded oscillatory functions, the integral of the exact measure function can be approximated more accurately owing to its monotonicity. As for singular functions, it is verified in the following that NMI can decrease the degree of the singularity of the integral.
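
To make the error split in (26) concrete, the following sketch (ours; all names are our own) assembles the NMI rule (25) from an arbitrary numerical measure. Plugging in a crude sampling estimate shows that the overall error is then dominated by the error of the numerical measure rather than by the quadrature of the exact measure.

    import numpy as np

    def measure_bruteforce(f, a, b, y, n_samples=200_000):
        """Crude numerical measure: (b - a) * fraction of sample points with f(x) >= y."""
        x = np.linspace(a, b, n_samples)
        return (b - a) * np.mean(f(x) >= y)

    def nmi_rule(f, a, b, y0, yN, mu_hat, npts=5):
        """NMI quadrature in the spirit of (25): y0*(b - a) + sum_j w_j * mu_hat(t_j),
        with an npts-point Gauss-Legendre rule on [y0, yN]."""
        nodes, weights = np.polynomial.legendre.leggauss(npts)
        t = 0.5 * (yN - y0) * nodes + 0.5 * (y0 + yN)
        w = 0.5 * (yN - y0) * weights
        return y0 * (b - a) + float(np.dot(w, [mu_hat(f, a, b, ti) for ti in t]))

    if __name__ == "__main__":
        exact = 5.0 * np.log(5.0) - 4.0     # integral of ln x over [1, 5]
        approx = nmi_rule(np.log, 1.0, 5.0, 0.0, np.log(5.0), measure_bruteforce)
        print(abs(approx - exact))          # dominated by the measure error, not the 5-point rule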

When y_N is infinite, by making the change of variable z = 1/y in the fundamental equation (1) of NMI, it can be rewritten as

∫_I f(x) dx = y_0(b − a) + ∫_0^{1/y_0} (1/z²) μ_f(1/z) dz, (27)

where y_0 is supposed to satisfy y_0 ⩾ ϵ > 0.

Denote g(x): = (1/x2)μf(1/x), x ∈ (0, 1/y0]. Before we prove that g(x) is less singular than f(x), a lemma should be stated.

Lemma 5. Let f(x) ∈ C(I), h(t) := f(x(t)), and x(t) := kt + c, where k > 0 and c ∈ ℝ. Let μ_f and μ_h be the measure functions corresponding to f and h, respectively. Then μ_f(y) = kμ_h(y) for all y ∈ R(f), where R(f) is the range of f.

Proof. Since k > 0, h has the same monotonicity as f. When f is monotonically decreasing, μ_f(y) = f^{−1}(y) − a and μ_h(y) = h^{−1}(y) − (a − c)/k. We obtain

kμ_h(y) = k h^{−1}(y) − (a − c). (28)

Substituting the inverse function h^{−1}(y) = (f^{−1}(y) − c)/k of h into (28), we have

kμ_h(y) = f^{−1}(y) − c − (a − c) = f^{−1}(y) − a = μ_f(y). (29)

When f is monotonically increasing, μf(y) = bf−1(y) and μh(y) = (bc)/kh−1(y), respectively. It can be obtained similarly that μh(y) = μf(y)/k.

For any f ∈ C(I), there is a partition {I_i : i ∈ ℕ} of I such that f is monotonic in each subinterval. Define f_i := f|_{I_i} and denote by μ_{f_i} its measure function, i ∈ ℕ. Correspondingly, define h_i(t) := f_i(x(t)) and denote by μ_{h_i} its measure function, i ∈ ℕ.

Then it is obvious that

μ_f(y) = ∑_i μ_{f_i}(y),   μ_h(y) = ∑_i μ_{h_i}(y). (30)

According to the preceding proof,

μ_{f_i}(y) = k μ_{h_i}(y),  ∀i, (31)
since fi, ∀i, is monotonic in its domain. Combining (30) and (31), we finally arrive at the equation μf(y) = kμh(y).
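
A quick numerical check of Lemma 5 (ours), using a crude sampling measure: for f = ln x on [1, 5] and the change of variable x(t) = 2t + 1, so that k = 2, c = 1, and h(t) = ln(2t + 1) on [0, 2], the two measures should satisfy μ_f(y) ≈ 2μ_h(y).

    import numpy as np

    def measure_bruteforce(f, a, b, y, n_samples=400_000):
        x = np.linspace(a, b, n_samples)
        return (b - a) * np.mean(f(x) >= y)

    f = np.log                               # on [1, 5]
    h = lambda t: np.log(2.0 * t + 1.0)      # h(t) = f(2t + 1) on [0, 2], i.e. k = 2, c = 1
    for y in (0.5, 1.0, 1.5):
        mu_f = measure_bruteforce(f, 1.0, 5.0, y)
        mu_h = measure_bruteforce(h, 0.0, 2.0, y)
        print(y, mu_f, 2.0 * mu_h)           # the last two columns agree up to sampling error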

Let S be a subset of I = [a, b] containing a finite number of points. Define the distance function associated with S by

δ_S(x) := min_{s∈S} |x − s|. (32)

For 0 < α < 1, a real unbounded function f is said to be of Type(α, I, S) if

|f(x)| ⩽ C (δ_S(x))^{−α},  ∀x ∈ I∖S, (33)

where C is a positive constant. The parameter α is called the index of singularity.

Theorem 6. Let f ∈ Type(α, I, S) and f(x) ⩾ ϵ > 0 for x ∈ I. Let g(x) = (1/x²)μ_f(1/x), x ∈ (0, 1/y_0], where μ_f is the measure function of f and y_0 = min_{x∈I} f(x) ⩾ ϵ > 0. Then g(x) ∈ Type((2α − 1)/α, (0, 1/y_0], {0}).

Proof. Assume first that S = {a} or S = {b}. By making the change of variable x(t) = (b − a)t + a for S = {a} or x(t) = b − (b − a)t for S = {b}, we define h(t) := f(x(t)), t ∈ (0, 1]. Then h ∈ Type(α, (0, 1], {0}).

Since h(t) is singular at t = 0, there exists t_0 ∈ (0, 1] such that h(t) is monotonic in the interval (0, t_0]. That means that μ_h(y) = |h^{−1}(y)| for all y ⩾ h(t_0). According to the property (33) of h, we can obtain

y = h(h^{−1}(y)) ⩽ C (h^{−1}(y))^{−α}. (34)

The resulting inequality reveals that μ_h(y) ⩽ C^{1/α} y^{−1/α}.

By Lemma 5, we have μ_f(y) = (b − a)μ_h(y) ⩽ (b − a)C^{1/α} y^{−1/α}.

Then, combining the inequality for μ_f and the expression for g, we can obtain 0 ⩽ g(x) ⩽ (b − a)C^{1/α} x^{−(2α−1)/α}, ∀x ∈ (0, t_0].

Since g(x) is continuous on [t_0, 1/y_0], there exists a positive constant C such that, ∀x ∈ (0, 1/y_0],

0 ⩽ g(x) ⩽ C x^{−(2α−1)/α}. (35)
According to the definition (33), we have g(x)∈ Type((2α − 1)/α,(0, 1/y0],{0}).

More generally, S may contain more than one point. Assume S = {a = s_1 < s_2 < ⋯ < s_m = b}. Set

t_{2j} := s_{j+1},  j = 0, 1, …, m − 1,   t_{2j+1} := (s_{j+1} + s_{j+2})/2,  j = 0, 1, …, m − 2. (36)

In each interval (t_i, t_{i+1}), the function f(x) has only one singularity; namely, a singularity is located at t_i if i is even and at t_{i+1} if i is odd.

Define f_i := f|_{(t_{i−1}, t_i)}, denote by μ_{f_i} its measure function, and set g_i(x) := (1/x²)μ_{f_i}(1/x), i = 1, 2, …, 2m − 2. Then we have

g(x) = ∑_{i=1}^{2m−2} g_i(x). (37)

By the preceding proof, it can be obtained that 0 ⩽ g_i(x) < C_i x^{−(2α−1)/α}, ∀x ∈ (0, 1/y_{0i}], where y_{0i} := inf_{x∈(t_{i−1}, t_i)} f(x) and C_i is a positive constant, i = 1, 2, …, 2m − 2. Since g_i(x) is bounded for x ∈ (1/y_{0i}, 1/y_0], the bound 0 ⩽ g_i(x) < C_i x^{−(2α−1)/α} remains valid for x ∈ (0, 1/y_0] after possibly enlarging C_i. Combining (37), we prove that g(x) ∈ Type((2α − 1)/α, (0, 1/y_0], {0}).

Since (2α − 1)/α < α for all α ∈ (0, 1), NMI can reduce the singularity of singular integrals. This offers the possibility of improving the accuracy of such integrals and, in theory, the locations of the singularities need not be known.
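
To illustrate Theorem 6 numerically, take the hypothetical example (ours, not from the paper) f(x) = x^{−1/2} + 1 on (0, 1], which is of Type(1/2, (0, 1], {0}) with y_0 = 2 and exact integral 3. Its measure function μ_f(y) = (y − 1)^{−2} for y ⩾ 2 can be written down by hand, and the transformed integrand g(z) = (1/z²)μ_f(1/z) = 1/(1 − z)² is bounded on (0, 1/2], so the transformed integral is easy for Gauss-Legendre while the original one is not.

    import numpy as np

    def gl_integral(func, c, d, npts):
        """npts-point Gauss-Legendre approximation of the integral of func over [c, d]."""
        nodes, weights = np.polynomial.legendre.leggauss(npts)
        t = 0.5 * (d - c) * nodes + 0.5 * (c + d)
        return 0.5 * (d - c) * float(np.dot(weights, func(t)))

    # Illustrative singular integrand (ours): f(x) = x^(-1/2) + 1 on (0, 1], y0 = 2, exact integral 3.
    f = lambda x: x ** -0.5 + 1.0
    mu = lambda y: (y - 1.0) ** -2.0             # hand-derived measure function for y >= 2
    g = lambda z: mu(1.0 / z) / z ** 2           # transformed integrand; equals 1/(1 - z)^2

    exact = 3.0
    for npts in (4, 8, 16):
        direct = gl_integral(f, 0.0, 1.0, npts)                     # rule applied across the singularity
        via_nmi = 2.0 * 1.0 + gl_integral(g, 0.0, 0.5, npts)        # y0*(b - a) + integral of g over (0, 1/y0]
        print(npts, abs(direct - exact), abs(via_nmi - exact))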

Whether NMI actually achieves this better accuracy depends on the accuracy of the numerical measure. In the following part, we present some numerical results for integrals calculated by NMI and compare them with results obtained by conventional methods.

5. Numerical Examples of NMI

In this section, we present three kinds of integrals, namely normal, oscillatory, and singular integrals, to verify the efficiency of NMI. In the numerical experiments, the Gauss-Legendre formula or its composite version is used to calculate the integral of the measure function.

Example 1. Consider the integration ∫_1^5 ln x dx.

We already know from Section 3 that y_0 = 0, y_N = ln 5, and μ(y) = 5 − e^y, y_0 ⩽ y ⩽ y_N.

In this numerical example, we adopt the 5-point Gauss-Legendre formula to calculate both the original integral and the integral of the measure function. The errors of the original integral and of the integral of the exact measure function are 3.4407e − 005 and 1.6778e − 010, respectively. This shows clearly that the accuracy of the numerical integral can be greatly improved by the NMI transform for this example. Numerical results for the errors of the integration by NMI are shown in Table 3. The measure is estimated by (14) and (15) with the values n and k. As illustrated in Table 3, the high accuracy of NMI requires an accurate measure function.

Table 3. Errors of integration by NMI.
n Errors (k = 1) Errors (k = 3) Errors (k = 5)
8 3.0958e − 001 1.9666e − 005 3.4806e − 008
16 9.9805e − 002 1.7423e − 005 2.5702e − 009
32 7.5973e − 002 4.3251e − 006 2.3129e − 010
64 6.4057e − 002 5.6059e − 007 1.6858e − 010
128 2.8068e − 002 2.2432e − 008 1.6776e − 010
256 1.3053e − 002 3.5017e − 009 1.6779e − 010
512 7.9865e − 003 5.2533e − 010 1.6778e − 010
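
The Gauss-Legendre comparison described before Table 3 can be reproduced in a few lines (a sketch, ours); the same 5-point rule is applied once to the original integrand and once to the exact measure function.

    import numpy as np

    def gl_integral(func, c, d, npts=5):
        """npts-point Gauss-Legendre approximation of the integral of func over [c, d]."""
        nodes, weights = np.polynomial.legendre.leggauss(npts)
        t = 0.5 * (d - c) * nodes + 0.5 * (c + d)
        return 0.5 * (d - c) * float(np.dot(weights, func(t)))

    exact = 5.0 * np.log(5.0) - 4.0                                        # integral of ln x over [1, 5]
    direct = gl_integral(np.log, 1.0, 5.0)                                 # rule on the original integrand
    via_measure = gl_integral(lambda y: 5.0 - np.exp(y), 0.0, np.log(5.0)) # rule on the exact measure
    print(abs(direct - exact), abs(via_measure - exact))                   # about 3.4e-05 and 1.7e-10, as reported above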

Example 2. Consider the integration of an oscillatory function.

In this example, y_0 = 0, y_N = 1, and μ(y) = π/3 − (2 arcsin(y))/3, y_0 ⩽ y ⩽ y_N. Since the measure function is weakly singular at y = 1, we adopt the Gauss-type quadrature for weakly singular integrals proposed by [2]. According to the quadrature rule, we set the parameters k = 4 and q = 6. In detail, set N = n/k, where n is the number of evaluations of the function shown in Table 4. Then choose (N + 1) points t_j = 1 − (j/N)^q, j = 0, 1, …, N, so that the subintervals between consecutive points form a partition of [0, 1] graded toward the singularity. Then transform the partition into the domain of the integral and use the k-point Gauss-Legendre formula on each subinterval to calculate the integral. With this quadrature, the errors of the integral with the exact measure function are shown in Table 4, in the column denoted by GL. To show the efficiency of the proposed method, we compare the results with those obtained by the composite Simpson rule with n + 1 function evaluations, denoted by CS in Table 4. It is obvious that the accuracy can be improved under the correct measure function.

Table 4. Errors of integration.
n CS GL
8 1.5199e − 003 5.0402e − 004
16 8.9723e − 005 1.1475e − 005
32 5.5303e − 006 9.1145e − 008
64 3.4446e − 007 4.8650e − 010
128 2.1510e − 008 2.1969e − 012
256 1.3441e − 009 8.9928e − 015
512 8.4001e − 011 1.1102e − 016
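
A sketch (ours) of the graded composite Gauss rule described before Table 4, assuming mesh points t_j = 1 − (j/N)^q (our reading of the construction): N subintervals clustered at y = 1, each handled by a k-point Gauss-Legendre rule, applied here to the exact measure function μ(y) = π/3 − 2 arcsin(y)/3.

    import numpy as np

    def graded_composite_gauss(func, q, N, k):
        """Integrate func over [0, 1] on N subintervals with endpoints t_j = 1 - (j/N)**q,
        which cluster at the weak singularity y = 1; use k-point Gauss-Legendre per subinterval."""
        nodes, weights = np.polynomial.legendre.leggauss(k)
        edges = np.sort(1.0 - (np.arange(N + 1) / N) ** q)          # increasing partition of [0, 1]
        total = 0.0
        for c, d in zip(edges[:-1], edges[1:]):
            y = 0.5 * (d - c) * nodes + 0.5 * (c + d)
            total += 0.5 * (d - c) * float(np.dot(weights, func(y)))
        return total

    if __name__ == "__main__":
        mu = lambda y: np.pi / 3.0 - 2.0 * np.arcsin(y) / 3.0       # measure function of Example 2
        exact = 2.0 / 3.0                                            # exact integral of mu over [0, 1]
        for n in (8, 16, 32, 64):
            print(n, abs(graded_composite_gauss(mu, q=6.0, N=n // 4, k=4) - exact))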

Errors of the integration by NMI combined with GL are shown in Table 5, denoted by GLNMI. We set the number of function evaluations to 32 for the integral of the measure function by the Gauss-type quadrature. The measure is estimated by (14) and (15) with the values n and k. The accuracy of NMI is strongly restricted by the accuracy of the measure.

Table 5. Errors of integration by GLNMI.
n Errors (k = 1) Errors (k = 3) Errors (k = 5)
20 1.4307e − 001 3.1465e − 002 2.7730e − 002
40 4.5358e − 002 7.8500e − 003 1.1460e − 002
80 3.9992e − 003 1.2441e − 003 1.8890e − 003
160 1.5046e − 002 1.4846e − 004 1.8805e − 004
320 3.9658e − 003 2.6296e − 005 3.4863e − 005
640 4.1803e − 003 2.1141e − 006 7.1482e − 006
1280 2.9805e − 004 4.0182e − 007 2.3580e − 007

Example 3. Consider the integration of a singular function.

Since the integral and the measure integral are both weakly singular, we adopt the Gauss-type quadrature used in Example 2 and set the parameters k = 2 and q = 50/3. The split-half algorithm (Algorithm 3) is used to approximate the measure function with N = 100 in this example. The numerical results are presented in Table 6, where GL represents using only the Gauss-type quadrature, while GLNMI represents combining NMI with the Gauss-type quadrature. It is illustrated that the accuracy can be improved by NMI.

Table 6. Errors of integration.
n GL GLNMI
8 7.5026e − 001 2.1705e − 001
16 2.1096e − 001 3.9276e − 002
32 2.7858e − 002 4.0278e − 003
64 2.4575e − 003 3.1686e − 004
128 1.8065e − 004 2.2109e − 005
256 1.2212e − 005 1.4597e − 006
512 7.9320e − 007 9.5708e − 008

6. Conclusion

In this paper, we mainly discuss how to estimate the measure of a function and the convergence of the numerical measure. Further, we analyze the error of the numerical measure integral and verify that the treatment of singular integrals can be improved theoretically. Accordingly, some numerical examples are given to validate the theoretical results.

To calculate an integral by NMI, it is very important to estimate the measure as accurately as possible. The method presented here is very effective for functions which have piecewise differentiable inverse functions.

The study of NMI is still new. Investigations should be extended to multidimensional measures and integrations. The NMI method may be even better suited to multidimensional problems because it provides an exact reduction of multidimensional integrals to one-dimensional integrals. More research can be done in this field.

Acknowledgment

The authors are very grateful to Professor Tian-xiao He for his kind help and valuable recommendations. This work is partially supported by National Natural Science Foundation of China under Projects 11271370 and 61101183 and CSC Scholarship.
