Volume 2013, Issue 1 718627
Research Article
Open Access

Approximate Solutions of Hybrid Stochastic Pantograph Equations with Lévy Jumps

Wei Mao (Corresponding Author)
School of Mathematics and Information Technology, Jiangsu Second Normal University, Nanjing 210013, China

Xuerong Mao
Department of Statistics and Modelling Science, University of Strathclyde, Glasgow G1 1XH, UK
First published: 12 June 2013
Academic Editor: Svatoslav Staněk

Abstract

We investigate a class of stochastic pantograph differential equations with Markovian switching and Lévy jumps. We prove that the approximate solutions converge to the true solutions in the L2 sense as well as in probability under the local Lipschitz condition, and we generalize the results obtained by Fan et al. (2007), Milošević and Jovanović (2011), and Marion et al. (2002) to cover a class of more general stochastic pantograph differential equations with jumps. Finally, an illustrative example is given to demonstrate the established theory.

1. Introduction

Stochastic delay differential equations (SDDEs) have come to play an important role in many branches of science and industry. Such models have been used with great success in a variety of application areas, including biology, epidemiology, mechanics, economics, and finance. As with SDEs, an explicit solution can rarely be obtained for SDDEs, so it is necessary to develop numerical methods and to study their properties. There are many results on the numerical solutions of SDDEs [1–12].

Recently, as a special case of SDDEs, a class of stochastic pantograph delay equations (SPEs) has received a great deal of attention, and various studies have been carried out on the convergence of numerical methods for SPEs [13–16]. However, all of the above-mentioned works consider equations driven by white noise with continuous initial data, and white noise perturbations are not always appropriate for interpreting real data in a reasonable way. In real phenomena, the state of a stochastic pantograph delay system may be perturbed by abrupt pulses or extreme events. A more natural mathematical framework for such phenomena takes into account perturbations other than purely Brownian ones. In particular, we incorporate Lévy jump perturbations into the stochastic pantograph delay system to model abrupt changes.

The study of the convergence of numerical solutions to SDDEs with jumps is in its infancy [17–20], and there has been no work on numerical solutions to SPEs with Markovian switching and Lévy jumps (SPEwMsLJs). In this paper, we study the strong convergence of the Euler method for a class of SPEwMsLJs, which may be regarded as an extension of both SPEs with Markovian switching and SPEs with Lévy jumps. The main aim is to prove that the Euler approximate solutions converge to the true solutions of SPEwMsLJs in the L2 sense. On the other hand, we study the convergence in probability of the Euler approximate solutions to the true solutions under the local Lipschitz condition and some additional conditions in terms of Lyapunov-type functions. It should be pointed out that the proof for SPEwMsLJs is certainly not a straightforward generalization of that for SPEs and SPEwMs without Lévy jumps. Although the analysis follows the ideas of [21], we need to develop several new techniques to deal with the Lévy jumps. Some known results in Fan et al. [14], Milošević and Jovanović [16], and Marion et al. [21] are generalized to cover a class of more general SPEwMsLJs.

The paper is organized as follows. In Section 2, we introduce some notations and hypotheses concerning (4), and the Euler method is used to produce numerical solutions. In Section 3, we establish some useful lemmas and prove that the approximate solutions converge to the true solutions of SPEwMsLJs in the L2 sense. By applying Theorem 4, we study the convergence in probability of the approximate solutions to the true solutions in Section 4. Finally, we give an illustrative example in Section 5.

2. Preliminaries and the Approximate Solution

Let (Ω, ℱ, P) be a complete probability space with a filtration (ℱt)t≥0 satisfying the usual conditions; that is, the filtration is right continuous and ℱ0 contains all P-null sets. Let {W(t), t ≥ 0} be a d-dimensional Wiener process defined on (Ω, ℱ, P) and adapted to the filtration (ℱt)t≥0. Let D([0, T], Rn) denote the family of functions f from [0, T] to Rn that are right continuous and have left limits; D([0, T], Rn) is equipped with the norm ∥f∥ = sup 0≤t≤T |f(t)|, where |·| is the Euclidean norm in Rn, that is, |x| = (xTx)1/2. Let T > 0 and P ≥ 2, and let ℒP([0, T]; Rn) denote the family of all Rn-valued measurable ℱt-adapted processes f = {f(t)} 0≤t≤T such that E sup 0≤t≤T |f(t)|P < ∞. Let (Rn, ℬ(Rn)) be a measurable space and π(du) a σ-finite measure on it. Let p = p(t), t ∈ Dp, be a stationary ℱt-Poisson point process on Rn with characteristic measure π. Denote by N(dt, du) the Poisson counting measure associated with p; that is,
()
We refer to Mao [3] for the properties of a Wiener process and SDDEs and to Ikeda and Watanabe [22] for the details on Poisson point process.
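For reference, the counting measure admits the standard representation below; this display follows the convention of Ikeda and Watanabe [22] and is a reconstruction, not the authors' original typesetting:
\[
N\bigl((0,t]\times A\bigr)=\sum_{s\in D_p,\;0<s\le t} I_A\bigl(p(s)\bigr),\qquad t>0,\ A\in\mathcal{B}(R^n).
\]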
Let r(t), t ≥ 0, be a right-continuous Markov chain on the probability space (Ω, ℱ, P) taking values in a finite state space S = {1, 2, …, N} with generator Γ = (γij)N×N given by
()
where Δ > 0. Here γij ≥ 0 is the transition rate from i to j if i ≠ j, while
()
We assume that the Markov chain r(·) is independent of the Brownian motion W(·) and the compensated Poisson random measure Ñ. It is known that almost every sample path of r(·) is a right-continuous step function with a finite number of simple jumps in any finite subinterval of R+.
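In the standard formulation (see Mao and Yuan [24]), the generator determines the transition probabilities of r(·) as follows; this is a reconstruction of the missing displays rather than the authors' exact formulas:
\[
P\{r(t+\Delta)=j \mid r(t)=i\}=
\begin{cases}
\gamma_{ij}\,\Delta+o(\Delta), & i\ne j,\\
1+\gamma_{ii}\,\Delta+o(\Delta), & i=j,
\end{cases}
\qquad
\gamma_{ii}=-\sum_{j\ne i}\gamma_{ij}.
\]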
In this paper, we study the following hybrid stochastic pantograph equation with Lévy jumps:
()
where 0 < q < 1 and
()
W(t) is a standard m-dimensional Brownian motion, and Ñ(dt, du) is the compensated Poisson random measure given by
()
Here π(du) is the Lévy measure associated with N.
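Since the display labelled (4) is not reproduced above, we note that a hybrid stochastic pantograph equation with Lévy jumps of the type considered here is usually written in the form below; the coefficient names f, g, and h are our assumption (chosen to be consistent with the operator LV defined next), not a quotation of the paper:
\[
dX(t)=f\bigl(X(t),X(qt),r(t)\bigr)\,dt+g\bigl(X(t),X(qt),r(t)\bigr)\,dW(t)+\int_{R^n}h\bigl(X(t),X(qt),r(t),u\bigr)\,\widetilde N(dt,du),
\]
with 0 < q < 1 and \(\widetilde N(dt,du)=N(dt,du)-\pi(du)\,dt\).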
Let C2(Rn × S, R+) denote the family of all nonnegative functions V(x, i) on Rn × S that are twice continuously differentiable in x. For each V ∈ C2(Rn × S, R+), define an operator LV from Rn × Rn × S to R by
()
where
()
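For jump diffusions with Markovian switching, the operator LV is commonly given by the expression below; this is again a reconstruction under the assumed coefficient names, not the authors' display:
\[
LV(x,y,i)=V_x(x,i)f(x,y,i)+\tfrac12\operatorname{trace}\bigl[g^{T}(x,y,i)V_{xx}(x,i)g(x,y,i)\bigr]+\sum_{j=1}^{N}\gamma_{ij}V(x,j)
\]
\[
\phantom{LV(x,y,i)=}+\int_{R^n}\bigl[V\bigl(x+h(x,y,i,u),i\bigr)-V(x,i)-V_x(x,i)h(x,y,i,u)\bigr]\pi(du),
\]
where \(V_x\) and \(V_{xx}\) denote the gradient and Hessian of \(V(\cdot,i)\).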

In order to define the Euler approximate solution of (4), we need a property of the embedded discrete Markov chain, which the following lemma [23] describes.

Lemma 1. For h > 0, the sequence {r(nh), n ≥ 0} is a discrete Markov chain with the one-step transition probability matrix

()

Given a step size h > 0, the discrete Markov chain {r(nh), n ≥ 0} can be simulated as follows (see Mao and Yuan [24]). Let r(0) = i0 and generate a random number ζ1 uniformly distributed in [0,1]. If ζ1 = 1, then let r(h) = N; otherwise find the unique integer i1 ∈ S for which
()
and let r(h) = i1, where we adopt the usual convention that an empty sum is zero. Generate independently a new random number ζ2, again uniformly distributed in [0,1]. If ζ2 = 1, then let r(2h) = N; otherwise find the unique integer i2 ∈ S for which
()
and let r(2h) = i2. Repeating this procedure, a trajectory of {r(nh), n ≥ 0} can be generated, and the procedure can be carried out independently to obtain further trajectories.
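As a concrete illustration of this procedure, the following Python sketch simulates the discrete chain from the one-step matrix P(h) = ehΓ of Lemma 1. The function name, the 0-based state indexing, and the use of NumPy/SciPy are our choices for illustration and do not come from the paper.

import numpy as np
from scipy.linalg import expm

def simulate_markov_chain(Gamma, h, n_steps, i0, rng=None):
    # Simulate the embedded discrete Markov chain r(nh), n = 0, ..., n_steps,
    # from the generator matrix Gamma (rows summing to zero). States are 0-based.
    rng = np.random.default_rng() if rng is None else rng
    P = expm(h * Gamma)                  # one-step transition matrix P(h) = e^{h*Gamma}
    states = np.empty(n_steps + 1, dtype=int)
    states[0] = i0
    for k in range(n_steps):
        zeta = rng.uniform()             # uniform random number on [0, 1)
        cdf = np.cumsum(P[states[k]])    # cumulative transition probabilities from the current state
        states[k + 1] = int(np.searchsorted(cdf, zeta, side="right"))
    return states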
Now we define the Euler approximate solution to (4) based on the discrete Markov chain {r(nh), n ≥ 0}. For system (4), the discrete approximation is given by the iterative scheme
()
with initial value Y0 = X(0), where [u] represents the integer part of u. Here tn = nh for n ≥ 0, Yn ≈ X(tn), Y[qn] ≈ X(qtn), ΔWn = W(tn+1) − W(tn), and the jump increment is that of the compensated Poisson random measure Ñ over (tn, tn+1].
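Since the displayed scheme is not reproduced above, the following single-path Python sketch shows one standard way such a scheme is implemented. It assumes a finite Lévy measure π(du) = λ·(density)(u)du, so that the jump part is compound Poisson, and it uses the coefficient names f, g, jump from the assumed form of (4); it is illustrative only, not the authors' exact scheme.

import numpy as np

def euler_spe_jumps(f, g, jump, comp, q, h, n_steps, x0, r_path, lam, sample_mark, rng=None):
    # One-path Euler scheme for a scalar pantograph equation with switching and jumps (sketch).
    # comp(y, yq, r) should return the integral of jump(y, yq, r, u) against the Levy measure pi(du),
    # so that h * comp(...) is the compensator of the jump sum over one step.
    rng = np.random.default_rng() if rng is None else rng
    Y = np.empty(n_steps + 1)
    Y[0] = x0
    for n in range(n_steps):
        yq = Y[int(q * n)]                         # delayed value Y_[qn], approximating X(q t_n)
        dW = np.sqrt(h) * rng.standard_normal()    # Brownian increment over (t_n, t_{n+1}]
        n_jumps = rng.poisson(lam * h)             # number of jumps on (t_n, t_{n+1}]
        jump_sum = sum(jump(Y[n], yq, r_path[n], sample_mark(rng)) for _ in range(n_jumps))
        d_comp = jump_sum - h * comp(Y[n], yq, r_path[n])   # compensated jump increment
        Y[n + 1] = Y[n] + f(Y[n], yq, r_path[n]) * h + g(Y[n], yq, r_path[n]) * dW + d_comp
    return Y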
Let us introduce the following notations:
()
for t ∈ [tn, tn+1). Then we define the continuous Euler approximate solution as follows:
()
which interpolates the discrete approximation (7).

In order to establish the strong convergence theorem, we suppose the following assumptions are satisfied.

Assumption 2. For each i ∈ S and u ∈ Rn,

()

Assumption 3. For every d ≥ 1, there exists a positive constant Kd such that, for all x1, y1, x2, y2, u ∈ Rn and i ∈ S with |x1| ∨ |y1| ∨ |x2| ∨ |y2| ≤ d,

()
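The missing display is a local Lipschitz condition; in the form usual for such equations (and with the coefficient names assumed above), it would read approximately
\[
|f(x_1,y_1,i)-f(x_2,y_2,i)|^{2}\vee|g(x_1,y_1,i)-g(x_2,y_2,i)|^{2}\vee\int_{R^n}|h(x_1,y_1,i,u)-h(x_2,y_2,i,u)|^{2}\,\pi(du)\le K_d\bigl(|x_1-x_2|^{2}+|y_1-y_2|^{2}\bigr).
\]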

3. Strong Convergence of Numerical Solutions

In this section, we prove that the Euler approximate solutions converge to the true solutions in the L2 sense under the local Lipschitz condition.

Theorem 4. If Assumptions 2 and 3 hold, then the Euler approximate solutions converge to the true solutions of (4) in the L2 sense with order 1/2; that is, there exists a positive constant Cd such that

()
where θd = inf{t ∈ [0, T] : |X(t)| ≥ d}, ρd = inf{t ∈ [0, T] : |Y(t)| ≥ d}, and τd = θd ∧ ρd.
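Consistent with the order-1/2 claim, the missing estimate is presumably of the form
\[
E\Bigl[\sup_{0\le t\le T}\bigl|X(t\wedge\tau_d)-Y(t\wedge\tau_d)\bigr|^{2}\Bigr]\le C_d\,h ,
\]
although the exact display is not reproduced above.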

The proof of Theorem 4 is very technical, so we present some useful lemmas.

Lemma 5. Under Assumptions 2 and 3, for any t ∈ [0, T] and P ≥ 2, one has

()
where C1(d) is a positive constant independent of the step size h.

Proof. For any t ∈ [0, T ∧ τd], there exists an integer n such that t ∈ [nh, (n + 1)h). Then

()
Using the elementary inequality |a + b + c|P ≤ 3P−1(|a|P + |b|P + |c|P), we get
()
By the Hölder inequality and Assumptions 2 and 3, we have
()
By the definition of τd, we have |Y(t)| < d for t ∈ [0, τd ∧ T], so we get that |Z1(t)|P < dP, |Z2(t)|P < dP, and
()
By using the Burkholder-Davis-Gundy inequality and Assumptions 2 and 3, we have
()
()
Combining (22)–(24), we have
()
where C1(d) denotes the resulting constant. The proof is complete.

Lemma 6. Under Assumptions 2 and 3, for any t ∈ [0, T] and P ≥ 2, one has

()
where C2(d) is a positive constant independent of the step size h.

The proof of this lemma is similar to that of Lemma 5.

Lemma 7. Under Assumptions 2 and 3, for any t ∈ [0, T] and P ≥ 2, one has

()
where C3(d) is a positive constant independent of the step size h.

The proof of this lemma is similar to those in [16, 24].

Proof of Theorem 4. Combining (4) and (14), one has

()
Then, applying the generalized Itô formula, we can show that
()
Hence, for any t ∈ [0, T], we get
()
By Assumption 3 and Lemmas 5–7, we have
()
Similarly, by Assumption 3 and Lemmas 5–7, we obtain
()
()
On the other hand, by the Burkholder-Davis-Gundy inequality, Young's inequality, and Lemmas 5–7, we have, for any ɛ1 > 0,
()
We have for any ɛ2 > 0
()
where C is some constant that may change from line to line. Similarly, we have
()
Substituting (31)–(36) into (30), we obtain that
()
By choosing ɛ1 > 0, ɛ2 > 0 sufficiently small and letting 6ɛ1 + Cɛ2 = 2/3, we have
()
Therefore, we apply Gronwall's inequality to get
()
This completes the proof.

Remark 8. Under the local Lipschitz condition, Theorem 4 establishes not only the strong convergence of the approximate solutions to the true solutions but also, by (39), that the convergence is of order 1/2.

Remark 9. When r(t) ≡ 0 or h ≡ 0, (4) reduces to the equation studied by Fan et al. [14], Xiao and Zhang [15], and Milošević and Jovanović [16]. Our results in the present paper generalize and improve the results in [14–16].

4. Convergence of Numerical Solutions in Probability

In this section, by applying Theorem 4, we show the convergence in probability of the approximate solutions to the true solutions under the local Lipschitz condition. Before we give the convergence theorem, we need some additional conditions based on Lyapunov-type functions.

Assumption 10. For x ∈ Rn and i ∈ S, there exist a positive function V ∈ C2(Rn × S; R+), a constant K > 0, and two constants λ1 > λ2 ≥ 0 such that

()
()

Assumption 11. There exists a positive constant Ld such that, for all x, y ∈ Rn and i ∈ S with |x| ∨ |y| ≤ d,

()

Now, let us state our convergence theorem.

Theorem 12. Let the assumptions of Theorem 4 hold. Also assume that there exists a C2 function V : Rn × S → R+ satisfying (40)–(42). Then the Euler approximate solutions converge to the true solutions of (4) in probability.

That is,

()
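In the usual formulation of convergence in probability, the missing display would assert that, for every δ > 0,
\[
\lim_{h\to 0} P\Bigl(\sup_{0\le t\le T}\bigl|X(t)-Y(t)\bigr|^{2}\ge \delta\Bigr)=0 ,
\]
though the authors' exact statement is not reproduced above.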

Proof. The proof is rather technical, and we divide it into three steps.

Step 1. We assume the existence of a nonnegative Lyapunov function V(x, i) satisfying (40). Applying the Itô formula to V(X(t), r(t)) yields

()
Integrating from 0 to t ∧ θd and taking expectations gives
()
By (41), we have
()
Thus, for any t1 ∈ [0, T], it follows that
()
Using the Gronwall inequality, we obtain that
()
Let 𝒱d = inf{V(x, i) : |x| ≥ d}. By (40), we have lim d→∞ 𝒱d = ∞. Noting that |X(θd)| = d when θd < T, we derive from (48) that
()
That is,
()
Recall that 𝒱d → ∞ as d → ∞. For a given T, X0, and r(0), it follows that
()
as d → ∞. Let
()
Thus we have
()

Step 2. We now estimate P(ρd < T). By (14), applying the Itô formula to V(Y(t), r(t)) yields

()
By (7), we have
()
Integrating from 0 to ρd ∧ t, taking expectations, and using (41), we have
()
where
()
By (42) and Young’s inequality, we have
()
Let N = [T/h], the integer part of T/h, and let IG be the indicator function of the set G. Then
()
By setting V1 = sup{V(x, i) : |x| ≤ R, i ∈ S} and using the Markov property, we have
()
Inserting (60) into (58), we have, by Lemma 5,
()
Similarly, by Assumptions 2, 3, and 11, Lemma 5, and the Markov property, we have
()
where V2 = sup{|Vx(x, i)| : |x| ≤ R, i ∈ S}. For the third term in (56), by Assumptions 2, 3, and 11 and Lemma 5, we have
()
where V3 = sup{|Vxx(x, i)| : |x| ≤ R, i ∈ S}. For the fourth term in (56), by Assumptions 2, 3, and 11 and Lemma 5, we have
()
where
()
Inserting (65) into (64), we have
()
For the fifth term, by Assumption 11, (56), and Lemma 5, we have
()
Combining these inequalities and (53), we derive that, for t1 ∈ [0, T],
()
where
()
For arbitrary 0 ≤ t1 ≤ T, by the Gronwall inequality, we get
()
Noting that |Y(ρd)| = d when ρd < T, we derive from (70) that
()
So we have
()
Step 3. Let ϵ, δ ∈ (0, 1) be arbitrarily small. By setting
()
and using Theorem 4, we have
()
Combining (53) and (72), one gets
()
Hence, using (74), we conclude that
()
Recalling that 𝒱d → ∞ as d → ∞, we can choose d sufficiently large for
()
and then choose h sufficiently small for
()
to obtain
()
The proof of Theorem 12 is now complete.

5. An Example

In this section, we construct an example to demonstrate the effectiveness of the theory. Let r(t) be a right-continuous Markov chain taking values in S = {1, 2} with the generator
()
Let Ñ(dt, du) be a compensated Poisson random measure whose compensator is given by π(du)dt = λf(u)du dt, where λ = 2 and
()
is the density function of a lognormal random variable. Of course, Ñ and r(t) are assumed to be independent. Consider the linear stochastic pantograph delay equation with Markovian switching and pure jumps
()
Here a(1) = −3 and a(2) = −1. Then (82) can be regarded as the result of the two equations
()
switching between each other according to the movement of the Markov chain r(t). Obviously, (82) satisfies Assumptions 2 and 3. Given the step size h, we obtain the Euler method
()
with Y0 = X(0). For t ∈ [tn, tn+1), define the corresponding step processes as in Section 2. Then we define the continuous Euler approximate solution
()
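For illustration, the following Python fragment applies the two sketches from Section 2 to a hypothetical instantiation of this example. The generator matrix, the jump coefficients b(1) and b(2), and the lognormal mark parameters are assumptions made for illustration only and are not taken from (82).

import numpy as np

# Hypothetical data: the generator, b(1), b(2), and the mark distribution are illustrative.
Gamma = np.array([[-1.0, 1.0],
                  [2.0, -2.0]])            # illustrative 2-state generator (not from the paper)
a = {0: -3.0, 1: -1.0}                     # a(1) = -3, a(2) = -1, with states relabelled 0 and 1
b = {0: 0.5, 1: 0.2}                       # hypothetical jump coefficients
lam, q, h, T, x0 = 2.0, 0.5, 1e-3, 1.0, 1.0
n_steps = int(T / h)

rng = np.random.default_rng(0)
r_path = simulate_markov_chain(Gamma, h, n_steps, i0=0, rng=rng)

mean_u = np.exp(0.5)                       # E[U] for a standard lognormal mark (assumed density)
Y = euler_spe_jumps(
    f=lambda x, yq, r: a[r] * x,           # drift a(r(t)) X(t), as in the example
    g=lambda x, yq, r: 0.0,                # pure-jump case: no Brownian part
    jump=lambda x, yq, r, u: b[r] * yq * u,          # assumed jump coefficient b(r) X(qt) u
    comp=lambda x, yq, r: lam * b[r] * yq * mean_u,  # integral of jump(.) against pi(du)
    q=q, h=h, n_steps=n_steps, x0=x0, r_path=r_path,
    lam=lam, sample_mark=lambda rng: rng.lognormal(0.0, 1.0), rng=rng,
)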
Since the conditions of Theorem 4 are satisfied, the approximate solution (85) converges to the true solution of (82) in the L2 sense. To examine convergence in probability, we construct a function
()
Then
()
From the properties of the lognormal distribution, we have
()
()
If 6/(12 − e²) < β2/β1 < 4 − (1/10)e², then Assumptions 10 and 11 are satisfied. Consequently, the approximate solution (85) converges to the true solution of (82) for any t ∈ [0, T] in the sense of Theorem 12.

Acknowledgment

This work was sponsored by the Qing Lan Project of Jiangsu Province (2012) and supported by a grant from Jiangsu Second Normal University (Jsie2011zd04) and by the Jiangsu Government Overseas Study Scholarship.
