Volume 2013, Issue 1, Article ID 752953
Research Article
Open Access

Stationary in Distributions of Numerical Solutions for Stochastic Partial Differential Equations with Markovian Switching

Yi Shen (Corresponding Author)
Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China

Yan Li
Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
College of Science, Huazhong Agricultural University, Wuhan 430079, China

First published: 16 April 2013
Academic Editor: Qi Luo

Abstract

We investigate a class of stochastic partial differential equations (SPDEs) with Markovian switching. By applying the Euler-Maruyama scheme, in both time and space, to the mild solutions, we derive sufficient conditions for the existence and uniqueness of the stationary distributions of the numerical solutions. Finally, an example is given to illustrate the theory.

1. Introduction

The theory of numerical solutions of stochastic partial differential equations (SPDEs) has been well developed by many authors [1–5]. In [2], Debussche considered the error of the Euler scheme for nonlinear stochastic partial differential equations by using Malliavin calculus. Gyöngy and Millet [3] discussed the convergence rate of space-time approximations for stochastic evolution equations. Shardlow [5] investigated numerical methods for the mild solutions of stochastic parabolic PDEs driven by space-time white noise by applying a finite difference approach.

On the other hand, the parameters of SPDEs may experience abrupt changes caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances [6–9], and continuous-time Markov chains have been used to model these parameter jumps. An important class of such models is the SPDE with Markovian switching
\[
dX(t) = [AX(t) + f(X(t), r(t))]\,dt + g(X(t), r(t))\,dW(t). \tag{1}
\]
Here the state vector has two components, X(t) and r(t); the first is normally referred to as the state, while the second is regarded as the mode. In its operation, the system switches from one mode to another in a random way, and the switching among the modes is governed by the Markov chain r(t).

Since only a few SPDEs with Markovian switching have explicit formulae, numerical (approximate) schemes for SPDEs with Markovian switching are becoming more and more popular. In this paper, we study the stationary distribution of numerical solutions of SPDEs with Markovian switching. Bao et al. [10] investigated the stability in distribution of mild solutions to SPDEs. Bao and Yuan [11] discussed the numerical approximation of stationary distributions for SPDEs. For the stationary distribution of numerical solutions of stochastic differential equations in finite-dimensional spaces, Mao et al. [12] utilized the Euler-Maruyama scheme with variable step size to obtain the stationary distribution, and they also proved that the probability measures induced by the numerical solutions converge weakly to the stationary distribution of the true solution. However, since the mild solutions of SPDEs with Markovian switching do not have a stochastic differential, the Itô formula cannot be applied to them directly. Consequently, we generalize the results on stationary distributions of numerical solutions of finite-dimensional stochastic differential equations with Markovian switching to the infinite-dimensional case.

Motivated by [11–13], we show in this paper that the mild solutions of the SPDE with Markovian switching (1) have a unique stationary distribution for sufficiently small step sizes. The paper is organised as follows: in Section 2, we give the necessary notation and define the Euler-Maruyama scheme for mild solutions. In Section 3, we give some lemmas and the main result of this paper. Finally, we give an example to illustrate the theory in Section 4.

2. Statement of the Problem

Throughout this paper, unless otherwise specified, we let (Ω, ℱ, {ℱ_t}_{t≥0}, ℙ) be a complete probability space with a filtration {ℱ_t}_{t≥0} satisfying the usual conditions (i.e., it is increasing and right continuous, and ℱ_0 contains all ℙ-null sets). Let (H, 〈·, ·〉_H, ∥·∥_H) be a real separable Hilbert space and W(t) an H-valued cylindrical Brownian motion (Wiener process) defined on this probability space. Let I_G be the indicator function of a set G. Denote by (ℒ(H), ∥·∥) and (HS(H), ∥·∥_HS) the families of bounded linear operators and Hilbert-Schmidt operators from H into H, respectively. Let r(t), t ≥ 0, be a right-continuous Markov chain on the probability space taking values in a finite state space 𝕊 = {1, 2, …, N} with generator Γ = (γ_ij)_{N×N} given by
\[
\mathbb{P}\{r(t+\delta)=j \mid r(t)=i\} =
\begin{cases}
\gamma_{ij}\delta + o(\delta), & \text{if } i \neq j,\\
1 + \gamma_{ii}\delta + o(\delta), & \text{if } i = j,
\end{cases} \tag{2}
\]
where δ > 0. Here γ_ij > 0 is the transition rate from i to j if i ≠ j, while
\[
\gamma_{ii} = -\sum_{j \neq i} \gamma_{ij}. \tag{3}
\]
We assume that the Markov chain r(·) is independent of the Brownian motion W(·). It is well known that almost every sample path of r(·) is a right-continuous step function with a finite number of simple jumps in any finite subinterval of ℝ₊ := [0, +∞).
Consider the following SPDE with Markovian switching on H:
\[
dX(t) = [AX(t) + f(X(t), r(t))]\,dt + g(X(t), r(t))\,dW(t), \quad t \ge 0, \tag{4}
\]
with initial value X(0) = x ∈ H and r(0) = i ∈ 𝕊. Here f : H × 𝕊 → H and g : H × 𝕊 → HS(H). Throughout the paper, we impose the following assumptions.
  • (A1)

    (A, 𝒟(A)) is a self-adjoint operator on H generating a C₀-semigroup {e^{At}}_{t≥0} such that ∥e^{At}∥ ≤ e^{−αt} for some α > 0. In this case, −A has discrete spectrum 0 < ρ₁ ≤ ρ₂ ≤ ⋯, with lim_{i→∞} ρ_i = ∞, and corresponding eigenbasis {e_i}_{i≥1} of H.

  • (A2)

    Both f and g are globally Lipschitz continuous. That is, there exists a constant L > 0 such that

    \[
    \|f(x,i)-f(y,i)\|_H \vee \|g(x,i)-g(y,i)\|_{HS} \le L\|x-y\|_H, \quad x, y \in H,\ i \in \mathbb{S}. \tag{5}
    \]

  • (A3)

    There exist μ > 0 and λj > 0,  (j = 1,2, …, N) such that

    \[
    2\lambda_j\langle x - y,\ f(x,j) - f(y,j)\rangle_H + \lambda_j\|g(x,j) - g(y,j)\|_{HS}^2 + \sum_{l=1}^{N}\gamma_{jl}\lambda_l\,\|x - y\|_H^2 \le -\mu\|x - y\|_H^2, \quad x, y \in H,\ j \in \mathbb{S}. \tag{6}
    \]

It is well known (see [1, 8]) that under (A1)–(A3), (4) has a unique mild solution X(t) on t ≥ 0. That is, for any X(0) = x ∈ H and r(0) = i ∈ 𝕊, there exists a unique H-valued adapted process X(t) such that
\[
X(t) = e^{At}x + \int_0^t e^{A(t-s)} f(X(s), r(s))\,ds + \int_0^t e^{A(t-s)} g(X(s), r(s))\,dW(s). \tag{7}
\]
Moreover, the pair Z(t) = (X(t), r(t)) is a time-homogeneous Markov process.

Remark 1. We observe that (A2) implies the following linear growth condition:

\[
\|f(x,i)\|_H \vee \|g(x,i)\|_{HS} \le \bar L\,(1+\|x\|_H), \quad (x, i) \in H \times \mathbb{S}, \tag{8}
\]
where \(\bar L = \max\{L,\ \max_{i\in\mathbb{S}}(\|f(0,i)\|_H \vee \|g(0,i)\|_{HS})\}\).
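This bound follows from (A2) by the triangle inequality; for f (the argument for g is identical), and with the constant \(\bar L\) as above,
\[
\|f(x,i)\|_H \le \|f(x,i)-f(0,i)\|_H + \|f(0,i)\|_H \le L\|x\|_H + \max_{i\in\mathbb{S}}\|f(0,i)\|_H \le \bar L\,(1+\|x\|_H).
\]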

Remark 2. We also establish another property from (A3):

()
where and for S, THS(H).

Denote by Z^{x,i}(t) = (X^{x,i}(t), r^i(t)) the mild solution of (4) starting from (x, i) ∈ H × 𝕊. For any A ∈ 𝔅(H) and B ⊂ 𝕊, let ℙ_t((x, i), A × B) be the probability measure induced by Z^{x,i}(t), t ≥ 0. Namely,
\[
\mathbb{P}_t((x,i), A \times B) = \mathbb{P}\{Z^{x,i}(t) \in A \times B\}, \tag{10}
\]
where 𝔅(H) is the family of Borel subsets of H.
Denote by 𝒫(H × 𝕊) the family of all probability measures on H × 𝕊. For P₁, P₂ ∈ 𝒫(H × 𝕊), define the metric d_𝕃 as follows:
\[
d_{\mathbb{L}}(P_1, P_2) = \sup_{\varphi \in \mathbb{L}} \Big| \sum_{j=1}^{N} \int_H \varphi(u,j)\,P_1(du, j) - \sum_{j=1}^{N} \int_H \varphi(u,j)\,P_2(du, j) \Big|, \tag{11}
\]
where 𝕃 = {φ : H × 𝕊 → ℝ : |φ(u, j) − φ(v, l)| ≤ ∥u − v∥_H + |j − l| and |φ(u, j)| ≤ 1, for u, v ∈ H, j, l ∈ 𝕊}.

Remark 3. It is known that the weak convergence of probability measures can be metrized with respect to classes of test functions. In other words, a sequence of probability measures {P_k}_{k≥1} in 𝒫(H × 𝕊) converges weakly to a probability measure P₀ ∈ 𝒫(H × 𝕊) if and only if lim_{k→∞} d_𝕃(P_k, P₀) = 0.
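In computations the supremum over all of 𝕃 cannot be evaluated exactly; a simple lower bound for d_𝕃 between two empirical laws on H_n × 𝕊 can be obtained by sampling test functions from a sub-family of 𝕃. The Python sketch below is purely illustrative and not part of the paper: it uses clipped affine functions φ(u, j) = min(1, max(−1, ⟨w, u⟩ + cj)) with ∥w∥ ≤ 1 and |c| ≤ 1, which indeed satisfy the Lipschitz and boundedness constraints defining 𝕃.

```python
import numpy as np

def dL_lower_bound(sample1, sample2, n_tests=2000, rng=None):
    """Crude lower bound for d_L between two empirical laws on H_n x S.

    Each sample is a pair (U, J): U is an (m, n) array of H_n-coordinates and
    J an (m,) array of modes.  Test functions phi(u, j) = clip(<w, u> + c*j, -1, 1)
    with ||w|| <= 1 and |c| <= 1 belong to the class L, so the maximum of the
    mean differences over the sampled phi bounds d_L from below.
    """
    rng = np.random.default_rng() if rng is None else rng
    (U1, J1), (U2, J2) = sample1, sample2
    best = 0.0
    for _ in range(n_tests):
        w = rng.normal(size=U1.shape[1])
        w /= max(np.linalg.norm(w), 1.0)          # enforce ||w|| <= 1
        c = rng.uniform(-1.0, 1.0)                # enforce |c| <= 1
        phi1 = np.clip(U1 @ w + c * J1, -1.0, 1.0)
        phi2 = np.clip(U2 @ w + c * J2, -1.0, 1.0)
        best = max(best, abs(phi1.mean() - phi2.mean()))
    return best
```

Applied to empirical samples of the numerical solution at two large times, a small value is consistent with convergence in distribution, although it is only a lower bound for the metric.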

Definition 4. The mild solution Z(t) = (X(t), r(t)) of (4) is said to have a stationary distribution π(·×·) ∈ 𝒫(H × 𝕊) if the probability measure ℙ_t((x, i), ·×·) converges weakly to π(·×·) as t → ∞ for every i ∈ 𝕊 and every x ∈ U, a bounded subset of H; that is,

\[
\lim_{t \to \infty} d_{\mathbb{L}}\big(\mathbb{P}_t((x,i), \cdot \times \cdot),\ \pi(\cdot \times \cdot)\big) = 0. \tag{12}
\]

By Theorem 3.1 in [10] and Theorem 3.1 in [14], we have the following.

Theorem 5. Under (A1)–(A3), the Markov process Z(t) has a unique stationary distribution π(·×·) ∈ 𝒫(H × 𝕊).

For any n ≥ 1, let π_n : H → H_n := Span{e₁, e₂, …, e_n} be the orthogonal projection. Consider the following SPDE with Markovian switching on H_n:
\[
dX^n(t) = [A_n X^n(t) + f_n(X^n(t), r(t))]\,dt + g_n(X^n(t), r(t))\,dW(t), \tag{13}
\]
with initial data X^n(0) = π_n x, x ∈ H. Here A_n = π_n A, f_n = π_n f, g_n = π_n g.
Therefore, we can observe that
()
By the property of the projection operator and (A2), we have
\[
\|f_n(x,i) - f_n(y,i)\|_H \vee \|g_n(x,i) - g_n(y,i)\|_{HS} \le L\|x - y\|_H, \quad x, y \in H_n,\ i \in \mathbb{S}. \tag{15}
\]
Hence, (13) admits a unique strong solution {Xn(t)} t≥0 on Hn (see [8]).

We now introduce an Euler-Maruyama based computational method. The method makes use of the following lemma (see [15]).

Lemma 6. Given Δ > 0, then {r(kΔ),   k = 0,1, 2, …} is a discrete Markov chain with the one-step transition probability matrix

\[
P(\Delta) = \big(P_{ij}(\Delta)\big)_{N \times N} = e^{\Delta \Gamma}. \tag{16}
\]

Given a fixed step size Δ > 0 and the one-step transition probability matrix P(Δ) in (16), the discrete Markov chain {r(kΔ),   k = 0,1, 2, …} can be simulated as follows: let r(0) = i0, and compute a pseudorandom number ξ1 from the uniform (0,1) distribution.

Define
\[
r(\Delta) =
\begin{cases}
i_1, & \text{if } i_1 \in \mathbb{S}\setminus\{N\} \text{ is such that } \sum_{j=1}^{i_1-1} P_{i_0 j}(\Delta) \le \xi_1 < \sum_{j=1}^{i_1} P_{i_0 j}(\Delta),\\[4pt]
N, & \text{if } \sum_{j=1}^{N-1} P_{i_0 j}(\Delta) \le \xi_1,
\end{cases} \tag{17}
\]
where we set \(\sum_{j=1}^{0} P_{i_0 j}(\Delta) = 0\) as usual. Having computed r(0), r(Δ), …, r(kΔ), we can compute r((k + 1)Δ) by drawing a uniform (0,1) pseudorandom number ξ_{k+1} and setting
\[
r((k+1)\Delta) =
\begin{cases}
i_{k+1}, & \text{if } i_{k+1} \in \mathbb{S}\setminus\{N\} \text{ is such that } \sum_{j=1}^{i_{k+1}-1} P_{r(k\Delta) j}(\Delta) \le \xi_{k+1} < \sum_{j=1}^{i_{k+1}} P_{r(k\Delta) j}(\Delta),\\[4pt]
N, & \text{if } \sum_{j=1}^{N-1} P_{r(k\Delta) j}(\Delta) \le \xi_{k+1}.
\end{cases} \tag{18}
\]
The procedure can be carried out repeatedly to obtain more trajectories.
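As a computational illustration of this procedure, the following Python sketch simulates a path of the discrete chain {r(kΔ)}. The generator values and the step size are hypothetical, the one-step matrix is computed as P(Δ) = e^{ΔΓ} from Lemma 6, and the states are indexed from 0 for programming convenience.

```python
import numpy as np
from scipy.linalg import expm

def simulate_chain(Gamma, delta, i0, n_steps, rng=None):
    """Simulate r(0), r(Delta), ..., r(n_steps * Delta) from the generator Gamma.

    The one-step transition matrix is P(Delta) = expm(delta * Gamma) (Lemma 6);
    each new state is obtained by comparing a uniform(0, 1) pseudorandom number
    with the cumulative row sums of P(Delta), as in (17) and (18).
    """
    rng = np.random.default_rng() if rng is None else rng
    P = expm(delta * Gamma)            # one-step transition probability matrix P(Delta)
    cum = np.cumsum(P, axis=1)         # cumulative row sums
    path = [i0]
    for _ in range(n_steps):
        xi = rng.uniform()             # uniform (0, 1) pseudorandom number
        nxt = int(np.searchsorted(cum[path[-1]], xi))
        path.append(min(nxt, P.shape[0] - 1))   # guard against round-off in the last column
    return path

# Hypothetical two-state generator (states indexed 0, 1, standing for S = {1, 2}):
Gamma = np.array([[-1.0, 1.0],
                  [ 2.0, -2.0]])
print(simulate_chain(Gamma, delta=0.01, i0=0, n_steps=10))
```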
We now define the Euler-Maruyama approximation for (13). For a stepsize Δ ∈ (0,1), the discrete approximation is formed by simulating the discrete Markov chain {r(kΔ)} as above, starting from r(0) = i and the initial value π_n x, and computing
()
where ΔW_k = W((k + 1)Δ) − W(kΔ).
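Since the displayed scheme (19) is not reproduced above, the following Python sketch should be read as an assumption rather than a restatement of it: it implements one step of an exponential Euler-Maruyama update in the eigenbasis coordinates of H_n, a common choice for mild-solution approximations; the helper names and the precise form of the step are illustrative only.

```python
import numpy as np

def em_step(y, r, delta, rho, f_n, g_n, rng):
    """One Euler-Maruyama step on H_n, in coordinates with respect to e_1, ..., e_n.

    Assumed (illustrative) exponential-Euler form, not a restatement of (19):
        Y_{k+1} = e^{A_n Delta} ( Y_k + f_n(Y_k, r_k) Delta + g_n(Y_k, r_k) Delta W_k ),
    where A_n = diag(-rho_1, ..., -rho_n) in the eigenbasis, so that e^{A_n Delta}
    acts coordinatewise as exp(-rho_i * Delta).
    """
    dW = rng.normal(scale=np.sqrt(delta), size=len(y))   # projected Wiener increment on H_n
    step = y + f_n(y, r) * delta + g_n(y, r) @ dW        # drift and diffusion increments
    return np.exp(-rho * delta) * step                   # apply the semigroup e^{A_n Delta}
```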
To carry out our analysis conveniently, we introduce the continuous Euler-Maruyama approximate solution, which is defined by
()
where ⌊t⌋ := [t/Δ]Δ, [t/Δ] denotes the integer part of t/Δ, and Y^n(0) = π_n x.
It is obvious that Y^n(t) coincides with the discrete approximate solution at the gridpoints. For any Borel set A ∈ 𝔅(H_n), x ∈ H_n, and i, j ∈ 𝕊, let Z^n_k := (Y^n(kΔ), r(kΔ)) and
\[
\mathbb{P}^n\big((x,i), A \times \{j\}\big) := \mathbb{P}\big\{Z^n_1 \in A \times \{j\} \mid Z^n_0 = (x, i)\big\}. \tag{21}
\]
Following the argument of Theorem 5 in [13], we have the following.

Lemma 7. Z^n_k = (Y^n(kΔ), r(kΔ)), k = 0, 1, 2, …, is a homogeneous Markov process with the transition probability kernel ℙ^n((x, i), A × {j}).

To highlight the initial value, we will write Z^{n,x,i}_k for the process with Z^n_0 = (x, i).

Definition 8. For a given stepsize Δ > 0, Z^{n,x,i}_k is said to have a stationary distribution π^n(·×·) ∈ 𝒫(H_n × 𝕊) if the k-step transition probability kernel ℙ^n_{kΔ}((x, i), ·×·) converges weakly to π^n(·×·) as k → ∞, for every (x, i) ∈ H_n × 𝕊; that is,

\[
\lim_{k \to \infty} d_{\mathbb{L}}\big(\mathbb{P}^n_{k\Delta}((x,i), \cdot \times \cdot),\ \pi^n(\cdot \times \cdot)\big) = 0. \tag{22}
\]

We will prove the following result, the main result of this paper, in Section 3.

Theorem 9. Under (A1)–(A3), for a given stepsize Δ > 0 and arbitrary x ∈ H_n, i ∈ 𝕊, Z^{n,x,i}_k has a unique stationary distribution π^n(·×·) ∈ 𝒫(H_n × 𝕊).
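Definition 8 and Theorem 9 suggest a practical recipe: run the pair (Y^n(kΔ), r(kΔ)) for a long time and use its empirical law, after a burn-in, as an approximation of π^n. The sketch below does this for a small hypothetical two-mode system; all coefficients are illustrative, and the Euler-Maruyama step again uses the assumed exponential form from the sketch in Section 2.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical two-mode system on H_n with n = 3; all coefficients are illustrative only.
n, delta = 3, 0.01
rho = np.array([1.0, 4.0, 9.0])                 # eigenvalues of -A_n in the eigenbasis
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])    # generator of r(t) on S = {1, 2} (indexed 0, 1)
P_cum = np.cumsum(expm(delta * Gamma), axis=1)  # cumulative rows of P(Delta) (Lemma 6)
f_n = lambda y, r: -0.5 * (r + 1) * y           # mode-dependent drift (illustrative)
g_n = lambda y, r: 0.1 * (r + 1) * np.eye(n)    # mode-dependent diffusion (illustrative)

y, r = np.zeros(n), 0
burn_in, n_steps = 2_000, 20_000
modes, states = [], []
for k in range(n_steps):
    # Euler-Maruyama step (assumed exponential-Euler form, see the sketch in Section 2).
    dW = rng.normal(scale=np.sqrt(delta), size=n)
    y = np.exp(-rho * delta) * (y + f_n(y, r) * delta + g_n(y, r) @ dW)
    # Markov chain step, as in the simulation procedure for {r(k*Delta)}.
    r = min(int(np.searchsorted(P_cum[r], rng.uniform())), Gamma.shape[0] - 1)
    if k >= burn_in:
        modes.append(r)
        states.append(y.copy())

# Empirical summaries of the approximate stationary distribution pi^n.
print("empirical mode frequencies:", np.bincount(np.array(modes), minlength=2) / len(modes))
print("empirical mean of Y^n by coordinate:", np.array(states).mean(axis=0))
```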

3. Stationary in Distribution of Numerical Solutions

In this section, we shall present some useful lemmas and prove Theorem 9. In what follows, C > 0 is a generic constant whose values may change from line to line.

For any initial value (x, i), let Y^{n,x,i}(t) be the continuous Euler-Maruyama solution of (20) starting from (x, i) ∈ H × 𝕊, and let X^{x,i}(t) be the mild solution of (4) starting from (x, i) ∈ H × 𝕊.

Lemma 10. Under (A1)–(A3), we have

()

Proof. Write Yn,x,i(t) = Yn(t),  Yn,x,i(⌊t⌋) = Yn(⌊t⌋). From (20), we have

()
Thus,
()
Then, by the Hölder inequality and the Itô isometry, we obtain
()
From (A1), we have
()
here we use the fundamental inequality 1 − e^{−a} ≤ a, a > 0. And, by (8), it follows that
()
Substituting (27) and (28) into (26), the desired assertion (23) follows.

Lemma 11. Under (A1)–(A3), if Δ is sufficiently small, then there is a constant C > 0, which depends on the initial value x but is independent of Δ, such that the continuous Euler-Maruyama solution of (20) satisfies

()
where q = max_{1≤i≤N} λ_i and p = min_{1≤i≤N} λ_i.

Proof. Write Yn,x,i(t) = Yn(t),  ri(kΔ) = r(kΔ). From (20), we have the following differential form:

()
with Yn(0) = πnx.

Let . By the generalised Itô formula, for any θ > 0, we derive from (30) that

()
By the fundamental transformation, we obtain that
()
By the Hölder inequality, we have
()
Then, from (31), we have
()
By the elementary inequality 2ab ≤ a²/κ + κb² (a, b ∈ ℝ, κ > 0) and (8), (27), we obtain that, for Δ < 1,
()
By (A2) and (8), we have
()
Similarly, we have
()
Thus, we obtain from (36) that
()
By the Markov property, we compute
()
where . Substituting (39) into (38) gives
()
Furthermore, due to (37) and (39), we have
()
On the other hand, by Lemma 10, for sufficiently small Δ, we have
()
Putting (35), (40), and (41) into (34), we have
()
By Lemma 10 and the inequality (42), we obtain that
()
Let θ = (4αp + μ)/(4q); then, for sufficiently small Δ,
()
That is,
()

Lemma 12. Let (A1)–(A3) hold. If Δ is sufficiently small, then

()
where U is a bounded subset of Hn.

Proof. Write Yn,x,i(t) = Yx(t),  Yn,y,i(t) = Yy(t),  ri(kΔ) = r(kΔ). From (20), it is easy to show that

()
By using the argument of Lemma 10, we derive that, if Δ < 1,
()
()
If Δ is sufficiently small, then
()
Using (30) and the generalised Itô formula, for any θ > 0, we have
()
By the fundamental transformation, we obtain that
()
By the Hölder inequality, we have
()
Then, from (52) and (A3), we have
()
By (A2) and (27), we have, for Δ < 1,
()
It is easy to show that
()
By (39), we have
()
Therefore, we obtain that
()
On the other hand, using a similar argument to that of (58), we have
()
Hence, we have
()
By (50), we obtain that
()
Let θ = (2αp + μ)/(2q); then, for sufficiently small Δ, the desired assertion (47) follows.

We can now easily prove our main result.

Proof of Theorem 9. Since Hn is finite-dimensional, by Lemma 3.1 in [12], we have

()
uniformly in x, y ∈ H_n and i, j ∈ 𝕊.

By Lemma 7, there exists πn(·×·) ∈ 𝒫(Hn × 𝕊), such that

()

By the triangle inequality, (63), and (64), we have

()

4. Corollary and Example

In this section, we give a criterion based on M-matrices, which can be verified easily in applications.
  • (A4)

    For each j ∈ 𝕊, there exists a pair of constants β_j and δ_j such that, for x, y ∈ H,

    \[
    \langle x - y,\ f(x,j) - f(y,j)\rangle_H \le \beta_j \|x - y\|_H^2, \qquad \|g(x,j) - g(y,j)\|_{HS}^2 \le \delta_j \|x - y\|_H^2. \tag{66}
    \]

  • Moreover, 𝒜 : = −  diag (2β1 + δ1, …, 2βN + δN) − Γ is a nonsingular M-matrix [8].

Corollary 13. Under (A1), (A2), and (A4), for a given stepsize Δ > 0 and arbitrary x ∈ H_n, i ∈ 𝕊, Z^{n,x,i}_k has a unique stationary distribution π^n(·×·) ∈ 𝒫(H_n × 𝕊).

Proof. In fact, we only need to prove that (A3) holds. By (A4), there exists (λ₁, λ₂, …, λ_N)^T > 0 such that (q₁, q₂, …, q_N)^T = 𝒜(λ₁, λ₂, …, λ_N)^T > 0.

Set μ = min_{1≤j≤N} q_j. By (66), for any x, y ∈ H and j ∈ 𝕊, we have

\[
2\lambda_j\langle x-y,\ f(x,j)-f(y,j)\rangle_H + \lambda_j\|g(x,j)-g(y,j)\|_{HS}^2 + \sum_{l=1}^{N}\gamma_{jl}\lambda_l\|x-y\|_H^2 \le \Big(2\beta_j\lambda_j + \delta_j\lambda_j + \sum_{l=1}^{N}\gamma_{jl}\lambda_l\Big)\|x-y\|_H^2 = -q_j\|x-y\|_H^2 \le -\mu\|x-y\|_H^2, \tag{67}
\]
so (A3) holds and the corollary follows from Theorem 9.
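Numerically, the vector λ in this proof can be obtained by solving 𝒜λ = q for any chosen q > 0 and checking that λ > 0; for a Z-matrix such as 𝒜, this is equivalent to the nonsingular M-matrix property. The following Python sketch uses hypothetical values of β_j, δ_j, and Γ (not those of the example below).

```python
import numpy as np

# Hypothetical data for (A4): constants beta_j, delta_j and the generator Gamma.
beta = np.array([-2.0, -1.5])
delta_coef = np.array([0.1, 0.06])
Gamma = np.array([[-1.0, 1.0],
                  [ 2.0, -2.0]])

# A := -diag(2*beta_j + delta_j) - Gamma, as in (A4).
A_mat = -np.diag(2.0 * beta + delta_coef) - Gamma

# Pick q = (1, ..., 1)^T > 0 and solve A * lam = q; for the Z-matrix A_mat,
# existence of lam > 0 with A_mat @ lam > 0 characterizes the nonsingular
# M-matrix property (see [8]).
q = np.ones(len(beta))
lam = np.linalg.solve(A_mat, q)
print("lambda =", lam, "-> (A4) M-matrix condition holds:", bool(np.all(lam > 0)))
print("mu = min_j q_j =", q.min())
```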

In the following, we give an example to illustrate Corollary 13.

Example 14. Consider

()

We take H = L²(0, π) and A = ∂²/∂ξ² with domain 𝒟(A) = H²(0, π) ∩ H₀¹(0, π); then A is a self-adjoint negative operator. For the eigenbasis e_k(ξ) = (2/π)^{1/2} sin(kξ), ξ ∈ [0, π], we have Ae_k = −k²e_k, k ∈ ℕ. It is easy to show that

\[
e^{At}x = \sum_{k=1}^{\infty} e^{-k^2 t}\langle x, e_k\rangle_H\, e_k, \quad x \in H. \tag{69}
\]
This further gives that
\[
\|e^{At}\| \le e^{-t}, \quad t \ge 0, \tag{70}
\]
so (A1) holds with α = 1.
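As a quick numerical sanity check of (A1) for this example (assuming Dirichlet boundary conditions on (0, π), so that the sine functions above form the eigenbasis), the following Python sketch expands a sample initial function in the eigenbasis and verifies the decay bound ∥e^{At}x∥ ≤ e^{−t}∥x∥ at a few times.

```python
import numpy as np

# Eigenbasis of A = d^2/d(xi)^2 with Dirichlet boundary conditions on (0, pi):
# e_k(xi) = sqrt(2/pi) * sin(k*xi), with A e_k = -k^2 e_k.
K = 50                                       # truncation level of the expansion
xi = np.linspace(0.0, np.pi, 4001)
dxi = xi[1] - xi[0]
basis = np.sqrt(2.0 / np.pi) * np.sin(np.outer(np.arange(1, K + 1), xi))

x = xi * (np.pi - xi)                        # a sample initial function vanishing at the boundary
coeffs = (basis * x).sum(axis=1) * dxi       # Fourier sine coefficients <x, e_k>_H
x_norm = np.sqrt((coeffs ** 2).sum())        # L2-norm of the truncated expansion

def semigroup_norm(t):
    """||e^{At} x||, computed from e^{At} x = sum_k e^{-k^2 t} <x, e_k> e_k."""
    decay = np.exp(-np.arange(1, K + 1) ** 2 * t)
    return np.sqrt(((decay * coeffs) ** 2).sum())

for t in (0.1, 0.5, 1.0, 2.0):
    # (A1) with alpha = 1: the semigroup norm should be bounded by e^{-t} * ||x||.
    print(t, semigroup_norm(t) <= np.exp(-t) * x_norm + 1e-12)
```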

Let W(t) be a scalar Brownian motion, and let r(t) be a continuous-time Markov chain taking values in 𝕊 = {1, 2} with the generator

()
Then , .

Moreover, g satisfies

()
where δ1 = 0.1,  δ2 = 0.06.

Define f(x, j) = B(j)x; then

()
It is easy to compute
()
So the matrix 𝒜 becomes
()

It is easy to see that 𝒜 is a nonsingular M-matrix. Thus, (A4) holds. By Corollary 13, we can conclude that (68) has a unique stationary distribution πn(·×·).

Acknowledgment

This work is partially supported by the National Natural Science Foundation of China under Grants 61134012 and 11271146.
