Volume 2013, Issue 1, Article ID 267328
Research Article
Open Access

Ergodicity of Stochastic Burgers’ System with Dissipative Term

Guoli Zhou (Corresponding Author)
College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088, China

Boling Guo
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088, China

Daiwen Huang
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088, China

Yongqian Han
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088, China
First published: 31 December 2013
Academic Editor: Hamid Reza Karimi

Abstract

A 2-dimensional stochastic Burgers equation with a dissipative term, perturbed by Wiener noise, is considered. The aim is to prove well-posedness, the existence and uniqueness of an invariant measure, as well as a strong law of large numbers and convergence to equilibrium.

1. Introduction

This paper is concerned with the 2-dimensional Burgers equation in a bounded domain, driven by Wiener noise as a body force:
()
where u(t, x) = (u1(t, x), u2(t, x)) is the velocity field, ν > 0 is the viscosity coefficient, Δ denotes the Laplace operator, ∇ represents the gradient operator, W stands for a Q-Wiener process, and D is a regular bounded open domain of ℝ². The Burgers equation has received a great deal of attention since the studies by Burgers in the 1940s (and it was considered even earlier by Bateman [1] and Forsyth [2]). It is well known, however, that the Burgers equation is not a good model for turbulence, since it does not display any chaotic behavior: even if a force is added to the equation, all solutions converge to a unique stationary solution as time goes to infinity. If the force is random, the situation is completely different. Several authors have therefore suggested using the stochastic Burgers equation to model turbulence; see [3–6]. The stochastic equation has also been proposed in [7] to study the dynamics of interfaces.
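For the reader's orientation, the system (1) is presumably of the form (a reconstruction from the surrounding description, with homogeneous Dirichlet boundary conditions assumed for definiteness):
\[
du = \bigl[\nu\,\Delta u - (u\cdot\nabla)u\bigr]\,dt + dW(t), \qquad u|_{\partial D}=0, \qquad u(0,x)=u_0(x).
\]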

So far, most of the works concerning the equation focus on the one-dimensional case. For example, Bertini et al. [8] solved the equation with additive space-time white noise by an adaptation of the Hopf–Cole transformation. Da Prato et al. [9] studied the equation via a different approach, based on the semigroup property of the heat equation on a bounded interval. The more general equation with multiplicative noise was considered by Da Prato and Debussche [10]. With a similar method, Gyöngy and Nualart [11] extended the Burgers equation from a bounded interval to the real line. A large deviation principle for the solution was obtained by Gourcy [12]. Concerning ergodicity, an important paper by Weinan et al. [13] proved that there exists a unique stationary distribution for the solutions of the random inviscid Burgers equation and that typical solutions are piecewise smooth with a finite number of jump discontinuities corresponding to shocks. For models with jumps, Dong and Xu [14] proved the global existence and uniqueness of the strong, weak, and mild solutions of a one-dimensional Burgers equation perturbed by Lévy noise. When the noise is fractal, Wang et al. [15] obtained the well-posedness.

The main aim of our paper is to study the large-time behavior of the stochastic system. There is a large body of literature on this topic (see [16–20]).

The Burgers system is a well-known model in mechanics. But, as far as we know, there are no results on the long-time behavior of the stochastic Burgers system. We think the difficulty lies in the fact that the dissipative term Δu cannot dominate the nonlinear term (u · ∇)u. However, in many practical situations we cannot ignore energy dissipation and external forces, especially when considering long-time behavior. Therefore, we introduce a dissipative term f(u) and study the ergodicity of the following equation:
()
where f(u) = ϑ|u(t, x)|2u(t, x), ϑ > 0, and |·| denotes either the absolute value of a real number or the Euclidean norm of a two-dimensional vector, as appropriate.
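Accordingly, the system (2) is presumably the damped equation (again a reconstruction from the surrounding text; the viscosity is normalized to one here, while the original display may retain ν):
\[
du = \bigl[\Delta u - (u\cdot\nabla)u - \vartheta\,|u|^{2}u\bigr]\,dt + dW(t), \qquad u|_{\partial D}=0, \qquad u(0,x)=u_0(x).
\]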

We believe that this work is new and worthwhile. The methods and results of this paper can be applied to stochastic reaction-diffusion equations and to the stochastic real-valued Ginzburg-Landau equation in high dimensions. We cannot, however, extend our result to dynamical systems with state delays, because in order to show the existence of an invariant measure one has to consider the segments of a solution: in contrast to the solution process itself, the process of segments is a Markov process. One would then have to show that the segment process is also Feller and that there exists a solution whose segments are tight, and apply the Krylov-Bogoliubov method. Since the segment process takes values in the infinite-dimensional space C([−r, 0], H), boundedness in probability does not automatically imply tightness. For solution processes of infinite-dimensional equations, one often uses compactness of the orbits of the underlying deterministic equation to obtain tightness; for an infinite-dimensional formulation of the functional differential equation, however, such a compactness property does not hold. For the ergodicity of stochastic delay equations, see [21]. We believe that the stochastic Burgers system with state delays is a very interesting problem.

In order to study the ergodicity of problem (2), we use a remarkable dissipativity property of the stochastic dynamics to obtain the existence of an invariant measure. For uniqueness, we use the method from [22] to prove that the distributions P(t, x, ·) induced by the solution are equivalent. It is well known that the equivalence of these distributions implies uniqueness, a strong law of large numbers, and convergence to equilibrium.

The remainder of this paper is organized as follows. Some preliminaries are presented in Section 2; local existence and global existence are established, respectively, in Sections 3 and 4. In Section 5, we obtain the existence and uniqueness of the invariant measure as well as the strong law of large numbers and convergence to equilibrium. As usual, the constant C may change from one line to the next; we denote by Ca a constant which depends on some parameter a.

2. Preliminaries on the Burgers Equation

Let u(t, x) = (u1(t, x), u2(t, x)) be a row-vector-valued function on [0, ∞) × ℝ². We use the following notation:
()
Let [C∞(D)]2 be the space of infinitely differentiable 2-dimensional vector fields on D, and let be the space of infinitely differentiable 2-dimensional vector fields with compact support strictly contained in D. We denote by Hα the closure of [C∞(D)]2 in [Hα(D)]2, whose norm is denoted by , when α ≠ 0. Let be the closure of in [H1(D)]2 and [L2(D)]2, whose norms are denoted by and ∥·∥H, respectively. Without risk of confusion, we write 〈·, ·〉 for the inner product in H or in L2(D). For p > 0, let denote the norm of a vector field in the Lebesgue space [Lp(D)]2; represents the norm in the usual Sobolev space Hα(D) for real-valued functions on D, and stands for the norm in the usual Lebesgue space Lp(D) for real-valued functions on D. Denote A ≔ −Δ; then A : D(A) ⊂ H → H and . Since coincides with D(A1/2), we can endow it with the norm . The operator A is positive and self-adjoint with compact resolvent; we denote by 0 < α1 ≤ α2 ≤ ⋯ the eigenvalues of A and by e1, e2, … the corresponding eigenvectors, which form a complete orthonormal system in H satisfying
()
for some positive constant C. We remark that . We define the bilinear operator B(u, v) : H1 × H1 → H−1 as
()
for all z ∈ H1. Then, (2) is equivalent to the following abstract equation:
()
Here, W is a Q-Wiener process with the following representation:
()
in which the βk are a sequence of mutually independent 1-dimensional Brownian motions on a fixed probability space (Ω, ℱ, P), adapted to a filtration {ℱt}t≥0.
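The representation referred to here is presumably the standard expansion of a Q-Wiener process (a reconstruction, since the display itself is not reproduced):
\[
W(t) = \sum_{k\ge 1}\sqrt{\lambda_k}\,\beta_k(t)\,e_k,
\]
where {e_k} is an orthonormal basis of H (in many such constructions, the eigenbasis of A introduced above) and λ_k ≥ 0 are the eigenvalues of the covariance operator Q.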
It can be derived from [23] that the solution to the linear problem corresponding to (2) with the following initial condition:
()
is unique, and when u0 = 0, it has the form of
()
Let
()
then u is a solution to (2) if and only if it solves the following evolution equation:
()
So, we see that when w ∈ Ω is fixed, this equation is in fact a deterministic equation. From now on, we will study the equation in the form (11) to obtain the existence and uniqueness of the solution for a.e. w ∈ Ω.
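For clarity: the stochastic convolution introduced above is presumably W_A(t) = ∫_0^t e^{−(t−s)A} dW(s), and, with v ≔ u − W_A, equation (11) is presumably of the form (a reconstruction, with A = −Δ and B, f as above)
\[
\frac{dv}{dt}+Av+B\bigl(v+W_A,\,v+W_A\bigr)+f\bigl(v+W_A\bigr)=0,\qquad v(0)=u(0),
\]
which, for each fixed w ∈ Ω, is indeed a deterministic evolution equation in v.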

3. Local Existence in Time

Definition 1 (see Definition 5.1.1 in [24]). We say that an (ℱt)t≥0-adapted process v(t) is a mild solution to (11) if it satisfies

()
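The relation in Definition 1 is presumably the mild (variation-of-constants) form of (11), namely
\[
v(t)=e^{-tA}v(0)-\int_0^{t}e^{-(t-s)A}\Bigl[B\bigl(v(s)+W_A(s)\bigr)+f\bigl(v(s)+W_A(s)\bigr)\Bigr]\,ds,\qquad t\in[0,T^{*}],
\]
where B(u) ≔ B(u, u); the precise regularity class required of v is the one specified in the original display.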

Lemma 2. For any θ ∈ (0, 1), if , then A1/2WA has a version which is α-Hölder continuous with respect to t ∈ [0, T] and x ∈ D, for any α ∈ ]0, θ/2[.

Proof. Let T > 0 and s, t ∈ [0, T]; then

()
Then, we have
()
So, by the estimate of I1 and I2, we arrive at
()
For t ∈ [0, T],  x, yD, we get
()
Therefore,
()
As A1/2WA(t, x) − A1/2WA(s, y) is a Gaussian random variable, we obtain
()
for m = 1, 2, …. By Kolmogorov's test theorem, we get the conclusion.
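The version of the Kolmogorov test used here is the following standard one, stated for completeness: if a random field X, indexed by z in a bounded subset of ℝd, satisfies
\[
\mathbb{E}\,\bigl|X_{z}-X_{z'}\bigr|^{m}\le C\,|z-z'|^{\,d+\gamma}\qquad\text{for some } m,\gamma>0,
\]
then X has a modification which is α-Hölder continuous for every α < γ/m. Here z = (t, x) ranges over [0, T] × D ⊂ ℝ3, and the Gaussian moment bounds above are consistent with the exponent α < θ/2 claimed in the lemma.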

Remark 3. An example of the noise satisfying condition of Lemma 2 is

()
where {βn} is a sequence of independent 1-dimensional Brownian motions and {λn} satisfies
()
This is because the eigenvalues αn of the operator A in two space dimensions behave like n.

Remark 4. Another example of stochastic noise satisfying Lemma 2 is

()
where ,  L is an isomorphism in H, and
()

To prove the local existence of a solution of (1) in the sense of Definition 1, we introduce the space m defined by
()
where T* ≥ 0 (which is in fact a stopping time) and m > 0, p > 0.

Lemma 5. For , with ui(0) adapted to ℱ0, i = 1, 2, there exists a unique mild solution v to (11), in the sense of Definition 1, in m.

Proof. Choose a v in m, and set

()
Then,
()
For the second term on the right hand side of (25),
()
In the following, we estimate the Ii, i = 1, 2, 3, 4, in turn. Since {etA}t≥0 is a contraction on Lp(D), p ≥ 1, it is known that
()
for all ,  s1, s2,  s1s2,  r ≥ 1, and C1 only depends on s1,  s2, and r. Before calculating each Ii, we outline the Sobolev embedding principle in fractional Sobolev spaces as follows:
()
when
()
where n is the spatial dimension. Let η1 = 3/4, p1 = 2, η2 = 1/4, q1 = 4, satisfying (29), so that
()
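The embedding outlined in (28) and (29) is presumably the standard fractional Sobolev embedding on a bounded regular domain D ⊂ ℝn:
\[
W^{\eta,p}(D)\subset L^{q}(D)\qquad\text{whenever}\qquad \frac{1}{q}\;\ge\;\frac{1}{p}-\frac{\eta}{n},
\qquad 0<\eta<1,\ 1<p<\infty,\ q\ge 1,
\]
with n = 2 in the present setting.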

For I1, by (27) and Theorem A.8 in [25], we get

()
where
()
satisfying
()
The last inequality follows by (30). For the other term added to R, we have
()
So, by (31)–(34), we have
()
Similarly, we get for I2 that
()
For I3, by Theorem A.8 in [25], we get
()
where
()

For R1, we have

()
For the first term on the right hand side of (37), by (27), we have
()
For the second term on the right hand side of (37), by (27), we obtain
()
From (37) to (41), we get for I3 that
()
Analogously, for I4, we get
()
By (26), (35), (36), (42), and (43), we have
()
As u = v + WA, by (44), for tT*, we have
()
Since by Lemma 2,
()
For the last term on the right hand side of (25), we have
()
Therefore,
()
So by (25), (45), and (48), when T* is small enough,
()
For each v1, v2m, set ui = vi + WA, i = 1,2. To simplify the notation in the following calculation, we denote , i = 1,2. Then,
()
So,
()
In order to simplify the notation, we set
()
where
()
Then, we estimate fi, i = 1,2, 3,4, respectively. For f1, we have
()
We first consider
()
For the other term added to R2,
()
By (54)–(56),
()
Analogously, for f3,
()
For f2, by (53), we have
()
For the first term on the right hand side of (59), we have
()
For R3,
()
For the first term on the right hand side of (60), we arrive at
()
For the second term on the right hand side of (60), we obtain
()
By (59)–(63), we get for f2 that
()
Similarly, we get for f4 that
()
By (52), (53), (57), (58), (64), and (65), we have
()
For the second term on the right hand side of (51), we have
()
where
()
Then,
()
Similarly, we can get the same estimate for h2. So, we have
()
By (51), (66), and (70), we have
()
By (49), (71), and the fixed point principle, we get the conclusion.

Remark 6. By making some minor modifications to the proof of Lemma 5, we can see that the conclusion of Lemma 5 also holds for (1). Our original aim was to obtain the global well-posedness of (1), but the dissipative term Δu cannot dominate the nonlinear term (u · ∇)u. So, we introduce the dissipative term |u|2u, which will also play an important role in obtaining ergodicity.

4. Global Existence

Theorem 7. Under the conditions of Lemma 2, for satisfying (12), when ϑ > 1/16, one has

()
Consequently, one gets the existence of a global solution belonging to .

Proof. Let be a sequence of vectors which satisfies and ,  i = 1,2, n ≥ 1, such that

()
in sense of . Let {Wn} n≥1 be a sequence of regular process, such that
()
in C(T × D) when a = 0 or a = 1. For h = (h1, h2), hiC([0, T] × D; ),  , where . Then, by (74), we have
()
()
If vn satisfies
()
then, vn is regular, such that
()
Taking the inner product of (78) with vn, we have
()
For simplicity, we calculate the third term on the left hand side of (79) first as follows:
()
where . For I1, we have
()
In the following, we estimate the four terms for I1, respectively. For the first term,
()
For the second term, by (75), we have
()
Similarly, for the third term,
()
For the last term, by (75) and (76),
()
By (81)–(85), it follows that
()
Similarly,
()
For I3,
()
For the first term on the right hand side of (88), we deduce that
()
where ϵ > 0. For the second term on the right hand side of (88), we have
()
Analogously, for the third term on the right hand side of (88), we see that
()
For the last term, by (75) and (76), we have
()
By (88)–(92), we get
()
Analogously, for I2, it follows that
()
By (80) and the estimates of I1, I2, I3, and I4, see (86), (87), (93), and (94), we have
()
For the last term on the left hand side of (79), we have
()
By (79), (95), and (96), we get
()
Rearranging the above inequality, we deduce that
()
Let ϵ ∈ (1/(4ϑ), 4), and let ɛ be small enough that
()
So, we integrate with respect to t on both sides of (98) to obtain
()
where Cϵ = 2(1 − ϵ/4 − 14ɛ). By Gronwall's inequality, we arrive at
()
By (100) and (101), we have
()
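The Gronwall inequality is used here and repeatedly below; for convenience, we recall its differential form: if y is absolutely continuous on [0, T] and
\[
y'(t)\le a(t)\,y(t)+b(t)\quad\text{for a.e. } t\in[0,T],\qquad\text{then}\qquad
y(t)\le y(0)\,e^{\int_0^{t}a(s)\,ds}+\int_0^{t}e^{\int_s^{t}a(r)\,dr}\,b(s)\,ds.
\]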
Multiplying both sides of (78) by Avn and integrating with respect to x ∈ D, we have
()
which is equivalent to
()
We first estimate the second term on the right hand side of (104) as follows:
()
For J1, we have
()
For k1, we have
()
By interpolation inequality, there exists some C > 0, such that
()
Then,
()
where the last inequality follows from (101). For k2, we deduce that
()
For k3, we arrive at
()
For k4, we obtain
()
By (106) and (109)–(112),
()
Similarly, for J4, we infer that
()
For J2, we have
()
By interpolation inequality and (101), we deduce that
()
For l2, we have
()
Similarly, for l3,
()
As for l4, we get
()
By (115)-(119), we arrive at
()
Analogously to J2, we have
()
By (105) and the estimates of J1J4, see (113), (114), (120), and (121), we get that
()
For the first term on the right hand side of (104), we have
()
By (104), (122), and (123),
()
By the Gronwall inequality, we get
()
Letting n → ∞, by Fatou's lemma,
()

5. Invariant Measures

5.1. Existence

In this section, we will establish the existence of an invariant measure for (2). Analogously to [24], we extend the Wiener process W(t) to the whole real line by setting
()
where W1(t) is another H-valued Wiener process satisfying conditions in Lemma 2 and being independent of W(t). For any τ ≥ 0, we consider the following equation:
()
By Theorem 7, we know that there exists a unique solution. In order to obtain the invariant measure, we should show that the family of laws {(uτ(0))}τ≥0 is tight. Since the embedding H1+δ ⊂ H1 is compact for any δ > 0, we only need to show that {(uτ(0))}τ≥0 is bounded in probability in H1+δ. As we know,
()
is the mild solution of (8) with the following initial condition:
()
Making the classical change of variable vτ(t) = uτ(t) − WA(t), (128) is equivalent to
()
with initial condition
()
In order to get the invariant measure of (131), it is enough to show that vτ(0) is bounded in probability in H1+δ, for some δ > 0. That is what we have to do in Theorem 8 below.

Theorem 8. Under the conditions of Lemma 2, when ϑ > 1/4, there exists an invariant measure for (2).

Proof. Multiplying (131) by vτ and integrating on D, we get

()
For the third term on the left hand side of (133), we deduce that
()
Substituting (134) into (133), we have
()
For the third term on the right hand side of (135), we get by the Young inequality that
()
For the last term on the right hand side of (135),
()

Since vτ(t) is a vector field, we denote it by , where is a real-valued function, i = 1, 2. For r1, we have

()
Similarly for r2,
()
Analogously to r1, we deduce that
()
For r4, we have
()
Since {A1/2WA(t)} t is a Gaussian process, we infer that
()
Then, with the proof of Lemma 2, we know that is continuous with respect to t. By (137)–(141), we have
()
By (135), (136), and (143), we arrive at
()
It is equivalent to
()
Since ϑ > 1/4, let ɛ be small enough, such that
()
Then, the above estimates can be changed into
()
By the Gronwall inequality, we get
()
Similarly to the argument of [26], we will prove that has at most polynomial growth as t → −∞ a.s. So, we conclude that
()
Multiplying eδt on both sides of (147) and integrating with respect to t, we have
()
As
()
by (149), we have
()
By Theorem 7, we know that problem (131) has a unique mild solution, which has the following form:
()
Then, for any ζ ∈ (0, θ)∩(0, 1/4), where the θ is the parameter in Lemma 2,
()
Since
()
then,
()
For z1, we have
()
In the following, we use Theorem 6.13 in chapter two of [27] to estimate them respectively as follows:
()
the last inequality follows by Theorem A.8 in [25], where δ > 0,  , and . So, by Hölder inequality and interpolation inequality, we have
()
For z1,2, we have
()
where
()
Analogously to estimating z1,1, we have
()
Similarly, we can get the same estimates for z1,3 and z1,4. Therefore,
()
Analogously to estimating z1, we can get for z2, z3, and z4 that
()
So, by (163)–(164) and (156), we get
()
For the third term on the right hand side of (154), we obtain
()
since and are bounded for t, τ ∈ (−∞, T], the last inequality follows. For the first term on the right hand side of (154), we have
()
Similarly to [26], we can prove that has at most polynomial growth as τ → −∞. For the reader's convenience, we sketch a proof. By Lemma 2, we know that W(t) − W(s) is a D(Aθ/2)-valued Brownian motion for s ≤ t ≤ 0. So, by the law of the iterated logarithm, we have
()
Obviously, {wn} is an i.i.d. sequence. By the law of large numbers, there exists an integer-valued random variable n0(w) > 0 such that, when n ≥ n0(w), we have
()
This implies that
()
for all n > 0. In other words,
()
when s ≤ t ≤ [s] + 1. By the law of the iterated logarithm, we have
()
for some positive random variable. By Theorem 5.14 in [23], we know that
()
So, we have that
()
since s ≤ [t] − 1, the fourth inequality follows. By (167) and (174), we know that
()
If we let ζ = 1/2 < θ, then, repeating the argument of (174), we can see that also has at most polynomial growth as t → −∞ a.s., since we have the Sobolev embedding H3/2 ⊂ W1,4. Considering the second term on the right hand side of (154), by (165),
()
where the last inequality follows by (152). Analogously, we can prove that
()
where we used (149) and (152) for the last inequality. By (154) and (175)–(177), we get
()
for some positive random variable ξ(w). As the embedding H1+δ ⊂ H1 is compact, by the Prokhorov theorem we know that the family of laws of (vτ(0))τ≥0, taking values in H1, is tight. Since vτ(0) = uτ(0) − WA(0), the same is true for the family of laws of (uτ(0))τ≥0 in the same space. For t ≥ 0, set
()
where . Following the arguments in [24], for all t0 < s < t and all , by proving
()
we can show that u is a Markov process. Here, ℱs is the σ-algebra generated by W(r) for r ≤ s. So, (Pt)t≥0 is the Markov semigroup. Define the dual semigroup on the space of probability measures on as follows:
()
Let μτ be the law of uτ(0), which is the solution of (2) with initial condition u(−τ) = 0. Then, we have
()
where the second equality follows from the fact that u(τ, ·; 0, 0) and uτ(0) have the same law. Therefore,
()
Since (μτ)τ≥0 is tight, by the Prokhorov theorem we know that (μτ)τ≥0 is relatively compact. We can choose a subsequence of (μτ)τ≥0, denoted by , such that, for some μ ∈ P(Hσ),
()

5.2. Uniqueness

The main result of this part is as follows.

Theorem 9. Assume θ > 1/2 in Lemma 2 and ϑ > 1/4; then,

  • (i)

    the stochastic Burgers equation (2) has a unique invariant measure μ;

  • (ii)

    for all and all Borel measurable functions φ, , such that ,

    ()

  • (iii)

    for every Borel measure μ* on , one has that

    ()

where ∥·∥TV stands for the total variation of a measure. In particular, one has that
()
for every Borel set (the Borel σ-algebra of ).

In order to prove Theorem 9, we only need Theorem 10 below; see [28, Theorem 4.2.1]. We define P(t, x, ·), t > 0, , to be the transition probability measure; that is,
()
for .

Theorem 10. Assume that the probability measures , are all equivalent, in the sense that they are mutually absolutely continuous. Then, Theorem 9 holds true.

In the following, we will prove the irreducibility and the strong Feller property in , in order to get the equivalence of the measures P(t, x, ·). We state these two properties below. For , let
()
  • (I)

    For any , such that for all ɛ > 0,

    ()

  • for each t > 0.

  • (S)

    For all , every t > 0, and all such that xnx in , it holds that

    ()

Before checking the condition (I), we need Lemma 11 below. For and , set
()
where v(t, x, ϕ) is the solution of the following equation:
()
for t ∈ [0, T], with initial condition v(0) = x. As proved previously, this equation has a unique solution as follows:
()
when and .

Lemma 11. Define Ψ(ϕ) = u(·, x, ϕ); then,

  • (i)

    the mapping

    ()

  • is continuous, where C0([0, T]; B)≔{hC([0, T]; B); h(0) = 0} for Banach space B;

  • (ii)

    for every x, yH3/2 and T > 0 there exists such that .

Proof. Part (i) is proved by (A.30) in the Appendix. To prove (ii), let x, y ∈ H3/2 and T > 0, and define as

()
Obviously, . Define as the solution of the following equation:
()
with initial condition ; then . Set ; then it satisfies all the requirements of the lemma.

Proposition 12. Under the conditions of Theorem 9, the irreducibility property (I) is satisfied.

Proof. Let x ∈ H3/2 and be as in (ii) of Lemma 11. By the above lemma, for every ɛ > 0 we can find δ > 0 such that

()
implies that
()
Let θ > 1/2 in Lemma 2, and denote by z the corresponding Ornstein-Uhlenbeck process satisfying the conditions in the lemma; then . Choose δ1 > 0 such that δ1 < δ and
()
Then, for , we have that
()
Recall now that the solution u of the stochastic Burgers equation is equal to Ψ(z), z being the Ornstein-Uhlenbeck process. Then, it remains to show that
()
But this is obviously true. So far, we have proved that, for all t > 0, all x, y ∈ H3/2, and all ɛ > 0,
()
Next, we will prove that the above inequality also holds for all . Indeed, for 0 < h < t, by the Chapman-Kolmogorov equation, we have
()
Since P(t − h, x0, H3/2) = 1, we can extend (204) to the case of all . If this were not true, there would exist t0 > 0, , ɛ > 0 such that
()
Then, we can choose y1H3/2,  ɛ1 > 0 such that B(y1, ɛ1) ⊂ B(y0, ɛ). By (204), we have
()
which is contrary to (205).

We now check condition (S).

We will first obtain the strong Feller property in for the modified Burgers equation (208) below and then let R → ∞ to check condition (S).

Fix R > 0 and let KR : [0, ∞[ → [0, ∞[ satisfy KR ∈ C1(ℝ+) such that |KR| ≤ 1,   and
()
Consider the following equation:
()

Proposition 13. There exists a unique mild solution of (208), which is a Markov process with the Feller property in ; that is, for every R > 0 and t > 0, there exists a constant L = L(t, R) > 0 such that

()
holds for all , and all , where denotes the transition probabilities corresponding to (208).

Proof. The proof of existence and uniqueness is similar to that in Section 2. Letting ϕ1 = ϕ2 in (A.28), by the Gronwall inequality we know that uR is Lipschitz continuous with respect to the initial value. Using the method of Proposition 4.3.3 in [24], we can prove that the solution is a Markov process. To prove the Feller property, we first consider the following Galerkin approximations of (208). Let Pn be the orthogonal projection in H defined as . Clearly, HnPnH for every n. Consider the following equation in Hn:

()
with initial condition . This is a finite-dimensional equation with globally Lipschitz nonlinear functions, so it has a unique progressively measurable solution with P-a.e. trajectory , which is also a Markov process in Hn with associated semigroup defined as
()
for all xHn and ϕCb(Hn). For every R > 0, t > 0, we can prove that there exists a constant L = L(t, R) > 0 such that
()
holds for all n, x, y ∈ Hn, and all ϕ ∈ Cb(Hn) with . Indeed, the following remarkable formula holds true for the differential in x (see [29]):
()
for all h ∈ Hn, where βn is an n-dimensional standard Wiener process with incremental covariance PnQ, and Q is the covariance operator of W(t). Obviously, Q is a nonnegative, self-adjoint, Hilbert-Schmidt operator with an inverse. Since the eigenvalues αn of the operator A in two space dimensions behave like n, taking θ = 1/2 + ɛ for some ɛ > 0 in Lemma 2, we have D(A) ⊂ (Q) ⊂ D(A3/4), where (Q) is the image of Q.
()
Since for yHn
()
it follows that
()
where the last inequality follows by the Estimate 4 of the Appendix (note that C(R) is independent of xHn and n). Indeed, is given by vn(t, x) + Pnz(t), where z is the Ornstein-Uhlenbeck process, and vn is the solution of (A.2). Therefore,
()
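For the reader's convenience, the differentiation formula invoked above (the display following the citation of [29]) is presumably the finite-dimensional Bismut-Elworthy-Li formula, which for the nondegenerate Galerkin system reads, up to the normalization used in [29],
\[
D_{h}P^{n}_{t}\varphi(x)=\frac{1}{t}\,\mathbb{E}\Bigl[\varphi\bigl(u_{n}(t;x)\bigr)\int_0^{t}\bigl\langle (P_nQ)^{-1/2}\eta^{h}_{n}(s),\,d\hat{\beta}_{n}(s)\bigr\rangle\Bigr],\qquad h\in H_n,
\]
where η_n^h(s) = D_x u_n(s; x)h is the derivative of the Galerkin solution in the direction h (as in Estimate 4 of the Appendix) and \hat{β}_n ≔ (P_nQ)^{-1/2}β_n is a standard Wiener process in H_n; the Lipschitz bound for P^n_t ϕ then follows from the Cauchy-Schwarz inequality together with the bound on η_n^h.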
In the next step, we will let n → ∞ to get the Feller property for (208). Let and be given. From Remark A.1 in the Appendix, we know that converges to u(R)(t) strongly in , P-a.s. By the boundedness and continuity of ϕ as well as the Lebesgue dominated convergence theorem, we have
()
which implies that for some subsequence nk,
()
for a.e. t ∈ [0, T]. Taking , by the previous argument we can find a subsequence nk such that the above almost sure convergence in t ∈ [0, T] holds true for both x and y.

Thus, from (212), we have

()
for a.e. t ∈ [0, T]. As u(R)(t; x) has continuous trajectories with values in , the above inequality holds for all t ∈ [0, T].

Proposition 14. Under the conditions of Theorem 9, (S) holds true.

Proof. Take satisfying xnx in H1. For every R > 0, we have that

()
as n → ∞, by Proposition 13. Then,
()
where the inequality follows from the consistency of u(t; x) and u(R)(t; x) when , and the limit follows from (A.21). Therefore,
()
as n → ∞.

6. Example

Our theory can be applied to stochastic reaction-diffusion equations or to the stochastic real-valued Ginzburg-Landau equation in high dimensions, for example:
()
where u(t, x) = (u1(t, x), u2(t, x)) is the velocity field, Δ denotes the Laplace operator, W stands for a Q-Wiener process, and D is a regular bounded open domain of ℝ².
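To illustrate the kind of system covered by this example, the following is a minimal numerical sketch, not part of the analysis above: an explicit Euler-Maruyama discretization of a stochastic Ginzburg-Landau-type system du = (Δu − ϑ|u|2u)dt + dW on the unit square with homogeneous Dirichlet boundary conditions. The grid, time step, value of ϑ, and the space-time white-noise increment used as a crude stand-in for the Q-Wiener noise are all illustrative assumptions.

import numpy as np

# Illustrative Euler-Maruyama scheme for du = (Laplacian u - theta*|u|^2 u) dt + dW
# on the unit square with homogeneous Dirichlet boundary conditions.
# All parameters below are illustrative choices, not taken from the paper.

N = 64                     # interior grid points per direction
h = 1.0 / (N + 1)          # mesh size
theta = 0.5                # coefficient of the dissipative term
dt = 0.2 * h**2            # explicit-stability-sized time step
sigma = 0.1                # noise amplitude (stand-in for the Q-Wiener increment)
n_steps = 500

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal((2, N, N))   # two components (u1, u2)

def dirichlet_laplacian(w):
    """5-point Laplacian; zero padding enforces u = 0 on the boundary."""
    p = np.pad(w, 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * w) / h**2

for _ in range(n_steps):
    speed2 = u[0]**2 + u[1]**2             # |u|^2 at every grid point
    dW = sigma * np.sqrt(dt) * rng.standard_normal(u.shape)
    u = u + dt * (np.stack([dirichlet_laplacian(u[0]), dirichlet_laplacian(u[1])])
                  - theta * speed2 * u) + dW

print("mean energy:", 0.5 * np.mean(u[0]**2 + u[1]**2))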

Acknowledgments

The authors are thankful to the referee for careful reading and insightful comments which led to many improvements of the earlier version. This work was partially supported by the Fundamental Research Funds for the Central Universities (Grant no. CQDXWL-2013-003).

    Appendix

    Fix R > 0 and let KR : [0, ∞[ → [0, ∞[ satisfy KR ∈ C1(ℝ+) such that and
    ()
    Consider the following equation:
    ()
    where ϕC([0, T]; H3/2).

    Estimate 1. We have the following estimate in H for (A.2):

    ()
    where C(a, b, c) indicates a constant C depending on a, b, c. Analogously to the derivation of (147), we get
    ()
    Therefore, for all t ∈ [0, T],
    ()
    Then, we get (A.3).

    Estimate 2. We obtain the following estimate in for (A.2):

    ()
    Since we have
    ()
    the equation is equivalent to
    ()
    Denote un ≔ vn(t) + Pnϕ(t) and ; then
    ()
    As
    ()
    so, we have that
    ()
    For the first term on the right hand side of (A.3), we have
    ()
    Substituting (A.11) and (A.12) into (A.8), we get
    ()
    Denote
    ()
    Then,
    ()
    where
    ()
    For I1, we have
    ()
    where
    ()
    So,
    ()

    Analogously, we can get the same estimates for I2 and I3. Taking advantage of the estimates for I1, I2, and I3, we have

    ()
    By the Gronwall inequality and (A.3), we get (A.6).

    Remark A.1. It is standard to show that, for and ϕ ∈ C([0, T]; H3/2), there exists a subsequence which converges to some v strongly in L2([0, T]; H1), weakly in L2([0, T]; H2), and weak-star in L∞([0, T]; H1). Therefore, we have

    ()

    Estimate 3. We compare solutions only in the case R = ∞. Let be two solutions with the same initial condition x ∈ H1 but with different functions ϕ1, ϕ2; then there exists a constant such that

    ()
    for every n, xH1, ϕ1, ϕ2, T. We have
    ()
    with initial condition , for i = 1,2. Set . Then,
    ()

    Taking the inner product in H with Aηn, we have

    ()

    For the third term on the left hand side of (A.23), we have

    ()

    Similarly, we can get

    ()
    ()

    By (A.23)–(A.27), we have

    ()

    So, by the Gronwall inequality and (A.6), we get (A.21). Since, by (A.6), converges weak-star to vi in for i = 1, 2, we have

    ()

    Estimate 4. Let us consider only the case R ∈ (0, ∞), and denote by vn(t) the solution to (A.2). Let ξn be the differential of the mapping x ↦ vn in the direction h at the point x, defined, for given x, h ∈ H, as follows:

    ()

    Set also

    ()

    so that ξn is also the differential of the mapping x ↦ un(t; x) in the direction h at the point x. Thus, ξn satisfies

    ()

    So,

    ()

    Therefore,

    ()

    By the Gronwall inequality and (A.6), we have

    ()

    And therefore, using again the previous inequality,

    ()
