Volume 2013, Issue 1 342038
Research Article
Open Access

Representation Theorem for Generators of BSDEs Driven by G-Brownian Motion and Its Applications

Kun He (Corresponding Author)

Department of Mathematics, Donghua University, 2999 North Renmin Road, Songjiang, Shanghai 201620, China

Mingshang Hu

School of Mathematics, Shandong University, 27 Shanda Nanlu, Jinan 250100, China
First published: 26 December 2013
Academic Editor: Litan Yan

Abstract

We obtain a representation theorem for the generators of BSDEs driven by G-Brownian motion (G-BSDEs) and then use it to establish a converse comparison theorem for G-BSDEs and some equivalent results for the nonlinear expectations generated by G-BSDEs.

1. Introduction

Let (Ω, ℱ, P) be a probability space and, for fixed T ∈ [0, +∞), let (Bt)0≤t≤T be a standard Brownian motion and let ℱt be the augmentation of σ{Bs, 0 ≤ s ≤ t}. In this setting, Pardoux and Peng [1] introduced backward stochastic differential equations (BSDEs) and proved the existence and uniqueness of their solutions. In 1997, Peng [2] introduced g-expectations based on BSDEs. One of the important properties of g-expectations is the comparison theorem, or monotonicity. Chen [3] first considered a converse result for BSDEs in the case of equality. After that, Briand et al. [4] obtained a converse comparison theorem for BSDEs in the general case; they also derived a representation theorem for the generator g. Following this paper, Jiang [5] established a more general representation theorem and then, in another paper [6], proved a more general converse comparison theorem. The representation theorem is an important tool for solving the converse comparison problem and other problems (see Jiang [7]).

Peng [8–13] defined G-expectations and G-Brownian motions (G-BMs) and proved the representation theorem of the G-expectation via a set of mutually singular probability measures; this differs from the case of nonlinear g-expectations, since g-expectations are associated with a family of probability measures that are absolutely continuous with respect to the reference measure P. Soner et al. [14] obtained an existence and uniqueness result for second-order BSDEs (2BSDEs). Recently, Hu et al. [15] proved another existence and uniqueness result for BSDEs driven by G-Brownian motion (G-BSDEs).

An important advantage of G-BSDEs is that they provide a natural way to define nonlinear expectations. Hu et al. [16] gave a comparison theorem for G-BSDEs and studied the properties of the corresponding nonlinear expectations. In this paper, we establish a representation theorem for generators of G-BSDEs and then derive a converse comparison theorem for G-BSDEs and some equivalent results for the nonlinear expectations generated by G-BSDEs. The paper is organized as follows. In Section 2, we review some basic concepts and results on G-expectations. In Section 3, we prove the representation theorem for generators of G-BSDEs. In Section 4, we present applications of the representation theorem, including the converse comparison theorem and some equivalent results for nonlinear expectations generated by G-BSDEs.

2. Preliminaries

We review some basic notions and results on G-expectation, the related spaces of random variables, and backward stochastic differential equations driven by a G-Brownian motion. The reader may refer to [10, 13, 15, 17–19] for more details.

Definition 1. Let Ω be a given set and let ℋ be a vector lattice of real-valued functions defined on Ω, namely, c ∈ ℋ for each constant c and |X| ∈ ℋ if X ∈ ℋ. ℋ is considered as the space of random variables. A sublinear expectation 𝔼̂ on ℋ is a functional 𝔼̂ : ℋ → ℝ satisfying the following properties: for all X, Y ∈ ℋ, one has

  • (a)

    monotonicity: if X ≥ Y, then 𝔼̂[X] ≥ 𝔼̂[Y];

  • (b)

    constant preservation: 𝔼̂[c] = c for each constant c;

  • (c)

    subadditivity: 𝔼̂[X + Y] ≤ 𝔼̂[X] + 𝔼̂[Y];

  • (d)

    positive homogeneity: 𝔼̂[λX] = λ𝔼̂[X] for each λ ≥ 0. The triple (Ω, ℋ, 𝔼̂) is called a sublinear expectation space.
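A standard example of such a functional is the upper expectation over a family of probability measures: if {Pθ : θ ∈ Θ} is a family of probability measures on Ω such that every X ∈ ℋ is Pθ-integrable for each θ (an integrability assumption made here only for the sake of illustration), one may set
\[
  % upper expectation over the family \{P_\theta\}_{\theta\in\Theta}
  \widehat{\mathbb{E}}[X] \;:=\; \sup_{\theta\in\Theta} E_{P_\theta}[X], \qquad X\in\mathcal{H}.
\]
Properties (a)–(d) are then inherited from the linear expectations EPθ and the supremum; for instance, subadditivity follows from supθ EPθ[X + Y] ≤ supθ EPθ[X] + supθ EPθ[Y].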

Definition 2. Let X1 and X2 be two n-dimensional random vectors defined, respectively, in sublinear expectation spaces (Ω1, ℋ1, 𝔼̂1) and (Ω2, ℋ2, 𝔼̂2). They are called identically distributed, denoted by X1 ~ X2, if 𝔼̂1[φ(X1)] = 𝔼̂2[φ(X2)] for all φ ∈ Cb·Lip(ℝn), where Cb·Lip(ℝn) denotes the space of bounded and Lipschitz functions on ℝn.

Definition 3. In a sublinear expectation space (Ω, ℋ, 𝔼̂), a random vector Y = (Y1, …, Yn), Yi ∈ ℋ, is said to be independent of another random vector X = (X1, …, Xm), Xi ∈ ℋ, under 𝔼̂[·], denoted by Y ⊥ X, if for every test function φ ∈ Cb·Lip(ℝm × ℝn) one has 𝔼̂[φ(X, Y)] = 𝔼̂[𝔼̂[φ(x, Y)]x=X].

Definition 4 (G-normal distribution). A d-dimensional random vector X = (X1, …, Xd) in a sublinear expectation space (Ω, ℋ, 𝔼̂) is called G-normally distributed if for each a, b ≥ 0 one has

aX + bX̄ ~ (a² + b²)^{1/2} X, ()
where X̄ is an independent copy of X; that is, X̄ ~ X and X̄ ⊥ X. Here, the letter G denotes the function
G(A) := (1/2)𝔼̂[⟨AX, X⟩], A ∈ 𝕊d, ()
where 𝕊d denotes the collection of d × d symmetric matrices.
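For orientation, in dimension one the function G takes an explicit form. Writing σ̄² := 𝔼̂[X²] and σ̲² := −𝔼̂[−X²] (standard notation, recalled here because it is used in the examples below), one has
\[
  G(a) \;=\; \tfrac{1}{2}\,\widehat{\mathbb{E}}\bigl[aX^{2}\bigr]
        \;=\; \tfrac{1}{2}\bigl(\bar{\sigma}^{2}a^{+} - \underline{\sigma}^{2}a^{-}\bigr),
        \qquad a \in \mathbb{R},
\]
and one writes X ~ N(0, [σ̲², σ̄²]); when σ̲ = σ̄ this is the classical normal distribution N(0, σ̄²).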

Peng [13] showed that X = (X1, …, Xd) is G-normally distributed if and only if, for each φ ∈ Cb·Lip(ℝd), the function u(t, x) := 𝔼̂[φ(x + √t X)], (t, x) ∈ [0, ∞) × ℝd, is the solution of the following G-heat equation:
∂tu − G(D²u) = 0, u(0, x) = φ(x). ()
The function G(·) : 𝕊d → ℝ is a monotonic, sublinear mapping on 𝕊d, and this implies that there exists a bounded, convex, and closed subset Γ ⊂ 𝕊d₊ such that
G(A) = (1/2) sup_{B∈Γ} tr[AB], ()
where 𝕊d₊ denotes the collection of nonnegative elements in 𝕊d.
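As an illustration of the G-heat equation, still in dimension one with G(a) = (1/2)(σ̄²a⁺ − σ̲²a⁻) as above and with ξ denoting a classical standard normal random variable, the solution can be written explicitly when the data φ is convex or concave:
\[
  u(t,x)=E\bigl[\varphi\bigl(x+\sqrt{t}\,\bar{\sigma}\,\xi\bigr)\bigr]\ \ \text{if } \varphi \text{ is convex},
  \qquad
  u(t,x)=E\bigl[\varphi\bigl(x+\sqrt{t}\,\underline{\sigma}\,\xi\bigr)\bigr]\ \ \text{if } \varphi \text{ is concave},
\]
since u(t, ·) then remains convex (resp., concave), so that the second space derivative does not change sign and the G-heat equation reduces to a classical heat equation with diffusion coefficient σ̄² (resp., σ̲²).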

In this paper, we only consider nondegenerate G-normal distributions; that is, there exists some σ̲² > 0 such that G(A) − G(B) ≥ (σ̲²/2) tr[A − B] for any A ≥ B.
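In dimension one this nondegeneracy requirement simply means σ̲ > 0; indeed, for a ≥ b a case-by-case check (a ≥ b ≥ 0, a ≥ 0 ≥ b, 0 ≥ a ≥ b) gives
\[
  G(a)-G(b)
  \;=\;\tfrac{1}{2}\bigl(\bar{\sigma}^{2}a^{+}-\underline{\sigma}^{2}a^{-}\bigr)
       -\tfrac{1}{2}\bigl(\bar{\sigma}^{2}b^{+}-\underline{\sigma}^{2}b^{-}\bigr)
  \;\ge\;\tfrac{\underline{\sigma}^{2}}{2}\,(a-b).
\]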

Definition 5. (i) Let Ω denote the space of ℝd-valued continuous paths (ωt)t≥0 on [0, ∞) with ω0 = 0 and let Bt(ω) = ωt be the canonical process. Set

Lip(Ω) := {φ(Bt1, …, Btn) : n ∈ ℕ, t1, …, tn ∈ [0, ∞), φ ∈ Cb·Lip(ℝd×n)}. ()
Let G : 𝕊d → ℝ be a given monotonic and sublinear function. The G-expectation is a sublinear expectation defined by
𝔼̂[φ(Bt1 − Bt0, Bt2 − Bt1, …, Btn − Btn−1)] := 𝔼̃[φ(√(t1 − t0) ξ1, …, √(tn − tn−1) ξn)] ()
for all φ ∈ Cb·Lip(ℝd×n) and 0 = t0 < t1 < ⋯ < tn < ∞, where ξ1, …, ξn are identically distributed d-dimensional G-normally distributed random vectors in a sublinear expectation space (Ω̃, ℋ̃, 𝔼̃) such that ξi+1 is independent of (ξ1, …, ξi) for every i = 1, …, n − 1. The corresponding canonical process (Bt)t≥0 is called a G-Brownian motion.

(ii) For each fixed t ∈ [0, ∞), the conditional G-expectation 𝔼̂t for ξ = φ(Bt1 − Bt0, Bt2 − Bt1, …, Btn − Btn−1) ∈ Lip(Ω), where without loss of generality we suppose ti = t, is defined by

𝔼̂t[φ(Bt1 − Bt0, …, Btn − Btn−1)] := φ̃(Bt1 − Bt0, …, Bti − Bti−1), ()
where
φ̃(x1, …, xi) := 𝔼̂[φ(x1, …, xi, Bti+1 − Bti, …, Btn − Btn−1)]. ()

For each fixed T > 0, we set
Lip(ΩT) := {φ(Bt1, …, Btn) : n ∈ ℕ, t1, …, tn ∈ [0, T], φ ∈ Cb·Lip(ℝd×n)}. ()
For each p ≥ 1, we denote by L_G^p(Ω) (resp., L_G^p(ΩT)) the completion of Lip(Ω) (resp., Lip(ΩT)) under the norm ‖ξ‖_{L_G^p} := (𝔼̂[|ξ|^p])^{1/p}. It is easy to check that L_G^q(ΩT) ⊂ L_G^p(ΩT) for 1 ≤ p ≤ q and that 𝔼̂[·] (resp., 𝔼̂t[·]) can be extended continuously to L_G^1(ΩT).
For each fixed a ∈ ℝd, B^a_t := ⟨a, Bt⟩ is a 1-dimensional Ga-Brownian motion, where Ga(α) = (1/2)(σ²_{aaᵀ}α⁺ − σ²_{−aaᵀ}α⁻), σ²_{aaᵀ} = 2G(aaᵀ), and σ²_{−aaᵀ} = −2G(−aaᵀ). Let π^N_t = {t^N_0, t^N_1, …, t^N_N}, N = 1, 2, …, be a sequence of partitions of [0, t] such that max_j |t^N_{j+1} − t^N_j| → 0; the quadratic variation process of B^a is defined by
⟨B^a⟩_t := lim_{N→∞} Σ_{j=0}^{N−1} (B^a_{t^N_{j+1}} − B^a_{t^N_j})². ()
For each fixed a, ā ∈ ℝd, the mutual variation process of B^a and B^{ā} is defined by
⟨B^a, B^{ā}⟩_t := (1/4)(⟨B^{a+ā}⟩_t − ⟨B^{a−ā}⟩_t). ()
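Unlike the classical case, ⟨B^a⟩ is not deterministic; with the notation above it is only squeezed between the two extreme volatilities (a standard consequence of the construction, recorded here because it is used implicitly in the estimates below):
\[
  \sigma_{-aa^{T}}^{2}\,(t-s)\;\le\;\langle B^{a}\rangle_{t}-\langle B^{a}\rangle_{s}\;\le\;\sigma_{aa^{T}}^{2}\,(t-s),
  \qquad 0\le s\le t,\ \ \text{q.s.}
\]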

Definition 6. For fixed T > 0, let M_G^0(0, T) be the collection of processes of the following form: for a given partition {t0, …, tN} = πT of [0, T],

ηt(ω) = Σ_{j=0}^{N−1} ξj(ω) 1_{[tj, tj+1)}(t), ()
where ξj ∈ Lip(Ωtj), j = 0, 1, 2, …, N − 1. For p ≥ 1, one denotes by M_G^p(0, T) and H_G^p(0, T) the completions of M_G^0(0, T) under the norms ‖η‖_{M_G^p} := (𝔼̂[∫_0^T |ηs|^p ds])^{1/p} and ‖η‖_{H_G^p} := {𝔼̂[(∫_0^T |ηs|² ds)^{p/2}]}^{1/p}, respectively.

For each η ∈ M_G^1(0, T), we can define the integrals ∫_0^T ηs ds and ∫_0^T ηs d⟨B^a, B^{ā}⟩s for each a, ā ∈ ℝd. For each η ∈ H_G^p(0, T; ℝd) with p ≥ 1, we can define Itô's integral ∫_0^T ηs dBs.
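For a step process as in Definition 6 (taking d = 1 and a = 1 to keep the notation light), the stochastic integral is the usual Riemann-type sum, and the basic moment identity behind the construction is the following (standard facts, recalled as an example):
\[
  \int_{0}^{T}\eta_{s}\,dB_{s}\;=\;\sum_{j=0}^{N-1}\xi_{j}\bigl(B_{t_{j+1}}-B_{t_{j}}\bigr),
  \qquad
  \widehat{\mathbb{E}}\Bigl[\Bigl(\int_{0}^{T}\eta_{s}\,dB_{s}\Bigr)^{2}\Bigr]
  \;=\;\widehat{\mathbb{E}}\Bigl[\int_{0}^{T}\eta_{s}^{2}\,d\langle B\rangle_{s}\Bigr]
  \;\le\;\bar{\sigma}^{2}\,\widehat{\mathbb{E}}\Bigl[\int_{0}^{T}\eta_{s}^{2}\,ds\Bigr],
\]
the last inequality holding because d⟨B⟩s ≤ σ̄² ds quasi-surely.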

Let S_G^0(0, T) := {h(t, Bt1∧t, …, Btn∧t) : t1, …, tn ∈ [0, T], h ∈ Cb·Lip(ℝ^{1+d×n})}. For p ≥ 1 and η ∈ S_G^0(0, T), set ‖η‖_{S_G^p} := {𝔼̂[sup_{t∈[0,T]} |ηt|^p]}^{1/p}. Denote by S_G^p(0, T) the completion of S_G^0(0, T) under the norm ‖·‖_{S_G^p}.

We consider the following type of G-BSDEs (in this paper we always use the Einstein summation convention):
Yt = ξ + ∫_t^T f(s, Ys, Zs) ds + ∫_t^T gij(s, Ys, Zs) d⟨B^i, B^j⟩s − ∫_t^T Zs dBs − (KT − Kt), (13)
where
f(t, ω, y, z), gij(t, ω, y, z) : [0, T] × ΩT × ℝ × ℝd → ℝ ()
satisfy the following properties.
  • (H1)

    There exists some β > 1 such that, for any y, z, f(·, ·, y, z), gij(·, ·, y, z) ∈ M_G^β(0, T).

  • (H2)

    There exists some L > 0 such that

    |f(t, ω, y, z) − f(t, ω, y′, z′)| + Σ_{i,j=1}^d |gij(t, ω, y, z) − gij(t, ω, y′, z′)| ≤ L(|y − y′| + |z − z′|). ()

For simplicity, we denote by 𝔖_G^α(0, T) the collection of processes (Y, Z, K) such that Y ∈ S_G^α(0, T), Z ∈ H_G^α(0, T; ℝd), and K is a decreasing G-martingale with K0 = 0 and KT ∈ L_G^α(ΩT).
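To see the role of the three components, consider the simplest case f = gij ≡ 0 with terminal value ξ ∈ L_G^β(ΩT). The unique solution given by Theorem 8 below then satisfies Yt = 𝔼̂t[ξ] (take conditional expectations in (13), using that the stochastic integral is a symmetric G-martingale and that K is a G-martingale), and evaluating (13) at t and at 0 and subtracting yields the decomposition
\[
  \widehat{\mathbb{E}}_{t}[\xi]\;=\;\widehat{\mathbb{E}}[\xi]+\int_{0}^{t}Z_{s}\,dB_{s}+K_{t},
  \qquad t\in[0,T],
\]
so the decreasing G-martingale K is precisely the component that absorbs the nonlinearity of the conditional G-expectation and vanishes in the classical linear case.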

Definition 7. Let ξ ∈ L_G^β(ΩT) and let f and gij satisfy (H1) and (H2) for some β > 1. A triplet of processes (Y, Z, K) is called a solution of (13) if for some 1 < α ≤ β the following properties hold:

  • (a)

    (Y, Z, K) ∈ 𝔖_G^α(0, T);

  • (b)

    Yt = ξ + ∫_t^T f(s, Ys, Zs) ds + ∫_t^T gij(s, Ys, Zs) d⟨B^i, B^j⟩s − ∫_t^T Zs dBs − (KT − Kt).

Theorem 8 (see [15]). Assume that ξ ∈ L_G^β(ΩT) and that f and gij satisfy (H1) and (H2) for some β > 1. Then, (13) has a unique solution (Y, Z, K). Moreover, for any 1 < α < β, one has Y ∈ S_G^α(0, T), Z ∈ H_G^α(0, T; ℝd), and KT ∈ L_G^α(ΩT).

We have the following estimates.

Proposition 9 (see [15]). Let ξ ∈ L_G^β(ΩT) and let f, gij satisfy (H1) and (H2) for some β > 1. Assume that (Y, Z, K) ∈ 𝔖_G^α(0, T) for some 1 < α < β is a solution of (13). Then, there exists a constant Cα > 0 depending on α, T, G, and L such that

()
where .

Proposition 10 (see [15, 20]). Let α ≥ 1 and δ > 0 be fixed. Then, there exists a constant C depending on α and δ such that

()

Theorem 11 (see [16]). Let (Y^l, Z^l, K^l), l = 1, 2, be the solutions of the following G-BSDEs:

()
where ξl ∈ L_G^β(ΩT), f and gij satisfy (H1) and (H2) for some β > 1, and V^l, l = 1, 2, are RCLL processes such that 𝔼̂[sup_{t∈[0,T]} |V^l_t|^β] < ∞. If ξ1 ≥ ξ2 and (V^1_t − V^2_t)_{0≤t≤T} is an increasing process, then Y^1_t ≥ Y^2_t for t ∈ [0, T].

In this paper, we also need the following assumptions for G-BSDE (13).
  • (H3)

    For each fixed (ω, y, z) ∈ ΩT × ℝ × ℝd, t ↦ f(t, ω, y, z) and t ↦ gij(t, ω, y, z) are continuous.

  • (H4)

    For each fixed (t, y, z) ∈ [0, T) × ℝ × ℝd, f(t, y, z), gij(t, y, z) ∈ L_G^β(Ωt), and

    ()

  • (H5)

    For each (t, ω, y) ∈ [0, T] × ΩT × ℝ, f(t, ω, y, 0) = gij(t, ω, y, 0) = 0.

Assume that ξ ∈ L_G^β(ΩT) and that f and gij satisfy (H1), (H2), and (H5) for some β > 1. Let (Y^{T,ξ}, Z^{T,ξ}, K^{T,ξ}) be the solution of G-BSDE (13) corresponding to ξ, f, and gij on [0, T]. It is easy to check that Y^{T,ξ}_t = Y^{T′,ξ}_t on [0, T] for T′ > T. Following [16], we can define the consistent nonlinear expectation
ℰ^{f,g}_{t,T}[ξ] := Y^{T,ξ}_t, 0 ≤ t ≤ T, ξ ∈ L_G^β(ΩT), ()
and set ℰ^{f,g}[ξ] := ℰ^{f,g}_{0,T}[ξ].
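One reason for imposing (H5) is that it normalizes this nonlinear expectation at zero: for ξ = 0 the triplet (Y, Z, K) ≡ (0, 0, 0) satisfies (13), since the integrands vanish by (H5), so by the uniqueness in Theorem 8
\[
  Y^{T,0}_{t}\equiv 0,\qquad\text{and hence}\qquad \mathcal{E}^{f,g}_{t,T}[0]=0,\qquad t\in[0,T].
\]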

3. Representation Theorem of Generators of G-BSDEs

We consider the following type of G-FBSDEs:
()
()
where hij = hji and gij = gji for 1 ≤ i, j ≤ d.

We now give the main result in this section.

Theorem 12. Let b : ℝn → ℝn, hij : ℝn → ℝn, and σ : ℝn → ℝ^{n×d} be Lipschitz functions and let f and gij satisfy (H1), (H2), (H3), and (H4) for some β > 1. Then, for each (t, x, y, p) ∈ [0, T) × ℝn × ℝ × ℝn and α ∈ (1, β), one has

()

Proof. For each fixed (t, x, y, p) ∈ [0, T) × ℝn × ℝ × ℝn, we write (Yε, Zε, Kε) instead of for simplicity. We have for each γ ≥ 1 (see [16, 19]). Thus, by Theorem 8, G-BSDE (22) has a unique solution (Yε, Zε, Kε) and . We set, for s ∈ [t, t + ε],

()
Applying Itô’s formula to on [t, t + ε], it is easy to verify that solves the following G-BSDE:
()
From Proposition 9,
()
hold for some constant Cα > 0 depending only on α, T, G, and L. By Proposition 10 and the Lipschitz assumption, we obtain
()
where C1 is a constant depending on x, y, p, α, β, T, G, and L. Noting that (see [16, 19]), where C2 depends on T and L, and the following inequality holds:
()
Together with assumption (H4), we get
()
where C3 depends on x, y, p, α, β, T, G, and L. Now, we prove (23). Let us consider
()
where
()
It is easy to check that , where C4 depends on G, L, and T. Thus, by (29), we get
()
which implies . We set
()
By the Lipschitz condition, we can get , where C5 depends on p, G, L, and T. Noting that (see [16, 19]), where C6 depends on L, G, and α, we obtain
()
which implies . Now, we set
()
It is easy to deduce that , where C7 depends on G. Then,
()
Taking limits on both sides of the above inequality and using assumption (H4), we have
()
On the other hand,
()
Then, we have
()
The proof is complete.

4. Some Applications

4.1. Converse Comparison Theorem for G-BSDEs

We consider the following G-BSDEs:
()
where .

We first generalize the comparison theorem in [16].

Proposition 13. Let fl and g^l_{ij} satisfy (H1) and (H2) for some β > 1, l = 1, 2. If , then, for each ξ ∈ L_G^β(ΩT), one has for t ∈ [0, T].

Proof. From the above G-BSDEs, we have

()
where
()
By the assumption, it is easy to check that (Vt)0≤t≤T is a decreasing process. Thus, using Theorem 11, we obtain for t ∈ [0, T].

Remark 14. Suppose d = 1 and let f1 = 10|z|, f2 = |z|, g1 = |z|, and g2 = 2|z|. It is easy to check that f2 − f1 + 2G(g2 − g1) ≤ 0. Thus, a comparison between the corresponding nonlinear expectations does not imply f2 ≤ f1 and g2 ≤ g1.
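For the reader's convenience, here is the computation behind Remark 14, using the one-dimensional form of G recalled in Section 2 and assuming the normalization σ̄² = 2G(1) ≤ 9 (for example, σ̄ = 1):
\[
  g_{2}-g_{1}=|z|\ge 0
  \;\Longrightarrow\;
  2G(g_{2}-g_{1})=\bar{\sigma}^{2}|z|,
  \qquad
  f_{2}-f_{1}+2G(g_{2}-g_{1})=(\bar{\sigma}^{2}-9)\,|z|\le 0,
\]
while g2 = 2|z| ≥ g1 = |z|, so the termwise inequality between g2 and g1 fails even though the combined condition holds.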

Now, we give the converse comparison theorem.

Theorem 15. Let fl and g^l_{ij} satisfy (H1), (H2), (H3), (H4), and (H5) for some β > 1, l = 1, 2. If for each t ∈ [0, T] and ξ ∈ L_G^β(ΩT), then q.s.

Proof. For simplicity, we take the notation , l = 1, 2. For each fixed (t, y, z) ∈ [0, T) × ℝ × ℝd, let us consider

()
where hij = hji ∈ ℝd. By Theorem 12, we have, for each α ∈ (1, β),
()
Since ,
()
Take a hij such that . Therefore, q.s. By the assumptions (H2) and (H3), it is easy to deduce that q.s.

In the following, we use the notation , l = 1,2.

Corollary 16. Let fl and g^l_{ij} be deterministic functions satisfying (H1), (H2), (H3), and (H5) for some β > 1, l = 1, 2. If for each , then .

Proof. Taking ηε as in the proof of Theorem 15, since fl and g^l_{ij} are deterministic, we get , for l = 1, 2, and the argument in the proof of Theorem 15 still applies.

4.2. Some Equivalent Relations

We consider the following G-BSDE:
()
where gij = gji. We use the notation .

Proposition 17. Let f and gij satisfy (H1), (H2), (H3), (H4), and (H5) for some β > 1 and fix α ∈ (1, β). Then, one has

  • (1)

    for t ∈ [0, T], , and if and only if for each t ∈ [0, T], y, y′ ∈ ℝ, z ∈ ℝd,

    ()

  • (2)

    for t ∈ [0, T], , and if and only if for each t ∈ [0, T], y, y′ ∈ ℝ, z, z′ ∈ ℝd,

    ()

  • (3)

    for t ∈ [0, T], λ ∈ [0, 1], , and if and only if for each t ∈ [0, T], y, y′ ∈ ℝ, z, z′ ∈ ℝd, λ ∈ [0, 1],

    ()

  • (4)

    for t ∈ [0, T], λ ≥ 0, and if and only if for each t ∈ [0, T], y ∈ ℝ, z ∈ ℝd, λ ≥ 0,

    ()

Proof. (1) "⇒" part. For each fixed t ∈ [0, T), y, y′ ∈ ℝ, z ∈ ℝd, we take

()
where hij = hji ∈ ℝd. Then, by Theorem 12 and , we can obtain
()
We choose hij such that gij(t, y, z)+〈z, hij〉 = 0, which implies (47).

“⇐” part. Let (Y, Z, K) be the solution of G-BSDE (46) corresponding to terminal condition ξ. We claim that (Ys + η, Zs, Ks) s∈[t,T] is the solution of G-BSDE (46) corresponding to terminal condition ξ + η on [t, T]. For this, we only need to check that, for s ∈ [t, T],

()
By (47) we can get
()
which implies (53). The proof of (1) is complete.

(2) "⇒" part. For each fixed t ∈ [0, T), y, y′ ∈ ℝ, z, z′ ∈ ℝd, we consider ξε = y + ⟨z, hij⟩(⟨B^i, B^j⟩t+ε − ⟨B^i, B^j⟩t) + ⟨z, Bt+ε − Bt⟩ and , where hij = hji ∈ ℝd and . Then, by Theorem 12 and , we obtain

()
We choose hij such that gij(t, y, z) + ⟨z, hij⟩ = 0 and , which implies (48).

“⇐” part. Let (Y, Z, K) and (Y, Z, K) be the solutions of G-BSDE (46) corresponding to terminal condition ξ and η, respectively. Then, (Y + Y, Z + Z, K) solves the following G-BSDE:

()
where
()
By (48), it is easy to check that Vt is an increasing process. Then, by Theorem 11, we can get . The proof of (2) is complete.

Finally, we could prove (3) as in (2) and (4) as in (1).

Proposition 18. One has the following.

  • (1)

    If G(A) + G(−A) > 0 for any A ∈ 𝕊d with A ≠ 0, then (47) holds if and only if f and gij are independent of y.

  • (2)

    If there exists an A ∈ 𝕊d with A ≠ 0 such that G(A) + G(−A) = 0 and G(A) ≠ 0, then, for any fixed g(t, y, z) satisfying (H1)–(H5), one has f(t, y, z) = −2G(A)g(t, y, z) and satisfying (47).

Proof. It is easy to verify (2), and we only need to prove (1). If (47) holds, it is easy to check that holds. Then, from the assumption, we get gij(t, y, z) = gij(t, 0, z). Therefore, by (47), we have f(t, y, z) = f(t, 0, z), which implies that f and gij are independent of y. The converse part is obvious.

Acknowledgments

K. He acknowledges the financial support from the National Natural Science Foundation of China (Grant nos. 11301068 and 11171062) and the Innovation Program of Shanghai Municipal Education Commission (Grant no. 12ZZ063). M. Hu acknowledges the financial support from the National Natural Science Foundation of China (Grant nos. 11201262 and 11101242) and the Scientific Research Foundation for the Excellent Middle-Aged and Young Scientists of Shandong Province of China (Grant no. BS2013SF020).
