Volume 2017, Issue 1 2876961
Research Article
Open Access

Semigroup Solution of Path-Dependent Second-Order Parabolic Partial Differential Equations

Sixian Jin and Henry Schellhorn (Corresponding Author)

Claremont Graduate University, Claremont, CA, USA
First published: 27 February 2017
Academic Editor: Lukasz Stettner

Abstract

We apply a new series representation of martingales, based on Malliavin calculus, to characterize the solution of second-order path-dependent partial differential equations (PDEs) of parabolic type. For instance, we show that the generator of the semigroup characterizing the solution of the path-dependent heat equation is equal to one-half times the second-order Malliavin derivative evaluated along the frozen path.

1. Introduction

In this paper we consider semilinear second-order path-dependent PDEs (PPDEs) of parabolic type. These equations were first introduced by Dupire [1] and Cont and Fournié [2] and will be defined properly in the next section.

To motivate our result, we first consider the heat equation expressed in terms of a backward time variable. For t ≤ T we look for a function v(x, t) that solves
∂v/∂t(x, t) + (1/2)∂²v/∂x²(x, t) = 0,  t < T, (1)
v(x, T) = Ψ(x). (2)
It is well known (see, e.g., [3], chapter 9.2 or [4]) that the solution is given by the flow of the semigroup S(t); that is, v(·, t) = S(t)Ψ, where
(3)
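As a quick illustration (ours, not part of the original argument), the semigroup action — a Gaussian convolution of the terminal condition — can be checked numerically against the backward heat equation; the choice Ψ(x) = x² below is arbitrary:

```python
import numpy as np

T = 1.0

def psi(x):
    return x ** 2  # terminal condition Psi (illustrative choice)

def v(x, t, n=20000, half_width=8.0):
    """Semigroup solution v(x, t) = E[Psi(x + B(T) - B(t))],
    computed as a Gaussian convolution with variance T - t."""
    tau = T - t
    if tau == 0.0:
        return psi(x)
    y = np.linspace(-half_width, half_width, n)
    dens = np.exp(-y ** 2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
    return float(np.sum(psi(x + y) * dens) * (y[1] - y[0]))

# For Psi(x) = x^2 the exact solution is v(x, t) = x^2 + (T - t).
x, t = 0.7, 0.3
assert abs(v(x, t) - (x ** 2 + (T - t))) < 1e-6

# Finite-difference check of the backward equation v_t + (1/2) v_xx = 0.
h = 1e-2
v_t = (v(x, t + h) - v(x, t - h)) / (2 * h)
v_xx = (v(x + h, t) - 2 * v(x, t) + v(x - h, t)) / h ** 2
assert abs(v_t + 0.5 * v_xx) < 1e-3
```

The step size h here is only for the finite-difference check; the same convolution works for any sufficiently integrable Ψ.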
The differential operator (1/2)(∂²/∂x²) is said to be the (infinitesimal) generator of the semigroup S(t). Consider now the path-dependent version of the heat equation:
(4)
where γt is a continuous path on the interval [0, t] and the derivatives are Dupire’s path derivatives. Our goal is to find the generator of the semigroup (flowing the solution) of PPDEs, which we will refer to as the semigroup of the PPDE. It turns out that (1/2)Dxx, that is, one-half times the second-order vertical derivative, is not the appropriate infinitesimal generator, because of path dependence. Indeed, the vertical derivative is the rate of change of the functional v(·, t) for a change at time t. The correct infinitesimal generator is equal to (1/2)ωtDtt, where DttF is the second-order Malliavin derivative of F. An important difference is that F is now viewed as a random variable, and the (first-order) Malliavin derivative is a stochastic process in the canonical probability space for Brownian motion. The stopping path operator ωt was introduced in [5]. Informally, the action of the stopping path operator (which we define rigorously later) is to freeze the path after time t:
(5)
where ωt is the stopped path. The stopped Malliavin derivative is thus an extension of both
  • (i)

    the Dupire derivative; while the Dupire derivative corresponds to changes of the path at only one time, the iterated derivatives are taken with respect to changes of the canonical path at many different times s1, …, sn;

  • (ii)

    the Malliavin derivative; while the Dupire derivative can be taken pathwise, as far as we know, the construction of the Malliavin derivative necessitates the introduction of a probability space.

The proof of the representation result is straightforward. Let us consider the path-independent case (1). Let B be Brownian motion. By Itô’s lemma, it is obvious that v(B(t), t) is a martingale, say Mt, and that the value of this martingale is the conditional expectation at time t of Ψ(B(T)). Consider now a general path-dependent terminal condition Ψ(B). In [5], Jin et al. gave a new representation of Brownian martingales Mt (with t ≤ T) as an exponential of a time-dependent generator, applied to the terminal value MT = Ψ(B):
(6)

By the functional Feynman-Kac formula introduced in [1, 6], it is immediate that (1/2)ωtDtt is the generator of the semigroup of the PPDE.

The main advantage of the semigroup method is that the solution of the PPDE can be constructed semianalytically: the method is similar to the Cauchy-Kowalevski method of calculating iteratively all the Malliavin derivatives of Ψ; indeed, (6) can be rewritten as
(7)
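For orientation, here is a worked example (our addition; it assumes only that the series applies the generator (1/2)∫tT D²uu du iteratively, as stated in the abstract). Take the terminal condition Ψ(B) = B(T)², for which all terms of order n ≥ 2 vanish:

```latex
% For F = B(T)^2 one has D_u F = 2B(T) and D^2_{uu} F = 2,
% so only the first two terms of the series survive:
\begin{align*}
M_t &= \omega_t\Big[ F + \tfrac{1}{2}\int_t^T D^2_{uu} F \, du \Big]
     = \omega_t\big[ B(T)^2 \big] + \tfrac{1}{2}\int_t^T 2 \, du \\
    &= B(t)^2 + (T - t),
\end{align*}
% which agrees with E[B(T)^2 | \mathcal{F}_t] = B(t)^2 + (T - t),
% since the frozen path evaluates B(T) at B(t).
```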

The main disadvantage can be seen immediately by considering (7): the terminal condition Ψ must be infinitely Malliavin differentiable. In contradistinction, the viscosity solution given in [7] requires Ψ only to be bounded and continuous. However, compared to the result shown in [6], Ψ needs to be defined only on continuous paths.

This paper is composed of two parts. In the first part, we give a rigorous proof of the result (7). Indeed, we complete the proof of Theorem 2.3 in our article [5]; although the statement was correct in that paper, one step of the proof was left incomplete. In the second part we characterize the generator of the semilinear PPDE.

2. Martingale Representation

We first introduce some basic notation of Malliavin calculus. For a detailed introduction, we refer to [8] and our paper [5]. Let (Ω, 𝓕, {𝓕t}t∈[0,T], P) be the complete filtered probability space, where the filtration {𝓕t} is the usual augmentation of the filtration generated by Brownian motion B on [0, T]. The canonical Brownian motion can also be denoted by B(t) = B(t, ω) = ω(t), t ∈ [0, T], ω ∈ Ω, emphasizing its sample path. We denote by L2(Ω) the space of square integrable random variables. For simplicity, we denote (du)k ≔ du1 ⋯ duk.

We denote the Malliavin derivative of order n at times t1, …, tn by Dnt1,…,tn. We call 𝔻∞ the set of random variables which are infinitely Malliavin differentiable and 𝓕T-measurable; that is, for any integer n and any times t1, …, tn:
(8)

Definition 1. For any deterministic function f ∈ L2([0, T]), we define the “stopping path” operator ωt for t ≤ T as

(9)
In particular, ωtB(s) = B(s ∧ t); that is, Brownian motion is “frozen” after time t.
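A minimal discrete sketch of this freezing action on a sampled Brownian path (our illustration only; in the paper the operator acts on random variables, not arrays):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
times = np.linspace(0.0, T, n + 1)
increments = rng.normal(0.0, np.sqrt(T / n), size=n)
B = np.concatenate([[0.0], np.cumsum(increments)])  # sampled path of B

def freeze(path, grid, t):
    """Stopping-path operator on a sampled path:
    (omega_t B)(s) = B(s ^ t), i.e. the path is held
    at its time-t value for all s > t."""
    frozen = path.copy()
    frozen[grid > t] = np.interp(t, grid, path)
    return frozen

t = 0.4
wB = freeze(B, times, t)
assert np.array_equal(wB[times <= t], B[times <= t])    # unchanged on [0, t]
assert np.all(wB[times > t] == np.interp(t, times, B))  # constant after t
```

Evaluating a functional g(B(s1), …, B(sn)) on the frozen path then returns g(B(s1 ∧ t), …, B(sn ∧ t)).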

From the definition, it is not hard to obtain that, for any n-variable smooth function g, ωtg(B(s1), …, B(sn)) = g(B(s1 ∧ t), …, B(sn ∧ t)). For a general random variable F ∈ L2(Ω), ωtF refers to the value of F along the stopped scenario ωt ≔ ωt(ω) of Brownian motion. According to the Wiener chaos decomposition, for any F ∈ L2(Ω), there exists a sequence of deterministic functions {fn}n≥1, with fn ∈ L2([0, T]n), such that F = ∑n≥0In(fn). Therefore, in order to obtain an explicit representation of ωt acting on a general variable F, we first show the following proposition.

Proposition 2. Let fn ∈ L2([0, T]n) be an n-variable square integrable deterministic function; then

(10)
Therefore
(11)
as well as the isometry:
(12)

Theorem 3. Let F ∈ 𝔻∞. Then for any fixed time t and t ≤ s < T, there exists a sequence {FN}N≥0 that satisfies the following:

  • (i)

FN → F in L2(Ω);

  • (ii)

DuFN = Ds+1/NFN for any u ∈ (s, s + 1/N];

  • (iii)

    there exist ε ∈ (0,1) and a constant C which does not depend on N such that

(13)

We introduce the derivative d/ds in L2(Ω) as, for any process Fs,
(14)
Then we can set up an operator differential equation for Es. The following theorem is a generalization of Theorem 2.2 in [5] to functionals that are not discrete.

Theorem 4. For 0 ≤ t ≤ s ≤ T, assuming that F ∈ 𝔻∞, one has

(15)

Then our main theorem is the integral version of this operator differential equation. We first introduce the convergence condition.

Condition 1. For any n ≥ 0, F satisfies

(16)

According to isometry (12), this condition implies .

Remark 5. We claim that other conditions exist which are easier to check than Condition 1. One of them is the convergence of the terms of series (23):

(17)
To this “local” condition, that is, a condition based on the calculation along the frozen path only, one needs to add a “global” condition involving all the paths to make it sufficient; that is, for any s ∈ [t, T] and n ≥ 1, with a constant c.

Moreover, with different structures of F, we have different alternative conditions which are easier to check for practical calculations. Here we list two examples.
  • (1)

    If with smooth deterministic function f and square integrable deterministic function g, it is not hard to obtain

(18)

  • Therefore, if there exists a constant C such that, for all n ≥ 1,

(19)

with the help of the Stirling approximation n! ~ (n/e)n√(2πn), Condition 1 is satisfied.

  • (2)

If F has its chaos decomposition F = ∑n≥0In(fn), we have

(20)

  • Then according to (12), Condition 1 can be replaced by

(21)

with some constant C, or some much stronger but easier-to-check conditions like the following: for m ≥ 1

(22)

Then we have the following main result.

Theorem 6. Suppose that F satisfies Condition 1 and is 𝓕T-measurable. Then, for t ≤ T, in L2(Ω),

(23)

The importance of the exponential formula (23) stems from the Dyson series representation, which we rewrite hereafter in a more convenient way:
(24)

3. Representation of Solutions of Path-Dependent Partial Differential Equations

3.1. Functional Itô Calculus

We now introduce some key concepts of the functional Itô calculus introduced by Dupire [1]. For more information, the reader is referred to [6], which we copy hereafter almost verbatim. Let T > 0 be fixed. For each t ∈ [0, T] we denote by Λt the set of càdlàg (right continuous with left limits) ℝ-valued functions on [0, t]. For each γt ∈ Λt, the value of γt at s ∈ [0, t] is denoted by γ(s). Denote Λ ≔ ⋃t∈[0,T]Λt. For each γt ∈ Λ, t ≤ s ≤ T, and x ∈ ℝ, we define
(25)

Definition 7. Given a function u defined on Λ, suppose that there exists p ∈ ℝ such that

(26)

Then we say that u is vertically differentiable at γt ∈ Λ and define Dxu(γt) ≔ p. The function u is said to be vertically differentiable if Dxu(γt) exists for each γt ∈ Λ. The second-order derivative Dxx is defined similarly.

Definition 8. For a given γt ∈ Λ, if

(27)
then we say that u is horizontally differentiable at γt and define Dtu(γt) as this limit. The function u is said to be horizontally differentiable if Dtu(γt) exists for each γt ∈ Λ.

Definition 9. The function u is said to be in 𝒞1,2 if Dtu, Dxu, and Dxxu exist and we have

(28)
where C and k are some constants depending only on φ, and
(29)
is the distance on Λ. The related classes are defined analogously.

For each t ∈ [0, T], we denote by Ωt the set of continuous ℝ-valued functions on [0, t]. We denote Ω ≔ ⋃t∈[0,T]Ωt. Clearly Ω ⊂ Λ. Given u defined on Λ and ũ defined on Ω, we say that u is consistent with ũ on Ω if (since we already use the symbol ωt to denote our freezing path operator (see Definition 1), we here use ωt to denote a sample path) for each ωt ∈ Ω,
(30)

Definition 10. The function ũ is said to be in 𝒞1,2(Ω) if there exists a function u ∈ 𝒞1,2 such that (30) holds, and for ωt ∈ Ω we denote

(31)

Note. In the introduction, we used the notation {v(·, t)} for a family of nonanticipative functionals. In order to highlight the symmetry between PDEs and PPDEs, the PPDE notation displays the counterpart of the argument x in PDEs and is used instead of ωt. This is in spirit closer to the original notation of [1, 2]. The reader will have no problem identifying the two notations.

3.2. Non-Markovian BSDEs

As in [6], we use 𝓕rt to denote the completion of the σ-algebra generated by B(s) − B(t) for s ∈ [t, r]. Then we introduce the space of all 𝓕t-adapted ℝ-valued processes (X(s))s∈[t, T] with E[∫tT|X(s)|2ds] < ∞, and S2(t, T), the space of all 𝓕t-adapted ℝ-valued continuous processes (X(s))s∈[t, T] with E[sups∈[t, T]|X(s)|2] < ∞. Denote now .

We will make the following assumptions:

(H1) Φ is an ℝ-valued function defined on ΛT. Moreover,

(H2) The drift a(γt) is a given ℝ-valued continuous function defined on Λ (see [6] for a definition of continuity). For any γt ∈ Λ and s ∈ [0, t], the function is differentiable and its derivative satisfies
(32)
where C and k are constants depending only on a.
We now assume that (H1) and (H2) hold. We consider a non-Markovian BSDE, which is a particular case of (3.2) in [6]. From Theorem 2.8 in [6], for any γt ∈ Λ, there exists a unique solution of the following BSDE:
(33)
where
(34)
In particular, Yγt(t) defines a deterministic mapping from Λ to ℝ.

3.3. Path-Dependent PDEs

The drift a and terminal condition Ψ are required to be extended to the space of càdlàg paths because of the definition of the Dupire derivatives. We require the following (see [6] again):

(B1) The function Ψ is an ℝ-valued function defined on ΩT. Moreover, there is a function Φ satisfying (H1) such that Ψ = Φ on ΩT.

(B2) The drift a(ωt) is a given ℝ-valued continuous function defined on Ω (see [6] for a definition of continuity). Moreover, there exists a function b satisfying (H2) such that a = b on Ω.

We can now define the following quasilinear parabolic path-dependent PDE:
(35)

Theorem 4.2 in [6] states the following: let u be a solution of the above equation. Then we have u(ωt) = Yωt(t) for each ωt ∈ Ω, where Yωt is the unique solution of BSDE (33).

Theorem 11. Suppose that, for each t ∈ [0, T], the random variable

(36)
satisfies Condition 1. Then the solution of (35) is
(37)

Proof. According to (2.20) in [9], page 351, the solution of (33) is, for t ≤ s ≤ T,

(38)
The result now follows by Theorem 6 and the fact that

We note that, in the case of no drift (a = 0), we recover the result (6).

3.4. Proof of Proposition 2

This proof consists of several inductions, which we separate into steps.

Step 1. We first apply Itô’s lemma and the integration by parts formula for the Skorohod integral of Brownian motion to provide an explicit expansion of In(fn). The goal of the following step is to transform Skorohod integrals into time integrals. For example, for symmetric f(s1, s2):

(39)

By the integration by parts formula (see (1.49) in [8]),
(40)
Based on this idea, for n ≥ 1 and 1 ≤ r ≤ n, we define
(41)
and . For n = 0, . Then we are going to prove
(42)
based on the following recurrence formula of Ar: for any r = 0, …, n − 1
(43)
To prove (43), we apply the integration by parts formula. For simplicity, we only keep the variables s1, …, sk and sr+1. The notation x̂ means that the variable x is not an argument of the function. We also emphasize again the symmetry of the function fn:
(44)
(45)
(46)
(47)
(48)
(49)
Observing the properties of the binomial coefficients,
(50)
We can see that, under the summation over k, (47) and (49) cancel each other, (45) and (46) combine, and (48) remains as a time integral. This proves (43) rigorously.
To prove (42), we use induction. Supposing that the statement holds for n, we consider the case n + 1: by (43),
(51)

Step 2. Now we consider the action of the freezing path operator. We first prove that for all r ≤ n

(52)
We only present the proof for r = n; the general case is the same. By definition, we know that ωtB(s) = B(s)χ[0, t](s) + B(t)χ(t, T](s). Therefore
(53)
Now we recall a basic integration rule for a smooth function gn as
(54)
We apply (54) to (53) and obtain
(55)
Since the number of variables equal to T is (n − k) + (k − k1 − j) = n − k1 − j, which does not depend on k, this suggests changing the order of the summations. We want to sum over k first. Observe that ; we obtain
(56)
According to the properties of binomial coefficients again,
(57)
We claim that (56) is nonzero only when n = j + k1. Thus we have
(58)

Step 3. Now we can prove recurrence formula (10).

By (52) and (42), we have

(59)
Now we calculate the right hand side of (10):
(60)
Let m = k + k1; we continue the above formula:
(61)
Now we apply another basic rule of integration: for an m-variable symmetric function gm,
(62)

Now apply (62) to (61), and we finally obtain

(63)

Step 4. We now use induction to prove (11), based on (10). For simplicity, we introduce

(64)
for k ≤ ⌊n/2⌋. Then (10) implies
(65)
We calculate the right hand side of (11) with (65): let m = k + k1
(66)
The proposition is proved.
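The combinatorial integration rule invoked in Step 3 is the standard identity ∫[0,T]m gm (du)m = m! ∫{0≤u1≤⋯≤um≤T} gm (du)m for symmetric gm. A quick numerical check for m = 2 (the symmetric choice g(u1, u2) = u1u2 is ours, for illustration):

```python
import numpy as np

T, n = 1.0, 400
u = (np.arange(n) + 0.5) * (T / n)    # midpoint grid on [0, T]
U1, U2 = np.meshgrid(u, u, indexing="ij")
g = U1 * U2                            # a symmetric test function g(u1, u2)
w = (T / n) ** 2                       # cell area

full = np.sum(g) * w                               # integral over [0, T]^2
ordered = np.sum(np.where(U1 <= U2, g, 0.0)) * w   # integral over {u1 <= u2}

# For symmetric g: integral over the square = 2! * integral over the simplex.
assert abs(full - 2.0 * ordered) < 1e-2
assert abs(full - 0.25) < 1e-3   # exact value for g = u1*u2 on [0, 1]^2
```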

3.5. Proof of Theorem 3

The proof is constructive. For any fixed t ∈ [0, T], if F has its chaos decomposition F = ∑n≥0In(fn), then for fixed N (depending on M), we will study In(f̃n), where
(67)
In other words, the kernel f̃n is constant when its arguments lie between s and s + 1/N. Then we have the following lemma.

Lemma 12. and in particular

(68)
where C is a constant which does not depend on N and n.

Proof. For any fixed n, we define a sequence of sets as

(69)
Observe that, on these sets, the kernels fn and f̃n coincide. According to (67), we obtain
(70)
To bound (70), we apply Proposition 2 to obtain
(71)
Now we apply (71) to (70) and, by the Cauchy-Schwarz inequality, we have
(72)
Since fn is differentiable with respect to s1, …, sn, there exists a constant Cn such that
(73)
Therefore following (72), we obtain
(74)
where C is a constant which does not depend on n and N.

Now we construct FN by . To prove the theorem, we introduce two subseries FM,N and FM by

(75)
For N large enough, we choose M such that . Then by Lemma 12 and the Cauchy-Schwarz inequality, there exists a constant ε ∈ (0,1) such that
(76)
Then, using the triangle inequality, we prove the theorem.

3.6. Proof of Theorem 4

For any F ∈ 𝔻∞ and s ∈ [t, T], we choose the sequence constructed in Theorem 3. Then by the Clark-Ocone formula, we obtain
(77)
where
(78)
On the one hand, by Lemma 5.2 in [5], we obtain
(79)
On the other hand, we can compute
(80)
Then we can establish the equation as
(81)
where the last equality follows from (79) and Proposition 2. Thus, combining (77), (79), (80), and (81), as well as the assumption and Proposition 2, we have
(82)
Here, for simplicity, we define the L2 norm . Then, from Theorem 3 and the closability of the Malliavin derivative operator, for some constant ε < 1,
(83)
With the triangle inequality and the Cauchy-Schwarz inequality, we finally have, using (81) and (83),
(84)
or in other words
(85)

3.7. Proof of Theorem 6

For i = 1, …, N(T − s), define
(86)
We rewrite (84) as
(87)
Jensen’s inequality states that
(88)
Since is bounded in , then
(89)
Using (88), we thus proved that, in L2(Ω),
(90)
Then for positive integer n we define the operator by
(91)
where
(92)
Then by iterating (90) we obtain the following: for n > 0
(93)
Thus according to Condition 1,
(94)
We now take s = t and obtain
(95)

Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.
