Volume 2016, Issue 1 8694365
Research Article
Open Access

Stochastic Analysis of Gaussian Processes via Fredholm Representation

Tommi Sottinen

Department of Mathematics and Statistics, University of Vaasa, P.O. Box 700, 65101 Vaasa, Finland
Lauri Viitasaari (Corresponding Author)

Department of Mathematics and System Analysis, Aalto University School of Science, P.O. Box 11100, Helsinki, 00076 Aalto, Finland
First published: 31 July 2016
Academic Editor: Ciprian A. Tudor

Abstract

We show that every separable Gaussian process with integrable variance function admits a Fredholm representation with respect to a Brownian motion. We extend the Fredholm representation to a transfer principle and use it to develop stochastic analysis. We demonstrate the convenience of the Fredholm representation by giving applications to equivalence in law, bridges, series expansions, stochastic differential equations, and maximum likelihood estimation.

1. Introduction

The stochastic analysis of Gaussian processes that are not semimartingales is challenging. One way to overcome the challenge is to represent the Gaussian process under consideration, X say, in terms of a Brownian motion and then develop a transfer principle, so that the stochastic analysis can be done on the "Brownian level" and then transferred back to the level of X.

One of the most studied representations in terms of a Brownian motion is the so-called Volterra representation. A Gaussian Volterra process is a process that can be represented as

(1) X_t = ∫_0^t K(t, s) dW_s,

where W is a Brownian motion and K ∈ L²([0, T]²). Here the integration goes only up to t, hence the name "Volterra." This Volterra nature is very convenient: it means that the filtration of X is included in the filtration of the underlying Brownian motion W. Gaussian Volterra processes and their stochastic analysis have been studied, for example, in [1, 2], just to mention a few. Probably the most famous Gaussian process admitting a Volterra representation is the fractional Brownian motion, and its stochastic analysis has indeed been developed mostly by using its Volterra representation; see, for example, the monographs [3, 4] and references therein.

In discrete finite time the Volterra representation (1) is nothing but the Cholesky lower-triangular factorization of the covariance of X, and hence every Gaussian process is a Volterra process. In continuous time this is not true; see Example 16 in Section 3.
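The discrete-time claim can be checked directly: the Cholesky factor of a covariance matrix is lower triangular, which is exactly the discrete "Volterra" (causal) structure of (1). A minimal NumPy sketch, using the Brownian-motion covariance min(t, s) purely as an illustrative choice:

```python
import numpy as np

n = 50
t = np.linspace(0.01, 1.0, n)
R = np.minimum.outer(t, t)          # illustrative covariance: R(t, s) = min(t, s)

L = np.linalg.cholesky(R)           # R = L L^T with L lower triangular
assert np.allclose(np.triu(L, 1), 0.0)   # "Volterra" (causal) structure
assert np.allclose(L @ L.T, R)

rng = np.random.default_rng(0)
X = L @ rng.standard_normal(n)      # X_k uses only the first k noise terms
```

Any positive definite covariance matrix admits such a lower-triangular square root, which is why in discrete finite time every Gaussian process is a Volterra process.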

There is a more general representation than (1), due to Hida; see [5, Theorem 4.1]. However, this Hida representation may involve infinitely many Brownian motions. Consequently, it seems very difficult to build on the Hida representation the transfer principle needed by stochastic analysis. Moreover, the Hida representation is not completely general: it requires, among other things, that the Gaussian process be purely nondeterministic. The Fredholm representation (2) does not require pure nondeterminism. Indeed, Example 16 in Section 3 admits a Fredholm representation but not a Hida representation, precisely because it lacks pure nondeterminism.

The problem with the Volterra representation (1), as far as generality is concerned, is the Volterra nature of the kernel K. Indeed, one obtains generality by considering Fredholm kernels, that is, kernels where the integration is over the entire interval [0, T] under consideration. A Gaussian Fredholm process is a process that admits the Fredholm representation

(2) X_t = ∫_0^T K_T(t, s) dW_s,

where W is a Brownian motion and K_T ∈ L²([0, T]²). In this paper we show that every separable Gaussian process with integrable variance function admits representation (2). The price we have to pay for this generality is twofold:
  • (i)

    The process X is generated, in principle, from the entire path of the underlying Brownian motion W. Consequently, X and W do not necessarily generate the same filtration. This is unfortunate in many applications.

  • (ii)

In general the kernel K_T depends on T even if the covariance R does not, and consequently the derived operators also depend on T. This is why we use the somewhat cumbersome notation of stating the dependence explicitly whenever there is one. In stochastic analysis this dependence on T seems to be a minor inconvenience, however. Indeed, even in the Volterra case, as examined, for example, by Alòs et al. [1], one cannot avoid the dependence on T in the transfer principle. Of course, for statistics, where one would like to let T tend to infinity, this is a major inconvenience.

Let us note that the Fredholm representation has already been used, without proof, in [6], where the Hölder continuity of Gaussian processes was studied.

Let us mention a few papers studying stochastic analysis of Gaussian processes. Several different approaches have been proposed in the literature. In particular, fractional Brownian motion has been a subject of active study (see the monographs [3, 4] and references therein). More general Gaussian processes have been studied in the already mentioned work by Alòs et al. [1]. They considered Gaussian Volterra processes where the kernel satisfies certain technical conditions; in particular, their results cover fractional Brownian motion with Hurst parameter H > 1/4. Later Cheridito and Nualart [7] introduced an approach based on the covariance function itself rather than on the Volterra kernel K. Kruk et al. [8] developed stochastic calculus for processes having finite 2-planar variation, especially covering fractional Brownian motion with H ≥ 1/2. Moreover, Kruk and Russo [9] extended the approach to cover singular covariances, hence covering fractional Brownian motion with H < 1/2. Furthermore, Mocioalca and Viens [10] studied processes which are close to processes with stationary increments. More precisely, their results cover cases where the variance of the increments is governed by a function γ satisfying some minimal regularity conditions; in particular, their results cover some processes which are not even continuous. The latest development we are aware of is a paper by Lei and Nualart [11], who developed stochastic calculus for processes having absolutely continuous covariance by using the extended domain of the divergence introduced in [9]. Finally, we would like to mention Lebovits [12], who used the S-transform approach and obtained results similar to ours, although his notion of integral is not as elementary as ours.

The results presented in this paper give a unified approach to stochastic calculus for Gaussian processes: only the integrability of the variance function is required. In particular, our results cover processes that are not continuous.

The paper is organized as follows.

Section 2 contains some preliminaries on Gaussian processes and isonormal Gaussian processes and related Hilbert spaces.

Section 3 provides the proof of the main theorem of the paper: the Fredholm representation.

In Section 4 we extend the Fredholm representation to a transfer principle in three contexts of growing generality: First we prove the transfer principle for Wiener integrals in Section 4.1, then we use the transfer principle to define the multiple Wiener integral in Section 4.2, and, finally, in Section 4.3 we prove the transfer principle for Malliavin calculus, thus showing that the definition of multiple Wiener integral via the transfer principle done in Section 4.2 is consistent with the classical definitions involving Brownian motion or other Gaussian martingales. Indeed, classically one defines the multiple Wiener integrals either by building an isometry with removed diagonals or by spanning higher chaos by using the Hermite polynomials. In the general Gaussian case one cannot of course remove the diagonals, but the Hermite polynomial approach is still valid. We show that this approach is equivalent to the transfer principle. In Section 4.3 we also prove an Itô formula for general Gaussian processes and in Section 4.4 we extend the Itô formula even further by using the technique of extended domain in the spirit of [7]. This Itô formula is, as far as we know, the most general version for Gaussian processes existing in the literature so far.

Finally, in Section 5 we show the power of the transfer principle in some applications. In Section 5.1 the transfer principle is applied to the question of equivalence in law of general Gaussian processes. In Section 5.2 we show how one can construct a canonical-type representation for generalized Gaussian bridges, that is, for Gaussian processes that are conditioned by multiple linear functionals of their paths. In Section 5.3 the transfer principle is used to provide series expansions for general Gaussian processes.

2. Preliminaries

Our general setting is as follows: let T > 0 be a fixed finite time-horizon and let X = (Xt) t∈[0, T] be a Gaussian process with covariance R that may or may not depend on T. Without loss of any interesting generality we assume that X is centered. We also make the very weak assumption that X is separable in the sense of the following definition.

Definition 1 (separability). The Gaussian process X is separable if the Hilbert space L²(Ω, σ(X), ℙ) is separable.

Example 2. If the covariance R is continuous, then X is separable. In particular, all continuous Gaussian processes are separable.

Definition 3 (associated operator). For a kernel Γ ∈ L²([0, T]²) one associates an operator on L²([0, T]), also denoted by Γ, as

(3) (Γf)(t) ≔ ∫_0^T f(s) Γ(t, s) ds.

Definition 4 (isonormal process). The isonormal process associated with X, also denoted by X, is the Gaussian family {X(h); h ∈ 𝓗_T}, where the Hilbert space 𝓗_T is generated by the covariance R as follows:

  • (i)

    Indicators 1_t ≔ 1_[0,t), t ≤ T, belong to 𝓗_T.

  • (ii)

    𝓗_T is endowed with the inner product ⟨1_t, 1_s⟩_{𝓗_T} ≔ R(t, s).

Definition 4 states that X(h) is the image of h ∈ 𝓗_T under the isometry that extends the relation

(4) X(1_t) ≔ X_t

linearly. Consequently, we can have the following definition.

Definition 5 (Wiener integral). X(h) is the Wiener integral of the element h ∈ 𝓗_T with respect to X. One will also denote

(5) X(h) = ∫_0^T h(t) dX_t.

Remark 6. Eventually, all the following will mean the same:

(6) X(h) = ∫_0^T h(t) dX_t = ∫_0^T h_t dX_t = ∫_0^T h dX.

Remark 7. The Hilbert space 𝓗_T is separable if and only if X is separable.

Remark 8. Due to the completion under the inner product ⟨·, ·⟩_{𝓗_T} it may happen that the space 𝓗_T is not a space of functions but contains distributions; compare [13] for the case of fractional Brownian motion with Hurst index bigger than half.

Definition 9. The function space associated with X is the space of functions that can be approximated by step-functions on [0, T] in the inner product ⟨·, ·⟩_{𝓗_T}.

Example 10. If the covariance R is of bounded variation, then the function space of Definition 9 consists of the functions f satisfying

(7) ∫_0^T ∫_0^T |f(t)| |f(s)| |R|(dt, ds) < ∞.

Remark 11. Note that it may be that f ∈ 𝓗_T but for some T′ < T we have f1_[0,T′] ∉ 𝓗_{T′}; compare [14] for an example in the case of fractional Brownian motion with Hurst index less than half. For this reason we keep the notation 𝓗_T instead of simply writing 𝓗. For the same reason we state the dependence on T explicitly whenever there is one.

3. Fredholm Representation

Theorem 12 (Fredholm representation). Let X = (X_t)_{t∈[0,T]} be a separable centered Gaussian process. Then there exist a kernel K_T ∈ L²([0, T]²) and a Brownian motion W = (W_t)_{t≥0}, independent of T, such that

(8) X_t = ∫_0^T K_T(t, s) dW_s

holds in law if and only if the covariance R of X satisfies the trace condition

(9) ∫_0^T R(t, t) dt < ∞.

Representation (8) is unique in the sense that any other representation with kernel K̃_T, say, is connected to (8) by a unitary operator U on L²([0, T]) such that, on the level of the associated operators, K̃_T = K_T U. Moreover, one may assume that K_T is symmetric.

Proof. Let us first remark that (9) is precisely what we need to invoke Mercer's theorem and to take the square root in the resulting expansion.

Now, by Mercer's theorem we can expand the covariance function R on [0, T]² as

(10) R(t, s) = Σ_{k=1}^∞ λ_k^T φ_k^T(t) φ_k^T(s),

where (λ_k^T)_{k=1}^∞ and (φ_k^T)_{k=1}^∞ are the eigenvalues and the eigenfunctions of the covariance operator

(11) (R_T f)(t) = ∫_0^T f(s) R(t, s) ds.

Moreover, (φ_k^T)_{k=1}^∞ is an orthonormal system on L²([0, T]).

Now R_T, being a covariance operator, admits a square root operator K_T defined by the relation

(12) ⟨R_T f, g⟩_{L²([0,T])} = ⟨K_T f, K_T g⟩_{L²([0,T])}

for all f, g ∈ L²([0, T]). Now condition (9) means that R_T is trace class and, consequently, K_T is a Hilbert-Schmidt operator. In particular, K_T is a compact operator. Therefore, it admits a kernel. Indeed, a kernel K_T can be defined by using the Mercer expansion (10) as

(13) K_T(t, s) = Σ_{k=1}^∞ √(λ_k^T) φ_k^T(t) φ_k^T(s).

This kernel is obviously symmetric. Now it follows that

(14) R(t, s) = ∫_0^T K_T(t, u) K_T(s, u) du,

and representation (8) follows from this.

Finally, let us note that the uniqueness up to a unitary transformation is obvious from the square root relation (12).
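In finite dimensions the proof reduces to elementary linear algebra: diagonalize the covariance matrix (the discrete analogue of the Mercer expansion (10)), take square roots of the eigenvalues, and sum back as in (13). A sketch, again with min(t, s) as an illustrative covariance:

```python
import numpy as np

n = 40
t = np.linspace(0.025, 1.0, n)
R = np.minimum.outer(t, t)          # illustrative covariance (Brownian motion)

lam, phi = np.linalg.eigh(R)        # discrete Mercer expansion of R
lam = np.clip(lam, 0.0, None)       # guard against tiny negative round-off
K = (phi * np.sqrt(lam)) @ phi.T    # K = sum_k sqrt(lam_k) phi_k phi_k^T, cf. (13)

assert np.allclose(K, K.T)          # the square-root kernel is symmetric
assert np.allclose(K @ K.T, R)      # discrete analogue of (14)
```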

Remark 13. The Fredholm representation (8) also holds for infinite intervals, that is, for T = ∞, if the trace condition (9) holds. Unfortunately, this is seldom the case.

Remark 14. The above proof shows that the Fredholm representation (8) holds in law. However, one can also construct the process X via (8) for a given Brownian motion W. In this case, representation (8) holds of course in L2. Finally, note that in general it is not possible to construct the Brownian motion in representation (8) from the process X. Indeed, there might not be enough randomness in X. To construct W from X one needs that the indicators 1t,   t ∈ [0, T], belong to the range of the operator KT.

Remark 15. We remark that the separability of X ensures a representation of form (8) where the kernel satisfies only the weaker condition K_T(t, ·) ∈ L²([0, T]) for all t ∈ [0, T]; this may happen even if the trace condition (9) fails. In this case, however, the kernel K_T does not belong to L²([0, T]²), which may be undesirable.

Example 16. Let us consider the following very degenerate case: suppose X_t = f(t)ξ, where f is deterministic and ξ is a standard normal random variable. Suppose T > 1. Then

(15) R(t, s) = f(t)f(s) = ∫_0^T f(t)1_[0,1)(u) · f(s)1_[0,1)(u) du.

So K_T(t, s) = f(t)1_[0,1)(s). Now, if f ∈ L²([0, T]), then condition (9) is satisfied and K_T ∈ L²([0, T]²). On the other hand, even if f ∉ L²([0, T]), we can still write X in form (15). However, in this case the kernel K_T does not belong to L²([0, T]²).
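The degenerate example is easy to verify numerically; the sketch below uses the illustrative choice f(t) = e^(−t) ∈ L²([0, T]) and checks that the kernel K_T(t, s) = f(t)1_[0,1)(s) reproduces the covariance R(t, s) = f(t)f(s) when T > 1:

```python
import numpy as np

T = 2.0
f = lambda t: np.exp(-t)            # illustrative f in L^2([0, T])
N = 100000
u = (np.arange(N) + 0.5) * T / N    # midpoint grid on [0, T]
du = T / N

# the degenerate Fredholm kernel K_T(t, s) = f(t) 1_[0,1)(s)
K = lambda t, u: f(t) * (u < 1.0).astype(float)

s, t = 0.4, 1.3
R_from_kernel = np.sum(K(t, u) * K(s, u)) * du   # int_0^T K(t,u)K(s,u) du
assert abs(R_from_kernel - f(t) * f(s)) < 1e-4   # equals f(t) f(s)
```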

Example 17. Consider a truncated series expansion

(16) X_t = Σ_{k=1}^n ξ_k e_k^1(t),

where the ξ_k are independent standard normal random variables and

(17) e_k^1(t) ≔ ∫_0^t e_k(s) ds,

where (e_k)_{k=1}^∞ is an orthonormal basis in L²([0, T]). Now it is straightforward to check that this process is not purely nondeterministic (see [15] for the definition) and, consequently, X cannot have a Volterra representation, while it is clear that X admits a Fredholm representation. On the other hand, by choosing the e_k to be the trigonometric basis on L²([0, T]), X is a finite-rank approximation of the Karhunen-Loève representation of standard Brownian motion on [0, T]. Hence by letting n tend to infinity we obtain the standard Brownian motion, and hence a Volterra process.
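For the trigonometric choice, the Karhunen-Loève eigenpairs of standard Brownian motion on [0, T] are classical: λ_k = (T/((k − 1/2)π))² with φ_k(t) = √(2/T) sin((k − 1/2)πt/T). The following sketch checks numerically that the truncated covariance Σ_{k≤n} λ_k φ_k(t)φ_k(s) approaches min(t, s):

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 101)
k = np.arange(1, 2001)
lam = (T / ((k - 0.5) * np.pi)) ** 2                      # KL eigenvalues
phi = np.sqrt(2.0 / T) * np.sin(np.outer(t, (k - 0.5) * np.pi / T))

R_trunc = (phi * lam) @ phi.T       # rank-2000 truncation of the covariance
R = np.minimum.outer(t, t)          # Brownian motion covariance min(t, s)

# the tail of the eigenvalue series controls the uniform error
assert np.max(np.abs(R_trunc - R)) < 1e-3
```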

Example 18. Let W be a standard Brownian motion on [0, T] and consider the Brownian bridge B. There are two well-known representations of the Brownian bridge (see [16] and references therein on the representations of Gaussian bridges). The orthogonal representation is

(18) B_t = W_t − (t/T) W_T.

Consequently, B has a Fredholm representation with kernel K_T(t, s) = 1_t(s) − t/T. The canonical representation of the Brownian bridge is

(19) B_t = (T − t) ∫_0^t dW_s/(T − s).

Consequently, the Brownian bridge also has a Volterra-type representation with kernel K(t, s) = (T − t)/(T − s).
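Both kernels can be checked against the Brownian bridge covariance R(t, s) = min(t, s) − ts/T by evaluating ∫_0^T K(t, u)K(s, u) du numerically; a small midpoint-rule sketch with illustrative points s and t:

```python
import numpy as np

T, s, t = 1.0, 0.3, 0.7
N = 200000
u = (np.arange(N) + 0.5) * T / N    # midpoint grid on [0, T]
du = T / N

def K_orth(a, u):   # kernel of the orthogonal representation (18)
    return (u < a).astype(float) - a / T

def K_volt(a, u):   # Volterra kernel of the canonical representation (19)
    return (u < a).astype(float) * (T - a) / (T - u)

target = min(s, t) - s * t / T      # Brownian bridge covariance, here 0.09

R1 = np.sum(K_orth(t, u) * K_orth(s, u)) * du
R2 = np.sum(K_volt(t, u) * K_volt(s, u)) * du
assert abs(R1 - target) < 1e-3
assert abs(R2 - target) < 1e-3
```

Note that the Volterra kernel vanishes for u ≥ min(s, t), so the singularity of 1/(T − u) at u = T never enters the integral.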

4. Transfer Principle and Stochastic Analysis

4.1. Wiener Integrals

Theorem 22 is the transfer principle in the context of Wiener integrals. The same principle extends to multiple Wiener integrals and Malliavin calculus later in the following subsections.

Recall that for any kernel Γ ∈ L²([0, T]²) its associated operator on L²([0, T]) is

(20) (Γf)(t) ≔ ∫_0^T f(s) Γ(t, s) ds.

Definition 19 (adjoint associated operator). The adjoint associated operator Γ* of a kernel Γ ∈ L²([0, T]²) is defined by linearly extending the relation

(21) Γ* 1_t = Γ(t, ·).

Remark 20. The name and notation of "adjoint" for Γ* come from Alòs et al. [1], where it was shown that in their Volterra context Γ* admits a kernel and is an adjoint of the operator Γ in the sense that

()

holds for step-functions f and g belonging to L²([0, T]). It is straightforward to check that this statement is valid also in our case.

Example 21. Suppose the kernel Γ(·, s) is of bounded variation for all s and that f is nice enough. Then

(23) (Γ* f)(s) = ∫_0^T f(t) Γ(dt, s).

Theorem 22 (transfer principle for Wiener integrals). Let X be a separable centered Gaussian process with representation (8) and let f ∈ 𝓗_T. Then

(24) ∫_0^T f(t) dX_t = ∫_0^T (K_T^* f)(t) dW_t.

Proof. Assume first that f is an elementary function of form

(25) f = Σ_k a_k 1_{A_k}

for some disjoint intervals A_k = (t_{k−1}, t_k]. Then the claim follows from the very definition of the operator K_T^* and the Wiener integral with respect to X together with representation (8). Furthermore, this shows that K_T^* provides an isometry between 𝓗_T and L²([0, T]). Hence 𝓗_T can be viewed as the closure of the elementary functions with respect to ‖f‖_{𝓗_T} = ‖K_T^* f‖_{L²([0,T])}, which proves the claim.
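In the discrete picture the isometry in the proof is plain linear algebra: if X = Kw with w standard normal, then the "Wiener integral" fᵀX equals (Kᵀf)ᵀw, where Kᵀ plays the role of the adjoint K_T^*. A sketch verifying the resulting variance identity fᵀRf = ‖Kᵀf‖² with an illustrative covariance and integrand:

```python
import numpy as np

n = 30
t = np.linspace(1.0 / n, 1.0, n)
R = np.minimum.outer(t, t)                  # illustrative covariance
K = np.linalg.cholesky(R)                   # any square root of R works

rng = np.random.default_rng(1)
f = rng.standard_normal(n)                  # an arbitrary discrete integrand

var_X_side = f @ R @ f                      # Var(f^T X), computed from R
var_W_side = np.sum((K.T @ f) ** 2)         # ||K^T f||^2 on the Brownian side
assert np.isclose(var_X_side, var_W_side)   # the transfer-principle isometry
```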

4.2. Multiple Wiener Integrals

The study of multiple Wiener integrals goes back to Itô [17], who studied the case of Brownian motion. Later, Huang and Cambanis [18] extended the notion to general Gaussian processes. Dasgupta and Kallianpur [19, 20] and Perez-Abreu and Tudor [21] studied multiple Wiener integrals in the context of fractional Brownian motion. In [19, 20] a method involving a prior control measure was used, while in [21] a transfer principle was used. Our approach here extends the transfer principle method used in [21].

We begin by recalling multiple Wiener integrals with respect to Brownian motion and then we apply transfer principle to generalize the theory to arbitrary Gaussian process.

Let f be an elementary function on [0, T]^p that vanishes on the diagonals; that is,

(26) f = Σ_{k_1,…,k_p} a_{k_1⋯k_p} 1_{Δ_{k_1}×⋯×Δ_{k_p}},

where Δ_k ≔ [t_{k−1}, t_k) and a_{k_1⋯k_p} = 0 whenever k_i = k_j for some i ≠ j. For such f we define the multiple Wiener integral as

(27) I_p(f) ≔ Σ_{k_1,…,k_p} a_{k_1⋯k_p} ΔW_{k_1} ⋯ ΔW_{k_p},

where we have denoted ΔW_k ≔ W_{t_k} − W_{t_{k−1}}. For p = 0 we set I_0(f) ≔ f. Now, it can be shown that elementary functions that vanish on the diagonals are dense in L²([0, T]^p). Thus, one can extend the operator I_p to the space L²([0, T]^p). This extension is called the multiple Wiener integral with respect to the Brownian motion.

Remark 23. It is well-known that I_p(f) can be understood as a multiple or iterated Itô integral if and only if f(t_1, …, t_p) = 0 unless t_1 ≤ ⋯ ≤ t_p. In this case we have

(28) I_p(f) = ∫_0^T ∫_0^{t_p} ⋯ ∫_0^{t_2} f(t_1, …, t_p) dW_{t_1} ⋯ dW_{t_p}.

For the case of Gaussian processes that are not martingales this fact is totally useless.

For a general Gaussian process X, recall first the Hermite polynomials:

(29) H_p(x) ≔ (−1)^p e^{x²/2} (d^p/dx^p) e^{−x²/2}.

For any p ≥ 1 let the pth Wiener chaos of X be the closed linear subspace of L²(Ω) generated by the random variables {H_p(X(h)); h ∈ 𝓗_T, ‖h‖_{𝓗_T} = 1}, where H_p is the pth Hermite polynomial. It is well-known that the mapping h^{⊗p} ↦ H_p(X(h)), ‖h‖_{𝓗_T} = 1, provides a linear isometry between the symmetric tensor product 𝓗_T^{⊙p} (with the norm √(p!)‖·‖_{𝓗_T^{⊗p}}) and the pth Wiener chaos. The corresponding random variables are called multiple Wiener integrals of order p with respect to the Gaussian process X.
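The orthogonality underlying the chaos decomposition can be verified numerically: for the probabilists' Hermite polynomials above, E[H_p(Z)H_q(Z)] = p! when p = q and 0 otherwise, for Z standard normal. A sketch using Gauss-Hermite quadrature from NumPy's hermite_e module:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

x, w = He.hermegauss(30)            # nodes/weights for the weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)        # normalize: weights now integrate N(0, 1)

for p in range(5):
    for q in range(5):
        cp = np.zeros(p + 1); cp[p] = 1.0   # He_p in the Hermite-e basis
        cq = np.zeros(q + 1); cq[q] = 1.0
        inner = np.sum(w * He.hermeval(x, cp) * He.hermeval(x, cq))
        expected = math.factorial(p) if p == q else 0.0
        assert abs(inner - expected) < 1e-8  # E[He_p He_q] = p! delta_pq
```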

Let us now consider the multiple Wiener integrals IT,p for a general Gaussian process X. We define the multiple integral IT,p by using the transfer principle in Definition 25 and later argue that this is the “correct” way of defining them. So, let X be a centered Gaussian process on [0, T] with covariance R and representation (8) with kernel KT.

Definition 24 (p-fold adjoint associated operator). Let K_T be the kernel in (8) and let K_T^* be its adjoint associated operator. Define

(30) K_T^{*,p} ≔ (K_T^*)^{⊗p}.

In the same way, define

(31) 𝓗_T^{⊗p} ≔ 𝓗_T ⊗ ⋯ ⊗ 𝓗_T (p times).

Here the tensor products are understood in the sense of Hilbert spaces; that is, they are completed under the inner product corresponding to the p-fold product of the underlying inner product.

Definition 25. Let X be a centered Gaussian process with representation (8) and let h ∈ 𝓗_T^{⊗p}. Then

(32) I_{T,p}(h) ≔ I_p(K_T^{*,p} h).

The following example should convince the reader that this is indeed the correct definition.

Example 26. Let p = 2 and let h = h_1 ⊗ h_2, where both h_1 and h_2 are step-functions. Then

(33) I_{T,2}(h) = I_2((K_T^* h_1) ⊗ (K_T^* h_2)) = X(h_1)X(h_2) − ⟨h_1, h_2⟩_{𝓗_T}.

The following proposition shows that our approach to defining multiple Wiener integrals is consistent with the traditional approach, where the multiple Wiener integrals of a more general Gaussian process X are defined via the closed linear span generated by Hermite polynomials.

Proposition 27. Let H_p be the pth Hermite polynomial and let h ∈ 𝓗_T. Then

(34) I_{T,p}(h^{⊗p}) = ‖h‖_{𝓗_T}^p H_p(X(h)/‖h‖_{𝓗_T}).

Proof. First note that without loss of generality we can assume ‖h‖_{𝓗_T} = 1. Now, by the definition of the multiple Wiener integral with respect to X, we have

(35) I_{T,p}(h^{⊗p}) = I_p(K_T^{*,p} h^{⊗p}) = I_p((K_T^* h)^{⊗p}),

where

(36) ‖K_T^* h‖_{L²([0,T])} = ‖h‖_{𝓗_T} = 1.

Consequently, by [22, Proposition 1.1.4] we obtain

(37) I_p((K_T^* h)^{⊗p}) = H_p(W(K_T^* h)),

which implies the result together with Theorem 22.

Proposition 27 extends to the following product formula, which is well-known in the Gaussian martingale case but apparently new for general Gaussian processes. Again, the proof is a straightforward application of the transfer principle.

Proposition 28. Let f ∈ 𝓗_T^{⊗p} and g ∈ 𝓗_T^{⊗q}. Then

(38) I_{T,p}(f) I_{T,q}(g) = Σ_{k=0}^{p∧q} k! (p choose k)(q choose k) I_{T,p+q−2k}(f ⊗_k g),

where

(39) f ⊗_k g ≔ (K_T^{*,p+q−2k})^{−1}((K_T^{*,p} f) ⊗_k (K_T^{*,q} g))

and (K_T^{*,p+q−2k})^{−1} denotes the preimage under K_T^{*,p+q−2k}.

Proof. The proof follows directly from the definition of IT,p(f) and [22, Proposition  1.1.3].

Example 29. Let and be of forms and . Then

()
Hence
()

Remark 30. In the literature, multiple Wiener integrals are usually defined as elements of the closed linear span of the Hermite polynomials. In such a case Proposition 27 is clearly true by the very definition. Furthermore, one has a multiplication formula (see, e.g., [23]):

()

where f ⊗̃ g denotes the symmetrization of the tensor product

()

and {e_k, k = 1, …} is a complete orthonormal basis of the Hilbert space 𝓗_T. Clearly, by Proposition 27, both formulas coincide. This also shows that (39) is well-defined.

4.3. Malliavin Calculus and Skorohod Integrals

We begin by recalling some basic facts on Malliavin calculus.

Definition 31. Denote by 𝒮 the space of all smooth random variables of the form

(44) F = f(X(h_1), …, X(h_n)),

where h_1, …, h_n ∈ 𝓗_T and f ∈ C_b^∞(ℝ^n), that is, f and all its derivatives are bounded. The Malliavin derivative D_T F of F is an element of L²(Ω; 𝓗_T) defined by

(45) D_T F ≔ Σ_{k=1}^n ∂_k f(X(h_1), …, X(h_n)) h_k.

In particular, D_T X_t = 1_t.

Definition 32. Let 𝔻^{1,2} be the Hilbert space of all square integrable Malliavin differentiable random variables defined as the closure of 𝒮 with respect to the norm

(46) ‖F‖²_{1,2} ≔ E[F²] + E[‖D_T F‖²_{𝓗_T}].

The divergence operator δT is defined as the adjoint operator of the Malliavin derivative DT.

Definition 33. The domain Dom δ_T of the operator δ_T is the set of processes u ∈ L²(Ω; 𝓗_T) satisfying

(47) |E⟨D_T F, u⟩_{𝓗_T}| ≤ c_u ‖F‖_{L²(Ω)}

for any F ∈ 𝔻^{1,2} and some constant c_u depending only on u. For u ∈ Dom δ_T the divergence δ_T(u) is a square integrable random variable defined by the duality relation

(48) E[F δ_T(u)] = E⟨D_T F, u⟩_{𝓗_T}

for all F ∈ 𝔻^{1,2}.

Remark 34. It is well-known that 𝔻^{1,2}(𝓗_T) ⊂ Dom δ_T.

We use the notation

(49) δ_T(u) = ∫_0^T u_s δX_s.

Theorem 35 (transfer principle for Malliavin calculus). Let X be a separable centered Gaussian process with Fredholm representation (8). Let D_T and δ_T be the Malliavin derivative and the Skorohod integral with respect to X on [0, T]. Similarly, let D_T^W and δ_T^W be the Malliavin derivative and the Skorohod integral with respect to the Brownian motion W of (8) restricted to [0, T]. Then

(50) δ_T = δ_T^W ∘ K_T^*,  D_T^W = K_T^* D_T.

Proof. The proof follows directly from the transfer principle and the isometry provided by K_T^*, with the same arguments as in [1]. Indeed, by isometry we have

()

where (K_T^*)^{−1} denotes the preimage, which implies that

()

which justifies D_T^W = K_T^* D_T. Furthermore, we have the relation

()

for any smooth random variable F and any u ∈ L²(Ω; 𝓗_T). Hence, by the very definition of Dom δ_T and the transfer principle, we obtain

()

proving the claim.

Now we are ready to show that the definition of the multiple Wiener integral IT,p in Section 4.2 is correct in the sense that it agrees with the iterated Skorohod integral.

Proposition 36. Let h ∈ 𝓗_T^{⊗p} be of the form h = h_1 ⊗ ⋯ ⊗ h_p. Then h is iteratively p times Skorohod integrable and

(55) δ_T(h_p δ_T(h_{p−1} ⋯ δ_T(h_1) ⋯ )) = I_{T,p}(h).

Moreover, if h ∈ 𝓗_T^{⊗p} is such that it is p times iteratively Skorohod integrable, then (55) still holds.

Proof. Again, the idea is to use the transfer principle together with induction. Note first that the statement is true for p = 1 by definition, and assume next that the statement is valid for k = 1, …, p. We denote f_p ≔ h_1 ⊗ ⋯ ⊗ h_p. Hence, by the induction assumption, we have

()

Put now F = I_{T,p}(f_p) and u(t) = h_{p+1}(t). Hence, by [22, Proposition 1.3.3] and by applying the transfer principle, we obtain that Fu belongs to Dom δ_T and

(57) δ_T(Fu) = F δ_T(u) − ⟨D_T F, u⟩_{𝓗_T}.

Hence the result is valid also for p + 1 by Proposition 28 with q = 1.

The claim for general h ∈ 𝓗_T^{⊗p} follows by approximating h with products of simple functions.

Remark 37. Note that (55) does not hold for arbitrary h ∈ 𝓗_T^{⊗p} in general without the a priori assumption of p times iterative Skorohod integrability. For example, let p = 2 and let X = B^H be a fractional Brownian motion with H ≤ 1/4, and define h_t(s, v) = 1_t(s)1_s(v) for some fixed t ∈ [0, T]. Then

()

But the process B^H·1_t does not belong to Dom δ_T (see [7]).

We end this section by providing an extension of the Itô formulas of Alòs et al. [1]. They considered Gaussian Volterra processes; that is, they assumed the representation

(59) X_t = ∫_0^t K(t, s) dW_s,

where the kernel K satisfies certain technical assumptions. In [1] it was proved that in the case of Volterra processes one has

(60) f(X_t) = f(0) + ∫_0^t f′(X_s) δX_s + (1/2) ∫_0^t f″(X_s) dR(s, s)

if f satisfies the growth condition

(61) max(|f(x)|, |f′(x)|, |f″(x)|) ≤ c e^{λx²}

for some c > 0 and λ < (1/4)(sup_{s∈[0,T]} E[X_s²])^{−1}. In the following we will consider a different approach, which enables us to
  • (i)

    prove that such a formula holds under minimal requirements,

  • (ii)

    give a more instructive proof of such a result,

  • (iii)

    extend the result from the Volterra context to more general Gaussian processes,

  • (iv)

    drop some technical assumptions posed in [1].

For simplicity, we assume that the variance of X is of bounded variation in order to guarantee the existence of the integral

(62) ∫_0^t f″(X_s) dR(s, s).

If the variance is not of bounded variation, then integral (62) may be understood by integration by parts if f is smooth enough or, in the general case, via the inner product ⟨·, ·⟩_{𝓗_T}. In Theorem 40 we also have to assume that the variance of X is bounded.

The result for polynomials is straightforward, once we assume that the processes p(X·)1_t, with p a polynomial, belong to L²(Ω; 𝓗_T).

Proposition 38 (Itô formula for polynomials). Let X be a separable centered Gaussian process with covariance R and assume that p is a polynomial. Furthermore, assume that for each polynomial p one has p(X·)1_t ∈ L²(Ω; 𝓗_T). Then for each t ∈ [0, T] one has

(63) p(X_t) = p(X_0) + ∫_0^t p′(X_s) δX_s + (1/2) ∫_0^t p″(X_s) dR(s, s)

if and only if X·1_t belongs to Dom δ_T.

Remark 39. The message of the above result is that once the processes p(X·)1_t belong to L²(Ω; 𝓗_T), then they automatically belong to the domain of δ_T, which is a subspace of L²(Ω; 𝓗_T). However, in order to check the condition p(X·)1_t ∈ L²(Ω; 𝓗_T) one needs more information on the kernel K_T. A sufficient condition is provided in Corollary 43, which covers many cases of interest.

Proof. By definition and by applying the transfer principle, we have to prove that p(X·)1_t belongs to the domain of δ_T and that

(64) E[G(p(X_t) − p(X_0))] = (1/2) E[G ∫_0^t p″(X_s) dR(s, s)] + E⟨D_T G, p′(X·)1_t⟩_{𝓗_T}

for every random variable G from a total subset of L²(Ω). In other words, it is sufficient to show that (64) is valid for random variables of the form exp(X(h)), where h is a step-function.

Note first that it is sufficient to prove the claim only for Hermite polynomials Hk,   k = 1, …. Indeed, it is well-known that any polynomial can be expressed as a linear combination of Hermite polynomials and, consequently, the result for arbitrary polynomial p follows by linearity.

We proceed by induction. First, it is clear that the first two polynomials H_0 and H_1 satisfy (64). Furthermore, by assumption X·1_t belongs to Dom δ_T, from which (64) is easily deduced by [22, Proposition 1.3.3]. Assume next that the result is valid for the Hermite polynomials H_k, k = 0, 1, …, n. Then recall the well-known recursion formulas

(65) H_{k+1}(x) = x H_k(x) − k H_{k−1}(x),  H_k′(x) = k H_{k−1}(x).

The induction step follows by straightforward calculations using the recursion formulas above and [22, Proposition 1.3.3]. We leave the details to the reader.
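The recursion formulas used in the induction step can be verified on the power-basis coefficients with NumPy's polynomial utilities (probabilists' Hermite polynomials, matching the normalization of (29)):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from numpy.polynomial import polynomial as P

def he_poly(k):
    c = np.zeros(k + 1); c[k] = 1.0
    return He.herme2poly(c)          # power-basis coefficients of He_k

for k in range(1, 8):
    # three-term recursion: He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)
    rhs = P.polysub(P.polymulx(he_poly(k)), k * he_poly(k - 1))
    assert np.allclose(P.polysub(he_poly(k + 1), rhs), 0.0)
    # derivative rule: He_k'(x) = k He_{k-1}(x)
    assert np.allclose(P.polyder(he_poly(k)), k * he_poly(k - 1))
```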

We will now illustrate how the result can be generalized to functions satisfying the growth condition (61) by using Proposition 38. First note that the growth condition (61) is indeed natural, since it guarantees that the left side of (60) is square integrable. Consequently, since the operator δ_T maps into L²(Ω), functions satisfying (61) form the largest class of functions for which (60) can hold. However, it is not clear in general whether f′(X·)1_t belongs to Dom δ_T. Indeed, for example, in [1] the authors posed additional conditions on the Volterra kernel K to guarantee this. As our main result we show that the condition of Theorem 40 below already implies that (60) holds. In other words, not only is the Itô formula (60) natural, but it is also the only possibility.

Theorem 40 (Itô formula for Skorohod integrals). Let X be a separable centered Gaussian process with covariance R such that all polynomials satisfy the assumptions of Proposition 38. Assume that f ∈ C² satisfies the growth condition (61) and that the variance of X is bounded and of bounded variation. If

()

for any t ∈ [0, T], then

(67) f(X_t) = f(X_0) + ∫_0^t f′(X_s) δX_s + (1/2) ∫_0^t f″(X_s) dR(s, s).

Proof. In this proof we assume, for notational simplicity and with no loss of generality, that sup0≤sTR(s, s) = 1.

First, it is clear that (66) implies that f′(X·)1_t belongs to the domain of δ_T. Hence we only have to prove that

()

for every random variable of the form exp(X(h)), where h is a step-function.

Now, it is well-known that the Hermite polynomials, when properly scaled, form an orthogonal system in L²(ℝ) equipped with the Gaussian measure. Hence each f satisfying the growth condition (61) has a series representation

(69) f(x) = Σ_{k=0}^∞ α_k H_k(x).

Indeed, the growth condition (61) implies that

(70) Σ_{k=0}^∞ α_k² k! < ∞.

Furthermore, we have

(71) f(X_s) = Σ_{k=0}^∞ α_k H_k(X_s),

where the series converges almost surely and in L²(Ω), and a similar conclusion is valid for the derivatives f′(X_s) and f″(X_s).

Then, by applying (66), we obtain that for any ε > 0 there exists N = N_ε such that

()

where

()

Consequently, for random variables of the form exp(X(h)) we obtain, by choosing N large enough and applying Proposition 38, that

()

Now the left side does not depend on n, which concludes the proof.
Now the left side does not depend on n which concludes the proof.

Remark 41. Note that actually it is sufficient to have

()

from which the result follows by Proposition 38. Furthermore, taking into account the growth condition (61), this is actually a sufficient and necessary condition for formula (60) to hold. Consequently, our method can also be used to obtain Itô formulas by considering the extended domain of δ_T (see [9] or [11]). This is the topic of Section 4.4.

Example 42. It is known that if X = B^H is a fractional Brownian motion with H > 1/4, then f′(X·)1_t satisfies condition (66), while for H ≤ 1/4 it does not (see [22, Chapter 5]). Consequently, a simple application of Theorem 40 covers fractional Brownian motion with H > 1/4. For the case H ≤ 1/4 one has to consider the extended domain of δ_T; the corresponding result is proved in [9]. Consequently, in this case (75) holds for any f satisfying the growth condition (61).

We end this section by illustrating the power of our method with the following simple corollary, which is an extension of [1, Theorem 1].

Corollary 43. Let X be a separable centered continuous Gaussian process with bounded covariance R such that the Fredholm kernel K_T is of bounded variation and

()

Assume further that f ∈ C² satisfies the growth condition (61). Then, for any t ∈ [0, T], one has

(77) f(X_t) = f(X_0) + ∫_0^t f′(X_s) δX_s + (1/2) ∫_0^t f″(X_s) dR(s, s).

Proof. Note that the assumption is a Fredholm version of condition (K2) in [1], which implies condition (66). Hence, the result follows from Theorem 40.

4.4. Extended Divergence Operator

As shown in Section 4.3, the Itô formula (60) is the only possibility. However, the problem is that the space L²(Ω; 𝓗_T) may be too small to contain the elements f(X·)1_t. In particular, it may happen that not even the process X itself belongs to L²(Ω; 𝓗_T) (see, e.g., [7] for the case of fractional Brownian motion with H ≤ 1/4). This problem can be overcome by considering an extended domain of δ_T. The idea of the extended domain is to extend the inner product ⟨u, φ⟩_{𝓗_T} from step-functions φ to more general processes u and then to define the extended domain by (47) with a restricted class of test variables F. This also gives another intuitive reason why the extended domain of δ_T can be useful; indeed, here we have proved that the Itô formula (60) is the only possibility, and what one essentially needs for such a result is the following:

  • (i)

    f′(X·)1_t belongs to Dom δ_T.

  • (ii)

    Equation (75) is valid for functions satisfying (61).

Consequently, one should look for extensions of operator δT such that these two things are satisfied.

To facilitate the extension of the domain, we make the following relatively moderate assumption:

  • (H)

    The function t ↦ R(t, s) is of bounded variation on [0, T] and

    (78) sup_{s∈[0,T]} ∫_0^T |R|(dt, s) < ∞.

Remark 44. Note that we are making the assumption on the covariance R, not on the kernel K_T. Hence our case is different from that of [1]. Also, [11] assumed absolute continuity of R; we are satisfied with bounded variation.

We will follow the idea of Lei and Nualart [11] and extend the inner product ⟨·, ·⟩_{𝓗_T} beyond 𝓗_T.

Consider a step-function φ. Then, on the one hand, by the isometry property we have

(79) ⟨φ, 1_t⟩_{𝓗_T} = ⟨K_T^* φ, g_t⟩_{L²([0,T])},

where g_t(s) = K_T(t, s) ∈ L²([0, T]). On the other hand, by using the adjoint property (see Remark 20) we obtain

(80) ⟨K_T^* φ, g_t⟩_{L²([0,T])} = ∫_0^T φ(s) (K_T g_t)(ds),

where, computing formally, we have

(81) (K_T g_t)(s) = ∫_0^T K_T(s, u) K_T(t, u) du = R(s, t).

Consequently,

(82) ⟨φ, 1_t⟩_{𝓗_T} = ∫_0^T φ(s) R(ds, t).
This gives motivation to the following definition similar to that of [11, Definition  2.1].

Definition 45. Denote by the space of measurable functions g satisfying

()
and let φ be a step-function of the form . Then we extend to by defining
()
In particular, this implies that, for g and φ as above, we have
()

We define extended domain DomEδT similarly as in [11].

Definition 46. A process belongs to DomEδT if

()
for any smooth random variable . In this case, δ(u) ∈ L2(Ω) is defined by the duality relationship
()

Remark 47. Note that, in general, DomδT and DomEδT are not comparable; see [11] for discussion.

Note now that if a function f satisfies the growth condition (61), then (61) implies
()
for any . Consequently, with this definition we avoid the problem that the processes might not belong to the corresponding -spaces. Furthermore, this implies that the series expansion (69) converges in the norm defined by
()
which in turn implies (75). Hence, the following is straightforward to obtain: one first shows the result for polynomials and then approximates as in Section 4.3, but using the extended domain instead.

Theorem 48 (Itô formula for extended Skorohod integrals). Let X be a separable centered Gaussian process with covariance R and assume that fC2 satisfies growth condition (61). Furthermore, assume that (H) holds and that the variance of X is bounded and is of bounded variation. Then for any t ∈ [0, T] the process f(X·)1t belongs to DomEδT and

()

Remark 49. As an application of Theorem 48, it is straightforward to derive a version of the Itô–Tanaka formula under additional conditions which guarantee that, for a certain sequence of functions fn, the term converges to the local time. For details we refer to [11], where the authors derived such a formula under their assumptions.

Finally, let us note that the extension to functions f(t, x), where f satisfies the following growth condition, is straightforward:
()
for some c > 0 and .

Theorem 50 (Itô formula for extended Skorohod integrals, II). Let X be a separable centered Gaussian process with covariance R and assume that fC1,2 satisfies growth condition (91). Furthermore, assume that (H) holds and that the variance of X is bounded and is of bounded variation. Then for any t ∈ [0, T] the process xf(·, X·)1t belongs to DomEδT and

()

Proof. Since there is no problem with the processes belonging to the required spaces, the formula follows by approximating with polynomials of the form p(x)q(t) and following the proof of Theorem 40.

5. Applications

We illustrate how some results transfer easily from the Brownian case to the Gaussian Fredholm processes.

5.1. Equivalence in Law

The transfer principle has already been used in connection with the equivalence in law of Gaussian processes, for example, in [24] in the context of fractional Brownian motions and in [2] in the context of Gaussian Volterra processes satisfying certain nondegeneracy conditions. The following proposition uses the Fredholm representation (8) to give a sufficient condition for the equivalence in law of general Gaussian processes in terms of their Fredholm kernels.

Proposition 51. Let X and be two Gaussian processes with Fredholm kernels KT and , respectively. If there exists a Volterra kernel such that

()
then X and are equivalent in law.

Proof. Recall that by the Hitsuda representation theorem [5, Theorem 6.3] a centered Gaussian process is equivalent in law to a Brownian motion on [0, T] if and only if there exist a kernel and a Brownian motion W such that admits the representation

()

Let X have the Fredholm representation

()
Then, is equivalent to X if it admits, in law, the representation
()
where is connected to W of (95) by (94).

In order to show (96), let

()
be the Fredholm representation of . Here W is some Brownian motion. Then, by using connection (93) and the Fubini theorem, we obtain
()
Thus, we have shown representation (96) and consequently the equivalence of and X.
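For intuition, the Hitsuda representation (95) can be illustrated numerically. The sketch below (a Python/NumPy discretization of our own; the grid, the example Volterra kernel b, and all variable names are illustrative and not part of the paper) transforms a Brownian path W into the drift-perturbed path W̃_t = W_t − ∫₀ᵗ ∫₀ˢ b(s, u) dW_u ds, which by the Hitsuda theorem is equivalent in law to a Brownian motion. With b ≡ 0 the transform must reduce to the identity, which serves as a sanity check.

```python
import numpy as np

def hitsuda_transform(dW, b, dt):
    """Discretize W~_t = W_t - int_0^t int_0^s b(s, u) dW_u ds.

    dW : increments of a Brownian path on a uniform grid
    b  : Volterra kernel b(s, u), evaluated only for u < s
    """
    n = len(dW)
    t = np.arange(n) * dt
    drift = np.zeros(n)
    for k in range(n):
        # inner stochastic integral: int_0^{t_k} b(t_k, u) dW_u
        drift[k] = np.sum(b(t[k], t[:k]) * dW[:k])
    # outer Lebesgue integral, subtracted from the Brownian path
    W = np.cumsum(dW)
    return W - np.cumsum(drift) * dt

rng = np.random.default_rng(1)
n, dt = 500, 1.0 / 500
dW = rng.normal(0.0, np.sqrt(dt), n)

# sanity check: with b == 0 the transform returns W itself
W_id = hitsuda_transform(dW, lambda s, u: np.zeros_like(u), dt)

# a nontrivial Volterra kernel gives an equivalent-in-law process
W_tilde = hitsuda_transform(dW, lambda s, u: np.exp(-(s - u)), dt)
```

The transform is lower triangular in the increments, reflecting the Volterra nature of the kernel b.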

5.2. Generalized Bridges

We consider the conditioning, or bridging, of X on N linear functionals of its paths:
()
We assume, without loss of generality, that the functions gi are linearly independent. Also, without loss of generality, we assume that X0 = 0 and that the conditioning is on the set instead of the apparently more general conditioning on the set . Indeed, see [16] for how to obtain the more general conditioning from this one.

The rigorous definition of a bridge is as follows.

Definition 52. The generalized bridge measure is the regular conditional law

()
A representation of the generalized Gaussian bridge is any process Xg satisfying
()

We refer to [16] for more details on generalized Gaussian bridges.

There are many different representations for bridges. A very general representation is the so-called orthogonal representation given by
()
where, by the transfer principle,
()
A more interesting representation is the so-called canonical representation, where the filtrations of the bridge and of the original process coincide. In [16] such representations were constructed for the so-called prediction-invertible Gaussian processes. In this subsection we show how the transfer principle can be used to construct a canonical-type bridge representation for all Gaussian Fredholm processes. We start with an example that should make clear how one uses the transfer principle.

Example 53. We construct a canonical-type representation for X1, the bridge of X conditioned on XT = 0. Assume X0 = 0. Now, by the Fredholm representation of X we can write the conditioning as

()
Let us then denote by the canonical representation of the Brownian bridge with conditioning (104). Then, by [16, Theorem  4.12],
()
Now, by integrating against the kernel KT, we obtain from this that
()
This canonical-type bridge representation appears to be new.
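In the Brownian special case KT(t, s) = 1(s ≤ t), the conditioning of Example 53 reduces to WT = 0, and the canonical representation of the Brownian bridge is the classical SDE dW¹_t = dW_t − W¹_t/(T − t) dt. A minimal Euler sketch of this special case (Python/NumPy; the step size and seed are our choices, and this illustrates only the Brownian building block, not a general Fredholm kernel):

```python
import numpy as np

def canonical_brownian_bridge(dW, dt, T):
    """Euler scheme for dW1_t = dW_t - W1_t / (T - t) dt, W1_0 = 0."""
    n = len(dW)
    W1 = np.zeros(n + 1)
    for k in range(n):
        t = k * dt
        # the drift -W1_t / (T - t) pulls the path back to 0 at time T
        W1[k + 1] = W1[k] + dW[k] - W1[k] / (T - t) * dt
    return W1

rng = np.random.default_rng(0)
T, n = 1.0, 2000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W1 = canonical_brownian_bridge(dW, dt, T)
print(abs(W1[-1]))  # near zero: the bridge is pinned at t = T
```

Note that the bridge is driven by the same increments dW as the original path, so the two filtrations coincide; this is exactly the canonical property that the transfer principle carries over to the Fredholm level.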

Let us then denote
()
Then, in the same way as in Example 53, by applying the transfer principle to [16, Theorem 4.12], we obtain the following canonical-type bridge representation for general Gaussian Fredholm processes.

Proposition 54. Let X be a Gaussian process with Fredholm kernel KT such that X0 = 0. Then the bridge Xg admits the canonical-type representation

()
where .

5.3. Series Expansions

The Mercer square root (13) can be used to build the Karhunen-Loève expansion for the Gaussian process X. But the Mercer form (13) is seldom known. However, if one can find some kernel KT such that representation (8) holds, then one can construct a series expansion for X by using the transfer principle of Theorem 22 as follows.

Proposition 55 (series expansion). Let X be a separable Gaussian process with representation (8). Let be any orthonormal basis on L2([0, T]). Then X admits the series expansion

()
where is a sequence of independent standard normal random variables. Series (109) converges in L2(Ω) and also almost surely uniformly if and only if X is continuous.

The proof below uses the reproducing kernel Hilbert space technique. For more details we refer to [25], where the series expansion is constructed for the fractional Brownian motion by using the transfer principle.

Proof. The Fredholm representation (8) implies immediately that the reproducing kernel Hilbert space of X is the image KTL2([0, T]) and that KT is actually an isometry from L2([0, T]) onto the reproducing kernel Hilbert space of X. The L2-expansion (109) then follows from [26, Theorem 3.7], and the equivalence of the almost sure uniform convergence of (109) and the continuity of X follows from [26, Theorem 3.8].
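As a numerical illustration of Proposition 55, take the Brownian kernel KT(t, s) = 1(s ≤ t) on [0, 1] and the cosine basis of L2([0, 1]); then the coefficients ∫₀ᵀ KT(t, s)φn(s) ds = ∫₀ᵗ φn(s) ds are available in closed form, and by Parseval the truncated products of coefficients reproduce the covariance min(t, s). (The basis, grid, truncation level, and names below are our own choices for the sketch.)

```python
import numpy as np

def coeff(n, t, T=1.0):
    """c_n(t) = int_0^t phi_n(s) ds for the cosine basis of L^2([0, T])."""
    if n == 0:
        return t / np.sqrt(T)              # phi_0 = 1 / sqrt(T)
    # phi_n(s) = sqrt(2/T) cos(n pi s / T) for n >= 1
    return np.sqrt(2.0 * T) * np.sin(n * np.pi * t / T) / (n * np.pi)

def truncated_cov(t, s, N):
    """sum_{n < N} c_n(t) c_n(s); converges to R(t, s) = min(t, s)."""
    return sum(coeff(n, t) * coeff(n, s) for n in range(N))

t, s = 0.3, 0.7
approx = truncated_cov(t, s, N=500)
print(approx, min(t, s))

# a sample of X_t itself via the series (109): X_t = sum_n xi_n c_n(t)
xi = np.random.default_rng(2).standard_normal(500)
X_t = sum(xi[n] * coeff(n, t) for n in range(500))
```

The same recipe works for any Fredholm kernel KT: only the coefficient integrals c_n(t) change, computed numerically when no closed form is available.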

5.4. Stochastic Differential Equations and Maximum Likelihood Estimators

Let us briefly discuss the following generalized Langevin equation:
()
with some Gaussian noise X, parameter θ > 0, and initial condition X0. This can be written in the integral form
()
Here the integral can be understood in a pathwise sense or in a Skorohod sense, and both integrals coincide. Suppose now that the Gaussian noise X has the Fredholm representation
()
By applying the transfer principle we can write (111) as
()
This equation can be interpreted as a stochastic differential equation with some anticipating Gaussian perturbation term . Now the unique solution to (111) with an initial condition is given by
()
By using integration by parts and by applying the Fredholm representation of X this can be written as
()
which, thanks to the stochastic Fubini theorem, can be written as
()
In other words, the solution Xθ is a Gaussian process with a kernel
()
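Concretely, the variation-of-constants solution X^θ_t = e^{−θt}X_0 + ∫₀ᵗ e^{−θ(t−u)} dX_u can be discretized and compared against an Euler scheme for (111). In the sketch below (Python/NumPy; Brownian motion stands in for the driving Gaussian noise X purely as an example, and the grid, seed, and names are ours), the two agree up to discretization error.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, x0 = 1.0, 0.5
n, dt = 4000, 1.0 / 4000
t = np.arange(n + 1) * dt
# increments of the driving Gaussian noise X; Brownian motion as an example
dX = rng.normal(0.0, np.sqrt(dt), n)

# Euler scheme for dX^theta_t = -theta X^theta_t dt + dX_t
euler = np.zeros(n + 1)
euler[0] = x0
for k in range(n):
    euler[k + 1] = euler[k] - theta * euler[k] * dt + dX[k]

# variation of constants: X^theta_t = e^{-theta t} x0 + int_0^t e^{-theta(t-u)} dX_u
exact = np.exp(-theta * t) * x0
exact[1:] += np.array([
    np.sum(np.exp(-theta * (t[k] - t[:k])) * dX[:k]) for k in range(1, n + 1)
])
print(np.max(np.abs(euler - exact)))  # small discretization error
```

Replacing the Brownian increments dX by increments generated from a Fredholm kernel, dX = K dW in matrix form, gives a path of the solution driven by a general Gaussian noise, in line with the kernel formula above.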
Note that this is just one example of how the transfer principle can be applied to study stochastic differential equations. Indeed, for the more general equation
()
any existence or uniqueness result transfers immediately to the corresponding result for the equation
()
and vice versa.
Let us end this section by discussing briefly how the transfer principle can be used to build maximum likelihood estimators (MLEs) for the mean-reversion parameter θ in (111). For details on parameter estimation in such equations with general stationary-increment Gaussian noise we refer to [27] and the references therein. Let us assume that the noise X in (111) is invertible, in the sense that the Brownian motion W in its Fredholm representation is a linear transformation of X. Assume further that this transformation admits a kernel, so that we can write
()
Then, by operating with the kernels KT and , we see that (111) is equivalent to the anticipating equation
()
where
()
Consequently, the MLE for (111) is the MLE for (121), which in turn can be constructed by using a suitable anticipating Girsanov theorem. There is a vast literature on how to do this; see, for example, [28] and references therein.
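In the special case where the noise X is itself a Brownian motion, so that (111) is the classical Ornstein–Uhlenbeck equation, the Girsanov likelihood yields the well-known estimator θ̂_T = −∫₀ᵀ X_t dX_t / ∫₀ᵀ X_t² dt. The sketch below (Python/NumPy) discretizes this Brownian special case only; the horizon, step size, and seed are our choices, and the anticipating setting of (121) is not covered.

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true = 2.0
T, n = 50.0, 50_000
dt = T / n

# simulate the OU solution of (111) with Brownian noise, X_0 = 0
X = np.zeros(n + 1)
dW = rng.normal(0.0, np.sqrt(dt), n)
for k in range(n):
    X[k + 1] = X[k] - theta_true * X[k] * dt + dW[k]

# discretized MLE: theta_hat = - int X dX / int X^2 dt
num = -np.sum(X[:-1] * np.diff(X))
den = np.sum(X[:-1] ** 2) * dt
theta_hat = num / den
print(theta_hat)  # close to theta_true for large T
```

The asymptotic standard deviation of this estimator is of order sqrt(2θ/T), so the horizon T controls the accuracy; the general Gaussian-noise case requires the anticipating Girsanov machinery referred to above.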

6. Conclusions

We have shown that virtually every Gaussian process admits a Fredholm representation with respect to a Brownian motion. This apparently simple fact has, as far as we know, remained unnoticed until now. The Fredholm representation immediately yields the transfer principle which allows one to transfer the stochastic analysis of virtually any Gaussian process into stochastic analysis of the Brownian motion. We have shown how this can be done. Finally, we have illustrated the power of the Fredholm representation and the associated transfer principle in many applications.

Stochastic analysis becomes easy with the Fredholm representation. The only obvious problem is constructing the Fredholm kernel from the covariance function. In principle this can be done algorithmically, but analytically it is very difficult. The opposite construction, however, is trivial. Therefore, if one begins the modeling with the Fredholm kernel rather than with the covariance, the analysis becomes simpler and much more convenient.

Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

Lauri Viitasaari was partially funded by Emil Aaltonen Foundation. Tommi Sottinen was partially funded by the Finnish Cultural Foundation (National Foundations’ Professor Pool).
