Volume 2012, Issue 1, Article ID 838397
Research Article
Open Access

The Sum and Difference of Two Lognormal Random Variables

C. F. Lo

Corresponding author: C. F. Lo
Institute of Theoretical Physics and Department of Physics, The Chinese University of Hong Kong, Hong Kong
First published: 10 September 2012
Academic Editor: Mehmet Sezer

Abstract

We present a new unified approach to model the dynamics of both the sum and difference of two correlated lognormal stochastic variables. By the Lie-Trotter operator splitting method, both the sum and the difference are shown to follow a shifted lognormal stochastic process, and approximate probability distributions are determined in closed form. Illustrative numerical examples are presented to demonstrate the validity and accuracy of these approximate distributions. In terms of the approximate probability distributions, we also obtain an analytical series expansion of the exact solutions, which allows us to improve the approximation in a systematic manner. Moreover, we believe that this new approach can be extended to study both (1) the algebraic sum of N lognormals and (2) the sum and difference of other correlated stochastic processes, for example two correlated CEV processes, two correlated CIR processes, and two correlated lognormal processes with mean reversion.

1. Introduction

"Given two correlated lognormal stochastic variables, what is the stochastic dynamics of the sum or difference of the two variables?"; or, equivalently, "What is the probability distribution of the sum or difference of two correlated lognormal stochastic variables?" The solution to this long-standing problem has wide applications in many fields such as telecommunication studies [1–6], financial modelling [7–9], actuarial science [10–12], biosciences [13], physics [14], and so forth. Although the lognormal distribution itself is well documented in the literature [15, 16], almost nothing is known about the probability distribution of the sum or difference of two correlated lognormal variables. It is commonly agreed, however, that the distribution of either the sum or the difference is neither normal nor lognormal.
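A quick Monte Carlo experiment illustrates the last point. The sketch below is not part of the original paper; it assumes driftless geometric Brownian dynamics (as formalized in (1.1) below) with illustrative parameter values, and it checks the sample skewness of log(S1 + S2), which would vanish if the sum were itself lognormal.

```python
# A minimal Monte Carlo sketch (illustrative parameters, driftless dynamics assumed).
import numpy as np

rng = np.random.default_rng(0)

S10, S20 = 110.0, 100.0          # initial values
sigma1, sigma2 = 0.25, 0.15      # volatilities
rho, tau = 0.5, 1.0              # correlation and horizon t - t0
n = 1_000_000

# Correlated standard normal increments over [t0, t].
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# Driftless geometric Brownian motion, sampled exactly at time t.
S1 = S10 * np.exp(-0.5 * sigma1**2 * tau + sigma1 * np.sqrt(tau) * z1)
S2 = S20 * np.exp(-0.5 * sigma2**2 * tau + sigma2 * np.sqrt(tau) * z2)

S_plus = S1 + S2

# If S1 + S2 were lognormal, log(S1 + S2) would be normal and have zero skewness.
logS = np.log(S_plus)
skew = np.mean((logS - logS.mean())**3) / logS.std()**3
print(f"sample skewness of log(S1 + S2): {skew:.4f}")
```

The nonzero skewness of the log of the sum is one simple symptom of the fact that neither a normal nor a lognormal law fits exactly.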

The aforesaid problem can be formulated as follows. Given two lognormal stochastic variables S1 and S2 obeying the following stochastic differential equations:
dSi(t) = σi Si(t) dZi(t),  i = 1, 2,  (1.1)
where σi is a positive constant, dZi denotes a standard Wiener process associated with Si, and the two Wiener processes are correlated as dZ1dZ2 = ρdt, the time evolution of the joint probability distribution function P(S1, S2, t; S10, S20, t0) of the two correlated lognormal variables is governed by the backward Kolmogorov equation
∂P(S1, S2, t; S10, S20, t0)/∂t0 = −L P(S1, S2, t; S10, S20, t0),  (1.2)
where
L = (1/2) σ1² S10² ∂²/∂S10² + ρ σ1 σ2 S10 S20 ∂²/(∂S10 ∂S20) + (1/2) σ2² S20² ∂²/∂S20²,  (1.3)
subject to the boundary condition
P(S1, S2, t → t0; S10, S20, t0) = δ(S1 − S10) δ(S2 − S20).  (1.4)
This joint probability distribution function tells us how probable it is that the two lognormal variables assume the values S1 and S2 at time t > t0, given that their values at t0 are S10 and S20. Since P(S1, S2, t; S10, S20, t0) is known in closed form as follows:
P(S1, S2, t; S10, S20, t0) = exp{−[x1² − 2ρ x1 x2 + x2²]/(2(1 − ρ²))} / [2π σ1 σ2 (t − t0) √(1 − ρ²) S1 S2],  where xi = [ln(Si/Si0) + (1/2) σi² (t − t0)] / (σi √(t − t0)),  i = 1, 2,  (1.5)
the probability distribution of the sum or difference, namely S± ≡ S1 ± S2, of the two correlated lognormal variables can be obtained by evaluating the integral
P±(S±, t; S10, S20, t0) = ∫0∞ dS1 ∫0∞ dS2 δ(S± − S1 ∓ S2) P(S1, S2, t; S10, S20, t0).  (1.6)
Although many methods have been developed to address this problem, a closed-form expression for the probability distribution of the sum or difference is still missing, and one must resort to numerical methods to perform the integration. The numerically exact solution, however, does not provide any explicit information about the stochastic dynamics of the sum or difference.
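For concreteness, the following sketch shows one way such a "numerically exact" benchmark can be produced: under the driftless dynamics of (1.1), the joint density (1.5) is integrated along the line S1 + S2 = s, as prescribed by (1.6), with a standard quadrature routine. The parameter values are illustrative, and the joint density coded here is the standard bivariate lognormal transition density consistent with (1.1)–(1.5) as reconstructed above, not a formula quoted verbatim from the paper.

```python
# Numerically exact density of S+ = S1 + S2 by quadrature (illustrative sketch).
import numpy as np
from scipy.integrate import quad

def joint_lognormal_pdf(s1, s2, S10, S20, sig1, sig2, rho, tau):
    """Bivariate lognormal transition density of two correlated driftless GBMs."""
    x1 = (np.log(s1 / S10) + 0.5 * sig1**2 * tau) / (sig1 * np.sqrt(tau))
    x2 = (np.log(s2 / S20) + 0.5 * sig2**2 * tau) / (sig2 * np.sqrt(tau))
    norm = 2.0 * np.pi * sig1 * sig2 * tau * np.sqrt(1.0 - rho**2) * s1 * s2
    expo = -(x1**2 - 2.0 * rho * x1 * x2 + x2**2) / (2.0 * (1.0 - rho**2))
    return np.exp(expo) / norm

def density_of_sum(s, **p):
    """P_+(s): integrate the joint density along the line S1 + S2 = s."""
    integrand = lambda u: joint_lognormal_pdf(u, s - u, **p)
    val, _ = quad(integrand, 1e-10, s - 1e-10, limit=200)
    return val

params = dict(S10=110.0, S20=100.0, sig1=0.25, sig2=0.15, rho=0.5, tau=1.0)
for s in (160.0, 210.0, 260.0):
    print(s, density_of_sum(s, **params))
```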

In the absence of knowledge about the probability distribution of the sum or difference of two correlated lognormal variables, several analytical approximation methods which focus on finding a good approximation for the desired probability distribution have been proposed in the literature [1–6, 8, 17–27]. Essentially, these analytical approximations assume a specific distribution for the sum or difference of the two correlated lognormal variables and then use a variety of methods to identify the parameters of that distribution. However, no mathematical justification for the chosen distribution is usually given. In spite of this shortcoming, these approximations have attracted considerable attention and have been extended to tackle algebraic sums of N correlated lognormal variables, too.

In this communication, we apply the Lie-Trotter operator splitting method [28] to derive an approximation for the dynamics of the sum or difference of two correlated lognormal variables. It is shown that both the sum and the difference can be described by a shifted lognormal stochastic process. Approximate probability distributions of both the sum and the difference are determined in closed form, and illustrative numerical examples are presented to demonstrate the accuracy of these approximate distributions. Unlike previous studies, which treat the sum and the difference separately, the proposed method provides a new unified approach to model the dynamics of both quantities. In addition, in terms of the approximate solutions, we are able to obtain an analytical series expansion of the exact solutions, which allows us to improve the approximation systematically. Moreover, we believe that this new approach can be extended to study both (1) the algebraic sum of N lognormals and (2) the sum and difference of other correlated stochastic processes, for example two correlated CEV processes, two correlated CIR processes, and two correlated lognormal processes with mean reversion.

2. Lie-Trotter Operator Splitting Method

It is observed that the probability distribution of the sum or difference of the two correlated lognormal variables, that is, P±(S±, t; S10, S20, t0), also satisfies the same backward Kolmogorov equation given in (1.2), but with a different boundary condition
P±(S±, t → t0; S10, S20, t0) = δ(S± − S10 ∓ S20).  (2.1)
To solve for P±(S±, t; S10, S20, t0), we first rewrite the backward Kolmogorov equation in terms of the new variables as
(2.2)
where
(2.3)
The corresponding boundary condition now becomes
(2.4)
Accordingly, the formal solution of (2.2) is given by
(2.5)
Since the exponential operator is difficult to evaluate, we apply the Lie-Trotter operator splitting method [28] to approximate the operator by (see the appendix)
(2.6)
and obtain an approximation to the formal solution , namely
(2.7)
where the relation is utilized. For , which is normally valid unless S10 and S20 are both close to zero, the operators and can be approximately expressed as
(2.8)
in terms of the two new variables:
(2.9)
where and. Without loss of generality, we assume that σ1 > σ2. Obviously, both and are lognormal (LN) random variables defined by the stochastic differential equations
(2.10)
and their closed-form probability distribution functions are given by
(2.11)
for t > t0. As a result, it can be inferred that within the Lie-Trotter splitting approximation both S+ and S− are governed by a shifted lognormal process. It should be noted that the Lie-Trotter splitting approximation is valid only when the relevant splitting parameter is small.
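To indicate how a shifted lognormal density of this kind is evaluated in practice, here is a minimal sketch. The shift c, the effective initial value m, and the effective volatility sig_eff are hypothetical placeholders; in an actual computation they would be read off the definitions in (2.9)–(2.11).

```python
# Generic shifted lognormal density (placeholder parameters, not the paper's (2.11) values).
import numpy as np

def shifted_lognormal_pdf(s, c, m, sig_eff, tau):
    """Density of c + X, where X is a driftless lognormal variable started at m."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    mask = s > c                      # the density vanishes below the shift
    x = s[mask] - c
    z = (np.log(x / m) + 0.5 * sig_eff**2 * tau) / (sig_eff * np.sqrt(tau))
    out[mask] = np.exp(-0.5 * z**2) / (x * sig_eff * np.sqrt(2.0 * np.pi * tau))
    return out

# Example call with placeholder parameters.
grid = np.linspace(120.0, 320.0, 5)
print(shifted_lognormal_pdf(grid, c=100.0, m=110.0, sig_eff=0.25, tau=1.0))
```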
Alternatively, we can also approximate the operator by
(2.12)
and
(2.13)
It is not difficult to recognize that R follows the square-root (SR) stochastic process defined by the stochastic differential equation
(2.14)
and has the closed-form probability distribution function
(2.15)
for t > t0, where I1(·) is the modified Bessel function of the first kind of order one. Accordingly, we have shown that within the Lie-Trotter splitting approximation, which again requires the relevant splitting parameter to be small, S− can be described by a shifted square-root process, too.
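As an indication of how an I1-type transition density of the square-root family can be evaluated numerically, the sketch below uses the kernel of a squared Bessel process of dimension 4 as a stand-in; the drift and volatility of the actual process in (2.13)–(2.15) are not reproduced here, so this is only an illustration of the Bessel-function machinery, not the paper's formula.

```python
# Stand-in square-root-type transition density built on the modified Bessel function I1.
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_nu

def squared_bessel4_pdf(y, x0, tau):
    """Transition density p(y, tau | x0) of dR = 4 dt + 2 sqrt(R) dZ (dimension-4 squared Bessel)."""
    y = np.asarray(y, dtype=float)
    return (0.5 / tau) * np.sqrt(y / x0) * np.exp(-(x0 + y) / (2.0 * tau)) \
        * iv(1, np.sqrt(x0 * y) / tau)

grid = np.linspace(0.5, 20.0, 5)
print(squared_bessel4_pdf(grid, x0=5.0, tau=1.0))
```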
Moreover, in terms of the approximate solutions , we can express the exact solutions in the following form:
(2.16)
where
(2.17)
The integrals over the temporal variables {ξi; i = 1,2, 3, …} can be evaluated analytically. If we keep terms up to the order of , then can be approximated by
(2.18)
This analytical series expansion allows us to improve the approximate solutions systematically.

3. Illustrative Numerical Examples

In Figure 1 we plot the approximate closed-form probability distribution function of the sum S+ given in (2.11) for different values of the input parameters. We start with S10 = 110, S20 = 100, σ1 = 0.25, and σ2 = 0.15 in Figure 1(a). Then, in order to examine the effect of S20, we decrease its value to 70 in Figure 1(b) and to 40 in Figure 1(c). In Figures 1(d), 1(e), and 1(f) we repeat the same investigation with a new set of values for σ1 and σ2, namely σ1 = 0.3 and σ2 = 0.2. Without loss of generality, we set t − t0 = 1 for simplicity. The (numerically) exact results, obtained by numerical integration, are also included for comparison. It is clear that the proposed approximation provides accurate estimates of the exact values. Moreover, to give a clearer picture of the accuracy, we plot the corresponding estimation errors in Figure 2. We can easily see that the major discrepancies appear around the peak of the probability distribution function, and that the estimation deteriorates as the correlation parameter ρ decreases from 0.5 to −0.5. It is also observed that the errors increase with the ratio as expected, but they seem not to be very sensitive to the changes in σ1 and σ2.

Figure 1: Probability density versus S1 + S2. The solid lines denote the distributions of the approximate shifted lognormal process, and the dashed lines show the exact results. (a) S10 = 110, S20 = 100, σ1 = 0.25, σ2 = 0.15; (b) S10 = 110, S20 = 70, σ1 = 0.25, σ2 = 0.15; (c) S10 = 110, S20 = 40, σ1 = 0.25, σ2 = 0.15; (d) S10 = 110, S20 = 100, σ1 = 0.3, σ2 = 0.2; (e) S10 = 110, S20 = 70, σ1 = 0.3, σ2 = 0.2; (f) S10 = 110, S20 = 40, σ1 = 0.3, σ2 = 0.2.

Figure 2: Error versus S1 + S2. The error is calculated by subtracting the approximate estimate from the exact result. Panels (a)-(f) use the same parameter sets as in Figure 1.

Next, we apply the same sequence of analysis to the two approximate closed-form probability distribution functions of the difference S− given in (2.11) and (2.15). Similar observations about the accuracy of the proposed approximation can be made for the difference S−, too (see Figures 3 and 4). However, contrary to the case of S+, the estimation performs better for positive correlation. Of the two approximation schemes for S−, the shifted LN process seems to perform somewhat better than the shifted SR process, as evidenced by the numerical results.
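The exact benchmark for the difference can be produced in the same way as for the sum: the joint density is integrated along the line S1 − S2 = s, which naturally covers negative values of S−. The sketch below restates the joint density so that it runs on its own; as before, it assumes driftless dynamics, and the parameter values are illustrative choices taken from the text.

```python
# Numerically exact density of S- = S1 - S2 by quadrature (illustrative sketch).
import numpy as np
from scipy.integrate import quad

def joint_lognormal_pdf(s1, s2, S10, S20, sig1, sig2, rho, tau):
    """Bivariate lognormal transition density of two correlated driftless GBMs."""
    x1 = (np.log(s1 / S10) + 0.5 * sig1**2 * tau) / (sig1 * np.sqrt(tau))
    x2 = (np.log(s2 / S20) + 0.5 * sig2**2 * tau) / (sig2 * np.sqrt(tau))
    norm = 2.0 * np.pi * sig1 * sig2 * tau * np.sqrt(1.0 - rho**2) * s1 * s2
    return np.exp(-(x1**2 - 2.0 * rho * x1 * x2 + x2**2) / (2.0 * (1.0 - rho**2))) / norm

def density_of_difference(s, **p):
    """P_-(s): integrate the joint density along S1 - S2 = s, with S1 > max(0, s)."""
    integrand = lambda s1: joint_lognormal_pdf(s1, s1 - s, **p)
    lower = max(0.0, s) + 1e-10
    val, _ = quad(integrand, lower, np.inf, limit=200)
    return val

params = dict(S10=110.0, S20=40.0, sig1=0.3, sig2=0.2, rho=-0.5, tau=1.0)
for s in (30.0, 70.0, 110.0):
    print(s, density_of_difference(s, **params))
```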

Figure 3: Probability density versus S1 − S2. The dashed lines denote the distributions of the approximate shifted lognormal process, the dotted lines indicate the distributions of the approximate shifted square-root process, and the solid lines show the exact results. Panels (a)-(f) use the same parameter sets as in Figure 1.

Figure 4: Error versus S1 − S2. The error is calculated by subtracting the approximate estimate from the exact result. The dashed lines denote the errors of the approximate shifted square-root process, and the solid lines show the errors of the approximate shifted lognormal process. Panels (a)-(f) use the same parameter sets as in Figure 1.

4. Conclusion

In this paper we have presented a new unified approach to model the dynamics of both the sum and difference of two correlated lognormal stochastic variables. By the Lie-Trotter operator splitting method, both the sum and the difference are shown to follow a shifted lognormal stochastic process, and approximate probability distributions are determined in closed form. Illustrative numerical examples are presented to demonstrate the validity and accuracy of these approximate distributions. In terms of the approximate probability distributions, we have also obtained an analytical series expansion of the exact solutions, which allows us to improve the approximation in a systematic manner. Moreover, we believe that this new approach can be extended to study both (1) the algebraic sum of N lognormals and (2) the sum and difference of other correlated stochastic processes, for example two correlated CEV processes, two correlated CIR processes, and two correlated lognormal processes with mean reversion.

Appendix

Lie-Trotter Splitting Approximation

Suppose that one needs to exponentiate an operator which can be split into two different parts, namely A and B. For simplicity, let us assume that C = A + B, where the exponential operator exp(εC) is difficult to evaluate but exp(εA) and exp(εB) are either solvable or easy to deal with. Under such circumstances, the exponential operator exp(εC), with ε being a small parameter, can be approximated by the Lie-Trotter splitting formula [28]:
exp(εC) ≈ exp(εA) exp(εB).  (A.1)
This can be seen as approximating the solution at t = ε of the equation du/dt = (A + B)u by a composition of the exact solutions of the equations du/dt = Au and du/dt = Bu at time t = ε.
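A small numerical experiment makes the quality of (A.1) concrete. In the sketch below, two randomly generated non-commuting matrices play the role of the split operators A and B (an illustrative choice, not the operators of Section 2); the single-step splitting error is seen to shrink roughly like ε² as ε is reduced, as expected for first-order Lie-Trotter splitting.

```python
# Numerical illustration of the Lie-Trotter formula (A.1) with matrices standing in for operators.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

for eps in (0.5, 0.25, 0.125, 0.0625):
    exact = expm(eps * (A + B))            # exp(eps * C) with C = A + B
    split = expm(eps * A) @ expm(eps * B)  # Lie-Trotter approximation
    err = np.linalg.norm(exact - split)
    print(f"eps = {eps:7.4f}   splitting error = {err:.3e}")
```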
