Volume 2012, Issue 1 906373
Research Article
Open Access

Chover-Type Laws of the Iterated Logarithm for Continuous Time Random Walks

Kyo-Shin Hwang
Research Institute of Natural Science, Gyeongsang National University, Jinju 660-701, Republic of Korea

Wensheng Wang (Corresponding Author)
Department of Mathematics, Hangzhou Normal University, Hangzhou 310036, China
First published: 10 July 2012
Academic Editor: P. G. L. Leach

Abstract

A continuous time random walk is a random walk subordinated to a renewal process used in physics to model anomalous diffusion. In this paper, we establish Chover-type laws of the iterated logarithm for continuous time random walks with jumps and waiting times in the domains of attraction of stable laws.

1. Introduction

Let {Yi, Ji} be a sequence of independent and identically distributed random vectors, and write S(n) = Y1 + Y2 + ⋯ + Yn and T(n) = J1 + J2 + ⋯ + Jn. Let Nt = max{n ≥ 0 : T(n) ≤ t} be the renewal process of the Ji. A continuous time random walk (CTRW) is defined by
X(t) = S(Nt). (1.1)
In this setting, Yi represents a particle jump, and Ji > 0 is the waiting time preceding that jump, so that S(n) represents the particle location after n jumps and T(n) is the time of the nth jump. Then Nt is the number of jumps by time t > 0, and the CTRW X(t) represents the particle location at time t > 0, which is a random walk subordinated to a renewal process.
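As an illustration of this definition, the following sketch simulates a single CTRW path under hypothetical Pareto-type laws for the jumps and waiting times; the function name, the tail indices, and the inverse-CDF sampling scheme are illustrative assumptions, not part of the model above.

```python
import random

def simulate_ctrw(t, alpha=1.5, beta=0.8, seed=0):
    """Sketch: one CTRW path value X(t) = S(N_t) with heavy-tailed inputs.

    Hypothetical choices: jumps Y_i with P(Y > x) = x^{-alpha} and waiting
    times J_i with P(J > x) = x^{-beta}, both for x >= 1 (Pareto laws,
    sampled by inverting the CDF).
    """
    rng = random.Random(seed)
    s = 0.0        # particle location S(n) after n jumps
    elapsed = 0.0  # time of the nth jump, T(n)
    n = 0          # number of jumps by time t, i.e., N_t
    while True:
        j = (1.0 - rng.random()) ** (-1.0 / beta)  # next waiting time
        if elapsed + j > t:
            break  # the (n + 1)st jump has not occurred by time t
        elapsed += j
        s += (1.0 - rng.random()) ** (-1.0 / alpha)  # next particle jump
        n += 1
    return s, n  # X(t) = S(N_t) and N_t
```

Since each Pareto waiting time here is at least 1, the simulated Nt never exceeds t, and the location is at least as large as the jump count.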

It should be mentioned that the subordination scheme for CTRW processes goes back to Fogedby [1] and was expanded by Baule and Friedrich [2] and Magdziarz et al. [3]. The theory of subordination also holds for nonhomogeneous CTRW processes, which were introduced by Metzler et al. [4, 5] and Barkai et al. [6].

The CTRW is useful in physics for modeling anomalous diffusion. Heavy-tailed particle jumps lead to superdiffusion, where a cloud of particles spreads faster than classical Brownian motion, and heavy-tailed waiting times lead to subdiffusion. CTRW models and the associated fractional diffusion equations are important in applications to physics, hydrology, and finance; see, for example, Berkowitz et al. [7], Metzler and Klafter [8], Scalas [9], and Meerschaert and Scalas [10] for more information. In applications to hydrology, the heavy-tailed particle jumps capture the velocity irregularities caused by heterogeneous porous media, and the waiting times model particle sticking or trapping. In applications to finance, the particle jumps are price changes or log returns, separated by a random waiting time between trades.

If the jumps Yi belong to the domain of attraction of a stable law with index α (0 < α < 2), and the waiting times Ji belong to the domain of attraction of a stable law with index β (0 < β < 1), Becker-Kern et al. [11] and Meerschaert and Scheffler [12] showed that, as c → ∞,
(1.2)
a non-Markovian limit with anomalous scaling, where A(t) is a stable Lévy motion and E(t) is the inverse or hitting-time process of a stable subordinator. Densities of the CTRW scaling limit A(E(t)) solve a space-time fractional diffusion equation that involves a fractional time derivative of order β; see Meerschaert and Scheffler [13], Becker-Kern et al. [11], and Meerschaert and Scheffler [12] for complete details. Becker-Kern et al. [14], Meerschaert and Scheffler [15], and Meerschaert et al. [16] discussed related limit theorems for CTRWs based on two time scales, triangular arrays, and dependent jumps, respectively. The aim of the present paper is to investigate laws of the iterated logarithm for CTRWs. We establish Chover-type laws of the iterated logarithm for CTRWs with jumps and waiting times in the domains of attraction of stable laws.
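The role of the inverse process can already be seen at the discrete level: for 0 < β < 1, the renewal count Nt grows sublinearly, roughly like t^β, which is the discrete shadow of the inverse stable subordinator E(t). A minimal sketch, under the assumption of Pareto(β) waiting times (the helper name and parameters are illustrative):

```python
import random

def renewal_count(t, beta=0.8, rng=None):
    """Sketch: renewal count N_t for Pareto(beta) waiting times, 0 < beta < 1.

    Since the waiting times have infinite mean, the mean of N_t grows
    roughly like t^beta rather than linearly in t.
    """
    rng = rng or random.Random(1)
    elapsed, n = 0.0, 0
    while True:
        j = (1.0 - rng.random()) ** (-1.0 / beta)  # P(J > x) = x^{-beta}, x >= 1
        if elapsed + j > t:
            return n
        elapsed += j
        n += 1
```

Averaging renewal_count over many independent runs at horizons t and 2t should show growth by a factor of roughly 2^β rather than 2.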

Throughout this paper we will use C to denote an unspecified positive and finite constant which may differ at each occurrence, "i.o." to stand for "infinitely often," "a.s." to stand for "almost surely," and "u(x) ~ v(x)" to mean lim u(x)/v(x) = 1. Our main results read as follows.

Theorem 1.1. Let {Yi} be a sequence of i.i.d. nonnegative random variables with a common distribution F, and let {Ji}, independent of {Yi}, be a sequence of i.i.d. nonnegative random variables with a common distribution G. Assume that 1 − F(x) ~ x^{−α}L(x), 0 < α < 2, where L is a slowly varying function, and that G is absolutely continuous with 1 − G(x) ~ Cx^{−β}, 0 < β < 1. Let {B(n)} be a sequence such that nL(B(n))/B(n)^α → C as n → ∞. Then one has

(1.3)

The following is an immediate consequence of Theorem 1.1.

Corollary 1.2. If the tail distribution of Yi satisfies P(Y1 > x) ~ Cx^{−α} in Theorem 1.1, then one has

(1.4)

In the course of our arguments we often make statements that are valid only for sufficiently large values of some index. When there is no danger of confusion, we omit explicit mention of this proviso.

2. Chung Type LIL for Stable Summands

In this section we consider a Chung-type law of the iterated logarithm for sums of random variables in the domain of attraction of a stable law, which plays a key role in the proof of Theorem 1.1. When Ji has a symmetric stable distribution function G characterized by
E exp(iθJ1) = exp(−|θ|^β), (2.1)
with 0 < β < 2, Chover [17] established that
lim sup_{n→∞} (|T(n)|/n^{1/β})^{1/log log n} = e^{1/β} a.s. (2.2)
We call (2.2) Chover's law of the iterated logarithm. Since then, several papers have been devoted to developing Chover's LIL; see, for example, Heyde [18–20], Pakshirajan and Vasudeva [21], Vasudeva [22], Qi and Cheng [23], Scheffler [24], Chen [25], and Peng and Qi [26]. The obvious corresponding statement for the "lim inf" does not seem to have been recorded; it is the purpose of this section to do so, and the result may be of independent interest.
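Chover's normalization can be illustrated numerically. The sketch below computes (T(n)/n^{1/β})^{1/log log n} for partial sums of nonnegative Pareto(β) variables, a one-sided stand-in for the stable summands above; the function name, the tail index, and the seed are illustrative assumptions.

```python
import math
import random

def chover_ratio(n, beta=0.7, seed=2):
    """Sketch: (T(n) / n^{1/beta})^{1/log log n} for Pareto(beta) summands.

    Along a single sequence this fluctuates with n; Chover-type LILs
    concern the a.s. lim sup (and, in Theorem 2.1, the lim inf) of such
    normalized quantities.
    """
    rng = random.Random(seed)
    # T(n): sum of n i.i.d. variables with P(J > x) = x^{-beta}, x >= 1
    t = sum((1.0 - rng.random()) ** (-1.0 / beta) for _ in range(n))
    return (t / n ** (1.0 / beta)) ** (1.0 / math.log(math.log(n)))
```

Tracking this quantity over a long range of n (rather than a single n) is what the iterated-logarithm normalization is designed for.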

Theorem 2.1. Let {Ji} be a sequence of i.i.d. nonnegative random variables with a common distribution G(x), and let V(x) = inf{y > 0 : 1 − G(y) ≤ 1/x}. Assume that G is absolutely continuous and 1 − G(x) ~ x^{−β}l(x), 0 < β < 1, where l is a slowly varying function. Then one has

(2.3)

In order to prove Theorem 2.1, we need some lemmas.

Lemma 2.2. Let h(x) be a slowly varying function. Then, if yn → ∞ and zn → ∞, one has for any given τ > 0,

(2.4)

Proof. See Seneta [27].

Lemma 2.3. Let {Ji} be a sequence of i.i.d. nonnegative random variables with a common distribution G, and let M(n) = max{J1, J2, …, Jn}. Assume that G is absolutely continuous and 1 − G(x) ~ x^{−β}l(x), 0 < β < 1, where l is a slowly varying function. Then one has, for some given small t > 0,

(2.5)

Proof. We will follow the argument of Lemma 2.1 in Darling [28]. Without loss of generality we can assume J1 = max{J1, J2, …, Jn} = M(n), since each Ji has probability 1/n of being the largest term, and P(Ji = Jj) = 0 for i ≠ j because G(x) is presumed continuous.

For notational simplicity we will write Ḡ(x) = 1 − G(x) for the tail distribution and denote by g(x) the corresponding density, so that Ḡ′(x) = −g(x). Then, the joint density of J1, J2, …, Jn, given J1 = M(n), is

(2.6)
Thus
(2.7)
Let us put
(2.8)
so that
(2.9)
It follows from Doeblin's theorem that if λ > 0,
(2.10)
for y ≥ y0 with some large y0 > 0. Then, for y ≥ y0, we can choose t > 0 small enough that t < −log G(y0), since G has a regularly varying tail distribution, so that
(2.11)
It follows that
(2.12)
Consider the case y ≤ y0. By a slight transformation we find that
(2.13)
Putting
(2.14)
we have η < 1 since 0 < β < 1 and t is small. Thus
(2.15)
By (2.9) and a change of variable, we obtain
(2.16)
which yields the desired result.

The following large deviation result for stable summands is due to Heyde [19].

Lemma 2.4. Let {ξi} be a sequence of i.i.d. nonnegative random variables with a common tail distribution satisfying P(ξ1 > x) ~ x^{−r}h(x), 0 < r < 2, where h is a slowly varying function. Let {λn} be a sequence such that λn → ∞ as n → ∞, and let {xn} be a sequence with xn → ∞ as n → ∞. Then

(2.17)

Now we can show Theorem 2.1.

Proof of Theorem 2.1. In order to show (2.3), it is enough to show that for all ɛ > 0

(2.18)
(2.19)

We first show (2.18). Let nk = [θ^k], 1 < θ < 2. Put again Ḡ(x) = 1 − G(x), and let V̄ be the inverse of Ḡ. Observe that V̄(y) = y^{−1/β}H(1/y), 0 < y ≤ 1, where H is a slowly varying function and V(n) ~ n^{1/β}H(n), so that

(2.20)
(2.21)
by Lemma 2.2. Let U, U1, U2, …, Un be i.i.d. random variables with the distribution of U Uniform over (0,1), and let M*(n) = max{U1, U2, …, Un}. Then, from the fact that G(Jn) is a Uniform (0,1) random variable, we note that M(n) and V̄(1 − M*(n)) are equal in distribution for n ≥ 1. From (2.21), the Ji being nonnegative, and V̄ being nonincreasing, it follows that
(2.22)
Hence, the probabilities on the left-hand side above are summable; by the Borel-Cantelli lemma, we get
(2.23)
Thus, by (2.20) we have
(2.24)
Therefore, by the arbitrariness of θ > 1, (2.18) holds.

We now show (2.19). Let , δ > 0. For notational simplicity, we introduce the following notations:

(2.25)
By Lemma 2.3, we have
(2.26)
Thus, we get ∑ P(Ok) < ∞.

Observe again that V(n) ~ n^{1/β}H(n), so that

(2.27)
(2.28)
by Lemma 2.2. Thus, we note
(2.29)
which easily yields ∑ P(Fk) = ∞. Hence, since P(Ek) ≥ P(Fk) − P(Ok) and ∑ P(Ok) < ∞, we get ∑ P(Ek) = ∞. Since the events Ek are independent, by the Borel-Cantelli lemma, we get
(2.30)

By applying Lemma 2.4 and (2.27), together with some simple calculation, we easily have that , so that

(2.31)
which, together with (2.30), implies
(2.32)
This yields (2.19). The proof of Theorem 2.1 is now completed.

3. Proof of Theorem 1.1

Proof of Theorem 1.1. We have to show that for all ɛ > 0

(3.1)
(3.2)

We first show (3.1). Let tk = θ^k, 1 < θ < 2. For notational simplicity, we introduce the following notations:

(3.3)

By (2.18), we have

(3.4)

Put Ḡ(x) = 1 − G(x), and let V̄ be the inverse of Ḡ. Recall that V̄(y) = y^{−1/β}H(1/y), 0 < y ≤ 1, where H is a slowly varying function, so that

(3.5)

Note that

(3.6)
Thus, by noting that U is increasing,
(3.7)
Hence, by Lemma 2.2,
(3.8)
Thus, by (3.8) and Lemma 2.4, we have
(3.9)
Therefore, . By the Borel-Cantelli lemma, we get .

Observe that

(3.10)
where Ec stands for the complement of E. Thus, letting n → ∞, we have
(3.11)
which implies that
(3.12)
Thus, by (3.5), we have
(3.13)
This yields (3.1) immediately by letting θ ↓ 1.

We now show (3.2). Let , δ > 0. It is enough to prove

(3.14)

Put

(3.15)

By (2.19), we have

(3.16)

Note that

(3.17)
Thus, by noting that U1 is increasing,
(3.18)
Hence, by Lemma 2.2,
(3.19)
Similarly, by noting tk/tk−1 → ∞, one can obtain
(3.20)
Thus, by Lemma 2.4, we have
(3.21)
Therefore, ∑ P(Wk) = ∞. Since the events {Wk} are independent, by the Borel-Cantelli lemma, we get P(Wk i.o.) = 1.

Now, observe that

(3.22)
Therefore, by letting m → ∞, we get
(3.23)
which implies (3.14). The proof of Theorem 1.1 is now completed.

Remark 3.1. By the proof of Theorem 1.1, (1.3) can be modified as follows:

(3.24)
That is to say, the form of (1.3) is not rare, and the variables (B(t^β))^{−1}X(t) must additionally be cut down by the factor (log t)^{−1/α} to achieve a finite lim sup.

Acknowledgments

The authors wish to express their deep gratitude to a referee for valuable comments on an earlier version, which improved the quality of this paper. K.-S. Hwang is supported by the Korea Research Foundation grant funded by the Korean Government (MOEHRD) (KRF-2006-353-C00004), and W. Wang is supported by NSFC grant 11071076.
