1. Introduction
The Poisson Stochastic Index process (PSI-process) represents a special kind of a random process when the discrete time of a random sequence is replaced by the continuous time of a “counting” process of a Poisson type.
Throughout this paper, we consider the triplet
of jointly independent components defined on a probability space
. Here,
is a standard Poisson process on
,
is an almost surely (a.s.) non-negative random variable, which plays a role of random intensity, and
denotes a random sequence
of independent and identically distributed (i.i.d.) random variables. Let us define a PSI-process in the following way:
The mechanism of PSI-processes is reduced to sequential replacements of terms of the “driven” sequence at arrival times of the “driving” doubly stochastic Poisson process .
Let us introduce a “natural” filtration
, generated by the PSI-process
Note that if the distribution of
has no atoms, then the natural filtration
coincides with a filtration, which is generated by a compound Poisson type process with the random intensity
:
starting at the random point
. (In the case when
has an atom at 0, some jumps of
may be “missed” in
Y, the process
Y is known as
a stuttering compound Poisson process. A similar phenomenon happens with a PSI-process when
has any atom, not necessarily at 0. For details we refer to [
1].)
PSI-processes may have a lot of interpretations. For instance, in insurance models and their applications: while a compound Poisson process is monitoring the cumulative value of claims up to a current time t, the corresponding PSI-process is monitoring the last claim.
Another interpretation arises in models of information channels. Here, plays a role of random loads on an information channel. The driving doubly stochastic Poisson process affects in the following manner. At arrival points of the driving process , the current term of is replaced with the next term.
In view of these interpretations, as well as from a point of view of the classical probability theory, it makes sense to consider sums of independent PSI-processes. In this paper, we confine ourselves to the case when all terms in these sums are identically distributed PSI-processes and when the terms of the driven sequences have a finite second moment. Without loss of generality, we assume that
and
. Let
,
, denote independent copies of
. Note that the Poisson processes in the definition (
1) are also independent in different copies, as well as the time change factors
, for any
. Introduce
the normalized cumulative sum. Note that
is a stationary process for any
n.
When one of the processes
changes its value, all the values of other processes remain the same a.s. Hence, the change mechanism behind the sums of type (
3) can be described as a projection of some information from past to future and replacement of other information with new independent values. This can be opposed to autoregression schemes, which are based on contractions of information. This mechanism of projection survives after a passage to the limit as
. Hence, if the limit exists in some sense, it has to be described by so-called “trawl” or “upstairs represented” processes introduced by O.E. Barndorf-Nielsen [
2,
3] and R. Wolpert, M.Taqqu [
4], respectively. A relationship of PSI-processes with trawl processes is discussed briefly in [
5].
Our main result is a functional limit theorem for normalized cumulative sums (
3) (Theorem 1): random processes
weakly converge, as
, in the Skorokhod space of càdlàg functions defined on a compact
,
. The limit process
is Gaussian, centered, stationary, and its covariance function is
,
, where
denotes the Laplace transform of the random intensity
. In a simpler case of non-random intensity
, the analogous functional limit theorem has been established by the second author in [
6]. In this case, the limit is necessarily an Ornstein–Uhlenbeck process. Introducing a random intensity significantly widens the class of possible limiting processes but makes a proof of the corresponding functional limit theorem more involved. Our method of proof is essentially based on a detailed analysis of a modulus of continuity for the PSI-process.
In our research, we came upon the following interesting phenomena, which occurs if . Then, the fatter the tail of is, the more moments of are needed for the relative compactness of the family . When , our method of proof requires just a condition , for some .
As an example of a functional of the PSI-process, we construct a martingale adapted to the natural filtration
generated by the PSI-process defined in (
2). Consider a pathwise integrated PSI-process
and define a so-called
M-process associated with the PSI-process as
Suppose that
is a positive constant and
. Then,
is an
-martingale, starting at the origin. The proof presented in
Section 3 is reduced to a direct calculation and exploits the fact that the pair
is an
-valued Markov process (moreover, a strong Markov process with respect to
).
This example shows that the PSI-process
is the stationary solution of the Langevin equation driven by the martingale
:
As one of the consequences of our main result, we obtain as a limit the classical martingale
,
, which replaces
in (
6). Here and below,
is a standard Brownian motion.
Remark that if is a non-degenerate random variable, then is not measurable with respect to , and hence, it is not an -martingale. However, if we supplement with to generate an initially enlarged filtration , then the M-process becomes a local martingale with respect to the new adjusted filtration. If , then it is a martingale (see Proposition 2).
Suppose now as usual that
. Direct application of Theorem VIII.3.46 [
7] (p. 481) allows us to obtain a functional limit theorem for the martingale
, i.e., for
where
,
, are independent copies of
. Here, the convergence takes place in the Skorokhod space, and the limit process is
,
.
The rest of the paper is organized as follows. In
Section 2, we introduce some notation and formulate our main result, Theorem 1. In
Section 3, the
M-process described above is studied in some details, as an example of the application of Theorem 1. Another example of the PSI-process such that the normalized cumulative sums do not converge in the Skorokhod space is constructed in
Section 4 in order to show that some conditions are indeed necessary in a functional limit theorem.
Section 5 collects some auxiliary facts about PSI-processes and their modulus of continuity. In
Section 6, we study sums of PSI-processes and prove our main result. We finish the article with some conclusions in
Section 7.
2. Main Results
Let
be a sequence of random variables. Consider an independent of
standard Poisson process
,
. Then, one can subordinate the sequence by the Poisson process to obtain a continuous time process
Consider also a non-negative random variable
, which is independent of
and
. The time-changed Poisson process
is a Poisson process with random intensity, also known as (a specific case of) a Cox process or a doubly stochastic Poisson process. We consider the PSI-process with the random time-change
We call the Poisson stochastic index process, or PSI-process for short.
It turns out that if random variables
,
, are uncorrelated and have zero expectations and unit variances, then the covariance function for
is equal to the Laplace transform of
Lemma 1. Let be a sequence of uncorrelated random variables with and . Let λ be a non-negative random variable and be a standard Poisson process. Suppose that , λ, and Π are mutually independent. Then, for any In particular, ψ is a wide sense stationary process.
Proof. First note that
since any
. Hence,
. Suppose without loss of generality that
. Given
, one has
Here and below,
denotes the indicator of an event
A. We used the assumption that
, the Kronecker delta, and also the stationarity of the increments of the Poisson process. Taking expectation with respect to
yields the result. □
Remark 1. Unlike [8], we allow λ to have an atom at 0, which implies that . Corollary 1. Let the triplet satisfy the assumptions of Lemma 1. Then, the processes defined in (3) as normalized cumulative sums of independent copies of converge in the sense of finite dimensional distributions (f.d.d.), as , to a stationary centered Gaussian process with the covariance function , . Proof. This is an immediate consequence of the central limit theorem (CLT) for vectors. Indeed, for any fixed time moments
, the finite-dimensional distributions of
are i.i.d. for different
k and have zero mean and the covariation matrix
Lemma 1 emphasizes a special role played by the Laplace transform in the study of PSI-processes with random intensities. We will need asymptotics of the Laplace transform in the right neighborhood of 0.
Assumption 1. For some and any , the Laplace transform (9) of λ satisfies It is well known that (
10) holds with
if
or with
if the tail
of
varies regularly of index
at
, see, e.g., [
9] (Theorem 8.1.6).
Below, we shall always suppose that terms of the sequence
are i.i.d., hence uncorrelated, and satisfy the assumptions of Lemma 1. By Corollary 1, random processes
have a limit
as
but in the rather weak f.d.d. sense. The aim of this paper is to establish a more strong result, a functional limit theorem for
in an appropriate functional space. If Assumption 1 holds, then the covariance function of the limiting process
behaves in a controllable way at 0, and
has a version with almost surely continuous paths because
in (
10), see, e.g., [
10] (§9.2). Our main result is that, under additional moment assumptions
for some
(where
is the exponent in (
10)), the convergence indeed takes place in the Skorokhod space
, for any
.
Theorem 1. Consider a triplet that consists of a standard Poisson process Π, a non-negative random variable λ satisfying Assumption 1, and a sequence of i.i.d. random variables such that and . Elements of the triplet are supposed to be independent and to satisfy the condition Let , , be a sequence of independent copies of the triplet , be the PSI-process (1) constructed from the k-th triplet, and be defined by (3). Then, for any , the sequence of stochastic processes converges in the Skorokhod space , as , to a zero mean stationary Gaussian process with the covariance function , . Remark 2. Nowadays, it is common to consider a weak convergence in the space . Due to specific features of our model (stationary of for every n, continuity of ζ), this implies a weak convergence in for all . Since we essentially use the results from Billingsley’s book [11] that deals with , we prefer to formulate our results in , , as in Theorem 1. We prove Theorem 1 in
Section 6 and now proceed with studying some of its consequences.
3. Example: A PSI-Martingale
Recall the definition (
2) of the natural filtration
given in the Introduction. Note that since PSI-processes (with non-random
) belong to a so-called class of “Pseudo-Poisson processes” [
12] (Ch. X), they have the Markov property with the following transition probabilities: for
;
,
Denote the pathwise integrated PSI-process . Note that a pair is an -valued Markov process, although itself is not Markovian.
Proposition 1 (The PSI-martingale).
Assume that are i.i.d. and . Then, for a non-random , the stochastic process defined in (5) is a starting at the origin -martingale for . Proof. Let us introduce a slightly modified
M-process
First, we show that it is an
-martingale starting at the random point
. Since the pair
is a Markov process adapted to the filtration
, and
is determined by
, we have
Let
be jump times of the driving Poisson process
. Denote the random period
; that is the time for which the Poisson process
does not change after time
t. For each fixed
t, the period
has the exponential distribution with the intensity
. Using this notation, we can calculate
Multiplying (
14) by
and adding (
13), we obtain
, which proves the assertion about
due to (
12).
Now, the claim of Proposition 1 easily follows from and . □
As it has been mentioned in the Introduction, for a random non-degenerate
, the process
is not
-measurable, and the filtration
should be augmented by
:
The following analog of Proposition 1 holds, but the proof is more tricky.
Proposition 2 (The PSI-martingale with random intensity).
Assume that is a sequence of i.i.d. random variables with , is a standard Poisson process, a random variable λ is positive a.s.; λ, , and Π are independent. Then, the stochastic process , , defined in (5) is a local martingale with respect to . If , then is a martingale. Proof. Let
be jump times of the Poisson process
and
corresponding jump times of the process
. Recall that filtrations
and
are defined in (
2) and (
15), respectively. It is easy to check that a set
belongs to
(resp. to
),
, if and only if
(resp.
) for every
. Here,
the latter equality being held if
, and
In particular, the filtrations and are right-continuous.
First, we calculate the
-compensator of the locally integrable process
Since, for
,
is a Poisson process with intensity
, its
-compensator is
. This means that
is an
-martingale. Denoting
, this can be written as
for every
,
, and any bounded Borel function
f from
in
. Consider now the case of random
. Note that
for any
t and
. This allows us to take a conditional expectation given
in the expression below, where
f is as above and
g is a bounded measurable function from
to
:
This means
where
if
and
otherwise. We conclude that
is an
-local martingale, and
is the
-compensator of
.
The same proof shows that
is an
-local martingale. Indeed, it is a compound Poisson process with zero mean; hence, it itself is an
-martingale for a deterministic
. To ensure that the corresponding expectation is finite, we note that
.
The final step of the proof is to determine the
-compensator of the process
We can represent
as the pathwise Lebesgue–Stieltjes integral of a predictable process
with respect to
. Note that the integral process
is a process with
-locally integrable variation because its variation up to
is estimated from above similarly to
. This allows us to conclude that the
-compensator of
is the Lebesgue–Stieltjes integral process of
with respect to the
-compensator of
, see, e.g., Theorem 2.21 (2) in [
13], i.e., the
-compensator of
equals
Summarizing, we obtain that the
-compensator of
that is
.
Finally, the quadratic variation of
M is
Therefore,
is a martingale according to Davis’ inequality (see [
14] (Ch. 9)). □
If we assume also that
, then the
-martingale
has
for all
. Its quadratic variation is calculated in (
16). The variance of
can then be calculated as follows:
If
(in particular, if
is not random), then the variance of
is finite for any
. Hence, direct application of Theorem VIII.3.46 [
7] (p. 481) allows us to obtain a functional limit theorem for properly normalized sums of independent copies
,
, of the martingale
, i.e., for the processes
Here, the convergence takes place in the Skorokhod space, and the limit process is , where , , is a standard Brownian motion.
Assume now that is non-random. It is easy to see that the mapping is continuous in the Skorokhod space , for any . Hence, as a corollary of Theorem 1, we reconstruct the above result that the convergence takes place in the Skorokhod space, under the condition that , for some .
4. Counterexample: Diverging Sums
For
, denote
and consider a function
of
. This is a probability density. Let
be a random variable with this density, then, by the choice of
the mean
for any
, and
for any
. Moreover, all absolute moments of non-negative order less than
exist, while
. The tail distribution function is
for
. Let
be a sequence of i.i.d. random variables distributed as
.
For
, let
be independent of
and have the tail distribution function
for
. The Laplace transform of
can be expressed in terms of the (upper) incomplete Gamma function function
By a simple change of variables, we obtain
The asymptotics of
as
can be read, say, from Theorem 8.1.6 [
9] (p. 333): as
,
Hence, satisfies Assumption 1 with .
Let
be a standard Poisson process, independent of both
and
. Define a PSI-process
with the random intensity
as in (
1).
Consider independent copies
,
, where
are independent copies of
, and let
be their normalized cumulative sums, as in (
3). The CLT for vectors implies that, for
and
, in terms of finite-dimensional distributions, the processes
converge, as
, to a stationary centered Gaussian process with the covariance function
,
. We claim that, nevertheless, for certain parameters
and
, the functional limit theorem cannot hold true for these
. The proof is based on the following technical result.
Proposition 3. One can find such that for any , with probability not less than , one of the PSI-processes has a jump of size at least , for .
Proof. The cumulative distribution function of
is
Notice that . Hence, for large enough n, there exists such that with probability not less than . Since is independent of and the Poisson distribution is asymptotically symmetric around its mean as the parameter becomes large, we may claim that . Hence, with probability not less than among PSI-process , at least one process engages more than random variables on the time interval ; that is, . Here and below for , we denote the floor function.
Consider now
. For any fixed
n, they are i.i.d. and have the cumulative distribution function
and
with probability not less than
for all
m large enough, because
as
. This maximum is attained on some
, and with probability
at least one of
and
is less than
. (We neglect a situation when the maximum is attained for
or
, which happens with the probability
, see, e.g., [
15].) It means that, for large
m,
has at least one jump greater than
, with probability at least
.
Combining the above estimates and using the independence between and the corresponding driven sequence , we see that, with probability not less than , the process , , has a jump of size at least , for all . □
Since all these PSI-processes jump at different moments of time a.s., the jump of any process is not compensated by other PSI-processes and makes a contribution to
. If
, then after the scaling by
in (
3), the size of the jump that exists according to Proposition 3 exceeds
as
. Hence, the limit in the Skorokhod space
, if it exists, should have jumps with positive probability. However, it is well known that the stationary Gaussian process with the covariance function
,
, where
is given by (
17), has a continuous modification a.s. This contradiction shows that the convergence
cannot take place in
as
.
Remark 3. The considered counterexample suggests that the correct condition for the functional limit theorem could be for some . Theorem 1 is proved under the more restrictive condition . In the case , Assumption 1 holds with , so both inequalities become . In the more interesting case , we conjecture that the less restrictive inequality should be enough. The only place in our proof where we need is Lemma 4, which is proved with a straightforward and rather rough approach. A more sophisticated technique is needed to show that the same or similar result holds if .
6. Sums of PSI-Processes
Since the limit of the normalized cumulative sums
is an a.s. continuous stochastic process, we can use Theorem 15.5 from Billingsley’s book [
11] (p. 127), which gives the conditions for convergence of processes from the Skorokhod space
to a process with realizations lying in
a.s., in terms of the modulus of continuity
It claims that if
- (i)
for any there exists t such that for all ;
- (ii)
for any positive
and
w there exist
and
such that
- (iii)
converges weakly in terms of finite-dimensional distributions to some random function as , then converges to as , in and is continuous a.s.
In order to bound
in probability, Billingsley suggests to use a corollary to Theorem 8.3 in the same book, which can be formulated as follows. Suppose that
is some random element in
, then for any
and
The sum (
26) can be estimated efficiently in our settings because
is stationary by construction for any
n. Hence, all the probabilities in the sum (
26) are the same and
Remark 4. Actually, the events whose probabilities are added in the right-hand side of (26) are dependent since for a large n and a small δ, an appearance of a big () jump of on suggests that there are many jumps of some , and hence, the correspondent is large; so it is probable that there would be many jumps on other intervals and a probability of a big jump is not too small. Perhaps this observation can be used to find a better bound than the union bound (27), but we have not used it. In order to check assumption (ii) of Billingsley’s theorem, we apply the following two-stage procedure. We use (
27) to bound the “global” probability of jumps greater than
w on some interval of the length
. We aim to show that for any
and
, one can find positive
C,
, and
such that
for all
n greater than some
. To this end, we first show that one can find positive
C,
,
, and
such that (
28) holds for
and then analyze the local structure of
to show that (
28) actually holds for all
.
Our analysis of
is based on the results of
Section 5. Consider the Poisson processes with random intensity
,
, used in the construction of
, and denote
the (random) number of these processes that have at least one jump on
:
This is a binomial random variable with
n trials and the success probability
Lemma 2 provides an upper bound for . We are interested just in the case when is small compared to , that is, when is small. Then, the probability that decays fast enough even for an appropriately chosen but fixed b.
Lemma 3. Let λ satisfy Assumption1
. Then, for any , and , one can find positive τ and such that for all n satisfying , it holds Proof. The well-known Chernoff bound [
16] (Theorem 2.1) ensures that for any
,
where
. For
, Lemma 2 along with the assumption
guarantee that
for any
and some
C (which may depend on
). Taking
yields
as
. Plugging
, which is positive for small
, into (
31) gives
Restricting further to be less than , which is positive by the assumptions, implies that the coefficient of , that is , is bigger than 1, and Lemma 3 is proved. □
Lemma 4. Suppose that the random λ satisfies Assumption 1
and that for some . Let . Then for any (
with the right bound understood as ∞ if )
and for any fixed , there exist positive and τ such that for all Proof. Let
and
be fixed. Denote for short
. By the law of total probability,
for any integer
. Consider an event
, which means that not more than some
k of
n processes
jump on
, and other
processes are constant. Then,
implies that at least one of
k PSI-processes that jumps on
changes by more than
. So, for
,
Proposition 5 provides a bound for the probability in the right-hand part of (
34), and since
has the binomial distribution with the parameters
n and
, using the total probability formula, we continue (
33) as
for any
,
such that
, and some
C depending on the choice of
, where the last inequality follows from Proposition 5.
Suppose now that
. Then
, where the right part is understood as
∞ if
. Choose
and an integer
. Then, by Lemma 3, there exists a positive
such that
for small enough
. Bounds
give
Choosing
ensures that the power of
is minimal for
, and the inequality
guarantees that for
this power
for small enough
; thus, (
32) follows from (
35). □
The estimates that are used in the proof of Lemma 4 essentially rely on the relation between
and
n. Therefore, this argument cannot be used to provide a bound (
28) uniformly for all
. In order to obtain such bound, we apply the technique close to the one used in Billingsley’s book [
11] (Ch. 12). If we impose some moment condition on
, then the following bound holds:
Lemma 5. Suppose that , and for some . Then, for some constant and for all and where is defined by (30). Proof. Due to stationarity of
for each
n, it is enough to consider the case
. For any
, we can represent the increment
as a sum of i.i.d. random variables
Each summand
has a symmetric distribution, and two factors in the right-hand part of (
38) are independent. By Rosenthal’s inequality (see, e.g., [
17] (Th. 2.9)), we obtain
for some constant
. Both moments can be easily evaluated. Since the summands are i.i.d.,
because
. Similarly,
Plugging these two values into (
39), we readily obtain (
36), maybe with another constant
C than in (
39). □
Corollary 2. Suppose that Assumption 1
holds, and in the settings of Lemma 5. Then, for any fixed , one can find positive and τ such that for all it holds Proof. By the Markov inequality, we have
Lemma 5 gives a bound for the right-hand side in terms of
and
n. Lemma 2 provides the upper bound for
, and the condition on
n imposed in the claim implies
. Hence, for any
, there exists a constant
such that for all
Taking
, which is positive by the assumptions, makes both exponents above equal:
. Hence, this choice of
yields (
40) with
for all
, but with a constant in the right-hand side of the inequality. Reducing to
lying in a proper interval
allows us to get rid of the constant. □
Proof of Theorem 1. Without loss of generality, we may assume
(otherwise perform a non-random time change
). We need to show that the conditions of Theorem 15.5 of [
11] (recalled in the beginning of
Section 6) hold. Condition (iii) was already verified (see Corollary 1), and it implies condition (i). So it remains to check condition (ii), which follows from (
28).
Suppose that we are given positive
and
w and want to find
and
such that (
25) holds. Lemma 4 applied with
,
implies that for some positive
,
and any
inequality (
32) holds for
. Corollary 2 guarantees that for some positive
, inequality (
40) holds for
n sufficiently large and
, and in our application below, the lower bound on
n will be fulfilled if
. Choose some
(this interval is not empty if
), fix a positive
and let
.
For this choice of parameters, Lemma 4 (again with
,
) ensures that (
28) holds for all
. Suppose now that
and let
. (Note that
, so
if
.) Then for
,
we have
, so (
32) holds with
instead of
, implying that for any
due to the stationarity of
. Let
Take
and
for some
. Now, we aim to apply Corollary 2 for these
s and
t. Note that
by the choice of
, so it remains to check that the assumption
holds. Indeed,
and
; thus,
by the choice of
a. Hence, Corollary 2 implies
for some
. Hence, Theorem 12.2 from Billingsley’s book [
11] implies that
for some
, which depends on
but not on
.
Suppose now that
and
for all
. Then,
by the triangle inequality. Hence,
with
, by inequalities (
41) and (
42). This argument works for any
, with
and
given by Lemma 4 and Corollary 2, and choosing
small enough, one can guarantee that
. This proves (
28) (with
instead of
w, but
is arbitrary) for all
, and the claim follows by application of Theorem 15.5 from Billingsley’s book [
11]. □