Article

Multivariate Scale-Mixed Stable Distributions and Related Limit Theorems

by Yury Khokhlov 1,2,*, Victor Korolev 1,2,3,4 and Alexander Zeifman 1,4,5,6

1 Moscow Center for Fundamental and Applied Mathematics, Moscow State University, 119991 Moscow, Russia
2 Faculty of Computational Mathematics and Cybernetics, Moscow State University, 119991 Moscow, Russia
3 Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China
4 Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 119993 Moscow, Russia
5 Department of Applied Mathematics, Vologda State University, 160000 Vologda, Russia
6 Vologda Research Center of the Russian Academy of Sciences, 160014 Vologda, Russia
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(5), 749; https://doi.org/10.3390/math8050749
Submission received: 13 April 2020 / Revised: 1 May 2020 / Accepted: 3 May 2020 / Published: 8 May 2020
(This article belongs to the Special Issue Stability Problems for Stochastic Models: Theory and Applications)

Abstract:
In the paper, multivariate probability distributions are considered that are representable as scale mixtures of multivariate stable distributions. Multivariate analogs of the Mittag–Leffler distribution are introduced. Some properties of these distributions are discussed. The main focus is on the representations of the corresponding random vectors as products of independent random variables and vectors. In these products, the relations of the distributions of the involved terms to popular probability distributions are traced. As examples of distributions of the class of scale mixtures of multivariate stable distributions, multivariate generalized Linnik distributions and multivariate generalized Mittag–Leffler distributions are considered in detail. Their relations with multivariate ‘ordinary’ Linnik distributions, multivariate normal, stable and Laplace laws, as well as with univariate Mittag–Leffler and generalized Mittag–Leffler distributions, are discussed. Limit theorems are proved presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. The property of scale-mixed multivariate elliptically contoured stable distributions to be both scale mixtures of a non-trivial multivariate stable distribution and normal scale mixtures is used to obtain necessary and sufficient conditions for the convergence of the distributions of random sums of random vectors with finite covariance matrices to the multivariate generalized Linnik distribution.

1. Introduction

Actually, this paper can be regarded as variations on the theme of ‘multiplication theorem’ 3.3.1 in the famous book of V. M. Zolotarev [1]. Here, multivariate probability distributions are considered that are representable as scale mixtures of multivariate stable distributions. Some properties of these distributions are discussed. Attention is paid to the representations of the corresponding random vectors as products of independent random variables and vectors. In these products, relations of the distributions of the involved terms with popular probability distributions are traced.
As examples of distributions of the class of scale mixtures of multivariate stable distributions, multivariate generalized Linnik distributions and multivariate generalized Mittag–Leffler distributions are considered in detail. Limit theorems are proved presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions are obtained for the convergence of the distributions of random sums of random vectors with finite covariance matrices to the multivariate generalized Linnik distribution.
Along with general multiplicative properties of the class of scale mixtures of multivariate stable distributions, some important and popular special cases are considered in detail. Multivariate analogs of the Mittag–Leffler distribution are proposed. We study the multivariate (generalized) Linnik and related (generalized) Mittag–Leffler distributions, their interrelation and their relations with multivariate ‘ordinary’ Linnik distributions, multivariate normal, stable and Laplace laws as well as with univariate ‘ordinary’ Mittag–Leffler distributions. Namely, we consider mixture representations for the multivariate generalized Mittag–Leffler and multivariate generalized Linnik distributions. We continue the research we started in [2,3,4,5]. In most papers (see, e.g., [6,7,8,9,10,11,12,13,14,15,16,17,18]), the properties of the (multivariate) generalized Mittag–Leffler and Linnik distributions were deduced by analytical methods from the properties of the corresponding probability densities and/or characteristic functions. Instead, here we use the approach which can be regarded as arithmetical in the space of random variables or vectors. Within this approach, instead of the operation of scale mixing in the space of distributions, we consider the operation of multiplication in the space of random vectors/variables provided the multipliers are independent. This approach considerably simplifies the reasoning and makes it possible to notice some general features of the distributions under consideration. We prove mixture representations for general scale mixtures of multivariate stable distributions and their particular cases in terms of normal, Laplace, generalized gamma (including exponential, gamma and Weibull) and stable laws and establish the relationship between the mixing distributions in these representations. In particular, we prove that the multivariate generalized Linnik distribution is a multivariate normal scale mixture with the univariate generalized Mittag–Leffler mixing distribution and, moreover, show that this representation can be used as the definition of the multivariate generalized Linnik distribution. Based on these representations, we prove some limit theorems for random sums of independent random vectors with covariance matrices. As a particular case, we prove some theorems in which the multivariate generalized Linnik distribution plays the role of the limit law. By doing so, we demonstrate that the scheme of geometric (or, in general, negative binomial) summation is not the only asymptotic setting (even for sums of independent random variables) in which the multivariate generalized Linnik law appears as the limit distribution.
In [2], we showed that along with the traditional and well-known representation of the univariate Linnik distribution as the scale mixture of a strictly stable law with exponential mixing distribution, there exists another representation of the Linnik law as the normal scale mixture with the Mittag–Leffler mixing distribution. The former representation makes it possible to treat the Linnik law as the limit distribution for geometric random sums of independent identically distributed random variables in which summands have infinite variances. The latter normal scale mixture representation opens the way to treating the Linnik distribution as the limit distribution in the central limit theorem for random sums of independent random variables in which summands have finite variances. Moreover, being scale mixtures of normal laws, the Linnik distributions can serve as the one-dimensional distributions of a special subordinated Wiener process. Subordinated Wiener processes with various types of subordinators are often used as models of the evolution of stock prices and financial indexes, see, e.g., [19]. Strange as it may seem, the results concerning the possibility of representing the Linnik distribution as a scale mixture of normals were never explicitly presented in the literature in full detail before [2], although the property of the Linnik distribution to be a normal scale mixture is something almost obvious. Perhaps the paper [10] was the closest to this conclusion; it exposed the representability of the Linnik law as a scale mixture of Laplace distributions with the mixing distribution written out explicitly. These results became the base for our efforts to extend them from the Linnik distribution to the multivariate generalized Linnik law and to more general scale mixtures of multivariate stable distributions. Methodically, the present paper is very close to the work of L. Devroye [20], where many examples of mixture representations of popular probability distributions were discussed from the simulation point of view. The presented material substantially relies on the results of [2,5,15].
In many situations related to experimental data analysis, one often comes across the following phenomenon: although conventional reasoning based on the central limit theorem of probability theory concludes that the expected distribution of observations should be normal, the statistical procedures instead expose noticeable non-normality of real distributions. Moreover, as a rule, the observed non-normal distributions are more leptokurtic than the normal law, having sharper vertices and heavier tails. These situations are typical in financial data analysis (see, e.g., Chapter 4 in [19] or Chapter 8 in [21] and the references therein), in experimental physics (see, e.g., [22]) and in other fields dealing with statistical analysis of experimental data. Many attempts were undertaken to explain this heavy-tailedness. The most significant theoretical breakthrough is usually associated with the results of B. Mandelbrot and others [23,24,25] who proposed, instead of the standard central limit theorem, to use reasoning based on limit theorems for sums of random summands with infinite variances (also see [26,27]), resulting in non-normal stable laws as heavy-tailed models of the distributions of experimental data. However, in most cases the key assumption within this approach, the infiniteness of the variances of elementary summands, can hardly be believed to hold in practice. To overcome this contradiction, in [28] we considered an extended limit setting in which it may be assumed that the intensity of the flow of informative events is random, so that the number of jumps up to a certain time in a random-walk-type model, or the sample size, becomes random. We show that in this extended setting heavy-tailed scale mixtures of stable laws can actually also be limit distributions for sums of a random number of random vectors with finite covariance matrices.
The paper is organized as follows. Section 2 contains basic notation and definitions. Some properties of univariate stable distributions are recalled in Section 3. In Section 4, we introduce multivariate stable distributions and prove a multivariate analog of the univariate ‘multiplication theorem’ (see Theorem 3.3.1 in [1]). In Section 5 we discuss some properties of scale-mixed multivariate elliptically contoured stable laws. In particular, we prove that these mixtures are identifiable. Section 6 contains the description of the properties of uni- and multivariate generalized Mittag–Leffler distributions. In Section 7, we consider the multivariate generalized Linnik distribution. Here, we discuss different approaches to the definition of this distribution and prove some new mixture representations for the multivariate generalized Linnik distribution. General properties of scale-mixed multivariate stable distributions are discussed in Section 8. In Section 9, we first prove a general transfer theorem presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions are obtained for the convergence of the distributions of scalar normalized random sums of random vectors with finite covariance matrices to scale mixtures of multivariate stable distributions and their special cases: ‘pure’ multivariate stable distributions and the multivariate generalized Linnik distributions. The results of this section extend and refine those proved in [29].

2. Basic Notation and Definitions

Let $r\in\mathbb{N}$. We will consider random elements taking values in the $r$-dimensional Euclidean space $\mathbb{R}^{r}$. The Euclidean norm of a vector $x\in\mathbb{R}^{r}$ will be denoted $\|x\|$. Assume that all the random variables and random vectors are defined on one and the same probability space $(\Omega,\mathcal{A},\mathsf{P})$. The distribution of a random variable $Y$ or of an $r$-variate random vector $Y$ with respect to the measure $\mathsf{P}$ will be denoted $\mathcal{L}(Y)$. The weak convergence, the coincidence of distributions and the convergence in probability with respect to a specified probability measure will be denoted by the symbols $\Longrightarrow$, $\stackrel{d}{=}$ and $\stackrel{\mathsf{P}}{\longrightarrow}$, respectively. The product of independent random elements will be denoted by the symbol $\circ$. The vector with all zero coordinates will be denoted $\mathbf{0}$.
A univariate random variable with the standard normal distribution function $\Phi(x)$ will be denoted $X$,
$$\mathsf{P}(X<x)=\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-z^{2}/2}\,dz,\qquad x\in\mathbb{R}.$$
Let Σ be a positive definite ( r × r ) -matrix. The normal distribution in R r with zero vector of expectations and covariance matrix Σ will be denoted N Σ . This distribution is defined by its density
$$\phi(x)=\frac{\exp\{-\tfrac{1}{2}x^{\top}\Sigma^{-1}x\}}{(2\pi)^{r/2}|\Sigma|^{1/2}},\qquad x\in\mathbb{R}^{r}.$$
The characteristic function $f^{(X)}(t)$ of a random vector $X$ such that $\mathcal{L}(X)=\mathsf{N}_{\Sigma}$ has the form
$$f^{(X)}(t)=\mathsf{E}\exp\{it^{\top}X\}=\exp\big\{-\tfrac{1}{2}t^{\top}\Sigma t\big\},\qquad t\in\mathbb{R}^{r}.$$
A random variable having the gamma distribution with shape parameter r > 0 and scale parameter λ > 0 will be denoted G r , λ ,
$$\mathsf{P}(G_{r,\lambda}<x)=\int_{0}^{x}g(z;r,\lambda)\,dz,\quad\text{with}\quad g(x;r,\lambda)=\frac{\lambda^{r}}{\Gamma(r)}\,x^{r-1}e^{-\lambda x},\qquad x\ge 0,$$
where Γ ( r ) is Euler’s gamma-function,
$$\Gamma(r)=\int_{0}^{\infty}x^{r-1}e^{-x}\,dx,\qquad r>0.$$
In this notation, obviously, $G_{1,1}$ is a random variable with the standard exponential distribution: $\mathsf{P}(G_{1,1}<x)=\big[1-e^{-x}\big]\mathbf{1}(x\ge 0)$ (here and in what follows $\mathbf{1}(A)$ is the indicator function of a set $A$).
The gamma distribution is a particular representative of the class of generalized gamma distributions (GG distributions), that was first described in [30] as a special family of lifetime distributions containing both gamma and Weibull distributions. A generalized gamma (GG) distribution is the absolutely continuous distribution defined by the density
$$\bar g(x;r,\alpha,\lambda)=\frac{|\alpha|\lambda^{r}}{\Gamma(r)}\,x^{\alpha r-1}e^{-\lambda x^{\alpha}},\qquad x\ge 0,$$
with $\alpha\in\mathbb{R}$, $\lambda>0$, $r>0$. A random variable with the density $\bar g(x;r,\alpha,\lambda)$ will be denoted $\bar G_{r,\alpha,\lambda}$. It is easy to see that
$$\bar G_{r,\alpha,\mu}\stackrel{d}{=}G_{r,\mu}^{1/\alpha}\stackrel{d}{=}\mu^{-1/\alpha}G_{r,1}^{1/\alpha}\stackrel{d}{=}\mu^{-1/\alpha}\bar G_{r,\alpha,1}.\tag{1}$$
Let $\gamma>0$. The distribution of the random variable $W_{\gamma}$ with
$$\mathsf{P}(W_{\gamma}<x)=\big[1-e^{-x^{\gamma}}\big]\mathbf{1}(x\ge 0)$$
is called the Weibull distribution with shape parameter $\gamma$. It is obvious that $W_1$ is the random variable with the standard exponential distribution: $\mathsf{P}(W_1<x)=[1-e^{-x}]\mathbf{1}(x\ge 0)$. The Weibull distribution is a particular case of GG distributions, corresponding to the density $\bar g(x;1,\gamma,1)$. It is easy to see that $W_1^{1/\gamma}\stackrel{d}{=}W_{\gamma}$. Moreover, if $\gamma>0$ and $\gamma'>0$, then $\mathsf{P}(W_{\gamma}^{1/\gamma'}\ge x)=\mathsf{P}(W_{\gamma}\ge x^{\gamma'})=e^{-x^{\gamma\gamma'}}=\mathsf{P}(W_{\gamma\gamma'}\ge x)$, $x\ge 0$, that is, for any $\gamma>0$ and $\gamma'>0$
$$W_{\gamma\gamma'}\stackrel{d}{=}W_{\gamma}^{1/\gamma'}.$$
In the paper [31], it was shown that any gamma distribution with shape parameter no greater than one is mixed exponential. Namely, the density g ( x ; r , μ ) of a gamma distribution with 0 < r < 1 can be represented as
$$g(x;r,\mu)=\int_{0}^{\infty}ze^{-zx}\,p(z;r,\mu)\,dz,$$
where
$$p(z;r,\mu)=\frac{\mu^{r}}{\Gamma(1-r)\Gamma(r)}\cdot\frac{\mathbf{1}(z\ge\mu)}{(z-\mu)^{r}\,z}.\tag{2}$$
Moreover, a gamma distribution with shape parameter r > 1 cannot be represented as a mixed exponential distribution.
In [32] it was proved that if $r\in(0,1)$, $\mu>0$ and $G_{r,1}$ and $G_{1-r,1}$ are independent gamma-distributed random variables, then the density $p(z;r,\mu)$ defined by (2) corresponds to the random variable
$$Z_{r,\mu}=\frac{\mu\,(G_{r,1}+G_{1-r,1})}{G_{r,1}}\stackrel{d}{=}\mu Z_{r,1}\stackrel{d}{=}\mu\Big(1+\frac{1-r}{r}\,V_{1-r,r}\Big),\tag{3}$$
where $V_{1-r,r}$ is the random variable with the Snedecor–Fisher distribution defined by the probability density
$$q(x;1-r,r)=\frac{(1-r)^{1-r}r^{r}}{\Gamma(1-r)\Gamma(r)}\cdot\frac{1}{x^{r}\,[r+(1-r)x]},\qquad x\ge 0.$$
In other words, if $r\in(0,1)$, then
$$G_{r,\mu}\stackrel{d}{=}W_{1}\circ Z_{r,\mu}^{-1}.\tag{4}$$
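The mixed-exponential representation (4) is easy to check numerically. Below is a minimal simulation sketch (not part of the original exposition; the parameter values, the sample size and the Kolmogorov–Smirnov comparison are illustrative choices): the variable $Z_{r,\mu}$ is generated directly from (3) and the product $W_1\circ Z_{r,\mu}^{-1}$ is compared with the gamma law with shape $r$ and rate $\mu$.

```python
# Simulation check of representation (4): for 0 < r < 1,
# G_{r,mu} =d W_1 o Z_{r,mu}^{-1}, with Z_{r,mu} defined by (3).
# Parameter values and sample size are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r, mu, n = 0.4, 2.0, 200_000

g_r = rng.gamma(shape=r, scale=1.0, size=n)           # G_{r,1}
g_1r = rng.gamma(shape=1.0 - r, scale=1.0, size=n)    # G_{1-r,1}, independent of G_{r,1}
z = mu * (g_r + g_1r) / g_r                           # Z_{r,mu}, its density is (2)
w1 = rng.exponential(scale=1.0, size=n)               # W_1, standard exponential, independent

mixed_exponential = w1 / z                            # W_1 o Z_{r,mu}^{-1}

# compare with the gamma law with shape r and rate mu (scipy uses scale = 1/rate)
ks = stats.kstest(mixed_exponential, 'gamma', args=(r, 0.0, 1.0 / mu))
print('KS statistic:', ks.statistic)                  # should be of order 1e-3
print('sample mean :', mixed_exponential.mean(), 'theoretical mean:', r / mu)
```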

3. Univariate Stable Distributions

Let $r\in\mathbb{N}$. Recall that the distribution of an $r$-variate random vector $S$ is called stable if for any $a,b\in\mathbb{R}$ there exist $c\in\mathbb{R}$ and $d\in\mathbb{R}^{r}$ such that $aS_1+bS_2\stackrel{d}{=}cS+d$, where $S_1$ and $S_2$ are independent and $S_1\stackrel{d}{=}S_2\stackrel{d}{=}S$. In what follows we will concentrate our attention on a special sub-class of stable distributions called strictly stable. This sub-class is characterized by the requirement that $d=\mathbf{0}$ in the definition given above.
In the univariate case, the characteristic function f ( t ) of a strictly stable random variable can be represented in several equivalent forms (see, e.g., [1]). For our further constructions the most convenient form is
$$f_{\alpha,\theta}(t)=\exp\big\{-|t|^{\alpha}+i\theta\,w(t,\alpha)\big\},\qquad t\in\mathbb{R},\tag{5}$$
where
$$w(t,\alpha)=\begin{cases}\tan\dfrac{\pi\alpha}{2}\cdot|t|^{\alpha}\,\mathrm{sign}\,t,&\alpha\ne 1,\\[1mm] \dfrac{2}{\pi}\cdot t\log|t|,&\alpha=1.\end{cases}\tag{6}$$
Here $\alpha\in(0,2]$ is the characteristic exponent, $\theta\in[-1,1]$ is the skewness parameter (for simplicity we consider the “standard” case with unit scale coefficient at $t$). Any random variable with characteristic function (5) will be denoted $S(\alpha,\theta)$ and the characteristic function (5) itself will be written as $f_{\alpha,\theta}(t)$. For definiteness, $S(1,1)=1$.
From (5) it follows that the characteristic function of a symmetric ( θ = 0 ) strictly stable distribution has the form
$$f_{\alpha,0}(t)=e^{-|t|^{\alpha}},\qquad t\in\mathbb{R}.\tag{7}$$
From (7) it is easy to see that $S(2,0)\stackrel{d}{=}\sqrt{2}\,X$.
Univariate stable distributions are popular examples of heavy-tailed distributions. Their moments of orders $\delta\ge\alpha$ do not exist (the only exception is the normal law corresponding to $\alpha=2$), and if $0<\delta<\alpha$, then
$$\mathsf{E}|S(\alpha,0)|^{\delta}=\frac{2^{\delta}}{\sqrt{\pi}}\cdot\frac{\Gamma\big(\frac{\delta+1}{2}\big)\,\Gamma\big(1-\frac{\delta}{\alpha}\big)}{\Gamma\big(1-\frac{\delta}{2}\big)}\tag{8}$$
(see, e.g., [33]). Stable laws, and only they, can be limit distributions for sums of a non-random number of independent identically distributed random variables with infinite variances under linear normalization.
Let $0<\alpha\le 1$. By $S(\alpha,1)$ we will denote a positive random variable with the one-sided stable distribution corresponding to the characteristic function $f_{\alpha,1}(t)$, $t\in\mathbb{R}$. The Laplace–Stieltjes transform $\psi_{\alpha,1}^{(S)}(s)$ of the random variable $S(\alpha,1)$ has the form
$$\psi_{\alpha,1}^{(S)}(s)=\mathsf{E}\exp\{-sS(\alpha,1)\}=e^{-s^{\alpha}},\qquad s\ge 0.$$
The moments of orders $\delta\ge\alpha$ of the random variable $S(\alpha,1)$ are infinite, and for $0<\delta<\alpha$ we have
$$\mathsf{E}S^{\delta}(\alpha,1)=\frac{\Gamma\big(1-\frac{\delta}{\alpha}\big)}{\Gamma(1-\delta)}$$
(see, e.g., [33]). For more details see [27] or [1].
The following product representations hold for strictly stable random variables. Let $\alpha\in(0,2]$, $|\theta|\le\min\{1,\,2/\alpha-1\}$, $\alpha'\in(0,1]$. Then
$$S(\alpha\alpha',\theta)\stackrel{d}{=}S^{1/\alpha}(\alpha',1)\circ S(\alpha,\theta),\tag{9}$$
see Theorem 3.3.1 in [1]. In particular,
$$S(\alpha,0)\stackrel{d}{=}\sqrt{2S(\alpha/2,1)}\circ X.\tag{10}$$
Another particular case of (9) concerns one-sided strictly stable random variables: if $0<\alpha\le 1$ and $0<\alpha'\le 1$, then
$$S(\alpha\alpha',1)\stackrel{d}{=}S^{1/\alpha}(\alpha',1)\circ S(\alpha,1),\tag{11}$$
see Corollary 1 to Theorem 3.3.1 in [1]. Finally, if $0<\alpha\le 1$, then
$$S(\alpha,\theta)\stackrel{d}{=}S(\alpha,1)\circ S(1,\theta),\tag{12}$$
see Corollary to Theorem 3.3.2 (relation (3.3.10)) in [1].
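These product representations can be illustrated by simulation. Below is a minimal sketch (an illustration, not part of the original text): the ad hoc helper `one_sided_stable` uses Kanter's classical representation to generate one-sided strictly stable variables normalized so that $\mathsf{E}e^{-sS(\alpha,1)}=e^{-s^{\alpha}}$, and representation (10) is then checked by comparing the empirical characteristic function of $\sqrt{2S(\alpha/2,1)}\circ X$ with $e^{-|t|^{\alpha}}$; the parameter values are arbitrary.

```python
# Simulation check of representation (10): S(alpha, 0) =d sqrt(2 S(alpha/2, 1)) o X.
# One-sided stable variables are generated by Kanter's representation,
# normalized so that E exp(-s S(a,1)) = exp(-s**a); alpha and t values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def one_sided_stable(a, size, rng):
    """Positive strictly stable S(a,1), 0 < a < 1, with Laplace transform exp(-s**a)."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

alpha, n = 1.5, 500_000
s_half = one_sided_stable(alpha / 2.0, n, rng)        # S(alpha/2, 1)
x = rng.standard_normal(n)                            # X ~ N(0, 1), independent
y = np.sqrt(2.0 * s_half) * x                         # candidate for S(alpha, 0)

for t in (0.5, 1.0, 2.0):
    # by symmetry the characteristic function is real, so E cos(tY) is compared
    print(t, np.cos(t * y).mean(), np.exp(-abs(t) ** alpha))
```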

4. Multivariate Stable Distributions

Now turn to the multivariate case. By $Q_r$ we denote the unit sphere: $Q_r=\{u\in\mathbb{R}^{r}:\|u\|=1\}$. Let $\mu$ be a finite (‘spectral’) measure on $Q_r$. It is known that the characteristic function of a strictly stable random vector $S$ has the form
$$\mathsf{E}\exp\{it^{\top}S\}=\exp\Big\{\int_{Q_r}\big(-|t^{\top}s|^{\alpha}+i\,w(t^{\top}s,\alpha)\big)\,\mu(ds)\Big\},\qquad t\in\mathbb{R}^{r},\tag{13}$$
with w ( · , α ) defined in (6), see [34,35,36,37]. An r-variate random vector with the characteristic function (13) will be denoted S ( α , μ ) . We will sometimes use the notation S α , μ for L S ( α , μ ) .
As is known, a random vector $S$ has a strictly stable distribution with some characteristic exponent $\alpha$ if and only if for any $u\in\mathbb{R}^{r}$ the random variable $u^{\top}S$ (the projection of $S$) has the univariate strictly stable distribution with the same characteristic exponent $\alpha$ and some skewness parameter $\theta(u)$, up to a scale coefficient $\gamma(u)$:
$$u^{\top}S(\alpha,\mu)\stackrel{d}{=}\gamma(u)\,S\big(\alpha,\theta(u)\big),\tag{14}$$
see [38]. Moreover, the projection parameter functions are related with the spectral measure $\mu$ as
$$\gamma(u)^{\alpha}=\int_{Q_r}|u^{\top}s|^{\alpha}\,\mu(ds),\tag{15}$$
$$\theta(u)\,\gamma(u)^{\alpha}=\int_{Q_r}|u^{\top}s|^{\alpha}\,\mathrm{sign}(u^{\top}s)\,\mu(ds),\qquad u\in\mathbb{R}^{r},\tag{16}$$
see [36,37,38]. Conversely, the spectral measure μ is uniquely determined by the projection parameter functions γ ( u ) and θ ( u ) . However, there is no simple formula for this [37].
An $r$-variate analog of a one-sided univariate strictly stable random variable $S(\alpha,1)$ is the random vector $S(\alpha,\mu_{+})$, where $0<\alpha\le 1$ and $\mu_{+}$ is a finite measure concentrated on the set $Q_{r}^{+}=\{u=(u_1,\ldots,u_r):u_i\ge 0,\ i=1,\ldots,r\}$.
Consider multivariate analogs of product representations (9) and (11).
Theorem 1.
Let $0<\alpha\le 2$, $0<\alpha'\le 1$, let $\mu$ be a finite measure on $Q_r$, and let $S(\alpha,\mu)$ be an $r$-variate random vector having the strictly stable distribution with characteristic exponent $\alpha$ and spectral measure $\mu$. Then
$$S(\alpha\alpha',\mu)\stackrel{d}{=}S^{1/\alpha}(\alpha',1)\circ S(\alpha,\mu).\tag{17}$$
If, in addition, $0<\alpha<1$ and $\mu_{+}$ is a finite measure on $Q_{r}^{+}$, then
$$S(\alpha\alpha',\mu_{+})\stackrel{d}{=}S^{1/\alpha}(\alpha',1)\circ S(\alpha,\mu_{+}).\tag{18}$$
Proof. 
Let $\gamma(u)$ and $\theta(u)$, $u\in\mathbb{R}^{r}$, be the projection parameter functions corresponding to the measure $\mu$ (see (15) and (16)). Then, in accordance with (9) and (14), for any $u\in\mathbb{R}^{r}$ we have
$$u^{\top}\big(S^{1/\alpha}(\alpha',1)\circ S(\alpha,\mu)\big)=S^{1/\alpha}(\alpha',1)\circ u^{\top}S(\alpha,\mu)\stackrel{d}{=}S^{1/\alpha}(\alpha',1)\circ\gamma(u)\,S\big(\alpha,\theta(u)\big)\stackrel{d}{=}$$
$$\stackrel{d}{=}\gamma(u)\cdot S^{1/\alpha}(\alpha',1)\circ S\big(\alpha,\theta(u)\big)\stackrel{d}{=}\gamma(u)\,S\big(\alpha\alpha',\theta(u)\big).$$
The remark that γ ( u ) and θ ( u ) uniquely determine μ concludes the proof of (17). Representation (18) is a particular case of (17). □
Remark 1.
Actually, the essence of Theorem 1 is that every multivariate strictly stable distribution with $\alpha<2$ is a scale mixture of multivariate stable laws with a characteristic exponent that is no smaller, the mixing distribution being a univariate one-sided strictly stable law. The case $\alpha=2$ is not an exception: in this case the mixing distribution is degenerate, concentrated at the unit point. This degenerate law formally satisfies the definition of a stable distribution, being the only stable law that is not absolutely continuous.
Let $\Sigma$ be a symmetric positive definite $(r\times r)$-matrix, $\alpha\in(0,2]$. If the characteristic function $f_{\alpha,\mu}(t)$ of a strictly stable random vector $S(\alpha,\mu)$ has the form
$$f_{\alpha,\mu}(t)=\mathsf{E}\exp\{it^{\top}S(\alpha,\mu)\}=\exp\{-(t^{\top}\Sigma t)^{\alpha/2}\},\qquad t\in\mathbb{R}^{r},\tag{19}$$
then the random vector $S(\alpha,\mu)$ is said to have the (centered) elliptically contoured stable distribution with characteristic exponent $\alpha$. In this case, for better vividness, we will use the special notation $S(\alpha,\mu)=S(\alpha,\Sigma)$. The corresponding characteristic function (19) will be denoted $f_{\alpha,\Sigma}(t)$ and the elliptically contoured stable distribution with characteristic function (19) will be denoted $\mathsf{S}_{\alpha,\Sigma}$. It is easy to see that $\mathsf{S}_{2,\Sigma}=\mathsf{N}_{2\Sigma}$.
Let $\alpha\in(0,2]$. If $X$ is a random vector such that $\mathcal{L}(X)=\mathsf{N}_{\Sigma}$, independent of the random variable $S(\alpha/2,1)$, then from (17) it follows that
$$S(\alpha,\Sigma)\stackrel{d}{=}S^{1/2}(\alpha/2,1)\circ S(2,\Sigma)\stackrel{d}{=}\sqrt{2S(\alpha/2,1)}\circ X\tag{20}$$
(also see Proposition 2.5.2 in [27]). More generally, if $0<\alpha\le 2$ and $0<\alpha'\le 1$, then
$$S(\alpha\alpha',\Sigma)\stackrel{d}{=}S^{1/\alpha}(\alpha',1)\circ S(\alpha,\Sigma).\tag{21}$$
If α = 2 , then (21) turns into (20).
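A short sketch of the elliptically contoured case (an illustration, not from the original text): representation (20) is checked against the characteristic function (19) for an assumed matrix $\Sigma$ and exponent $\alpha$; the one-sided stable generator is the same ad hoc Kanter routine used above.

```python
# Simulation check of representation (20) against the characteristic function (19):
# S(alpha, Sigma) =d sqrt(2 S(alpha/2, 1)) o X with X ~ N(0, Sigma).
# Sigma, alpha and the test vectors t are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def one_sided_stable(a, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

alpha, n = 1.3, 400_000
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = rng.multivariate_normal(np.zeros(2), Sigma, size=n)      # N(0, Sigma)
s_vec = np.sqrt(2.0 * one_sided_stable(alpha / 2.0, n, rng))[:, None] * x

for t in (np.array([1.0, 0.0]), np.array([0.3, -0.7])):
    print(t, np.cos(s_vec @ t).mean(), np.exp(-(t @ Sigma @ t) ** (alpha / 2.0)))
```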

5. Scale Mixtures of Multivariate Elliptically Contoured Stable Distributions

Let $U$ be a nonnegative random variable. The symbol $\mathsf{E}\mathsf{N}_{U\Sigma}(\,\cdot\,)$ will denote the distribution which for each Borel set $A$ in $\mathbb{R}^{r}$ is defined as
$$\mathsf{E}\mathsf{N}_{U\Sigma}(A)=\int_{0}^{\infty}\mathsf{N}_{u\Sigma}(A)\,d\mathsf{P}(U<u).$$
It is easy to see that if $X$ is a random vector such that $\mathcal{L}(X)=\mathsf{N}_{\Sigma}$, then $\mathsf{E}\mathsf{N}_{U\Sigma}=\mathcal{L}(\sqrt{U}\circ X)$.
In this notation, relation (20) can be written as
$$\mathsf{S}_{\alpha,\Sigma}=\mathsf{E}\mathsf{N}_{2S(\alpha/2,1)\Sigma}.$$
By analogy, the symbol $\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}$ will denote the distribution that for each Borel set $A$ in $\mathbb{R}^{r}$ is defined as
$$\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}(A)=\int_{0}^{\infty}\mathsf{S}_{\alpha,u^{2/\alpha}\Sigma}(A)\,d\mathsf{P}(U<u).$$
The characteristic function corresponding to the distribution $\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}$ has the form
$$\int_{0}^{\infty}\exp\big\{-\big(t^{\top}(u^{2/\alpha}\Sigma)t\big)^{\alpha/2}\big\}\,d\mathsf{P}(U<u)=\int_{0}^{\infty}\exp\big\{-\big((u^{1/\alpha}t)^{\top}\Sigma(u^{1/\alpha}t)\big)^{\alpha/2}\big\}\,d\mathsf{P}(U<u)=$$
$$=\mathsf{E}\exp\big\{it^{\top}\big(U^{1/\alpha}\circ S(\alpha,\Sigma)\big)\big\},\qquad t\in\mathbb{R}^{r},$$
where the random variable $U$ is independent of the random vector $S(\alpha,\Sigma)$; that is, the distribution $\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}$ corresponds to the product $U^{1/\alpha}\circ S(\alpha,\Sigma)$.
Let $\mathcal{U}$ be the set of all nonnegative random variables. Now consider an auxiliary statement dealing with the identifiability of the family of distributions $\{\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}:U\in\mathcal{U}\}$.
Lemma 1.
Whatever a nonsingular positive definite matrix $\Sigma$ is, the family $\{\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}:U\in\mathcal{U}\}$ is identifiable in the sense that if $U_1\in\mathcal{U}$, $U_2\in\mathcal{U}$ and
$$\mathsf{E}\mathsf{S}_{\alpha,U_1^{2/\alpha}\Sigma}(A)=\mathsf{E}\mathsf{S}_{\alpha,U_2^{2/\alpha}\Sigma}(A)\tag{24}$$
for any set $A\in\mathcal{B}(\mathbb{R}^{r})$, then $U_1\stackrel{d}{=}U_2$.
Proof. 
The proof of this lemma is very simple. If $U\in\mathcal{U}$, then it follows from (13) that the characteristic function $v_{\alpha,\Sigma}^{(U)}(t)$ corresponding to the distribution $\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}$ has the form
$$v_{\alpha,\Sigma}^{(U)}(t)=\int_{0}^{\infty}\exp\big\{-\big(t^{\top}(u^{2/\alpha}\Sigma)t\big)^{\alpha/2}\big\}\,d\mathsf{P}(U<u)=$$
$$=\int_{0}^{\infty}e^{-us}\,d\mathsf{P}(U<u),\qquad s=(t^{\top}\Sigma t)^{\alpha/2},\ t\in\mathbb{R}^{r}.\tag{25}$$
But on the right-hand side of (25) there is the Laplace–Stieltjes transform of the random variable $U$. From (24) it follows that $v_{\alpha,\Sigma}^{(U_1)}(t)=v_{\alpha,\Sigma}^{(U_2)}(t)$, whence by virtue of (25) the Laplace–Stieltjes transforms of the random variables $U_1$ and $U_2$ coincide, whence, in turn, it follows that $U_1\stackrel{d}{=}U_2$. The lemma is proved. □
Remark 2.
When proving Lemma 1 we established a simple but useful by-product result: if $\psi^{(U)}(s)$ is the Laplace–Stieltjes transform of the random variable $U$, then the characteristic function $v_{\alpha,\Sigma}^{(U)}(t)$ corresponding to the distribution $\mathsf{E}\mathsf{S}_{\alpha,U^{2/\alpha}\Sigma}$ has the form
$$v_{\alpha,\Sigma}^{(U)}(t)=\psi^{(U)}\big((t^{\top}\Sigma t)^{\alpha/2}\big),\qquad t\in\mathbb{R}^{r}.$$
Let $X$ be a random vector such that $\mathcal{L}(X)=\mathsf{N}_{\Sigma}$ with some positive definite $(r\times r)$-matrix $\Sigma$. Define the multivariate Laplace distribution as $\mathcal{L}(\sqrt{2W_1}\circ X)=\mathsf{E}\mathsf{N}_{2W_1\Sigma}$. The random vector with this multivariate Laplace distribution will be denoted $\Lambda_{\Sigma}$. It is well known that the Laplace–Stieltjes transform $\psi^{(W_1)}(s)$ of the random variable $W_1$ with the standard exponential distribution has the form
$$\psi^{(W_1)}(s)=(1+s)^{-1},\qquad s\ge 0.\tag{27}$$
Hence, in accordance with (27) and Remark 2, the characteristic function $f_{\Sigma}^{(\Lambda)}(t)$ of the random vector $\Lambda_{\Sigma}$ has the form
$$f_{\Sigma}^{(\Lambda)}(t)=\psi^{(W_1)}(t^{\top}\Sigma t)=\big(1+t^{\top}\Sigma t\big)^{-1},\qquad t\in\mathbb{R}^{r}.$$

6. Generalized Mittag–Leffler Distributions

We begin with the univariate case. The probability distribution of a nonnegative random variable $M_{\delta}$ whose Laplace–Stieltjes transform is
$$\psi_{\delta}^{(M)}(s)=\mathsf{E}e^{-sM_{\delta}}=\big(1+\lambda s^{\delta}\big)^{-1},\qquad s\ge 0,\tag{28}$$
where $\lambda>0$, $0<\delta\le 1$, is called the Mittag–Leffler distribution. For simplicity, in what follows we will consider the standard scale case and assume that $\lambda=1$.
The origin of the term ‘Mittag–Leffler distribution’ is due to the fact that the probability density corresponding to the Laplace transform (28) has the form
$$f_{\delta}^{(M)}(x)=\frac{1}{x^{1-\delta}}\sum_{n=0}^{\infty}\frac{(-1)^{n}x^{\delta n}}{\Gamma(\delta(n+1))}=-\frac{d}{dx}E_{\delta}(-x^{\delta}),\qquad x\ge 0,$$
where $E_{\delta}(z)$ is the Mittag–Leffler function with index $\delta$ that is defined as the power series
$$E_{\delta}(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(\delta n+1)},\qquad\delta>0,\ z\in\mathbb{C}.$$
With $\delta=1$, the Mittag–Leffler distribution turns into the standard exponential distribution, that is, $F_{1}^{(M)}(x)=[1-e^{-x}]\mathbf{1}(x\ge 0)$, $x\in\mathbb{R}$. But with $\delta<1$ the Mittag–Leffler distribution density has a heavy power-type tail: from the well-known asymptotic properties of the Mittag–Leffler function it can be deduced that if $0<\delta<1$, then
$$f_{\delta}^{(M)}(x)\sim\frac{\sin(\delta\pi)\,\Gamma(\delta+1)}{\pi x^{\delta+1}}$$
as $x\to\infty$, see, e.g., [39].
It is well known that the Mittag–Leffler distribution is geometrically stable. This means that if $X_1,X_2,\ldots$ are independent random variables whose distributions belong to the domain of attraction of a one-sided $\delta$-strictly stable law $\mathcal{L}(S(\delta,1))$ and $NB_{1,p}$ is a random variable independent of $X_1,X_2,\ldots$ and having the geometric distribution
$$\mathsf{P}(NB_{1,p}=n)=p(1-p)^{n-1},\qquad n=1,2,\ldots,\ p\in(0,1),\tag{29}$$
then for each $p\in(0,1)$ there exists a constant $a_p>0$ such that $a_p(X_1+\cdots+X_{NB_{1,p}})\Longrightarrow M_{\delta}$ as $p\to 0$, see, e.g., [40].
The history of the Mittag–Leffler distribution was discussed in [2]. For more details see e.g., [2,3] and the references therein. The Mittag–Leffler distributions are of serious theoretical interest in the problems related to thinned (or rarefied) homogeneous flows of events such as renewal processes or anomalous diffusion or relaxation phenomena, see [41,42] and the references therein.
Let $\nu>0$, $\delta\in(0,1]$. It can be easily seen that the Laplace transform $\psi_{\delta}^{(M)}(s)$ (see (28)) is infinitely divisible. Therefore, any positive power of it is a Laplace transform and, moreover, is infinitely divisible as well. The distribution of a nonnegative random variable $M_{\delta,\nu}$ defined by the Laplace–Stieltjes transform
$$\psi_{\delta,\nu}^{(M)}(s)=\mathsf{E}e^{-sM_{\delta,\nu}}=\big(1+s^{\delta}\big)^{-\nu},\qquad s\ge 0,\tag{30}$$
is called the generalized Mittag–Leffler distribution, see [43,44] and the references therein. Sometimes this distribution is called the Pillai distribution [20], although in the original paper [18] R. Pillai called it semi-Laplace. In the present paper we will keep to the first term generalized Mittag–Leffler distribution.
The properties of the univariate generalized Mittag–Leffler distribution are discussed in [4,43,44,45]. In particular, if $\delta\in(0,1]$ and $\nu>0$, then
$$M_{\delta,\nu}\stackrel{d}{=}S(\delta,1)\circ\bar G_{\nu,\delta,1}\stackrel{d}{=}S(\delta,1)\circ G_{\nu,1}^{1/\delta}\tag{31}$$
(see [43,44]). If $\nu=1$, then (31) turns into
$$M_{\delta}\stackrel{d}{=}S(\delta,1)\circ W_{1}^{1/\delta}.$$
If $\beta\ge\delta$, then the moments of order $\beta$ of the random variable $M_{\delta,\nu}$ are infinite, and if $0<\beta<\delta<1$, then
$$\mathsf{E}M_{\delta,\nu}^{\beta}=\frac{\Gamma\big(1-\frac{\beta}{\delta}\big)\,\Gamma\big(\nu+\frac{\beta}{\delta}\big)}{\Gamma(1-\beta)\,\Gamma(\nu)},$$
see [4].
In [4] it was demonstrated that the generalized Mittag–Leffler distribution can be represented as a scale mixture of ‘ordinary’ Mittag–Leffler distributions: if $\nu\in(0,1]$ and $\delta\in(0,1]$, then
$$M_{\delta,\nu}\stackrel{d}{=}Z_{\nu,1}^{-1/\delta}\circ M_{\delta}.\tag{33}$$
In [4] it was also shown that any generalized Mittag–Leffler distribution is a scale mixture of a one-sided stable law with any greater characteristic parameter, the mixing distribution being the generalized Mittag–Leffler law: if $\delta\in(0,1]$, $\delta'\in(0,1)$ and $\nu>0$, then
$$M_{\delta\delta',\nu}\stackrel{d}{=}S(\delta,1)\circ M_{\delta',\nu}^{1/\delta}.\tag{34}$$
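Representation (31) gives a direct way to simulate generalized Mittag–Leffler random variables and to verify the Laplace transform (30) empirically. The sketch below (illustrative $\delta$, $\nu$, sample size and test points; the one-sided stable generator is the same ad hoc Kanter routine as before) does exactly that.

```python
# Simulation check of representation (31) against the Laplace transform (30):
# M_{delta,nu} =d S(delta,1) o G_{nu,1}^{1/delta}, E exp(-s M_{delta,nu}) = (1+s^delta)^(-nu).
import numpy as np

rng = np.random.default_rng(3)

def one_sided_stable(a, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

delta, nu, n = 0.6, 1.7, 500_000
m = one_sided_stable(delta, n, rng) * rng.gamma(nu, 1.0, n) ** (1.0 / delta)

for s in (0.5, 1.0, 3.0):
    print(s, np.exp(-s * m).mean(), (1.0 + s ** delta) ** (-nu))
```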
Now turn to the multivariate case. As the starting point for our consideration we take representation (31). The nearest aim is to obtain its multivariate generalization. Let $S(\alpha,\mu)$ be a strictly stable random vector with $\alpha\ne 1$. Consider the characteristic function $h_{\alpha,\nu,\mu}(t)$ of the random vector $G_{\nu,1}^{1/\alpha}\circ S(\alpha,\mu)$. From (13) and (6) we have
$$h_{\alpha,\nu,\mu}(t)=\mathsf{E}\exp\big\{it^{\top}\big(G_{\nu,1}^{1/\alpha}\circ S(\alpha,\mu)\big)\big\}=\frac{1}{\Gamma(\nu)}\int_{0}^{\infty}f_{\alpha,\mu}(s^{1/\alpha}t)\,s^{\nu-1}e^{-s}\,ds=$$
$$=\frac{1}{\Gamma(\nu)}\int_{0}^{\infty}\exp\big\{-s\big(1-\log f_{\alpha,\mu}(t)\big)\big\}\,s^{\nu-1}\,ds=$$
$$=\frac{1}{\Gamma(\nu)}\cdot\frac{1}{\big(1-\log f_{\alpha,\mu}(t)\big)^{\nu}}\int_{0}^{\infty}e^{-s}s^{\nu-1}\,ds=\big(1-\log f_{\alpha,\mu}(t)\big)^{-\nu}.\tag{35}$$
That is, from (1) and (35) we obtain the following result.
Lemma 2.
The characteristic function h α , ν , μ ( t ) of the product of the random variable G ¯ ν , α , 1 with the generalized gamma distribution with parameters ν > 0 , 0 < α 2 , α 1 , λ = 1 and the random vector S ( α , μ ) with the multivariate strictly stable distribution with the characteristic exponent α and spectral measure μ, independent of G ¯ ν , α , 1 , has the form (35).
Now, comparing the right-hand side of (35) with (30) we can conclude that, if G ¯ ν , α , 1 is the random variable with the generalized gamma distribution with parameters ν > 0 , 0 < α 2 , α 1 , λ = 1 , and S ( α , μ + ) is a random vector with the one-sided strictly stable distribution with characteristic exponent α ( 0 , 1 ) and spectral measure μ + concentrated on Q + r , then we have all grounds to call the distribution of the random vector G ¯ ν , α , 1 S ( α , μ + ) multivariate generalized Mittag–Leffler distribution with parameters α , ν and μ + . To provide the possibility to consider the univariate generalized Mittag–Leffler distribution as a particular case of a more general multivariate definition, here we use the measure μ + and α ( 0 , 1 ) characterising the “one-sided” stable law, although from the formal viewpoint this is not obligatory. Moreover, as we will see below, in the multivariate case the (generalized) Mittag–Leffler distribution can be regarded as a special case of the (generalized) Linnik law defined in the same way but with μ and α ( 0 , 2 ] .
By $M_{\alpha,\nu,\mu_{+}}$ we will denote the random vector with the multivariate generalized Mittag–Leffler distribution, $M_{\alpha,\nu,\mu_{+}}\stackrel{d}{=}\bar G_{\nu,\alpha,1}\circ S(\alpha,\mu_{+})$.
Setting $\nu=1$, we obtain the definition of the ‘ordinary’ multivariate Mittag–Leffler distribution as the distribution of the random vector $M_{\alpha,\mu_{+}}\stackrel{d}{=}W_{1}^{1/\alpha}\circ S(\alpha,\mu_{+})$ given by the characteristic function $h_{\alpha,\mu_{+}}(t)=\big(1-\log f_{\alpha,\mu_{+}}(t)\big)^{-1}$.
Some properties of the multivariate generalized Mittag–Leffler distributions generalizing (33) and (34) to the multivariate case are presented in the following theorem.
Theorem 2.
Let $\delta\in(0,1)$, $\delta'\in(0,1)$ and $\nu>0$. Then
$$M_{\delta\delta',\nu,\mu_{+}}\stackrel{d}{=}M_{\delta',\nu}^{1/\delta}\circ S(\delta,\mu_{+}),\tag{36}$$
$$M_{\delta,\nu,\mu_{+}}\stackrel{d}{=}Z_{\nu,1}^{-1/\delta}\circ M_{\delta,\mu_{+}}\tag{37}$$
with the random variable Z ν , 1 defined in (3).
Proof. 
To prove (36), use (1) together with representation (18) in Theorem 1 and obtain
$$M_{\delta\delta',\nu,\mu_{+}}\stackrel{d}{=}\bar G_{\nu,\delta\delta',1}\circ S(\delta\delta',\mu_{+})\stackrel{d}{=}\bar G_{\nu,\delta',1}^{1/\delta}\circ S^{1/\delta}(\delta',1)\circ S(\delta,\mu_{+})\stackrel{d}{=}M_{\delta',\nu}^{1/\delta}\circ S(\delta,\mu_{+}).$$
To prove (37), use (1) together with (4) and obtain
$$M_{\delta,\nu,\mu_{+}}\stackrel{d}{=}\bar G_{\nu,\delta,1}\circ S(\delta,\mu_{+})\stackrel{d}{=}Z_{\nu,1}^{-1/\delta}\circ W_{1}^{1/\delta}\circ S(\delta,\mu_{+})\stackrel{d}{=}Z_{\nu,1}^{-1/\delta}\circ M_{\delta,\mu_{+}}.$$
The theorem is proved. □

7. Generalized Linnik Distributions

In 1953, Yu. V. Linnik [46] introduced a class of symmetric distributions whose characteristic functions have the form
$$f_{\alpha}^{(L)}(t)=\big(1+|t|^{\alpha}\big)^{-1},\qquad t\in\mathbb{R},\tag{38}$$
where $\alpha\in(0,2]$. The distributions with the characteristic function (38) are traditionally called the Linnik distributions. Although sometimes the term α-Laplace distributions [18] is used, we will use the first term, which has already become conventional. If $\alpha=2$, then the Linnik distribution turns into the Laplace distribution corresponding to the density
$$f^{(\Lambda)}(x)=\tfrac{1}{2}e^{-|x|},\qquad x\in\mathbb{R}.\tag{39}$$
A random variable with density (39) will be denoted Λ . A random variable with the Linnik distribution with parameter α will be denoted L α .
Perhaps most often, Linnik distributions are recalled as examples of symmetric geometrically stable distributions. This means that if $X_1,X_2,\ldots$ are independent random variables whose distributions belong to the domain of attraction of an $\alpha$-strictly stable symmetric law and $NB_{1,p}$ is a random variable independent of $X_1,X_2,\ldots$ and having the geometric distribution (29), then for each $p\in(0,1)$ there exists a constant $a_p>0$ such that $a_p(X_1+\cdots+X_{NB_{1,p}})\Longrightarrow L_{\alpha}$ as $p\to 0$, see, e.g., [47] or [40].
The properties of the Linnik distributions were studied in many papers. We should mention [7,8,9,48] and other papers, see the survey in [2].
In [2,7] it was demonstrated that
$$L_{\alpha}\stackrel{d}{=}W_{1}^{1/\alpha}\circ S(\alpha,0)\stackrel{d}{=}\sqrt{2M_{\alpha/2}}\circ X,\tag{40}$$
where the random variable $M_{\alpha/2}$ has the Mittag–Leffler distribution with parameter $\alpha/2$.
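Both representations in (40) are easy to compare by simulation. The following minimal sketch (illustrative $\alpha$ and test points; the ad hoc Kanter generator is used for the one-sided stable factors) produces Linnik samples via the exponential mixture of a symmetric stable law and via the normal scale mixture with Mittag–Leffler mixing, and compares both empirical characteristic functions with $(1+|t|^{\alpha})^{-1}$.

```python
# Simulation check of the two representations in (40) for the Linnik law:
# L_alpha =d W_1^{1/alpha} o S(alpha,0) =d sqrt(2 M_{alpha/2}) o X.
import numpy as np

rng = np.random.default_rng(4)

def one_sided_stable(a, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

alpha, n = 1.4, 500_000

# exponential scale mixture of a symmetric stable law
s_sym = np.sqrt(2.0 * one_sided_stable(alpha / 2.0, n, rng)) * rng.standard_normal(n)
linnik1 = rng.exponential(1.0, n) ** (1.0 / alpha) * s_sym

# normal scale mixture with Mittag-Leffler mixing: M_{alpha/2} = S(alpha/2,1) o W_1^{2/alpha}
m = one_sided_stable(alpha / 2.0, n, rng) * rng.exponential(1.0, n) ** (2.0 / alpha)
linnik2 = np.sqrt(2.0 * m) * rng.standard_normal(n)

for t in (0.5, 1.0, 2.0):
    print(t, np.cos(t * linnik1).mean(), np.cos(t * linnik2).mean(),
          1.0 / (1.0 + abs(t) ** alpha))
```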
The multivariate Linnik distribution was introduced by D. N. Anderson in [49] where it was proved that the function
$$f_{\alpha,\Sigma}^{(L)}(t)=\big(1+(t^{\top}\Sigma t)^{\alpha/2}\big)^{-1},\qquad t\in\mathbb{R}^{r},\ \alpha\in(0,2),\tag{41}$$
is the characteristic function of an r-variate probability distribution, where Σ is a positive definite ( r × r ) -matrix. In [49] the distribution corresponding to the characteristic function (41) was called the r-variate Linnik distribution. For the properties of these distributions see [16,49]. To distinguish from the general case, in what follows, the distribution corresponding to characteristic function (41) will be called multivariate (centered) elliptically contoured Linnik distribution.
The $r$-variate elliptically contoured Linnik distribution can also be defined in another way. Let $X$ be a random vector such that $\mathcal{L}(X)=\mathsf{N}_{\Sigma}$, where $\Sigma$ is a positive definite $(r\times r)$-matrix, independent of the random variable $M_{\alpha/2}$. By analogy with (40), introduce the random vector $L_{\alpha,\Sigma}$ as
$$L_{\alpha,\Sigma}=\sqrt{2M_{\alpha/2}}\circ X.\tag{42}$$
Then, in accordance with what has been said in Section 5,
$$\mathcal{L}(L_{\alpha,\Sigma})=\mathsf{E}\mathsf{N}_{2M_{\alpha/2}\Sigma}.$$
Using Remark 2 we can easily make sure that the two definitions of the multivariate elliptically contoured Linnik distribution coincide. Indeed, with the account of (28), according to Remark 2, the characteristic function of the random vector $L_{\alpha,\Sigma}$ defined by (42) has the form
$$\mathsf{E}\exp\{it^{\top}L_{\alpha,\Sigma}\}=\psi_{\alpha/2}^{(M)}(t^{\top}\Sigma t)=\big(1+(t^{\top}\Sigma t)^{\alpha/2}\big)^{-1}=f_{\alpha,\Sigma}^{(L)}(t),\qquad t\in\mathbb{R}^{r},$$
that coincides with Anderson’s definition (41).
Based on (40), one more equivalent definition of the multivariate elliptically contoured Linnik distribution can be proposed. Namely, let $L_{\alpha,\Sigma}$ be an $r$-variate random vector such that
$$L_{\alpha,\Sigma}=W_{1}^{1/\alpha}\circ S(\alpha,\Sigma).\tag{43}$$
In accordance with (27) and Remark 2, the characteristic function of the random vector $L_{\alpha,\Sigma}$ defined by (43) again has the form
$$\mathsf{E}\exp\{it^{\top}L_{\alpha,\Sigma}\}=\psi^{(W_1)}\big((t^{\top}\Sigma t)^{\alpha/2}\big)=\big(1+(t^{\top}\Sigma t)^{\alpha/2}\big)^{-1}=f_{\alpha,\Sigma}^{(L)}(t),\qquad t\in\mathbb{R}^{r}.$$
The definitions (42) and (43) open the way to formulate limit theorems stating that the multivariate elliptically contoured Linnik distribution can not only be limiting for geometric random sums of independent identically distributed random vectors with infinite second moments [50], but it also can be limiting for random sums of independent random vectors with finite covariance matrices.
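A short sketch of definition (43) in action (illustrative $\alpha$, $\Sigma$ and test vectors, not part of the original text): the vector $W_1^{1/\alpha}\circ S(\alpha,\Sigma)$ is simulated and its empirical characteristic function is compared with (41).

```python
# Simulation check of definition (43) against the characteristic function (41):
# L_{alpha,Sigma} = W_1^{1/alpha} o S(alpha,Sigma).
import numpy as np

rng = np.random.default_rng(5)

def one_sided_stable(a, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

alpha, n = 1.2, 400_000
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
x = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
stable_vec = np.sqrt(2.0 * one_sided_stable(alpha / 2.0, n, rng))[:, None] * x   # S(alpha, Sigma)
linnik_vec = rng.exponential(1.0, n)[:, None] ** (1.0 / alpha) * stable_vec      # L_{alpha, Sigma}

for t in (np.array([1.0, 0.0]), np.array([0.4, 0.8])):
    q = t @ Sigma @ t
    print(t, np.cos(linnik_vec @ t).mean(), 1.0 / (1.0 + q ** (alpha / 2.0)))
```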
It can be easily seen that the characteristic function $f_{\alpha}^{(L)}(t)$ (see (38)) is infinitely divisible. Therefore, any positive power of it is a characteristic function and, moreover, is also infinitely divisible. In [17], Pakes showed that the probability distributions known as generalized Linnik distributions, which have the characteristic functions
$$f_{\alpha,\nu}^{(L)}(t)=\big(1+|t|^{\alpha}\big)^{-\nu},\qquad t\in\mathbb{R},\ 0<\alpha\le 2,\ \nu>0,\tag{44}$$
play an important role in some characterization problems of mathematical statistics. The probability distributions corresponding to the characteristic function (44) possess some interesting properties and have found applications, see [6,7,10,11,12,14,51,52] and related papers. In particular, they are good candidates to model financial data that exhibit high kurtosis and heavy tails [53].
Any random variable with the characteristic function (44) will be denoted L α , ν .
Recall some results containing mixture representations for the generalized Linnik distribution. The following well-known result is due to Devroye [7] and Pakes [17], who showed that
$$L_{\alpha,\nu}\stackrel{d}{=}S(\alpha,0)\circ G_{\nu,1}^{1/\alpha}\stackrel{d}{=}S(\alpha,0)\circ\bar G_{\nu,\alpha,1}\tag{45}$$
for any $\alpha\in(0,2]$ and $\nu>0$.
It is well known that
$$\mathsf{E}G_{\nu,1}^{\gamma}=\frac{\Gamma(\nu+\gamma)}{\Gamma(\nu)}$$
for $\gamma>-\nu$. Hence, for $0\le\beta<\alpha$, from (8) and (45) we obtain
$$\mathsf{E}|L_{\alpha,\nu}|^{\beta}=\mathsf{E}|S(\alpha,0)|^{\beta}\cdot\mathsf{E}G_{\nu,1}^{\beta/\alpha}=\frac{2^{\beta}}{\sqrt{\pi}}\cdot\frac{\Gamma\big(\frac{\beta+1}{2}\big)\,\Gamma\big(1-\frac{\beta}{\alpha}\big)\,\Gamma\big(\nu+\frac{\beta}{\alpha}\big)}{\Gamma\big(1-\frac{\beta}{2}\big)\,\Gamma(\nu)}.$$
Generalizing and improving some results of [15,17], with the account of (31), in [5] it was demonstrated that for $\nu>0$ and $\alpha\in(0,2]$
$$L_{\alpha,\nu}\stackrel{d}{=}X\circ\sqrt{2S(\alpha/2,1)}\circ G_{\nu,1}^{1/\alpha}\stackrel{d}{=}X\circ\sqrt{2S(\alpha/2,1)\circ\bar G_{\nu,\alpha/2,1}}\stackrel{d}{=}X\circ\sqrt{2M_{\alpha/2,\nu}},\tag{46}$$
that is, the generalized Linnik distribution is a normal scale mixture with the generalized Mittag–Leffler mixing distribution.
It is easy to see that for any $\alpha>0$ and $\alpha'>0$
$$\bar G_{\nu,\alpha\alpha',1}\stackrel{d}{=}G_{\nu,1}^{1/(\alpha\alpha')}\stackrel{d}{=}\big(G_{\nu,1}^{1/\alpha'}\big)^{1/\alpha}\stackrel{d}{=}\bar G_{\nu,\alpha',1}^{1/\alpha}.$$
Therefore, for $\alpha\in(0,2]$, $\alpha'\in(0,1)$ and $\nu>0$, using (45) and the univariate version of (14) we obtain the following chain of relations:
$$L_{\alpha\alpha',\nu}\stackrel{d}{=}S(\alpha\alpha',0)\circ G_{\nu,1}^{1/(\alpha\alpha')}\stackrel{d}{=}S(\alpha,0)\circ S^{1/\alpha}(\alpha',1)\circ G_{\nu,1}^{1/(\alpha\alpha')}\stackrel{d}{=}$$
$$\stackrel{d}{=}S(\alpha,0)\circ\big(S(\alpha',1)\circ\bar G_{\nu,\alpha',1}\big)^{1/\alpha}\stackrel{d}{=}S(\alpha,0)\circ M_{\alpha',\nu}^{1/\alpha}.$$
Hence, the following statement, more general than (46), holds, representing the generalized Linnik distribution as a scale mixture of a symmetric stable law with any greater characteristic parameter, the mixing distribution being the generalized Mittag–Leffler law: if $\alpha\in(0,2]$, $\alpha'\in(0,1)$ and $\nu>0$, then
$$L_{\alpha\alpha',\nu}\stackrel{d}{=}S(\alpha,0)\circ M_{\alpha',\nu}^{1/\alpha}.$$
Now let $\nu\in(0,1]$. From (45) and (4) it follows that
$$L_{\alpha,\nu}\stackrel{d}{=}S(\alpha,0)\circ G_{\nu,1}^{1/\alpha}\stackrel{d}{=}S(\alpha,0)\circ W_{1}^{1/\alpha}\circ Z_{\nu,1}^{-1/\alpha}\stackrel{d}{=}L_{\alpha}\circ Z_{\nu,1}^{-1/\alpha},$$
yielding the following relation proved in [5]: if $\nu\in(0,1]$ and $\alpha\in(0,2]$, then
$$L_{\alpha,\nu}\stackrel{d}{=}L_{\alpha}\circ Z_{\nu,1}^{-1/\alpha}.$$
In other words, with ν ( 0 , 1 ] and α ( 0 , 2 ] , the generalized Linnik distribution is a scale mixture of ‘ordinary’ Linnik distributions. In the same paper the representation of the generalized Linnik distribution via the Laplace and ‘ordinary’ Mittag–Leffler distributions was obtained.
For $\delta\in(0,1]$ denote
$$R_{\delta}=\frac{S(\delta,1)}{S'(\delta,1)},$$
where $S(\delta,1)$ and $S'(\delta,1)$ are independent random variables with one and the same one-sided stable distribution with the characteristic exponent $\delta$. In [2] it was shown that the probability density $f_{\delta}^{(R)}(x)$ of the ratio $R_{\delta}$ of two independent random variables with one and the same one-sided strictly stable distribution with parameter $\delta$ has the form
$$f_{\delta}^{(R)}(x)=\frac{\sin(\pi\delta)\,x^{\delta-1}}{\pi\big[1+x^{2\delta}+2x^{\delta}\cos(\pi\delta)\big]},\qquad x>0,$$
also see [1], Section 3.3, where it was hidden among other calculations, but was not stated explicitly. In [5] it was proved that if ν ( 0 , 1 ] and α ( 0 , 2 ] , then
$$L_{\alpha,\nu}\stackrel{d}{=}X\circ Z_{\nu,1}^{-1/\alpha}\circ\sqrt{2M_{\alpha/2}}\stackrel{d}{=}\Lambda\circ Z_{\nu,1}^{-1/\alpha}\circ\sqrt{R_{\alpha/2}}.$$
So, the density of the univariate generalized Linnik distribution admits a simple integral representation via known elementary densities (2), (39) and (45).
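The density of the ratio $R_{\delta}$ written out above can be confronted with simulation. The sketch below (illustrative $\delta$ and cut points; the ad hoc Kanter generator is used for the one-sided stable variables) compares empirical tail probabilities of $R_{\delta}$ with quadrature of the closed-form density.

```python
# Simulation check of the closed-form density of R_delta = S(delta,1) / S'(delta,1):
# empirical tail probabilities are compared with quadrature of the density.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(6)

def one_sided_stable(a, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

delta, n = 0.7, 500_000
ratio = one_sided_stable(delta, n, rng) / one_sided_stable(delta, n, rng)

def density(x, d=delta):
    return (np.sin(np.pi * d) * x ** (d - 1.0)
            / (np.pi * (1.0 + x ** (2.0 * d) + 2.0 * x ** d * np.cos(np.pi * d))))

for x0 in (0.5, 1.0, 2.0):
    tail, _ = quad(density, x0, np.inf)
    print(x0, (ratio > x0).mean(), tail)          # at x0 = 1 both equal 0.5 by symmetry
```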
As concerns the property of geometric stability, the following statement holds.
Lemma 3.
Any univariate symmetric random variable $Y_{\alpha}$ is geometrically stable if and only if it is representable as
$$Y_{\alpha}\stackrel{d}{=}W_{1}^{1/\alpha}\circ S(\alpha,0),\qquad 0<\alpha\le 2.$$
Any univariate positive random variable $Y_{\alpha}$ is geometrically stable if and only if it is representable as
$$Y_{\alpha}\stackrel{d}{=}W_{1}^{1/\alpha}\circ S(\alpha,1),\qquad 0<\alpha\le 1.$$
Proof. 
These representations immediately follow from the definition of geometrically stable distributions and the transfer theorem for cumulative geometric random sums, see, e.g., [54]. □
Corollary 1.
If $\nu\ne 1$, then from the identifiability of scale mixtures of stable laws (see Lemma 1) it follows that the generalized Linnik distribution and the generalized Mittag–Leffler distributions are not geometrically stable.
Let Σ be a positive definite ( r × r ) -matrix, α ( 0 , 2 ] , ν > 0 . As the ‘ordinary’ multivariate Linnik distribution, the multivariate elliptically contoured generalized Linnik distribution can be defined in at least two equivalent ways. First, it can be defined by its characteristic function. Namely, a multivariate distribution is called (centered) elliptically contoured generalized Linnik law, if the corresponding characteristic function has the form
$$f_{\alpha,\nu,\Sigma}^{(L)}(t)=\big(1+(t^{\top}\Sigma t)^{\alpha/2}\big)^{-\nu},\qquad t\in\mathbb{R}^{r}.\tag{47}$$
Second, let $X$ be a random vector such that $\mathcal{L}(X)=\mathsf{N}_{\Sigma}$, independent of the random variable $M_{\alpha/2,\nu}$ with the generalized Mittag–Leffler distribution. By analogy with (46), introduce the random vector $L_{\alpha,\nu,\Sigma}$ as
$$L_{\alpha,\nu,\Sigma}=\sqrt{2M_{\alpha/2,\nu}}\circ X.\tag{48}$$
Then, in accordance with what has been said in Section 5,
$$\mathcal{L}(L_{\alpha,\nu,\Sigma})=\mathsf{E}\mathsf{N}_{2M_{\alpha/2,\nu}\Sigma}.$$
The distribution (48) will be called the multivariate (centered) elliptically contoured generalized Linnik distribution.
Using Remark 2 we can easily make sure that the two definitions of the multivariate elliptically contoured generalized Linnik distribution coincide. Indeed, with the account of (30), according to Remark 2, the characteristic function of the random vector L α , ν , Σ defined by (48) has the form
$$\mathsf{E}\exp\{it^{\top}L_{\alpha,\nu,\Sigma}\}=\psi_{\alpha/2,\nu}^{(M)}(t^{\top}\Sigma t)=\big(1+(t^{\top}\Sigma t)^{\alpha/2}\big)^{-\nu}=f_{\alpha,\nu,\Sigma}^{(L)}(t),\qquad t\in\mathbb{R}^{r},$$
that coincides with (47).
Based on (45), one more equivalent definition of the multivariate elliptically contoured generalized Linnik distribution can be proposed. Namely, let L α , ν , Σ be an r-variate random vector such that
$$L_{\alpha,\nu,\Sigma}=G_{\nu,1}^{1/\alpha}\circ S(\alpha,\Sigma).\tag{49}$$
If $\nu=1$, then, by definition, we obtain the random vector $W_{1}^{1/\alpha}\circ S(\alpha,\Sigma)=L_{\alpha,\Sigma}$ with the ‘ordinary’ multivariate elliptically contoured Linnik distribution.
It is well known that the Laplace–Stieltjes transform $\psi_{\nu,1}^{(G)}(s)$ of the random variable $G_{\nu,1}$ having the gamma distribution with the shape parameter $\nu$ has the form
$$\psi_{\nu,1}^{(G)}(s)=(1+s)^{-\nu},\qquad s\ge 0.$$
Then, in accordance with Remark 2, the characteristic function of the random vector $L_{\alpha,\nu,\Sigma}$ defined by (49) again has the form
$$\mathsf{E}\exp\{it^{\top}L_{\alpha,\nu,\Sigma}\}=\psi_{\nu,1}^{(G)}\big((t^{\top}\Sigma t)^{\alpha/2}\big)=\big(1+(t^{\top}\Sigma t)^{\alpha/2}\big)^{-\nu}=f_{\alpha,\nu,\Sigma}^{(L)}(t),\qquad t\in\mathbb{R}^{r}.$$
Definitions (48) and (49) open the way to formulate limit theorems stating that the multivariate elliptically contoured generalized Linnik distribution can be limiting both for random sums of independent identically distributed random vectors with infinite second moments and for random sums of independent random vectors with finite covariance matrices.
There are several ways of generalizing the univariate symmetric Linnik and generalized Linnik laws to the asymmetric case. The traditional (and formal) approach to the asymmetric generalization of the Linnik distribution (see, e.g., [15,55,56]) consists in the consideration of geometric sums of random summands whose distributions are attracted to an asymmetric strictly stable distribution. The variances of such summands are infinite. Since in modeling real phenomena, as a rule, there are no solid reasons to reject the assumption of the finiteness of the variances of elementary summands, in [57] two alternative asymmetric generalizations were proposed based on the representability of the Linnik distribution as a scale mixture of normal laws or a scale mixture of Laplace laws.
Nevertheless, for our purposes it is convenient to deal with the traditional asymmetric generalization of the generalized Linnik distribution. Let $S(\alpha,\theta)$ be a random variable with the strictly stable distribution defined by the characteristic exponent $\alpha\in(0,2]$ and asymmetry parameter $\theta\in[-1,1]$, and let $\bar G_{\nu,\alpha,1}$ be a random variable having the GG distribution with shape parameter $\nu>0$ and exponent power parameter $\alpha$, independent of $S(\alpha,\theta)$. Based on representation (45), we define the asymmetric generalized Linnik distribution as $\mathcal{L}\big(\bar G_{\nu,\alpha,1}\circ S(\alpha,\theta)\big)$. A random variable with this distribution will be denoted $L_{\alpha,\nu,\theta}$.
In the multivariate case a natural way of construction of the asymmetric Linnik laws consists in the application of Lemma 2 with not necessarily elliptically contoured strictly stable distribution. Namely, let the random variable G ¯ ν , α , 1 have the generalized gamma distribution and be independent of the random vector S ( α , μ ) with the strictly stable distribution with characteristic exponent α ( 0 , 2 ] and spectral measure μ . Extending the definitions of multivariate elliptically contoured generalized Linnik distribution given above, we will say that the distribution of the random vector G ¯ ν , α , 1 S ( α , μ ) is the multivariate generalized Linnik distribution. Formally, this definition embraces both multivariate elliptically contoured generalized Linnik laws and, moreover, multivariate generalized Mittag–Leffler laws (if μ = μ + ). A random vector with the multivariate generalized Linnik distribution will be denoted L α , ν , μ .
If ν = 1 , then we have the ‘ordinary’ multivariate Linnik distribution. By definition, L α , 1 , μ = L α , μ .
Mixture representations for the generalized Mittag–Leffler distribution were considered in [5] and discussed in Section 6 together with their extensions to the multivariate case. Here, we will focus on the mixture representations for the multivariate generalized Linnik distribution. Our reasoning is based on the definition of the multivariate generalized Linnik distribution given above and Theorem 1.
For $\alpha\in(0,2]$, $\alpha'\in(0,1)$ and $\nu>0$, using (1), (17) and (31) we obtain the following chain of relations:
$$L_{\alpha\alpha',\nu,\mu}\stackrel{d}{=}\bar G_{\nu,\alpha\alpha',1}\circ S(\alpha\alpha',\mu)\stackrel{d}{=}G_{\nu,1}^{1/(\alpha\alpha')}\circ S(\alpha\alpha',\mu)\stackrel{d}{=}S^{1/\alpha}(\alpha',1)\circ G_{\nu,1}^{1/(\alpha\alpha')}\circ S(\alpha,\mu)\stackrel{d}{=}$$
$$\stackrel{d}{=}\big(S(\alpha',1)\circ\bar G_{\nu,\alpha',1}\big)^{1/\alpha}\circ S(\alpha,\mu)\stackrel{d}{=}M_{\alpha',\nu}^{1/\alpha}\circ S(\alpha,\mu).$$
Hence, the following statement holds representing the multivariate generalized Linnik distribution as a scale mixture of a multivariate stable law with any greater characteristic parameter, the mixing distribution being the univariate generalized Mittag–Leffler law.
Theorem 3.
If $\alpha\in(0,2]$, $\alpha'\in(0,1)$ and $\nu>0$, then
$$L_{\alpha\alpha',\nu,\mu}\stackrel{d}{=}M_{\alpha',\nu}^{1/\alpha}\circ S(\alpha,\mu).$$
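Theorem 3 can be illustrated numerically, at least in the elliptically contoured case (this restriction is an assumption made only so that a closed-form characteristic function is available for comparison): the right-hand side $M_{\alpha',\nu}^{1/\alpha}\circ S(\alpha,\Sigma)$ is simulated and compared with $\big(1+(t^{\top}\Sigma t)^{\alpha\alpha'/2}\big)^{-\nu}$; all numerical choices are illustrative.

```python
# Simulation sketch of Theorem 3 restricted to the elliptically contoured case:
# M_{alpha',nu}^{1/alpha} o S(alpha,Sigma) vs (1 + (t'St)^(alpha*alpha'/2))^(-nu).
import numpy as np

rng = np.random.default_rng(7)

def one_sided_stable(a, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

alpha, alpha_p, nu, n = 1.6, 0.5, 2.0, 400_000
Sigma = np.array([[1.0, -0.2],
                  [-0.2, 0.8]])

m = one_sided_stable(alpha_p, n, rng) * rng.gamma(nu, 1.0, n) ** (1.0 / alpha_p)   # M_{alpha',nu} via (31)
x = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
s_vec = np.sqrt(2.0 * one_sided_stable(alpha / 2.0, n, rng))[:, None] * x          # S(alpha, Sigma) via (20)
candidate = m[:, None] ** (1.0 / alpha) * s_vec                                    # L_{alpha*alpha', nu, Sigma}

for t in (np.array([0.6, 0.0]), np.array([0.5, 0.5])):
    q = t @ Sigma @ t
    print(t, np.cos(candidate @ t).mean(), (1.0 + q ** (alpha * alpha_p / 2.0)) ** (-nu))
```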
Now let $\nu\in(0,1]$. From (45) and (4) it follows that
$$L_{\alpha,\nu,\mu}\stackrel{d}{=}G_{\nu,1}^{1/\alpha}\circ S(\alpha,\mu)\stackrel{d}{=}Z_{\nu,1}^{-1/\alpha}\circ W_{1}^{1/\alpha}\circ S(\alpha,\mu)\stackrel{d}{=}Z_{\nu,1}^{-1/\alpha}\circ L_{\alpha,\mu},$$
yielding the following statement.
Theorem 4.
If $\nu\in(0,1]$ and $\alpha\in(0,2]$, then
$$L_{\alpha,\nu,\mu}\stackrel{d}{=}Z_{\nu,1}^{-1/\alpha}\circ L_{\alpha,\mu}.$$
In other words, with ν ( 0 , 1 ] and α ( 0 , 2 ] , the multivariate generalized Linnik distribution is a scale mixture of ‘ordinary’ multivariate Linnik distributions.
Consider projections of a random vector with the multivariate generalized Linnik distribution. For an arbitrary $u\in\mathbb{R}^{r}$ we have
$$u^{\top}L_{\alpha,\nu,\mu}\stackrel{d}{=}u^{\top}\big(\bar G_{\nu,\alpha,1}\circ S(\alpha,\mu)\big)=\bar G_{\nu,\alpha,1}\circ u^{\top}S(\alpha,\mu)\stackrel{d}{=}\bar G_{\nu,\alpha,1}\circ\gamma(u)\,S\big(\alpha,\theta(u)\big)=$$
$$=\gamma(u)\cdot\bar G_{\nu,\alpha,1}\circ S\big(\alpha,\theta(u)\big)\stackrel{d}{=}\gamma(u)\,L_{\alpha,\nu,\theta(u)}.$$
This means that the following statement holds.
Theorem 5.
Let the random vector $L_{\alpha,\nu,\mu}$ have the multivariate generalized Linnik distribution with $\alpha\in(0,2]$, $\nu>0$ and spectral measure $\mu$, and let $\gamma(u)$ and $\theta(u)$, $u\in\mathbb{R}^{r}$, be the projection scale parameter function and the projection asymmetry parameter function corresponding to the spectral measure $\mu$. Then any projection of the random vector $L_{\alpha,\nu,\mu}$ has the univariate asymmetric generalized Linnik distribution with the asymmetry parameter $\theta(u)$, scaled by $\gamma(u)$:
$$u^{\top}L_{\alpha,\nu,\mu}\stackrel{d}{=}\gamma(u)\,L_{\alpha,\nu,\theta(u)}.$$
Now consider the elliptically contoured case. Let $\alpha\in(0,2]$ and let the random vector $\Lambda_{\Sigma}$ have the multivariate Laplace distribution with some positive definite $(r\times r)$-matrix $\Sigma$. In [33] it was shown that if $\delta\in(0,1]$, then
$$W_{\delta}\stackrel{d}{=}W_{1}\circ S^{-1}(\delta,1).$$
Hence, it can be easily seen that
$$L_{\alpha,\Sigma}\stackrel{d}{=}W_{1}^{1/\alpha}\circ S(\alpha,\Sigma)\stackrel{d}{=}\sqrt{2W_{\alpha/2}\circ S(\alpha/2,1)}\circ X\stackrel{d}{=}\sqrt{2W_{1}\circ R_{\alpha/2}}\circ X\stackrel{d}{=}\sqrt{R_{\alpha/2}}\circ\Lambda_{\Sigma}.$$
So, from Theorem 4 and (52) we obtain the following statement.
Corollary 2.
If $\nu\in(0,1]$ and $\alpha\in(0,2]$, then the multivariate elliptically contoured generalized Linnik distribution is a scale mixture of multivariate Laplace distributions:
$$L_{\alpha,\nu,\Sigma}\stackrel{d}{=}Z_{\nu,1}^{-1/\alpha}\circ\sqrt{R_{\alpha/2}}\circ\Lambda_{\Sigma}.$$
From (31) with $\nu=1$ and (51) it can be seen that
$$L_{\alpha,\Sigma}\stackrel{d}{=}\sqrt{2M_{\alpha/2}}\circ X.$$
Therefore we obtain one more corollary of Theorem 4 representing the multivariate generalized Linnik distribution via ‘ordinary’ Mittag–Leffler distributions.
Corollary 3.
If $\nu\in(0,1]$ and $\alpha\in(0,2]$, then
$$L_{\alpha,\nu,\Sigma}\stackrel{d}{=}Z_{\nu,1}^{-1/\alpha}\circ\sqrt{2M_{\alpha/2}}\circ X.$$

8. General Scale-Mixed Stable Distributions

In the preceding sections we considered special scale-mixed stable distributions in which the mixing distribution was generalized gamma leading to popular Mittag–Leffler and Linnik laws. Now turn to the case where the mixing distribution can be arbitrary.
Let $\alpha\in(0,2]$, let $U$ be a positive random variable and let $S(\alpha,\mu)$ be a random vector with the strictly stable distribution defined by the characteristic exponent $\alpha$ and spectral measure $\mu$. An $r$-variate random vector $Y_{\alpha,\mu}$ is said to have the $U$-scale-mixed stable distribution if
$$Y_{\alpha,\mu}\stackrel{d}{=}U^{1/\alpha}\circ S(\alpha,\mu).$$
Correspondingly, for $0<\alpha\le 1$, a univariate positive random variable $Y_{\alpha,1}$ is said to have the $U$-scale-mixed one-sided stable distribution if it is representable as
$$Y_{\alpha,1}\stackrel{d}{=}U^{1/\alpha}\circ S(\alpha,1).$$
As above, in the elliptically contoured case, where to the spectral measure μ there corresponds a positive definite ( r × r ) -matrix Σ , instead of Y α , μ we will write Y α , Σ .
The following statement generalizes Theorem 2 (Equation (36)) and Theorem 3.
Theorem 6.
Let $U$ be a positive random variable, $\alpha\in(0,2]$, $\alpha'\in(0,1]$. Let $S(\alpha,\mu)$ be a random vector with the strictly stable distribution defined by the characteristic exponent $\alpha$ and spectral measure $\mu$, let an $r$-variate random vector $Y_{\alpha\alpha',\mu}$ have the $U$-scale-mixed stable distribution and let a random variable $Y_{\alpha',1}$ have the $U$-scale-mixed one-sided stable distribution. Assume that $S(\alpha,\mu)$ and $Y_{\alpha',1}$ are independent. Then
$$Y_{\alpha\alpha',\mu}\stackrel{d}{=}Y_{\alpha',1}^{1/\alpha}\circ S(\alpha,\mu).$$
Proof. 
From the definition of a $U$-scale-mixed stable distribution and (17) we have
$$Y_{\alpha\alpha',\mu}\stackrel{d}{=}U^{1/(\alpha\alpha')}\circ S(\alpha\alpha',\mu)\stackrel{d}{=}U^{1/(\alpha\alpha')}\circ S^{1/\alpha}(\alpha',1)\circ S(\alpha,\mu)\stackrel{d}{=}$$
$$\stackrel{d}{=}\big(U^{1/\alpha'}\circ S(\alpha',1)\big)^{1/\alpha}\circ S(\alpha,\mu)\stackrel{d}{=}Y_{\alpha',1}^{1/\alpha}\circ S(\alpha,\mu).$$
 □
In the elliptically contoured case with α = 2 , from Theorem 6 we obtain the following statement.
Corollary 4.
Let $\alpha\in(0,2)$, let $U$ be a positive random variable, $\Sigma$ be a positive definite $(r\times r)$-matrix, and let $X$ be a random vector such that $\mathcal{L}(X)=\mathsf{N}_{\Sigma}$. Then
$$Y_{\alpha,\Sigma}\stackrel{d}{=}\sqrt{2Y_{\alpha/2,1}}\circ X.$$
In other words, any multivariate scale-mixed symmetric stable distribution is a scale mixture of multivariate normal laws. On the other hand, since the normal distribution is stable with α = 2 , any multivariate normal scale mixture is a ‘trivial’ multivariate scale-mixed stable distribution.
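Corollary 4 suggests a generic simulation recipe for any $U$-scale-mixed elliptically contoured stable law: sample the mixing variable, turn it into $Y_{\alpha/2,1}$ and multiply a normal vector by $\sqrt{2Y_{\alpha/2,1}}$. The sketch below (not from the original text) implements this recipe with an ad hoc helper; the Pareto choice of $U$ and all parameter values are purely illustrative.

```python
# Generic sampler built on Corollary 4: Y_{alpha,Sigma} = sqrt(2 Y_{alpha/2,1}) o X,
# with Y_{alpha/2,1} = U^{2/alpha} o S(alpha/2,1) and X ~ N(0, Sigma).
import numpy as np

rng = np.random.default_rng(8)

def one_sided_stable(a, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(a * u) / np.sin(u) ** (1.0 / a)
            * (np.sin((1.0 - a) * u) / e) ** ((1.0 - a) / a))

def scale_mixed_stable(alpha, Sigma, u_sampler, size, rng):
    """Sample Y_{alpha,Sigma} = U^{1/alpha} o S(alpha,Sigma) as a normal scale mixture."""
    u_mix = u_sampler(size)                                       # mixing variable U
    y_half = u_mix ** (2.0 / alpha) * one_sided_stable(alpha / 2.0, size, rng)
    x = rng.multivariate_normal(np.zeros(len(Sigma)), Sigma, size=size)
    return np.sqrt(2.0 * y_half)[:, None] * x

samples = scale_mixed_stable(1.5, np.eye(2), lambda m: 1.0 + rng.pareto(3.0, m), 100_000, rng)
print(samples.shape, np.median(np.linalg.norm(samples, axis=1)))
```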
To give particular examples of ‘non-trivial’ scale-mixed stable distributions, note that
  • if $U\stackrel{d}{=}W_{1}$, then $Y_{\alpha,1}\stackrel{d}{=}M_{\alpha}$ and $Y_{\alpha,\mu}\stackrel{d}{=}L_{\alpha,\mu}$;
  • if $U\stackrel{d}{=}G_{\nu,1}$, then $Y_{\alpha,1}\stackrel{d}{=}M_{\alpha,\nu}$ and $Y_{\alpha,\mu}\stackrel{d}{=}L_{\alpha,\nu,\mu}$;
  • if $U\stackrel{d}{=}S(\alpha',1)$ with $0<\alpha'\le 1$, then $Y_{\alpha,1}\stackrel{d}{=}S(\alpha\alpha',1)$ and $Y_{\alpha,\mu}\stackrel{d}{=}S(\alpha\alpha',\mu)$.
Among possible mixing distributions of the random variable $U$, we will distinguish a special class that can play an important role in modeling observed regularities by heavy-tailed distributions. Namely, assume that $V$ is a positive random variable and let
$$U\stackrel{d}{=}V\circ G_{\nu,1},$$
that is, the distribution of U is a scale mixture of gamma distributions. We will denote the class of these distributions as G ( V ) . This class is rather wide and besides the gamma distribution and its particular cases (exponential, Erlang, chi-square, etc.) with exponentially fast decreasing tail, contains, for example, Pareto and Snedecor–Fisher laws with power-type decreasing tail. In the last two cases the random variable V is assumed to have the corresponding gamma and inverse gamma distributions, respectively.
For $\mathcal{L}(U)\in G(V)$ we have
$$Y_{\alpha,1}\stackrel{d}{=}(V\circ G_{\nu,1})^{1/\alpha}\circ S(\alpha,1)\stackrel{d}{=}V^{1/\alpha}\circ G_{\nu,1}^{1/\alpha}\circ S(\alpha,1)\stackrel{d}{=}V^{1/\alpha}\circ M_{\alpha,\nu}$$
and
$$Y_{\alpha,\mu}\stackrel{d}{=}(V\circ G_{\nu,1})^{1/\alpha}\circ S(\alpha,\mu)\stackrel{d}{=}V^{1/\alpha}\circ G_{\nu,1}^{1/\alpha}\circ S(\alpha,\mu)\stackrel{d}{=}V^{1/\alpha}\circ L_{\alpha,\nu,\mu}.$$
This means that with L ( U ) G ( V ) , the U-scale-mixed stable distributions are scale mixtures of the generalized Mittag–Leffler and multivariate generalized Linnik laws.
Therefore, we pay a special attention to mixture representations of the generalized Mittag–Leffler and multivariate generalized Linnik distributions. These representations can be easily extended to any U-scale-mixed stable distributions with L ( U ) G ( V ) .

9. Convergence of the Distributions of Random Sequences with Independent Indices to Multivariate Scale-Mixed Stable Distributions

In applied probability it is a convention that a model distribution can be regarded as well-justified or adequate, if it is an asymptotic approximation, that is, if there exists a rather simple limit setting (say, schemes of maximum or summation of random variables) and the corresponding limit theorem in which the model under consideration manifests itself as a limit distribution. The existence of such limit setting can provide a better understanding of real mechanisms that generate observed statistical regularities, see e.g., [54].
In this section we will prove some limit theorems presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions will be obtained for the convergence of the distributions of random sums of random vectors with both very large and finite covariance matrices to the multivariate generalized Linnik distribution.
Consider a sequence $\{S_n\}_{n \ge 1}$ of random elements taking values in $\mathbb{R}^r$. Let $\Xi(\mathbb{R}^r)$ be the set of all nonsingular linear operators acting from $\mathbb{R}^r$ to $\mathbb{R}^r$. The identity operator acting from $\mathbb{R}^r$ to $\mathbb{R}^r$ will be denoted $I_r$. Assume that there exist sequences $\{B_n\}_{n \ge 1}$ of operators from $\Xi(\mathbb{R}^r)$ and $\{a_n\}_{n \ge 1}$ of elements from $\mathbb{R}^r$ such that
$$Q_n = B_n^{-1}(S_n - a_n) \Longrightarrow Q \quad (n \to \infty), \qquad (53)$$
where Q is a random element whose distribution with respect to $\mathsf{P}$ will be denoted H, $H = \mathcal{L}(Q)$.
Along with $\{S_n\}_{n \ge 1}$, consider a sequence of integer-valued positive random variables $\{N_n\}_{n \ge 1}$ such that for each $n \ge 1$ the random variable $N_n$ is independent of the sequence $\{S_k\}_{k \ge 1}$. Let $c_n \in \mathbb{R}^r$, $D_n \in \Xi(\mathbb{R}^r)$, $n \ge 1$. Now we will formulate sufficient conditions for the weak convergence of the distributions of the random elements $Z_n = D_n^{-1}(S_{N_n} - c_n)$ as $n \to \infty$.
For $g \in \mathbb{R}^r$ denote $W_n(g) = D_n^{-1}(B_{N_n} g + a_{N_n} - c_n)$. By the measurability of a random field we will mean its measurability as a function of two variables, an elementary outcome and a parameter, with respect to the Cartesian product of the σ-algebra $\mathcal{A}$ and the Borel σ-algebra $\mathcal{B}(\mathbb{R}^r)$ of subsets of $\mathbb{R}^r$.
In [58,59] the following theorem was proved; it establishes sufficient conditions for the weak convergence of multivariate random sequences with independent random indices under operator normalization.
Theorem 7.
[58,59]. Let $D_n^{-1} \to 0$ as $n \to \infty$ and let the sequence of random variables $\{D_n^{-1} B_{N_n}\}_{n \ge 1}$ be tight. Assume that there exist a random element Q with distribution H and an r-dimensional random field $W(g)$, $g \in \mathbb{R}^r$, such that (53) holds and
$$W_n(g) \Longrightarrow W(g) \quad (n \to \infty)$$
for H-almost all $g \in \mathbb{R}^r$. Then the random field $W(g)$ is measurable, linearly depends on g and
$$Z_n \Longrightarrow W(Q) \quad (n \to \infty),$$
where the random field $W(\cdot)$ and the random element Q are independent.
Now consider a special case of the general limit setting and assume that the normalization is scalar and the limit random vector Q in (53) has an elliptically contoured stable distribution. Namely, let $\{b_n\}_{n \ge 1}$ be an infinitely increasing sequence of positive numbers and, instead of the general condition (53), assume that
$$\mathcal{L}\big(b_n^{-1/\alpha} S_n\big) \Longrightarrow S_{\alpha, \Sigma}$$
as $n \to \infty$, where $\alpha \in (0, 2]$ and Σ is some positive definite matrix. In other words, let
$$b_n^{-1/\alpha} S_n \Longrightarrow S(\alpha, \Sigma) \quad (n \to \infty). \qquad (54)$$
Let $\{d_n\}_{n \ge 1}$ be an infinitely increasing sequence of positive numbers. As $Z_n$, take the scalar normalized random vector
$$Z_n = d_n^{-1/\alpha} S_{N_n}.$$
The following result can be considered as a multivariate generalization of the main theorem of [29].
 Theorem 8. 
Let $N_n \to \infty$ in probability as $n \to \infty$. Assume that the random vectors $S_1, S_2, \ldots$ satisfy condition (54) with $\alpha \in (0, 2]$ and a positive definite matrix Σ. Then a distribution F such that
$$\mathcal{L}(Z_n) \Longrightarrow F \quad (n \to \infty) \qquad (55)$$
exists if and only if there exists a distribution function $V(x)$ satisfying the conditions
(i) 
$V(x) = 0$ for $x < 0$;
(ii) 
for any $A \in \mathcal{B}(\mathbb{R}^r)$
$$F(A) = \mathsf{E}\, S_{\alpha, U^{2/\alpha}\Sigma}(A) = \int_0^{\infty} S_{\alpha, u^{2/\alpha}\Sigma}(A)\, dV(u),$$
where U is a random variable with the distribution function $V(x)$;
(iii) 
$\mathsf{P}(b_{N_n} < d_n x) \Longrightarrow V(x)$, $n \to \infty$.
Proof. 
The ‘if’ part. We will essentially exploit Theorem 7. For each $n \ge 1$ let $a_n = c_n = 0$, $B_n = b_n^{1/\alpha} I_r$, $D_n = d_n^{1/\alpha} I_r$. Let U be a random variable with the distribution function $V(x)$. Note that the conditions of the theorem guarantee the tightness of the sequence of random variables
$$D_n^{-1} B_{N_n} = (b_{N_n}/d_n)^{1/\alpha}, \quad n = 1, 2, \ldots,$$
implied by its weak convergence to the random variable $U^{1/\alpha}$. Further, in the case under consideration we have $W_n(g) = (b_{N_n}/d_n)^{1/\alpha} \cdot g$, $g \in \mathbb{R}^r$. Therefore, the condition $b_{N_n}/d_n \Longrightarrow U$ implies $W_n(g) \Longrightarrow U^{1/\alpha} g$ for all $g \in \mathbb{R}^r$.
Condition (54) means that in the case under consideration $H = S_{\alpha, \Sigma}$. Hence, by Theorem 7, $Z_n \Longrightarrow U^{1/\alpha} \circ S(\alpha, \Sigma)$ (recall that the symbol ∘ stands for the product of independent random elements). The distribution of the random element $U^{1/\alpha} \circ S(\alpha, \Sigma)$ coincides with $\mathsf{E}\, S_{\alpha, U^{2/\alpha}\Sigma}$, see Section 5.
The ‘only if’ part. Let condition (55) hold. Make sure that the sequence $\{D_n^{-1} B_{N_n}\}_{n \ge 1}$ is tight. Let $Q \stackrel{d}{=} S(\alpha, \Sigma)$. There exist $\delta > 0$ and $\rho > 0$ such that
$$\mathsf{P}(\|Q\| > \rho) > \delta. \qquad (56)$$
For ρ specified above and an arbitrary $x > 0$ we have
$$\mathsf{P}(\|Z_n\| > x) \ge \mathsf{P}\big(\|d_n^{-1/\alpha} S_{N_n}\| > x;\ \|b_{N_n}^{-1/\alpha} S_{N_n}\| > \rho\big) =$$
$$= \mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > x \cdot \|b_{N_n}^{-1/\alpha} S_{N_n}\|^{-1};\ \|b_{N_n}^{-1/\alpha} S_{N_n}\| > \rho\big) \ge$$
$$\ge \mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > x/\rho;\ \|b_{N_n}^{-1/\alpha} S_{N_n}\| > \rho\big) =$$
$$= \sum_{k=1}^{\infty} \mathsf{P}(N_n = k)\, \mathsf{P}\big((b_k/d_n)^{1/\alpha} > x/\rho;\ \|b_k^{-1/\alpha} S_k\| > \rho\big) =$$
$$= \sum_{k=1}^{\infty} \mathsf{P}(N_n = k)\, \mathsf{P}\big((b_k/d_n)^{1/\alpha} > x/\rho\big)\, \mathsf{P}\big(\|b_k^{-1/\alpha} S_k\| > \rho\big) \qquad (57)$$
(the last equality holds since any constant is independent of any random variable). Since, by (54), the convergence $b_k^{-1/\alpha} S_k \Longrightarrow Q$ takes place as $k \to \infty$, from (56) it follows that there exists a number $k_0 = k_0(\rho, \delta)$ such that
$$\mathsf{P}\big(\|b_k^{-1/\alpha} S_k\| > \rho\big) > \delta/2$$
for all $k > k_0$. Therefore, continuing (57) we obtain
$$\mathsf{P}(\|Z_n\| > x) \ge \frac{\delta}{2} \sum_{k=k_0+1}^{\infty} \mathsf{P}(N_n = k)\, \mathsf{P}\big((b_k/d_n)^{1/\alpha} > x/\rho\big) =$$
$$= \frac{\delta}{2} \Big[\mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > x/\rho\big) - \sum_{k=1}^{k_0} \mathsf{P}(N_n = k)\, \mathsf{P}\big((b_k/d_n)^{1/\alpha} > x/\rho\big)\Big] \ge$$
$$\ge \frac{\delta}{2} \Big[\mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > x/\rho\big) - \mathsf{P}(N_n \le k_0)\Big].$$
Hence,
$$\mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > x/\rho\big) \le \frac{2}{\delta}\, \mathsf{P}(\|Z_n\| > x) + \mathsf{P}(N_n \le k_0). \qquad (58)$$
From the condition $N_n \to \infty$ in probability as $n \to \infty$ it follows that for any $\epsilon > 0$ there exists an $n_0 = n_0(\epsilon)$ such that $\mathsf{P}(N_n \le k_0) < \epsilon$ for all $n \ge n_0$. Therefore, with the account of the tightness of the sequence $\{Z_n\}_{n \ge 1}$, which follows from its weak convergence to the random element Z with $\mathcal{L}(Z) = F$ implied by (55), relation (58) implies
$$\lim_{x \to \infty}\, \sup_{n \ge n_0(\epsilon)} \mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > x/\rho\big) \le \epsilon, \qquad (59)$$
whatever $\epsilon > 0$ is. Now assume that the sequence
$$D_n^{-1} B_{N_n} = (b_{N_n}/d_n)^{1/\alpha}, \quad n = 1, 2, \ldots,$$
is not tight. In that case there exist a $\gamma > 0$ and sequences $\mathcal{N}$ of natural numbers and $\{x_n\}_{n \in \mathcal{N}}$ of real numbers satisfying the conditions $x_n \to \infty$ ($n \to \infty$, $n \in \mathcal{N}$) and
$$\mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > x_n\big) > \gamma, \quad n \in \mathcal{N}. \qquad (60)$$
But, according to (59), for any $\epsilon > 0$ there exist $M = M(\epsilon)$ and $n_0 = n_0(\epsilon)$ such that
$$\sup_{n \ge n_0(\epsilon)} \mathsf{P}\big((b_{N_n}/d_n)^{1/\alpha} > M(\epsilon)\big) \le 2\epsilon. \qquad (61)$$
Choose $\epsilon < \gamma/2$, where γ is the number from (60). Then for all $n \in \mathcal{N}$ large enough, in accordance with (60), the inequality opposite to (61) must hold. The obtained contradiction, by the Prokhorov theorem, proves the tightness of the sequence $\{D_n^{-1} B_{N_n}\}_{n \ge 1}$ or, which in this case is the same, of the sequence $\{b_{N_n}/d_n\}_{n \ge 1}$.
Introduce the set $\mathcal{W}(Z)$ containing all nonnegative random variables U such that $\mathsf{P}(Z \in A) = \mathsf{E}\, S_{\alpha, U^{2/\alpha}\Sigma}(A)$ for any $A \in \mathcal{B}(\mathbb{R}^r)$. Let $\lambda(\cdot, \cdot)$ be any probability metric that metrizes weak convergence in the space of r-variate random vectors, or, which is the same in this context, in the space of distributions, say, the Lévy–Prokhorov metric. (If $X_1$ and $X_2$ are random variables with the distributions $F_1$ and $F_2$, respectively, then we identify $\lambda(X_1, X_2)$ and $\lambda(F_1, F_2)$.) Show that there exists a sequence of random variables $\{U_n\}_{n \ge 1}$, $U_n \in \mathcal{W}(Z)$, such that
$$\lambda\big(b_{N_n}/d_n,\ U_n\big) \to 0 \quad (n \to \infty). \qquad (62)$$
Denote
$$\beta_n = \inf\big\{\lambda\big(b_{N_n}/d_n,\ U\big) :\ U \in \mathcal{W}(Z)\big\}.$$
Prove that $\beta_n \to 0$ as $n \to \infty$. Assume the contrary. In that case $\beta_n \ge \delta$ for some $\delta > 0$ and all n from some subsequence $\mathcal{N}$ of natural numbers. Choose a subsequence $\mathcal{N}_1 \subseteq \mathcal{N}$ so that the sequence $\{b_{N_n}/d_n\}_{n \in \mathcal{N}_1}$ weakly converges to a random variable U (this is possible due to the tightness of the family $\{b_{N_n}/d_n\}_{n \ge 1}$ established above). But then $W_n(g) \Longrightarrow U^{1/\alpha} g$ as $n \to \infty$, $n \in \mathcal{N}_1$, for any $g \in \mathbb{R}^r$. Applying Theorem 7 to $n \in \mathcal{N}_1$ with condition (54) playing the role of condition (53), we make sure that $U \in \mathcal{W}(Z)$, since condition (55) provides the coincidence of the limits of all weakly convergent subsequences. So, we arrive at a contradiction to the assumption that $\beta_n \ge \delta$ for all $n \in \mathcal{N}_1$. Hence, $\beta_n \to 0$ as $n \to \infty$.
For any $n = 1, 2, \ldots$, choose a random variable $U_n$ from $\mathcal{W}(Z)$ satisfying the condition
$$\lambda\big(b_{N_n}/d_n,\ U_n\big) \le \beta_n + \frac{1}{n}.$$
This sequence obviously satisfies condition (62). Now consider the structure of the set $\mathcal{W}(Z)$. This set contains all the random variables defining the family of special mixtures of multivariate centered elliptically contoured stable laws considered in Lemma 1, according to which this family is identifiable. So, whatever a random element Z is, the set $\mathcal{W}(Z)$ contains at most one element. Therefore, condition (62) is actually equivalent to
$$b_{N_n}/d_n \Longrightarrow U \quad (n \to \infty),$$
that is, to condition (iii) of the theorem. The theorem is proved. □
Corollary 5.
Under the conditions of Theorem 7, non-randomly normalized random sequences with independent random indices $d_n^{-1/\alpha} S_{N_n}$ have the limit stable distribution $S_{\alpha, \Sigma'}$ with some positive definite matrix $\Sigma'$ if and only if there exists a number $c > 0$ such that
$$b_{N_n}/d_n \Longrightarrow c \quad (n \to \infty).$$
Moreover, in this case $\Sigma' = c^{2/\alpha} \Sigma$.
This statement immediately follows from Theorem 8 with the account of Lemma 1.
Now consider convergence of the distributions of random sums of random vectors to special scale-mixed multivariate elliptically contoured stable laws.
In Section 4 (see (20)) we made sure that all scale-mixed centered elliptically contoured stable distributions are representable as multivariate normal scale mixtures. Together with Theorem 8, this observation suggests at least two principally different limit schemes in which each of these distributions can appear as the limit law for random sums of independent random vectors. We will illustrate these two cases by the example of the multivariate generalized Linnik distribution.
As we have already mentioned, ‘ordinary’ Linnik distributions are geometrically stable. Geometrically stable distributions are the only possible limits for the distributions of geometric random sums of independent identically distributed random vectors. In this case the distributions of the summands belong to the domain of attraction of a multivariate strictly stable law with some characteristic exponent $\alpha \in (0, 2]$ and hence, for $0 < \alpha < 2$, the univariate marginals have infinite moments of orders greater than or equal to α. As concerns the case $\alpha = 2$, where the variances of the marginals are finite, within the framework of the scheme of geometric summation the only possible limit law is the multivariate Laplace distribution.
Correspondingly, as we will demonstrate below, the multivariate generalized Linnik distributions can be limiting for negative binomial sums of independent identically distributed random vectors. Negative binomial random sums turn out to be important and adequate models of characteristics of precipitation (total precipitation volume, etc.) during wet (rainy) periods in meteorology [60,61,62]. However, in this case the summands (daily rainfall volumes) also must have distributions from the domain of attraction of a strictly stable law with some characteristic exponent $\alpha \in (0, 2]$ and hence, for $\alpha \in (0, 2)$, infinite variances, which seems doubtful, since to have an infinite variance, a random variable must be allowed to take arbitrarily large values with positive probabilities. If $\alpha = 2$, then the only possible limit distribution for negative binomial random sums is the so-called variance gamma distribution, which is well known in financial mathematics [54].
However, when the (generalized) Linnik distributions are used as models of statistical regularities observed in real practice, and an additive structural model of the type of a (stopped) random walk is used for the observed process, the researcher cannot avoid thinking over the following question: which of the two combinations of conditions is encountered more often:
  • the distribution of the number of summands (the number of jumps of the random walk) is asymptotically gamma (say, negative binomial), but the distributions of the summands (jumps) have so heavy tails that, at least, their variances are infinite, or
  • the second moments (variances) of the summands (jumps) are finite, but the number of summands exposes an irregular behavior so that its very large values are possible?
Since, as a rule, when real processes are modeled, there are no serious reasons to reject the assumption that the variances of the jumps are finite, the second combination at least deserves a thorough analysis.
As demonstrated in the preceding sections, the scale-mixed multivariate elliptically contoured stable distributions (including the multivariate (generalized) Linnik laws), even with $\alpha < 2$, can be represented as multivariate normal scale mixtures. This means that they can be limit distributions in analogs of the central limit theorem for random sums of independent random vectors with finite covariance matrices. Such analogs with univariate ‘ordinary’ Linnik limit distributions were presented in [2] and extended to generalized Linnik distributions in [5]. In what follows we will present some examples of limit settings for random sums of independent random vectors with principally different tail behavior. In particular, it will be demonstrated that the scheme of negative binomial summation is far from being the only asymptotic setting (even for sums of independent random variables!) in which the multivariate generalized Linnik law appears as the limit distribution.
Remark 3.
Based on the results of [63], by an approach that slightly differs from the one used here in its starting point, it was demonstrated in [64] that if the random vectors $\{S_n\}_{n \ge 1}$ are formed as cumulative sums of independent random vectors,
$$S_n = X_1 + \cdots + X_n \qquad (63)$$
for $n \in \mathbb{N}$, where $X_1, X_2, \ldots$ are independent $\mathbb{R}^r$-valued random vectors, then the condition $N_n \to \infty$ in probability in the formulations of Theorem 8 and Corollary 4 can be omitted.
Throughout this section we assume that the random vectors S n have the form (63).
Let U be a positive random variable (see Section 5), $\alpha \in (0, 2]$, and let Σ be a positive definite matrix. In Section 8 the r-variate random vector $Y_{\alpha, \Sigma}$ with the multivariate U-scale-mixed elliptically contoured stable distribution was introduced as $Y_{\alpha, \Sigma} = U^{1/\alpha} \circ S(\alpha, \Sigma)$. In this section we will consider the conditions under which multivariate U-scale-mixed stable distributions can be limiting for sums of independent random vectors.
Consider a sequence of integer-valued positive random variables $\{N_n\}_{n \ge 1}$ such that for each $n \ge 1$ the random variable $N_n$ is independent of the sequence $\{S_k\}_{k \ge 1}$. First, let $\{b_n\}_{n \ge 1}$ be an infinitely increasing sequence of positive numbers such that convergence (54) takes place. Let $\{d_n\}_{n \ge 1}$ be an infinitely increasing sequence of positive numbers. The following statement presents necessary and sufficient conditions for the convergence
$$d_n^{-1/\alpha} S_{N_n} \Longrightarrow Y_{\alpha, \Sigma} \quad (n \to \infty). \qquad (64)$$
Theorem 9.
Under condition (54), convergence (64) takes place if and only if
$$b_{N_n}/d_n \Longrightarrow U \quad (n \to \infty).$$
Proof. 
This theorem is a direct consequence of Theorem 8 and the definition of Y α , Σ with the account of Remark 3. □
Corollary 6.
Assume that $\nu > 0$. Under condition (54), the convergence
$$d_n^{-1/\alpha} S_{N_n} \Longrightarrow L_{\alpha, \nu, \Sigma} \quad (n \to \infty)$$
takes place if and only if
$$b_{N_n}/d_n \Longrightarrow G_{\nu, 1} \quad (n \to \infty). \qquad (65)$$
Proof. 
To prove this statement it suffices to notice that the multivariate generalized Linnik distribution is a U-scale-mixed stable distribution with $U \stackrel{d}{=} G_{\nu, 1}$ (see representation (49)) and refer to Theorem 9 with the account of Remark 3.
Condition (65) holds, for example, if $b_n = d_n = n$, $n \in \mathbb{N}$, and the random variable $N_n$ has the negative binomial distribution with shape parameter $\nu > 0$, that is, $N_n = NB_{\nu, p_n}$,
$$\mathsf{P}(NB_{\nu, p_n} = k) = \frac{\Gamma(\nu + k - 1)}{(k - 1)!\, \Gamma(\nu)} \cdot p_n^{\nu} (1 - p_n)^{k-1}, \quad k = 1, 2, \ldots,$$
with $p_n = n^{-1}$ (see, e.g., [65,66]). In this case $\mathsf{E}\, NB_{\nu, p_n} = \nu(n - 1) + 1 \sim \nu n$ as $n \to \infty$. □
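The negative binomial scheme of Corollary 6 can be illustrated by a short simulation sketch (Python; not from the paper, helper names illustrative, the same stable-law normalization as in the sketch at the end of the previous section). To make condition (54) hold trivially, the summands are taken to be i.i.d. elliptically contoured strictly stable vectors themselves, so that $b_n = n$; any summand distribution from the corresponding domain of attraction would serve equally well.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_sided_stable(alpha, size, rng):
    # Kanter's formula for S(alpha, 1), 0 < alpha < 1, Laplace transform exp(-s**alpha) assumed
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

def elliptical_stable(alpha, Sigma, size, rng):
    # S(alpha, Sigma) as sqrt(2 * S(alpha/2, 1)) * X with X ~ N(0, Sigma), 0 < alpha < 2
    x = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size)
    return np.sqrt(2.0 * one_sided_stable(alpha / 2.0, size, rng))[:, None] * x

alpha, nu, n, reps = 1.5, 0.8, 200, 2_000
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])

# Negative binomial index with p_n = 1/n and support {1, 2, ...}; then N_n / n => G_{nu,1}
N = 1 + rng.negative_binomial(nu, 1.0 / n, reps)

# Random sums of i.i.d. strictly stable summands normalized by d_n = n:
# by Corollary 6 they are approximately L_{alpha, nu, Sigma} distributed
random_sums = np.array([
    elliptical_stable(alpha, Sigma, int(k), rng).sum(axis=0) for k in N
]) / n ** (1.0 / alpha)

# Direct draws from the limit law via the representation G_{nu,1}**(1/alpha) * S(alpha, Sigma)
direct = (rng.gamma(nu, 1.0, reps)[:, None] ** (1.0 / alpha)
          * elliptical_stable(alpha, Sigma, reps, rng))

# For large n the two samples should have close empirical distributions (compare, e.g., norms)
print(np.quantile(np.linalg.norm(random_sums, axis=1), [0.5, 0.9]))
print(np.quantile(np.linalg.norm(direct, axis=1), [0.5, 0.9]))
```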
Now consider the conditions providing the convergence in distribution of scalar normalized random sums of independent random vectors satisfying condition (54) with some $\alpha \in (0, 2]$ and Σ to a random vector $Y_{\beta, \Sigma}$ with the U-scale-mixed stable distribution $\mathsf{E}\, S_{\beta, U^{2/\beta}\Sigma}$ for some $\beta \in (0, \alpha)$. For convenience, let $\beta = \alpha\alpha'$ where $\alpha' \in (0, 1)$.
Recall that in Section 8, for $\alpha \in (0, 1]$ the positive random variable $Y_{\alpha, 1}$ with the univariate one-sided U-scale-mixed stable distribution was introduced as $Y_{\alpha, 1} \stackrel{d}{=} U^{1/\alpha} \circ S(\alpha, 1)$.
Theorem 10.
Let $\alpha' \in (0, 1]$. Under condition (54), the convergence
$$d_n^{-1/\alpha} S_{N_n} \Longrightarrow Y_{\alpha\alpha', \Sigma} \quad (n \to \infty)$$
takes place if and only if
$$b_{N_n}/d_n \Longrightarrow Y_{\alpha', 1} \quad (n \to \infty).$$
Proof. 
This statement directly follows from Theorems 5 and 7 with the account of Remark 3. □
Corollary 7.
Let $\alpha' \in (0, 1]$, $\nu > 0$. Under condition (54), the convergence
$$d_n^{-1/\alpha} S_{N_n} \Longrightarrow L_{\alpha\alpha', \nu, \Sigma} \quad (n \to \infty)$$
takes place if and only if
$$b_{N_n}/d_n \Longrightarrow M_{\alpha', \nu} \quad (n \to \infty).$$
Proof. 
This statement directly follows from Theorems 3 (see representation (50)) and 9 with the account of Remark 3. □
Let us now turn from the case of heavy tails to the ‘light-tails’ case, where $\alpha = 2$ in (54). In other words, assume that the properties of the summands $X_j$ provide the asymptotic normality of the sums $S_n$. More precisely, instead of (54), assume that
$$b_n^{-1/2} S_n \Longrightarrow X \quad (n \to \infty), \qquad (66)$$
where $\mathcal{L}(X) = N_{\Sigma}$.
The following results show that even under condition (66), heavy-tailed U-scale-mixed multivariate stable distributions can be limiting for random sums.
Theorem 11.
Under condition (66), convergence (64) takes place if and only if
$$b_{N_n}/d_n \Longrightarrow Y_{\alpha/2, 1} \quad (n \to \infty).$$
Proof. 
This theorem is a direct consequence of Theorem 8 and Corollary 4, according to which $Y_{\alpha, \Sigma} \stackrel{d}{=} \sqrt{2\, Y_{\alpha/2, 1}} \circ X$, with the account of Remark 3. □
Corollary 8.
Assume that $N_n \to \infty$ in probability as $n \to \infty$. Under condition (66), non-randomly normalized random sums $d_n^{-1/2} S_{N_n}$ have the limit stable distribution $S_{\alpha, \Sigma}$ if and only if
$$b_{N_n}/d_n \Longrightarrow 2\, S(\alpha/2, 1) \quad (n \to \infty).$$
Proof. 
This statement follows from Theorem 11 with the account of (13) and Remark 3. □
Corollary 9.
Assume that $N_n \to \infty$ in probability as $n \to \infty$ and $\nu > 0$. Under condition (66), the convergence
$$d_n^{-1/2} S_{N_n} \Longrightarrow L_{\alpha, \nu, \Sigma} \quad (n \to \infty)$$
takes place if and only if
$$b_{N_n}/d_n \Longrightarrow 2\, M_{\alpha/2, \nu} \quad (n \to \infty).$$
Proof. 
To prove this statement it suffices to notice that the multivariate generalized Linnik distribution is a multivariate normal scale mixture with the generalized Mittag–Leffler mixing distribution (see definition (48)) and refer to Theorem 11 with the account of Remark 3.
Another way to prove Corollary 9 is to deduce it from Corollary 7. □
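The mechanism behind Corollary 9 (and, more generally, behind the ‘light-tails’ scheme) can also be seen from a short heuristic computation with characteristic functions; this is only an illustration added here, not a substitute for the proof, and it assumes the normalizations $\mathsf{E}\, e^{i t^{\top} X} = e^{-\frac{1}{2} t^{\top}\Sigma t}$ for $\mathcal{L}(X) = N_{\Sigma}$ and $\mathsf{E}\, e^{-s S(\beta,1)} = e^{-s^{\beta}}$. Conditionally on a large value of $N_n$, relation (66) gives $b_{N_n}^{-1/2} S_{N_n} \approx X$, so that $d_n^{-1/2} S_{N_n} \approx (b_{N_n}/d_n)^{1/2}\, X$, and if $b_{N_n}/d_n \Longrightarrow 2 M_{\alpha/2, \nu}$, then
$$\mathsf{E}\exp\big\{i t^{\top} d_n^{-1/2} S_{N_n}\big\} \longrightarrow \mathsf{E}\exp\big\{-\tfrac{1}{2}\,(2 M_{\alpha/2,\nu})\, t^{\top}\Sigma t\big\} = \mathsf{E}\exp\big\{-(t^{\top}\Sigma t)\, M_{\alpha/2,\nu}\big\} = \big(1 + (t^{\top}\Sigma t)^{\alpha/2}\big)^{-\nu},$$
where the last equality uses $\mathsf{E}\, e^{-s M_{\alpha/2,\nu}} = \mathsf{E}\, e^{-s\, G_{\nu,1}^{2/\alpha} S(\alpha/2,1)} = \mathsf{E}\, e^{-s^{\alpha/2} G_{\nu,1}} = (1 + s^{\alpha/2})^{-\nu}$. The right-hand side is the characteristic function of the multivariate generalized Linnik law $L_{\alpha,\nu,\Sigma}$ (cf. [15]), so the heavy-tailed Linnik limit indeed arises from summands with finite covariance matrices.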
The product representations for the limit distributions in these theorems, proved in the preceding sections, make it possible to state the conditions for the convergence of random sums of random vectors to particular scale mixtures of multivariate stable laws in other, equivalent forms.

10. Conclusions

In this paper, multivariate probability distributions were considered that are representable as scale mixtures of multivariate stable distributions. Multivariate analogs of the Mittag–Leffler distribution were introduced. Some properties of these distributions were discussed. Attention was paid to the representations of the corresponding random vectors as products of independent random variables and vectors. In these products, relations were traced of the distributions of the involved terms with popular probability distributions. As examples of distributions of the class of scale mixtures of multivariate stable distributions, multivariate generalized Linnik distributions and multivariate generalized Mittag–Leffler distributions were considered in detail. Their relations with multivariate ‘ordinary’ Linnik distributions, multivariate normal, stable and Laplace laws as well as with univariate Mittag–Leffler and generalized Mittag–Leffler distributions were discussed. Limit theorems were proved presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. The property of scale-mixed multivariate elliptically contoured stable distributions to be both scale mixtures of a non-trivial multivariate stable distribution and a normal scale mixture was used to obtain necessary and sufficient conditions for the convergence of the distributions of random sums of random vectors with covariance matrices to the multivariate generalized Linnik distribution.
The key points of the paper are:
  • analogs of the multiplication theorem for stable laws were proved for scale-mixed multivariate stable distributions relating these laws with different parameters;
  • alternative but equivalent definitions were proposed for the generalized multivariate Linnik distributions based on their property of being scale-mixed multivariate stable distributions;
  • the multivariate analog of the (generalized) Mittag–Leffler distribution was introduced, and it was noticed that the multivariate (generalized) Mittag–Leffler distribution can be regarded as a special case of the multivariate (generalized) Linnik distribution;
  • new mixture representations were presented for the multivariate generalized Mittag–Leffler and Linnik distributions;
  • a general transfer theorem was proved establishing necessary and sufficient conditions for the convergence of the distributions of sequences of multivariate random vectors with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to multivariate elliptically contoured scale-mixed stable distributions;
  • the property of scale-mixed multivariate elliptically contoured stable distributions to be both scale mixtures of a non-trivial multivariate stable distribution and normal scale mixtures was used to obtain necessary and sufficient conditions for the convergence of the distributions of random sums of random vectors with finite covariance matrices to the multivariate elliptically contoured generalized Linnik distribution.

Author Contributions

Conceptualization, Y.K., V.K. and A.Z.; methodology, Y.K. and V.K.; validation, Y.K. and V.K.; investigation, Y.K., V.K. and A.Z.; supervision, V.K.; project administration, V.K. and A.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Russian Science Foundation, project 18-11-00155.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zolotarev, V.M. One-Dimensional Stable Distributions. In Translation of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1986; Volume 65. [Google Scholar]
  2. Korolev, V.Y.; Zeifman, A.I. Convergence of statistics constructed from samples with random sizes to the Linnik and Mittag–Leffler distributions and their generalizations. J. Korean Stat. Soc. 2017, 46, 161–181. [Google Scholar] [CrossRef]
  3. Korolev, V.Y.; Zeifman, A.I. A note on mixture representations for the Linnik and Mittag–Leffler distributions and their applications. J. Math. Sci. 2017, 218, 314–327. [Google Scholar] [CrossRef] [Green Version]
  4. Korolev, V.Y.; Gorshenin, A.K.; Zeifman, A.I. New mixture representations of the generalized Mittag–Leffler distributions and their applications. Inform. Appl. 2018, 12, 75–85. [Google Scholar]
  5. Korolev, V.Y.; Gorshenin, A.K.; Zeifman, A.I. On mixture representations for the generalized Linnik distribution and their applications in limit theorems. J. Math. Sci. 2020, 246, 503–518. [Google Scholar] [CrossRef] [Green Version]
  6. Anderson, D.N.; Arnold, B.C. Linnik distributions and processes. J. Appl. Prob. 1993, 30, 330–340. [Google Scholar] [CrossRef]
  7. Devroye, L. A note on Linnik’s distribution. Stat. Probab. Lett. 1990, 9, 305–306. [Google Scholar] [CrossRef]
  8. Kotz, S.; Ostrovskii, I.V.; Hayfavi, A. Analytic and asymptotic properties of Linnik’s probability densities, I. J. Math. Anal. Appl. 1995, 193, 353–371. [Google Scholar] [CrossRef] [Green Version]
  9. Kotz, S.; Ostrovskii, I.V.; Hayfavi, A. Analytic and asymptotic properties of Linnik’s probability densities, II. J. Math. Anal. Appl. 1995, 193, 497–521. [Google Scholar] [CrossRef] [Green Version]
  10. Kotz, S.; Ostrovskii, I.V. A mixture representation of the Linnik distribution. Stat. Probab. Lett. 1996, 26, 61–64. [Google Scholar] [CrossRef] [Green Version]
  11. Kotz, S.; Kozubowski, T.J.; Podgorski, K. The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance; Birkhauser: Boston, MA, USA, 2001. [Google Scholar]
  12. Kozubowski, T.J. Mixture representation of Linnik distribution revisited. Stat. Probab. Lett. 1998, 38, 157–160. [Google Scholar] [CrossRef]
  13. Kozubowski, T.J. Exponential mixture representation of geometric stable distributions. Ann. Inst. Stat. Math. 1999, 52, 231–238. [Google Scholar] [CrossRef]
  14. Lin, G.D. A note on the Linnik distributions. J. Math. Anal. Appl. 1998, 217, 701–706. [Google Scholar] [CrossRef] [Green Version]
  15. Lim, S.C.; Teo, L.P. Analytic and asymptotic properties of multivariate generalized Linnik’s probability densities. J. Fourier Anal. Appl. 2010, 16, 715–747. [Google Scholar] [CrossRef] [Green Version]
  16. Ostrovskii, I.V. Analytic and asymptotic properties of multivariate Linnik's distribution. Math. Phys. Anal. Geom. [Mat. Fiz. Anal. Geom.] 1995, 2, 436–455. [Google Scholar]
  17. Pakes, A.G. Mixture representations for symmetric generalized Linnik laws. Stat. Probab. Lett. 1998, 37, 213–221. [Google Scholar] [CrossRef]
  18. Pillai, R.N. Semi-α-Laplace distributions. Commun. Stat. Theory Methods 1985, 14, 991–1000. [Google Scholar] [CrossRef]
  19. Shiryaev, A.N. Foundations of Financial Mathematics. In Facts, Models; World Scientific: Singapore, 1998; Volume 1. [Google Scholar]
  20. Devroye, L. Random variate generation in one line of code. In Proceedings of the 1996 Winter Simulation Conference, Coronado, CA, USA, 8–11 December 1996; Charnes, J.M., Morrice, D.J., Brunner, D.T., Swain, J.J., Eds.; IEEE Press: Piscataway, NJ, USA, 1996; pp. 265–272. [Google Scholar]
  21. Bening, V.; Korolev, V. Generalized Poisson Models and Their Applications in Insurance and Finance; VSP: Utrecht, The Netherlands, 2002. [Google Scholar]
  22. Meerschaert, M.M.; Scheffler, H.-P. Limit theorems for continuous-time random walks with infinite mean waiting times. J. Appl. Probab. 2004, 41, 623–638. [Google Scholar] [CrossRef] [Green Version]
  23. Fama, E. The behavior of stock market prices. J. Bus. 1965, 38, 34–105. [Google Scholar] [CrossRef]
  24. Mandelbrot, B.B. The variation of certain speculative prices. J. Bus. 1963, 36, 394–419. [Google Scholar] [CrossRef]
  25. Mandelbrot, B.B. The variation of some other speculative prices. J. Bus. 1967, 40, 393–413. [Google Scholar] [CrossRef]
  26. McCulloch, J.H. Financial applications of stable distributions. In Handbook of Statistics; Maddala, G.S., Rao, C.R., Eds.; Elsevier Science: Amsterdam, The Netherlands, 1996; Volume 14, pp. 393–425. [Google Scholar]
  27. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes. In Stochastic Models with Infinite Variance; Chapman and Hall: New York, NY, USA, 1994. [Google Scholar]
  28. Korolev, V.Y.; Zeifman, A.I. From Asymptotic Normality to Heavy-Tailedness via Limit Theorems for Random Sums and Statistics with Random Sample Sizes. In Probability, Combinatorics and Control; Korolev, V., Kostogryzov, A., Eds.; IntechOpen: London, UK, 2019; pp. 1–23. [Google Scholar]
  29. Korolev, V. On the convergence of distributions of random sums of independent random variables to stable laws. Theory Probab. Appl. 1998, 42, 695–696. [Google Scholar] [CrossRef]
  30. Stacy, E.W. A generalization of the gamma distribution. Ann. Math. Stat. 1962, 33, 1187–1192. [Google Scholar] [CrossRef]
  31. Gleser, L.J. The gamma distribution as a mixture of exponential distributions. Am. Stat. 1989, 43, 115–117. [Google Scholar]
  32. Korolev, V.Y. Analogs of Gleser’s theorem for negative binomial and generalized gamma distributions and some their applications. Inform. Appl. 2017, 11, 2–17. [Google Scholar]
  33. Korolev, V.Y. Product representations for random variables with Weibull distributions and their applications. J. Math. Sci. 2016, 218, 298–313. [Google Scholar] [CrossRef]
  34. Feldheim, M.E. Étude de la Stabilité des Lois de Probabilité. Ph.D. Thesis, Faculté des Sciences de Paris, Paris, France, 1937. [Google Scholar]
  35. Lévy, P. Théorie de l'Addition des Variables Aléatoires, 2nd ed.; Gauthier-Villars: Paris, France, 1937. [Google Scholar]
  36. Nolan, J.P. Modeling financial data with stable distributions. In Handbook of Heavy Tailed Distributions in Finance; Rachev, S.T., Ed.; Elsevier: Boston, MA, USA, 2003; Chapter 3; pp. 105–130. [Google Scholar]
  37. Nolan, J.P. Multivariate stable densities and distribution functions: General and elliptical case. In Proceedings of the Deutsche Bundesbank’s 2005 Annual Autumn Conference, Eltville, Germany, 11 November 2005; pp. 1–20. [Google Scholar]
  38. Press, S.J. Multivariate stable distributions. J. Multivar. Anal. 1972, 2, 444–462. [Google Scholar] [CrossRef] [Green Version]
  39. Gorenflo, R.; Kilbas, A.A.; Mainardi, F.; Rogosin, S.V. Mittag–Leffler Functions, Related Topics and Applications; Springer: Berlin, Germany; New York, NY, USA, 2014. [Google Scholar]
  40. Klebanov, L.B.; Rachev, S.T. Sums of a random number of random variables and their approximations with ε-accompanying infinitely divisible laws. Serdica 1996, 22, 471–498. [Google Scholar]
  41. Gorenflo, R.; Mainardi, F. Continuous time random walk, Mittag–Leffler waiting time and fractional diffusion: Mathematical aspects. In Anomalous Transport: Foundations and Applications; Klages, R., Radons, G., Sokolov, I.M., Eds.; Wiley-VCH: Weinheim, Germany, 2008; Chapter 4; pp. 93–127. Available online: http://arxiv.org/abs/0705.0797 (accessed on 6 May 2007).
  42. Weron, K.; Kotulski, M. On the Cole-Cole relaxation function and related Mittag–Leffler distributions. Physica A 1996, 232, 180–188. [Google Scholar] [CrossRef]
  43. Jose, K.K.; Uma, P.; Lekshmi, V.S.; Haubold, H.J. Generalized Mittag–Leffler Distributions and Processes for Applications in Astrophysics and Time Series Modeling. Astrophys. Space Sci. Proc. 2010, 202559, 79–92. [Google Scholar]
  44. Mathai, A.M.; Haubold, H.J. Matrix-variate statistical distributions and fractional calculus. Fract. Calc. Appl. Anal. Int. J. Theory Appl. 2011, 14, 138–155. [Google Scholar] [CrossRef] [Green Version]
  45. Mathai, A.M. Some properties of Mittag–Leffler functions and matrix-variate analogues: A statistical perspective. Fract. Calc. Appl. Anal. Int. J. Theory Appl. 2010, 13, 113–132. [Google Scholar]
  46. Linnik, Y.V. Linear forms and statistical criteria, I, II. Sel. Transl. Math. Stat. Probab. 1963, 3, 41–90, (Original Paper Appeared in Ukrainskii Matematicheskii Zhournal 1953, 5, 207–243, 247–290). [Google Scholar]
  47. Bunge, J. Compositions semigroups and random stability. Ann. Probab. 1996, 24, 1476–1489. [Google Scholar] [CrossRef]
  48. Laha, R.G. On a class of unimodal distributions. Proc. Am. Math. Soc. 1961, 12, 181–184. [Google Scholar] [CrossRef]
  49. Anderson, D.N. A multivariate Linnik distribution. Stat. Probab. Lett. 1992, 14, 333–336. [Google Scholar] [CrossRef]
  50. Kozubowski, T.J.; Rachev, S.T. Multivariate geometric stable laws. J. Comput. Anal. Appl. 1999, 1, 349–385. [Google Scholar]
  51. Baringhaus, L.; Grubel, R. On a class of characterization problems for random convex combinations. Ann. Inst. Statist. Math. 1997, 49, 555–567. [Google Scholar] [CrossRef]
  52. Jayakumar, K.; Kalyanaraman, K.; Pillai, R.N. α-Laplace processes. Math. Comput. Model. 1995, 22, 109–116. [Google Scholar] [CrossRef]
  53. Mittnik, S.; Rachev, S.T. Modeling asset returns with alternative stable distributions. Econom. Rev. 1993, 12, 261–330. [Google Scholar] [CrossRef]
  54. Gnedenko, B.V.; Korolev, V.Y. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
  55. Kozubowski, T.J.; Podgórski, K.; Rychlik, I. Multivariate generalized Laplace distribution and related random fields. J. Multivar. Anal. 2013, 113, 59–72. [Google Scholar] [CrossRef] [Green Version]
  56. Pakes, A.G. A characterization of gamma mixtures of stable laws motivated by limit theorems. Stat. Neerl. 1992, 46, 209–218. [Google Scholar] [CrossRef]
  57. Korolev, V.Y.; Zeifman, A.I.; Korchagin, A.Y. Asymmetric Linnik distributions as limit laws for random sums of independent random variables with finite variances. Inform. Primen. 2016, 10, 21–33. [Google Scholar]
  58. Korolev, V.Y.; Kossova, E.V. On limit distributions of randomly indexed multidimensional random sequences with an operator normalization. J. Math. Sci. 1992, 72, 2915–2929. [Google Scholar] [CrossRef]
  59. Korolev, V.Y.; Kossova, E.V. Convergence of multidimensional random sequences with independent random indices. J. Math. Sci. 1995, 76, 2259–2268. [Google Scholar] [CrossRef]
  60. Korolev, V.Y.; Gorshenin, A.K.; Gulev, S.K.; Belyaev, K.P.; Grusho, A.A. Statistical Analysis of Precipitation Events. AIP Conf. Proc. 2017, 1863, 090011. [Google Scholar]
  61. Korolev, V.Y.; Gorshenin, A.K. The probability distribution of extreme precipitation. Dokl. Earth Sci. 2017, 477, 1461–1466. [Google Scholar] [CrossRef]
  62. Korolev, V.Y.; Gorshenin, A.K. Probability models and statistical tests for extreme precipitation based on generalized negative binomial distributions. Mathematics 2020, 8, 604. [Google Scholar] [CrossRef] [Green Version]
  63. Korolev, V.; Zeifman, A. On normal variance–mean mixtures as limit laws for statistics with random sample sizes. J. Stat. Plan. Inference 2016, 169, 34–42. [Google Scholar] [CrossRef]
  64. Korchagin, A.Y. On convergence of random sums of independent random vectors to multivariate generalized variance-gamma distributions. Syst. Means. Inform. 2015, 25, 127–141. [Google Scholar]
  65. Bening, V.E.; Korolev, V.Y. On an application of the Student distribution in the theory of probability and mathematical statistics. Theory Probab. Appl. 2005, 49, 377–391. [Google Scholar] [CrossRef]
  66. Korolev, V.Y. On the relationship between the generalized Student t-distribution and the variance gamma distribution in statistical analysis of random-size samples. Dokl. Math. 2012, 86, 566–570. [Google Scholar] [CrossRef]
