Next Article in Journal
Fast Algorithms for Basic Supply Chain Scheduling Problems
Next Article in Special Issue
A Random Walk Model for Spatial Galaxy Distribution
Previous Article in Journal
Optimization Method for Guillotine Packing of Rectangular Items within an Irregular and Defective Slate
Previous Article in Special Issue
On the Accuracy of the Exponential Approximation to Random Sums of Alternating Random Variables
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Review

Some Properties of Univariate and Multivariate Exponential Power Distributions and Related Topics

by
Victor Korolev
1,2,3
1
Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow, Russia
2
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
3
Moscow Center for Fundamental and Applied Mathematics, 119991 Moscow, Russia
Mathematics 2020, 8(11), 1918; https://doi.org/10.3390/math8111918
Submission received: 2 October 2020 / Revised: 19 October 2020 / Accepted: 26 October 2020 / Published: 1 November 2020
(This article belongs to the Special Issue Analytical Methods and Convergence in Probability with Applications)

Abstract

:
In the paper, a survey of the main results concerning univariate and multivariate exponential power (EP) distributions is given, with main attention paid to mixture representations of these laws. The properties of mixing distributions are considered and some asymptotic results based on mixture representations for EP and related distributions are proved. Unlike the conventional analytical approach, here the presentation follows the lines of a kind of arithmetical approach in the space of random variables or vectors. Here the operation of scale mixing in the space of distributions is replaced with the operation of multiplication in the space of random vectors/variables under the assumption that the multipliers are independent. By doing so, the reasoning becomes much simpler, the proofs become shorter and some general features of the distributions under consideration become more vivid. The first part of the paper concerns the univariate case. Some known results are discussed and simple alternative proofs for some of them are presented as well as several new results concerning both EP distributions and some related topics including an extension of Gleser’s theorem on representability of the gamma distribution as a mixture of exponential laws and limit theorems on convergence of the distributions of maximum and minimum random sums to one-sided EP distributions and convergence of the distributions of extreme order statistics in samples with random sizes to the one-sided EP and gamma distributions. The results obtained here open the way to deal with natural multivariate analogs of EP distributions. In the second part of the paper, we discuss the conventionally defined multivariate EP distributions and introduce the notion of projective EP (PEP) distributions. The properties of multivariate EP and PEP distributions are considered as well as limit theorems establishing the conditions for the convergence of multivariate statistics constructed from samples with random sizes (including random sums of random vectors) to multivariate elliptically contoured EP and projective EP laws. The results obtained here give additional theoretical grounds for the applicability of EP and PEP distributions as asymptotic approximations for the statistical regularities observed in data in many fields.

1. Introduction

Let α > 0 . The symmetric exponential power (EP) distribution is an absolutely continuous distribution defined by its Lebesgue probability density
p α ( x ) = α 2 Γ ( 1 α ) · e | x | α , < x < .
To make notation and calculation simpler, hereinafter we will use a single parameter α in representation (1) because this parameter determines the shape of distribution (1). If α = 1 , then relation (1) defines the classical Laplace distribution with zero expectation and variance 2. If α = 2 , then relation (1) defines the normal (Gaussian) distribution with zero expectation and variance 1 2 . Any random variable (r.v.) with probability density p α ( x ) will be denoted Q α .
The distribution (1) was introduced and studied by M. T. Subbotin in 1923 [1]. In that paper distribution (1) was called generalized Laplace distribution. For distribution (1), several other different terms are used. For example, this distribution is called exponential power distribution [2,3], power exponential distribution [4,5,6], generalized error distribution [7,8,9], generalized exponential distribution [10], generalized normal [11] and generalized Gaussian [12,13,14]. Distribution (1) is widely used in Bayesian analysis and various applied problems from signal and image processing to astronomy and insurance as more general alternatives to the normal law. The paper [14] contains a survey of applications of univariate EP distributions. Particular fields of application of multivariate EP models are enlisted in [13,15]. Concerning the methods of statistical estimation of the parameters of these distributions, see [13] and the references therein.
In the present paper we focus on mixture representations for EP and related distributions. In [16] it was proved that for 0 < α 2 , distributions of type (1) can be represented as scale mixtures of normal laws. Ten years later this result was re-proved in [17] with no reference to [16]. In the present paper, this result is generalized. We also consider and discuss some alternative uniform mixture representations for univariate and multivariate EP distributions and obtain some unexpected representations for the exponential and normal laws. Mixture representations of EP and related distributions are of great interest due to the following reasons.
In the book [18], a principle was implicitly formulated that a formal model of a probability distribution can be treated as reliable or trustworthy in applied probability only if it is an asymptotic approximation or a limit distribution in a more or less simple limit setting. This principle can be interrelated with the universal principle stating that in closed systems the uncertainty does not decrease, as it was done in the book [19]. It is a convention to measure the uncertainty of a probability distribution by entropy. It has already been mentioned that with 0 < α 2 the EP distribution can be represented as a scale mixture of normal laws. As is known, the normal distribution has the maximum (differential) entropy among all distributions with finite second moment and supported by the whole real axis. In the book [19], it was emphasized that in probability theory the principle of the non-decrease of entropy manifests itself in the form of limit theorems for sums of independent r.v.s. Therefore, if the system under consideration was information-isolated from the environment, then the observed statistical distributions of its characteristics could be regarded as very close to the normal law which possesses maximum possible entropy. However, by definition, a mathematical model cannot take into account all factors which influence the evolution of the system under consideration. Therefore, the parameters of this normal law vary depending on the evolution of the “environmental” factors. In other words, these parameters should be treated as random depending on the information interchange between the system and environment. Therefore, mixtures of normal laws are reasonable mathematical models of statistical regularities of the behavior of the observed characteristics of systems in many situations. Therefore, the EP distribution (1), being a normal mixture, is of serious analytical and practical interest.
Probably, the simplicity of representation (1) has been the main (at least, substantial) reason for using the EP distributions in many applied problems. The first attempt to provide “asymptotic” reasons of possible adequacy of this model was made in [20]. In this paper we prove more general results than those presented in [20] and demonstrate that the (multivariate) EP distribution can be asymptotic in simple limit theorems for those statistics constructed from samples with random sizes, that are asymptotically normal, if the sample size is non-random, in particular, in the scheme of random summation.
The EP distributions (at least with 0 < α 2 ) turn out to be closely related with stable distributions. The book [21] by V. M. Zolotarev became a milestone on the way of development of the theory of stable distributions. The representation of an EP distribution with 0 < α 2 as a normal scale mixture can be easily proved using the famous ‘multiplication’ Theorem 3.3.1 in [21]. Moreover, in these representations the mixing distributions are defined via stable densities. In the present paper, we show that these mixing laws that play an auxiliary role in the theory of EP distributions, can play quite a separate role being limit laws for the random sample size providing that the extreme order statistics follow the gamma distribution.
This paper can be regarded as a complement to the recent publication [14]. We give a survey of main results concerning univariate and multivariate EP distributions, consider the properties of mixing distributions appearing in the generalizations mentioned above and prove some asymptotic results based on mixture representations for EP and related distributions. Unlike the conventional analytical approach used in [14], here the presentation follows the lines of a kind of arithmetical approach in the space of random variables or vectors. Here the operation of scale mixing in the space of distributions is replaced with the operation of multiplication in the space of random vectors/variables under the assumption that the multipliers are independent. By doing so, the reasoning becomes much simpler, the proofs become shorter and some general features of the distributions under consideration become more vivid. Section 2 contains some preliminaries. Section 3 concerns the univariate case. We discuss some known results mentioned in [14] and present simple alternative proofs of some of them as well as several new results concerning both EP distributions and some related topics including an extension of Gleser’s theorem on representability of the gamma distribution as a mixture of exponential laws and limit theorems on convergence of the distributions of maximum and minimum random sums to one-sided EP distributions and convergence of the distributions of extreme order statistics in samples with random sizes to the one-sided EP and gamma distributions. The results obtained here open the way to deal with natural multivariate analogs of EP distributions. In Section 4, we discuss the conventionally defined multivariate EP distributions and introduce the notion of projective EP (PEP) distributions. The properties of multivariate EP and PEP distributions are considered as well as limit theorems establishing the conditions for the convergence of multivariate statistics constructed from samples with random sizes (including random sums of random vectors) to multivariate elliptically contoured EP and projective EP laws. The results obtained here give additional theoretical grounds for the applicability of EP and PEP distributions as asymptotic approximations for the statistical regularities observed in data in many fields.

2. Mathematical Preliminaries

The symbol = d will stand for the coincidence of distributions. The symbol □ marks the end of the proof. The indicator function of a set A will be denoted I A ( z ) : if z A , then I A ( z ) = 1 , otherwise I A ( z ) = 0 . The symbol ∘ denotes product of independent random elements.
All the r.v.s and random vectors will be assumed to be defined on one and the same probability space ( Ω , A , P ) . The symbols L ( Y ) and L ( Y ) will denote the distribution of an r.v. Y and an r-variate random vector Y with respect to the measure P , respectively.
An r.v. with the standard exponential distribution will be denoted W 1 :
P ( W 1 < x ) = 1 e x I [ 0 , ) ( x ) .
A gamma-distributed r.v. with shape parameter r > 0 and scale parameter λ > 0 will be denoted G r , λ ,
P ( G r , λ < x ) = 0 x g ( z ; r , λ ) d z , with g ( x ; r , λ ) = λ r Γ ( r ) x r 1 e λ x I [ 0 , ) ( x ) ,
where Γ ( r ) is Euler’s gamma-function,
Γ ( r ) = 0 x r 1 e x d x , r > 0 .
In this notation, obviously, G 1 , 1 is an r.v. with the standard exponential distribution: G 1 , 1 = W 1 .
It is easy to make sure that G 1 / α , 1 = d | Q α | α .
Let γ > 0 . The distribution of the r.v. W γ :
P W γ < x = 1 e x γ I [ 0 , ) ( x ) ,
is called the Weibull distribution with shape parameter γ . It is easy to see that W 1 1 / γ = d W γ . Moreover, if γ > 0 and γ > 0 , then P ( W γ 1 / γ x ) = P ( W γ x γ ) = e x γ γ = P ( W γ γ x ) , x 0 , that is, for any γ > 0 and γ > 0
W γ γ = d W γ 1 / γ .
The standard normal distribution function (d.f.) and its density will be denoted Φ ( x ) and φ ( x ) ,
φ ( x ) = 1 2 π e x 2 / 2 , Φ ( x ) = x φ ( z ) d z ,
respectively. An r.v. with the standard normal distribution will be denoted X.
By g α , θ ( x ) and G α , θ ( x ) we will respectively denote the probability density and the d.f. of the strictly stable law with characteristic exponent α and symmetry parameter θ corresponding to the characteristic function
g α , θ ( t ) = exp | t | α exp i π θ α 2 sign t , t R ,
with 0 < α 2 , | θ | θ α = min { 1 , 2 α 1 } (see, e.g., [21]). An r.v. with characteristic function (2) will be denoted S α , θ . To symmetric strictly stable distributions there correspond the value θ = 0 and the ch.f. f ( t ; α , 0 ) = e | t | α , t R . It is easy to see that S 2 , 0 = d 2 X .
The values θ = 1 and 0 < α 1 correspond to one-sided strictly stable distributions concentrated on the nonnegative halfline. The couples α = 1 , θ = ± 1 correspond to the distributions degenerate in ± 1 , respectively. All other stable distributions are absolutely continuous. Stable densities cannot explicitly be represented via elementary functions except for four cases: the normal distribution ( α = 2 , θ = 0 ), the Cauchy distribution ( α = 1 , θ = 0 ), the Lévy distribution ( α = 1 2 , θ = 1 ) and the distribution symmetric to the Lévy law ( α = 1 2 , θ = 1 ). Stable densities can be expressed in terms of the Fox functions (generalized Meijer G-functions), see [22,23].
According to the <<multiplication theorem>> (see, e.g., Theorem 3.3.1 in [21]) for any admissible pair of parameters ( α , θ ) and any α ( 0 , 1 ] , the product representation
S α α , θ = d S α , 1 1 / α S α , θ
holds, in which the factors on the right-hand side are independent. From (3), it follows that for any α ( 0 , 2 ]
S α , 0 = d 2 S α / 2 , 1 X ,
that is, any symmetric strictly stable distribution can be represented as a normal scale mixture.
As is well known, if 0 < α < 2 , then E | S α , θ | β < for any β ( 0 , α ) , while the moments of the r.v. S α , θ of orders β α do not exist (see, e.g., [21]). Although the densities of stable distributions cannot be explicitly expressed in terms of elementary functions, it can be shown [24] that
E | S α , 0 | β = 2 β π · Γ β + 1 2 Γ α β α Γ 2 β β
for 0 < β < α < 2 and
E S α , 1 β = Γ 1 β α Γ ( 1 β )
for 0 < β < α 1 .

3. Univariate Case

3.1. Higher-Order EP Scale Mixture Representations and Related Topics

Proposition 1.
Let α ( 0 , 2 ] , α ( 0 , 1 ] . Then
Q α α = d Q α U α , α 1 / α ,
where U α , α is an r.v. such that: if α = 1 , then U α , α = 1 for any α ( 0 , 2 ] and if 0 < α < 1 , then U α , α is absolutely continuous with probability density
u α , α ( x ) = α Γ ( 1 α ) Γ ( 1 α α ) · g α , 1 ( x ) x 1 / α · I ( 0 , ) ( x ) .
Proof. 
From (2), it follows that the symmetric ( θ = 0 ) strictly stable distribution has the characteristic function
g α , 0 ( t ) = e | t | α , t R .
Rewrite (3) with θ = 0 in terms of characteristic functions:
e | t | α α = 0 e | t | α z g α , 1 ( z ) d z .
Then, changing the notation t x , by formal transformations of Equality (8), we obtain
p α α ( x ) = α α 2 Γ ( 1 α α ) e | x | α α = α α 2 Γ ( 1 α α ) · 2 Γ ( 1 α ) α 0 α z 1 / α 2 Γ ( 1 α ) exp { | x | α z } g α , 1 ( z ) z 1 / α d z =
= 0 α z 1 / α 2 Γ ( 1 α ) exp { | x | α z } · α Γ ( 1 α ) Γ ( 1 α α ) g α , 1 ( z ) z 1 / α d z = 0 z 1 / α p α ( x z 1 / α ) u α , α ( z ) d z .
It can be easily verified that u α , α ( z ) is the probability density of a nonnegative r.v. Indeed, since p α ( z ) is a probability density, for any z > 0 we have
z 1 / α p α ( x z 1 / α ) d x = 1 .
Therefore, it follows from (9) that
1 = p α ( x ) d x = 0 z 1 / α p α ( x z 1 / α ) u α , α ( z ) d z d x =
= 0 u α , α ( z ) z 1 / α p α ( x z 1 / α ) d x d z = 0 u α , α ( z ) d z .
The proposition is thus proved. □
Let 0 < β α 2 . Then the assertion of Proposition 1 can be rewritten as
Q β = d Q α U α , β / α 1 / α .
It is easily seen that Q 2 = d 1 2 X . Setting α = 2 , from (10) we obtain
Corollary 1. 
Any symmetric EP distribution with α ( 0 , 2 ] is a scale mixture of normal laws [16]:
Q α = d 1 2 U 2 , α / 2 1 X .
Now let α = 1 . The r.v. having the Laplace distribution with variance 2 will be denoted Λ . As it has already been noted, Q 1 = d Λ . It is well known that
Λ = d 2 W 1 X .
On the other hand, from Corollary 1 it follows that
Λ = d Q 1 = d 1 2 U 1 , 1 / 2 1 X .
Therefore, by virtue of identifiability of scale mixtures of normal laws (see [25] and details below), having compared (11) and (12), we obtain
U 1 , 1 / 2 = d 1 4 W 1 1 ,
that is, the r.v. U 1 , 1 / 2 1 has the exponential distribution with parameter 1 4 whereas the r.v. U 1 , 1 / 2 has the inverse exponential (Fréchet) distribution, P ( U 1 , 1 / 2 < x ) = exp { 1 4 x } , x 0 .
Corollary 2.
Any symmetric EP distribution with α ( 0 , 1 ] is a scale mixture of Laplace laws:
Q α = d U 1 , α 1 Λ .
For α ( 0 , 1 ] , from Corollary 2 we obtain one more representation of the EP distribution as a scale mixture of normal laws:
Q α = d U 1 , α 1 2 W 1 X .
Corollary 3.
If α ( 0 , 1 ] , then L ( Q α ) is infinitely divisible.
Proof. 
By virtue of identifiability of scale mixtures of normal laws, from (13) and Corollary 1 we obtain that if α ( 0 , 1 ] , then the distribution of the r.v. U 2 , α / 2 1 is mixed exponential:
U 2 , α / 2 1 = d 4 U 1 , α 2 W 1 .
Hence, in accordance with the result of [26] which states that the product of two independent non-negative r.v.s is infinitely divisible, provided one of the two is exponentially distributed, from (14) it follows that, with α ( 0 , 1 ] , the distribution of U 2 , α / 2 1 is infinitely divisible. It remains to use Corollary 1 and a well-known result that a normal scale mixture is infinitely divisible, if the mixing distribution is infinitely divisible (see, e.g., [27], Chapter XVII, Section 3). □
The interval ( 0 , 1 ] does not cover all values of α providing the infinite divisibility of L ( Q α ) . Another obvious value of α for which L ( Q α ) is infinitely divisible is α = 2 : the distribution of Q 2 is normal and hence, infinitely divisible as well. Moreover, as is shown in [14], with values of α ( 0 , 1 ] { 2 } , the EP distributions are not infinitely divisible.
From Proposition 1, as a by-product, we can obtain simple expressions for the moments of negative orders of one-sided strictly stable distributions.
Corollary 4.
If 1 2 δ < and 0 < α < 1 , then
E S α , 1 δ = Γ ( δ α ) α Γ ( δ ) .
Proof. 
As it was made sure in the proof of Proposition 1, the function u 1 / δ , α ( x ) is a probability density, that is,
0 u 1 / δ , α ( x ) d x = α Γ ( δ ) Γ ( δ α ) 0 x δ g α , 1 ( x ) d x = 1 .
Therefore,
E S α , 1 δ = 0 x δ g α , 1 ( x ) d x = Γ ( δ α ) α Γ ( δ ) .
Now consider some properties of the mixing r.v. U α , α in (7). First, we present some inequalities for the tail of the distribution of U α , α .
Proposition 2.
( i ) For any α ( 0 , 2 ) and α ( 0 , 1 ) , we have
P ( U α , α > x ) = O ( x ( α + 1 / α ) )
as x .
( i i ) Let 0 < δ < α 1 , α ( 0 , 2 ] . Then for any x > 0
P ( U α , α > x ) α Γ ( 1 α ) Γ ( 1 δ α ) Γ ( 1 α α ) Γ ( 1 δ ) x δ + 1 / α .
( i i i ) For any 0 < β < α < 2 , α ( 0 , 1 ) and x > 0 , we have
P ( U α , α > x ) Γ ( 1 α ) Γ ( 1 α β ) Γ ( 1 α α ) Γ ( 1 β ) · x α β α β · P ( U β , α > x ) .
Proof. 
( i ) With the account of the well-known relation
lim x x α 1 G α , 1 ( x ) = ( some ) c ( 0 , )
(e.g., see [18], Chapter 7, Section 36) we conclude that for any x > 0 , there exists a c ( 0 , ) such that
x 1 / α P ( U α , α > x ) = x 1 / α x g α , 1 ( u ) u 1 / α d u 1 G α , 1 ( x ) c x α .
Therefore, P ( U α , α > x ) = O ( x ( α + 1 / α ) ) as x .
( i i ) With the account of (6), we have
P ( U α , α > x ) = x u α , α ( y ) d y = α Γ ( 1 α ) Γ ( 1 α α ) x y δ g α , 1 ( y ) y δ + 1 / α d y
α Γ ( 1 α ) Γ ( 1 α α ) x δ + 1 / α 0 y δ g α , 1 ( y ) d y = α Γ ( 1 α ) Γ ( 1 δ α ) Γ ( 1 α α ) Γ ( 1 δ ) x δ + 1 / α .
( i i i ) We have
P ( U α , α > x ) = α Γ ( 1 α ) Γ ( 1 α α ) x g α , 1 ( u ) u 1 / α d u = α Γ ( 1 α ) Γ ( 1 α α ) x u α β α β g α , 1 ( u ) u 1 / β d u
x α β α β · α Γ ( 1 α ) Γ ( 1 α α ) x g α , 1 ( u ) u 1 / β d u = x α β α β · Γ ( 1 α ) Γ ( 1 α β ) Γ ( 1 α α ) Γ ( 1 β ) P ( U β , α > x ) .
Proposition 3.
( i ) Let α ( 0 , 2 ] , α ( 0 , 1 ] . The moments of the r.v. U α , α of orders δ 1 α + α are infinite, whereas for δ < 1 α + α , we have
E U α , α δ = α Γ ( 1 α ) Γ ( α ( α δ ) + 1 α α ) Γ ( 1 α α ) Γ ( 1 δ + 1 α ) .
( i i ) Let δ 1 α . Then,
E U α , α δ = Γ ( 1 α ) Γ ( δ α + 1 α α ) Γ ( 1 α α ) Γ ( δ α + 1 α ) .
Proof. 
( i ) To prove (15), notice that, by the definition of u α , α ( x ) ,
E U α , α δ = 0 x δ u α , α ( x ) d x = α Γ ( 1 α ) Γ ( 1 α α ) 0 x δ 1 / α g α , 1 ( x ) d x = α Γ ( 1 α ) Γ ( 1 α α ) · E S α , 1 δ 1 / α
and use (6).
( i i ) To prove (16), note that for arbitrary β ( 0 , 2 ] and γ β 1
E | Q β | γ = β Γ ( 1 β ) 0 e x β x γ d x = 1 Γ ( 1 β ) 0 e x x ( γ + 1 ) / β 1 d x = Γ ( γ + 1 β ) Γ ( 1 β ) .
Then (7) implies
Γ ( γ + 1 α α ) Γ ( 1 α α ) = Γ ( γ + 1 α ) Γ ( 1 α ) · E U α , α γ / α
Letting δ = γ / α , from (17) we obtain (16). The proposition is proved. □
Now consider the property of identifiability of scale mixtures of EP distributions. Recall the definition of identifiability of scale mixtures. Let Q be an r.v. with the d.f. F Q ( x ) , V 1 and V 2 be two nonnegative r.v.s. The family of scale mixtures of F Q is said to be identifiable, if the equality Q V 1 = d Q V 2 implies V 1 = d V 2 .
Lemma 1.
Let F + ( x ) be a d.f. such that F + ( 0 ) = 0 . The family of scale mixtures of F + is identifiable, if the Fourier–Stieltjes transform of the d.f. F ^ + ( x ) = F + ( e x ) is not identically zero in some nondegenerate real interval [25].
Proposition 4.
For any fixed α ( 0 , 2 ] , the family of scale mixtures of EP distributions ( 1 ) is identifiable; that is, if V 1 and V 2 are two nonnegative r.v.s, then the equality Q α V 1 = d Q α V 2 implies V 1 = d V 2 .
Proof. 
First assume that | Q α | V 1 = d | Q α | V 2 and prove that V 1 = d V 2 . For this purpose use Lemma 1. Denote F α + ( x ) = P ( | Q α | < x ) , F ^ α + ( x ) = F α + ( e x ) , x 0 . We obviously have
α + ( x ) d d x F α + ( x ) = α Γ ( 1 α ) · e x α , x 0 .
Therefore, by the chain differentiation rule we have
d d x F ^ α + ( x ) = d d x F α + ( e x ) = d d z F α + ( z ) | z = e x · d d x e x = α Γ ( 1 α ) · e x e e α x .
Hence, the Fourier–Stieltjes transform ψ ^ α + ( t ) of the d.f. F ^ α + ( x ) is
ψ ^ α + ( t ) = 0 e i t x d F ^ α + ( x ) = α Γ ( 1 α ) 0 e i t x e x e e α x d x , t R .
Multiplying the integrand in the last integral by 1 = e α x e α x and changing the variables e α x y so that d y = α e α x d x and e x = y 1 / α , we obtain
ψ ^ α + ( t ) = α Γ ( 1 α ) 0 e i t x e x e e α x e α x e α x d x = 1 Γ ( 1 α ) 0 y ( i t + 1 ) / α 1 e y d y = Γ ( i t + 1 α ) Γ ( 1 α ) , t R .
The reference to Lemma 1 proves that V 1 = d V 2 . Now assume that Q α V 1 = d Q α V 2 . Then, obviously, | Q α | V 1 = d | Q α | V 2 and the desired result follows from what has just been proved. □
Proposition 5.
Let 0 < γ β α 2 . Then,
U α , γ / α = d U α , β / α U β , γ / β α / β .
Proof. 
From Proposition 1 (see (10)) we have
Q γ = d Q β U β , γ / β 1 / β = d Q α U α , β / α 1 / α U β , γ / β 1 / β
and
Q γ = d Q α U α , γ / α 1 / α .
That is,
Q α U α , β / α 1 / α U β , γ / β 1 / β = d Q α U α , γ / α 1 / α .
Now the desired result follows from Proposition 4 which states that the family of scale mixtures of EP distributions ( 1 ) is identifiable; that is, if α ( 0 , 2 ] and Q α V 1 = d Q α V 2 for some r.v.s V 1 and V 2 , then V 1 = d V 2 . In the case under consideration V 1 = U α , β / α 1 / α U β , γ / β 1 / β , V 2 = U α , γ / α 1 / α . □
Proposition 5 relates the distributions of the r.v.s U α , α with different values of α but with the same values of α . As regards the relation between the distributions of the r.v.s U α , α with different values of α but with the same values of α , it can be easily seen that for any α ( 0 , 1 ] and α , β ( 0 , 2 ]
u β , α ( x ) = Γ ( 1 β ) Γ ( 1 α α ) Γ ( 1 α ) Γ ( 1 β α ) · x ( β α ) / ( α β ) u α , α ( x ) .
In other words, for any x > 0
P ( U β , α > x ) = E U α , α ( β α ) / ( α β ) · I ( x , ) ( U α , α ) .
Consider some properties of the one-sided EP distribution of the r.v. | Q α | . Obviously, the density α + ( x ) of | Q α | is given by (18), so that for δ > 1
E | Q α | δ = α Γ ( 1 α ) 0 x δ e x α d x = Γ ( δ + 1 α ) Γ ( 1 α ) .
Lemma 2.
A d.f. F ( x ) such that F ( 0 ) = 0 corresponds to a mixed exponential distribution, if and only if its complement 1 F ( x ) is completely monotone: F C and ( 1 ) n + 1 F ( n ) ( x ) 0 for all x > 0 .
Proof. 
This statement immediately follows from the Bernstein theorem [28]. □
Proposition 6.
The distribution of the r.v. | Q α | can be represented as mixed exponential if and only if α ( 0 , 1 ] . In that case the mixing density is u 1 , α ( x ) .
Proof. 
Let α ( 0 , 1 ] . As is known, the Laplace–Stieltjes transform ψ α ( s ) of the nonnegative strictly stable r.v. S α , 1 is
ψ α ( s ) = E e s S α , 1 = 0 e s x g α , 1 ( x ) d x = e s α , s 0 .
Hence, by formal transformation, we obtain
α + ( s ) = α Γ ( 1 α ) 0 x e s x · g α , 1 ( x ) x d x = 0 x e s x u 1 , α ( x ) d x ,
where the function u 1 , α ( x ) was introduced in Proposition 1 and proved to be a probability density function. Relation (20) means that if α ( 0 , 1 ] , then the distribution of | Q α | is mixed exponential.
Now, let α > 1 . We have
d 2 d x 2 F α + ( x ) = d d x α + ( x ) = α 2 Γ ( 1 α ) x α 1 e x α
and
d 3 d x 3 F α + ( x ) = d 2 d x 2 α + ( x ) = α 2 Γ ( 1 α ) d d x x α 1 e x α = α 2 Γ ( 1 α ) · x α 2 e x α α ( x α 1 ) + 1 .
It can be easily seen that for x x α ( 1 1 / α ) 1 / α we have d 3 d x 3 F α + ( x ) 0 while for x x α we have d 3 d x 3 F α + ( x ) 0 with strict inequalities for x x α . Hence, by Lemma 2 the distribution of | Q α | is not mixed exponential. The proposition is proved. □
In terms of r.v.s the statement of Proposition 6 can be formulated as
| Q α | = d W 1 U 1 , α 1
provided α ( 0 , 1 ] (also see Corollary 1).
Corollary 5.
Let α ( 0 , 1 ] . Then the d.f. F α + ( x ) is infinitely divisible.
Proof. 
This statement immediately follows from (21) and the result of [26] which states that the product of two independent non-negative r.v.s is infinitely divisible, provided one of the two is exponentially distributed. □

3.2. Convergence of the Distributions of Maximum and Minimum Random Sums to One-Sided EP Laws

From Corollary 1 and (13), we obtain
Corollary 6.
F α + ( x ) is a scale mixture of folded normal distributions: if α ( 0 , 2 ] , then
| Q α | = d 1 2 U 2 , α / 2 1 | X | ;
moreover, if α ( 0 , 1 ] , then | Q α | = d 2 W 1 U 1 , α 1 | X | .
In this section we demonstrate that the one-sided EP distribution can be limiting for maximum sums of a random number of independent r.v.s (maximum random sums), minimum random sums and absolute values of random sums. Convergence in distribution will be denoted by the symbol ⟹.
Consider independent not necessarily identically distributed r.v.s X 1 , X 2 , with E X i = 0 and 0 < σ i 2 = D X i < , i N . For n N denote S n = X 1 + + X n , S ¯ n = max 1 i n S i , S ̲ n = min 1 i n S i , B n 2 = σ 1 2 + + σ n 2 . Assume that the r.v.s X 1 , X 2 , satisfy the Lindeberg condition: for any τ > 0
lim n 1 B n 2 i = 1 n | x | τ B n x 2 d P ( X i < x ) = 0 .
It is well known that under these assumptions P S n < B n x Φ ( x ) (this is the classical Lindeberg central limit theorem) and P S ¯ n < B n x 2 Φ ( x ) 1 , x 0 , and P S ̲ n < B n x 2 Φ ( x ) , x 0 , (this is one of manifestations of the invariance principle).
Let N 1 , N 2 , be a sequence of nonnegative r.v.s such that for each n N the r.v.s N n , Y 1 , Y 2 , are independent. For n N let S N n = X 1 + + X N n , S ¯ N n = max 1 i N n S i , S ̲ N n = min 1 i N n S i (for definiteness assume that S 0 = S ¯ 0 = S ̲ 0 = 0 ). Let { d n } n 1 be an infinitely increasing sequence of positive numbers. Here and in what follows convergence is meant as n .
Lemma 3
Assume that the r.v.s X 1 , X 2 , and N 1 , N 2 , satisfy the conditions specified above. In particular, let Lindeberg condition (22) hold. Moreover, let N n in probability. Then the distributions of normalized random sums weakly converge to some distribution; that is, there exists an r.v. Y such that d n 1 S N n Y , if and only if any of the following conditions holds:
(i)
d n 1 | S N n | | Y | ;
(ii)
there exists an r.v. Y ¯ such that d n 1 S ¯ N n Y ¯ ;
(iii)
there exists an r.v. Y ̲ such that d n 1 S ̲ N n Y ̲ ;
(iv)
there exists a nonnegative r.v. U such that d n 2 B N n 2 U .
Moreover, P Y < x = E Φ x U 1 / 2 , x R ; P Y ̲ < x = 2 E Φ x U 1 / 2 , x 0 ; P Y ¯ < x = P | Y | < x = 2 E Φ x U 1 / 2 1 , x 0 .
The proof of Lemma 3 was given in [29].
Lemma 3 and Corollary 6 imply the following statement.
Proposition 7.
Let α ( 0 , 2 ] . Assume that the r.v.s X 1 , X 2 , and N 1 , N 2 , satisfy the conditions specified above. In particular, let the Lindeberg condition (22) hold. Moreover, let N n in probability. Then the following five statements are equivalent:
d n 1 S N n Q α ; d n 1 S ¯ N n | Q α | ; d n 1 S ̲ N n | Q α | ; d n 1 | S N n | | Q α | ; d n 2 B N n 2 1 2 U 2 , α / 2 .

3.3. Extensions of Gleser’s Theorem for Gamma Distributions

In [30], it was shown that a gamma distribution can be represented as mixed exponential if and only if its shape parameter is no greater than one. Namely, the density g ( x ; r , μ ) of a gamma distribution with 0 < r < 1 can be represented as
g ( x ; r , μ ) = 0 z e z x p ( z ; r , μ ) d z ,
where
p ( z ; r , μ ) = μ r Γ ( 1 r ) Γ ( r ) · I [ μ , ) ( z ) ( z μ ) r z .
In [31], it was proved that if r ( 0 , 1 ) , μ > 0 and G r , 1 and G 1 r , 1 are independent gamma-distributed r.v.s, then the density p ( z ; r , μ ) defined by (23) corresponds to the r.v.
Z r , μ = μ ( G r , 1 + G 1 r , 1 ) G r , 1 = d μ Z r , 1 = d μ 1 + 1 r r R 1 r , r ,
where R 1 r , r is the r.v. with the Snedecor–Fisher distribution corresponding to the probability density
f ( x ; 1 r , r ) = ( 1 r ) 1 r r r Γ ( 1 r ) Γ ( r ) · I ( 0 , ) ( x ) x r [ r + ( 1 r ) x ] .
In other words, if r ( 0 , 1 ) , then
G r , μ = d W 1 Z r , μ 1 .
A natural question arises: is there a product representation of G r , μ in terms of exponential r.v.s for r > 1 similar to (26)? The results of the preceding section can give an answer to this question.
For simplicity, without loss of generality let μ = 1 .
Proposition 8.
Let r 1 . Then,
G r , 1 = d W 1 1 / r U 1 , 1 / r 1 / r = d W r U 1 , 1 / r 1 / r .
Proof. 
As it has been already mentioned,
G r , 1 = d | Q 1 / r | 1 / r .
Therefore, with the account of (21), we obtain the desired result. □
Gamma distributions, as well as one-sided EP distributions, are particular representatives of the class of generalized gamma distributions (GG distributions), that was first described (under another name) in [32,33] in relation with some hydrological problems. The term “generalized gamma distribution” was proposed in [34] by E. W. Stacy who considered a special family of lifetime distributions containing both gamma and Weibull distributions. However, these distributions are particular cases of a more general family introduced by L. Amoroso [35]. A generalized gamma distribution is the absolutely continuous distribution defined by the density
g ¯ ( x ; r , α , μ ) = | α | μ r Γ ( r ) x α r 1 e μ x α I [ 0 , ) ( x )
with α R , μ > 0 , r > 0 . An r.v. with the density g ¯ ( x ; r , α , μ ) will be denoted G ¯ r , α , μ . It is easy to see that
G ¯ r , α , μ = d G r , μ 1 / α = d μ 1 / α G r , 1 1 / α = d μ 1 / α G ¯ r , α , 1 .
The following statement can be regarded as a generalization of (27).
Proposition 9.
Let r 1 2 , t 1 . Then,
G r t , 1 = d G r , 1 1 / t U 1 / r , 1 / t 1 / t = d G ¯ r , t , 1 U 1 / r , 1 / t 1 / t .
Proof. 
From Proposition 1 it follows that if α ( 0 , 2 ] and α ( 0 , 1 ] , then
| Q α α | = d | Q α | U α , α 1 / α .
Now let α = 1 / r , α = 1 / t and use (28) to obtain the desired result. □

3.4. Alternative Mixture Representations

Let < a < b < . By Y [ a , b ] we will denote an r.v. with the uniform distribution on the segment [ a , b ] .
Lemma 4.
For any α [ 1 , )
Q α = d Y [ 2 , 2 ] G 1 + α , 1 1 / α = d Y [ 2 , 2 ] G ¯ 1 + α , α , 1
For the proof see [36].
Note that Lemma 4 with α = 2 yields an ‘unexpected’ uniform mixture representation for the normal distribution:
X = d 2 Q 2 = d Y [ 2 , 2 ] G 3 / 2 , 1 .
The following statement extends and generalizes a result of [36] (see Lemma 4).
Proposition 10.
For any α ( 0 , ) , the EP distribution can be represented as a scale mixture of uniform distributions: the case α 1 is covered by Lemma 4 and if 0 < α 1 , then
Q α = d Y [ 2 , 2 ] G 1 + β , 1 U β , α / β 1 1 / β = d Y [ 2 , 2 ] G ¯ 1 + β , β , 1 U β , α / β 1 / β ,
for any β [ 1 , 2 ] .
Proof. 
Let 0 < α 1 β 2 . From (10) with the positions of α and β switched and Lemma 4 we have
Q α = d Q β U β , α / β 1 / β = d Y [ 2 , 2 ] G 1 + β , 1 U β , α / β 1 1 / β .
Now it remains to use (29). □
Setting α = 2 , we obtain
Corollary 7.
Let α ( 0 , 2 ] . Then
Q α = d Y [ 2 , 2 ] G 3 / 2 , 1 U 2 , α / 2 1 .
Now turn to other mixture representations. If in (28) r = 1 / α , then | Q α | = d G 1 / α , 1 1 / α . From this fact and Gleser’s result (26), we obtain the following statement.
Proposition 11.
If α 1 , then the one-sided EP distribution is a scale mixture of Weibull distributions:
| Q α | = d W α Z 1 / α , 1 1 / α .
Let Y ± 1 be an r.v. such that P ( Y ± 1 = 1 ) = P ( Y ± 1 = 1 ) = 1 2 . For α 1 define the r.v. V α as the symmetrized r.v. Z 1 / α , 1 1 / α :
V α = Y ± 1 Z 1 / α , 1 1 / α .
With the account of (25), it is easy to make sure that the probability density v α ( x ) of the r.v. V α has the form
v α ( x ) = α 2 ( α 1 ) 1 / α Γ ( α 1 α ) Γ ( 1 α ) · | x | α 1 I ( , 1 ] [ 1 , ) ( x ) ( | x | α 1 ) 1 / α [ 1 + ( α 1 ) ( | x | α 1 ) ] , α 1 .
It is worth noting that the probability density of the r.v. Z 1 / α , 1 1 / α is 2 v α ( x ) , x 1 . Then, from Proposition 11 we obtain one more mixture representation for the EP distribution with α 1 via the Weibull distribution.
Corollary 8.
Let α 1 . Then,
Q α = d W α V α 1 .
As by-products of Proposition 10 and Corollary 7, consider some mixture representations for the exponential and normal distributions. Using Corollary 1 we obtain for 0 < α 2 that
G 1 / α , 1 = d | Q α | α = d | X | α 1 2 U 2 , α / 2 1 α / 2 = d 1 2 χ 1 2 U 2 , α / 2 1 α / 2 = d G 1 / 2 , 1 U 2 , α / 2 1 α / 2 .
Here we use the notation χ m 2 for the r.v. having the chi-squared distribution with m degrees of freedom. Setting α = 1 in (36), we obtain the following representation for the exponentially distributed r.v.:
W 1 = d | Q 1 | = d G 1 / 2 , 1 U 2 , 1 / 2 1 .
Now on the left-hand side of (37) use the easily verified relation W 1 = d 2 W 1 | X | and on the right-hand side of (37) use relation G 1 / 2 , 1 = d W 1 Z 1 / 2 , 1 1 (see (26)). Then (37) will be transformed into
W 1 X 2 = d W 1 ( 2 Z 1 / 2 , 1 U 2 , 1 / 2 ) 1
and since the family of mixed exponential distributions is identifiable, this yields the following mixture representation for the folded normal distribution:
| X | = d 2 | Q 2 | = d ( 2 Z 1 / 2 , 1 U 2 , 1 / 2 ) 1 / 2 .
Along with (32), from (38) we obtain one more product representation for the normal r.v., this time in terms of the ‘scaling’ r.v.s in (26) and Corollary 1:
X = d 2 Q 2 = d Y ± 1 ( 2 Z 1 / 2 , 1 U 2 , 1 / 2 ) 1 / 2 .
Since the r.v. Y ± 1 has the discrete uniform distribution on the set { 1 , + 1 } , relation (39) can be regarded as one more uniform mixture representation for the normal distribution.

3.5. Some Limit Theorems for Extreme Order Statistics in Samples with Random Sizes

Proposition 11 declares that the one-sided EP distribution with α 1 is a scale mixture of Weibull distribution with shape parameter α . In other words, relation (35) can be expressed in the following form: for any x 0
P ( | Q α | > x ) = 2 0 e z x α v α ( z ) d z .
At the same time, Proposition 8 means that any gamma distribution with shape parameter r 1 can also be represented as a scale mixture of the Weibull distribution with the same shape parameter. In other words, relation (27) can be expressed in the following form: for any x 0
P ( G r , 1 > x ) = 0 e z x r u 1 , 1 / r ( z ) d z .
From (40) and (41), it follows that one-sided EP distribution with α 1 and the gamma distribution with r > 1 can appear as a limit distribution in limit theorems for extreme order statistics constructed from samples with random sizes. To illustrate this, we will consider the limit setting dealing with the min-compound doubly stochastic Poisson processes.
A doubly stochastic Poisson process (also called Cox process) is defined in the following way. A stochastic point process is called a doubly stochastic Poisson process, if it has the form N 1 ( L ( t ) ) , where N 1 ( t ) , t 0 , is a time-homogeneous Poisson process with intensity equal to one and the stochastic process L ( t ) , t 0 , is independent of N 1 ( t ) and has the following properties: L ( 0 ) = 0 , P ( L ( t ) < ) = 1 for any t > 0 , the trajectories of L ( t ) are right-continuous and do not decrease. In this context, the Cox process N ( t ) is said to be lead or controlled by the process L ( t ) .
Now let N ( t ) , t 0 , be the a doubly stochastic Poisson process (Cox process) controlled by the process L ( t ) . Let T 1 , T 2 , be the points of jumps of the process N ( t ) . Consider a marked Cox point process { ( T i , X i ) } i 1 , where X 1 , X 2 , are independent identically distributed (i.i.d.) r.v.s assumed to be independent of the process N ( t ) . Most studies related to the point process { ( T i , X i ) } i 1 deal with compound Cox process S ( t ) which is a function of the marked Cox point process { ( T i , X i ) } i 1 defined as the sum of all marks of the points of the marked Cox point process which do not exceed the time t, t 0 . In S ( t ) , the operation of summation is used for compounding. Another function of the marked Cox point process { ( T i , X i ) } i 1 that is of no less importance is the so-called max-compound Cox process which differs from S ( t ) by that compounding operation is the operation of taking maximum of the marking r.v.s. The analytic and asymptotic properties of max-compound Cox processes were considered in [37,38]. Here we will consider the min-compound Cox process.
Let N ( t ) be a Cox process. The process M ( t ) defined as
M ( t ) = + , if N ( t ) = 0 , min 1 k N ( t ) X k , if N ( t ) 1 ,
t 0 , is called a min-compound Cox process.
We will also use the conventional notation lext ( F ) = inf { x : F ( x ) > 0 } .
Lemma 5.
Assume that there exist a positive infinitely increasing function d ( t ) and a positive r.v. L such that
L ( t ) d ( t ) L
as t . Additionally assume that lext ( F ) > and the d.f. P F ( x ) F lext ( F ) x 1 satisfies the condition: there exists a number γ > 0 such that for any x > 0
lim y P F ( y x ) P F ( y ) = x γ .
Then there exist functions a ( t ) and b ( t ) such that
P M ( t ) a ( t ) b ( t ) < x H ( x )
as t , where
H ( x ) = 0 ( 1 e λ x γ ) d P ( L < λ )
for x 0 and H ( x ) = 0 for x < 0 . Moreover, the functions a ( t ) and b ( t ) can be defined as
a ( t ) = lext ( F ) , b ( t ) = sup x : F ( x ) 1 d ( t ) lext ( F ) .
Proof. 
This lemma can be proved in the same way as Theorem 2 in [37] dealing with max-compound Cox processes using the fact that min { X 1 , , X N ( t ) } = max { X 1 , , X N ( t ) } . □
Proposition 12.
Let α 1 . Assume that there exists a positive infinitely increasing function d ( t ) such that
L ( t ) d ( t ) Z 1 / α , 1 1 / α
as t . Additionally assume that lext ( F ) > and the d.f. P F ( x ) F lext ( F ) x 1 satisfies condition (42) with γ = α . Then there exist functions a ( t ) and b ( t ) such that
M ( t ) a ( t ) b ( t ) | Q α |
as t . Moreover, the functions a ( t ) and b ( t ) can be defined by (43).
Proof. 
This statement directly follows from Lemma 5 with the account of (40). □
Proposition 13.
Let r 1 . Assume that there exists a positive infinitely increasing function d ( t ) such that
L ( t ) d ( t ) U 1 , 1 / r
as t . In addition, assume that lext ( F ) > and the d.f. P F ( x ) F lext ( F ) x 1 satisfies condition (42) with γ = r . Then there exist functions a ( t ) and b ( t ) such that
M ( t ) a ( t ) b ( t ) G r , 1
as t . Moreover, the functions a ( t ) and b ( t ) can be defined by (43).
Proof. 
This statement directly follows from Lemma 5 with the account of (41). □
Propositions 11 and 12 describe the conditions for the convergence of the distributions of extreme order statistics to one-sided EP distributions with α 1 and to gamma distributions with r 1 , respectively. Using (21) and (26) instead of (40) and (41) correspondingly, we can also cover the cases α ( 0 , 1 ] and r ( 0 , 1 ) .
Proposition 14.
Let α ( 0 , 1 ] . Assume that there exists a positive infinitely increasing function d ( t ) such that
L ( t ) d ( t ) U 1 , α
as t . In addition, assume that lext ( F ) > and the d.f. P F ( x ) F lext ( F ) x 1 satisfies condition (42) with γ = 1 . Then there exist functions a ( t ) and b ( t ) such that (44) holds as t . Moreover, the functions a ( t ) and b ( t ) can be defined by (43).
Proof. 
This statement directly follows from Lemma 5 with the account of (21). □
Proposition 15.
Let r ( 0 , 1 ] . Assume that there exists a positive infinitely increasing function d ( t ) such that
L ( t ) d ( t ) Z r , 1
as t . In addition, assume that lext ( F ) > and the d.f. P F ( x ) F lext ( F ) x 1 satisfies condition (42) with γ = 1 . Then there exist functions a ( t ) and b ( t ) such that (45) holds as t . Moreover, the functions a ( t ) and b ( t ) can be defined by (43).
Proof. 
This statement directly follows from Lemma 5 with the account of (26). □
It is very simple to give examples of processes satisfying the conditions described in Propositions 11–14. Let L ( t ) U t and d ( t ) t , t 0 , where U is a positive r.v. Then choosing an appropriately distributed U we can provide the validity of the corresponding condition for the convergence of L ( t ) / d ( t ) . Moreover, the parameter t may not have the meaning of physical time. For example, it may be some location parameter of L ( t ) , so that the statements of this section concern the case of large mean intensity of the Cox process.

4. Multivariate Case

4.1. Conventional Approach Higher-Order EP Scale Mixture Representation

Let r N . In this section, we will consider random elements taking values in the r-dimensional Euclidean space R r . The notation x will mean the vector-column x = ( x 1 , , x r ) . The vector with all zero coordinates will be denoted 0 .
Let Σ be a symmetric positive definite ( r × r ) -matrix. The normal distribution in R r with zero vector of expectations and covariance matrix Σ will be denoted N r , Σ . This distribution is defined by its density
φ ( x ) = exp { 1 2 x Σ 1 x } ( 2 π ) r / 2 | Σ | 1 / 2 , x R r .
The characteristic function f X r , Σ ( t ) of a random vector X r , Σ such that L ( X r , Σ ) = N r , Σ has the form
f X r , Σ ( u ) E exp { i u X r , Σ } = exp 1 2 u Σ u , u R r .
Let α > 0 , Σ be a symmetric positive definite ( r × r ) -matrix. Following the conventional approach (see, e.g., [4]), we define the r-variate elliptically contoured EP distribution with parameters α and Σ as an absolutely continuous probability distribution corresponding to the probability density
p r , α , Σ ( x ) = α Γ ( r 2 ) 2 1 + r / 2 π r / 2 | Σ | 1 / 2 Γ ( r α ) · exp x Σ 1 x α / 2 , x R r .
The random vector whose density is given by (47) will be denoted Q r , α , Σ . It is easy to see that Q r , 2 , Σ = d X r , Σ .
Having obtained the formula for the density of Q r , α , Σ , we are in a position to prove the multivariate generalization of Proposition 1 for α ( 0 , 2 ] .
First, notice that, if 0 < α β 2 , then in accordance with Corollary 4,
0 g α / β , 1 ( x ) x r / β d x = β Γ ( r α ) α Γ ( r β ) .
Therefore, the function
u r , β , α / β ( x ) = α Γ ( r β ) β Γ ( r α ) · g α / β , 1 ( x ) x r / β · I ( 0 , ) ( x )
is a probability density. Comparing this function with the function u α , α ( x ) introduced in Proposition 1, we see that
u r , β , α / β ( x ) u β / r , α / β ( x ) .
Recall that by U β / r , α / β we denote an r.v. with density u β / r , α / β ( x ) . If α = β , then, by definition, P ( U β , 1 = 1 ) = 1 .
From (19), it follows that if α ( 0 , 2 ] , α ( 0 , 1 ] and r N , then
u α / r , α ( x ) = Γ ( r α ) Γ ( 1 α α ) Γ ( r α α ) Γ ( 1 α ) · x ( 1 r ) / α u α , α ( x ) .
Proposition 16.
Let 0 < α β 2 , Σ be a symmetric positive definite ( r × r ) -matrix. Then
Q r , α , Σ = d U β / r , α / β 1 / β Q r , β , Σ .
Proof. 
Since Σ is positive definite, the inverse matrix Σ 1 is positive definite as well (see, e.g., [39], Appendix, Corollary 3). Let S r ( α , Σ 1 ) be an r-variate random vector with the strictly stable elliptically contoured stable distribution with characteristic exponent α and matrix parameter Σ 1 (see, e.g., [40]). As is known, the characteristic function of S r ( α , Σ 1 ) has the form
g r , α , Σ 1 ( t ) = E exp { i t S r ( α , Σ 1 ) } = exp { ( t Σ 1 t ) α / 2 } ,
see, e.g., [41]. As it was shown in that paper, if 0 < α β 2 , then
S r ( α , Σ 1 ) = d S α / β , 1 1 / β S r ( β , Σ 1 ) .
Rewrite (49) in terms of characteristic functions with arbitrary 0 < α β 2 :
exp { ( t Σ 1 t ) α / 2 } = 0 exp t ( z Σ 1 ) t β / 2 g α / β , 1 ( z ) d z ,
whence by elementary transformations, re-denoting t = x , we obtain
p r , α , Σ ( x ) = α Γ ( r 2 ) 2 1 + r / 2 π r / 2 | Σ | 1 / 2 Γ ( r α ) · exp x Σ 1 x α / 2 =
= 0 z r / β ( 2 π ) r / 2 | Σ | 1 / 2 exp z x Σ 1 x β / 2 · α Γ ( r β ) β Γ ( r α ) g α / β , 1 ( z ) z r / β d z =
= 0 z r / β p r , β , Σ ( z 1 / β x ) u r , β , α / β ( z ) d z .
But, in accordance with (47) and (48), the function on the right-hand side of (50) is the density of the product U β / r , α / β 1 / β Q r , β , Σ . □
Corollary 9.
Let α ( 0 , 2 ] , Σ be a symmetric positive definite ( r × r ) -matrix. Then
Q r , α , Σ = U 2 / r , α / 2 1 X r , Σ
In [15], it was shown that if α > 2 , then the EP distribution cannot be represented as a normal scale mixture.
By the r-variate elliptically contoured Laplace distribution with parameter Σ , we will mean L ( Λ r , Σ ) , where
Λ r , Σ = U 2 / r , 1 / 2 1 X r , Σ = d Q r , 1 , Σ
(cf. [15]).
Corollary 10.
Let 0 < α 1 , Σ be a symmetric positive definite ( r × r ) -matrix. Then
Q r , α , Σ = d U 1 / r , α 1 Λ r , Σ .
Proposition 17.
Let Σ be a symmetric positive definite ( r × r ) -matrix, A be an ( r × r ) -matrix of rank r, Q r , α , Σ be an r-variate random vector with elliptically contoured EP distribution with parameters α and Σ. Then the random vector A Q r , α , Σ has the r-variate elliptically contoured EP distribution with parameters α and A Σ A : L ( A Q r , α , Σ ) = L ( Q r , α , A Σ A ) .
Proof. 
From the definition of multivariate elliptically contoured EP distribution, by virtue of the well-known property of linear transformations of random vectors with the multivariate normal distribution (see, e.g., [39], Theorem 2.4.4), we have
A Q r , α , Σ = d A U 2 / r , α / 2 1 X r , Σ = d U 2 / r , α / 2 1 ( A X r , Σ ) = d U 2 / r , α / 2 1 X r , A Σ A = d Q r , α , A Σ A .
In [4], it was shown that if A is a ( p × r ) -matrix with p < r , then the distribution of A Q r , α , Σ is elliptically contoured, but, in general, not EP. The idea of the proof of Proposition 17 can clarify why this is so. Indeed, if p < r and 0 < α < 2 , then
A Q r , α , Σ = d U 2 / r , α / 2 1 X p , A Σ A ,
and if p r , then, by virtue of Corollary 9 and identifiability of scale mixtures of multivariate normal distributions, the product on the right-hand side is not an EP-distributed p-variate random vector. This fact illustrates the result of Y. Kano [42]: to ensure that marginal distributions of a multivariate elliptically contoured distribution belong to the same type, the mixing distribution in the stochastic representation similar to (51) must not depend on the dimensionality, whereas in (51) this condition (called ‘consistency’ in [42]) is violated.
Corollary 11.
Let r 2 . Assume that a random vector Z has an r-variate elliptically contoured EP distribution with parameters α and Σ. If α ( 0 , 2 ) , then the distribution of each linear combination of its coordinates is a normal scale mixture, but not EP.
By this property multivariate EP distributions differ from multivariate stable (in particular, normal) laws, for which each projection of a random vector with a stable law also follows a stable distribution with the same characteristic exponent (see, e.g., [43]).

4.2. Alternative Multivariate Uniform Mixture Representation for the EP Distribution

The multivariate EP distributions are a special class of elliptically contoured distributions (see, e.g., [44,45,46,47]). Therefore, it is possible to use the properties of elliptically contoured laws to obtain the following multivariate uniform mixture representation for the EP distributions similar to Lemma 5.
Proposition 18.
Let α > 0 , Σ be a symmetric positive definite ( r × r ) -matrix, A be an ( r × r ) -matrix such that A A = Σ , Y r be a random vector with the uniform distribution on the unit sphere in R r . Then,
Q r , α , Σ = d 2 G r / α , 1 1 / α A Y r = d 2 1 / α G ¯ r / α , α , 1 A Y r .
Proof. 
See Proposition 4.1 in [4]. □
Since Q r , 2 , Σ = d X r , Σ , from Proposition 18 with α = 2 , we obtain the following representation of the r-variate normal distribution as a scale mixture of the uniform distribution on the unit sphere in R r transformed into the dispersion ellipsoid corresponding to the covariance matrix.
Corollary 12.
Let Σ be a symmetric positive definite ( r × r ) -matrix, A be an ( r × r ) -matrix such that A A = Σ , Y r be a random vector with the uniform distribution on the unit sphere in R r . Then,
X r , Σ = d 2 G r / 2 , 1 A Y r = d G ¯ r / 2 , 2 , 1 / 2 A Y r .

4.3. Multivariate Projective Exponential Power Distributions

Let r N . In order to obtain a multivariate analog of a univariate EP distribution that meets Kano’s consistency condition, for which each projection has a univariate EP distribution, we will formally transfer the property of a univariate EP distribution to be a normal scale mixture to the multivariate case and call the distribution of an r-variate random vector
Q r , α , Σ * = 1 2 U 2 , α / 2 1 X r , Σ
the multivariate projective exponential power (PEP) distribution, where α ( 0 , 2 ] and Σ is a positive definite ( r × r ) -matrix. Since scale mixtures of the multivariate normal distribution are elliptically contoured (see [44,46,47]), the PEP distributions so defined are elliptically contoured.
Consider an analog of Proposition 16 for multivariate PEP distributions.
Proposition 19.
Let 0 < α β 2 , Σ be a symmetric positive definite ( r × r ) -matrix. Then,
Q r , α , Σ * = d U β , α / β 1 / β Q r , β , Σ * .
Proof. 
From Proposition 5, it follows that
U 2 , α / 2 = d U 2 , β / 2 U β , α / β 2 / β .
Therefore,
Q r , α , Σ * = d 1 2 U 2 , α / 2 1 X r , Σ = d U β , α / β 1 / β 1 2 U 2 , β / 2 1 X r , Σ = d U β , α / β 1 / β Q r , β , Σ * .
Taking into account the relation U 1 , 1 / 2 = d 1 4 W 1 1 , by the multivariate projective Laplace distribution with matrix parameter Σ , we will mean L ( Λ r , Σ * ) , where
Λ r , Σ * = d Q r , 1 , Σ * = d 2 W 1 X r , Σ .
From Proposition 19, with β = 1 we obtain the following statement.
Corollary 13.
Let 0 < α 1 , Σ be a symmetric positive definite ( r × r ) -matrix. Then
Q r , α , Σ * = d U 1 , α 1 Λ r , Σ * .
The following statements present the features of projective EP distributions that distinguish them from ‘conventional’ EP distributions considered in the preceding section.
Proposition 20.
Let p N , p r , Σ be a symmetric positive definite ( r × r ) -matrix, A be a ( p × r ) -matrix of rank p, Q r , α , Σ * be an r-variate random vector with PEP distribution with parameters α and Σ. Then the random vector A Q r , α , Σ * has the p-variate PEP distribution with parameters α and A Σ A : A Q r , α , Σ * = d Q p , α , A Σ A * .
Proof. 
From the definition of the multivariate PEP distribution, by virtue of the well-known property of the linear transformations of the random vectors with the multivariate normal distribution (see, e.g., [39], Theorem 2.4.4), we have
A Q r , α , Σ * = d A 1 2 U 2 , α / 2 1 X r , Σ = d 1 2 U 2 , α / 2 1 ( A X r , Σ ) = d 1 2 U 2 , α / 2 1 X p , A Σ A = d Q p , α , A Σ A * .
Proposition 21.
A random vector has an r-variate PEP distribution if and only if each linear combination of its coordinates has a univariate symmetric EP distribution.
Proof. 
The ‘only if’ part. Let u R r be an arbitrary vector, u 0 . Assume that a random vector Z has an r-variate PEP distribution with some α ( 0 , 2 ] and positive definite matrix Σ ; that is, Z = d Q r , α , Σ * . We have
u Z = d u Q r , α , Σ * = d u 1 2 U 2 , α / 2 1 X r , Σ = d 1 2 U 2 , α / 2 1 ( u X r , Σ ) = d
= d u Σ u · 1 2 U 2 , α / 2 1 X = d u Σ u · Q α ,
that is, up to a scale factor u Σ u , the distribution of the linear combination u Z of the coordinates of Z (the projection of Z onto the direction u ) is univariate EP with parameter α .
The ‘if’ part. Let Z be an r-variate random vector with E Z = 0 and covariance matrix C, u be an arbitrary vector from R r . Consider the linear combination u Z of the coordinates of Z . We obviously have E u Z = 0 and D ( u Z ) = u C u . According to the assumption, this combination, up to a scale parameter σ > 0 , has a univariate EP distribution with some parameter α : u Z = d σ Q α . With the account of Corollary 1 and Proposition 3 (see (16)), this means that
u C u = D ( u Z ) = σ 2 D Q α = σ 2 D 1 2 U 2 , α / 2 1 X = 1 2 σ 2 E U 2 , α / 2 1 X 2 = 1 2 σ 2 E U 2 , α / 2 1 = σ 2 Γ ( 3 α ) Γ ( 1 α ) ,
hence, we obtain
σ 2 = γ ( α ) · u C u ,
where
γ ( α ) = Γ ( 1 α ) Γ ( 3 α ) .
Now consider the characteristic function h ( t ) of the r.v. u Z . By virtue of the assumption, with the account of Corollary 1 and (55), we have
h ( t ) = E e i t u Z = E e i t σ Q α = E exp i t σ 1 2 U 2 , α / 2 1 X = 0 E exp i t σ X y / 2 d P ( U 2 , α / 2 1 < y ) =
= 0 e t 2 σ 2 y / 2 d P ( U 2 , α / 2 1 < y ) = 0 exp { 1 2 γ ( α ) t 2 y ( u C u ) } d P ( U 2 , α / 2 1 < y ) , t R ,
with γ ( α ) given by (56). Relation (57) holds for any u R r . Letting in (57) t = 1 and taking (46) into account, we notice that (57) turns into the characteristic function h Z ( u ) of the random vector Z :
h Z ( u ) = E e i u Z = 0 exp { 1 2 γ ( α ) ( u C u ) y } d P ( U 2 , α / 2 1 < y ) =
= 0 E exp i u y · X r , γ ( α ) C d P ( U 2 , α / 2 1 < y ) = E exp i u 1 2 U 2 , α / 2 1 X r , 2 γ ( α ) C =
= E exp { i u Q r , α , 2 γ ( α ) C * } , u R r .
That is, the random vector Z has the r-variate PEP distribution with parameters α and 2 γ ( α ) C . □
Proposition 21 explains the term projective EP distribution.
Proposition 22.
If α ( 0 , 1 ] { 2 } , then PEP distributions are infinitely divisible. If 1 < α < 2 , then PEP distributions are not infinitely divisible.
Proof. 
First, consider the case α ( 0 , 1 ] { 2 } . In the proof of Corollary 3 we established that L ( U 2 , α / 2 1 ) is infinitely divisible for α ( 0 , 1 ] . Hence, for these values of α the distribution of Q r , α , Σ * is also infinitely divisible being a scale mixture of the r-variate normal distribution in which the mixing distribution is infinitely divisible (this fact can be proved in the same way as in the univariate case [27]). The case α = 2 is trivial since Q r , 2 , Σ * = d 1 2 X r , Σ .
Now consider an r-variate random vector Q r , α , Σ * with 1 < α < 2 and some positive definite ( r × r ) -matrix Σ . Assume that L ( Q r , α , Σ * ) is infinitely divisible. Then, in accordance with Theorem 3.2 of [48] for any u R r the r.v. u Q r , α , Σ * is infinitely divisible as well. In the proof of the ‘only if’ part of Proposition 21, we found out that
u Q r , α , Σ * = d u Σ u · Q α ,
that is, the univariate distribution L ( Q α ) also must be infinitely divisible. However, it was shown in [14], with values of α ( 1 , 2 ) the univariate EP distributions are not infinitely divisible. This contradiction completes the proof. □
Now consider the representation of r-variate PEP distributions as scale mixtures of the uniform distribution on the unit sphere in R r transformed in accordance with the corresponding matrix parameter Σ .
Proposition 23.
Let α ( 0 , 2 ] , Σ be a symmetric positive definite ( r × r ) -matrix, A be an ( r × r ) -matrix such that A A = Σ , Y r be a random vector with the uniform distribution on the unit sphere in R r . Then,
Q r , α , Σ * = d U 2 , α / 2 1 G r / 2 , 1 A Y r = d U 2 , α / 2 1 G ¯ r / 2 , 2 , 1 A Y r .
Proof. 
This statement follows from the definition of an r-variate PEP distribution and Corollary 12. □
In practice, depending on a particular problem, a researcher should choose what is more beneficial: either to deal with a statistical model based on the convenient multivariate density of a conventional EP distribution at the expense of the loss of EP property for marginals and projections, or to deal with the model having convenient EP marginal and projective densities at the expense of the loss of conventional multivariate EP property of PEP distributions.

4.4. A Criterion of Convergence of the Distributions of Random Sums to Multivariate EP and PEP Distributions

Recall that the symbol ⟹ denotes convergence in distribution. The Borel σ -algebra of subsets of R r will be denoted B r .
Consider a sequence of independent identically distributed random vectors X 1 , X 2 , taking values in R r . For a natural n 1 , let
S n = X 1 + + X n .
Let N 1 , N 2 , be a sequence of nonnegative integer r.v.s defined on the same probability space so that for each n 1 the r.v. N n is independent of the sequence X 1 , X 2 , For definiteness, hereinafter we will assume that j = 1 0 = 0 .
Lemma 6.
Assume that the random vectors S 1 , S 2 , satisfy the condition
L b n 1 / 2 S n N r , Σ
as n , where { b n } n 1 is an infinitely increasing sequence of positive numbers and Σ is some positive definite matrix. In other words, let
b n 1 / 2 S n X r , Σ ( n ) .
Let { d n } n 1 be an infinitely increasing sequence of positive numbers. Then a distribution F on B r such that
L ( d n 1 / 2 S N n ) F ( n ) ,
exists if and only if there exists a d.f. V ( x ) satisfying the conditions
(i)
V ( x ) = 0 for x < 0 ;
(ii)
for any A B r
F ( A ) = 0 N r , u Σ ( A ) d V ( u ) ;
(iii)
P ( b N n < d n x ) V ( x ) , n .
Proof. 
This statement is a particular case of a more general theorem proved in [49]. □
Proposition 24.
Assume that the random vectors X 1 , X 2 , satisfy condition (58) with some infinitely increasing sequence { b n } n 1 of positive numbers and some positive definite matrix Σ. Let { d n } n 1 be an infinitely increasing sequence of positive numbers. Then
d n 1 S N n Q r , α , Σ ( n )
if and only if
d n 1 b N n U 2 / r , α / 2 ( n ) .
Proof. 
First of all, note that in the case under consideration, as F ( A ) in Lemma 6 we can take F ( A ) = P ( Q r , α , Σ A ) , A B r . Furthermore, by virtue of Corollary 9, we have
F ( A ) = P ( Q r , α , Σ A ) = P U 2 / r , α / 2 1 / 2 X r , Σ A = 0 N r , u Σ ( A ) d P U 2 / r , α / 2 < u , A B r .
Therefore, Proposition 24 is a direct consequence of Lemma 6 with V ( x ) = P U 2 / r , α / 2 1 < x . □
In the same way as Proposition 22 was proved, by the corresponding replacement of U 2 / r , α / 2 by 1 2 U 2 , α / 2 with the reference to Corollary 9 replaced by that to the definition of a multivariate PEP distribution, we can obtain conditions for the convergence of the distributions of multivariate random sums to PEP distributions.
Proposition 25.
Assume that the random vectors X 1 , X 2 , satisfy condition (58) with some infinitely increasing sequence { b n } n 1 of positive numbers and some positive definite matrix Σ. Let { d n } n 1 be an infinitely increasing sequence of positive numbers. Then,
d n 1 S N n Q r , α , Σ * ( n )
if and only if
d n 1 b N n 1 2 U 2 , α / 2 ( n ) .

4.5. A Criterion of Convergence of the Distributions of Regular Statistics Constructed from Samples with Random Sizes to Multivariate EP Distributions

Let { X n } n 1 be independent not necessarily identically distributed random vectors with values in R r , r N . For n N , let T n = T n ( X 1 , , X n ) be a statistic, i.e., a measurable function of X 1 , , X n with values in R m , m N . For each n 1 , we define a random vector T N n by setting
T N n ( ω ) T N n ( ω ) X 1 ( ω ) , , X N n ( ω ) ( ω ) , ω Ω .
Let θ R m . Assume that the statistics T n are asymptotically normal in the sense that
n ( T n θ ) X m , Σ
as n , where X m , Σ is a random vector with the m-variate normal distribution with an ( m × m ) covariance matrix Σ . Recall that we use a special notation N m , Σ for L ( X m , Σ ) . Examples of statistics satisfying (60) are well known: sample quantiles, maximum likelihood estimators of a multivariate parameter, etc.
Let N 1 , N 2 , be a sequence of nonnegative integer r.v.s defined on the same probability space so that for each n 1 the r.v. N n is independent of the sequence X 1 , X 2 , In this section, we will be interested in the conditions providing the convergence of the distributions of the m-variate random vectors Z = n ( T N n θ ) to m-variate elliptically contoured EP distributions L ( Q m , α , Σ ) .
In limit theorems of probability theory and mathematical statistics, it is conventional to use centering and normalization of r.v.s and vectors in order to obtain non-trivial asymptotic distributions. Moreover, to obtain reasonable approximation to the distribution of the basic statistics (in our case, T N n ), the normalizing values should be non-random. Otherwise the approximate distribution becomes a random process itself and, say, the problem of evaluation of its quantiles or critical values of statistical tests becomes senseless. Therefore, in the definition of Z we consider the non-randomly normalized statistic constructed from a sample with random size.
Lemma 7.
Assume that N n in probability and the statistic T n is asymptotically normal so that condition (60) holds. Then a random vector Z such that
n ( T N n θ ) Z ( n )
exists if and only if there exists a d.f. V such that
(i)
V ( x ) = 0 for x < 0 ;
(ii)
for any A B r
P ( Z A ) = 0 N m , u Σ ( A ) d V ( u ) ;
(iii)
P ( n N n 1 < x ) V ( x ) , n .
Proof. 
This lemma is a particular case of a more general statement proved in [50] and strengthened in [41] (see Theorem 8 there). □
Proposition 26.
Assume that N n in probability and the statistic T n is asymptotically normal so that condition (60) holds. Then,
Z = n ( T N n θ ) Q m , α , Σ
as n with some α ( 0 , 2 ] and the same ( m × m ) matrix Σ as in (60), if and only if
n 1 N n U 2 / m , α / 2 1 ( n ) .
Proof. 
This statement is a direct consequence of Lemma 7 with the account of Corollary 9. □
In the same way that Proposition 26 was proved, by the corresponding replacement of U 2 / r , α / 2 by 1 2 U 2 , α / 2 with the reference to Corollary 9 replaced by that to the definition of a multivariate PEP distribution, we can obtain conditions for the convergence of the distributions of regular (in the sense of (60)) multivariate statistics constructed from samples with random sizes to PEP distributions.
Proposition 27.
Assume that N n in probability and the statistic T n is asymptotically normal so that condition (60) holds. Then,
Z = n ( T N n θ ) Q m , α , Σ *
as n with some α ( 0 , 2 ] and the same ( m × m ) matrix Σ as in (60), if and only if
n 1 N n 1 2 U 2 , α / 2 1 ( n ) .

Funding

The research was supported by the Ministry of Science and Higher Education of the Russian Federation, project No. 075-15-2020-799.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Subbotin, M.T. On the law of frequency of error. Math. Collect. 1923, 31, 296–301. [Google Scholar]
  2. Box, G.; Tiao, G. Bayesian Inference in Statistical Analysis; Addison–Wesley: Reading, MA, USA, 1973. [Google Scholar]
  3. Landsman, Z.M.; Valdez, E.A. Tail conditional expectations for elliptical distributions. N. Am. Actuar. J. 2003, 7, 55–71. [Google Scholar] [CrossRef] [Green Version]
  4. Gómez, E.; Gómez-Villegas, M.A.; Marín, J.M. A multivariate generalization of the power exponential family of distributions. Commun. Stat. Theory Methods 1998, 27, 589–600. [Google Scholar] [CrossRef]
  5. Dang, U.J. Mixtures of Power Exponential Distributions and Topics in Regression-Based Mixture Models. Ph.D. Thesis, The University of Guelph, Guelph, ON, Canada, 2014. [Google Scholar]
  6. Dang, U.J.; Browne, R.P.; McNicholas, P.D. Mixtures of multivariate power exponential distributions. Biometrics 2015, 71, 1081–1089. [Google Scholar] [CrossRef] [Green Version]
  7. Evans, M.; Hastings, N.; Peacock, B. Statistical Distributions, 3rd ed.; Wiley: New York, NY, USA, 2000. [Google Scholar]
  8. Giller, G.L. A Generalized Error Distribution. Available online: https://ssrn.com/abstract=2265027 (accessed on 16 August 2005).
  9. Leemis, L.M.; McQueston, J.T. Univariate distribution relationships. Am. Stat. 2008, 62, 45–53. [Google Scholar] [CrossRef]
  10. RiskMetrics Technical Document; RiskMetric Group, J.P. Morgan: New York, NY, USA, 1996.
  11. Nadaraja, S. A generalized normal distribution. J. Appl. Stat. 2005, 32, 685–694. [Google Scholar] [CrossRef]
  12. Varanasi, M.K.; Aazhang, B. Parametric generalized Gaussian density estimation. J. Acoust. Soc. Am. 1989, 86, 1404–1415. [Google Scholar] [CrossRef]
  13. Pascal, F.; Bombrun, L.; Tourneret, J.-Y.; Berthoumieu, Y. Parameter estimation for multivariate generalized Gaussian distributions. IEEE Trans. Signal Process. 2013, 61, 5960–5971. [Google Scholar] [CrossRef] [Green Version]
  14. Dytso, A.; Bustin, R.; Poor, H.V.; Shamai, S. Analytical properties of generalized Gaussian distributions. J. Stat. Distrib. Appl. 2018, 5, 6. [Google Scholar] [CrossRef] [Green Version]
  15. Gómez-Sánchez-Manzano, E.; Gómez-Villegas, M.A.; Marín, J.M. Multivariate exponential power distributions as mixtures of normal distributions with Bayesian applications. Commun. Stat. Theory Methods 2008, 37, 972–985. [Google Scholar] [CrossRef] [Green Version]
  16. West, M. On scale mixtures of normal distributions. Biometrika 1987, 74, 646–648. [Google Scholar] [CrossRef]
  17. Choy, S.T.B.; Smith, A.F.F. Hierarchical models with scale mixtures of normal distributions. TEST 1997, 6, 205–221. [Google Scholar] [CrossRef]
  18. Gnedenko, B.V.; Kolmogorov, A.N. Limit Distributions for Sums of Independent Random Variables; Addison-Wesley: Cambridge, MA, USA, 1954. [Google Scholar]
  19. Gnedenko, B.V.; Korolev, V.Y. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
  20. Korolev, V.; Bening, V.; Zeifman, A.; Zaks, L. Exponential power distributions as asymptotic approximations in applied probability and statistics. In VI International Workshop ”Applied Problems in Theory of Probabilities and Mathematical Statistics Related to Modeling of Information Systems” (Autumn Session). 24–30 September, 2012, Svetlogorsk, Russia. Abstracts of Communications; Institute for Informatics Problems: Moscow, Russia, 2012; pp. 60–71. [Google Scholar]
  21. Zolotarev, V.M. One-Dimensional Stable Distributions; American Mathematical Society: Providence, RI, USA, 1986. [Google Scholar]
  22. Schneider, W.R. Stable distributions: Fox function representation and generalization. In Stochastic Processes in Classical and Quantum Systems; Albeverio, S., Casati, G., Merlini, D., Eds.; Springer: Berlin, Germany, 1986; pp. 497–511. [Google Scholar]
  23. Uchaikin, V.V.; Zolotarev, V.M. Chance and Stability. Stable Distributions and their Applications; VSP: Utrecht, The Netherlands, 1999. [Google Scholar]
  24. Korolev, V.Y. Product representations for random variables with Weibull distributions and their applications. J. Math. Sci. 2016, 218, 298–313. [Google Scholar] [CrossRef]
  25. Teicher, H. Identifiability of mixtures. Ann. Math. Stat. 1961, 32, 244–248. [Google Scholar] [CrossRef]
  26. Goldie, C.M. A class of infinitely divisible distributions. Math. Proc. Camb. Philos. Soc. 1967, 63, 1141–1143. [Google Scholar] [CrossRef]
  27. Feller, W. An Introduction to Probability Theory and Its Applications; Wiley: New York, NY, USA; London, UK; Sydney, Australia, 1966; Volume 2. [Google Scholar]
  28. Bernstein, S.N. Sur les fonctions absolument monotones. Acta Math. 1928, 52, 1–66. [Google Scholar] [CrossRef]
  29. Korolev, V.Y. Convergence of random sequences with independent random indexes. Theory Probab. Appl. 1994, 39, 313–333. [Google Scholar]
  30. Gleser, L.J. The gamma distribution as a mixture of exponential distributions. Am. Stat. 1989, 43, 115–117. [Google Scholar]
  31. Korolev, V.Y. Analogs of Gleser’s theorem for negative binomial and generalized gamma distributions and some their applications. Inf. Appl. 2017, 11, 2–17. [Google Scholar]
  32. Kritskii, S.N.; Menkel, M.F. On methods of studying random fluctuations of river discharge. Proc. State Hydrol. Inst. Ser. IV 1946, 29, 3–32. [Google Scholar]
  33. Kritskii, S.N.; Menkel, M.F. The choice of probability distribution curves for the calculation of river discharge. Izvetiya Tech. Sci. 1948, 6, 15–21. [Google Scholar]
  34. Stacy, E.W. A generalization of the gamma distribution. Ann. Math. Stat. 1962, 33, 1187–1192. [Google Scholar] [CrossRef]
  35. Amoroso, L. Ricerche intorno alla curva dei redditi. Ann. Mat. Pura Appl. Ser. 4 1925, 21, 123–159. [Google Scholar] [CrossRef]
  36. Walker, S.G.; Gutierrez-Pena, E. Robustifying Bayesian procedures (with discussion). In Bayesian Statistics; Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M., Eds.; Oxford University Press: New York, NY, USA, 1999; Volume 6, pp. 685–710. [Google Scholar]
  37. Korolev, V.Y.; Sokolov, I.A.; Gorshenin, A.K. Max-compound Cox processes. I. J. Math. Sci. 2019, 237, 789–803. [Google Scholar] [CrossRef]
  38. Korolev, V.Y.; Sokolov, I.A.; Gorshenin, A.K. Max-compound Cox processes. II. J. Math. Sci. 2020, 246, 488–502. [Google Scholar] [CrossRef]
  39. Anderson, T. Introduction to Multivariate Statistical Analysis; Wiley: New York, NY, USA; Chapman and Hall: London, UK, 1957. [Google Scholar]
  40. Nolan, J.P. Multivariate stable densities and distribution functions: General and elliptical case. In Proceedings of the Deutsche Bundesbank’s 2005 Annual Autumn Conference, Eltville, Germany, 11 November 2005; pp. 1–20. [Google Scholar]
  41. Khokhlov, Y.S.; Korolev, V.Y.; Zeifman, A.I. Multivariate scale-mixed stable distributions and related limit theorems. Mathematics 2020, 8, 749. [Google Scholar] [CrossRef]
  42. Kano, Y. Consistency property of elliptical probability density functions. J. Multivar. Anal. 1994, 51, 139–147. [Google Scholar] [CrossRef]
  43. Press, S.J. Multivariate stable distributions. J. Multivar. Anal. 1972, 2, 444–462. [Google Scholar] [CrossRef] [Green Version]
  44. Cambanis, S.; Huang, S.; Simons, G. On the theory of elliptically contoured distributions. J. Multvariate Anal. 1981, 11, 365–385. [Google Scholar] [CrossRef] [Green Version]
  45. Johnson, M. Multivariate Statistical Simulation; John Wiley and Sons: New York, NY, USA, 1987. [Google Scholar]
  46. Fang, K.; Zhang, Y. Generalized Multivariate Analysis; Springer: Beijing, China, 1990. [Google Scholar]
  47. Fang, K.T.; Kotz, S.; Ng, K.W. Symmetric Multivariate and Related Distributions; Chapman and Hall: London, UK, 1990. [Google Scholar]
  48. Horn, R.; Steutel, F.W. On multivariate infinitely divisible distributions. Stoch. Process. Appl. 1978, 6, 139–151. [Google Scholar] [CrossRef] [Green Version]
  49. Korchagin, A.Y. On convergence of random sums of independent random vectors to multivariate generalized variance-gamma distributions. Syst. Means Inf. 2015, 25, 127–141. [Google Scholar]
  50. Korolev, V.Y.; Zeifman, A.I. On normal variance–mean mixtures as limit laws for statistics with random sample sizes. J. Stat. Plan. Inference 2016, 169, 34–42. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Korolev, V. Some Properties of Univariate and Multivariate Exponential Power Distributions and Related Topics. Mathematics 2020, 8, 1918. https://doi.org/10.3390/math8111918

AMA Style

Korolev V. Some Properties of Univariate and Multivariate Exponential Power Distributions and Related Topics. Mathematics. 2020; 8(11):1918. https://doi.org/10.3390/math8111918

Chicago/Turabian Style

Korolev, Victor. 2020. "Some Properties of Univariate and Multivariate Exponential Power Distributions and Related Topics" Mathematics 8, no. 11: 1918. https://doi.org/10.3390/math8111918

APA Style

Korolev, V. (2020). Some Properties of Univariate and Multivariate Exponential Power Distributions and Related Topics. Mathematics, 8(11), 1918. https://doi.org/10.3390/math8111918

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop