Article

Bounds for the Rate of Convergence in the Generalized Rényi Theorem

by
Victor Korolev
1,2,3,4
1
Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow, Russia
2
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
3
Moscow Center for Fundamental and Applied Mathematics, 119991 Moscow, Russia
4
Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China
Mathematics 2022, 10(22), 4252; https://doi.org/10.3390/math10224252
Submission received: 13 October 2022 / Revised: 9 November 2022 / Accepted: 11 November 2022 / Published: 14 November 2022

Abstract:
In the paper, an overview is presented of the results on the convergence rate bounds in limit theorems concerning geometric random sums and their generalizations to mixed Poisson random sums, including the case where the mixing law is itself a mixed exponential distribution. The main focus is on the upper bounds for the Zolotarev $\zeta$-metric as the distance between the pre-limit and limit laws. New results are presented that extend existing estimates of the rate of convergence of geometric random sums (in the well-known Rényi theorem) to a considerably more general class of random indices whose distributions are mixed Poisson, including generalized negative binomial (e.g., Weibull-mixed Poisson), Pareto-type (Lomax)-mixed Poisson, exponential power-mixed Poisson, Mittag-Leffler-mixed Poisson, and one-sided Linnik-mixed Poisson distributions. A transfer theorem is proven that makes it possible to obtain upper bounds for the rate of convergence in the law of large numbers for mixed Poisson random sums with mixed exponential mixing distribution from those for geometric random sums (that is, from the convergence rate estimates in the Rényi theorem). Simple explicit bounds are obtained for $\zeta$-metrics of the first and second orders. An estimate is obtained for the stability of representation of the Mittag-Leffler distribution as a geometric convolution (that is, as the distribution of a geometric random sum).

1. Geometric Sums: The Rényi Theorem

Assume that all the random variables and processes noted below are defined on one and the same probability space $(\Omega, \mathcal{A}, \mathsf{P})$. Let $X_1, X_2, \ldots$ be independent identically distributed random variables. Let $N_p$ be a random variable with the geometric distribution
$$\mathsf{P}(N_p = n) = p(1-p)^{n-1}, \qquad n = 1, 2, \ldots, \quad p \in (0, 1). \tag{1}$$
Assume that $N_p$ is independent of $X_1, X_2, \ldots$ Consider the random variables
$$N_p^* = N_p - 1, \qquad S_p = \sum_{j=1}^{N_p} X_j, \qquad S_p^* = \sum_{j=1}^{N_p^*} X_j \quad \Big(\textstyle\sum_{j=1}^{0} \equiv 0\Big).$$
The random variables $S_p$ and $S_p^*$ are called geometric (random) sums.
Denote $a = \mathsf{E}X_1$. Then
$$\mathsf{E}S_p = \frac{a}{p}, \qquad \mathsf{E}S_p^* = \frac{(1-p)a}{p}.$$
The distribution function of a random variable $X$ will be denoted as $F_X(x)$.
Everywhere in what follows, let $E$ denote the random variable with the standard exponential distribution:
$$F_E(x) = \begin{cases} 1 - e^{-x}, & x \ge 0, \\ 0, & x < 0. \end{cases}$$
The uniform distance between the distributions of random variables $X$ and $Y$ will be denoted as $\rho(X, Y)$,
$$\rho(X, Y) = \sup_x |F_X(x) - F_Y(x)|.$$
In what follows, the symbol $\mathcal{L}(X)$ will stand for the distribution of a random variable $X$.
The symbol $\circ$ will denote the product of independent random variables.
The statement of the problem considered in this paper goes back to the mid-1950s, when A. Rényi noticed that any renewal point process iteratively subjected to the operation of elementary rarefaction followed by an appropriate contraction of time tends to the Poisson process [1,2]. The operation of elementary rarefaction assumes that each point of the point process, independently of the other points, is either removed with probability $1-p$ or retained with probability $p$ ($0 < p < 1$). The limit Poisson process is characterized by the property that the distribution of time intervals between successive points is exponential. Moreover, it is easy to see that, at each iteration of rarefaction, the time interval between successive points of the rarefied process is representable as the sum of a geometrically distributed random number of independent random variables, in which the number of summands is independent of the summands. Such objects are called geometric sums. Geometric sums proved to be important mathematical models in many fields, e.g., risk theory and insurance, reliability theory, etc. It suffices to mention the famous Pollaczek–Khinchin formula for the ruin probability in the classical risk process and some recent publications [3,4] dealing with important applications of geometric random sums and their generalizations to modeling counting processes.
The publication of [5] in 1984 strongly stimulated interest in analytic and asymptotic properties of geometric sums. In that paper, the notions of geometric infinite divisibility and geometric stability were introduced.
The geometric stability of a random variable $X$ means that if $X_1, X_2, \ldots$ are independent identically distributed random variables with the same distribution as that of $X$, and $N_p$ is the random variable with geometric distribution (1) independent of $X_1, X_2, \ldots$, then for each $p \in (0, 1)$ there exists a constant $a_p > 0$ such that
$$\mathcal{L}\big(a_p(X_1 + \cdots + X_{N_p})\big) = \mathcal{L}(X). \tag{2}$$
In [5], it was shown that geometrically stable distributions, and only they, can be limiting for geometric random sums. (For the case of nonnegative summands, this statement was proven earlier by I. N. Kovalenko [6] who, in terms of Laplace transforms, introduced the class of distributions that, as it turned out later, actually coincides with the class of geometrically stable distributions on $\mathbb{R}_+$.)
A significant contribution to the theory of geometric summation was made by V. V. Kalashnikov. The results were summarized in his wonderful and widely cited book [7]. That book was followed by many other important publications, for example, [8,9,10,11].
Formally, the Rényi theorem states that, as $p \to 0$ (or, which is the same, as the expectation of the sum infinitely increases), the distribution of a geometric sum normalized by its expectation converges to the exponential law:
$$\lim_{p \to 0} \rho\Big(\frac{pS_p}{a}, E\Big) = \lim_{p \to 0} \rho\Big(\frac{pS_p^*}{a(1-p)}, E\Big) = 0.$$
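The theorem is easy to observe numerically. The following sketch (our illustration; the choice of uniform summands is arbitrary) estimates the uniform distance $\rho(pS_p/a, E)$ by the one-sample Kolmogorov–Smirnov statistic and shows it shrinking as $p \to 0$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def scaled_geometric_sum(p, size):
    """Sample p*S_p/a for summands uniform on (0, 2), so that a = E X_1 = 1."""
    n = rng.geometric(p, size=size)
    return p * np.array([rng.uniform(0.0, 2.0, size=k).sum() for k in n])

# sup_x |F_{pS_p/a}(x) - F_E(x)| estimated by the KS statistic against Exp(1)
dists = {p: stats.kstest(scaled_geometric_sum(p, 10_000), "expon").statistic
         for p in (0.5, 0.1, 0.02)}
print(dists)
```

The estimated distances decrease markedly as $p$ decreases, even though the summands themselves are far from exponential.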

2. Convergence Rate Bounds in the Classical Rényi Theorem

The first result on the convergence rate in the Rényi theorem was obtained by A. D. Solovyev [12]. He considered the case of nonnegative summands $X_1, X_2, \ldots$ and proved that for $2 < r \le 3$
$$\rho\Big(\frac{pS_p^*}{a(1-p)}, E\Big) \le \Big(\frac{24\, p^{r-2}\, \mathsf{E}X_1^r}{a^r}\Big)^{1/(r-1)}.$$
The result of Solovyev was extended by V. V. Kalashnikov and S. Yu. Vsekhsvyatskii to the case $1 < r \le 2$ in [13], where they showed that
$$\rho\Big(\frac{pS_p}{a}, E\Big) \le C p^{r-1} \cdot \frac{\mathsf{E}X_1^r}{a^r},$$
with $C > 0$ being a finite absolute constant; also see [14].
In [15], it was proven that if $\mathsf{E}X_1^2 < \infty$, then
$$\rho\Big(\frac{pS_p^*}{a(1-p)}, E\Big) \le p \max\Big\{1, \frac{1}{2(1-p)}\Big\} \cdot \frac{\mathsf{E}X_1^2}{a^2}.$$
In [7], cited above, the bounds for the rate of convergence in the Rényi theorem were formulated in terms of the Zolotarev $\zeta$-metric. To make the importance of the $\zeta$-metric clearer, recall that, by the definition of weak convergence, random variables $Y_n$ are said to converge to a random variable $Y$ weakly if
$$\delta_n(f) = \big|\mathsf{E}f(Y_n) - \mathsf{E}f(Y)\big| \to 0$$
as $n \to \infty$ for any $f \in \mathcal{F}$, where the set $\mathcal{F}$ contains all continuous bounded functions. However, for the construction of convergence rate bounds, it is not convenient to use the quantities $\delta_n$, because the set $\mathcal{F}$ is too wide. V. M. Zolotarev noticed that for this purpose it is more appropriate to consider the convergence only on some special subclasses of $\mathcal{F}$. He suggested narrowing the set $\mathcal{F}$ to the class of differentiable bounded functions with Lipschitz derivatives. This suggestion resulted in the definition of the 'ideal' $\zeta$-metric.
The formal definition of the $\zeta$-metric is as follows. Let $s > 0$. The number $s$ can be uniquely represented as $s = m + \varepsilon$, where $m$ is an integer and $0 < \varepsilon \le 1$. Let $\mathcal{F}_s$ be the set of all real-valued bounded functions $f$ on $\mathbb{R}$ that are $m$ times differentiable and satisfy
$$|f^{(m)}(x) - f^{(m)}(y)| \le |x - y|^{\varepsilon}.$$
In 1976, V. M. Zolotarev [16] introduced the $\zeta$-metric $\zeta_s(X, Y) \equiv \zeta_s(F_X, F_Y)$ in the space of probability distributions by the equality
$$\zeta_s(X, Y) = \sup\big\{|\mathsf{E}f(X) - \mathsf{E}f(Y)| : f \in \mathcal{F}_s\big\};$$
also see [17,18]. In particular, it can be proved that
$$\zeta_1(X, Y) = \int_{\mathbb{R}} |F_X(x) - F_Y(x)|\, dx,$$
e.g., see the derivation of Equation (1.4.23) in [18].
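This integral representation makes $\zeta_1$ easy to evaluate numerically when the distribution functions are known. A minimal sketch (our illustration; the grid and the two example laws are our choices):

```python
import numpy as np

def zeta1(cdf_x, cdf_y, grid):
    """zeta_1(X, Y): integral of |F_X(x) - F_Y(x)| dx, by a left Riemann sum."""
    diff = np.abs(cdf_x(grid) - cdf_y(grid))
    return float(np.sum(diff[:-1] * np.diff(grid)))

grid = np.linspace(0.0, 60.0, 600_001)
F_exp1 = lambda x: 1.0 - np.exp(-x)          # standard exponential E
F_exp2 = lambda x: 1.0 - np.exp(-x / 2.0)    # exponential with mean 2
z = zeta1(F_exp1, F_exp2, grid)
# Here the two CDFs are ordered, so zeta_1 equals the difference of means, i.e., 1
print(z)
```

Since $e^{-x/2} \ge e^{-x}$ for $x \ge 0$, the integral equals $2 - 1 = 1$ exactly, which the quadrature reproduces.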
The following properties of the $\zeta$-metrics will be used below. First of all, any probability metric satisfies the triangle inequality and, therefore,
$$\zeta_s(X, Y) \le \zeta_s(X, Z) + \zeta_s(Z, Y)$$
for any random variables $X$, $Y$, and $Z$. Some other properties of $\zeta$-metrics will be presented in the form of lemmas.
Lemma 1.
Let $c > 0$. Then
$$\zeta_s(cX, cY) = c^s\, \zeta_s(X, Y).$$
Lemma 2.
Let $X$, $Y$, $Z$ be random variables such that both $X$ and $Y$ are independent of $Z$. Then
$$\zeta_s(X + Z, Y + Z) \le \zeta_s(X, Y).$$
For the proofs of these statements, see the proof of Theorem 1.4.2 in [18]. The property of $\zeta$-metrics stated by Lemma 1 is called homogeneity of order $s$ of the metric $\zeta_s$, whereas the property stated in Lemma 2 is called regularity. The proof of the regularity of the metric $\zeta_s$ given in [18] can be easily extended to sums of an arbitrary number of independent random variables. Namely, for any $n \in \mathbb{N}$, let $X_1, X_2, \ldots, X_n$ and $Y_1, Y_2, \ldots, Y_n$ be two sets of independent random variables. Then
$$\zeta_s\Big(\sum_{j=1}^{n} X_j, \sum_{j=1}^{n} Y_j\Big) \le \sum_{j=1}^{n} \zeta_s(X_j, Y_j). \tag{7}$$
In what follows, we will sometimes use the following semi-additivity property of the $\zeta$-metric.
Lemma 3.
Let $X$ and $Y$ be random variables with the distribution functions $F_X(x)$ and $F_Y(x)$, respectively. Let $F_X(x; z)$ and $F_Y(x; z)$ be the conditional distribution functions of $X$ and $Y$ given $Z = z$, respectively, so that
$$\mathsf{E}\big[f(X) \mid Z = z\big] = \int_{\mathbb{R}} f(x)\, d_x F_X(x; z), \qquad \mathsf{E}\big[f(Y) \mid Z = z\big] = \int_{\mathbb{R}} f(x)\, d_x F_Y(x; z).$$
Then
$$\zeta_s(F_X, F_Y) \le \int_{\mathbb{R}} \zeta_s\big(F_X(\cdot\,; z), F_Y(\cdot\,; z)\big)\, dF_Z(z).$$
For the proof, see Proposition 2.1.2 in [7] or Lemma 3 in [19].
The tractability of $\zeta$-metrics in terms of weak convergence and their attractive properties inspired Zolotarev to call these metrics ideal.
Let us return to the discussion of the convergence rate estimates in the Rényi theorem.
The results presented in [7,14] concern geometric sums of not necessarily nonnegative summands and are as follows. Let $1 < s \le 2$. Then
$$\zeta_s\Big(\frac{pS_p}{a}, E\Big) \le p^{s-1}\, \zeta_s(X_1, E), \tag{8}$$
$$\zeta_1\Big(\frac{pS_p}{a}, E\Big) \le p\, \zeta_1(X_1, E) + 2(1-p)\, p^{s-1}\, \zeta_s(X_1, E).$$
These results actually present estimates of the geometric stability of the exponential distribution.
It should be noted that the definition of the $\zeta$-metric used in [7] was more general than that used by Zolotarev, so that the boundedness of the functions of the class $\mathcal{F}_s$ was not assumed.
I. G. Shevtsova and M. A. Tselishchev [20] proved a general result for independent and not necessarily identically distributed random summands $X_1, X_2, \ldots$ with identical nonzero expectations (say, equal to $a$) and finite second moments that implies the bounds
$$\zeta_1\Big(\frac{pS_p}{a}, E\Big) \le \frac{p\, \mathsf{E}X_1^2}{2a^2\, \mathsf{P}(X_1 > 0)}$$
and
$$\zeta_1\Big(\frac{pS_p^*}{a(1-p)}, E\Big) \le \frac{p\, \mathsf{E}X_1^2}{(1-p)a^2}. \tag{10}$$
In [19], it was proved that if $a \equiv \mathsf{E}X_1 \ne 0$ and $\mathsf{E}X_1^2 < \infty$, then for $1 \le s \le 2$
$$\zeta_s\Big(\frac{pS_p^*}{(1-p)a}, E\Big) \le \frac{\Gamma(1+\varepsilon)}{\Gamma(1+s)} \Big(\frac{p}{1-p} \cdot \frac{\mathsf{E}X_1^2}{a^2}\Big)^{s/2}. \tag{11}$$
In particular,
$$\zeta_2\Big(\frac{pS_p^*}{(1-p)a}, E\Big) \le \frac{p}{2(1-p)} \cdot \frac{\mathsf{E}X_1^2}{a^2}. \tag{12}$$
Inequalities (10) and (12) establish the best known moment bounds for the convergence rate in the classical Rényi theorem in terms of the $\zeta$-metrics of the first and second orders.

3. Generalizations of the Rényi Theorem

The normalization of a sum of random variables by its expectation in the classical Rényi theorem is traditional for the laws of large numbers. Therefore, it is possible to regard the Rényi theorem as the law of large numbers for geometric sums. In its general form, the law of large numbers for random sums in which the summands are independent and identically distributed random variables was proven in [21]. It was demonstrated in that paper that the distribution of a non-randomly normalized random sum converges to some distribution if and only if the distribution of the number of summands under the same normalization converges to the same distribution (up to a scale parameter).
Inequalities (11) and (12) were obtained as particular cases of a more general result concerning mixed Poisson random sums since, as is known, the geometric distribution of the random variable $N_p^*$ can be represented as a mixed Poisson law:
$$\mathsf{P}(N_p^* = k) = \frac{1}{k!} \int_0^{\infty} \lambda^k e^{-\lambda}\, \mu e^{-\mu\lambda}\, d\lambda, \qquad k = 0, 1, \ldots, \tag{13}$$
with $\mu = p(1-p)^{-1}$. Representation (13) points to a natural direction of development of the studies related to the Rényi theorem, leading to a more general class of possible limit laws and, correspondingly, to a more general set of possible distributions of the number of summands.
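Representation (13) is easy to confirm by simulation (a sketch; the parameter values are our choices): mixing a Poisson law over an exponentially distributed intensity reproduces the geometric law of $N_p^*$.

```python
import numpy as np

rng = np.random.default_rng(2)

p = 0.1
mu = p / (1.0 - p)        # rate of the mixing exponential density mu*exp(-mu*lam)
size = 200_000

lam = rng.exponential(scale=1.0 / mu, size=size)  # Lambda ~ Exp with rate mu
n = rng.poisson(lam)                              # N | Lambda ~ Poisson(Lambda)

k = np.arange(6)
empirical = np.array([(n == i).mean() for i in k])
geometric = p * (1.0 - p) ** k                    # P(N_p^* = k)
print(np.abs(empirical - geometric).max())
```

The maximal pointwise discrepancy is of the order of the Monte Carlo noise.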
A direct way to construct generalizations of the geometric distribution is to replace the mixing exponential distribution in (13) by a more general distribution from some class
$$\mathcal{M} = \big\{\mathcal{L}(E_\theta) : \mathrm{supp}\big(\mathcal{L}(E_\theta)\big) = \mathbb{R}_+,\ \theta \in \Theta \subseteq \mathbb{R}\big\}$$
containing the exponential distribution $\mathcal{L}(E)$.
Let $N(t)$ be the standard Poisson process (that is, the Poisson process with unit intensity) independent of the random variable $E_\theta$ for each $\theta \in \Theta$. Let, for each $\theta \in \Theta$, the random variable $N_\theta$ be defined as
$$N_\theta = N(E_\theta).$$
The random variable $N_\theta$ so defined has a mixed Poisson distribution:
$$\mathsf{P}(N_\theta = k) = \frac{1}{k!} \int_0^{\infty} e^{-\lambda} \lambda^k\, dF_{E_\theta}(\lambda), \qquad k = 0, 1, 2, \ldots,$$
where $F_{E_\theta}(x)$ is the distribution function of $E_\theta$. Mixed Poisson distributions constitute a very wide class.
In [19], it was proven that if $\mathsf{E}X_1^2$ is finite and $\Lambda$ is some nonnegative random variable, then for $1 \le s \le 2$ we have
$$\zeta_s\Big(\frac{1}{a\theta} \sum_{j=1}^{N(E_\theta)} X_j, \Lambda\Big) \le \frac{\mathsf{E}E_\theta^{s/2}}{\theta^{s}} \cdot \frac{\Gamma(1+\varepsilon)}{\Gamma(1+s)} \cdot \Big(1 + \frac{\sigma^2}{a^2}\Big)^{s/2} + \zeta_s\Big(\frac{E_\theta}{\theta}, \Lambda\Big), \tag{15}$$
where $\sigma^2$ denotes the variance of $X_1$. If, in addition, $E_\theta \stackrel{d}{=} \theta\Lambda$, then
$$\zeta_s\Big(\frac{1}{a\theta} \sum_{j=1}^{N(\theta\Lambda)} X_j, \Lambda\Big) \le \frac{\mathsf{E}\Lambda^{s/2}}{\theta^{s/2}} \cdot \frac{\Gamma(1+\varepsilon)}{\Gamma(1+s)} \Big(1 + \frac{\sigma^2}{a^2}\Big)^{s/2}. \tag{16}$$
In particular,
$$\zeta_2\Big(\frac{1}{a\theta} \sum_{j=1}^{N(\theta\Lambda)} X_j, \Lambda\Big) \le \frac{\mathsf{E}\Lambda}{2\theta} \Big(1 + \frac{\sigma^2}{a^2}\Big).$$
Of course, the first idea is to replace the exponential mixing distribution in (13) by the gamma distribution. Let $G_{r,\mu}$ be a random variable with the gamma distribution with parameters $r$ and $\mu$ corresponding to the probability density function
$$g(x; r, \mu) = \frac{\mu^r}{\Gamma(r)}\, x^{r-1} e^{-\mu x}, \qquad x \ge 0. \tag{17}$$
Let $E_\theta = G_{r,\mu/\theta} \stackrel{d}{=} \theta G_{r,\mu}$, where $r > 0$, $\mu > 0$, $\theta > 0$. Then $N_\theta = N(\theta G_{r,\mu})$ has the negative binomial distribution with parameters $r$ and $p = \mu/(\theta + \mu)$:
$$\mathsf{P}(N_\theta = k) = \frac{\mu^r \theta^k}{k!\, \Gamma(r)} \int_0^{\infty} e^{-\lambda(\theta+\mu)} \lambda^{k+r-1}\, d\lambda = \frac{\Gamma(k+r)}{k!\, \Gamma(r)} \Big(\frac{\mu}{\theta+\mu}\Big)^r \Big(1 - \frac{\mu}{\theta+\mu}\Big)^k, \qquad k = 0, 1, 2, \ldots$$
We have $\mathsf{E}E_\theta = \mathsf{E}G_{r,\mu/\theta} = \theta\, \mathsf{E}G_{r,\mu} = \theta r/\mu$, so that for each $\theta > 0$
$$\frac{E_\theta}{\mathsf{E}E_\theta} = \frac{\theta G_{r,\mu}}{\mathsf{E}G_{r,\mu/\theta}} = \frac{\mu}{r}\, G_{r,\mu} \stackrel{d}{=} G_{r,r},$$
that is, the limit distribution in the law of large numbers for negative binomial random sums (or the 'generalized Rényi theorem') is gamma with shape and scale parameters both equal to $r$, and for $1 \le s \le 2$ the following bound holds:
$$\zeta_s\Big(\frac{\mu}{a\theta r} \sum_{j=1}^{N(\theta G_{r,\mu})} X_j, G_{r,r}\Big) \le \frac{\Gamma(1+\varepsilon)}{\Gamma(1+s)} \Big(\frac{\mu}{\theta r} \cdot \frac{\mathsf{E}X_1^2}{a^2}\Big)^{s/2}. \tag{18}$$
In particular,
$$\zeta_2\Big(\frac{\mu}{a\theta r} \sum_{j=1}^{N(\theta G_{r,\mu})} X_j, G_{r,r}\Big) \le \frac{\mu}{2\theta r} \cdot \frac{\mathsf{E}X_1^2}{a^2}. \tag{19}$$
If $r = 1$, then $\mathcal{L}(G_{r,r}) = \mathcal{L}(E)$.
To make the parametrization of the distribution of $N(\theta G_{r,\mu})$ more traditional by using the parameters $r$ and $p = \mu/(\theta+\mu)$, let us denote this random variable in an alternative way: $N(\theta G_{r,\mu}) = NB_{r,p}$. In these terms, (18) and (19) can be rewritten as
$$\zeta_s\Big(\frac{p}{ar(1-p)} \sum_{j=1}^{NB_{r,p}} X_j, G_{r,r}\Big) \le \frac{\Gamma(1+\varepsilon)}{\Gamma(1+s)} \Big(\frac{p}{(1-p)r} \cdot \frac{\mathsf{E}X_1^2}{a^2}\Big)^{s/2} \tag{20}$$
and
$$\zeta_2\Big(\frac{p}{ar(1-p)} \sum_{j=1}^{NB_{r,p}} X_j, G_{r,r}\Big) \le \frac{p}{2r(1-p)} \cdot \frac{\mathsf{E}X_1^2}{a^2}. \tag{21}$$
With r = 1 , bounds (20) and (21) turn into (11) and (12), respectively.
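The identification of $N(\theta G_{r,\mu})$ with $NB_{r,p}$, $p = \mu/(\theta+\mu)$, can itself be checked by simulation (a sketch; the parameter values are our choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

r, mu, theta = 2.5, 1.0, 4.0
p = mu / (theta + mu)                       # negative binomial 'success' parameter
size = 200_000

lam = theta * rng.gamma(shape=r, scale=1.0 / mu, size=size)   # theta * G_{r, mu}
n = rng.poisson(lam)                                          # N(theta * G_{r, mu})

k = np.arange(10)
empirical = np.array([(n == i).mean() for i in k])
negbin = stats.nbinom(r, p).pmf(k)          # Gamma(k+r)/(k! Gamma(r)) p^r (1-p)^k
print(np.abs(empirical - negbin).max())
```

SciPy's `nbinom` accepts a non-integer shape $r$, so the comparison covers the general case.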
For the case $s = 1$ in this problem, a bound more accurate in $p$ (but less accurate in $r$) was independently obtained in [22]:
$$\zeta_1\Big(\frac{p}{ar(1-p)} \sum_{j=1}^{NB_{r,p}} X_j, G_{r,r}\Big) \le \frac{\lceil r \rceil\, p}{r(1-p)} \cdot \frac{\mathsf{E}X_1^2}{a^2},$$
where $\lceil r \rceil$ is the least integer no less than $r$. In [19], more examples can be found, say, the upper bounds for the $\zeta_s$-distance between the distribution of a generalized negative binomial random sum and the generalized gamma distribution, with $1 \le s \le 2$. (For the case $s = 1$ in that problem, a more accurate bound was independently obtained in [22].)

4. Convergence Rate Bounds for Mixed Geometric Sums

Another reasonable way to generalize the geometric distribution is to take as $\mathcal{M}$ the class of mixed exponential distributions. This class is very wide and actually contains all distributions with distribution functions $F(x)$ such that $1 - F(x)$ is the Laplace transform of some other probability distribution on the nonnegative half-line. For example, this class contains Weibull distributions with shape parameter $\le 1$, Pareto-type distributions, exponential power distributions with shape parameter $\le 1$, gamma distributions with shape parameter $\le 1$, Mittag-Leffler distributions, one-sided Linnik distributions, etc.
Let $Q$ be a nonnegative random variable. Let $E_\theta = \theta E \circ Q$. In this case, $1 - F_{E \circ Q}(x)$ is the Laplace transform of the random variable $Q^{-1}$. It is easy to see that a mixed Poisson random sum with the mixing distribution $\mathcal{L}(\theta E \circ Q)$ is a mixed geometric random sum. Indeed, because $N(t)$, $Q$, and $E$ are assumed independent, by the Fubini theorem, for $k \in \mathbb{N} \cup \{0\}$ we have
$$\mathsf{P}\big(N(QE) = k\big) = \int_0^{\infty} \frac{y^k}{k!} \int_0^{\infty} e^{-\lambda(y+1)} \lambda^k\, d\lambda\, dF_Q(y) = \int_0^{\infty} \frac{1}{y+1} \Big(1 - \frac{1}{y+1}\Big)^k dF_Q(y), \qquad k = 0, 1, 2, \ldots \tag{22}$$
Here, the integrands are geometric probabilities with the parameter $p = 1/(y+1)$.
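Equivalently, $N(Q E)$ can be sampled either as a Poisson variable with the randomized intensity $QE$ or as a geometric variable on $\{0, 1, \ldots\}$ whose parameter $1/(Q+1)$ is randomized by $Q$. A simulation sketch (the mixing law is an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(4)
size = 200_000

q = rng.gamma(shape=2.0, scale=1.5, size=size)        # any nonnegative mixing law Q
n_mixed_poisson = rng.poisson(q * rng.exponential(size=size))   # N(Q E)
n_mixed_geometric = rng.geometric(1.0 / (q + 1.0)) - 1          # mixed geometric

k = np.arange(6)
pmf_a = np.array([(n_mixed_poisson == i).mean() for i in k])
pmf_b = np.array([(n_mixed_geometric == i).mean() for i in k])
print(np.abs(pmf_a - pmf_b).max())
```

The two empirical pmfs agree up to Monte Carlo noise, in accordance with (22).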
Moreover, the following convergence rate bound holds:
$$\zeta_s\Big(\frac{1}{a\theta} \sum_{j=1}^{N(\theta E \circ Q)} X_j, E \circ Q\Big) \le \frac{\mathsf{E}Q^{s/2}}{\theta^{s/2}} \cdot \frac{\Gamma(1+\varepsilon)\, \Gamma\big(\frac{s}{2}+1\big)}{\Gamma(1+s)} \Big(1 + \frac{\sigma^2}{a^2}\Big)^{s/2}. \tag{23}$$
In particular,
$$\zeta_2\Big(\frac{1}{a\theta} \sum_{j=1}^{N(\theta E \circ Q)} X_j, E \circ Q\Big) \le \frac{\mathsf{E}Q}{2\theta} \Big(1 + \frac{\sigma^2}{a^2}\Big). \tag{24}$$
These inequalities are particular cases of (15) and (16).
In addition to the examples presented in [19], consider some more particular cases of (16).
Example 1. The case where $E \circ Q$ has the (generalized) gamma distribution. We say that a random variable $G_{r,\gamma,\mu}$ has the generalized gamma distribution (GG distribution) if its density has the form
$$g(x; r, \gamma, \mu) = \frac{|\gamma| \mu^r}{\Gamma(r)}\, x^{\gamma r - 1} e^{-\mu x^{\gamma}}, \qquad x \ge 0, \tag{25}$$
with $\gamma \in \mathbb{R}$, $\mu > 0$, $r > 0$.
The class of GG distributions was proposed in 1925 by the Italian economist L. Amoroso [23] and is often associated with the work of E. W. Stacy [24], who introduced this family as a class of probability distributions containing both the gamma and the Weibull distributions. This family embraces practically all of the most popular absolutely continuous distributions on $\mathbb{R}_+$. The GG distributions serve as reliable models in reliability testing, life-time analysis, image processing, economics, social network analysis, etc. Apparently, the GG distributions are popular because most of them are adequate asymptotic approximations appearing in limit theorems of probability theory in rather simple limit settings. An analog of the law of large numbers for random sums in which the GG distributions are limit laws was proven in [25]. In [26], the maximum entropy principle was used to justify the applicability of GG distributions; also see [27,28].
In this case, the random variable $N(\theta Q E)$ has the generalized negative binomial distribution [29]. For the convergence rate bounds for negative binomial sums, see (20), (21) and [20], and for the convergence rate bounds for generalized negative binomial sums, see [19,22].
As a particular case of the GG distribution, consider a Weibull-distributed $E \circ Q$. In this case, in (25), $\gamma > 0$ and $r = 1$. For convenience and without loss of generality, also assume that $\mu = 1$. In other words, we consider the case $\mathcal{L}(E \circ Q) = \mathcal{L}(G_{1,\gamma,1})$. In [30], it was demonstrated that if $\gamma \in (0, 1]$, then
$$\mathcal{L}(G_{1,\gamma,1}) = \mathcal{L}\big(E \circ Z_\gamma^{-1}\big),$$
where $Z_\gamma$ is a nonnegative random variable with the strictly stable distribution given by its characteristic function
$$\mathfrak{g}_\gamma(t) = \exp\Big\{-|t|^{\gamma} \exp\Big\{-\frac{1}{2}\, i\pi\gamma\, \mathrm{sign}\, t\Big\}\Big\}, \qquad t \in \mathbb{R}. \tag{26}$$
This means that, in the case under consideration, $\mathcal{L}(Q) = \mathcal{L}(Z_\gamma^{-1})$. As this is so, in [31] it was proven that
$$\mathsf{E}Z_\gamma^{-\beta} = \frac{\Gamma\big(\frac{\beta}{\gamma}\big)}{\gamma\, \Gamma(\beta)} \qquad \Big(\tfrac{1}{2} \le \beta < \infty\Big).$$
Therefore, in the case of the Weibull limit distribution with $\gamma \in (0, 1]$, bound (24) takes the form
$$\zeta_2\Big(\frac{1}{a\theta} \sum_{j=1}^{N(\theta G_{1,\gamma,1})} X_j, G_{1,\gamma,1}\Big) \le \frac{\Gamma\big(1 + \frac{1}{\gamma}\big)}{2\theta} \cdot \frac{\mathsf{E}X_1^2}{a^2}.$$
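Both the mixture representation $\mathcal{L}(G_{1,\gamma,1}) = \mathcal{L}(E \circ Z_\gamma^{-1})$ and the negative moment formula can be probed by simulation, sampling $Z_\gamma$ via Kanter's representation of the one-sided strictly stable law (a sketch; the sampler is a standard construction, not taken from the paper):

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def stable_oneside(gamma_, size):
    """Kanter's representation of Z_gamma with Laplace transform exp(-s^gamma)."""
    u = np.pi * rng.uniform(0.0, 1.0, size)
    w = rng.exponential(size=size)
    return (np.sin(gamma_ * u) / np.sin(u) ** (1.0 / gamma_)
            * (np.sin((1.0 - gamma_) * u) / w) ** ((1.0 - gamma_) / gamma_))

g, size = 0.6, 500_000
z = stable_oneside(g, size)

# E Z^{-1} = Gamma(1/gamma) / (gamma * Gamma(1)) = Gamma(1 + 1/gamma)
print((1.0 / z).mean(), math.gamma(1.0 + 1.0 / g))

# E * Z^{-1} should follow the Weibull law with shape gamma: CDF 1 - exp(-x^gamma)
ks = stats.kstest(rng.exponential(size=size) / z,
                  stats.weibull_min(g).cdf).statistic
print(ks)
```

The sample mean of $Z_\gamma^{-1}$ matches $\Gamma(1+1/\gamma)$, and the Kolmogorov–Smirnov distance of $E/Z_\gamma$ to the Weibull law stays at the noise level.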
Example 2. Let $V_r$ be a random variable with the Pareto type II distribution defined by the probability density
$$f(x; r) = \frac{r}{(x+1)^{r+1}}, \qquad x \ge 0,$$
where $r > 1$. This distribution is also called the Lomax distribution [32]. It is used in business, economics, actuarial science, Internet traffic modeling, queueing theory, and other fields. Consider the case where $\mathcal{L}(E \circ Q) = \mathcal{L}(V_r)$. It is easy to see that in this case the random variable $Q$ has the inverse gamma distribution, that is, $Q = \widetilde{Q}^{-1}$, where $\widetilde{Q}$ has the gamma distribution with the probability density $g(x; r, 1)$ (see (17)), so that
$$\mathsf{P}(V_r < x) = \mathsf{P}(E < x\widetilde{Q}) = \frac{1}{\Gamma(r)} \int_0^{x} \int_0^{\infty} y e^{-zy}\, y^{r-1} e^{-y}\, dy\, dz = r \int_0^{x} \frac{dz}{(z+1)^{r+1}}.$$
Hence, if $r > 1$, then
$$\mathsf{E}Q = \mathsf{E}\widetilde{Q}^{-1} = \frac{1}{\Gamma(r)} \int_0^{\infty} y^{r-2} e^{-y}\, dy = \frac{\Gamma(r-1)}{\Gamma(r)} = \frac{1}{r-1},$$
so that bound (24) takes the form
$$\zeta_2\Big(\frac{1}{a\theta} \sum_{j=1}^{N(\theta V_r)} X_j, V_r\Big) \le \frac{1}{2(r-1)\theta} \cdot \frac{\mathsf{E}X_1^2}{a^2}.$$
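The mixture representation $V_r = E \circ Q$ with inverse-gamma $Q$, together with $\mathsf{E}Q = 1/(r-1)$, is straightforward to confirm numerically (a sketch; $r = 3$ is our choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

r, size = 3.0, 200_000
q_tilde = rng.gamma(shape=r, size=size)          # gamma with density g(x; r, 1)
v = rng.exponential(size=size) / q_tilde         # E * Q with Q = 1/Q_tilde

# Kolmogorov distance to the Lomax law with density r/(x+1)^{r+1}
ks = stats.kstest(v, stats.lomax(c=r).cdf).statistic
print(ks, (1.0 / q_tilde).mean())                # E Q should be near 1/(r-1) = 0.5
```

Both the distributional identity and the moment value hold up to sampling noise.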
Example 3. Let $\gamma \in (0, 1]$. By $W_\gamma$, we denote a random variable with the exponential power distribution defined by the density
$$w(x; \gamma) = \frac{e^{-x^{\gamma}}}{\Gamma\big(1 + \frac{1}{\gamma}\big)}, \qquad x \ge 0.$$
Consider the case where $\mathcal{L}(E \circ Q) = \mathcal{L}(W_\gamma)$. In [31], it was proven that if $\gamma \in (0, 1]$, then $\mathcal{L}(W_\gamma) = \mathcal{L}(E \circ U_\gamma^{-1})$, where the random variable $U_\gamma$ has the probability density
$$u(x; \gamma) = \frac{1}{\Gamma\big(1 + \frac{1}{\gamma}\big)} \cdot \frac{g_{\gamma,1}(x)}{x}, \qquad x > 0.$$
Here, $g_{\gamma,1}(x)$ is the probability density of the strictly stable distribution defined by the characteristic function (26). Moreover, in [31], it was demonstrated that
$$\mathsf{E}U_\gamma^{-\beta} = \frac{\Gamma\big(\frac{\beta+1}{\gamma}\big)}{\Gamma\big(\frac{1}{\gamma}\big)\, \Gamma(\beta+1)}$$
for $\beta > -1$. Therefore, in the case under consideration, $\mathsf{E}Q = \Gamma\big(\frac{2}{\gamma}\big)\big/\Gamma\big(\frac{1}{\gamma}\big)$ and bound (24) takes the form
$$\zeta_2\Big(\frac{1}{a\theta} \sum_{j=1}^{N(\theta W_\gamma)} X_j, W_\gamma\Big) \le \frac{\Gamma\big(\frac{2}{\gamma}\big)}{2\theta\, \Gamma\big(\frac{1}{\gamma}\big)} \cdot \frac{\mathsf{E}X_1^2}{a^2}.$$
As one more example, consider convergence rate bounds for mixed Poisson random sums with the one-sided Linnik mixing distribution.
Example 4. In 1953, Yu. V. Linnik [33] introduced the class of symmetric distributions corresponding to the characteristic functions
$$\mathfrak{f}_\alpha(t) = \frac{1}{1 + |t|^{\alpha}}, \qquad t \in \mathbb{R}, \tag{28}$$
where $\alpha \in (0, 2]$. If $\alpha = 2$, then the Linnik distribution turns into the Laplace distribution whose probability density has the form
$$\ell(x) = \frac{1}{2}\, e^{-|x|}, \qquad x \in \mathbb{R}. \tag{29}$$
A random variable with the Laplace density (29) and its distribution function will be denoted as $\Lambda$ and $F_\Lambda(x)$, respectively.
An overview of the analytic properties of the Linnik distribution can be found in [30], with the main focus on the various mixture representations of this distribution. Apparently, the Linnik distributions are most often recalled as examples of geometric stable distributions supported by the whole real line $\mathbb{R}$. Moreover, the Linnik distributions exhaust the class of all symmetric geometrically strictly stable distributions (e.g., see [34]).
In what follows, the notation $L_\alpha$ will stand for a random variable with the Linnik distribution with parameter $\alpha$. The distribution function and the density of $L_\alpha$ will be denoted as $F_{L_\alpha}(x)$ and $f(x; \alpha)$, respectively. It is easy to see that (28) and (29) imply $F_{L_2}(x) \equiv F_\Lambda(x)$, $x \in \mathbb{R}$.
In [30], the distribution of the random variable $|L_\alpha|$ with $\alpha \in (0, 2]$ was called the one-sided Linnik distribution.
It is easy to see that
$$\widehat{F}_{L_\alpha}(x) \equiv \mathsf{P}(|L_\alpha| < x) = 2F_{L_\alpha}(x) - 1, \qquad x \ge 0.$$
In [35], it was proven that the Linnik distribution density admits the following integral representation:
$$f(x; \alpha) = \frac{\sin\big(\frac{\pi\alpha}{2}\big)}{\pi} \int_0^{\infty} \frac{y^{\alpha} e^{-y|x|}\, dy}{1 + y^{2\alpha} + 2y^{\alpha} \cos\big(\frac{\pi\alpha}{2}\big)}, \qquad x \in \mathbb{R}. \tag{30}$$
Hence, the density $\widehat{f}(x; \alpha)$ of the one-sided Linnik law has the form
$$\widehat{f}(x; \alpha) = \frac{2\sin\big(\frac{\pi\alpha}{2}\big)}{\pi} \int_0^{\infty} \frac{y^{\alpha} e^{-yx}\, dy}{1 + y^{2\alpha} + 2y^{\alpha} \cos\big(\frac{\pi\alpha}{2}\big)}, \qquad x \ge 0. \tag{31}$$
That is, $1 - \widehat{F}_{L_\alpha}(x)$ is the Laplace transform of a random variable $\widehat{Q}$ whose probability density $\widehat{q}(y; \alpha)$ has the form
$$\widehat{q}(y; \alpha) = \frac{2\sin\big(\frac{\pi\alpha}{2}\big)\, y^{\alpha-1}}{\pi\big[1 + y^{2\alpha} + 2y^{\alpha} \cos\big(\frac{\pi\alpha}{2}\big)\big]}, \qquad y \ge 0.$$
Hence,
$$\mathcal{L}(|L_\alpha|) = \mathcal{L}(E \circ \widehat{Q}).$$
In [30], it was shown that if $\delta \in (0, 1)$, then the probability density $g_\delta(x)$ of the ratio $Z_\delta' Z_\delta^{-1}$ has the form
$$g_\delta(x) = \frac{\sin(\pi\delta)\, x^{\delta-1}}{\pi\big[1 + x^{2\delta} + 2x^{\delta} \cos(\pi\delta)\big]}, \qquad x > 0. \tag{32}$$
Comparing representation (30) with (32), we come to the conclusion that
$$\mathcal{L}(\widehat{Q}) = \mathcal{L}\Big(\big(Z_{\alpha/2}'\, Z_{\alpha/2}^{-1}\big)^{1/2}\Big), \tag{33}$$
where $Z_{\alpha/2}$ and $Z_{\alpha/2}'$ are independent nonnegative random variables with one and the same strictly stable distribution given by its characteristic function (26) with characteristic exponent $\gamma = \alpha/2$ and, furthermore,
$$\mathcal{L}(\widehat{Q}) = \mathcal{L}(\widehat{Q}^{-1}). \tag{34}$$
Moreover, in [31], it was demonstrated that for $\gamma \in (0, 1)$,
$$\mathsf{E}Z_\gamma^{\beta} = \frac{\Gamma\big(1 - \frac{\beta}{\gamma}\big)}{\Gamma(1-\beta)} \quad (0 \le \beta < \gamma) \qquad \text{and} \qquad \mathsf{E}Z_\gamma^{-\beta} = \frac{\Gamma\big(\frac{\beta}{\gamma}\big)}{\gamma\, \Gamma(\beta)} \quad \Big(\tfrac{1}{2} \le \beta < \infty\Big). \tag{35}$$
Now consider a mixed Poisson random sum with the one-sided Linnik mixing distribution. From (31), (33), (23), and (35) with $\gamma = \alpha/2$, we obtain the following statement.
Proposition 1.
If $\mathsf{E}X_1 = 1$, $\mathsf{E}X_1^2 < \infty$, and $1 < \alpha \le 2$, then
$$\zeta_2\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta |L_\alpha|)} X_j, |L_\alpha|\Big) \le \frac{\Gamma\big(1 - \frac{1}{\alpha}\big)\, \Gamma\big(\frac{1}{\alpha}\big)}{\pi\alpha} \cdot \frac{\mathsf{E}X_1^2}{\theta}. \tag{36}$$
Note that if $\alpha = 2$, then $\mathcal{L}(|L_2|) = \mathcal{L}(|\Lambda|) = \mathcal{L}(E)$, and (36) turns into (12).

5. Transfer Theorem for the Rate of Convergence to Mixed Exponential Distributions

In the case where $\mathcal{M}$ is the class of mixed exponential distributions, it is possible to prove the following general 'transfer theorem' for the convergence rate bounds in terms of the $\zeta$-metrics that extends Theorem 1 from [22] to a considerably wider class of distributions.
Theorem 1.
Assume that $\mathsf{E}X_1 = 1$ and that, for some $s > 0$ and all $p \in (0, 1)$, an upper bound
$$\zeta_s\Big(\frac{pS_p^*}{1-p}, E\Big) \le \Delta_s(p) \tag{37}$$
for the convergence rate in the Rényi theorem is known. Let $Q$ be a nonnegative random variable and let $\theta > 0$ be an 'infinitely large' parameter. Then
$$\zeta_s\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta E \circ Q)} X_j, E \circ Q\Big) \le \int_0^{\infty} y^s\, \Delta_s\Big(\frac{1}{1+\theta y}\Big)\, dF_Q(y). \tag{38}$$
Proof. 
In the same way as was used to prove (22), it is easy to verify that, for $\theta > 0$ and $y > 0$, the random variable $N(y\theta E)$ has the geometric distribution with parameter $p = 1/(y\theta + 1)$. Therefore, by the semi-additivity of the $\zeta$-metric (see Lemma 3), with the account of (37) and remembering the notation $p = 1/(y\theta + 1)$, we have
$$\zeta_s\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta E \circ Q)} X_j, E \circ Q\Big) \le \int_0^{\infty} \zeta_s\Big(\frac{yp}{1-p} \sum_{j=1}^{N_p^*} X_j,\ yE\Big)\, dF_Q(y) =$$
$$= \int_0^{\infty} y^s\, \zeta_s\Big(\frac{p}{1-p} \sum_{j=1}^{N_p^*} X_j,\ E\Big)\, dF_Q(y) \le \int_0^{\infty} y^s\, \Delta_s(p)\, dF_Q(y) = \int_0^{\infty} y^s\, \Delta_s\Big(\frac{1}{1+y\theta}\Big)\, dF_Q(y).$$
That completes the proof. □
In the case $s = 2$, if we take the right-hand side of (12) as $\Delta_2(p)$, then, with the account of (41), Theorem 1 yields
$$\zeta_2\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta E \circ Q)} X_j, E \circ Q\Big) \le \frac{\mathsf{E}Q \cdot \mathsf{E}X_1^2}{2\theta}. \tag{39}$$
This means that the bounds for $\zeta_2$ obtained in Examples 1–4 above can also be deduced from Theorem 1. These bounds depend on the distribution of the mixing random variable $Q$. For the case $s = 1$, Theorem 1 yields the following rather unexpected result.
Corollary 1.
Assume that $\mathsf{E}X_1 = 1$. Then, for any nonnegative random variable $Q$, we have
$$\zeta_1\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta E \circ Q)} X_j, E \circ Q\Big) \le \frac{\mathsf{E}X_1^2}{\theta}. \tag{40}$$
Note that, here, the right-hand side does not depend on $\mathcal{L}(Q)$.
Proof. 
In order to prove (40), note that if $\mathsf{E}X_1^2 = \infty$, then (40) holds trivially; otherwise, the right-hand side of (10) should be taken as $\Delta_1(p)$, with the account of the fact that, for $p = 1/(y\theta + 1)$, the coefficient $p/(1-p)$ on the right-hand side of (10) in this case turns into
$$\frac{p}{1-p} = \frac{(1+y\theta)^{-1}}{1 - \frac{1}{1+y\theta}} = \frac{1}{y\theta}. \tag{41}$$
□
In [22], a particular case of this result was considered, and bound (40) was obtained for generalized negative binomial random sums.
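Since $\zeta_1$ coincides with the $L_1$ distance between distribution functions, i.e., with the Wasserstein-1 distance, Corollary 1 can be probed by simulation (a sketch; the uniform summands and the gamma mixing law are our choices, not from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def zeta1_to_limit(theta, size=20_000):
    """Monte Carlo estimate of zeta_1 between the normalized mixed Poisson random
    sum (summands uniform on (0, 2), so E X_1 = 1) and the limit law E∘Q."""
    q = rng.gamma(shape=2.0, size=size)                        # mixing variable Q
    n = rng.poisson(theta * rng.exponential(size=size) * q)    # N(theta E Q)
    sums = np.array([rng.uniform(0.0, 2.0, size=k).sum() for k in n]) / theta
    limit = rng.exponential(size=size) * rng.gamma(shape=2.0, size=size)
    return stats.wasserstein_distance(sums, limit)             # empirical zeta_1

d2, d20 = zeta1_to_limit(2.0), zeta1_to_limit(20.0)
print(d2, d20)
```

The estimated $\zeta_1$ distance shrinks as $\theta$ grows, in line with the $\mathsf{E}X_1^2/\theta$ bound of (40), whatever the mixing law.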

6. Convergence Rate Bounds for Mixed Poisson Random Sums with the Mittag-Leffler Mixing Distribution

Another important case of the set $\mathcal{M}$ is the set of Mittag-Leffler distributions. This case is very interesting, as it illustrates that, for the law of large numbers for mixed Poisson random sums (that is, for the generalized Rényi theorem) to hold, it is not necessary that the expectation of the mixed Poisson random sum exist.
Assume that, in (13), the mixing exponential distribution is replaced by the Mittag-Leffler distribution given by its Laplace transform
$$\psi_\delta(s) = \frac{1}{1 + \lambda s^{\delta}}, \qquad s \ge 0, \tag{42}$$
where $\lambda > 0$ and $0 < \delta \le 1$. For convenience and without loss of generality, in what follows we will consider the case of the standard scale, assuming that $\lambda = 1$. As an aside, the class of Laplace transforms (42) coincides with the class introduced by I. N. Kovalenko [6] and, hence, from what has already been said, the Mittag-Leffler distributions exhaust the class of geometrically stable distributions on $\mathbb{R}_+$. A random variable with the Laplace transform (42) with $\lambda = 1$ will be denoted as $M_\delta$.
With $\delta = 1$, the Mittag-Leffler distribution turns into the standard exponential distribution, that is, $\mathcal{L}(M_1) = \mathcal{L}(E)$. However, if $0 < \delta < 1$, then the Mittag-Leffler distribution density has a heavy power-type tail: $f_\delta^M(x) = O\big(x^{-(\delta+1)}\big)$ as $x \to \infty$ (see, e.g., [36]), so that the moments of the random variable $M_\delta$ of orders no less than $\delta$ are infinite.
But, as was shown in [21], the convergence of the distribution of a mixed Poisson random sum to the Mittag-Leffler distribution can also take place in cases where the moments of the summands (expectations, variances, etc.) are finite. To make sure of this, consider the following convergence rate bounds.
It is known that the Mittag-Leffler distribution admits the representation
$$\mathcal{L}(M_\delta) = \mathcal{L}\big(E \circ Z_\delta' \circ Z_\delta^{-1}\big), \tag{43}$$
that is, it is mixed exponential. Here, $Z_\delta$ and $Z_\delta'$ are independent nonnegative random variables with one and the same strictly stable distribution given by its characteristic function (26) (see, e.g., [30]).
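Representation (43) can be checked against the Laplace transform (42) by simulation, sampling the one-sided strictly stable variable $Z_\delta$ via Kanter's representation (a sketch; the sampler is a standard construction, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def stable_oneside(delta, size):
    """Kanter's representation of Z_delta with Laplace transform exp(-s^delta)."""
    u = np.pi * rng.uniform(0.0, 1.0, size)
    w = rng.exponential(size=size)
    return (np.sin(delta * u) / np.sin(u) ** (1.0 / delta)
            * (np.sin((1.0 - delta) * u) / w) ** ((1.0 - delta) / delta))

delta, size = 0.7, 500_000
m = (rng.exponential(size=size)
     * stable_oneside(delta, size) / stable_oneside(delta, size))  # E Z' / Z

# Empirical Laplace transform of m versus 1/(1 + s^delta)
for s in (0.5, 1.0, 2.0):
    print(np.exp(-s * m).mean(), 1.0 / (1.0 + s ** delta))
```

Although $\mathsf{E}M_\delta = \infty$ for $\delta < 1$, the Laplace transform is bounded, so the Monte Carlo comparison is stable.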
Now let $E_\theta = \theta M_\delta$ for $\theta > 0$ and some fixed $\delta \in (0, 1]$. In this case, with the account of (32), relation (38) takes the form
$$\zeta_s\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta M_\delta)} X_j, M_\delta\Big) \le \frac{\sin(\pi\delta)}{\pi} \int_0^{\infty} \Delta_s\Big(\frac{1}{1+\theta y}\Big)\, \frac{y^{s+\delta-1}\, dy}{1 + y^{2\delta} + 2y^{\delta} \cos(\pi\delta)}. \tag{44}$$
For the case $s = 1$, relation (44) with $\Delta_s\big(\frac{1}{1+\theta y}\big)$ given by the right-hand side of (10) (being consistent with Corollary 1) gives the following bound.
Proposition 2.
Let $M_\delta$ be a random variable with the Mittag-Leffler distribution, $0 < \delta \le 1$. If $\mathsf{E}X_1 = 1$ and $\mathsf{E}X_1^2 < \infty$, then
$$\zeta_1\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta M_\delta)} X_j, M_\delta\Big) \le \frac{\mathsf{E}X_1^2}{\theta}.$$
For other values of $s$, by the approach proposed in [19], it is possible to obtain an explicit (but possibly less accurate) estimate. If $0 < \delta < 1$, then $\mathsf{E}M_\delta^{\beta} = \infty$ for each $\beta \ge \delta$ and, hence, $\mathsf{E}M_\delta = \infty$, implying that, in this case,
$$\mathsf{E} \sum_{j=1}^{N(\theta M_\delta)} X_j = \infty.$$
From (43), it follows that, in this case, $Q = Z_\delta' \circ Z_\delta^{-1}$, so that, for admissible $\beta > 0$,
$$\mathsf{E}Q^{\beta} = \mathsf{E}Z_\delta'^{\beta} \cdot \mathsf{E}Z_\delta^{-\beta}.$$
Therefore, from (23), (35), and (43), we obtain the following bound.
Proposition 3.
If $\mathsf{E}X_1 = 1$ and $\mathsf{E}X_1^2 < \infty$, then for $\frac{1}{2} < \delta \le 1$ and $1 \le s < 2\delta$, we have
$$\zeta_s\Big(\frac{1}{\theta} \sum_{j=1}^{N(\theta M_\delta)} X_j, M_\delta\Big) \le \Big(\frac{\mathsf{E}X_1^2}{\theta}\Big)^{s/2} \cdot \frac{\Gamma\big(\frac{s}{2}+1\big)\, \Gamma\big(1 - \frac{s}{2\delta}\big)\, \Gamma\big(\frac{s}{2\delta}\big)\, \Gamma(1+\varepsilon)}{\delta\, \Gamma\big(1 - \frac{s}{2}\big)\, \Gamma\big(\frac{s}{2}\big)\, \Gamma(1+s)}.$$

7. Quantification of the Geometric Stability of the Mittag-Leffler and Linnik Distributions

This section concerns another property of some geometric sums, namely, the property of geometric stability of some probability distributions.
Recall that the geometric stability of the distribution of a random variable $X$ means that if $X_1, X_2, \ldots$ are independent identically distributed random variables with the same distribution as that of $X$, and $N_p$ is the random variable with geometric distribution (1) independent of $X_1, X_2, \ldots$, then for each $p \in (0, 1)$ there exists a constant $a_p > 0$ such that relation (2) holds. In what follows, we will concentrate our attention on the property of strict geometric stability, which means that in (2) the constants $a_p > 0$ have a special form, namely, $a_p = Cp^{1/\gamma}$ for some $C > 0$ and $\gamma \in (0, 2]$. For the sake of convenience and without loss of generality, assume that $C = 1$. As (2) holds for any $p \in (0, 1)$, we can let $p \to 0$, so that (2) can also be regarded as a 'limit theorem' for geometric sums in which, unlike Rényi-theorem-type laws of large numbers, the limit law, as $p \to 0$, is completely determined by the distribution of an individual summand.
In many papers, the Mittag-Leffler and Linnik distributions (for the corresponding definitions, see Section 6 and Example 4) are noted as examples of geometrically strictly stable distributions. In that case, $a_p=p^{1/\delta}$ for the Mittag-Leffler distribution with parameter $\delta\in(0,1]$ and $a_p=p^{1/\alpha}$ for the Linnik distribution with parameter $\alpha\in(0,2]$.
First, consider the Mittag-Leffler distribution. Let $\delta\in(0,1]$ and $M_\delta^{(1)},M_\delta^{(2)},\ldots$ be independent random variables with one and the same Mittag-Leffler distribution coinciding with that of $M_\delta$. Then, in accordance with (2), for any $p\in(0,1)$,
$$\mathcal{L}(M_\delta)=\mathcal{L}\big(p^{1/\delta}(M_\delta^{(1)}+\cdots+M_\delta^{(N_p)})\big),$$
where the random variable $N_p$ has geometric distribution (1) and is independent of $M_\delta^{(1)},M_\delta^{(2)},\ldots$ The aim of the following statement is to illustrate this circumstance and generalize Kalashnikov's bound (8) to all geometrically stable distributions on $\mathbb{R}_+$. In other words, the aim is to obtain an estimate for the stability of representation of the Mittag-Leffler distribution as a geometric convolution.
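Before stating the theorem, it may help to see why (2) holds for the Mittag-Leffler law: its Laplace transform $\psi_\delta(s)=1/(1+s^\delta)$ is a fixed point of geometric compounding with scaling $p^{1/\delta}$. A minimal numerical sketch (not from the paper; function names are illustrative):

```python
# Sketch (not from the paper): the Mittag-Leffler Laplace transform
# psi(s) = 1/(1 + s**delta) is invariant under geometric compounding
# with scaling p**(1/delta), i.e. the law solves the fixed-point
# equation behind geometric strict stability.
def ml_lt(s, delta):
    return 1.0 / (1.0 + s ** delta)

def geom_sum_lt(s, p, delta):
    # Laplace transform of p**(1/delta) * (X_1 + ... + X_{N_p}) with
    # P(N_p = n) = p (1 - p)**(n - 1) and X_j ~ Mittag-Leffler(delta)
    phi = ml_lt(s * p ** (1.0 / delta), delta)
    return p * phi / (1.0 - (1.0 - p) * phi)

for delta in (0.3, 0.7, 1.0):
    for p in (0.5, 0.1, 0.01):
        for s in (0.2, 1.0, 5.0):
            assert abs(geom_sum_lt(s, p, delta) - ml_lt(s, delta)) < 1e-10
```

For $\delta=1$ this reduces to the familiar fact that a $p$-scaled geometric sum of standard exponential random variables is again standard exponential.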
Theorem 2.
Let $M_\delta$ be a random variable with the Mittag-Leffler distribution, $0<\delta\le 1$, $s>\delta$, $0<p<1$. Then
$$\zeta_s\Big(p^{1/\delta}\sum_{j=1}^{N_p}X_j,\;M_\delta\Big)\le p^{s/\delta-1}\,\zeta_s(X_1,M_\delta).$$
Proof. 
By virtue of (48), for any $p\in(0,1)$ we have
$$\mathcal{L}(M_\delta)=\mathcal{L}\Big(p^{1/\delta}\sum_{j=1}^{N_p}M_\delta^{(j)}\Big),$$
where $M_\delta^{(1)},M_\delta^{(2)},\ldots$ are independent random variables with one and the same Mittag-Leffler distribution coinciding with that of $M_\delta$. Therefore, by Lemmas 3 and 1, with the account of (7), we have
$$\zeta_s\Big(p^{1/\delta}\sum_{j=1}^{N_p}X_j,\;M_\delta\Big)=\zeta_s\Big(p^{1/\delta}\sum_{j=1}^{N_p}X_j,\;p^{1/\delta}\sum_{j=1}^{N_p}M_\delta^{(j)}\Big)$$
$$\le\sum_{n=1}^{\infty}p(1-p)^{n-1}\,\zeta_s\Big(p^{1/\delta}\sum_{j=1}^{n}X_j,\;p^{1/\delta}\sum_{j=1}^{n}M_\delta^{(j)}\Big)$$
$$\le p^{s/\delta}\sum_{n=1}^{\infty}p(1-p)^{n-1}\sum_{j=1}^{n}\zeta_s\big(X_j,M_\delta^{(j)}\big)=p^{s/\delta}\,\zeta_s(X_1,M_\delta)\sum_{n=1}^{\infty}np(1-p)^{n-1}=p^{s/\delta-1}\,\zeta_s(X_1,M_\delta),$$
since $E N_p=1/p$. $\square$
Therefore, the appropriately scaled distribution of a geometric random sum may be close to the Mittag-Leffler distribution for two reasons: first, the parameter $p$ may be small enough, and/or second, the distribution of a separate summand (say, $X_1$) may be close enough to the Mittag-Leffler distribution. In the first case, Theorem 1 serves as an illustration of the transfer theorem for random sums (e.g., see [27]). In this case, $\mathcal{L}(X_1)$ may be not close to $\mathcal{L}(M_\delta)$; the only requirement is that $\zeta_s(X_1,M_\delta)$ be finite. The finiteness of $\zeta_s(X_1,M_\delta)$ means that the tail of the distribution of $X_1$ is equivalent to that of $M_\delta$ as $x\to\infty$. However, this means that $\mathcal{L}(X_1)$ belongs to the domain of attraction of a 'usual' strictly stable distribution with characteristic exponent $\delta$, and, as is known, in this case the moments of $X_1$ of orders no less than $\delta$ do not exist [37]. Hence, for small $p$ the number of summands in the geometric sum is large and, in accordance with the transfer theorem for random sums, the limit distribution of an appropriately normalized geometric sum has the form of a scale mixture of the strictly stable distribution with characteristic exponent $\delta$, the mixing distribution being the limit law for the standardized number of summands, that is, the exponential distribution. In the situation under discussion, this mixture is exactly the Mittag-Leffler distribution; for details see, e.g., [30] and the references therein. In the second case, the parameter $p$ may not be small, and the closeness of the distribution of the geometric sum to the Mittag-Leffler distribution is provided by the smallness of the distance between $\mathcal{L}(X_1)$ and $\mathcal{L}(M_\delta)$.
As it has already been said, estimate (46) makes sense if $\zeta_s(X_1,M_\delta)<\infty$. To clarify the meaning of this condition, consider the case $s=1$, in which the metric $\zeta_1$ turns into the mean metric (5), sometimes also referred to as the Kantorovich or Wasserstein distance. Assume that $F_{X_1}(x)=F_{M_\delta}(x)+h(x)$, where $h(x)$ is the corresponding 'discrepancy'. Then the condition of finiteness of $\zeta_1(X_1,M_\delta)$ means that the discrepancy $h(x)$ must be integrable:
$$\int_{\mathbb{R}}|h(x)|\,dx<\infty,$$
so that Theorem 2 implies the following statement.
Corollary 2.
Let $0<\delta<1$. Assume that $F_{X_1}(x)=F_{M_\delta}(x)+h(x)$ and (47) holds. Then
$$\zeta_1\Big(p^{1/\delta}\sum_{j=1}^{N_p}X_j,\;M_\delta\Big)\le p^{1/\delta-1}\int_{\mathbb{R}}|h(x)|\,dx.$$
If $\delta=1$, then the Mittag-Leffler distribution turns into the exponential distribution, so that bounds (8) and (9) can be used.
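To make the mean metric concrete, here is a small illustrative computation (not from the paper): $\zeta_1$ between two exponential laws equals the integral of the absolute difference of their distribution functions, which for rates $1$ and $1.2$ is $1-1/1.2=1/6$, the difference of the means, since one distribution function dominates the other everywhere.

```python
import numpy as np

# Illustration (not from the paper): zeta_1 as the mean (Kantorovich) metric
# zeta_1(X, Y) = int |F_X(x) - F_Y(x)| dx, evaluated for Exp(1) vs Exp(1.2).
# Here F_Exp(1) <= F_Exp(1.2) everywhere, so the integral equals the
# difference of the means: 1 - 1/1.2 = 1/6.
x = np.linspace(0.0, 60.0, 1_000_001)
f = np.abs(np.exp(-1.0 * x) - np.exp(-1.2 * x))     # |F1 - F2| via survival functions
dx = x[1] - x[0]
zeta1 = float(np.sum(0.5 * (f[:-1] + f[1:])) * dx)  # trapezoidal rule
assert abs(zeta1 - 1.0 / 6.0) < 1e-6
```

The same integral of $|h(x)|$ is exactly the quantity appearing on the right-hand side of Corollary 2.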
Now turn to the Linnik distribution. Let $\alpha\in(0,2]$ and $L_\alpha^{(1)},L_\alpha^{(2)},\ldots$ be independent random variables with one and the same Linnik distribution coinciding with that of $L_\alpha$. Then, in accordance with (2), for any $p\in(0,1)$,
$$\mathcal{L}(L_\alpha)=\mathcal{L}\big(p^{1/\alpha}(L_\alpha^{(1)}+\cdots+L_\alpha^{(N_p)})\big),$$
where the random variable $N_p$ has geometric distribution (1) and is independent of $L_\alpha^{(1)},L_\alpha^{(2)},\ldots$ Therefore, just as in the case of the Mittag-Leffler distribution, the appropriately scaled distribution of a geometric random sum may be close to the Linnik distribution for two reasons: first, the parameter $p$ may be small enough, and/or second, the distribution of a separate summand (say, $X_1$) may be close enough to the Linnik distribution.
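The same fixed-point check as in the Mittag-Leffler case works on the Fourier side (a sketch, not from the paper): the Linnik characteristic function $f_\alpha(t)=1/(1+|t|^\alpha)$ is invariant under geometric compounding with scaling $p^{1/\alpha}$; for $\alpha=2$, $f_2$ is the characteristic function of the Laplace distribution.

```python
# Sketch (not from the paper): the Linnik characteristic function
# f(t) = 1/(1 + |t|**alpha) is a fixed point of geometric compounding
# with scaling p**(1/alpha), mirroring the Mittag-Leffler computation
# on the Laplace-transform side.
def linnik_cf(t, alpha):
    return 1.0 / (1.0 + abs(t) ** alpha)

def geom_sum_cf(t, p, alpha):
    # characteristic function of p**(1/alpha) * (X_1 + ... + X_{N_p})
    # with P(N_p = n) = p (1 - p)**(n - 1) and X_j ~ Linnik(alpha)
    f = linnik_cf(t * p ** (1.0 / alpha), alpha)
    return p * f / (1.0 - (1.0 - p) * f)

for alpha in (0.5, 1.0, 2.0):
    for p in (0.5, 0.05):
        for t in (-3.0, 0.1, 10.0):
            assert abs(geom_sum_cf(t, p, alpha) - linnik_cf(t, alpha)) < 1e-10
```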
Theorem 3.
Let $0<\alpha\le 2$, $s>\alpha$, $0<p<1$. Then
$$\zeta_s\Big(p^{1/\alpha}\sum_{j=1}^{N_p}X_j,\;L_\alpha\Big)\le p^{s/\alpha-1}\,\zeta_s(X_1,L_\alpha).$$
Proof. 
This theorem can be proved by exactly the same reasoning that was used to prove Theorem 2. □
Denote $g(x)=F_{X_1}(x)-F_{L_\alpha}(x)$, $x\in\mathbb{R}$. Theorem 3 implies the following analog of Corollary 2 for the Linnik distribution.
Corollary 3.
Let $0<\alpha<1$. Assume that the discrepancy $g(x)$ is integrable:
$$\int_{\mathbb{R}}|g(x)|\,dx<\infty.$$
Then
$$\zeta_1\Big(p^{1/\alpha}\sum_{j=1}^{N_p}X_j,\;L_\alpha\Big)\le p^{1/\alpha-1}\int_{\mathbb{R}}|g(x)|\,dx.$$
It should be noted that the conditions $s>\delta$ in Theorem 2 and $s>\alpha$ in Theorem 3, as well as the conditions on $s$ in Corollaries 1 and 2, were assumed only to provide the convergence of the right-hand sides of (46) and (49) to zero as $p\to0$. However, in general, other values of $s>0$ are also admissible in these inequalities, since, as has already been said, for an arbitrary fixed $p$ the smallness of the right-hand sides can be provided by the smallness of the $\zeta$-metrics between the distribution of an individual summand and the corresponding geometrically stable law.

8. Conclusions

In the paper, an overview was presented of the results on the convergence rate bounds in limit theorems concerning geometric random sums and their generalizations to mixed Poisson random sums, including the case where the mixing law is itself a mixed exponential distribution. The Zolotarev $\zeta$-metric was considered as the distance between the limit and pre-limit laws. The well-known convergence rate estimates for geometric random sums in the classical Rényi theorem were extended to a considerably wider class of random indices with mixed Poisson distributions and, correspondingly, to a considerably wider class of limit distributions. It was demonstrated that, in the case where the limit distribution (and the corresponding mixing distribution of the mixed Poisson distribution of the number of summands) is itself mixed exponential, the upper bound for the $\zeta$-metric of the first order (or, which is the same, for the Kantorovich or 1-Wasserstein or mean metric) depends only on the second moment of a separate summand and does not depend on any characteristic of the mixing distribution. This result substantially generalizes the corresponding estimate obtained by I. Shevtsova and M. Tselishchev [22] for generalized gamma limit distributions. In addition, an estimate was obtained for the stability of representation of the Mittag-Leffler and Linnik distributions as geometric convolutions (that is, as the distributions of geometric random sums). These results extend the corresponding estimate of the geometric stability of the exponential distribution obtained by V. V. Kalashnikov [7] for the $\zeta_s$-metric with $1\le s<2$ to all geometrically strictly stable distributions on $\mathbb{R}_+$ and all symmetric geometrically strictly stable distributions on $\mathbb{R}$. Moreover, these estimates make sense for $\zeta_s$-metrics with arbitrary $s>0$.

Funding

This research was funded by the Russian Science Foundation, grant 22-11-00212.

Data Availability Statement

Not applicable.

Acknowledgments

The author thanks Alexander Zeifman for his help in the final preparation of the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Rényi, A. A Poisson-folyamat egy jellemzese. Magyar Tud. Acad. Mat. Kutato Int. Közl. 1956, 1, 519–527. [Google Scholar]
  2. Rényi, A. On an extremal property of the Poisson process. Ann. Inst. Stat. Math. 1964, 16, 129–133. [Google Scholar] [CrossRef]
  3. Gijbels, I.; Omelka, M.; Pešta, M.; Veraverbeke, N. Score tests for covariate effects in conditional copulas. J. Multivar. Anal. 2017, 159, 111–133. [Google Scholar] [CrossRef]
  4. Maciak, M.; Okhrin, O.; Pešta, M. Infinitely stochastic micro reserving. Insur. Math. Econ. 2021, 9, 30–58. [Google Scholar] [CrossRef]
  5. Klebanov, L.B.; Maniya, G.M.; Melamed, I.A. A problem of Zolotarev and analogs of infinitely divisible and stable distributions in a scheme for summing a random number of random variables. Theory Probab. Appl. 1984, 29, 791–794. [Google Scholar] [CrossRef]
  6. Kovalenko, I.N. On the class of limit distributions for rarefied flows of homogeneous events. Lith. Math. J. 1965, 5, 569–573. [Google Scholar] [CrossRef]
  7. Kalashnikov, V.V. Geometric Sums: Bounds for Rare Events with Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997. [Google Scholar]
  8. Bon, J.L.; Kalashnikov, V.V. Bounds for geometric sums used for evaluation of reliability of regenerative models. J. Math. Sci. 1999, 93, 486–510. [Google Scholar] [CrossRef]
  9. Bon, J.L. Geometric sums in reliability evaluation of regenerative systems. Inf. Process. 2002, 2, 161–163. [Google Scholar]
  10. Grandell, J. Simple approximations of ruin probabilities. Insur. Math. Econ. 2000, 26, 157–173. [Google Scholar] [CrossRef]
  11. Grandell, J. Risk theory and geometric sums. Inf. Process. 2002, 2, 180–181. [Google Scholar]
  12. Solovyev, A.D. Asymptotic behaviour of the time of first occurrence of a rare event. Izv. Akad. Nauk. SSSR Teh. Kibern. 1971, 9, 1038–1048. [Google Scholar]
  13. Kalashnikov, V.V.; Vsekhsvyatskii, S.Y. Metric estimates of the first occurrence time in regenerative processes. In Stability Problems for Stochastic Models; Lecture Notes in Mathematics; Kalashnikov, V.V., Zolotarev, V.M., Eds.; Springer: Berlin/Heidelberg, Germany, 1985; Volume 1155, pp. 102–130. [Google Scholar]
  14. Kalashnikov, V.V.; Vsekhsvyatskii, S.Y. On the connection of Rényi’s theorem and renewal theory. In Stability Problems for Stochastic Models; Lecture Notes in Mathematics; Kalashnikov, V.V., Zolotarev, V.M., Eds.; Springer: Berlin/Heidelberg, Germany, 1987; Volume 1412, pp. 83–102. [Google Scholar]
  15. Brown, M. Error bounds for exponential approximations of geometric convolutions. Ann. Probab. 1990, 18, 1388–1402. [Google Scholar] [CrossRef]
  16. Zolotarev, V.M. Approximation of distributions of sums of independent random variables with values in infinite-dimensional spaces. Theory Probab. Appl. 1976, 21, 721–737. [Google Scholar] [CrossRef]
  17. Zolotarev, V.M. Ideal metrics in the problem of approximating distributions of sums of independent random variables. Theory Probab. Appl. 1977, 22, 433–449. [Google Scholar] [CrossRef]
  18. Zolotarev, V.M. Modern Theory of Summation of Random Variables; VSP: Utrecht, The Netherlands, 1997. [Google Scholar]
  19. Korolev, V.Y.; Zeifman, A.I. Bounds for convergence rate in laws of large numbers for mixed Poisson random sums. Stat. Probab. Lett. 2021, 168, 108918. [Google Scholar] [CrossRef]
  20. Shevtsova, I.; Tselishchev, M. A generalized equilibrium transform with application to error bounds in the Rényi theorem with no support constraints. Mathematics 2020, 8, 577. [Google Scholar] [CrossRef]
  21. Korolev, V.Y. Convergence of random sequences with the independent random indices. I. Theory Probab. Appl. 1994, 39, 282–297. [Google Scholar] [CrossRef]
  22. Shevtsova, I.; Tselishchev, M. On the accuracy of the generalized gamma approximation to generalized negative binomial random sums. Mathematics 2021, 9, 1571. [Google Scholar] [CrossRef]
  23. Amoroso, L. Ricerche intorno alla curva dei redditi. Ann. Mat. Pura Appl. 1925, 21, 123–159. [Google Scholar] [CrossRef]
  24. Stacy, E.W. A generalization of the gamma distribution. Ann. Math. Stat. 1962, 33, 1187–1192. [Google Scholar] [CrossRef]
  25. Korolev, V.Y.; Zeifman, A.I. Generalized negative binomial distributions as mixed geometric laws and related limit theorems. Lith. Math. J. 2019, 59, 366–388. [Google Scholar] [CrossRef] [Green Version]
  26. Singh, V.P. Entropy-Based Parameter Estimation in Hydrology; Springer: Dordrecht, The Netherlands, 1998. [Google Scholar]
  27. Gnedenko, B.V.; Korolev, V.Y. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
  28. Kapur, J.N. Maximum Entropy Models in Science and Engineering; Wiley: New York, NY, USA, 1990. [Google Scholar]
  29. Korolev, V.Y.; Gorshenin, A.K. Probability models and statistical tests for extreme precipitation based on generalized negative binomial distributions. Mathematics 2020, 8, 604. [Google Scholar] [CrossRef] [Green Version]
  30. Korolev, V.Y.; Zeifman, A.I. Convergence of statistics constructed from samples with random sizes to the Linnik and Mittag-Leffler distributions and their generalizations. J. Korean Stat. Soc. 2017, 46, 161–181. [Google Scholar] [CrossRef]
  31. Korolev, V.Y. Some properties of univariate and multivariate exponential power distributions and related topics. Mathematics 2020, 8, 1918. [Google Scholar] [CrossRef]
  32. Lomax, K.S. Business failures. Another example of the analysis of failure data. J. Amer. Statist. Assoc. 1954, 49, 847–852. [Google Scholar] [CrossRef]
  33. Linnik, Y.V. Linear forms and statistical criteria, I, II. Sel. Transl. Math. Stat. Probab. 1963, 3, 41–90. (Original paper appeared in: Ukrainskii Matematicheskii Zhournal 1953, 5, 207–243, 247–290.) [Google Scholar]
  34. Khokhlov, Y.; Korolev, V.; Zeifman, A. Multivariate scale-mixed stable distributions and related limit theorems. Mathematics 2020, 8, 749–777. [Google Scholar] [CrossRef]
  35. Kozubowski, T.J. Mixture representation of Linnik distribution revisited. Stat. Probab. Lett. 1998, 38, 157–160. [Google Scholar] [CrossRef]
  36. Gorenflo, R.; Kilbas, A.A.; Mainardi, F.; Rogosin, S.V. Mittag-Leffler Functions, Related Topics and Applications; Springer: Berlin, Germany; New York, NY, USA, 2014. [Google Scholar]
  37. Tucker, H.G. Convolutions of distributions attracted to stable laws. Ann. Math. Stat. 1968, 39, 1381–1390. [Google Scholar] [CrossRef]
