Article

Characterization of Probability Distributions via Functional Equations of Power-Mixture Type

by Chin-Yuan Hu, Gwo Dong Lin and Jordan M. Stoyanov

1 National Changhua University of Education, Changhua 50058, Taiwan
2 Social and Data Science Research Center, Hwa-Kang Xing-Ye Foundation, Taipei 10659, Taiwan
3 Institute of Statistical Science, Academia Sinica, Taipei 11529, Taiwan
4 Institute of Mathematics & Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(3), 271; https://doi.org/10.3390/math9030271
Submission received: 13 October 2020 / Revised: 22 January 2021 / Accepted: 26 January 2021 / Published: 29 January 2021
(This article belongs to the Special Issue Characterization of Probability Distributions)

Abstract: We study power-mixture type functional equations in terms of Laplace–Stieltjes transforms of probability distributions on the right half-line $[0, \infty)$. These equations arise when studying distributional equations of the type $Z \stackrel{d}{=} X + TZ$, where the random variable $T \ge 0$ has a known distribution, while the distribution of the random variable $Z \ge 0$ is a transformation of that of $X \ge 0$, and we want to find the distribution of $X$. We provide necessary and sufficient conditions for such functional equations to have unique solutions. The uniqueness is equivalent to a characterization property of a probability distribution. We present results that are either new or extend and improve previous results about functional equations of compound-exponential and compound-Poisson types. In particular, we give another affirmative answer to a question posed by J. Pitman and M. Yor in 2003. We provide explicit illustrative examples and deal with related topics.

1. Introduction

We deal with probability distributions on the right half-line $[0, \infty)$ and their characterization properties expressed in the form of distributional equations of the type $Z \stackrel{d}{=} X + TZ$, where the random variable $T \ge 0$ has a known distribution, the distribution of the random variable $Z \ge 0$ is a transformation of that of $X \ge 0$, and we want to find the distribution of $X$. By using the Laplace–Stieltjes (LS) transforms of the distributions of the random variables involved, we transfer such a distributional equation into a functional equation of a specific type. Our goal is to provide necessary and sufficient conditions for such a functional equation to have a unique solution. The uniqueness of the solution is equivalent to a characterization property of a probability distribution.
It is worth mentioning that the topic of distributional equations has been intensively studied over the last few decades. There are excellent sources; among them are the recent books by Buraczewski, Damek, and Mikosch [1] and Iksanov [2]. For good reasons, the phrase “The equation $X = AX + B$” is included as a subtitle of [1]; this distributional equation is studied in [2] from different perspectives. Such equations are called “fixed-point equations”; they arise as limits when studying autoregressive sequences in economics and actuarial modeling, and the “fixed point” (the unique solution) is related to the so-called perpetuities. These books contain a detailed analysis of diverse stochastic models and a variety of results and methods. Besides the authors of the two books, essential contributions in this area have been made by many scientists, to list here only a few names: H. Kesten, C.M. Goldie, W. Vervaat, P. Embrechts, Z. Jurek, G. Alsmeyer, G. Letac, and J. Wesolowski. Much more can be found in the books [1,2] cited above and also in the book by Kagan, Linnik, and Rao [3].
In the present paper, we study a wide class of power-mixture functional equations for the LS transforms of probability distributions. In particular, equations of compound-exponential and compound-Poisson types, among others, fall into this class. On the other hand, the related Poincaré-type functional equations have been studied in [4] and, recently, in [5]; see also the references therein.
The power-mixture functional equations arise when studying power-mixture transforms involving two sii-processes. Here, the abbreviation “sii-process” stands for a stationary-independent-increments stochastic process; think, for example, of the Lévy processes. Consider a continuous-time sii-process $(X_1(t))_{t \ge 0}$, and let $F_{1,t}$ be the (marginal) distribution of $X_1(t)$; we write this as $X_1(t) \sim F_{1,t}$. Moreover, let $X_1 := X_1(1) \ge 0$ be the generating random variable for the process, so $X_1 \sim F_1 := F_{1,1}$ uniquely determines the distribution of the process $(X_1(t))_{t \ge 0}$ at any time $t$. Thus, we have the multiplicative semigroup $(\hat{F}_{1,t}(s))_{t \ge 0}$ satisfying the power relation

$\hat{F}_{1,t}(s) = (\hat{F}_1(s))^t, \quad s, t \ge 0.$ (1)

Here $\hat{F}_{1,t}$ is the LS transform of the distribution $F_{1,t}$ of $X_1(t)$:

$\hat{F}_{1,t}(s) = E[e^{-sX_1(t)}] = \int_0^\infty e^{-sx}\,dF_{1,t}(x), \quad s \ge 0$

(see, e.g., [6] (Chapter I)).

Let further $(X_2(t))_{t \ge 0}$, independent of $(X_1(t))_{t \ge 0}$, be another continuous-time sii-process with generating random variable $X_2 := X_2(1) \ge 0$, and let $X_2(t) \sim F_{2,t}$, $X_2 \sim F_2 := F_{2,1}$. Now we can consider the composition process $(X(t))_{t \ge 0} := (X_1(X_2(t)))_{t \ge 0}$, which is the subordination of the process $(X_1(t))_{t \ge 0}$ to the process $(X_2(t))_{t \ge 0}$. The generating random variable for $(X(t))_{t \ge 0}$ is $X := X(1) = X_1(X_2(1)) \sim F$. In view of Equation (1), the distribution $F$ has LS transform $\hat{F}$, which is of the power-mixture type (in short, a power-mixture transform) and satisfies the following relations:

$\hat{F}(s) := E[e^{-sX}] = \int_0^\infty E[e^{-sX_1(y)}]\,dF_2(y) = \int_0^\infty (\hat{F}_1(s))^y\,dF_2(y)$ (2)

$= \int_0^\infty \exp(-y[-\log\hat{F}_1(s)])\,dF_2(y) = \hat{F}_2(-\log\hat{F}_1(s)), \quad s \ge 0.$ (3)
From now on, we will focus mainly on the power-mixture transforms (2) or (3). The brief illustration involving two sii-processes is just one of the motivations: we now require only $X_1 \sim F_1$ to be infinitely divisible, without asking this property of $X_2 \sim F_2$. For such distributions $F$ with elegant LS transforms, see [6] (Chapter III), as well as [7].

If $X_2 \sim F_2$, where $F_2 = \mathrm{Exp}(1)$ is the standard exponential distribution, $F_2(y) = 1 - e^{-y}$, $y \ge 0$, with LS transform $\hat{F}_2(s) = 1/(1+s)$, $s \ge 0$, then the generating distribution $F$ for the composition process $(X(t))_{t \ge 0}$ reduces to the so-called compound-exponential distribution, whose LS transform (for short, compound-exponential transform) is

$\hat{F}(s) = \frac{1}{1 - \log\hat{F}_1(s)}, \quad s \ge 0.$ (4)
This shows that the power-mixture transforms are more general than the compound-exponential ones. The latter case, however, is important by itself, and it has been studied in [8].
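For a quick numerical illustration of the constructions (2)–(4), here is a minimal Monte Carlo sketch (our own illustration; the concrete choice of ingredients is assumed, not taken from the paper): take $X_1$ to be a rate-$\lambda$ Poisson process, whose marginals are infinitely divisible, and $X_2 \sim \mathrm{Exp}(1)$, so that $-\log\hat{F}_1(s) = \lambda(1 - e^{-s})$ and formula (4) can be checked against simulated values of $X = X_1(X_2)$.

```python
import numpy as np

# Monte Carlo sketch of the subordination construction (2)-(4), with the
# assumed choice: X1 = rate-lam Poisson process, X2 ~ Exp(1).
rng = np.random.default_rng(0)
lam, s, n = 2.0, 0.7, 10**6

x2 = rng.exponential(1.0, size=n)   # Exp(1) random "time" X2
x = rng.poisson(lam * x2)           # X1(X2): Poisson process stopped at X2

# Empirical LS transform E[exp(-s X)] vs. formula (4) with
# Fhat1(s) = exp(lam*(exp(-s) - 1)), i.e. -log Fhat1(s) = lam*(1 - exp(-s)).
empirical = np.exp(-s * x).mean()
theoretical = 1.0 / (1.0 + lam * (1.0 - np.exp(-s)))
print(empirical, theoretical)       # should agree to about 3 decimal places
```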
When the random variable $X_1 \sim F_1$ is actually related to (or constructed from) the variable $X \sim F$, the LS transform $\hat{F}_1$ is a function of the LS transform $\hat{F}$. Hence, the distribution $F$ (equivalently, its LS transform $\hat{F}$) can be considered as a solution to the functional Equations (2)–(4). Since each of these equations is related to a distributional equation, as soon as we have a unique solution (a “fixed point”), this provides a characterization property of the corresponding distribution.

Our main purpose in this paper is to provide necessary and sufficient conditions for the functional equations in question to have unique distributional solutions. We do this under quite general conditions, one of which is to require finite variance. We exhibit new results, some of which either extend or improve previous results for functional equations of compound-exponential and compound-Poisson types. In particular, we provide another affirmative answer to a question posed in [7] regarding the distributional equation $Z \stackrel{d}{=} X + TZ$. This question and the answer were first given in [9,10]. Our arguments are different; details are given in Example 2 below. Functional equations of other types are also studied.

In Section 2, we formulate the problem and state the main results and corollaries. The results are illustrated in Section 3 by examples that fit the problem well. Section 4 contains a series of lemmas, which we need in Section 5 to prove the main theorems. We conclude in Section 6 with comments and challenging questions. The list of references includes significant works, all related to our study.

2. Formulation of the Problem and Main Results

Let $X$ be a non-negative random variable with distribution $F$ and mean $\mu = E[X]$, which is a finite positive number, that is, $\mu \in (0, \infty)$. Starting with $X \sim F$, we will construct an infinitely divisible random variable $X_1 \sim F_1$ to be used in Equation (2). Consider three non-negative random variables and their distributions as follows: $T \sim F_T$, $A \sim F_A$, $B \sim F_B$. Suppose further that $Z$ is a random variable, independent of $T$, with the length-biased distribution $F_Z$ induced by $F$, namely,

$F_Z(z) = \frac{1}{\mu}\int_0^z x\,dF(x), \quad z \ge 0.$ (5)
We also involve the scale-mixture random variable $TZ \sim F_{TZ}$. We are now prepared to define the following two functions in terms of LS transforms:

$\sigma(s) := \mu\int_0^s \hat{F}_{TZ}(x)\,dx = \int_0^\infty \frac{1 - \hat{F}(ts)}{t}\,dF_T(t), \quad s \ge 0,$ (6)

$\sigma_B(s) := \int_0^s \hat{F}_B(t)\,dt, \quad s \ge 0.$ (7)
Notice that $\sigma(\cdot)$ and $\sigma_B(\cdot)$ are Bernstein functions: their first derivatives are completely monotone functions, by definition; see, e.g., [11]. The function $\sigma$ in (6) will play a crucial role in this paper, and the integrand $(1 - \hat{F}(ts))/t$ is defined for $t = 0$ by continuity to be equal to $\mu s$. The second equality in (6) can be verified by differentiating both of its sides with respect to $s$ and using the following facts:

$\hat{F}_Z(s) = E[e^{-sZ}] = -\frac{\hat{F}'(s)}{\mu}, \qquad \int_0^s \hat{F}_Z(x)\,dx = \frac{1 - \hat{F}(s)}{\mu}, \quad s \ge 0.$
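As a sanity check of the two expressions in (6), the following sketch (our illustration; the choices $F = \mathrm{Exp}(1)$ and $T \sim \mathrm{Uniform}[0,1]$ are assumed for concreteness) evaluates both integrals numerically. Here $Z$ is length-biased $\mathrm{Exp}(1)$, i.e., $\mathrm{Gamma}(2,1)$, so $\hat{F}_Z(u) = (1+u)^{-2}$, and both forms reduce to $\sigma(s) = \log(1+s)$.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the two expressions for sigma in (6), for the
# illustrative choice F = Exp(1) (mu = 1) and T ~ Uniform[0,1].
mu, s = 1.0, 2.5

# First form: mu * int_0^s Fhat_{TZ}(x) dx, where Z ~ Gamma(2,1) is the
# length-biased Exp(1) and Fhat_{TZ}(x) = int_0^1 (1 + x*t)**-2 dt.
Fhat_TZ = lambda x: quad(lambda t: (1.0 + x * t) ** -2, 0.0, 1.0)[0]
first = mu * quad(Fhat_TZ, 0.0, s)[0]

# Second form: int_0^1 (1 - Fhat(t*s))/t dt with Fhat(u) = 1/(1+u).
second = quad(lambda t: (1.0 - 1.0 / (1.0 + t * s)) / t, 0.0, 1.0)[0]

print(first, second, np.log(1.0 + s))  # all three coincide: sigma(s) = log(1+s)
```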
Recall that the composition of two Bernstein functions is a Bernstein function; hence, this is so for $\sigma_B \circ \sigma$, the composition of the functions in (6) and (7). We also need the “simple” function $\rho(s) := e^{-s}$, $s \ge 0$, which is the LS transform of the degenerate random variable at the point 1, and we use the fact that it is completely monotone. Therefore, we can consider the infinitely divisible random variable $X_1 \sim F_1$ (in Equation (1)) with LS transform of compound-Poisson type:

$\hat{F}_1(s) = \rho((\sigma_B \circ \sigma)(s)) = \exp(-\sigma_B(\sigma(s))), \quad s \ge 0.$ (8)
Such a choice is appropriate in view of Lemmas 1 and 2 in Section 4. Clearly, $\hat{F}_1$ is a function of $F$, $F_T$, and $F_B$. Let us formulate our main results and some corollaries.
Theorem 1.
Under the above setting, we have

$0 \le E[T] < 1, \quad E[A] = 1, \quad E[A^2] < \infty \quad and \quad 0 \le E[B] < \infty$ (9)

if and only if the functional equation of power-mixture type

$\hat{F}(s) = \int_0^\infty \{\exp(-\sigma_B(\sigma(s)))\}^a\,dF_A(a), \quad s \ge 0,$ (10)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{\mathrm{Var}[A] + E[B] + E[T]}{1 - E[T]}\,\mu^2.$ (11)
If we impose a condition on the variable B and use a.s. for “almost surely”, Theorem 1 reduces as follows.
Corollary 1.
In addition to the above setting, let $B = 0$ a.s. Then we have

$0 \le E[T] < 1, \quad E[A] = 1 \quad and \quad E[A^2] < \infty$

if and only if the functional equation of power-mixture type

$\hat{F}(s) = \int_0^\infty \exp(-a\,\sigma(s))\,dF_A(a), \quad s \ge 0,$

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{\mathrm{Var}[A] + E[T]}{1 - E[T]}\,\mu^2.$
If we impose also a condition on A, Corollary 1 further reduces to the following.
Corollary 2.
In addition to the setting of Theorem 1, let $A = 1$ a.s. and $B = 0$ a.s. Then

$0 \le E[T] < 1$

if and only if the functional equation of compound-Poisson type

$\hat{F}(s) = \exp(-\sigma(s)), \quad s \ge 0,$

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{E[T]}{1 - E[T]}\,\mu^2.$
Here is a case of a “nice” proper random variable $A$: let $A \sim \mathrm{Exp}(1)$, so $F_A(y) = 1 - e^{-y}$, $y \ge 0$. Corollary 1 now takes the following form.
Corollary 3.
Let $X \sim F$ have mean $\mu \in (0, \infty)$, let $B = 0$ a.s. and $A \sim \mathrm{Exp}(1)$, and let $T$ be a non-negative random variable. Then

$0 \le E[T] < 1$

if and only if the functional equation of compound-exponential type

$\hat{F}(s) = (1 + \sigma(s))^{-1}, \quad s \ge 0,$ (12)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{1 + E[T]}{1 - E[T]}\,\mu^2.$ (13)
Here is another particular but interesting case.
Corollary 4.
In addition to the setting of Theorem 1, suppose that $T = p$ a.s. for some fixed number $p \in (0, 1)$ and that $B = 0$ a.s. Then we have

$E[A] = 1 \quad and \quad E[A^2] < \infty$

if and only if the functional equation

$\hat{F}(s) = \int_0^\infty \exp\left(-a\,\frac{1 - \hat{F}(ps)}{p}\right)dF_A(a), \quad s \ge 0,$

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{\mathrm{Var}(A) + p}{1 - p}\,\mu^2.$
We now return to the construction of the infinitely divisible LS transform $\hat{F}_1$ in (8). Using instead the completely monotone function $\rho(s) = 1/(1 + \lambda s)$, $s \ge 0$, which corresponds to $\mathrm{Exp}(\lambda)$, we have the following LS transform:

$\hat{F}_1(s) = \rho((\sigma_B \circ \sigma)(s)) = \frac{1}{1 + \lambda\,\sigma_B(\sigma(s))}, \quad s \ge 0,$
and here is the next result.
Theorem 2.
Suppose, as before, that $X \sim F$ is a non-negative random variable with mean $\mu \in (0, \infty)$. Let further $T$, $A$, and $B$ be three non-negative random variables. Then, for a fixed constant $\lambda > 0$, we have

$0 \le E[T] < 1, \quad E[A] = 1/\lambda, \quad E[A^2] < \infty \quad and \quad 0 \le E[B] < \infty,$ (14)

if and only if the functional equation of power-mixture type

$\hat{F}(s) = \int_0^\infty (1 + \lambda\,\sigma_B(\sigma(s)))^{-a}\,dF_A(a), \quad s \ge 0,$ (15)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{\lambda^2\,\mathrm{Var}[A] + \lambda + E[B] + E[T]}{1 - E[T]}\,\mu^2.$ (16)
Exchanging the roles of the arguments a and λ in Theorem 2 leads to the following.
Theorem 3.
Consider the non-negative random variables $X, T, B, \Lambda$, where $X \sim F$ has mean $\mu \in (0, \infty)$. Then, for an arbitrary constant $a > 0$, we have

$0 \le E[T] < 1, \quad E[\Lambda] = 1/a, \quad E[\Lambda^2] < \infty \quad and \quad 0 \le E[B] < \infty,$ (17)

if and only if the functional equation

$\hat{F}(s) = \int_0^\infty (1 + \lambda\,\sigma_B(\sigma(s)))^{-a}\,dF_\Lambda(\lambda), \quad s \ge 0,$ (18)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance of the form

$\mathrm{Var}[X] = \frac{a^2\,\mathrm{Var}[\Lambda] + a\,E[\Lambda^2] + E[B] + E[T]}{1 - E[T]}\,\mu^2.$ (19)
Keeping both random variables A and Λ in Theorems 2 and 3 (rather than constants) yields the following general result. For simplicity, A and Λ below are assumed to be independent.
Theorem 4.
Let $X, T, A, \Lambda$ and $B$ be non-negative random variables, where $X \sim F$ has mean $\mu \in (0, \infty)$. We also require $A$ and $\Lambda$ to be independent. Then we have

$0 \le E[T] < 1, \quad E[A\Lambda] = 1, \quad E[A^2] < \infty, \quad E[\Lambda^2] < \infty \quad and \quad 0 \le E[B] < \infty$ (20)

if and only if the functional equation

$\hat{F}(s) = \int_0^\infty\!\!\int_0^\infty (1 + \lambda\,\sigma_B(\sigma(s)))^{-a}\,dF_A(a)\,dF_\Lambda(\lambda), \quad s \ge 0,$ (21)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{\mathrm{Var}[A\Lambda] + E[A\Lambda^2] + E[B] + E[T]}{1 - E[T]}\,\mu^2.$ (22)
Clearly, when $\Lambda = \lambda = \mathrm{const}$ a.s., Equations (20)–(22) reduce to Equations (14)–(16), respectively, while if $A = a = \mathrm{const}$ a.s., Equations (20)–(22) reduce to Equations (17)–(19), accordingly. This is why in Section 5 we omit the proofs of Theorems 2 and 3; however, we provide a detailed proof of the more general Theorem 4.
Finally, let us involve the Riemann zeta function, defined as usual by

$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}, \quad s > 1.$

For any $a > 1$, the function $\rho(s) := \zeta(a+s)/\zeta(a)$, $s \ge 0$, is the LS transform of a probability distribution on $[0, \infty)$ of Riemann-zeta type (because $\rho(it)$ is the characteristic function of the Riemann-zeta distribution on $(-\infty, 0]$). Remarkably, it is infinitely divisible (see [12] (Corollary 1)). We have the following result, which is in the spirit of the previous theorems; however, it is interesting on its own.
Theorem 5.
Suppose that $X$, $T$ and $\Lambda$ are non-negative random variables, where $X \sim F$ has mean $\mu \in (0, \infty)$. Then, for any fixed number $a > 1$, we have

$0 \le E[T] < 1, \quad E[\Lambda] = -\frac{\zeta(a)}{\zeta'(a)} \quad and \quad E[\Lambda^2] < \infty,$ (23)

if and only if the functional equation

$\hat{F}(s) = \frac{1}{\zeta(a)}\int_0^\infty \zeta(a + \lambda\,\sigma(s))\,dF_\Lambda(\lambda), \quad s \ge 0,$ (24)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance. Moreover,

$\mathrm{Var}[X] = \frac{\zeta''(a)\,E[\Lambda^2] - \zeta(a) + \zeta(a)\,E[T]}{\zeta(a)\,(1 - E[T])}\,\mu^2.$ (25)

3. Examples

We give some examples to illustrate the use of the above results. The first two examples improve Theorems 1.1 and 1.3 of [8]. We use the notation $\stackrel{d}{=}$ in its usual meaning of equality in distribution.
Example 1.
Let $0 \le X \sim F$ with mean $\mu \in (0, \infty)$, and let $T$ be a non-negative random variable. Assume that the random variable $Z \ge 0$ has the length-biased distribution (5) induced by $F$, and that $X_1, X_2$ are two random variables having the same distribution $F$. Assume further that all the random variables $Z, T, X_1, X_2$ are independent. Then

$0 \le E[T] < 1$

if and only if the distributional equation

$Z \stackrel{d}{=} X_1 + X_2 + TZ$ (26)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance as expressed by (13).
All this is because the distributional Equation (26) is equivalent to the functional Equation (12) expressed in terms of the LS transform $\hat{F}$. Let us give details. We rewrite Equation (26) as follows:

$\hat{F}_Z(s) = (\hat{F}(s))^2\,\frac{\sigma'(s)}{\mu}, \quad s \ge 0.$

By using the identity $\hat{F}_Z(s) = -\hat{F}'(s)/\mu$, the above relation is equivalent to

$\frac{d}{ds}(\hat{F}(s))^{-1} = \sigma'(s), \quad s \ge 0.$

This means that indeed Equation (12) holds true, in view of the facts that $\hat{F}(0) = 1$ and $\sigma(0) = 0$.
Let us discuss two specific choices of $T$, each one leading to an interesting conclusion.
(a) When $T = 0$ a.s., we have, by definition, that $\sigma(s) = \mu s$, $s \ge 0$, and hence, by (12), $\hat{F}(s) = 1/(1 + \sigma(s)) = 1/(1 + \mu s)$, $s \ge 0$. Equivalently, $F$ is an exponential distribution with mean $\mu$. On the other hand, Equation (26) reduces to

$Z \stackrel{d}{=} X_1 + X_2.$

Therefore, this equation yields a characterization of the exponential distribution. The explicit formulation is:

The convolution of an underlying distribution $F$ with itself is equal to the length-biased distribution induced by $F$ if and only if $F$ is an exponential distribution.
(b) More generally, if $T = p$ a.s. for some fixed number $p \in [0, 1)$, then the unique solution $X \sim F$ to Equation (26) is the following explicit mixture distribution:

$F(x) = p + (1 - p)(1 - e^{-\beta x}), \quad x \ge 0, \ \text{where } \beta = (1 - p)/\mu.$
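This solution is easy to verify numerically; the following sketch (with illustrative, assumed values of $\mu$ and $p$) confirms that the LS transform of the mixture satisfies Equation (12) with $\sigma(s) = (1 - \hat{F}(ps))/p$ to machine precision.

```python
import numpy as np

# Check case (b): for T = p a.s., the mixture (atom p at 0 plus weight (1-p)
# on Exp(beta), beta = (1-p)/mu) solves Fhat = 1/(1 + sigma) with
# sigma(s) = (1 - Fhat(p*s))/p. Parameter values are illustrative only.
mu, p = 1.5, 0.4
beta = (1.0 - p) / mu
s = np.linspace(0.0, 20.0, 401)

Fhat = lambda u: p + (1.0 - p) * beta / (beta + u)    # LS transform of the mixture
sigma = (1.0 - Fhat(p * s)) / p
print(np.max(np.abs(Fhat(s) - 1.0 / (1.0 + sigma))))  # ~1e-16 (machine precision)
```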
Example 2.
As in Example 1, we consider two non-negative random variables, $T$ and $X$, where $X \sim F$ has mean $\mu \in (0, \infty)$. Assume that the random variable $Z \ge 0$ has the length-biased distribution (5) induced by $F$, and that the random variables $X, T, Z$ are independent. Then

$0 \le E[T] < 1$

if and only if the distributional equation

$Z \stackrel{d}{=} X + TZ$ (27)

has exactly one solution $X \sim F$ with mean $\mu$ and finite variance

$\mathrm{Var}[X] = \frac{E[T]}{1 - E[T]}\,\mu^2.$
Notice that the finding in this example is another affirmative answer to a question posed by Pitman and Yor in [7] (p. 320). The question can be read (in our format) as follows:

Given a random variable $T \sim F_T$, does there exist a random variable $X \sim F$ (with unknown $F$) such that Equation (27) is satisfied, with $Z$ having the length-biased distribution induced by $F$?

In order to see that the answer is affirmative, we note that the distributional Equation (27) is equivalent to the following functional equation (by arguments as in Example 1):

$\hat{F}(s) = \exp(-\sigma(s)), \quad s \ge 0.$ (28)

Clearly, Equation (28) is a special case of Equation (10) with degenerate random variables $A = 1$ a.s. and $B = 0$ a.s. Therefore, given an arbitrary random variable $0 \le T \sim F_T$ with $0 \le E[T] < 1$, Equation (27) determines uniquely the corresponding underlying distribution $F$ of $X$. Moreover, $X$ has mean $\mu$ and variance $\mathrm{Var}[X] = \mu^2\,E[T]/(1 - E[T])$, as prescribed.
Let us mention that A. Iksanov was the first to give an affirmative answer to the question of J. Pitman and M. Yor; see [2,9,10]. His conclusion and the above conclusion are partly similar; however, his conditions and arguments are different from ours.

For example, in [9] it is assumed that $T$ is strictly positive, the expectation $E[\log T]$ exists (finite or infinite) and $\mu \in (0, \infty)$, and it is proved that there exists a unique solution $F$ (to Equation (27)) with mean $\mu$ if and only if $E[\log T] < 0$. There is no conclusion/condition about the variance of $X \sim F$.

In our condition $0 \le E[T] < 1$, we do not exclude the possibility that $T$ has a mass at 0, that is, $P[T = 0] > 0$. Actually, it can be shown that if strictly $T > 0$ and $E[T] \in (0, 1)$, then $E[\log T] < 0$. This is so because the function $g(t) = t - 1 - \log t \ge 0$ for $t > 0$. Thus, if $T > 0$, our condition and conclusion are stronger than those of Iksanov.
Let us consider four cases for the random variable $T$. (a) If $T = 0$ a.s., Equation (27) reduces to $Z \stackrel{d}{=} X$. It tells us that the length-biased distribution $F_Z$ is equal to the underlying distribution $F$. This distributional equation characterizes the degenerate distribution $F$ concentrated at the point $\mu$, because Equation (28) accordingly reduces to $\hat{F}(s) = e^{-\mu s}$, $s \ge 0$.
(b) If $T$ is a continuous random variable uniformly distributed on the interval $[0, 1]$, Equation (27) characterizes the exponential distribution with mean $\mu$; see also [7] (p. 320). Indeed, by using the identity (easy to check by differentiating both sides in $s$)

$\log(1 + s) = \int_1^\infty \frac{s}{x(x + s)}\,dx, \quad s \ge 0,$

we see that $\hat{F}(s) = 1/(1 + \mu s)$, $s \ge 0$, satisfies the functional Equation (28), and we refer to the fact that the LS transform of a distribution $F$ is $\hat{F}(s) = 1/(1 + \mu s)$, $s \ge 0$, if and only if $F$ is $\mathrm{Exp}(\mu)$.
More generally, if $T$ has a uniform distribution on the interval $[p, 1]$ for some $p \in [0, 1)$, then the unique solution $F$ to the functional Equation (28) is the following explicit mixture distribution:

$F(x) = p + (1 - p)(1 - e^{-\beta x}), \quad x \ge 0, \ \text{where } \beta = (1 - p)/\mu.$
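A short numerical check of this claim (our sketch, with assumed parameter values): for $T \sim \mathrm{Uniform}[p, 1]$, one has $\sigma(s) = \frac{1}{1-p}\int_p^1 \frac{1 - \hat{F}(ts)}{t}\,dt$, and $\exp(-\sigma(s))$ indeed reproduces the LS transform of the mixture.

```python
import numpy as np
from scipy.integrate import quad

# Check that the mixture solves the compound-Poisson Equation (28)
# when T ~ Uniform[p,1]. Parameter values are illustrative only.
mu, p = 1.0, 0.3
beta = (1.0 - p) / mu
Fhat = lambda u: p + (1.0 - p) * beta / (beta + u)

for s in (0.5, 2.0, 10.0):
    sigma = quad(lambda t: (1.0 - Fhat(t * s)) / t, p, 1.0)[0] / (1.0 - p)
    print(np.exp(-sigma), Fhat(s))   # the two columns agree
```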
(c) If we assume now that $T$ has a beta distribution $F_T(x) = 1 - (1 - x)^a$, $x \in (0, 1)$, with parameter $a > 0$, then the unique solution $X \sim F$ to the distributional Equation (27) will be the Gamma distribution $F = F_{a,b}$ with density

$f_{a,b}(x) = \frac{1}{\Gamma(a)\,b^a}\,x^{a-1}\,e^{-x/b}, \quad x > 0.$

Here $b = \mu/a$, and to reach this conclusion we use the following identity: for $a > 0$, $b > 0$,

$\log(1 + bs) = \int_0^1 \frac{(1 - t)^{a-1}}{t}\,\left[1 - (1 + bst)^{-a}\right]dt, \quad s \ge 0,$

or, equivalently,

$\int_0^1 \frac{a\,(1 - t)^{a-1}}{(1 + bst)^{a+1}}\,dt = \frac{1}{1 + bs}, \quad s \ge 0$

(see, e.g., [13] (Formula 8.380(7), p. 917)).
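The identity can also be confirmed numerically; the sketch below (illustrative values of $a$ and $\mu$) integrates $\sigma(s)$ for the beta-distributed $T$ and checks that $\exp(-\sigma(s))$ returns the Gamma transform $(1 + bs)^{-a}$.

```python
import numpy as np
from scipy.integrate import quad

# Check case (c): with F_T(x) = 1-(1-x)^a (density a*(1-t)^(a-1) on (0,1)),
# the Gamma(a,b) transform Fhat(s) = (1 + b*s)^(-a), b = mu/a, solves (28).
a, mu = 2.5, 1.2
b = mu / a
Fhat = lambda u: (1.0 + b * u) ** -a

for s in (0.5, 3.0, 10.0):
    sigma = quad(lambda t: (1.0 - Fhat(t * s)) / t * a * (1.0 - t) ** (a - 1),
                 0.0, 1.0)[0]
    print(np.exp(-sigma), Fhat(s))   # exp(-sigma(s)) reproduces (1+b*s)^(-a)
```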
(d) Take a particular value of $\mu$, e.g., $\mu = 2/3$, and assume that $T$, with values in $(0, 1)$, has the density $g(t) = 1/\sqrt{t} - 1$, $t \in (0, 1)$. Then Equation (27) has a unique solution $X \sim F$ whose LS transform is $\hat{F}(s) = 2s/(\sinh\sqrt{2s})^2$, $s \ge 0$. Notice that $\hat{F}$ is expressed in terms of the hyperbolic sine function; see [7] (p. 318). In general, if $\mu \in (0, \infty)$ is an arbitrary number (not exactly specified as above) and $T$ is the same random variable, then the unique solution $X \sim F$ has the following LS transform: $\hat{F}(s) = 3\mu s/(\sinh\sqrt{3\mu s})^2$, $s \ge 0$.
Notice that Equation (27) can also be solved by fitting it into the Poincaré-type functional equation considered in [5] (Theorem 4). This idea, however, requires involving the third moment of the underlying distribution $F$.
On the other hand, we can replace $Z$ in Equation (27) by a random variable $X^*$ which obeys the equilibrium distribution $F^*$ induced by $F$. Recall that

$F^*(x) = \frac{1}{\mu}\int_0^x \bar{F}(t)\,dt, \quad x \ge 0,$ (29)

where $\bar{F}(t) = P[X > t] = 1 - F(t)$, $t \ge 0$. In such a case, we obtain an interesting characterization result, which is the content of the next example.
Example 3.
Let $0 \le X \sim F$ have mean $\mu \in (0, \infty)$, and let $T$ be a non-negative random variable. Assume that the random variable $X^*$ obeys the equilibrium distribution $F^*$ defined in (29). Further, assume that the random variables $X, T, X^*$ are independent. Then

$0 \le E[T] < 1$

if and only if the distributional equation

$X^* \stackrel{d}{=} X + TX^*$ (30)

has exactly one solution $X \sim F$ with mean $\mu$ and a finite variance of the form (13).
Indeed, this is true because the distributional Equation (30) is equivalent to the functional Equation (12). The latter follows easily if we rewrite Equation (30) in terms of LS transforms:

$\hat{F}^*(s) = \hat{F}(s)\,E[e^{-sTX^*}] = \hat{F}(s)\int_0^\infty E[e^{-stX^*}]\,dF_T(t) = \hat{F}(s)\int_0^\infty \hat{F}^*(st)\,dF_T(t), \quad s \ge 0.$ (31)

Then, recall that $\hat{F}^*(s) = (1 - \hat{F}(s))/(\mu s)$, $s > 0$ (see Lemma 8(ii) below). Plugging this identity into (31) and simplifying leads to Equation (12).
As before, letting $T = 0$ a.s. in (30), we get another characterization of the exponential distribution (because, by (12), $\hat{F}(s) = (1 + \mu s)^{-1}$, $s \ge 0$). The full statement (see also [14] (p. 63)) is:

The equilibrium distribution $F^*$ is equal to the underlying distribution $F$ if and only if $F$ is exponential.

4. Ten Lemmas

To prove the main results, we need some auxiliary statements, given here as lemmas. The first two lemmas are well known; Lemma 1 is called Bernstein's Theorem (see, e.g., [6] (p. 484) or [11] (p. 28)).
Lemma 1.
The LS transform $\hat{F}$ of a non-negative random variable $X \sim F$ is a completely monotone function on $[0, \infty)$ with $\hat{F}(0) = 1$, and vice versa.
Lemma 2.
(a) The class of Bernstein functions is closed under composition; namely, the composition of two Bernstein functions is still a Bernstein function. (b) Let $\rho$ be a completely monotone function and $\sigma$ a Bernstein function on $[0, \infty)$. Then their composition $\rho \circ \sigma$ is a completely monotone function on $[0, \infty)$.
Note that in Theorems 1 and 2 we have used two simple choices of the function $\rho$. The next two lemmas concern the contraction property of some “usual” real-valued functions of real arguments. These properties will be used later to prove the uniqueness of the solutions to the functional equations in question.
Lemma 3.
For arbitrary non-negative real numbers $a$ and $b$, we claim that

(i) $|\log(1 + a) - \log(1 + b)| \le |a - b|$;  (ii) $|e^{-a} - e^{-b}| \le |a - b|$.
Proof. 
Use the mean-value theorem and the following two facts: for $x \ge 0$,

$\frac{d}{dx}\log(1 + x) = \frac{1}{1 + x} \le 1 \qquad and \qquad \left|\frac{d}{dx}e^{-x}\right| = e^{-x} \le 1. \qquad \Box$
Lemma 4.
(i) For arbitrary real numbers $a, b \in [0, 1]$ and $t \ge 1$, we have $|a^t - b^t| \le t\,|a - b|$. (ii) For arbitrary real numbers $x, y \ge 0$ and $a > 1$, the Riemann zeta function satisfies

$|\zeta(a + x) - \zeta(a + y)| \le -\zeta'(a)\,|x - y|.$

(iii) For any real $a > 1$, we have $\zeta(a)\,\zeta''(a) > (\zeta'(a))^2$.
Proof. 
It is easy to establish claim (i); still, details can be seen in [5]. For claim (ii), we use Lemma 3(ii). Indeed,

$|\zeta(a + x) - \zeta(a + y)| = \left|\sum_{n=1}^\infty \frac{1}{n^{a+x}} - \sum_{n=1}^\infty \frac{1}{n^{a+y}}\right| \le \sum_{n=1}^\infty \frac{1}{n^a}\left|e^{-x\log n} - e^{-y\log n}\right| \le \sum_{n=1}^\infty \frac{1}{n^a}\,|x\log n - y\log n| = \sum_{n=1}^\infty \frac{\log n}{n^a}\,|x - y| = -\zeta'(a)\,|x - y|.$

We used the fact that $\zeta'(s) = -\sum_{n=1}^\infty (\log n)/n^s$ for $s > 1$. To prove claim (iii), we consider the non-negative random variable $X$ with LS transform

$\pi(s) := E[e^{-sX}] = \zeta(a + s)/\zeta(a), \quad s \ge 0.$

Then $E[X] = -\pi'(0^+) = -\zeta'(a)/\zeta(a)$ and $E[X^2] = \pi''(0^+) = \zeta''(a)/\zeta(a)$.

The required inequality follows from the fact that $\mathrm{Var}[X] = E[X^2] - (E[X])^2 > 0$. The proof is complete. □
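Claim (iii) is easy to illustrate numerically by truncating the series for $\zeta$ and its first two derivatives (a rough sketch; the truncation point is an arbitrary assumption, and the series converge slowly as $a \downarrow 1$):

```python
import numpy as np

# Illustration of Lemma 4(iii): zeta(a)*zeta''(a) > (zeta'(a))^2 for a > 1,
# using the truncated series zeta^(k)(a) = sum_n (-log n)^k / n^a.
n = np.arange(1, 1_000_001, dtype=float)
for a in (2.0, 3.0, 4.0):
    z0 = np.sum(n ** -a)                   # zeta(a)
    z1 = -np.sum(np.log(n) * n ** -a)      # zeta'(a)
    z2 = np.sum(np.log(n) ** 2 * n ** -a)  # zeta''(a)
    print(a, z0 * z2 - z1 ** 2)            # strictly positive, as claimed
```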
We now need notation for the first two moments of the random variable $X \sim F$ and a useful relation implied by the non-negativity of the variance:

$m_1 := E[X], \quad m_2 := E[X^2], \quad \text{where } m_1^2 \le m_2.$

Sometimes, instead of “first moment $m_1$”, we also use the equivalent name “mean $\mu$”.
Lemma 5.
Suppose that the non-negative random variable $X \sim F$ has a finite positive second moment. Then its LS transform $\hat{F}$ has a sharp upper bound as follows:

$\hat{F}(s) \le 1 - \frac{m_1^2}{m_2} + \frac{m_1^2}{m_2}\,e^{-(m_2/m_1)s}, \quad s \ge 0.$ (32)
For the proof of Lemma 5, we refer to [15,16,17]. It is interesting to mention that the RHS of the inequality (32) is actually the LS transform of a specific two-point random variable, say $X_0 \sim F_0$, whose first two moments are equal to $m_1$ and $m_2$. Indeed, define the values of $X_0$ and their probabilities as follows:

$P[X_0 = 0] = 1 - \frac{m_1^2}{m_2} \qquad and \qquad P\left[X_0 = \frac{m_2}{m_1}\right] = \frac{m_1^2}{m_2}.$
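For instance (our numerical illustration), for $F = \mathrm{Exp}(1)$ we have $m_1 = 1$ and $m_2 = 2$, and the bound (32) can be checked on a grid:

```python
import numpy as np

# Illustration of the sharp bound (32) for F = Exp(1): m1 = 1, m2 = 2, so the
# RHS is the LS transform of the two-point law P[X0=0] = P[X0=2] = 1/2.
s = np.linspace(0.0, 10.0, 1001)
Fhat = 1.0 / (1.0 + s)                      # LS transform of Exp(1)
bound = 0.5 + 0.5 * np.exp(-2.0 * s)        # RHS of (32) with m1 = 1, m2 = 2
print(bool(np.all(Fhat <= bound + 1e-12)))  # True: the bound holds everywhere
```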
Here is another result, Lemma 6; its proof is given in [18].
Lemma 6.
Let $0 \le X \sim F$, with $\hat{F}$ being its LS transform. Then, for each integer $n \ge 1$, the $n$th moment of $X$ is expressed by the $n$th derivative of $\hat{F}$ as follows:

$m_n := E[X^n] = \lim_{s \to 0^+}(-1)^n\,\hat{F}^{(n)}(s) = (-1)^n\,\hat{F}^{(n)}(0^+) \quad (\text{finite or infinite}).$
Let us deal again with equilibrium distributions. For a random variable $X$, $0 \le X \sim F$, with finite positive mean $\mu$ (= first moment $m_1$), we define the first-order equilibrium distribution based on $F$ by

$F_{(1)}(x) := \frac{1}{\mu}\int_0^x \bar{F}(y)\,dy, \quad x \ge 0.$

See also Equation (29), where we used the notation $F^*$. Thus $F^* = F_{(1)}$, here and in Lemma 8 below.
If we assume now that, for some $n$, $m_n = E[X^n] < \infty$, we define iteratively the equilibrium distribution $F_{(k)}$ of order $k$, for $k = 1, \dots, n$, as follows:

$F_{(k)}(x) := \frac{1}{\mu_{(k-1)}}\int_0^x \bar{F}_{(k-1)}(y)\,dy, \quad x \ge 0; \qquad \mu_{(j)} := \int_0^\infty x\,dF_{(j)}(x), \quad j = 1, \dots, k - 1.$
Thus, we start with $F_{(0)} = F$ and define $F_{(1)}, F_{(2)}, \dots$, where the $k$th-order equilibrium distribution $F_{(k)}$ is the equilibrium of the previous one, $F_{(k-1)}$, for which we need the mean value $\mu_{(k-1)}$ of the latter.

In order for the above definition to be correct, we need all the mean values $\mu_{(j)}$, $j = 1, \dots, n-1$, to be finite. This is guaranteed by the assumption that the moment $m_n = E[X^n]$ is finite; the latter implies that $\mu_{(n-1)} < \infty$, and vice versa. Moreover, all the moments $m_k$ and all the mean values $\mu_{(k)}$ for $k = 1, \dots, n-1$ are finite. These properties are summarized in Lemma 7 below, which shows an interesting relationship between the mean values $\{\mu_{(k)}\}$ and the moments $\{m_k\}$. For details see, e.g., [19] (p. 265) or [20].
Lemma 7.
Suppose that $0 \le X \sim F$ has finite positive moment $m_n = E[X^n]$ for some integer $n \ge 2$. Then, for any $k = 1, 2, \dots, n-1$, the mean value $\mu_{(k)}$ of the $k$th-order equilibrium distribution $F_{(k)}$ is well defined (finite), and moreover, for $k = 1, 2, \dots, n$, the following relations hold:

$\mu_{(k-1)} = \frac{m_k}{k\,m_{k-1}}, \quad or, \ equivalently, \quad m_k = k!\prod_{j=0}^{k-1}\mu_{(j)}, \qquad \mu_{(0)} = m_1.$
We now provide the last three lemmas; for their proofs, see [5].
Lemma 8.
Consider the non-negative random variable $X \sim F$ whose mean $\mu \in (0, \infty)$ is strictly positive and finite, and let $X^* \sim F^*$, where $F^*$ is the equilibrium distribution induced by $F$. Then, for $s > 0$, the following statements are true:

(i) $(1 - \hat{F}(s))/s = \int_0^\infty e^{-sx}(1 - F(x))\,dx$;

(ii) $\hat{F}^*(s) = (1 - \hat{F}(s))/(\mu s) \le 1$;

(iii) $(\hat{F}(s) - 1 + \mu s)/s^2 = \mu\int_0^\infty e^{-sx}(1 - F^*(x))\,dx$;

(iv) $\lim_{s \to 0^+}(1 - \hat{F}(s))/s = \mu$;

(v) $\lim_{s \to 0^+}(\hat{F}(s) - 1 + \mu s)/s^2 = \frac{1}{2}E[X^2]$ (finite or infinite).
Lemma 9.
Given a sequence of random variables $\{Y_n\}_{n=1}^\infty$, where $Y_n \ge 0$ and $Y_n \sim G_n$, we impose two assumptions: (a1) all $Y_n$, hence all $G_n$, have the same finite first two moments, that is, $E[Y_n] = m_1$, $E[Y_n^2] = m_2$ for $n = 1, 2, \dots$; (a2) the LS transforms $\{\hat{G}_n\}_{n=1}^\infty$ form a decreasing sequence of functions.

Then the following limit exists:

$\lim_{n \to \infty}\hat{G}_n(s) =: \hat{G}(s), \quad s \ge 0.$

Moreover, $\hat{G}$ is the LS transform of the distribution $G$ of a random variable $Y \ge 0$ with first moment $E[Y] = m_1$ and second moment $E[Y^2]$ belonging to the interval $[m_1^2, m_2]$.
Lemma 10.
Suppose that $W_1 \sim F_{W_1}$ and $W_2 \sim F_{W_2}$ are non-negative random variables with the same mean (same first moment) $\mu_W$, a strictly positive finite number. Consider another random variable $Z \ge 0$, where $Z \sim F_Z$ has a positive mean $\mu_Z < 1$. Assume further that the LS transforms of $W_1$ and $W_2$ satisfy the following relation:

$|\hat{F}_{W_1}(s) - \hat{F}_{W_2}(s)| \le \int_0^\infty |\hat{F}_{W_1}(ts) - \hat{F}_{W_2}(ts)|\,dF_Z(t), \quad s \ge 0.$ (33)

Then $\hat{F}_{W_1} = \hat{F}_{W_2}$, and hence $F_{W_1} = F_{W_2}$.

5. Proofs of the Main Results

We start with the proof of Theorem 1, then omit details about Theorems 2 and 3; however, we provide the proof of the more general Theorem 4. Finally, we give the proof of Theorem 5. Each of the proofs consists naturally of two steps: Step 1 (sufficiency) and Step 2 (necessity). In many places, in order to make a clear distinction between factors in long expressions, we use the dot symbol “·” for multiplication.
Proof of Theorem 1.
Step 1 (sufficiency). Suppose that Equation (10) has exactly one solution $0 \le X \sim F$ with mean $\mu = \mu_X \in (0, \infty)$ and finite variance (and hence $E[X^2] \in (0, \infty)$). Then we want to show that the conditions (9) are satisfied.
First, rewrite Equation (10) as follows:

$\hat{F}(s) = \int_0^\infty \exp\left(-a\int_0^{\sigma(s)}\hat{F}_B(t)\,dt\right)dF_A(a), \quad s \ge 0.$

Differentiating this relation twice with respect to $s$, we find, for $s > 0$, that

$\hat{F}'(s) = \int_0^\infty (-a)\exp\left(-a\int_0^{\sigma(s)}\hat{F}_B(t)\,dt\right)dF_A(a)\cdot\hat{F}_B(\sigma(s))\,\sigma'(s),$ (34)

$\hat{F}''(s) = \int_0^\infty a^2\exp\left(-a\int_0^{\sigma(s)}\hat{F}_B(t)\,dt\right)dF_A(a)\cdot(\hat{F}_B(\sigma(s))\,\sigma'(s))^2 + \int_0^\infty (-a)\exp\left(-a\int_0^{\sigma(s)}\hat{F}_B(t)\,dt\right)dF_A(a)\cdot\hat{F}_B'(\sigma(s))\,(\sigma'(s))^2 + \int_0^\infty (-a)\exp\left(-a\int_0^{\sigma(s)}\hat{F}_B(t)\,dt\right)dF_A(a)\cdot\hat{F}_B(\sigma(s))\,\sigma''(s).$ (35)
Letting $s \to 0^+$ in (34) and (35) yields, respectively,

$\hat{F}'(0^+) = \hat{F}'(0^+)\,E[A], \qquad \hat{F}''(0^+) = E[A^2]\,(\hat{F}'(0^+))^2 - E[A]\left(\hat{F}_B'(0^+)\,(\hat{F}'(0^+))^2 - \hat{F}''(0^+)\,E[T]\right)$

(here we use $\sigma'(0^+) = \mu = -\hat{F}'(0^+)$, $\hat{F}_B(0) = 1$, $\hat{F}_B'(0^+) = -E[B]$ and $\sigma''(0^+) = -E[T]\,\hat{F}''(0^+)$).
Equivalently, in view of Lemma 6, we obtain two relations:

$\mu = \mu\,E[A],$ (36)

$E[X^2] = E[A^2]\,\mu^2 + E[A]\,(E[B]\,\mu^2 + E[X^2]\,E[T]).$ (37)
Since $\mu$ and $E[X^2]$ are strictly positive and finite, we conclude from (36) and (37) that $E[A] = 1$ and that each of the quantities $E[A^2]$, $E[B]$, $E[T]$ is finite. Moreover, $E[T] \le 1$, due to (37) again. We need, however, the strict inequality $E[T] < 1$. Suppose, on the contrary, that $E[T] = 1$. Then this would imply $E[A^2] = 0$ by (37), a contradiction to the fact that $E[A] = 1$. This proves that the conditions (9) are satisfied. In addition, relation (11) for the variance $\mathrm{Var}[X]$ also follows from (36) and (37), because

$E[X^2] = \frac{E[A^2] + E[B]}{1 - E[T]}\,\mu^2.$
The sufficiency part is established.
Step 2 (necessity). Suppose now that the conditions (9) are satisfied. We will show the existence of a solution $X \sim F$ to Equation (10) with mean $\mu$ and finite variance.

To find such a solution $X \sim F$, we first define two numbers:

$m_1 = \mu \qquad and \qquad m_2 = \frac{E[A^2] + E[B]}{1 - E[T]}\,m_1^2,$ (38)

and show later that these happen to be the first two moments of the solution. Note that the denominator $1 - E[T] > 0$ by (9), and that the numbers $m_1, m_2$ do satisfy the required moment relation $m_2 \ge m_1^2$. This is true because $E[A^2] \ge (E[A])^2 = 1$, due to (9) and Lyapunov's inequality. Therefore, the RHS of (32), with $m_1, m_2$ defined in (38), is a bona fide LS transform, say $\hat{F}_0$, of a non-negative random variable $Y_0 \sim F_0$ (by Lemma 1). Namely,

$\hat{F}_0(s) = 1 - \frac{m_1^2}{m_2} + \frac{m_1^2}{m_2}\,e^{-(m_2/m_1)s}, \quad s \ge 0.$
It is easy to see that m 1 , m 2 are exactly the first two moments of Y 0 F 0 , as mentioned before.
Next, using the initial $Y_0 \sim F_0$, we define iteratively the sequence of random variables $\{Y_n\}_{n=1}^\infty$, $Y_n \sim F_n$, through their LS transforms (they are well defined due to Lemma 2):

$\hat{F}_n(s) = \int_0^\infty \exp\left(-a\int_0^{\sigma_{n-1}(s)}\hat{F}_B(t)\,dt\right)dF_A(a), \quad s \ge 0, \ n \ge 1,$ (39)

where

$\sigma_{n-1}(s) = \int_0^\infty \frac{1 - \hat{F}_{n-1}(ts)}{t}\,dF_T(t), \quad s \ge 0.$
Differentiating (39) twice with respect to $s$ and letting $s \to 0^+$, we obtain, for $n \ge 1$,

$\hat{F}_n'(0^+) = \hat{F}_{n-1}'(0^+)\,E[A],$ (40)

$\hat{F}_n''(0^+) = E[A^2]\,(\hat{F}_{n-1}'(0^+))^2 - E[A]\left(\hat{F}_B'(0^+)\,(\hat{F}_{n-1}'(0^+))^2 - \hat{F}_{n-1}''(0^+)\,E[T]\right).$ (41)
By Lemma 6, induction on $n$, and relations (40) and (41), we can show that for any $n = 1, 2, \dots$, we have $E[Y_n] = E[Y_0] = m_1$ and $E[Y_n^2] = E[Y_0^2] = m_2$ (see relation (38)). Hence

$\mathrm{Var}[Y_n] = m_2 - m_1^2 = \frac{\mathrm{Var}[A] + E[B] + E[T]}{1 - E[T]}\,m_1^2, \quad n \ge 0.$ (42)
Moreover, by Lemma 5, we first have $\hat{F}_1 \le \hat{F}_0$, and then, by the iteration (39), $\hat{F}_n \le \hat{F}_{n-1}$ for any $n \ge 2$. Thus, $\{Y_n\}_{n=0}^\infty$ is a sequence of non-negative random variables having the same first two moments $m_1, m_2$ and such that their LS transforms $\{\hat{F}_n\}$ are decreasing. Therefore, Lemma 9 applies. Denote the limit of $\{\hat{F}_n\}$, as $n \to \infty$, by $\hat{F}$. Then $\hat{F}$ will be the LS transform of the distribution $F$ of a non-negative random variable $Y$ with $E[Y] = m_1$ and $E[Y^2] \in [m_1^2, m_2]$. It follows from (39) that the limit $F$ is a solution to Equation (10) with mean $\mu = m_1$ and finite variance. Applying once again Lemma 6 to Equation (10) (with $X = Y$ and $F = F$), we conclude that $E[Y^2] = m_2$, as expressed in (38), and hence the solution $Y \sim F$ has the required variance, as in (11) or (42).
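The constructive part of the proof can be watched numerically. The sketch below (our illustration, not from the paper) runs the iteration (39) in the simplest case $A = 1$ and $B = 0$ a.s., so that (39) becomes $\hat{F}_n = \exp(-\sigma_{n-1})$, with $T \sim \mathrm{Uniform}[0, 1]$; for this $T$, the substitution $u = ts$ gives $\sigma(s) = \int_0^s (1 - \hat{F}(u))/u\,du$, and by Example 2(b) the iterates should decrease to the LS transform of $\mathrm{Exp}(\mu)$.

```python
import numpy as np

# Fixed-point iteration (39) in the compound-Poisson case A = 1, B = 0 a.s.
# (Corollary 2 / Example 2) with T ~ Uniform[0,1], on an s-grid.
mu = 1.0
s = np.linspace(0.0, 10.0, 2001)

# Y_0 from Lemma 5: two-point law with m1 = mu and m2 = 2*mu^2
# (here Var[X] = mu^2 * E[T]/(1 - E[T]) = mu^2, so m2 = 2*mu^2).
m1, m2 = mu, 2.0 * mu**2
Fhat = (1.0 - m1**2 / m2) + (m1**2 / m2) * np.exp(-(m2 / m1) * s)

for _ in range(200):
    g = np.empty_like(s)            # integrand (1 - Fhat(u))/u ...
    g[0] = mu                       # ... extended by continuity at u = 0
    g[1:] = (1.0 - Fhat[1:]) / s[1:]
    # cumulative trapezoid rule gives sigma on the whole grid at once
    sigma = np.concatenate(([0.0],
                            np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(s))))
    Fhat = np.exp(-sigma)

# should be small (limited only by the grid): iterates converge to Exp(mu)
print(np.max(np.abs(Fhat - 1.0 / (1.0 + mu * s))))
```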
Finally, we prove the uniqueness of the solution to Equation (10). Suppose, under the conditions (9), that there are two solutions, say $X \sim F$ and $Y \sim G$, each satisfying Equation (10) and each having mean equal to $\mu$ (and hence both having the same finite variance, as shown above). Then we want to show that $F = G$ or, equivalently, that $\hat{F} = \hat{G}$. To do this, we introduce two functions:

$\bar{\sigma}_F(s) = \int_0^\infty \frac{1 - \hat{F}(ts)}{t}\,dF_T(t), \qquad \bar{\sigma}_G(s) = \int_0^\infty \frac{1 - \hat{G}(ts)}{t}\,dF_T(t), \quad s \ge 0.$
Then we have, by assumption,

$\hat{F}(s) = \int_0^\infty \exp(-a\,\sigma_B(\bar{\sigma}_F(s)))\,dF_A(a), \qquad \hat{G}(s) = \int_0^\infty \exp(-a\,\sigma_B(\bar{\sigma}_G(s)))\,dF_A(a), \quad s \ge 0.$
Using Lemma 3, we have the inequalities

$|\hat{F}(s) - \hat{G}(s)| \le \int_0^\infty a\cdot\left|\int_0^{\bar{\sigma}_F(s)}\hat{F}_B(t)\,dt - \int_0^{\bar{\sigma}_G(s)}\hat{F}_B(t)\,dt\right|dF_A(a) \le E[A]\left|\int_{\bar{\sigma}_G(s)}^{\bar{\sigma}_F(s)}\hat{F}_B(t)\,dt\right| = \left|\int_{\bar{\sigma}_G(s)}^{\bar{\sigma}_F(s)}\hat{F}_B(t)\,dt\right| \le |\bar{\sigma}_F(s) - \bar{\sigma}_G(s)|, \quad s \ge 0.$
We have used the fact that $E[A] = 1$. Thus, we obtain that, for $s > 0$,

$\left|\frac{1 - \hat{F}(s)}{\mu s} - \frac{1 - \hat{G}(s)}{\mu s}\right| \le \int_0^\infty \left|\frac{1 - \hat{F}(ts)}{\mu ts} - \frac{1 - \hat{G}(ts)}{\mu ts}\right|dF_T(t).$
This relation is equivalent to another one, for the pair of equilibrium distributions $F^*$ and $G^*$, induced, respectively, by $F$ and $G$; see Lemma 8. Thus

$|\hat{F}^*(s) - \hat{G}^*(s)| \le \int_0^\infty |\hat{F}^*(ts) - \hat{G}^*(ts)|\,dF_T(t), \quad s > 0.$
This, however, is exactly relation (33). Therefore, Lemma 10 applies, because $E[T] < 1$ and both $F^*$ and $G^*$ have the same mean, by Lemma 7. Hence $\hat{F}^* = \hat{G}^*$, which in turn implies $\hat{F} = \hat{G}$, since $F$ and $G$ have the same mean (see [21] (Proposition 1)). The proof of the necessity, and hence of Theorem 1, is complete. □
Proof of Theorem 4.
Although the proof has some similarity to that of Theorem 1, there are differences, and it is given here for completeness and the reader’s convenience.
Step 1 (sufficiency). Suppose that Equation (21) has exactly one solution $0 \le X \sim F$ with mean $\mu$, a finite positive number, and finite variance (hence $E[X^2] \in (0, \infty)$). Now we want to show that all five conditions in (20) are satisfied.
Differentiating Equation (21) twice with respect to $s$, we have, for $s > 0$, the following:

$\hat{F}'(s) = \int_0^\infty\!\!\int_0^\infty (-a)\,\lambda\,(1 + \lambda\,\sigma_B(\sigma(s)))^{-(a+1)}\,dF_A(a)\,dF_\Lambda(\lambda)\cdot\sigma_B'(\sigma(s))\,\sigma'(s),$ (43)

$\hat{F}''(s) = \int_0^\infty\!\!\int_0^\infty a(a+1)\,\lambda^2\,(1 + \lambda\,\sigma_B(\sigma(s)))^{-(a+2)}\,dF_A(a)\,dF_\Lambda(\lambda)\cdot(\sigma_B'(\sigma(s))\,\sigma'(s))^2 + \int_0^\infty\!\!\int_0^\infty (-a)\,\lambda\,(1 + \lambda\,\sigma_B(\sigma(s)))^{-(a+1)}\,dF_A(a)\,dF_\Lambda(\lambda)\cdot\sigma_B''(\sigma(s))\,(\sigma'(s))^2 + \int_0^\infty\!\!\int_0^\infty (-a)\,\lambda\,(1 + \lambda\,\sigma_B(\sigma(s)))^{-(a+1)}\,dF_A(a)\,dF_\Lambda(\lambda)\cdot\sigma_B'(\sigma(s))\,\sigma''(s).$ (44)
Letting $s \to 0^+$ in (43) and (44) yields, respectively,

$\hat{F}'(0^+) = \hat{F}'(0^+)\,E[A\Lambda], \qquad \hat{F}''(0^+) = E[A(A+1)\Lambda^2]\,(\hat{F}'(0^+))^2 + E[A\Lambda]\left(E[B]\,(\hat{F}'(0^+))^2 + E[T]\,\hat{F}''(0^+)\right).$
Equivalently, by Lemma 6, we have two relations:

$\mu = \mu\,E[A\Lambda],$ (45)

$E[X^2] = E[A(A+1)\Lambda^2]\,\mu^2 + E[A\Lambda]\,(E[B]\,\mu^2 + E[X^2]\,E[T]).$ (46)
From (45) and (46) it follows that $E[A\Lambda] = 1$ and that each of the quantities $E[(A\Lambda)^2]$, $E[A^2]$, $E[\Lambda^2]$, $E[B]$ and $E[T]$ is finite; this is because $\mu$ and $E[X^2]$ are numbers in $(0, \infty)$. Moreover, $E[T] \le 1$, due to (46), and it remains to show the strict bound $E[T] < 1$. Suppose, on the contrary, that $E[T] = 1$. Then we would have $E[(A\Lambda)^2] = 0$ by (46), a contradiction to the fact that $E[A\Lambda] = 1$. Thus, we conclude that the conditions in (20) are satisfied. Besides, the expression (22) for the variance $\mathrm{Var}[X]$ also follows from (45) and (46), because

$E[X^2] = \frac{E[A(A+1)\Lambda^2] + E[B]}{1 - E[T]}\,\mu^2.$
The sufficiency part is established.
Step 2 (necessity). Suppose now that the conditions (20) are satisfied. We want to show the existence of a solution $X \sim F$ to Equation (21) with mean $\mu$ and finite variance.

Set first

$m_1 = \mu \qquad and \qquad m_2 = \frac{E[A(A+1)\Lambda^2] + E[B]}{1 - E[T]}\,m_1^2.$ (47)

As in the proof of Theorem 1, we have $1 - E[T] > 0$ by (20) and also $m_2 \ge m_1^2$, and, by using the same notations as before, we can claim the existence of a non-negative random variable $Y_0 \sim F_0$ such that the LS transform $\hat{F}_0$ is equal to the RHS of (32).
The next step is to use the initial $Y_0 \sim F_0$ and define iteratively the sequence of random variables $Y_n \sim F_n$, $n = 1, 2, \dots$, through the LS transforms (see Lemma 2):

$\hat{F}_n(s) = \int_0^\infty\!\!\int_0^\infty (1 + \lambda\,\sigma_B(\sigma_{n-1}(s)))^{-a}\,dF_A(a)\,dF_\Lambda(\lambda), \quad s \ge 0, \ n \ge 1,$ (48)

where

$\sigma_{n-1}(s) = \int_0^\infty \frac{1 - \hat{F}_{n-1}(ts)}{t}\,dF_T(t), \quad s \ge 0.$
Differentiating (48) twice with respect to $s$ and letting $s \to 0^+$, we have, for $n \ge 1$,

$\hat{F}_n'(0^+) = \hat{F}_{n-1}'(0^+)\,E[A\Lambda],$ (49)

$\hat{F}_n''(0^+) = E[A(A+1)\Lambda^2]\,(\hat{F}_{n-1}'(0^+))^2 + E[A\Lambda]\left(E[B]\,(\hat{F}_{n-1}'(0^+))^2 + E[T]\,\hat{F}_{n-1}''(0^+)\right).$ (50)
By Lemma 6 and induction on $n$, we find through (49) and (50) that $E[Y_n] = E[Y_0] = m_1$ and $E[Y_n^2] = E[Y_0^2] = m_2$ for any $n \ge 1$, and hence

$\mathrm{Var}[Y_n] = m_2 - m_1^2 = \frac{\mathrm{Var}[A\Lambda] + E[A\Lambda^2] + E[B] + E[T]}{1 - E[T]}\,m_1^2, \quad n \ge 0.$ (51)
Moreover, by Lemma 5, we first have $\hat{F}_1 \le \hat{F}_0$, and then, by the iteration (48), $\hat{F}_n \le \hat{F}_{n-1}$ for any $n \ge 2$. Thus, $\{Y_n\}_{n=0}^\infty$ is a sequence of non-negative random variables all having the same first two moments $m_1, m_2$, such that the sequence of their LS transforms $\{\hat{F}_n\}$ is decreasing. Therefore, Lemma 9 applies, so the limit $\lim_{n\to\infty}\hat{F}_n =: \hat{F}$ exists. Moreover, $\hat{F}$ is the LS transform of a non-negative random variable, say $Y$, with $E[Y] = m_1$ and $E[Y^2] \in [m_1^2, m_2]$. Consequently, it follows from (48) that the limit $F$ is a solution to Equation (21) with mean $\mu = m_1$ and finite variance. Applying Lemma 6 to Equation (21) again (with $X = Y$ and $F = F$), we conclude that $E[Y^2] = m_2$, as in (47), and hence the solution $Y \sim F$ has the required variance, as in (22) or (51).
Finally, let us show the uniqueness of the solution to Equation (21). Suppose, under the conditions (20), that there are two solutions, $X \sim F$ and $Y \sim G$, which satisfy Equation (21) and both have the same mean $\mu$ (hence the same finite variance).

Now we want to show that $F = G$ or, equivalently, that $\hat{F} = \hat{G}$. We need the functions

$\bar{\sigma}_F(s) = \int_0^\infty \frac{1 - \hat{F}(ts)}{t}\,dF_T(t), \qquad \bar{\sigma}_G(s) = \int_0^\infty \frac{1 - \hat{G}(ts)}{t}\,dF_T(t), \quad s \ge 0.$
Then we have

$\hat{F}(s) = \int_0^\infty\!\!\int_0^\infty (1 + \lambda\,\sigma_B(\bar{\sigma}_F(s)))^{-a}\,dF_A(a)\,dF_\Lambda(\lambda), \qquad \hat{G}(s) = \int_0^\infty\!\!\int_0^\infty (1 + \lambda\,\sigma_B(\bar{\sigma}_G(s)))^{-a}\,dF_A(a)\,dF_\Lambda(\lambda), \quad s \ge 0.$
Using Lemma 4, we obtain the following chain of relations:

$|\hat{F}(s) - \hat{G}(s)| \le \int_0^\infty\!\!\int_0^\infty a\,\lambda\cdot\left|\int_0^{\bar{\sigma}_F(s)}\hat{F}_B(t)\,dt - \int_0^{\bar{\sigma}_G(s)}\hat{F}_B(t)\,dt\right|dF_A(a)\,dF_\Lambda(\lambda) \le E[A\Lambda]\left|\int_{\bar{\sigma}_G(s)}^{\bar{\sigma}_F(s)}\hat{F}_B(t)\,dt\right| = \left|\int_{\bar{\sigma}_G(s)}^{\bar{\sigma}_F(s)}\hat{F}_B(t)\,dt\right| \le |\bar{\sigma}_F(s) - \bar{\sigma}_G(s)|, \quad s \ge 0,$
where we have used the condition E [ A Λ ] = 1 . The remaining arguments are similar to those in the proof of Theorem 1, so we omit the details. Thus, the necessity is also established and the proof of Theorem 4 is complete. □
Proof of Theorem 5.
We follow an idea similar to that in the proofs of Theorems 1 and 4. It is convenient and useful to see the details, which are based explicitly on properties of the Riemann zeta function.
Step 1 (sufficiency). Suppose that Equation (24) has exactly one solution $0 \le X \sim F$ with mean $\mu \in (0, \infty)$ and finite variance ($E[X^2] \in (0, \infty)$). We want to show that the conditions (23) are satisfied.

Differentiating Equation (24) twice with respect to $s$, we have, for $s > 0$,

$\hat{F}'(s) = \frac{1}{\zeta(a)}\int_0^\infty \lambda\,\zeta'(a + \lambda\,\sigma(s))\,dF_\Lambda(\lambda)\cdot\sigma'(s),$ (52)

$\hat{F}''(s) = \frac{1}{\zeta(a)}\int_0^\infty \lambda^2\,\zeta''(a + \lambda\,\sigma(s))\,dF_\Lambda(\lambda)\cdot(\sigma'(s))^2 + \frac{1}{\zeta(a)}\int_0^\infty \lambda\,\zeta'(a + \lambda\,\sigma(s))\,dF_\Lambda(\lambda)\cdot\sigma''(s).$ (53)
Letting $s \to 0^+$ in (52) and (53) yields, respectively,

$\hat{F}'(0^+) = -\frac{\zeta'(a)}{\zeta(a)}\,\hat{F}'(0^+)\,E[\Lambda], \qquad \hat{F}''(0^+) = \frac{\zeta''(a)}{\zeta(a)}\,(\hat{F}'(0^+))^2\,E[\Lambda^2] - \frac{\zeta'(a)}{\zeta(a)}\,\hat{F}''(0^+)\,E[\Lambda]\,E[T].$
Equivalently, we have, by Lemma 6,

$\mu = -\frac{\zeta'(a)}{\zeta(a)}\,\mu\,E[\Lambda],$ (54)

$E[X^2] = \frac{\zeta''(a)}{\zeta(a)}\,E[\Lambda^2]\,\mu^2 - \frac{\zeta'(a)}{\zeta(a)}\,E[X^2]\,E[\Lambda]\,E[T].$ (55)
From (54) and (55) it follows that $E[\Lambda] = -\zeta(a)/\zeta'(a)$ and that both quantities $E[\Lambda^2]$ and $E[T]$ are finite; this is because $\mu$ and $E[X^2]$ are numbers in the interval $(0, \infty)$. In addition, $E[T] \le 1$, due to (55). However, we need the strict relation $E[T] < 1$. Suppose, on the contrary, that $E[T] = 1$. In such a case we would have $E[\Lambda^2] = 0$ by (55), which contradicts the fact that $E[\Lambda] = -\zeta(a)/\zeta'(a) > 0$. Thus, the conditions (23) are satisfied. Besides, (25) also follows from (54) and (55), because

$E[X^2] = \frac{\zeta''(a)\,E[\Lambda^2]}{\zeta(a)\,(1 - E[T])}\,\mu^2.$
The sufficiency part is established.
Step 2 (necessity). Suppose that the conditions (23) are satisfied. We want to show that there exists a solution $X \sim F$ to Equation (24) with mean $\mu$ and finite variance.

We start by setting two relations:

$m_1 = \mu \qquad and \qquad m_2 = \frac{\zeta''(a)\,E[\Lambda^2]}{\zeta(a)\,(1 - E[T])}\,m_1^2.$ (56)

The denominator $\zeta(a)(1 - E[T])$ in (56) is strictly positive by (23). Additionally, we have $m_2 \ge m_1^2$, because $E[\Lambda^2] \ge (E[\Lambda])^2 = (\zeta(a)/\zeta'(a))^2 \ge \zeta(a)/\zeta''(a)$ (see Lemma 4). Therefore, the RHS of (32), with $m_1, m_2$ defined in (56), is a bona fide LS transform, say $\hat{F}_0$, of a non-negative random variable $Y_0$ (by Lemma 1).
Next, starting with the initial $Y_0 \sim F_0$, we define iteratively the sequence of random variables $Y_n \sim F_n$, $n = 1, 2, \dots$, through their LS transforms $\hat{F}_n$ (see Lemma 2):

$\hat{F}_n(s) = \frac{1}{\zeta(a)}\int_0^\infty \zeta(a + \lambda\,\sigma_{n-1}(s))\,dF_\Lambda(\lambda), \quad s \ge 0, \ n \ge 1,$ (57)

where

$\sigma_{n-1}(s) = \int_0^\infty \frac{1 - \hat{F}_{n-1}(ts)}{t}\,dF_T(t), \quad s \ge 0.$
Differentiating (57) twice with respect to $s$ and letting $s \to 0^+$, we find, for $n \ge 1$,

$\hat{F}_n'(0^+) = -\frac{\zeta'(a)}{\zeta(a)}\,\hat{F}_{n-1}'(0^+)\,E[\Lambda],$ (58)

$\hat{F}_n''(0^+) = \frac{\zeta''(a)}{\zeta(a)}\,(\hat{F}_{n-1}'(0^+))^2\,E[\Lambda^2] - \frac{\zeta'(a)}{\zeta(a)}\,\hat{F}_{n-1}''(0^+)\,E[\Lambda]\,E[T].$ (59)
By Lemma 6, induction on $n$, and relations (58) and (59), we find that $E[Y_n] = E[Y_0] = m_1$ and $E[Y_n^2] = E[Y_0^2] = m_2$ (defined in (56)) for any $n \ge 1$, and hence

$\mathrm{Var}[Y_n] = m_2 - m_1^2 = \frac{\zeta''(a)\,E[\Lambda^2] - \zeta(a) + \zeta(a)\,E[T]}{\zeta(a)\,(1 - E[T])}\,m_1^2, \quad n \ge 0.$ (60)
Moreover, by Lemma 5, we first have $\hat{F}_1 \le \hat{F}_0$, and then, by the iteration (57), $\hat{F}_n \le \hat{F}_{n-1}$ for any $n \ge 2$. Thus, $\{Y_n\}_{n=0}^\infty$ is a sequence of non-negative random variables having the same first two moments $m_1, m_2$, such that the sequence of their LS transforms $\{\hat{F}_n\}$ is decreasing. Applying Lemma 9, we conclude that there is a limit $\lim_{n\to\infty}\hat{F}_n =: \hat{F}$, which is the LS transform of a non-negative random variable $Y \sim F$ with $E[Y] = m_1$ and $E[Y^2] \in [m_1^2, m_2]$. Hence, it follows from (57) that $F$ is a solution to Equation (24) with mean $\mu$ and finite variance. Applying again Lemma 6 to Equation (24), for $X = Y$ and $F = F$, we conclude that $E[Y^2] = m_2$, as in (56), and hence the solution $Y \sim F$ has the required variance, as in (25) or (60).
Finally, it remains to prove the uniqueness of the solution to Equation (24). Suppose, under the conditions (23), that there are two solutions, $X \sim F$ and $Y \sim G$, both satisfying Equation (24) and having the same mean $\mu = m_1$ (hence the same finite variance). We want to show that $F = G$ or, equivalently, that $\hat{F} = \hat{G}$. We use the functions

$\bar{\sigma}_F(s) = \int_0^\infty \frac{1 - \hat{F}(ts)}{t}\,dF_T(t), \qquad \bar{\sigma}_G(s) = \int_0^\infty \frac{1 - \hat{G}(ts)}{t}\,dF_T(t), \quad s \ge 0,$

to express explicitly the two LS transforms:

$\hat{F}(s) = \frac{1}{\zeta(a)}\int_0^\infty \zeta(a + \lambda\,\bar{\sigma}_F(s))\,dF_\Lambda(\lambda), \qquad \hat{G}(s) = \frac{1}{\zeta(a)}\int_0^\infty \zeta(a + \lambda\,\bar{\sigma}_G(s))\,dF_\Lambda(\lambda), \quad s \ge 0.$
By Lemma 4, we derive the relations

$|\hat{F}(s) - \hat{G}(s)| \le \frac{-\zeta'(a)}{\zeta(a)}\int_0^\infty \lambda\,|\bar{\sigma}_F(s) - \bar{\sigma}_G(s)|\,dF_\Lambda(\lambda) = \frac{-\zeta'(a)}{\zeta(a)}\,E[\Lambda]\,|\bar{\sigma}_F(s) - \bar{\sigma}_G(s)| = |\bar{\sigma}_F(s) - \bar{\sigma}_G(s)|, \quad s \ge 0.$

We have used the fact that $E[\Lambda] = -\zeta(a)/\zeta'(a)$. The remaining arguments are similar to those in the proofs of Theorems 1 and 4; thus, they are omitted. The necessity is established, and the proof of Theorem 5 is complete. □

6. Concluding Remarks

Below are some relevant and useful remarks regarding the problems and the results in this paper and their relations with previous works.
Remark 1.
In Theorem 1, we have treated the power-mixture type functional equation (see Equation (2)), which includes the compound-Poisson equation, Equation (28), as a special case. Thus, the problems and the results here can be considered as an extension of previous works.
Remark 2.
In Examples 1 and 3, when $T = p$ a.s. for some fixed number $p \in [0, 1)$, the unique solution $X \sim F$ to Equations (26) and (30) with mean $\mu$ and finite variance is the mixture distribution

$F(x) = p + (1 - p)(1 - e^{-\beta x}), \quad x \ge 0, \ \text{with } \beta = (1 - p)/\mu.$

Hence, its LS transform has a mixture form:

$\hat{F}(s) = 1 - \frac{\mu}{\lambda} + \frac{\mu}{\lambda}\cdot\frac{1}{1 + \lambda s}, \quad s \ge 0, \ \text{where } \lambda = \frac{1}{\beta} = \frac{\mu}{1 - p}.$
Actually, for an arbitrary random variable $T \sim F_T$ with $0 \le T \le 1$ and $E[T] < 1$, and for any number $p \in [0, 1)$ such that $F_T(p) \in (0, 1]$, the unique solution $X \sim F$ to Equations (26) and (30) satisfies the inequality

$\hat{F}(s) \le 1 - \frac{\mu}{\lambda} + \frac{\mu}{\lambda}\cdot\frac{1}{1 + \lambda s}, \quad s \ge 0, \ \text{where } \lambda = \frac{\mu}{F_T(p)\,(1 - p)}.$

Notice that this relation is satisfied even if the explicit form of $\hat{F}$ is unknown.
Remark 3.
The class of power-mixture transforms defined in Equation (2) is quite rich and, referring to [7], we can see, e.g., that it includes the LS transforms of the so-called $C_t, S_t, T_t$ random variables (where $t > 0$), which are expressed in terms of the hyperbolic functions $\cosh, \sinh, \tanh$, respectively.

Let us provide some details. The random variable $C_t$, $t > 0$, is described by its explicit LS transform as follows:

$E[e^{-sC_t}] = \left(\frac{1}{\cosh\sqrt{2s}}\right)^t = \exp(-t\log(\cosh\sqrt{2s})) = \exp\left(-t\int_0^s \frac{\tanh\sqrt{2x}}{\sqrt{2x}}\,dx\right), \quad s > 0.$
This is related to Equation (2) by taking $X_2 = t$ a.s. and $X_1 \sim F_1$, where

$\hat{F}_1(s) = \exp\left(-\int_0^s \frac{\tanh\sqrt{2x}}{\sqrt{2x}}\,dx\right), \quad s \ge 0.$

Similar arguments apply to the LS transforms of the random variables $S_t$ and $T_t$, whose explicit expressions are

$E[e^{-sS_t}] = \left(\frac{\sqrt{2s}}{\sinh\sqrt{2s}}\right)^t, \quad s > 0, \qquad and \qquad E[e^{-sT_t}] = \left(\frac{\tanh\sqrt{2s}}{\sqrt{2s}}\right)^t, \quad s > 0.$
It is also interesting to note that, for any fixed $t > 0$, the following relation holds:

$E[e^{-sC_t}] = E[e^{-sS_t}]\cdot E[e^{-sT_t}], \quad s > 0.$

Therefore, we have an interesting distributional equation

$C_t \stackrel{d}{=} S_t + T_t.$
This means that the random variable C t can be decomposed into a sum of two sub-independent random variables S t and T t . (See [7].)
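This factorization of LS transforms is immediate to check numerically (a small sketch with arbitrary, assumed values of $t$ and $s$):

```python
import numpy as np

# Check the factorization E[e^{-s C_t}] = E[e^{-s S_t}] * E[e^{-s T_t}]
# behind C_t = S_t + T_t, using the explicit hyperbolic LS transforms above.
t, s = 1.7, 0.9
r = np.sqrt(2.0 * s)
C = np.cosh(r) ** -t           # LS transform of C_t at s
S = (r / np.sinh(r)) ** t      # LS transform of S_t at s
T = (np.tanh(r) / r) ** t      # LS transform of T_t at s
print(C, S * T)                # identical, since tanh/sinh = 1/cosh
```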
Remark 4.
We finally consider a functional equation which is similar to Equation (27) (or to Equation (28)), however not really of the power-mixture type (10). Let $0 \le X \sim F$ with mean $\mu \in (0, \infty)$, and let $T$ be a non-negative random variable. Assume that the random variable $Z \ge 0$ has the length-biased distribution (5) induced by $F$. Let the random variables $X_1, X_2$ be independent copies of $X \sim F$ and, moreover, let $X_1, X_2, T$ be independent. Then the distributional equation

$Z \stackrel{d}{=} X_1 + TX_2$ (61)

(different from Equation (27)) is equivalent to the functional equation

$\hat{F}(s) = e^{-\sigma(s)}, \quad s \ge 0$ (62)

(compare with Equation (28)). Here, the Bernstein function $\sigma$ is of the form

$\sigma(s) = \mu\int_0^s\!\!\int_0^\infty \hat{F}(xt)\,dF_T(t)\,dx, \quad s \ge 0.$
To analyze the solutions to this kind of functional equation is a serious problem. The attempt to follow the approach in this paper was not successful; perhaps a new idea is needed. However, there is a specific case when the solution to Equation (61) is explicitly known. More precisely, let us take

$T \stackrel{d}{=} U^2,$

with $U$ being a continuous random variable uniformly distributed on $[0, 1]$. In this case, Equation (61) has a unique solution $F$: $F$ is the hyperbolic-cosine distribution with LS transform

$\hat{F}(s) = \frac{1}{\cosh^2\sqrt{\mu s}}, \quad s > 0.$

Therefore, $X \stackrel{d}{=} \frac{1}{2}\mu\,C_2$ (see, e.g., [7] (p. 317)). Once again, this characteristic property (look at Equation (61)) with $T \stackrel{d}{=} U^2$ is found for the variable $C_t$ only when $t = 2$. Thus, a natural question arises: what about arbitrary $t > 0$? As far as we know, for the general random variables $C_t, S_t, T_t$, the characterizations of their distributions are challenging but still open problems.
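The stated solution can be verified directly (our numerical sketch, with an assumed value of $\mu$): for $T \stackrel{d}{=} U^2$ one has $E[\hat{F}(xT)] = \int_0^1 \hat{F}(xu^2)\,du$, and the double integral evaluates to $\sigma(s) = 2\log\cosh\sqrt{\mu s}$, so that $e^{-\sigma(s)} = \hat{F}(s)$.

```python
import numpy as np
from scipy.integrate import quad

# Check Remark 4's explicit solution: with T = U^2, U ~ Uniform[0,1],
# Fhat(s) = 1/cosh(sqrt(mu*s))^2 satisfies Fhat = exp(-sigma), where
# sigma(s) = mu * int_0^s int_0^1 Fhat(x*u^2) du dx.
mu = 0.8
Fhat = lambda v: np.cosh(np.sqrt(mu * v)) ** -2

inner = lambda x: quad(lambda u: Fhat(x * u * u), 0.0, 1.0)[0]  # E[Fhat(x T)]
for s in (0.5, 2.0, 8.0):
    sigma = mu * quad(inner, 0.0, s)[0]
    print(np.exp(-sigma), Fhat(s))  # columns agree: sigma = 2*log(cosh(sqrt(mu*s)))
```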
Remark 5.
One of the reviewers kindly pointed out the connection of the distributional equation $Z \stackrel{d}{=} X + TZ$ with Pólya's [22] characterization of the normal distribution. By taking $Z = \sqrt{2}\,X$ and $T = 1/\sqrt{2}$ a.s., the distributional equation reduces to

$\sqrt{2}\,X \stackrel{d}{=} X_1 + X_2, \quad \text{equivalently,} \quad X \stackrel{d}{=} \frac{1}{\sqrt{2}}(X_1 + X_2),$

where $X_1, X_2$ are independent copies of $X \sim F$ on the whole real line (instead of the right half-line). Then the solutions of the equation are exactly the normal distributions with mean zero. In this regard, the reader can consult [3] (Chapter 3) and [23,24,25,26,27] and the references therein for further extensions of these and related results.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We are grateful to Sergey Foss, Gerold Alsmeyer, and Alexander Iksanov for their attention and useful comments on a previous version of our paper. We also thank the Referees and the Editor for their helpful suggestions.

Conflicts of Interest

The authors declare that there is no conflict of interest.

References

1. Buraczewski, D.; Damek, E.; Mikosch, T. Stochastic Models with Power-Law Tails: The Equation X = AX + B; Springer: Cham, Switzerland, 2016.
2. Iksanov, A.M. Renewal Theory for Perturbed Random Walks and Similar Processes; Birkhäuser: Basel, Switzerland, 2016.
3. Kagan, A.M.; Linnik, Y.V.; Rao, C.R. Characterization Problems in Mathematical Statistics; Wiley: New York, NY, USA, 1973.
4. Liu, Q. An extension of a functional equation of Poincaré and Mandelbrot. Asian J. Math. 2002, 6, 145–168.
5. Hu, C.-Y.; Lin, G.D. Necessary and sufficient conditions for unique solution to functional equations of Poincaré type. J. Math. Anal. Appl. 2020, 491, 124399.
6. Steutel, F.W.; van Harn, K. Infinite Divisibility of Probability Distributions on the Real Line; Marcel Dekker: New York, NY, USA, 2004.
7. Pitman, J.; Yor, M. Infinitely divisible laws associated with hyperbolic functions. Can. J. Math. 2003, 55, 292–330.
8. Hwang, T.-Y.; Hu, C.-Y. A characterization of the compound-exponential type distributions. Can. Math. Bull. 2011, 54, 464–471.
9. Iksanov, A.M. On perpetuities related to the size-biased distributions. Theory Stoch. Process. 2002, 8, 127–135.
10. Iksanov, A.M.; Kim, C.-S. On a Pitman–Yor problem. Stat. Probab. Lett. 2004, 68, 61–72.
11. Schilling, R.L.; Song, R.; Vondraček, Z. Bernstein Functions: Theory and Applications, 2nd ed.; De Gruyter: Berlin, Germany, 2012.
12. Lin, G.D.; Hu, C.-Y. The Riemann zeta distribution. Bernoulli 2001, 7, 817–828.
13. Gradshteyn, I.S.; Ryzhik, I.M. Tables of Integrals, Series, and Products, 8th ed.; Academic Press: New York, NY, USA, 2014.
14. Cox, D.R. Renewal Theory; Methuen: London, UK, 1962.
15. Eckberg, A.E., Jr. Sharp bounds on Laplace–Stieltjes transforms, with applications to various queueing problems. Math. Oper. Res. 1977, 2, 135–142.
16. Guljaš, B.; Pearce, C.E.M.; Pečarić, J. Jensen's inequality for distributions possessing higher moments, with application to sharp bounds for Laplace–Stieltjes transforms. J. Austral. Math. Soc. Ser. B 1998, 40, 80–85.
17. Hu, C.-Y.; Lin, G.D. Some inequalities for Laplace transforms. J. Math. Anal. Appl. 2008, 340, 675–686.
18. Lin, G.D. Characterizations of the exponential distribution via the blocking time in a queueing system. Stat. Sin. 1993, 3, 577–581.
19. Lin, G.D. Characterizations of the L-class of life distributions. Stat. Probab. Lett. 1998, 40, 259–266.
20. Harkness, W.L.; Shantaram, R. Convergence of a sequence of transformations of distribution functions. Pac. J. Math. 1969, 31, 403–415.
21. Huang, J.S.; Lin, G.D. Characterization of distributions using equilibrium transformations. Sankhyā Ser. A 1995, 57, 179–185.
22. Pólya, G. Herleitung des Gaußschen Fehlergesetzes aus einer Funktionalgleichung. Math. Z. 1923, 18, 96–108.
23. Laha, R.G.; Lukacs, E. On a linear form whose distribution is identical with that of a monomial. Pac. J. Math. 1965, 15, 207–214.
24. Skitovich, V.P. On a property of the normal distribution. DAN SSSR 1953, 89, 217–219. (In Russian)
25. Skitovich, V.P. Linear forms in independent random variables and the normal distribution law. Izvestiya AN SSSR Ser. Matem. 1954, 18, 185–200. (In Russian)
26. Darmois, G. Analyse générale des liaisons stochastiques. Étude particulière de l'analyse factorielle linéaire. Rev. Inst. Internat. Stat. 1953, 21, 2–8.
27. Lin, G.D.; Hu, C.-Y. Characterizations of distributions via the stochastic ordering property of random linear forms. Stat. Probab. Lett. 2001, 51, 93–99.
