1. Introduction
We deal with probability distributions on the right half-line and their characterization properties, expressed in the form of distributional equations in which one random variable has a known distribution, another has a distribution obtained by transforming that of X, and the task is to find the distribution of X. By using the Laplace–Stieltjes (LS) transforms of the distributions of the random variables involved, we convert such a distributional equation into a functional equation of a specific type. Our goal is to provide necessary and sufficient conditions for such a functional equation to have a unique solution. A unique solution is equivalent to a characterization property of the corresponding probability distribution.
It is worth mentioning that the topic of distributional equations has been intensively studied over the last decades. There are excellent sources; among them are the recent books by Buraczewski, Damek, and Mikosch [
1] and Iksanov [
2]. For good reasons, the phrase “The equation
X = AX + B” is included as a subtitle of [
1]. This distributional equation is studied from different perspectives in [
2]. Such equations are called “fixed-point equations”; they arise as limits when studying autoregressive sequences in economics and actuarial modeling, and the “fixed point” (the unique solution) is related to the so-called perpetuities. These books contain a detailed analysis of diverse stochastic models and a variety of results and methods. Besides the authors of the two books, essential contributions in this area have been made by many scientists, to list here only a few names: H. Kesten, C.M. Goldie, W. Vervaat, P. Embrechts, Z. Jurek, G. Alsmeyer, G. Letac, and J. Wesolowski. Much more can be found in the books [
1,
2] cited above and also in the book by Kagan, Linnik, and Rao [
3].
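For a quick feel for such fixed points (with illustrative parameters of our own, not taken from the sources): for independent A ∼ Uniform(0, 1) and B = 1, the perpetuity X =d AX + B has mean E B/(1 − E A) = 2, and forward iteration of the map converges in distribution to it. A minimal Monte Carlo sketch:

```python
import random

random.seed(12345)

def perpetuity_sample(n_iter=200):
    """One approximate draw from the fixed point of X =d A*X + B,
    with A ~ Uniform(0, 1) and B = 1 (illustrative choice)."""
    x = 0.0
    for _ in range(n_iter):
        x = random.random() * x + 1.0   # X <- A*X + B
    return x

samples = [perpetuity_sample() for _ in range(20000)]
mean = sum(samples) / len(samples)
# theoretical mean of the fixed point: E[B] / (1 - E[A]) = 1 / (1 - 1/2) = 2
assert abs(mean - 2.0) < 0.05
print(mean)
```

The empirical mean settles near 2, in line with the fixed-point calculation.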
In the present paper, we study a wide class of power-mixture functional equations for the LS transforms of probability distributions. In particular, equations of compound-exponential type, compound-Poisson type, and others fall into this class. On the other hand, the related Poincaré type functional equations have been studied in [
4] and recently in [
5]; see also the references therein.
The power-mixture functional equations arise when studying power-mixture transforms involving two sii-processes. Here, the abbreviation “sii-process” stands for a stochastic process with stationary and independent increments. Think, for example, of the Lévy processes. Consider a continuous-time sii-process
X = (X(t), t ≥ 0), and let
F_t
be the (marginal) distribution of
X(t); we write this as
X(t) ∼ F_t. Moreover, let
X(1) ∼ F_1
be the generating random variable for the process, so
F_1
uniquely determines the distribution of the process
X
at any time
t. Thus, we have the multiplicative semigroup
(F̂_t, t ≥ 0)
satisfying the power relation
F̂_t(s) = (F̂_1(s))^t,  s ≥ 0,  t ≥ 0.
Here
F̂_t
is the LS transform of the distribution
F_t
of
X(t):
F̂_t(s) = ∫_0^∞ e^{−sx} dF_t(x),  s ≥ 0
(see, e.g., [
6] (Chapter I)).
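Relation (1) can be made concrete with a standard example of our own (not from the text): for a Gamma process, X(t) ∼ Gamma(t, 1) has LS transform (1 + s)^{−t} = ((1 + s)^{−1})^t, the t-th power of the LS transform of X(1). A numerical sketch:

```python
import math

def ls_gamma_numeric(s, t, upper=60.0, n=200000):
    """LS transform of Gamma(t, 1): integral of exp(-s*x) against the
    Gamma(t, 1) density, computed by a simple Riemann sum on (0, upper]."""
    h = upper / n
    total = 0.0
    for i in range(1, n + 1):
        x = i * h
        total += math.exp(-s * x) * x ** (t - 1.0) * math.exp(-x)
    return total * h / math.gamma(t)

s, t = 0.7, 2.5
numeric = ls_gamma_numeric(s, t)
closed = (1.0 + s) ** (-t)   # ((1 + s)^(-1))^t : the power relation (1)
assert abs(numeric - closed) < 1e-3
print(numeric, closed)
```

The numerically integrated transform agrees with the closed-form t-th power, as the semigroup property requires.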
Let further
T = (T(t), t ≥ 0), independent of
X, be another continuous-time sii-process with a generating random variable
T(1), and let
T(t) ∼ G_t,
t ≥ 0. Now, we can consider the composition process
X∘T = (X(T(t)), t ≥ 0), which is the subordination of the process
X
to the process
T. The generating random variable for
X∘T
is
X(T(1)). In view of Equation (
1), the distribution
F of X(T(1)) has LS transform
F̂(s) = E[(F̂_1(s))^{T(1)}], which is of the power-mixture type (in short, power-mixture transform) and satisfies the following relations:
From now on, we will focus mainly on the power-mixture transforms (
2) or (3). The brief illustration involving two sii-processes is just one of the motivations. Thus, we now require only
to be infinitely divisible, but we do not ask this property of
. For such distributions
F with elegant LS transforms, see ([
6] Chapter III), as well as [
7].
If
T(1) ∼ Exp(1), the standard exponential distribution with density
e^{−x},
x ≥ 0, its LS transform is
1/(1 + s),
s ≥ 0, and the generating distribution
F for the composition process
X∘T
reduces to the so-called compound-exponential distribution, whose LS transform (for short, compound-exponential transform) is
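This reduction can be written out explicitly (a standard computation in our own notation: h denotes the Bernstein exponent of the infinitely divisible generating distribution of X, so that its LS transform is e^{−h(s)}, and T(1) ∼ Exp(1) has LS transform 1/(1 + u)):

```latex
\widehat{F}(s)
  \;=\; \mathbb{E}\!\left[\bigl(\widehat{F}_1(s)\bigr)^{T(1)}\right]
  \;=\; \mathbb{E}\!\left[e^{-h(s)\,T(1)}\right]
  \;=\; \frac{1}{1+h(s)},
  \qquad s \ge 0,
  \quad\text{where } \widehat{F}_1(s) = e^{-h(s)} .
```

So the compound-exponential transform is the reciprocal of 1 plus a Bernstein function vanishing at the origin.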
This shows that the power-mixture transforms are more general than the compound-exponential ones. The latter case, however, is important in its own right, and it has been studied in [
8].
When the random variable
is actually related to (or constructed from) the variable
, the LS transform
will be a function of the LS transform
. Hence, the distribution
F (equivalently, its LS transform
) can be considered as a solution to the functional Equations (
2)–(
4). Since each of these equations corresponds to a distributional equation, as soon as we have a unique solution (a “fixed point”), it provides a characterization property of the corresponding distribution.
Our main purpose in this paper is to provide necessary and sufficient conditions for the functional equations in question to have unique distributional solutions. We do this under quite general conditions; one of them is the requirement of finite variance. We exhibit new results, some of which either extend or improve previous results for functional equations of compound-exponential and compound-Poisson types. In particular, we provide another affirmative answer to a question posed in [
7], regarding the distributional equation
This question and the answer were first given in [
9,
10]. Our arguments are different; details are given in Example 2 below. Functional equations of other types are also studied.
In
Section 2, we formulate the problem and state the main results and corollaries. The results are illustrated in
Section 3 by examples that fit well to the problem.
Section 4 contains a series of lemmas, which we need in
Section 5 to prove the main theorems. We conclude in
Section 6 with comments and challenging open questions. The list of references includes significant works, all related to our study.
2. Formulation of the Problem and Main Results
Let
X be a non-negative random variable with distribution
F and mean
, which is a finite positive number, that is,
. Starting with
, we will construct an infinitely divisible random variable
to be used in Equation (
2). Consider three non-negative random variables and their distributions as follows:
,
,
. Suppose further that
Z is a random variable, independent of
T, with the length-biased distribution
induced by
F, namely,
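For reference (a standard formula; the notation F̃ is ours), the length-biased distribution induced by a distribution F on the right half-line with finite mean μ is

```latex
\widetilde{F}(x) \;=\; \frac{1}{\mu}\int_0^x u \,\mathrm{d}F(u),
\qquad x \ge 0,
\qquad \mu = \int_0^\infty u\,\mathrm{d}F(u) ,
```

so that dF̃(x) = x dF(x)/μ: values of the underlying variable are re-weighted proportionally to their size.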
We involve also the scale-mixture random variable
. We are now prepared to define the following two functions in terms of LS transforms:
Notice that
and
are Bernstein functions, and their first derivatives are completely monotone functions, by definition; see, e.g., [
11]. The function
in (
6) will play a crucial role in this paper, and the integrand
is defined for
by continuity to be equal to
. The second equality in (
6) can be verified by differentiating both of its sides with respect to
s and using the following facts:
Recall that the composition
of two Bernstein functions is a Bernstein function; hence, this is so for
the functions in (
6) and (7). We also need the “simple” function
e^{−s},
s ≥ 0, which is the LS transform of the random variable degenerate at the point 1, and we use the fact that it is completely monotone. Therefore, we can consider the infinitely divisible random variable
(in Equation (
1)) with LS transform of compound-Poisson type:
Such a choice is appropriate in view of Lemmas 1 and 2 in
Section 4. Clearly,
is a function of
F,
, and
. Let us formulate our main results and some corollaries.
Theorem 1. Under the above setting, we have if and only if the functional equation of power-mixture type has exactly one solution with mean μ and finite variance. Moreover,
If we impose a condition on the variable B and use a.s. for “almost surely”, Theorem 1 reduces as follows.
Corollary 1. In addition to the above setting, let a.s. Then we have if and only if the functional equation of power-mixture type has exactly one solution with mean μ and finite variance. Moreover,
If we also impose a condition on A, Corollary 1 further reduces to the following.
Corollary 2. In addition to the setting of Theorem 1, let a.s. and a.s. Then if and only if the functional equation of compound-Poisson type has exactly one solution with mean μ and finite variance. Moreover,
Here is a case of a “nice” proper random variable A, , so , . Corollary 1 now takes the following form.
Corollary 3. Let have mean , a.s., and let T be a non-negative random variable. Then if and only if the functional equation of compound-exponential type has exactly one solution with mean μ and finite variance. Moreover,
Here is another particular but interesting case.
Corollary 4. In addition to the setting of Theorem 1, suppose that a.s. for some fixed number and that a.s. Then we have if and only if the functional equation has exactly one solution F, with mean μ and finite variance. Moreover,
We now return to the construction of the infinitely divisible LS transform
in (
8). Using the completely monotone function
,
, which corresponds to
, we have instead the following LS transform:
and here is the next result.
Theorem 2. Suppose, as before, that is a non-negative random variable with mean . Let further T, A, and B be three non-negative random variables. Then, for a fixed constant , we have if and only if the functional equation of power-mixture type has exactly one solution with mean μ and finite variance. Moreover,
Exchanging the roles of the arguments a and in Theorem 2 leads to the following.
Theorem 3. Consider the non-negative random variables , where has mean . Then, for an arbitrary constant , we have if and only if the functional equation has exactly one solution F, with mean μ and finite variance of X of the form:
Keeping both random variables A and in Theorems 2 and 3 (rather than constants) yields the following general result. For simplicity, A and below are assumed to be independent.
Theorem 4. Let and B be non-negative random variables, where has mean . We also require A and Λ to be independent. Then we have if and only if the functional equation has exactly one solution with mean μ and finite variance. Moreover,
Clearly, when
Equations (20)–(22) reduce to Equations (14)–(16), respectively, while if
Equations (20)–(22) reduce to Equations (17)–(19), accordingly. This is why in
Section 5 we omit the proofs of Theorems 2 and 3; however, we provide a detailed proof of the more general Theorem 4.
Finally, let us involve the Riemann-zeta function, defined as usual by ζ(z) = Σ_{n=1}^{∞} n^{−z} for real z > 1.
For any fixed
λ > 1, the function
ζ(λ + s)/ζ(λ),
s ≥ 0, is the LS transform of a probability distribution on
the right half-line
of Riemann-zeta type (because
ζ(λ + it)/ζ(λ), t ∈ ℝ,
is the characteristic function of the Riemann-zeta distribution on
the real line
). Remarkably, it is infinitely divisible (see [
12] Corollary 1). We have the following result, which is in the spirit of the previous theorems; however, it is interesting in its own right.
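Concretely (an illustrative computation of ours): ζ(λ + s)/ζ(λ) is the LS transform of the discrete law P(X = log n) = n^{−λ}/ζ(λ), n = 1, 2, …; for instance, the mean E X = −ζ′(λ)/ζ(λ) can be recovered both from these probabilities and from a numerical derivative of the transform:

```python
import math

LAM = 3.0
N = 200000   # truncation of the zeta series (tail is negligible for LAM = 3)

def zeta(z, n=N):
    """Truncated Riemann-zeta series."""
    return sum(k ** (-z) for k in range(1, n + 1))

def phi(s):
    """Candidate LS transform zeta(LAM + s) / zeta(LAM)."""
    return zeta(LAM + s) / zeta(LAM)

# mean from the probabilities P(X = log n) = n^(-LAM) / zeta(LAM)
mean_direct = sum(math.log(k) * k ** (-LAM) for k in range(1, N + 1)) / zeta(LAM)

# mean from the transform: E X = -phi'(0), central difference
h = 1e-4
mean_from_phi = -(phi(h) - phi(-h)) / (2 * h)

assert abs(mean_direct - mean_from_phi) < 1e-5
print(mean_direct)
```

Both routes give the same value, −ζ′(3)/ζ(3), as they must for an LS transform.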
Theorem 5. Suppose that and Λ
are non-negative random variables, where has mean . Then, for any fixed number , we have if and only if the functional equation has exactly one solution with mean μ and finite variance. Moreover,
3. Examples
We give some examples to illustrate the use of the above results. The first two examples improve Theorems 1.1 and 1.3 of [
8]. We use the notation
in its usual meaning of equality in distribution.
Example 1. Let have mean , and let T be a non-negative random variable. Assume that the random variable has the length-biased distribution (5) induced by F, and that , are two random variables having the same distribution F. Assume further that all the random variables Z, T, , are independent. Then if and only if the distributional equation has exactly one solution with mean μ and finite variance, as expressed by (13).
All this is because the distributional Equation (
26) is equivalent to the functional Equation (
12) expressed in terms of the LS transform
. Let us give details. We rewrite Equation (
26) as follows:
By using the identity
, the above relation is equivalent to
This means that indeed Equation (
12) holds true in view of the fact that
,
.
Let us discuss two specific choices of T, each one leading to an interesting conclusion.
(a) When
a.s., we have, by definition, that
and hence, by (
12),
Equivalently,
F is an exponential distribution with mean
On the other hand, Equation (
26) reduces to
Therefore, this equation provides a characterization of the exponential distribution. The explicit formulation is:
The convolution of an underlying distribution F with itself is equal to the length-biased distribution induced by F if and only if F is an exponential distribution.
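One direction of this characterization is easy to check numerically (an illustrative sketch of ours, for the standard exponential density e^{−x}, mean μ = 1): the convolution density of two independent Exp(1) variables coincides with the length-biased density x e^{−x}/μ:

```python
import math

def exp_density(x):
    """Density of Exp(1)."""
    return math.exp(-x) if x >= 0 else 0.0

def conv_density(x, n=20000):
    """Density of X1 + X2 for independent Exp(1) variables: numerical
    convolution of the exponential density with itself over (0, x)."""
    h = x / n
    return sum(exp_density(i * h) * exp_density(x - i * h) for i in range(1, n)) * h

mu = 1.0   # mean of Exp(1)
for x in (0.5, 1.0, 3.0):
    length_biased = x * exp_density(x) / mu   # length-biased density x f(x) / mu
    assert abs(conv_density(x) - length_biased) < 1e-3
print("convolution of Exp(1) with itself matches the length-biased Exp(1) density")
```

Both densities equal x e^{−x}, the Gamma(2, 1) density, confirming the "if" direction for the exponential case.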
(b) More generally, if
a.s. for some fixed number
then the unique solution
to Equation (
26) is the following explicit mixture distribution:
Example 2. As in Example 1, we consider two non-negative random variables, T and X, where has mean . Assume that the random variable has the length-biased distribution (5) induced by F, and that all the random variables X, T, Z are independent. Then if and only if the distributional equation has exactly one solution with mean μ and finite variance,
Notice that the finding in this example is another affirmative answer to a question posed by Pitman and Yor in ([
7] p. 320). The question can be read (in our format) as follows:
Given a random variable , does there exist a random variable (with unknown F) such that Equation (27) is satisfied, with Z having the length-biased distribution induced by F?
In order to see that the answer is affirmative, we note that the distributional Equation (
27) is equivalent to the following functional equation (by arguments as in Example 1):
Clearly, Equation (
28) is a special case of Equation (
10) with degenerate random variables
a.s. and
a.s. Therefore, given arbitrary random variable
with
, Equation (
27) determines uniquely the corresponding underlying distribution
F of
X. Moreover,
X has mean
and variance
as prescribed.
Let us mention that A. Iksanov was the first to give an affirmative answer to the question of J. Pitman and M. Yor; see [
2,
9,
10]. His conclusion and the above conclusion are partly similar; however, his conditions and arguments are different from ours.
For example, in [
9] it is assumed that
T is strictly positive, the expectation
exists (finite or infinite) and
and it is proved that there exists a unique solution
F (to Equation (
27)) with mean
if and only if
There is no conclusion/condition about the variance of
In our condition , we do not exclude the possibility that T has a mass at , that is, . Actually, it can be shown that if strictly and , then . This is so because the function for . Thus, if , our condition and conclusion are stronger than those of Iksanov.
Let us consider four cases for the random variable
T. (a) If
a.s., Equation (
27) reduces to
. It tells us that the length-biased distribution
is equal to the underlying distribution
F. This distributional equation characterizes the degenerate distribution
F concentrated at the point
because Equation (
28) accordingly reduces to
,
.
(b) If
T is a continuous random variable uniformly distributed on the interval
, Equation (
27) characterizes the exponential distribution with mean
; see also ([
7] p. 320). Indeed, by using the identity (easy to check by differentiating in
s both sides)
we see that
,
, satisfies the functional Equation (
28) and refer to the fact that the LS transform of the distribution
F is
,
if and only if
F is
.
More generally, if
T has a uniform distribution on the interval
for some
then the unique solution
F to the functional Equation (
28) is the following explicit mixture distribution:
(c) If we assume now that
T has a beta distribution
with parameter
then the unique solution
to the distributional Equation (
27) will be the Gamma distribution
with density
Here
, and to make this conclusion we use the following identity: for
or, equivalently,
(see, e.g., [
13] Formula 8.380(7), p. 917).
(d) Take a particular value for
e.g.,
and assume that
T with values in
has the density
,
. Then Equation (
27) has a unique solution
whose LS transform is
,
Notice that
is expressed in terms of the hyperbolic sine function; see [
7] (p. 318). In general, if
is an arbitrary number (not exactly specified as above) and
T is the same random variable, then the unique solution
has the following LS transform:
,
Notice that Equation (
27) can also be solved by fitting it to the Poincaré type functional equation considered in [
5] (Theorem 4). This idea, however, requires to involve the third moment of the underlying distribution
F.
On the other hand, we can replace
Z in Equation (
27) by a random variable
which obeys the equilibrium distribution
induced by
Recall that
where
,
. In such a case we obtain an interesting characterization result, and this is the content of the next example.
Example 3. Let have mean , and let T be a non-negative random variable. Assume that the random variable obeys the equilibrium distribution defined in (29). Further, assume that all the random variables X, T, are independent. Then if and only if the distributional equation has exactly one solution with mean μ and a finite variance of the form (13).
Indeed, this is true because the distributional Equation (
30) is equivalent to the functional Equation (
12). The latter follows easily if we rewrite Equation (
30) in terms of LS transforms:
Then, recall that the LS transform of the equilibrium distribution is
(1 − F̂(s))/(μs),
s > 0
(see Lemma 8(ii) below). Plugging this identity in (
31) and carrying out the function
lead to Equation (
12).
As before, letting
a.s. in (
30), we get another characterization of the exponential distribution (because, by (
12),
,
). The full statement (see also [
14] (p. 63)) is
The equilibrium distribution induced by F is equal to the underlying distribution F if and only if F is exponential.
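A quick numerical illustration of the “if” direction (our own sketch, with F = Exp(1) and the standard form F_e(x) = (1/μ)∫₀ˣ (1 − F(u)) du of the equilibrium distribution): computing F_e recovers F itself:

```python
import math

def F(x):
    """Distribution function of Exp(1)."""
    return 1.0 - math.exp(-x)

def F_equilibrium(x, n=20000):
    """Equilibrium distribution F_e(x) = (1/mu) * integral_0^x (1 - F(u)) du,
    computed by the midpoint rule; mu = 1 for Exp(1)."""
    h = x / n
    mu = 1.0
    return sum(1.0 - F((i + 0.5) * h) for i in range(n)) * h / mu

for x in (0.3, 1.0, 2.5):
    assert abs(F_equilibrium(x) - F(x)) < 1e-6
print("equilibrium distribution of Exp(1) equals Exp(1) (numerically)")
```

The stationarity of the exponential law under the equilibrium transform is exactly the "if" part of the statement above.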
4. Ten Lemmas
To prove the main results, we need some auxiliary statements given here as lemmas. The first two lemmas are well known, and Lemma 1 is called Bernstein’s Theorem (see, e.g., [
6] (p. 484) or [
11] (p. 28)).
Lemma 1. The LS transform of a non-negative random variable is a completely monotone function on with , and vice versa.
Lemma 2. (a) The class of Bernstein functions is closed under composition. Namely, the composition of two Bernstein functions is still a Bernstein function. (b) Let ρ be a completely monotone function and σ a Bernstein function on . Then, their composition is a completely monotone function on .
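A small numerical illustration of Lemma 2(b) (an example of our own): ρ(s) = e^{−s} is completely monotone and σ(s) = √s is a Bernstein function, so the composition e^{−√s} should be completely monotone; a discrete analogue is that its k-th forward differences alternate in sign:

```python
import math

def f(s):
    """Composition rho(sigma(s)) = exp(-sqrt(s))."""
    return math.exp(-math.sqrt(s))

def forward_diff(g, s, h, k):
    """k-th forward difference of g at s with step h."""
    return sum((-1) ** (k - j) * math.comb(k, j) * g(s + j * h) for j in range(k + 1))

# complete monotonicity: (-1)^k * (k-th forward difference) >= 0
h = 0.05
for k in range(1, 6):
    for s in (0.1, 0.5, 1.0, 2.0):
        assert (-1) ** k * forward_diff(f, s, h, k) >= 0.0
print("alternating-sign differences hold on the tested grid")
```

The alternating signs are the finite-difference counterpart of (−1)^k f^{(k)}(s) ≥ 0, the defining property of complete monotonicity.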
Note that in Theorems 1 and 2 we have used two simple choices for the function . The next two lemmas concern the contraction property of some “usual” real-valued functions of real arguments. These properties will be used later to prove the uniqueness of the solutions to the functional equations in question.
Lemma 3. For arbitrary non-negative real numbers a and we claim that
(i) ; (ii) .
Proof. Use the mean-value theorem and the following two facts: for
,
□
Lemma 4. (i) For arbitrary real numbers and , we have . (ii) For arbitrary real numbers and , the Riemann-zeta function satisfies(iii) For any real , we have . Proof. It is easy to establish claim (i); still, details can be seen in [
5]. For claim (ii), we use Lemma 3(ii). Indeed,
We used the fact that
for
. To prove claim (iii), we consider the non-negative random variable
X with LS transform
Then and .
The required inequality follows from the fact that . The proof is complete. □
We now need notations for the first two moments of the random variable
and a useful relation implied by the non-negativity of the variance:
Sometimes, instead of “first moment ”, we also use the equivalent name “mean ”.
Lemma 5. Suppose that the non-negative random variable has a finite positive second moment. Then its LS transform has a sharp upper bound as follows: For the proof of Lemma 5 we refer to [
15,
16,
17]. It is interesting to mention that the RHS of the inequality (
32) is actually the LS transform of a specific two-point random variable, say
, whose first two moments are equal to
and
. Indeed, define the values of
and their probabilities as follows:
Here is another result, Lemma 6; its proof is given in [
18].
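Lemma 6 below is, in standard form, the moment formula E[X^n] = (−1)^n (d^n/ds^n) F̂(s) |_{s=0+}. As a quick numerical sanity check (our illustrative example: F = Exp(1), with F̂(s) = 1/(1 + s), E X = 1, E X² = 2):

```python
def ls(s):
    """LS transform of Exp(1): 1 / (1 + s)."""
    return 1.0 / (1.0 + s)

h = 1e-4
# central differences for the first and second derivatives at s = 0
d1 = (ls(h) - ls(-h)) / (2 * h)
d2 = (ls(h) - 2 * ls(0.0) + ls(-h)) / (h * h)

moment1 = -d1   # E X   = (-1)^1 * first derivative at 0+
moment2 = d2    # E X^2 = (-1)^2 * second derivative at 0+

assert abs(moment1 - 1.0) < 1e-6   # Exp(1): E X = 1
assert abs(moment2 - 2.0) < 1e-4   # Exp(1): E X^2 = 2
print(moment1, moment2)
```

The numerical derivatives reproduce the exponential moments, as the lemma predicts.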
Lemma 6. Let with being its LS transform. Then, for each integer , the nth moment of X is expressed by the nth derivative of as follows:
Let us deal again with equilibrium distributions. For a random variable
X,
with finite positive mean
(= first moment
), we define the first-order equilibrium distribution based on
F by
See also Equation (
29), where we have used the notation
. Thus
, here and in Lemma 8 below.
If we assume now that for some
we define iteratively the equilibrium distribution
of order
k for
as follows:
Thus, we start with and define , where the kth-order equilibrium distribution is the equilibrium distribution of the previous one; for this we need the mean value of the latter.
In order for the above definition to be correct, we need all mean values
to be finite. This is guaranteed by the assumption that the moment
is finite. The latter implies that
and vice versa. Moreover, all the moments
and all the mean values
for
are finite. These properties are summarized in Lemma 7 below, which shows an interesting relationship between the mean values
and the moments
. For details see, e.g., ([
19] p. 265) or [
20].
Lemma 7. Suppose that has finite positive moment for some integer . Then, for any , the mean value of the kth-order equilibrium distribution is well defined (finite), and moreover, for , the following relations hold:
We now provide the last three lemmas; for their proofs, see [
5].
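To make this type of relation concrete (our numerical sketch; the identity used — that the mean of the first-order equilibrium distribution equals m₂/(2m₁) — is a standard fact in the spirit of Lemma 7): for F = Uniform(0, 1), m₁ = 1/2 and m₂ = 1/3, so the equilibrium mean should be 1/3:

```python
# F = Uniform(0, 1): survival function 1 - F(u) = 1 - u on [0, 1];
# m1 = 1/2, m2 = 1/3, and the equilibrium density is (1 - u) / m1 = 2 * (1 - u).

n = 200000
h = 1.0 / n
m1, m2 = 0.5, 1.0 / 3.0

# mean of the equilibrium distribution, midpoint rule on [0, 1]
eq_mean = sum(((i + 0.5) * h) * 2.0 * (1.0 - (i + 0.5) * h) for i in range(n)) * h

assert abs(eq_mean - m2 / (2.0 * m1)) < 1e-9   # equals m2 / (2 * m1) = 1/3
print(eq_mean)
```

The computed equilibrium mean matches m₂/(2m₁) to numerical precision.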
Lemma 8. Consider the non-negative random variable whose mean is , that is, μ is strictly positive and finite, and let where is the equilibrium distribution induced by Then, for the following statements are true:
- (i)
- (ii)
- (iii)
- (iv)
- (v)
(finite or infinite).
Lemma 9. We are given a sequence of random variables , where and . We impose two assumptions: (a1) all , and hence all , have the same finite first two moments, that is, for ; (a2) the LS transforms form a decreasing sequence of functions.
Then the following limit exists: Moreover, is the LS transform of the distribution of a random variable with first moment and second moment belonging to the interval
Lemma 10. Suppose that and are non-negative random variables with the same mean (same first moment) , a strictly positive finite number. Consider another random variable , where has a positive mean Assume further that the LS transforms of and satisfy the following relation: Then and hence
5. Proofs of the Main Results
We start with the proof of Theorem 1, then omit details about Theorems 2 and 3; however, we provide the proof of the more general Theorem 4. Finally, we give the proof of Theorem 5. Each of the proofs consists naturally of two steps, Step 1 (sufficiency) and Step 2 (necessity). In many places, in order to make a clear distinction between factors in long expressions, we use the dot symbol, “·”, for multiplication.
Proof of Theorem 1. Step 1 (sufficiency). Suppose that Equation (
10) has exactly one solution
with mean
and finite variance (and hence
). Then, we want to show that the conditions (
9) are satisfied.
First, rewrite Equation (
10) as follows:
Differentiating this relation twice with respect to
s, we find, for
that
Letting
in (
34) and (35) yields, respectively,
Equivalently, in view of Lemma 6, we obtain two relations:
Since
and
are strictly positive and finite, we conclude from (
36) and (37) that
and that each of the quantities
is finite. Moreover,
due to (37) again. We need, however, the strict inequality
. Suppose, on the contrary, that
. Then, this would imply that
by (37), a contradiction to the fact that
. This proves that the conditions (
9) are satisfied. In addition, relation (
11) for the variance
also follows from (
36) and (37) because
The sufficiency part is established.
Step 2 (necessity). Suppose now that the conditions (
9) are satisfied. We will show the existence of a solution
to Equation (
10) with mean
and finite variance.
To find such a solution
, we first define two numbers:
and show later that these happen to be the first two moments of the solution. Note that the denominator
by (
9) and that the numbers
do satisfy the required moment relation
This is true because
due to (
9) and Lyapunov’s inequality. Therefore, the RHS of (
32) with
,
defined in (
38) is a bona fide LS transform, say
, of a non-negative random variable,
(by Lemma 1). Namely,
It is easy to see that are exactly the first two moments of , as mentioned before.
Next, using the initial
we define iteratively the sequence of random variables
, through their LS transforms (they are well-defined due to Lemma 2):
where
Differentiating (
39) twice with respect to
s and letting
, we obtain, for
,
By Lemma 6 and induction on
n, in view of (
40) and (41), we can show that for any
we have
and
(see relation (
38)). Hence
Moreover, by Lemma 5, we first have that
, and then by the iteration (
39), that
for any
. Thus,
is a sequence of non-negative random variables having the same first two moments
,
, and such that their LS transforms
are decreasing. Therefore, Lemma 9 applies. Denote the limit of
, as
, by
. Then
will be the LS-transform of the distribution
of a non-negative random variable
with
and
. It follows from (
39) that the limit
is a solution to Equation (
10) with mean
and finite variance. Applying once again Lemma 6 to Equation (
10) (with
and
), we conclude that
as expressed in (
38), and hence the solution
has the required variance as in (
11) or (
42).
Finally, we prove the uniqueness of the solution to Equation (
10). Suppose, under conditions (
9), that there are two solutions, say
and
, each satisfying Equation (
10) and each having mean equal to
(and hence both having the same finite variance as shown above). Then we want to show that
, or, equivalently, that
. To do this, we introduce two functions,
Then we have, by assumption,
Using Lemma 3, we have the inequalities:
We have used the fact that
Thus we obtain that for
,
This relation is equivalent to another one, for the pair of distributions
and
, induced, respectively, by
F and
G; see Lemma 8. Thus
This, however, is exactly relation (
33). Therefore, Lemma 10 applies because
and both
and
have the same mean by Lemma 7. Hence,
, which in turn implies that
since
F and
G have the same mean (see [
21] (Proposition 1)). The proof of the necessity, and hence of Theorem 1, is complete. □
Proof of Theorem 4. Although the proof has some similarity to that of Theorem 1, there are differences, and it is given here for completeness and the reader’s convenience.
Step 1 (sufficiency). Suppose that Equation (
21) has exactly one solution
with mean
, a finite positive number, and finite variance (hence
). Now we want to show that all five conditions in (
20) are satisfied.
Differentiating Equation (
21) twice with respect to
s, we have, for
, the following:
Letting
in (
43) and (44) yields, respectively,
Equivalently, by Lemma 6, we have two relations:
From (
45) and (46) it follows that
and that each of the quantities
,
,
and
, is strictly positive and finite; this is because
and
are numbers in
. Moreover,
due to (46), and it remains to show the strict bound
. Suppose, on the contrary, that
. Then we would have
by (46), a contradiction to the fact that
. Thus, we conclude that the conditions in (
20) are satisfied. Besides, the expression (
22) for the variance
also follows from (
45) and (46) because
The sufficiency part is established.
Step 2 (necessity). Suppose now that the conditions (
20) are satisfied. We want to show the existence of a solution
to Equation (
21) with mean
and finite variance.
As in the proof of Theorem 1, we have that
by (
20) and also that
, and, by using the same notations as before, we can claim the existence of a non-negative random variable
such that the LS transform
is equal to the RHS of (
32).
The next step is to use the initial
and define iteratively the sequence of random variables
,
, through the LS transforms (see Lemma 2):
where
Differentiating (
48) twice with respect to
s and letting
, we have, for
,
By Lemma 6 and induction on
n, we find through (
49) and (50) that
and
for any
and hence
Moreover, by Lemma 5, we first have that
, and then by the iteration (
48), that
for any
. Thus,
is a sequence of non-negative random variables all having the same first two moments
,
, such that the sequence of their LS transforms
is decreasing. Therefore, Lemma 9 applies, so the limit
exists. Moreover,
is the LS transform of a non-negative random variable, say
with
and
. Consequently, it follows from (
48) that the limit
is a solution to Equation (
21) with mean
and finite variance. Applying Lemma 6 to Equation (
21) again (with
and
), we conclude that
, as in (
47), and hence the solution
has the required variance as in (
22) or (
51).
Finally, let us show the uniqueness of the solution to Equation (
21). Suppose, under conditions (
20), that there are two solutions,
and
, which satisfy Equation (
21) and both have the same mean
(hence have the same finite variance).
Now we want to show that
, or, equivalently, that
. We need the functions
Using Lemma 4, we obtain the following chain of relations:
where we have used the condition
. The remaining arguments are similar to those in the proof of Theorem 1, so we omit the details. Thus, the necessity is also established and the proof of Theorem 4 is complete. □
Proof of Theorem 5. We follow an idea similar to that in the proofs of Theorems 1 and 4. It is convenient and useful to see the details, which are based explicitly on properties of the Riemann-zeta function.
Step 1 (sufficiency). Suppose that Equation (
24) has exactly one solution
with mean
and finite variance (
). We want to show that conditions (
23) are satisfied.
Differentiating Equation (
24) twice with respect to
s, we have, for
,
Letting
in (
52) and (53) yields, respectively,
Equivalently, we have, by Lemma 6,
From (
54) and (55) it follows that
and that both quantities
and
are finite; this is because
and
are numbers in the interval
. In addition,
due to (55). However, we need the strict relation
. Suppose, on the contrary, that
. In such a case we would have
by (55), which contradicts the fact that
. Thus, conditions (
23) are satisfied. Besides, (
25) also follows from (
54) and (55) because
The sufficiency part is established.
Step 2 (necessity). Suppose that conditions (
23) are satisfied. We want to show that there exists a solution
to Equation (
24) with mean
and finite variance.
We start by setting two relations,
The denominator
in (
56) is strictly positive by (
23). Additionally, we have
because
(see Lemma 4). Therefore, the RHS of (
32) with
,
defined in (
56) is a bona fide LS transform, say
, of a non-negative random variable,
(by Lemma 1).
Next, starting with the initial
, we define iteratively the sequence of random variables
,
, through their LS transforms
(see Lemma 2):
where
Differentiating (
57) twice with respect to
s and letting
, we find, for
,
By Lemma 6, induction on
n, and relations (
58) and (59), we find that
,
(defined in (
56)) for any
and hence
Moreover, by Lemma 5, we first have that
, and then by the iteration (
57), that
for any
. Thus,
is a sequence of non-negative random variables having the same first two moments
,
, such that the sequence of their LS transforms
is decreasing. Applying Lemma 9 we conclude that there is a limit
, which is the LS transform of a non-negative random variable
with
and
. Hence, it follows from (
57) that
is a solution to Equation (
24) with mean
and finite variance. Applying again Lemma 6 to Equation (
24) for
and
, we conclude that
as in (
56), and hence the solution
has the required variance as in (
25) or (
60).
Finally, it remains to prove the uniqueness of the solution to Equation (
24). Suppose, under conditions (
23), that there are two solutions,
and
, both satisfying Equation (
24) and having the same mean
(hence the same finite variance). We want to show that
, or, equivalently, that
. We use the functions
to express explicitly the two LS transforms:
By Lemma 4, we derive the relations:
We have used the fact that . The remaining arguments are similar to those in the proofs of Theorems 1 and 4; thus, they are omitted. The necessity is established, and the proof of Theorem 5 is complete. □
6. Concluding Remarks
Below are some relevant and useful remarks regarding the problems and the results in this paper and their relations with previous works.
Remark 1. In Theorem 1, we have treated the power-mixture type functional equation (see Equation (2)), which includes the compound-Poisson equation, Equation (28), as a special case. Thus, the problems and the results here can be considered as an extension of previous works.
Remark 2. In Examples 1 and 3, when a.s. for some fixed number , the unique solution to Equations (26) and (30) with mean μ and finite variance is the mixture distribution
Hence, its LS transform has a mixture form:
Actually, for an arbitrary random variable
with
and
and for any number
such that
the unique solution
to Equations (
26) and (
30) satisfies the inequality:
Notice that this relation is satisfied even if the explicit form of
is unknown.
Remark 3. The class of power-mixture transforms defined in Equation (2) is quite rich, and referring to [7] we can see, e.g., that it includes the LS transforms of the so-called random variables (where ), which are expressed in terms of the hyperbolic functions , respectively. Let us provide some details. The random variable
is described by its explicit LS transform as follows:
This is related to Equation (
2) by taking
a.s. and
where
Similar arguments apply to the LS transforms of the random variables
and
whose explicit expressions are
It is also interesting to note that for any fixed
, the following relation holds:
Therefore, we have an interesting distributional equation
This means that the random variable
can be decomposed into a sum of two sub-independent random variables
and
(See [
7].)
Remark 4. We finally consider a functional equation which is similar to Equation (27) (or to Equation (28)), yet not really of the power-mixture type of Equation (10). Let have mean , and let T be a non-negative random variable. Assume that the random variable has the length-biased distribution (5) induced by . Let the random variables be independent copies of , and moreover, let be independent. Then, the distributional equation (different from Equation (27)) is equivalent to the functional equation (compare with Equation (28)). Here, the Bernstein function is of the form . Analyzing the solutions to this kind of functional equation is a serious problem; our attempt to follow the approach of this paper was not successful, and perhaps a new idea is needed. However, there is a specific case when the solution to Equation (
61) is explicitly known. More precisely, let us take
with
U being a continuous random variable uniformly distributed on
In this case Equation (
61) has a unique solution
F:
F is the hyperbolic-cosine distribution with LS transform
Therefore
(see, e.g., [
7] p. 317). Once again, this characteristic property (look at Equation (
61)) with
is found for the variable
only when
. Thus, a natural question arises: What about arbitrary
? As far as we know, for general random variables
, the characterizations of their distributions are challenging but still open problems.
Remark 5. One of the reviewers kindly pointed out the connection of the distributional equation with Pólya’s [22] characterization of the normal distribution. By taking and a.s., the distributional equation reduces to
where are independent copies of on the whole real line (instead of the right half-line). Then, the solutions of the equation are exactly the normal distributions with mean zero. In this regard, the reader can consult [3] (Chapter 3) and [23,24,25,26,27] and the references therein for further extensions of these and related results.