Article

Inequalities for Information Potentials and Entropies

Ana Maria Acu, Alexandra Măduţa, Diana Otrocol and Ioan Raşa
1 Department of Mathematics and Informatics, Lucian Blaga University of Sibiu, Str. Dr. I. Ratiu, No. 5-7, RO-550012 Sibiu, Romania
2 Department of Mathematics, Technical University of Cluj-Napoca, 28 Memorandumului Street, 400114 Cluj-Napoca, Romania
3 Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, P.O. Box 68-1, 400110 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(11), 2056; https://doi.org/10.3390/math8112056
Submission received: 4 October 2020 / Revised: 2 November 2020 / Accepted: 13 November 2020 / Published: 18 November 2020
(This article belongs to the Special Issue Applications of Inequalities and Functional Analysis)

Abstract

We consider a probability distribution $p_0(x), p_1(x), \ldots$ depending on a real parameter $x$. The associated information potential is $S(x) := \sum_k p_k^2(x)$. The Rényi entropy and the Tsallis entropy of order 2 can be expressed as $R(x) = -\log S(x)$ and $T(x) = 1 - S(x)$. We establish recurrence relations, inequalities and bounds for $S(x)$, which lead immediately to similar relations, inequalities and bounds for the two entropies. We show that some sequences $(R_n(x))_{n \geq 0}$ and $(T_n(x))_{n \geq 0}$, associated with sequences of classical positive linear operators, are concave and increasing. Two conjectures are formulated, involving the information potentials associated with the Durrmeyer density of probability and with the Bleimann–Butzer–Hahn probability distribution.

1. Introduction

Entropies associated with discrete or continuous probability distributions are usually described by complicated explicit expressions, depending on one or several parameters. Therefore, it is useful to establish lower and upper bounds for them. Convexity-type properties are also useful: they embody valuable information on the behavior of the functions representing the entropies.
This is why bounds and convexity-type properties of entropies, expressed by inequalities, are under active study: see [1,2,3,4,5,6,7,8,9,10,11,12,13] and the references therein. Our paper is concerned with such inequalities: we give new results, as well as new proofs or improvements of some existing results, in the framework presented below.
Let $p_0(x), p_1(x), \ldots$ be a probability distribution depending on a parameter $x \in I$, where $I$ is a real interval. The associated information potential (also called index of coincidence, for obvious probabilistic reasons) is defined (see [14]) by
$S(x) := \sum_k p_k^2(x), \quad x \in I.$ (1)
If $p(t, x)$, $t \in \mathbb{R}$, $x \in I$, is a probability density function depending on the parameter $x$, the associated information potential is defined as (see [14])
$S(x) := \int_{\mathbb{R}} p^2(t, x)\,dt, \quad x \in I.$ (2)
The information potential is the core concept of the book [14]. The reader can find there properties, extensions and generalizations of $S(x)$, as well as applications to information theoretic learning. Other properties and applications can be found in the recent papers [15,16].
It is important to remark that the Rényi entropy and the Tsallis entropy of order 2 can be expressed in terms of $S(x)$ as
$R(x) = -\log S(x), \quad T(x) = 1 - S(x), \quad x \in I.$ (3)
So the properties of $S(x)$ lead immediately to properties of $R(x)$ and $T(x)$, respectively.
On the other hand, we can consider the discrete positive linear operators
$Lf(x) := \sum_k p_k(x) f(x_k), \quad x \in I,$ (4)
where the $x_k$ are given points in $\mathbb{R}$, and the integral operators
$Mf(x) = \int_{\mathbb{R}} p(t, x) f(t)\,dt, \quad x \in I.$ (5)
In both cases, $f$ belongs to a suitable set of functions defined on $\mathbb{R}$. In this paper, we consider classical operators of this kind, which are used in approximation theory.
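As a concrete illustration of (4), the sketch below (our example, not taken from the paper) instantiates the discrete operator with binomial weights and nodes $x_k = k/n$, which gives the classical Bernstein operator on $[0,1]$.

```python
# Minimal sketch (our illustration): the discrete operator (4) with binomial
# weights p_k(x) = C(n,k) x^k (1-x)^(n-k) and nodes x_k = k/n is the
# classical Bernstein operator, which approximates f on [0, 1].
from math import comb

def bernstein(f, n, x):
    """Evaluate (L_n f)(x) for binomial weights and nodes k/n."""
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) * f(k / n)
               for k in range(n + 1))

# L_n(t^2)(x) = x^2 + x(1-x)/n, so the value below is 0.16 + 0.0048
print(bernstein(lambda t: t * t, 50, 0.4))
```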
Let us mention that the “degree of nonmultiplicativity” of the operator L can be estimated in terms of the information potential S ( x ) : see [17] and the references therein.
In this paper, we will be concerned with a special family of discrete probability distributions, described as follows.
Let $c \in \mathbb{R}$. Set $I_c = \left[0, -\frac{1}{c}\right]$ if $c < 0$, and $I_c = [0, +\infty)$ if $c \geq 0$. For $\alpha \in \mathbb{R}$ and $k \in \mathbb{N}_0$, the binomial coefficients are defined as usual by
$\binom{\alpha}{k} := \frac{\alpha(\alpha - 1)\cdots(\alpha - k + 1)}{k!}$ if $k \in \mathbb{N}$, and $\binom{\alpha}{0} := 1$.
Let $n > 0$ be a real number, $k \in \mathbb{N}_0$ and $x \in I_c$. Define
$p_{n,k}^{[c]}(x) := (-1)^k \binom{-n/c}{k} (cx)^k (1 + cx)^{-n/c - k}, \quad \text{if } c \neq 0,$ (6)
$p_{n,k}^{[0]}(x) := \lim_{c \to 0} p_{n,k}^{[c]}(x) = \frac{(nx)^k}{k!} e^{-nx}, \quad \text{if } c = 0.$ (7)
Then $\sum_{k=0}^{\infty} p_{n,k}^{[c]}(x) = 1$. Suppose that $n > c$ if $c \geq 0$, or $n = -cl$ with some $l \in \mathbb{N}$ if $c < 0$.
With this notation, we consider the discrete probability distribution $\left(p_{n,k}^{[c]}(x)\right)_{k=0,1,\ldots}$, depending on the parameter $x \in I_c$.
According to (1), the associated information potential, or index of coincidence, is
$S_{n,c}(x) := \sum_{k=0}^{\infty} \left(p_{n,k}^{[c]}(x)\right)^2, \quad x \in I_c.$ (8)
The Rényi entropy and the Tsallis entropy corresponding to the same probability distribution are defined, respectively, by (see (3))
$R_{n,c}(x) = -\log S_{n,c}(x)$ (9)
and
$T_{n,c}(x) = 1 - S_{n,c}(x).$ (10)
For $c = -1$, (6) reduces to the binomial distribution and (8) becomes
$S_{n,-1}(x) = \sum_{k=0}^{n} \left(\binom{n}{k} x^k (1 - x)^{n-k}\right)^2, \quad x \in [0, 1].$ (11)
The case $c = 0$ corresponds to the Poisson distribution (see (7)), for which
$S_{n,0}(x) = e^{-2nx} \sum_{k=0}^{\infty} \frac{(nx)^{2k}}{(k!)^2}, \quad x \geq 0.$ (12)
For $c = 1$, we have the negative binomial distribution, with
$S_{n,1}(x) = \sum_{k=0}^{\infty} \left(\binom{n+k-1}{k} x^k (1 + x)^{-n-k}\right)^2, \quad x \geq 0.$ (13)
The binomial, Poisson and negative binomial distributions correspond to the classical Bernstein, Szász-Mirakyan and Baskakov operators from approximation theory, respectively; all of them are of the form (4). In fact, the distribution $\left(p_{n,k}^{[c]}(x)\right)_{k=0,1,\ldots}$ is instrumental in the construction of the family of positive linear operators introduced by Baskakov in [18]; see also [19,20,21,22,23]. As a probability distribution, the family of functions $\left(p_{n,k}^{[c]}\right)_{k=0,1,\ldots}$ was considered in [17,24].
The distribution
$\left(\binom{n}{k} x^k (1 + x)^{-n}\right)_{k=0,1,\ldots,n}, \quad x \in [0, +\infty),$ (14)
corresponds to the Bleimann–Butzer–Hahn operators, while
$\left(\binom{n+k}{k} x^k (1 - x)^{n+1}\right)_{k=0,1,\ldots}, \quad x \in [0, 1),$ (15)
is connected with the Meyer-König and Zeller operators.
The information potentials and the entropies associated with all these distributions were studied in [17]; see also [25,26,27]. It should be mentioned that they satisfy Heun-type differential equations: see [17]. We continue this study here. To keep the same notation as in [17], let us return to (11)–(13) and denote
$F_n(x) := S_{n,-1}(x), \quad G_n(x) := S_{n,1}(x), \quad K_n(x) := S_{n,0}(x).$
Moreover, the information potentials corresponding to (14) and (15) will be denoted by
$U_n(x) := \sum_{k=0}^{n} \left(\binom{n}{k} x^k (1 + x)^{-n}\right)^2, \quad x \in [0, +\infty),$ (16)
$J_n(x) := \sum_{k=0}^{\infty} \left(\binom{n+k}{k} x^k (1 - x)^{n+1}\right)^2, \quad x \in [0, 1).$ (17)
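For readers who wish to experiment, here is a small numerical sketch of ours that evaluates the five information potentials directly from the defining series (11)–(13), (16), (17); the truncation level K_MAX for the infinite sums is an assumption, adequate for the sample arguments used.

```python
# Numerical sketch (ours): information potentials from their defining series;
# infinite sums are truncated at K_MAX terms (assumption).
from math import comb, exp

K_MAX = 200

def F(n, x):   # (11), binomial, x in [0, 1]
    return sum((comb(n, k) * x**k * (1 - x)**(n - k))**2 for k in range(n + 1))

def K(n, x):   # (12), Poisson; term recurrence avoids huge factorials
    term, total = 1.0, 1.0
    for k in range(1, K_MAX):
        term *= n * x / k            # term = (nx)^k / k!
        total += term * term
    return exp(-2 * n * x) * total

def G(n, x):   # (13), negative binomial
    return sum((comb(n + k - 1, k) * x**k * (1 + x)**(-n - k))**2
               for k in range(K_MAX))

def U(n, x):   # (16), Bleimann-Butzer-Hahn weights
    return sum((comb(n, k) * x**k * (1 + x)**(-n))**2 for k in range(n + 1))

def J(n, x):   # (17), Meyer-Koenig and Zeller weights, x in [0, 1)
    return sum((comb(n + k, k) * x**k * (1 - x)**(n + 1))**2
               for k in range(K_MAX))

x = 0.3
print(U(1, x), (1 + x**2) / (1 + x)**2)   # matches the closed form of Section 2
print(J(0, x), (1 - x) / (1 + x))         # likewise
```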
In Section 2, we present several relations between the functions $F_n(x)$, $G_n(x)$, $U_n(x)$, $J_n(x)$, as well as between these functions and the Legendre polynomials. By using the three-term recurrence relation satisfied by the Legendre polynomials, we establish recurrence relations involving three consecutive terms of the sequences $(F_n(x))$, $(G_n(x))$, $(U_n(x))$ and $(J_n(x))$, respectively. We also recall some explicit expressions of these functions.
Section 3 is devoted to inequalities between consecutive terms of the above sequences; in particular, we emphasize that for fixed x, the four sequences are logarithmically convex and hence convex.
Other inequalities are presented in Section 4. All the inequalities can be used to get information about the Rényi entropies and Tsallis entropies connected with the corresponding probability distributions.
Section 5 contains new properties of the function $U_n(x)$ and a problem concerning its shape.
Section 6 is devoted to some inequalities involving integrals of the form $\int_a^b f'^2(x)\,dx$, in relation with certain combinatorial identities.
The information potential associated with the Durrmeyer density of probability is computed in Section 7. We recall a conjecture formulated in [24].
As already mentioned, all the results involving the information potential can be used to derive results about the Rényi and Tsallis entropies. For the sake of brevity, we will usually study only the information potential.
Concerning applications of Rényi and Tsallis entropies, see, e.g., [14,28].

2. Recurrence Relations

$F_n(x)$ is a polynomial, while $G_n(x)$, $U_n(x)$, $J_n(x)$ are rational functions. On their maximal domains, these functions are connected by several relations (see [17], Cor. 13, (46), (53), (54)):
$F_n(x) = (1 - 2x)^{2n+1}\, G_{n+1}(-x),$ (18)
$F_n(x) = U_n\left(\frac{x}{1-x}\right),$ (19)
$F_n(x) = (1 - 2x)^{2n+1}\, J_n\left(\frac{x}{x-1}\right).$ (20)
Consider the Legendre polynomial (see [29], 22.3.1)
$P_n(t) = 2^{-n} \sum_{k=0}^{n} \binom{n}{k}^2 (t + 1)^k (t - 1)^{n-k}.$ (21)
Then (see [17], (39))
$P_n(t) = (1 - 2x)^{-n} F_n(x),$ (22)
where
$t = \frac{1 - 2x + 2x^2}{1 - 2x}, \quad x \in \left[0, \frac{1}{2}\right).$ (23)
Combining (22) with (18)–(20), we get
$P_n(t) = (1 - 2x)^{n+1}\, G_{n+1}(-x),$ (24)
$P_n(t) = (1 - 2x)^{-n}\, U_n\left(\frac{x}{1-x}\right),$ (25)
$P_n(t) = (1 - 2x)^{n+1}\, J_n\left(\frac{x}{x-1}\right).$ (26)
In the theory of special functions, recurrence relations play a crucial role. In particular, the Legendre polynomials (21) satisfy the important recurrence relation ([29], 22.7.1)
$(n + 1) P_{n+1}(t) - (2n + 1)\, t\, P_n(t) + n P_{n-1}(t) = 0.$
This leads us to
Theorem 1.
The functions $F_n(x)$, $G_n(x)$, $U_n(x)$ and $J_n(x)$ satisfy the following three-term recurrence relations:
$2(n + 1) F_{n+1}(x) = (2n + 1)\left(1 + (1 - 2x)^2\right) F_n(x) - 2n (1 - 2x)^2 F_{n-1}(x),$ (27)
$n (1 + 2x)^2 G_{n+1}(x) = (2n - 1)(1 + 2x + 2x^2)\, G_n(x) - (n - 1) G_{n-1}(x),$ (28)
$(n + 1)(1 + t)^2 U_{n+1}(t) = (2n + 1)(t^2 + 1)\, U_n(t) - n (1 - t)^2 U_{n-1}(t),$ (29)
$(n + 1)(1 + t)^2 J_{n+1}(t) = (2n + 1)(t^2 + 1)\, J_n(t) - n (1 - t)^2 J_{n-1}(t).$ (30)
Proof. 
It suffices to use relations (22)–(26). □
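As a quick consistency check of the Legendre connection (22)–(23), one can evaluate both sides numerically; the sketch below is ours and assumes NumPy is available.

```python
# Spot check (ours) of (22)-(23), assuming NumPy for the Legendre polynomial.
from math import comb
from numpy.polynomial import legendre

def F(n, x):   # F_n from (11)
    return sum((comb(n, k) * x**k * (1 - x)**(n - k))**2 for k in range(n + 1))

n, x = 6, 0.2
t = (1 - 2 * x + 2 * x**2) / (1 - 2 * x)
P_n = legendre.legval(t, [0] * n + [1])   # coefficient 1 on P_n only
print(P_n, F(n, x) / (1 - 2 * x)**n)      # the two values should coincide
```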
Remark 1.
According to (29) and (30), $U_n(x)$ and $J_n(x)$ satisfy the same recurrence relation. In fact, from ([17], (49), (55)), we have
$U_n(x) = \sum_{k=0}^{n} c_{n,k} \left(\frac{1-x}{1+x}\right)^{2k},$ (31)
$J_n(x) = \sum_{k=0}^{n} c_{n,k} \left(\frac{1-x}{1+x}\right)^{2k+1},$ (32)
where
$c_{n,k} := 4^{-n} \binom{2(n-k)}{n-k} \binom{2k}{k}, \quad k = 0, 1, \ldots, n.$
From (31) and (32), we see that
$J_n(x) = \frac{1-x}{1+x}\, U_n(x).$ (33)
Remark 2.
From ([17], (56)) and ([30], (21)), we know that
$G_{n+1}(x) = \sum_{k=0}^{n} c_{n,k} (1 + 2x)^{-2k-1},$ (34)
$F_n(x) = \sum_{k=0}^{n} c_{n,k} (1 - 2x)^{2k}.$ (35)
So, the recurrence relations (27)–(30) are accompanied by the initial values
$F_0(x) = 1, \quad F_1(x) = 1 - 2x + 2x^2; \quad G_1(x) = \frac{1}{2x+1}, \quad G_2(x) = \frac{1 + 2x + 2x^2}{(2x+1)^3}; \quad U_0(x) = 1, \quad U_1(x) = \frac{1 + x^2}{(1+x)^2}; \quad J_0(x) = \frac{1-x}{1+x}, \quad J_1(x) = \frac{(1-x)(1+x^2)}{(1+x)^3}.$
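The recurrences and initial values above are easy to test numerically; a sketch of ours:

```python
# Sketch (ours): generate F_n by the three-term recurrence (27), starting
# from F_0 and F_1 above, and compare with the defining sum (11).
from math import comb

def F_direct(n, x):
    return sum((comb(n, k) * x**k * (1 - x)**(n - k))**2 for k in range(n + 1))

def F_recur(n, x):
    f_prev, f_cur = 1.0, 1 - 2 * x + 2 * x**2    # F_0 and F_1
    if n == 0:
        return f_prev
    for m in range(1, n):                        # (27) produces F_{m+1}
        f_prev, f_cur = f_cur, ((2 * m + 1) * (1 + (1 - 2 * x)**2) * f_cur
                                - 2 * m * (1 - 2 * x)**2 * f_prev) / (2 * (m + 1))
    return f_cur

x = 0.27
print(F_direct(7, x), F_recur(7, x))   # should agree up to rounding
```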

3. Inequalities for Information Potentials

In studying a sequence of special functions, not only are recurrence relations important, but also inequalities connecting successive terms; in particular, inequalities showing that the sequence is (logarithmically) convex or concave. This section is devoted to such inequalities involving the sequences $(F_n(x))$, $(G_n(x))$, $(U_n(x))$, and $(J_n(x))$.
Theorem 2.
The function $F_n(x)$ satisfies the inequalities
$F_{n+1}(x) \leq \frac{1 + (4n - 2)x(1-x)}{1 + (4n + 2)x(1-x)}\, F_{n-1}(x),$ (36)
$F_n(x) \leq \frac{1 + 4nx(1-x)}{1 + (4n + 2)x(1-x)}\, F_{n-1}(x),$ (37)
$F_n^2(x) \leq F_{n-1}(x) F_{n+1}(x); \quad 2 F_n(x) \leq F_{n-1}(x) + F_{n+1}(x),$ (38)
for all $n \geq 1$, $x \in [0, 1]$.
Proof. 
We start with the following integral representation (see [17], (29)):
$F_n(x) = \frac{1}{\pi} \int_0^1 f^n(x, t)\, \frac{dt}{\sqrt{t(1-t)}},$ (39)
where $f(x, t) := t + (1 - t)(1 - 2x)^2 \in [0, 1]$.
Since $0 \leq f(x, t) \leq 1$, it follows that
$F_{n+1}(x) \leq F_n(x).$
On the other hand,
$F_{n-1}(x) + F_{n+1}(x) - 2 F_n(x) = \frac{1}{\pi} \int_0^1 f^{n-1}(x, t) \left(1 + f^2(x, t) - 2 f(x, t)\right) \frac{dt}{\sqrt{t(1-t)}},$
and the integrand contains the nonnegative factor $(1 - f(x, t))^2$, which entails
$2 F_n(x) \leq F_{n-1}(x) + F_{n+1}(x).$ (40)
According to (27), we have
$F_n(x) = a_n(x) F_{n-1}(x) + b_n(x) F_{n+1}(x),$ (41)
where
$a_n(x) = \frac{n (1 - 2x)^2}{(2n + 1)(1 - 2x + 2x^2)}, \quad b_n(x) = \frac{n + 1}{(2n + 1)(1 - 2x + 2x^2)}.$
Using (40) and (41), we get
$2 a_n(x) F_{n-1}(x) + 2 b_n(x) F_{n+1}(x) \leq F_{n-1}(x) + F_{n+1}(x),$
which yields
$(2 b_n(x) - 1)\, F_{n+1}(x) \leq (1 - 2 a_n(x))\, F_{n-1}(x),$
and this immediately leads to (36). To prove (37), it suffices to combine (40) and (36). The inequalities (38) were proven in ([31], (3.2) and (3.3)). □
Combining (37) with (9) and (10), we obtain
Corollary 1.
The Rényi entropy $R_n(x)$ and the Tsallis entropy $T_n(x)$ corresponding to the binomial distribution with parameters $n$ and $x$ satisfy the inequalities
$R_n(x) - R_{n-1}(x) \geq \log \frac{1 + (4n + 2)x(1-x)}{1 + 4nx(1-x)} \geq 0,$ (42)
$T_n(x) - T_{n-1}(x) \geq \frac{2x(1-x)}{1 + 4nx(1-x)} \left(1 - T_n(x)\right) \geq 0.$ (43)
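A numerical sanity check of (42) and (43), in a sketch of ours, under the definitions $R_n = -\log F_n$ and $T_n = 1 - F_n$; both prints should give True.

```python
# Sketch (ours): check (42) and (43) for sample parameters.
from math import comb, log

def F(n, x):
    return sum((comb(n, k) * x**k * (1 - x)**(n - k))**2 for k in range(n + 1))

n, x = 12, 0.35
u = x * (1 - x)
R_gap = -log(F(n, x)) + log(F(n - 1, x))     # R_n - R_{n-1}
T_gap = F(n - 1, x) - F(n, x)                # T_n - T_{n-1}
print(R_gap >= log((1 + (4 * n + 2) * u) / (1 + 4 * n * u)) >= 0)   # (42)
print(T_gap >= 2 * u / (1 + 4 * n * u) * F(n, x) >= 0)              # (43)
```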
Theorem 3.
The following inequalities hold:
$U_n^2 \leq U_{n-1} U_{n+1}, \quad 2 U_n \leq U_{n-1} + U_{n+1},$ (44)
$U_{n+1}(x) \leq \frac{1 + 4nx + x^2}{1 + (4n + 4)x + x^2}\, U_{n-1}(x),$ (45)
$U_n(x) \leq \frac{1 + (4n + 2)x + x^2}{1 + (4n + 4)x + x^2}\, U_{n-1}(x),$ (46)
$G_n^2 \leq G_{n-1} G_{n+1}, \quad 2 G_n \leq G_{n-1} + G_{n+1},$ (47)
$G_{n+1}(x) \leq \frac{1 + (4n - 2)x(1+x)}{1 + (4n + 2)x(1+x)}\, G_{n-1}(x),$ (48)
$G_n(x) \leq \frac{1 + 4nx(1+x)}{1 + (4n + 2)x(1+x)}\, G_{n-1}(x),$ (49)
$J_n^2 \leq J_{n-1} J_{n+1}, \quad 2 J_n \leq J_{n-1} + J_{n+1},$ (50)
$J_{n+1}(x) \leq \frac{1 + 4nx + x^2}{1 + (4n + 4)x + x^2}\, J_{n-1}(x),$ (51)
$J_n(x) \leq \frac{1 + (4n + 2)x + x^2}{1 + (4n + 4)x + x^2}\, J_{n-1}(x).$ (52)
Proof. 
The proof is similar to that of Theorem 2, starting from the integral representations (see ([17], (48), (58), (63)))
$U_n(x) = \frac{1}{\pi} \int_0^1 \left(t + (1 - t)\left(\frac{1-x}{1+x}\right)^2\right)^n \frac{dt}{\sqrt{t(1-t)}}, \quad G_n(x) = \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2x)^2\right)^{-n} \frac{dt}{\sqrt{t(1-t)}}, \quad J_n(x) = \frac{1}{\pi} \int_0^1 \left(t + (1 - t)\left(\frac{1+x}{1-x}\right)^2\right)^{-n-1} \frac{dt}{\sqrt{t(1-t)}}.$ □
These integral representations, together with the representation of $F_n(x)$ given by (39), are consequences of important results of Elena Berdysheva ([19], Theorem 1).
From (44)–(52), we can derive inequalities similar to (42) and (43) for the entropies associated with the probability distributions corresponding to $U_n(x)$, $G_n(x)$, and $J_n(x)$.
Remark 3.
Let us remark that the inequalities (38), (44), (47), (50) show that, for each x, the sequences $(F_n(x))_{n \geq 0}$, $(U_n(x))_{n \geq 0}$, $(G_n(x))_{n \geq 0}$, $(J_n(x))_{n \geq 0}$ are logarithmically convex, and so convex; the other inequalities from Theorems 2 and 3 show that the same sequences are decreasing. It immediately follows that the associated sequences of entropies $(R_n(x))_{n \geq 0}$ and $(T_n(x))_{n \geq 0}$ are concave and increasing; see also (42) and (43).

4. Other Inequalities

Besides their intrinsic interest, Theorems 4 and 6 below will be instrumental in establishing new lower and upper bounds for the information potentials $(F_n(x))_{n \geq 0}$, $(U_n(x))_{n \geq 0}$, $(G_n(x))_{n \geq 0}$, $(J_n(x))_{n \geq 0}$, and consequently for the associated Rényi and Tsallis entropies.
Let us return to the information potential (8). According to ([17], (10)), for $c \neq 0$ we have
$S_{n,c}(x) = \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2cx)^2\right)^{-n/c} \frac{dt}{\sqrt{t(1-t)}}.$ (53)
Let $c < 0$. Using (53) and Chebyshev's inequality for synchronous functions, we can write
$S_{n-c,c}(x) = \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2cx)^2\right)^{-n/c} \left(t + (1 - t)(1 + 2cx)^2\right) \frac{dt}{\sqrt{t(1-t)}} \geq \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2cx)^2\right)^{-n/c} \frac{dt}{\sqrt{t(1-t)}} \cdot \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2cx)^2\right) \frac{dt}{\sqrt{t(1-t)}} = S_{n,c}(x) \left(1 + 2cx + 2c^2x^2\right).$
For c > 0 , we use Chebyshev’s inequality for asynchronous functions and obtain the reverse inequality. So we have
Theorem 4.
If $c < 0$, then
$S_{n-c,c}(x) \geq \left(1 + 2cx(1 + cx)\right) S_{n,c}(x).$ (54)
If $c > 0$, the inequality is reversed.
Corollary 2.
For $c = -1$, (54) and (37) yield
$(1 - 2x(1-x))\, F_n(x) \leq F_{n+1}(x) \leq \frac{1 + (4n + 4)x(1-x)}{1 + (4n + 6)x(1-x)}\, F_n(x).$ (55)
For $c = 1$, we obtain
$\frac{1}{1 + 2x(1+x)}\, G_n(x) \leq G_{n+1}(x) \leq \frac{1 + (4n + 4)x(1+x)}{1 + (4n + 6)x(1+x)}\, G_n(x).$ (56)
Now, using ([17], (48)), we have
$U_{n+1}(x) = \frac{1}{\pi} \int_0^1 \left(\left(\frac{1-x}{1+x}\right)^2 + \frac{4x}{(1+x)^2}\, t\right)^n \left(\left(\frac{1-x}{1+x}\right)^2 + \frac{4x}{(1+x)^2}\, t\right) \frac{dt}{\sqrt{t(1-t)}} \geq U_n(x)\, \frac{1}{\pi} \int_0^1 \left(\left(\frac{1-x}{1+x}\right)^2 + \frac{4x}{(1+x)^2}\, t\right) \frac{dt}{\sqrt{t(1-t)}} = \frac{1 + x^2}{(1+x)^2}\, U_n(x).$
Therefore, using also (46), we get
$\frac{1 + x^2}{(1+x)^2}\, U_n(x) \leq U_{n+1}(x) \leq \frac{1 + (4n + 6)x + x^2}{1 + (4n + 8)x + x^2}\, U_n(x).$ (57)
Now (57) and (33) yield
$\frac{1 + x^2}{(1+x)^2}\, J_n(x) \leq J_{n+1}(x) \leq \frac{1 + (4n + 6)x + x^2}{1 + (4n + 8)x + x^2}\, J_n(x).$ (58)
Theorem 5.
The following inequalities are satisfied:
$(1 - 2x(1-x))^n \leq F_n(x) \leq \sqrt{\frac{1 + 4x(1-x)}{1 + (4n + 4)x(1-x)}}, \quad n \geq 0,\ x \in [0, 1],$ (59)
$\left(\frac{1}{1 + 2x(1+x)}\right)^{n-1} \frac{1}{1 + 2x} \leq G_n(x) \leq \sqrt{\frac{1 + 8x(1+x)}{1 + (4n + 4)x(1+x)}}, \quad n \geq 1,\ x \geq 0,$ (60)
$\left(\frac{1 + x^2}{(1+x)^2}\right)^n \leq U_n(x) \leq \sqrt{\frac{1 + 6x + x^2}{1 + (4n + 6)x + x^2}}, \quad n \geq 0,\ x \geq 0,$ (61)
$\frac{1-x}{1+x} \left(\frac{1 + x^2}{(1+x)^2}\right)^n \leq J_n(x) \leq \frac{1-x}{1+x} \sqrt{\frac{1 + 6x + x^2}{1 + (4n + 6)x + x^2}}, \quad n \geq 0,\ x \in [0, 1].$ (62)
Proof. 
Writing (55) for $n = 0, 1, \ldots, m-1$ and multiplying term by term, we get
$(1 - 2x(1-x))^m \leq F_m(x) \leq \frac{1 + 4x(1-x)}{1 + 6x(1-x)} \cdot \frac{1 + 8x(1-x)}{1 + 10x(1-x)} \cdots \frac{1 + 4mx(1-x)}{1 + (4m + 2)x(1-x)}.$
Using
$\frac{1 + tx(1-x)}{1 + (t + 2)x(1-x)} \leq \frac{1 + (t + 2)x(1-x)}{1 + (t + 4)x(1-x)}, \quad t \geq 0,$
it follows that
$F_m^2(x) \leq \frac{1 + 4x(1-x)}{1 + (4m + 4)x(1-x)},$
and so (59) is proven. The other three relations can be proved similarly, using (56)–(58). □
Remark 4.
The inequalities (59)–(62) in Theorem 5 provide lower and upper bounds for the information potentials $F_n$, $G_n$, $U_n$, $J_n$, and consequently for the associated entropies. They can be compared with other bounds existing in the literature, obtained with other methods. For the moment, let us prove the inequality
$(1 - 4x(1-x))^{n/2} \leq F_n(x), \quad n \geq 0,\ x \in [0, 1],$ (63)
and compare it with the first inequality in (59).
According to ([17], (4.6), (4.2)),
$F_n(x) = \sum_{j=0}^{n} c_{n,j} (1 - 2x)^{2j},$
where (see also (87) and (84))
$c_{n,j} := 4^{-n} \binom{2j}{j} \binom{2n - 2j}{n - j}, \quad \sum_{j=0}^{n} c_{n,j} = 1, \quad \sum_{j=0}^{n} j\, c_{n,j} = \frac{n}{2}.$
Using the weighted arithmetic mean–geometric mean inequality, we have
$F_n(x) \geq \prod_{j=0}^{n} \left((1 - 2x)^{2j}\right)^{c_{n,j}} = (1 - 2x)^{2 \sum_{j=0}^{n} j c_{n,j}} = (1 - 4x(1-x))^{n/2},$
and this is (63). Clearly, the first inequality in (59) provides a lower bound for $F_n(x)$ which is better than the one provided by (63).
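The comparison is immediate numerically; in the sketch below (ours), the chain of inequalities should print True.

```python
# Sketch (ours): the lower bound from (59) dominates the one from (63),
# and both sit below F_n, which sits below the upper bound in (59).
from math import comb

def F(n, x):
    return sum((comb(n, k) * x**k * (1 - x)**(n - k))**2 for k in range(n + 1))

n, x = 10, 0.3
u = x * (1 - x)
lower_63 = (1 - 4 * u)**(n / 2)
lower_59 = (1 - 2 * u)**n
upper_59 = ((1 + 4 * u) / (1 + (4 * n + 4) * u))**0.5
print(lower_63 <= lower_59 <= F(n, x) <= upper_59)   # expected: True
```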
Theorem 6.
The information potential satisfies the following inequality for all $c \in \mathbb{R}$:
$S_{m+n,c}(x) \geq S_{m,c}(x)\, S_{n,c}(x).$ (64)
Proof. 
If $c \neq 0$, we can use (53) to get
$S_{m+n,c}(x) = \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2cx)^2\right)^{-m/c} \left(t + (1 - t)(1 + 2cx)^2\right)^{-n/c} \frac{dt}{\sqrt{t(1-t)}}.$
Applying Chebyshev's inequality for synchronous functions, we obtain
$S_{m+n,c}(x) \geq \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2cx)^2\right)^{-m/c} \frac{dt}{\sqrt{t(1-t)}} \cdot \frac{1}{\pi} \int_0^1 \left(t + (1 - t)(1 + 2cx)^2\right)^{-n/c} \frac{dt}{\sqrt{t(1-t)}} = S_{m,c}(x)\, S_{n,c}(x).$
For $c = 0$, we have (see [17], (13))
$S_{n,0}(x) = \frac{1}{\pi} \int_{-1}^{1} e^{-2nx(1+t)}\, \frac{dt}{\sqrt{1 - t^2}}.$
With the same Chebyshev inequality, one obtains (64). □
From Theorem 6, we derive
Corollary 3.
For the Rényi entropy $R_{n,c}(x)$ and the Tsallis entropy $T_{n,c}(x)$, we have
$R_{m+n,c}(x) \leq R_{m,c}(x) + R_{n,c}(x),$ (65)
$T_{m+n,c}(x) \leq T_{m,c}(x) + T_{n,c}(x) - T_{m,c}(x)\, T_{n,c}(x).$ (66)
Remark 5.
The inequalities (65) and (66) express the subadditivity of the sequences $(R_n(x))_{n \geq 0}$ and $(T_n(x))_{n \geq 0}$.
Remark 6.
From (64) with $c = -1$, we obtain
$F_{m+n}(x) \geq F_m(x)\, F_n(x), \quad x \in [0, 1].$ (67)
Here is a probabilistic proof of this inequality.
Let $X_m$, $X_n$, $Y_m$, $Y_n$ be independent binomial random variables with the same success probability $x \in [0, 1]$, the subscript indicating the number of trials. Then
$F_n(x) = \sum_{k=0}^{n} P(X_n = Y_n = k) = P(X_n = Y_n),$
and consequently
$F_{m+n}(x) = P(X_{m+n} = Y_{m+n}) = P(X_m + X_n = Y_m + Y_n) \geq P(X_m = Y_m \text{ and } X_n = Y_n) = P(X_m = Y_m)\, P(X_n = Y_n) = F_m(x)\, F_n(x),$
and this proves (67). It would be useful to have purely probabilistic proofs of other inequalities in this specific framework; they would facilitate a deeper understanding of the interplay between analytic and probabilistic proofs and results.
Inequalities similar to (67) hold for $G_n(x)$ (apply (64) with $c = 1$), and also for $U_n(x)$ and $J_n(x)$. Indeed, according to (19),
$U_{m+n}\left(\frac{x}{1-x}\right) = F_{m+n}(x) \geq F_m(x)\, F_n(x) = U_m\left(\frac{x}{1-x}\right) U_n\left(\frac{x}{1-x}\right)$
for all $x \in [0, 1)$, which implies
$U_{m+n}(t) \geq U_m(t)\, U_n(t), \quad t \in [0, +\infty).$ (68)
From (68), multiplying by $\left(\frac{1-t}{1+t}\right)^2$ and using (33), we get
$J_{m+n}(t) \geq \frac{1-t}{1+t}\, J_{m+n}(t) \geq J_m(t)\, J_n(t), \quad t \in [0, 1).$ (69)
Let us remark that the inequality (69) is stronger than the similar inequalities for $F_n$, $G_n$ and $U_n$.
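The probabilistic argument above is easy to illustrate by simulation; the following Monte Carlo sketch (ours; the sample sizes are arbitrary) estimates both sides of (67).

```python
# Monte Carlo sketch (ours) of the probabilistic proof of (67):
# P(X_{m+n} = Y_{m+n}) is realized via X_m + X_n = Y_m + Y_n and should
# dominate P(X_m = Y_m and X_n = Y_n) = F_m(x) F_n(x), up to sampling noise.
import random

def binom_rv(n, x):
    return sum(random.random() < x for _ in range(n))

m, n, x, trials = 4, 6, 0.35, 200_000
sum_eq = split_eq = 0
for _ in range(trials):
    xm, xn = binom_rv(m, x), binom_rv(n, x)
    ym, yn = binom_rv(m, x), binom_rv(n, x)
    sum_eq += (xm + xn == ym + yn)        # estimates F_{m+n}(x)
    split_eq += (xm == ym and xn == yn)   # estimates F_m(x) * F_n(x)
print(sum_eq / trials, split_eq / trials)  # first >= second, approximately
```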
Corollary 4.
For $m \geq 0$, $n \geq 0$, $k \geq 0$, we have
$F_{m+kn}(x) \geq F_m(x)\, F_n^k(x), \quad x \in [0, 1].$
In particular,
$F_{kn}(x) \geq F_n^k(x); \quad F_n(x) \geq F_1^n(x).$ (70)
Proof. 
Starting from (67), it suffices to use induction on $k$. □
Similar results hold for $G_n(x)$, $U_n(x)$, $J_n(x)$, but we omit the details. However, let us remark that $F_1(x) = 1 - 2x(1-x)$, and so the second inequality in (70) is precisely the first inequality in (59).
Remark 7.
Convexity properties of the information potentials and the associated entropies were presented in [31,32], but the hypothesis $a_{n-k} = a_k$, $k = 0, 1, \ldots, n$, was inadvertently omitted in ([31], Conjecture 6.1).

5. More about $U_n(t)$

This section contains some additional properties of the function $U_n$, defined initially by (16). Using the simple relation (33) connecting $J_n$ and $U_n$, one can easily derive new properties of the function $J_n$ given by (17).
Theorem 7.
(i)
$U_n$ is decreasing on $[0, 1]$ and increasing on $[1, \infty)$.
(ii)
$U_n$ is logarithmically convex on $[0, 1]$.
Proof. 
It was proved (see [27,31,32]) that $F_n$ is a logarithmically convex function on $[0, 1]$, i.e.,
$F_n(x) F_n''(x) - F_n'^2(x) \geq 0, \quad x \in [0, 1].$ (71)
Let $x = \frac{t}{t+1}$, $t \in [0, \infty)$, so that $x \in [0, 1)$. Then $t = \frac{x}{1-x}$, and (19) shows that $U_n(t) = F_n(x)$. Consequently,
$U_n'(t) = F_n'(x)\, \frac{dx}{dt} = \frac{F_n'(x)}{(t+1)^2}.$ (72)
It is known (see [17]) that $F_n'(x) \leq 0$ for $x \in [0, \frac{1}{2}]$ and $F_n'(x) \geq 0$ for $x \in [\frac{1}{2}, 1]$. It follows that $U_n'(t) \leq 0$ for $t \in [0, 1]$, and $U_n'(t) \geq 0$ for $t \in [1, \infty)$. This proves (i).
To prove (ii), let us remark that
$U_n''(t) = \frac{F_n''(x)}{(t+1)^4} - \frac{2 F_n'(x)}{(t+1)^3}.$
Combined with (72), this yields
$U_n(t) U_n''(t) - U_n'^2(t) = \frac{F_n(x) F_n''(x) - F_n'^2(x)}{(t+1)^4} - \frac{2 F_n(x) F_n'(x)}{(t+1)^3}.$
Using (71) and $F_n'(x) \leq 0$ for $x \in [0, \frac{1}{2}]$, we obtain
$U_n(t) U_n''(t) - U_n'^2(t) \geq 0, \quad t \in [0, 1],$
which proves (ii). □
Remark 8.
$U_n(0) = \lim_{t \to +\infty} U_n(t) = 1$ (see (31)). These equalities, Theorem 7, and graphical experiments (see Figure 1) suggest that $U_n$ is convex on $[0, t_n]$ and concave on $[t_n, +\infty)$, for a suitable $t_n > 1$. It would be interesting to have a proof of this shape of $U_n(t)$, and to find the value of $t_n$.
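A crude numerical probe (ours; the step sizes are arbitrary) locates the sign change of $U_n''$ and hence an approximate $t_n$:

```python
# Sketch (ours): approximate the inflection point t_n of Remark 8 by
# scanning the sign of U_n'' (central differences) to the right of t = 1.
from math import comb

def U(n, t):
    return sum((comb(n, k) * t**k * (1 + t)**(-n))**2 for k in range(n + 1))

def U2(n, t, h=1e-3):
    return (U(n, t + h) - 2 * U(n, t) + U(n, t - h)) / h**2

n = 10
ts = [1 + 0.01 * i for i in range(1, 2000)]
t_n = next(t for t in ts if U2(n, t) < 0)   # raises StopIteration if no change
print(f"U_{n}'' changes sign near t = {t_n:.2f}")   # a value > 1
```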
In order to compute $U_n(x)$, we have the explicit expressions (16) and (31), and the three-term recurrence relation (29). In what follows, we provide a two-term recurrence relation.
According to ([31], (2.3)),
$x(1-x) F_n'(x) = n(1 - 2x) \left(F_n(x) - F_{n-1}(x)\right), \quad n \geq 1,\ x \in [0, 1].$
Setting again $x = \frac{t}{t+1}$, $t \in [0, \infty)$, and $U_n(t) = F_n(x)$, we obtain, after some computation,
$t(t+1)\, U_n'(t) = n(1 - t) \left(U_n(t) - U_{n-1}(t)\right), \quad t \in [0, \infty).$ (73)
Multiplying (73) by $(t+1)^{2n-1}/t^{n+1}$, we obtain
$\left(\frac{(t+1)^{2n}}{t^n}\, U_n(t)\right)' = n\, \frac{(t-1)(t+1)^{2n-1}}{t^{n+1}}\, U_{n-1}(t), \quad t \geq 1.$ (74)
Let $s(t) := \frac{(t+1)^2}{t}$, $t \geq 1$, and
$V_n(t) := s^n(t)\, U_n(t), \quad t \geq 1.$
Theorem 8.
The sequence $(V_n(t))_{n \geq 0}$ satisfies the recurrence relation
$V_n(t) = \binom{2n}{n} + n \int_1^t \frac{x^2 - 1}{x^2}\, V_{n-1}(x)\, dx, \quad n \geq 1,$ (75)
with $V_0(t) = 1$, $t \geq 1$.
Proof. 
From (74), we obtain
$\left(s^n(x)\, U_n(x)\right)' = n\, s^{n-1}(x)\, s'(x)\, U_{n-1}(x), \quad x \geq 1,$
i.e.,
$V_n'(x) = n\, s^{n-1}(x)\, s'(x)\, \frac{V_{n-1}(x)}{s^{n-1}(x)}.$
This reduces to
$V_n'(x) = n\, s'(x)\, V_{n-1}(x),$
and therefore
$V_n(t) - V_n(1) = n \int_1^t \frac{x^2 - 1}{x^2}\, V_{n-1}(x)\, dx.$ (76)
Now $V_n(1) = s^n(1)\, U_n(1)$, and (31) shows that $V_n(1) = 4^n \cdot 4^{-n} \binom{2n}{n} = \binom{2n}{n}$. Together with (76), this proves (75). We have also $V_0(t) = U_0(t) = 1$ (see Remark 2). □
Example 1.
From (75), we deduce
$V_1(t) = 2 + \int_1^t \left(1 - \frac{1}{x^2}\right) dx = t + \frac{1}{t},$
and consequently
$U_1(t) = \frac{V_1(t)}{s(t)} = \frac{t^2 + 1}{(t+1)^2}.$
Moreover,
$V_2(t) = 6 + 2 \int_1^t \left(1 - \frac{1}{x^2}\right)\left(x + \frac{1}{x}\right) dx = 4 + t^2 + \frac{1}{t^2},$
i.e., $U_2(t) = \frac{t^4 + 4t^2 + 1}{(t+1)^4}$, and so on.
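The recurrence (75) can also be iterated symbolically; a sketch of ours, assuming SymPy is available, reproduces Example 1.

```python
# Symbolic sketch (ours, assuming SymPy): iterate (75) and recover
# U_n = V_n / s^n, reproducing Example 1.
import sympy as sp

t, x = sp.symbols('t x', positive=True)
s = (t + 1)**2 / t

def V(n):
    if n == 0:
        return sp.Integer(1)
    prev = V(n - 1).subs(t, x)
    return sp.binomial(2 * n, n) + n * sp.integrate(
        (x**2 - 1) / x**2 * prev, (x, 1, t))

for n in range(3):
    print(n, sp.simplify(V(n) / s**n))
# prints 1, (t**2 + 1)/(t + 1)**2, (t**4 + 4*t**2 + 1)/(t + 1)**4
```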
Remark 9.
A recurrence relation similar to (75), defining a sequence of Appell polynomials, was instrumental in ([31], Section 5) for studying the function $F_n$.
Remark 10.
According to (61), $\lim_{n \to +\infty} U_n(x) = 0$ for each $x > 0$, i.e., the sequence of functions $(U_n)_{n \geq 0}$ converges pointwise to zero on $(0, \infty)$. The convergence is not uniform, because $U_n(0) = \lim_{x \to \infty} U_n(x) = 1$ for all $n \geq 0$.

6. Inequalities for the Integral of the Squared Derivative

Integrals of the form $\int_a^b f'^2(x)\, dx$ are important for several applications; see, e.g., ([33], Section 3.10). In this section, we present bounds for such integrals using the logarithmic convexity of the functions $F_n$, $G_n$, $K_n$. The results involve some combinatorial identities.
Theorem 9.
The following inequalities are valid for $n = 0, 1, \ldots$:
$\int_0^1 F_n'^2(x)\, dx \leq 2n,$ (77)
$\int_0^{\infty} G_{n+1}'^2(x)\, dx \leq n + 1,$ (78)
$\int_0^{\infty} K_n'^2(x)\, dx \leq n.$ (79)
Proof. 
Let us return to (71): it gives $F_n'^2 \leq F_n F_n''$. Integrating by parts, we obtain
$\int_0^1 F_n'^2(x)\, dx \leq \frac{1}{2} \left(F_n'(1) F_n(1) - F_n'(0) F_n(0)\right).$ (80)
Remembering that $F_n(x) = S_{n,-1}(x)$ and using (11), we obtain
$F_n'(0) = -2n, \quad F_n'(1) = 2n, \quad F_n(0) = F_n(1) = 1.$ (81)
Now (77) is a consequence of (80) and (81).
The logarithmic convexity of the functions $G_{n+1}$ and $K_n$ on $[0, \infty)$ was proven in [34]. Using $G_n(x) = S_{n,1}(x)$ and (13), it is easy to derive
$G_{n+1}(0) = 1, \quad G_{n+1}'(0) = -2(n + 1),$ (82)
$\lim_{x \to \infty} G_{n+1}(x) = \lim_{x \to \infty} G_{n+1}'(x) = 0,$ (83)
and from
$\int_0^{\infty} G_{n+1}'^2(x)\, dx \leq \frac{1}{2} \left(\lim_{x \to \infty} G_{n+1}'(x) G_{n+1}(x) - G_{n+1}'(0) G_{n+1}(0)\right),$
combined with (82) and (83), we obtain (78).
The proof of (79) is similar, and we omit it. □
Remark 11.
If we compute $F_n'(1)$ starting from (35), we obtain
$F_n'(1) = 4 \sum_{k=1}^{n} k\, c_{n,k}.$
Combined with (81), this yields
$\sum_{k=1}^{n} k \binom{2(n-k)}{n-k} \binom{2k}{k} = \frac{n\, 4^n}{2}.$ (84)
On the other hand, (34) leads to
$G_{n+1}'(0) = -2 \sum_{k=0}^{n} (2k + 1)\, c_{n,k}.$ (85)
Now (82) and (85) produce
$\sum_{k=0}^{n} (2k + 1) \binom{2(n-k)}{n-k} \binom{2k}{k} = (n + 1)\, 4^n.$ (86)
From (84) and (86), we obtain
$\sum_{k=0}^{n} \binom{2(n-k)}{n-k} \binom{2k}{k} = 4^n,$ (87)
which is (3.90) in Gould [35].
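These identities are easily confirmed for small n; a sketch of ours:

```python
# Sketch (ours): verify the identities (84), (86), (87) for n = 1..7.
from math import comb

def A(n, k):
    return comb(2 * (n - k), n - k) * comb(2 * k, k)

for n in range(1, 8):
    assert sum(k * A(n, k) for k in range(1, n + 1)) == n * 4**n // 2       # (84)
    assert sum((2 * k + 1) * A(n, k) for k in range(n + 1)) == (n + 1) * 4**n  # (86)
    assert sum(A(n, k) for k in range(n + 1)) == 4**n                       # (87)
print("identities (84), (86), (87) verified for n = 1..7")
```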

7. Information Potential for the Durrmeyer Density of Probability

Consider the Durrmeyer operators
$L_n f(x) := (n + 1) \int_0^1 \sum_{i=0}^{n} b_{n,i}(x)\, b_{n,i}(t)\, f(t)\, dt,$
$f \in C[0, 1]$, $x \in [0, 1]$; see, e.g., [36]. They are of the form (5).
Here, $b_{n,i}(x) = \binom{n}{i} x^i (1 - x)^{n-i}$. The kernel is
$K_n(x, t) := (n + 1) \sum_{i=0}^{n} b_{n,i}(x)\, b_{n,i}(t),$
and according to (2), the associated information potential is
$S_n(x) := (n + 1)^2 \int_0^1 \sum_{i,j=0}^{n} b_{n,i}(x)\, b_{n,j}(x)\, b_{n,i}(t)\, b_{n,j}(t)\, dt = (n + 1)^2 \sum_{i,j=0}^{n} b_{n,i}(x)\, b_{n,j}(x) \binom{n}{i} \binom{n}{j} \binom{2n}{i+j}^{-1} \int_0^1 b_{2n,i+j}(t)\, dt = \frac{(n + 1)^2}{2n + 1} \sum_{i,j=0}^{n} \binom{n}{i}^2 \binom{n}{j}^2 \binom{2n}{i+j}^{-1} x^{i+j} (1 - x)^{2n-i-j} = \frac{(n + 1)^2}{2n + 1} \sum_{i,j=0}^{n} \left(\binom{n}{i} \binom{n}{j} \binom{2n}{i+j}^{-1}\right)^2 b_{2n,i+j}(x).$
Setting $i + j = k$, we obtain
$S_n(x) = \sum_{k=0}^{2n} q_{n,k}\, b_{2n,k}(x), \quad x \in [0, 1],$
where
$q_{n,k} := \frac{(n + 1)^2}{2n + 1} \binom{2n}{k}^{-2} \sum_{l=0}^{k} \binom{n}{l}^2 \binom{n}{k-l}^2 = \frac{(n + 1)^2}{2n + 1} \binom{2n}{n}^{-2} \sum_{l=0}^{k} \binom{k}{l}^2 \binom{2n-k}{n-l}^2, \quad k = 0, 1, \ldots, 2n,$
with $\binom{n}{m} = 0$ if $m > n$.
It is easy to see that $q_{n,2n-k} = q_{n,k}$, $k = 0, 1, \ldots, 2n$.
We recall here Conjecture 4.6 from [24].
Conjecture 1.
([24]) The sequence $(q_{n,k})_{k=0,1,\ldots,2n}$ is convex and, consequently, the function $S_n$ is convex on $[0, 1]$.
The following numerical and graphical experiments support this conjecture (see Table 1 and Figure 2).
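Such experiments are easy to reproduce; the sketch below (ours) computes $q_{n,k}$ exactly with rational arithmetic and checks the symmetry and the convexity of each row of Table 1.

```python
# Sketch (ours): exact computation of q_{n,k}; checks q_{n,2n-k} = q_{n,k}
# and the convexity of the sequence (nonnegative second differences).
from fractions import Fraction
from math import comb

def q(n, k):
    s = sum(comb(n, l)**2 * comb(n, k - l)**2 for l in range(k + 1))
    return Fraction((n + 1)**2 * s, (2 * n + 1) * comb(2 * n, k)**2)

for n in range(1, 7):
    row = [q(n, k) for k in range(2 * n + 1)]
    assert row == row[::-1]                              # symmetry
    assert all(row[k - 1] - 2 * row[k] + row[k + 1] >= 0
               for k in range(1, 2 * n))                 # convexity
    print(n, row[:3])   # e.g. n = 1 starts 4/3, 2/3, as in Table 1
```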

8. Concluding Remarks and Future Work

Bounds and convexity-type properties of entropies are important and useful, especially when the entropies are expressed as complicated functions depending on one or several variables. Of course, bounds and convexity properties are expressed in terms of inequalities; therefore, their study is a branch of the theory of inequalities, an area under active research. Our paper contains some contributions in this framework. We have obtained analytic inequalities, with analytic methods, but involving certain information potentials and their associated Rényi and Tsallis entropies. The probabilistic flavor is underlined by the purely probabilistic proof of the inequality (67). Finding such probabilistic proofs for other inequalities in this context will be a topic for future research. For example, is there a purely probabilistic proof of the subadditivity property (65) of the Rényi entropy $R_{n,c}(x)$?
The area in which our results can be placed is delineated by the papers [1,2,3,4,5,6,7,8,9,10,11,12,13] and the references therein: the titles are expressive by themselves.
Basically, we are concerned with the family of probability distributions $\left(p_{n,k}^{[c]}(x)\right)_{k=0,1,\ldots}$, strongly related to the family of generalized Baskakov positive linear operators. The interplay between the theory of positive linear operators and probability theory is still a rich source of important results. Besides the binomial, Poisson and negative binomial distributions (corresponding, respectively, to $c = -1$, $c = 0$, $c = 1$, and associated with the Bernstein, Szász-Mirakyan and classical Baskakov operators), we consider in our paper, from an analytic point of view, the distributions associated with the Bleimann–Butzer–Hahn, Meyer-König and Zeller, and Durrmeyer operators. Their study from a probabilistic perspective is deferred to a future paper. Another possible direction of further research is to investigate with our methods the distributions associated with other classical or more recent sequences of positive linear operators.
The information potentials $F_n$, $G_n$, $U_n$, and $J_n$ have strong relations with the Legendre polynomials. Quite naturally, the recurrence relations satisfied by these polynomials yield similar relations for the information potentials. It should be mentioned that the differential equation characterizing the Legendre polynomials was used in [17] in order to show that $F_n$, $G_n$, $U_n$ and $J_n$ satisfy Heun-type differential equations, and consequently to obtain bounds for them. Other bounds are obtained in this paper, starting from the important integral representations given in [19]. They can be compared with other bounds from the literature, and this is another possible topic for further research.
For a fixed n, the convexity and even the logarithmic convexity of the function $F_n(x)$ were established in [17,27,31,32,34]. In this paper, we prove that for a fixed x, the sequence $(F_n(x))_{n \geq 0}$ is logarithmically convex. Similar results hold for the other information potentials, and they have consequences concerning the associated entropies. However, we think that this direction of research can be continued and developed.
Two conjectures, accompanied by graphical experiments supporting them, are mentioned in our paper.

Author Contributions

These authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

Project financed by Lucian Blaga University of Sibiu & Hasso Plattner Foundation research grants LBUS-IRG-2020-06.

Acknowledgments

The authors are very grateful to the reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Harremoës, P.; Topsoe, F. Inequalities between entropy and index of coincidence derived from information diagrams. IEEE Trans. Inf. Theory 2001, 47, 2944–2960.
2. Harremoës, P. Binomial and Poisson distributions as maximum entropy distributions. IEEE Trans. Inf. Theory 2001, 47, 2039–2041.
3. Hillion, E. Concavity of entropy along binomial convolution. Electron. Commun. Probab. 2012, 17, 1–9.
4. Hillion, E.; Johnson, O. A proof of the Shepp–Olkin entropy concavity conjecture. Bernoulli 2017, 23, 3638–3649.
5. Knessl, C. Integral representation and asymptotic expansions for Shannon and Renyi entropies. Appl. Math. Lett. 1998, 11, 69–74.
6. Adell, J.A.; Lekuona, A.; Yu, Y. Sharp bound on the entropy of the Poisson law and related quantities. IEEE Trans. Inf. Theory 2010, 56, 2299–2306.
7. Melbourne, J.; Tkocz, T. Reversals of Rényi entropy inequalities under log-concavity. IEEE Trans. Inf. Theory 2020.
8. Shepp, L.A.; Olkin, I. Entropy of the sum of independent Bernoulli random variables and of the multinomial distribution. In Contributions to Probability: A Collection of Papers Dedicated to Eugene Lukacs; Academic Press: London, UK, 1981; pp. 201–206.
9. Hillion, E.; Johnson, O. Discrete versions of the transport equation and the Shepp–Olkin conjecture. Ann. Probab. 2016, 44, 276–306.
10. Alzer, H. A refinement of the entropy inequality. Ann. Univ. Sci. Bp. 1995, 38, 13–18.
11. Chang, S.-C.; Weldon, E. Coding for T-user multiple-access channels. IEEE Trans. Inf. Theory 1979, 25, 684–691.
12. Xu, D. Energy, Entropy and Information Potential for Neural Computation. Ph.D. Thesis, University of Florida, Gainesville, FL, USA, 1999.
13. Bărar, A.; Mocanu, G.R.; Raşa, I. Bounds for some entropies and special functions. Carpathian J. Math. 2018, 34, 9–15.
14. Principe, J.C. Information Theoretic Learning: Rényi's Entropy and Kernel Perspectives; Springer: New York, NY, USA, 2010.
15. Acu, A.M.; Başcanbaz-Tunca, G.; Raşa, I. Information potential for some probability density functions. Appl. Math. Comput. 2021, 389, 125578.
16. Acu, A.M.; Başcanbaz-Tunca, G.; Raşa, I. Bounds for indices of coincidence and entropies. 2020, submitted.
17. Raşa, I. Entropies and Heun functions associated with positive linear operators. Appl. Math. Comput. 2015, 268, 422–431.
18. Baskakov, V.A. An instance of a sequence of positive linear operators in the space of continuous functions. Dokl. Akad. Nauk SSSR 1957, 113, 249–251.
19. Berdysheva, E. Studying Baskakov–Durrmeyer operators and quasi-interpolants via special functions. J. Approx. Theory 2007, 149, 131–150.
20. Heilmann, M. Erhöhung der Konvergenzgeschwindigkeit bei der Approximation von Funktionen mit Hilfe von Linearkombinationen spezieller positiver linearer Operatoren; Habilitationsschrift, Universität Dortmund: Dortmund, Germany, 1992.
21. Wagner, M. Quasi-Interpolanten zu genuinen Baskakov–Durrmeyer-Typ Operatoren; Shaker: Aachen, Germany, 2013.
22. Acu, A.M.; Heilmann, M.; Raşa, I. Linking Baskakov type operators. In Constructive Theory of Functions, Sozopol 2019; Draganov, B., Ivanov, K., Nikolov, G., Uluchev, R., Eds.; Prof. Marin Drinov Publishing House of BAS: Sofia, Bulgaria, 2020; pp. 23–38.
23. Heilmann, M.; Raşa, I. A nice representation for a link between Baskakov and Szász–Mirakjan–Durrmeyer operators and their Kantorovich variants. Results Math. 2019, 74, 9.
24. Raşa, I. Rényi entropy and Tsallis entropy associated with positive linear operators. arXiv 2014, arXiv:1412.4971.
25. Nikolov, G. Inequalities for ultraspherical polynomials. Proof of a conjecture of I. Raşa. J. Math. Anal. Appl. 2014, 418, 852–860.
26. Gavrea, I.; Ivan, M. On a conjecture concerning the sum of the squared Bernstein polynomials. Appl. Math. Comput. 2014, 241, 70–74.
27. Alzer, H. Remarks on a convexity theorem of Raşa. Results Math. 2020, 75, 29.
28. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2006.
29. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; Dover Publications, Inc.: New York, NY, USA, 1970.
30. Bărar, A.; Mocanu, G.; Raşa, I. Heun functions related to entropies. RACSAM 2019, 113, 819–830.
31. Raşa, I. Convexity properties of some entropies (II). Results Math. 2019, 74, 154.
32. Raşa, I. Convexity properties of some entropies. Results Math. 2018, 73, 105.
33. Cloud, M.J.; Drachman, B.C. Inequalities: With Applications to Engineering; Springer: Berlin/Heidelberg, Germany, 2006.
34. Abel, U.; Gawronski, W.; Neuschel, T. Complete monotonicity and zeros of sums of squared Baskakov functions. Appl. Math. Comput. 2015, 258, 130–137.
35. Gould, H.W. Combinatorial Identities—A Standardized Set of Tables Listing 500 Binomial Coefficient Summations; West Virginia University Press: Morgantown, WV, USA, 1972.
36. Altomare, F.; Campiti, M. Korovkin-Type Approximation Theory and Its Applications; Walter de Gruyter: Berlin, Germany; New York, NY, USA, 1994.
Figure 1. Graphics of $U_n$ for $n = 10, 20, 30, 40$.
Figure 2. Graphics of $S_n$ for $n = 1, 2, 3, 4, 5, 6$.
Table 1. Values of the coefficients $q_{n,k}$ (row $n$ lists $q_{n,0}, q_{n,1}, \ldots, q_{n,2n}$).
n = 1: 4/3, 2/3, 4/3
n = 2: 9/5, 9/10, 9/10, 9/10, 9/5
n = 3: 16/7, 8/7, 176/175, 164/175, 176/175, 8/7, 16/7
n = 4: 25/9, 25/18, 1025/882, 925/882, 905/882, 925/882, 1025/882, 25/18, 25/9
n = 5: 36/11, 18/11, 4/3, 13/11, 86/77, 23/21, 86/77, 13/11, 4/3, 18/11, 36/11
n = 6: 49/13, 49/26, 4753/3146, 4165/3146, 17395/14157, 66787/56628, 7329/6292, 66787/56628, 17395/14157, 4165/3146, 4753/3146, 49/26, 49/13