Article

Ordering Awad–Varma Entropy and Applications to Some Stochastic Models

by Răzvan-Cornel Sfetcu 1,*,†, Sorina-Cezarina Sfetcu 1,† and Vasile Preda 1,2,3,†

1 Faculty of Mathematics and Computer Science, University of Bucharest, Str. Academiei 14, 010014 Bucharest, Romania
2 “Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics, Calea 13 Septembrie, 050711 Bucharest, Romania
3 “Costin C. Kiriţescu” National Institute of Economic Research, Calea 13 Septembrie, 050711 Bucharest, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(3), 280; https://doi.org/10.3390/math9030280
Submission received: 30 December 2020 / Revised: 27 January 2021 / Accepted: 29 January 2021 / Published: 31 January 2021
(This article belongs to the Special Issue Stochastic Models and Methods with Applications)

Abstract: We consider a generalization of Awad–Shannon entropy, namely Awad–Varma entropy, introduce a stochastic order based on the Awad–Varma residual entropy and study some properties of this order, such as closure, reversed closure and preservation in some stochastic models (the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model).

1. Introduction

The concept of entropy has its roots in Communication Theory and was introduced by Shannon. More exactly, in this theory a data communication system consists of three elements: a source of data, a communication channel and a receiver. Shannon considered the following problem: based on the signal received through the channel, the receiver should be able to reconstruct the data generated by the source. Shannon took into account many methods of compressing, encoding and transmitting messages from a data source and showed that the entropy is an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. He considerably generalized and strengthened this result for noisy channels in his noisy channel coding theorem.
Entropy is also very useful in Information Theory, where the concept is directly analogous to entropy in Statistical Thermodynamics.
Entropy is also relevant in other areas of Mathematics, such as Combinatorics. The definition can be derived from a set of axioms establishing that entropy should measure how surprising the average outcome of a random variable is. For a continuous random variable, the analogous notion is differential entropy.
The notion of Shannon entropy has multiple generalizations (Tsallis entropy, Rényi entropy, Varma entropy, Kaniadakis entropy, cumulative entropy, relative entropy, weighted entropy etc.), which are useful in many technological areas like Physics, Communication Theory, Probability, Statistics, Economics etc. More exactly, there are specific areas where these entropies are used: optimal reactive power dispatch (see [1]), reinforcement learning (see [2]), income distribution (see [3,4]), non-coding human DNA (see [5]), earthquakes (see [6]), stock exchanges (see [7]), Markov chains (see [8,9,10]), biostatistics (see [11,12]), model selection (see [13,14]), statistical mechanics (see [15,16]) and the internet (see [17]). These concepts can also be linked with Bayesian control (see [18]).
The idea of Tsallis was to replace the classical logarithm used in Shannon entropy with another formula. Moreover, starting from Tsallis entropy, many physically meaningful generalizations have been introduced. Among these generalizations we mention superstatistics, introduced by Beck and Cohen (see [19]), and spectral statistics, introduced by Tsekouras and Tsallis (see [20]). Both of these entropic forms are based on Tsallis and Boltzmann–Gibbs statistics. It has been shown that spectral statistics generalize superstatistics, and it has been conjectured that they cover some additional cases.
Awad (see [21]) generalized Shannon entropy, obtaining the Awad–Shannon entropy. Using his ideas, some of the aforementioned entropies can be generalized. We apply this method to Varma entropy, defining the Awad–Varma entropy, and study some ordering properties of this entropy. Awad–Shannon entropy is intensively studied, especially because of its applications to microchannels (see [22,23,24,25,26]). At the same time, Varma entropy, introduced in [27], is currently of great interest (see [28,29,30,31]).
Awad–Shannon entropy has some advantages over other entropies. For example, when working with other entropies, completely different systems can have the same entropy; the entropy is not necessarily nonnegative; and the entropy of a continuous random variable is not a natural extension of the entropy of a discrete random variable, even though the two have analogous forms. None of these situations occurs in the case of Awad–Shannon entropy. For other properties of Awad–Shannon entropy, see [21].
Concerning the divergence measure derived from this new entropy, it coincides with the Kullback–Leibler divergence measure derived from Shannon entropy.
In this paper we work with a generalization of Awad–Shannon entropy, namely Awad–Varma entropy (the difference is that we work with Varma entropy instead of Shannon entropy). We define a stochastic order based on this entropy (more precisely, on the Awad–Varma residual entropy) and study closure and reversed closure properties of this order. Moreover, we show that this order is preserved in some stochastic models, such as the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model.
The rest of the paper is organized as follows. In Section 2, we present the main notions and notations used throughout the paper. In Section 3, we prove the main results concerning the stochastic order introduced on the Awad–Varma residual entropy. We emphasize that Theorem 1 is crucial for the whole paper. In Section 4, we prove the closure and reversed closure properties of the aforementioned order under some reliability transforms, mainly including linear transformations and parallel and series operations. In Section 5, we deal with applications of the preceding results to some stochastic models, namely the preservation of this order in the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model. Finally, we give a concrete example and draw conclusions.

2. Preliminaries

Let X be a nonnegative random variable with absolutely continuous cumulative distribution function $F_X$, survival function $\bar{F}_X \stackrel{\mathrm{def}}{=} 1 - F_X$ and probability density function $f_X$ (X represents a living thing or the lifetime of a device). We define the Shannon entropy of X by
$$H_X = E_Z\left[-\log f_X(Z)\right],$$
where "log" is the natural logarithm function and Z is a nonnegative random variable identically distributed like X.
In [21] (see also [32]), the so-called Awad–Shannon entropy was introduced. Let $\delta_X \stackrel{\mathrm{def}}{=} \sup_x f_X(x)$. We assume throughout the paper that this supremum lies in $(0, \infty)$ for any density function $f_X$. The Awad–Shannon entropy is given by
$$H_X^A = E_Z\left[-\log \frac{f_X(Z)}{\delta_X}\right].$$
One of the generalizations of Shannon entropy is Varma entropy, introduced in [27].
In this paper we consider the Awad–Varma entropy. Let $\alpha, \beta \in \mathbb{R}$ be such that $\beta \ge 1$ and $\beta - 1 < \alpha < \beta$. The Awad–Varma entropy is given by
$$H_{\alpha,\beta}^X = \frac{1}{\beta - \alpha} \log E_Z\left[\left(\frac{f_X(Z)}{\delta_X}\right)^{\alpha+\beta-2}\right].$$
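Since both definitions are plain expectations, they are easy to approximate numerically. The following minimal sketch (ours, not from the paper) estimates $H_X^A$ and $H_{\alpha,\beta}^X$ by Monte Carlo for an exponential distribution with rate $\varepsilon$, a convenient case because $\delta_X = \sup_x f_X(x) = \varepsilon$ is known in closed form; the rate and the pair $(\alpha, \beta)$ are illustrative choices.

```python
# Monte Carlo sketch (illustrative, assumed Exp(eps) example): estimates of
# Awad-Shannon entropy H^A_X and Awad-Varma entropy H^X_{alpha,beta}.
import numpy as np

rng = np.random.default_rng(0)
eps = 2.0                        # rate of the exponential (assumed)
alpha, beta = 0.7, 1.5           # beta >= 1 and beta - 1 < alpha < beta

z = rng.exponential(scale=1.0 / eps, size=200_000)   # samples of Z ~ X
f_z = eps * np.exp(-eps * z)     # density values f_X(Z)
delta = eps                      # sup_x f_X(x), attained at x = 0

H_awad_shannon = np.mean(-np.log(f_z / delta))
H_awad_varma = np.log(np.mean((f_z / delta) ** (alpha + beta - 2))) / (beta - alpha)

print(f"Awad-Shannon ~ {H_awad_shannon:.4f}")   # exact value: E[eps * Z] = 1
print(f"Awad-Varma   ~ {H_awad_varma:.4f}")     # exact: -log(alpha+beta-1)/(beta-alpha)
```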
In [33,34], the notion of Shannon residual entropy was introduced as a dynamic measure of uncertainty. More precisely, for an absolutely continuous nonnegative random variable X, the residual life of X is $X_t = [X - t \mid X > t]$ and the residual entropy of X at time t is
$$H_X(t) = H_{X_t} = E_Z\left[-\log \frac{f_X(Z)}{\bar{F}_X(t)} \,\Big|\, Z > t\right] \quad \text{for any } t \ge 0.$$
Practically, the residual entropy of X measures the uncertainty of the residual life $X_t$. The reader can find some interesting results concerning the residual entropy in [35,36,37,38,39,40,41,42] and in many other papers.
We can define the Awad–Varma residual entropy by
$$H_{\alpha,\beta}^X(t) = \frac{1}{\beta - \alpha} \log E_Z\left[\left(\frac{f_X(Z)}{\delta_X \bar{F}_X(t)}\right)^{\alpha+\beta-2} \,\Big|\, Z > t\right] \quad \text{for any } t \ge 0.$$
Clearly, $H_{\alpha,\beta}^X(0) = H_{\alpha,\beta}^X$.
We recall the definition of the quantile function:
$$Q_X(u) \stackrel{\mathrm{def}}{=} F_X^{-1}(u) = \inf\{x \in [0,\infty) \mid F_X(x) \ge u\} \quad \text{for any } u \in [0,1].$$
The quantile function $F_X^{-1}$ is often called the right-continuous inverse function of $F_X$ (or, in short, of X).
Differentiating both sides of the equality $F_X(Q_X(u)) = u$ with respect to u, we have $f_X(Q_X(u))\, Q_X'(u) = 1$. We denote $q_X(u) = Q_X'(u)$ for any $u \in [0,1]$, and we get $q_X(u)\, f_X(Q_X(u)) = 1$ for any $u \in [0,1]$.
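As a quick numerical sanity check of the identity $q_X(u) f_X(Q_X(u)) = 1$ (the exponential distribution below is our assumed example, not part of the paper):

```python
# Check q_X(u) * f_X(Q_X(u)) = 1 for X ~ Exp(eps), where
# Q_X(u) = -log(1 - u)/eps and q_X(u) = Q_X'(u) = 1/(eps * (1 - u)).
import numpy as np

eps = 2.0
u = np.linspace(0.01, 0.99, 9)
Q = -np.log1p(-u) / eps              # quantile function
q = 1.0 / (eps * (1.0 - u))          # quantile density Q_X'(u)
f_at_Q = eps * np.exp(-eps * Q)      # f_X(Q_X(u)) = eps * (1 - u)
print(np.allclose(q * f_at_Q, 1.0))  # True
```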
A quantile version of the Shannon residual entropy was introduced in [40], and this idea was generalized to the Rényi residual entropy in [41,42]. We continue this work for the Awad–Varma residual entropy, dealing with $H_{\alpha,\beta}^X(Q_X(u))$ for any $u \in [0,1]$.
Throughout the paper, U is a random variable uniformly distributed on $[0,1]$.
We have
$$\begin{aligned}
H_{\alpha,\beta}^X(Q_X(u)) &= \frac{1}{\beta-\alpha} \log E_Z\left[\left(\frac{f_X(Z)}{\delta_X(1-u)}\right)^{\alpha+\beta-2} \,\Big|\, Z > Q_X(u)\right] \\
&= \frac{1}{\beta-\alpha} \log E_U\left[\left(\frac{1}{q_X(U)\,\delta_X(1-u)}\right)^{\alpha+\beta-2} \,\Big|\, u < U < 1\right] \\
&= \frac{1}{\beta-\alpha} \log E_U\left[\frac{(q_X(U))^{2-(\alpha+\beta)}}{(\delta_X(1-u))^{\alpha+\beta-2}} \,\Big|\, u < U < 1\right] \\
&= \frac{1}{\beta-\alpha} \log E_U\left[\left(\frac{f_X(Q_X(U))}{\delta_X(1-u)}\right)^{\alpha+\beta-2} \,\Big|\, u < U < 1\right].
\end{aligned}$$
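The last expression is convenient computationally: it replaces conditioning on $Z > Q_X(u)$ by sampling U uniformly on $(u, 1)$. The sketch below (an illustration under an assumed exponential model) estimates $H_{\alpha,\beta}^X(Q_X(u))$ both ways and shows that the two estimates agree.

```python
# Two equivalent Monte Carlo estimates of H^X_{alpha,beta}(Q_X(u)) for
# X ~ Exp(eps): conditioning Z on Z > Q_X(u), and sampling U on (u, 1).
import numpy as np

rng = np.random.default_rng(1)
eps, alpha, beta, u = 2.0, 0.7, 1.5, 0.3
delta, p = eps, alpha + beta - 2     # delta_X = eps for Exp(eps)

z = rng.exponential(1.0 / eps, 500_000)
z = z[z > -np.log(1 - u) / eps]      # keep Z > Q_X(u)
H_z = np.log(np.mean((eps * np.exp(-eps * z) / (delta * (1 - u))) ** p)) / (beta - alpha)

v = rng.uniform(u, 1.0, 500_000)     # U conditioned on u < U < 1
H_u = np.log(np.mean((eps * (1 - v) / (delta * (1 - u))) ** p)) / (beta - alpha)

print(f"{H_z:.4f} vs {H_u:.4f}")     # the estimates agree up to Monte Carlo error
```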
The following lemma is very useful in this paper.
Lemma 1
(see [41]). Let $h : [0,1] \times [0,\infty) \to \mathbb{R}$ be a function with the property that
$$E_U[h(u,U) \mid u < U < 1] \ge 0 \quad \text{for any } u \in [0,1]$$
and let $g : [0,\infty) \to [0,\infty)$ be an increasing function. Then
$$E_U[h(u,U)\,g(U) \mid u < U < 1] \ge 0,$$
provided the conditional expectations exist.
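A small simulation makes the lemma plausible (the particular h and g below are our hypothetical choices): with $h(u, U) = U - E[U \mid u < U < 1]$, the conditional mean of h is zero, and multiplying by an increasing g keeps the conditional mean nonnegative.

```python
# Numeric illustration of Lemma 1 with assumed h and g.
import numpy as np

rng = np.random.default_rng(4)
u = 0.2
v = rng.uniform(u, 1.0, 1_000_000)   # U | u < U < 1
h = v - (1 + u) / 2                  # E[h | u < U < 1] = 0 >= 0
g = np.exp(v)                        # increasing g : [0, inf) -> [0, inf)
print(abs(np.mean(h)) < 1e-3)        # conditional mean of h is ~ 0
print(np.mean(h * g) > 0)            # conditional mean of h * g stays nonnegative
```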
Throughout the paper, if X and Y are absolutely continuous nonnegative random variables, we denote their distribution functions by $F_X$ and $F_Y$, their survival functions by $\bar{F}_X$ and $\bar{F}_Y$, and their density functions by $f_X$ and $f_Y$, respectively.

3. Main Results

Definition 1.
We say that X is smaller than Y in the Awad–Varma quantile entropy order (and denote $X \le_{AVQE} Y$) if and only if $H_{\alpha,\beta}^X(Q_X(u)) \le H_{\alpha,\beta}^Y(Q_Y(u))$ for any $u \in [0,1]$.
Theorem 1.
(i) The following assertions are equivalent:
1. $X \le_{AVQE} Y$.
2. $E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\dfrac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\dfrac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0$ for any $t \ge 0$.
(ii) The following assertions are equivalent:
1. $Y \le_{AVQE} X$.
2. $E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\dfrac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\dfrac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \le 0$ for any $t \ge 0$.
Proof. 
It is sufficient to prove (i), the proof of (ii) being analogous.
We have $X \le_{AVQE} Y$ if and only if
$$\frac{1}{\delta_X^{\alpha+\beta-2}} E_U\left[(f_X(Q_X(U)))^{\alpha+\beta-2} \,\Big|\, u < U < 1\right] \le \frac{1}{\delta_Y^{\alpha+\beta-2}} E_U\left[(f_Y(Q_Y(U)))^{\alpha+\beta-2} \,\Big|\, u < U < 1\right]$$
for any $u \in [0,1]$ (see Definition 1).
Considering $U = F_X(Z)$ in the preceding inequality, we have the equivalences (for any $u \in [0,1]$):
$$\begin{aligned}
X \le_{AVQE} Y &\Longleftrightarrow \frac{1}{\delta_X^{\alpha+\beta-2}} E_Z\left[(f_X(Z))^{\alpha+\beta-2} \,\Big|\, Z > F_X^{-1}(u)\right] \le \frac{1}{\delta_Y^{\alpha+\beta-2}} E_Z\left[(f_Y(Q_Y(F_X(Z))))^{\alpha+\beta-2} \,\Big|\, Z > F_X^{-1}(u)\right] \\
&\Longleftrightarrow \left(\frac{\delta_Y}{\delta_X}\right)^{\alpha+\beta-2} E_Z\left[(f_X(Z))^{\alpha+\beta-2} \,\Big|\, Z > F_X^{-1}(u)\right] \le E_Z\left[(f_Y(F_Y^{-1}(F_X(Z))))^{\alpha+\beta-2} \,\Big|\, Z > F_X^{-1}(u)\right] \\
&\Longleftrightarrow E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > F_X^{-1}(u)\right] \ge 0.
\end{aligned}$$
Putting $F_X^{-1}(u) = t$ in these equivalences, we get the conclusion. □
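The criterion of Theorem 1 can also be evaluated numerically. Anticipating the concrete example of Section 5, the sketch below (ours; the parameter values are assumptions for illustration) estimates the bracketed expectation for $X \sim \mathrm{Exp}(\varepsilon)$ and Y a truncated exponential on $[0, \gamma]$, for which $\delta_X/\delta_Y = 1 - e^{-\varepsilon\gamma}$; with $\alpha + \beta \ge 2$ the estimates are nonnegative, so $X \le_{AVQE} Y$.

```python
# Monte Carlo evaluation of the Theorem 1 criterion for an assumed example:
# X ~ Exp(eps), Y a truncated exponential on [0, gamma].
import numpy as np

rng = np.random.default_rng(2)
eps, gam, alpha, beta = 1.0, 2.0, 1.2, 1.5   # alpha + beta = 2.7 >= 2 (assumed)
c = 1.0 - np.exp(-eps * gam)                 # delta_X / delta_Y

def f_X(x):                                  # density of Exp(eps)
    return eps * np.exp(-eps * x)

def f_Y_matched(x):                          # f_Y(F_Y^{-1}(F_X(x)))
    return eps * (1.0 - (1.0 - np.exp(-eps * x)) * c) / c

z = rng.exponential(1.0 / eps, 1_000_000)
for t in (0.0, 0.5, 2.0):
    zt = z[z > t]                            # condition on Z > t
    bracket = (f_X(zt) / f_Y_matched(zt)) ** (2 - alpha - beta) \
              - c ** (2 - alpha - beta)
    val = np.mean(f_X(zt) ** (alpha + beta - 2) * bracket)
    print(f"t = {t}: criterion ~ {val:.5f} (nonnegative)")
```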
Definition 2
(see [43]). We say that:
1. X is smaller than Y in the dispersive order (and write $X \le_{disp} Y$) if
$$F_X^{-1}(\delta) - F_X^{-1}(\gamma) \le F_Y^{-1}(\delta) - F_Y^{-1}(\gamma) \quad \text{for any } 0 < \gamma < \delta < 1,$$
which is equivalent to
$$f_X(x) \ge f_Y(F_Y^{-1}(F_X(x))) \quad \text{for any } x \ge 0.$$
2. X is smaller than Y in the convex transform order (and write $X \le_c Y$) if the function
$$[0,\infty) \ni x \mapsto F_Y^{-1}(F_X(x)) \quad \text{is convex},$$
which is equivalent to the fact that the function
$$[0,\infty) \ni x \mapsto \frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))} \quad \text{is nonnegative and increasing}.$$
Theorem 2.
We suppose that $X \le_{disp} Y$ and $\delta_X \le \delta_Y$.
(i) If $\alpha + \beta \le 2$, then $X \le_{AVQE} Y$.
(ii) If $\alpha + \beta \ge 2$, then $Y \le_{AVQE} X$.
Proof. 
It is sufficient to prove (i), the proof of (ii) being analogous.
If $X \le_{disp} Y$, then $f_X(x) \ge f_Y(F_Y^{-1}(F_X(x)))$ for any $x \ge 0$. We use the inequality $\delta_X \le \delta_Y$, and the conclusion follows by Theorem 1. □
Theorem 3.
We suppose that $X \le_c Y$ and $\delta_X f_Y(0) \le \delta_Y f_X(0)$.
(i) If $\alpha + \beta \le 2$, then $X \le_{AVQE} Y$.
(ii) If $\alpha + \beta \ge 2$, then $Y \le_{AVQE} X$.
Proof. 
It is sufficient to prove (i), the proof of (ii) being analogous.
If $X \le_c Y$, then the function $[0,\infty) \ni x \mapsto \dfrac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}$ is nonnegative and increasing, hence
$$\frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))} \ge \frac{f_X(0)}{f_Y(0)} \quad \text{for any } x \ge 0.$$
The conclusion follows from Theorem 1, using the inequality $\delta_X f_Y(0) \le \delta_Y f_X(0)$. □

4. Closure Properties

In the sequel, we study the closure and reversed closure properties of the Awad–Varma quantile entropy order under some reliability transforms, mainly including linear transformations and parallel and series operations.
We take $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_n$ independent and identically distributed (i.i.d.) copies of X and of Y, respectively, and
$$X_{1:n} = \min\{X_1, \ldots, X_n\}, \quad X_{n:n} = \max\{X_1, \ldots, X_n\},$$
$$Y_{1:n} = \min\{Y_1, \ldots, Y_n\}, \quad Y_{n:n} = \max\{Y_1, \ldots, Y_n\}.$$
The same notations as above are used for the corresponding distribution functions, survival functions and density functions, i.e., $F_{X_{1:n}}$, $\bar{F}_{X_{1:n}}$, $f_{X_{1:n}}$ etc.
Theorem 4.
We suppose that $X \le_{AVQE} Y$ and $(\delta_{X_{n:n}} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y_{n:n}})^{2-(\alpha+\beta)}$. Then $X_{n:n} \le_{AVQE} Y_{n:n}$.
Proof. 
If $X \le_{AVQE} Y$ then, according to Theorem 1, we have
$$E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (1)$$
for any $t \ge 0$.
One can see that $f_{X_{n:n}}(x) = n (F_X(x))^{n-1} f_X(x)$. Hence
$$\frac{f_{X_{n:n}}(x)}{f_{Y_{n:n}}(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(x)))} = \frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}.$$
Because the function
$$[0,\infty) \ni x \mapsto \left(n (F_X(x))^{n-1}\right)^{\alpha+\beta-1} \quad \text{is nonnegative and increasing},$$
it follows, by the inequality $(\delta_{X_{n:n}} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y_{n:n}})^{2-(\alpha+\beta)}$, inequality (1) and Lemma 1, that
$$E_Z\left[(f_{X_{n:n}}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X_{n:n}}(Z)}{f_{Y_{n:n}}(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X_{n:n}}}{\delta_{Y_{n:n}}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad \text{for any } t \ge 0.$$
We use Theorem 1 and conclude that $X_{n:n} \le_{AVQE} Y_{n:n}$. □
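The key step in the proof is that passing to maxima leaves the density ratio unchanged, because $F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(x)) = F_Y^{-1}(F_X(x))$. A small numeric check of this identity (the two exponential distributions are our assumed illustration):

```python
# Check: f_{X_{n:n}} / f_{Y_{n:n}}(F_{Y_{n:n}}^{-1}(F_{X_{n:n}})) equals
# f_X / f_Y(F_Y^{-1}(F_X)) for X ~ Exp(eps1), Y ~ Exp(eps2) (assumed).
import numpy as np

eps1, eps2, n = 1.0, 2.0, 4
x = np.linspace(0.1, 3.0, 6)

F_X = 1 - np.exp(-eps1 * x)
f_X = eps1 * np.exp(-eps1 * x)

def FYinv(p):                                   # F_Y^{-1}
    return -np.log1p(-p) / eps2

def f_Y(y):
    return eps2 * np.exp(-eps2 * y)

num = n * F_X ** (n - 1) * f_X                  # f_{X_{n:n}}(x)
y = FYinv((F_X ** n) ** (1.0 / n))              # F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(x))
den = n * (1 - np.exp(-eps2 * y)) ** (n - 1) * f_Y(y)
print(np.allclose(num / den, f_X / f_Y(FYinv(F_X))))  # True
```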
In a similar manner as in Theorem 4, we get
Theorem 5.
We suppose that $X_{1:n} \le_{AVQE} Y_{1:n}$ and $(\delta_X \delta_{Y_{1:n}})^{2-(\alpha+\beta)} \le (\delta_{X_{1:n}} \delta_Y)^{2-(\alpha+\beta)}$. Then $X \le_{AVQE} Y$.
We take $X_1, X_2, \ldots$ and $Y_1, Y_2, \ldots$ sequences of independent and identically distributed copies of X and of Y, respectively. Let N be a positive integer random variable having the probability mass function $p_N(n) = P(N = n)$, $n = 1, 2, \ldots$. We assume that N is independent of the $X_i$'s and the $Y_i$'s. We take
$$X_{1:N} = \min\{X_1, \ldots, X_N\}, \quad X_{N:N} = \max\{X_1, \ldots, X_N\}$$
and
$$Y_{1:N} = \min\{Y_1, \ldots, Y_N\}, \quad Y_{N:N} = \max\{Y_1, \ldots, Y_N\}.$$
The following two theorems are extensions of Theorems 4 and 5 from a finite number n to a random variable N. We will prove only Theorem 7.
Theorem 6.
We suppose that $X \le_{AVQE} Y$ and $(\delta_{X_{N:N}} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y_{N:N}})^{2-(\alpha+\beta)}$. Then $X_{N:N} \le_{AVQE} Y_{N:N}$.
Theorem 7.
We suppose that $X_{1:N} \le_{AVQE} Y_{1:N}$ and $(\delta_X \delta_{Y_{1:N}})^{2-(\alpha+\beta)} \le (\delta_{X_{1:N}} \delta_Y)^{2-(\alpha+\beta)}$. Then $X \le_{AVQE} Y$.
Proof. 
If $X_{1:N} \le_{AVQE} Y_{1:N}$, then
$$E_Z\left[(f_{X_{1:N}}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X_{1:N}}(Z)}{f_{Y_{1:N}}(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X_{1:N}}}{\delta_{Y_{1:N}}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (2)$$
for any $t \ge 0$.
We can see that
$$f_{X_{1:N}}(x) = f_X(x) \sum_{n=1}^{\infty} n (\bar{F}_X(x))^{n-1} p_N(n)$$
and
$$f_{Y_{1:N}}(x) = f_Y(x) \sum_{n=1}^{\infty} n (\bar{F}_Y(x))^{n-1} p_N(n),$$
where $p_N(n) = P(N = n)$, $n = 1, 2, \ldots$ is the probability mass function of N.
It was proved in [43] that
$$F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(x)) = F_Y^{-1}(F_X(x)).$$
Hence
$$\frac{f_{X_{1:N}}(x)}{f_{Y_{1:N}}(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(x)))} = \frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}.$$
Because the function
$$[0,\infty) \ni x \mapsto \left(\sum_{n=1}^{\infty} n (\bar{F}_X(x))^{n-1} p_N(n)\right)^{1-(\alpha+\beta)} \quad \text{is nonnegative and increasing},$$
it follows, by the inequality $(\delta_X \delta_{Y_{1:N}})^{2-(\alpha+\beta)} \le (\delta_{X_{1:N}} \delta_Y)^{2-(\alpha+\beta)}$, inequality (2) and Lemma 1, that
$$E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad \text{for any } t \ge 0.$$
We use Theorem 1 and conclude that $X \le_{AVQE} Y$. □

5. Applications to Some Stochastic Models

For the remainder of the paper we present the preservation of the Awad–Varma quantile entropy order in the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model.

5.1. Proportional Hazard Rate Model and Proportional Reversed Hazard Rate Model

We work with the following proportional hazard rate model (see [43]). Let $\theta > 0$. We take $X^{(\theta)}$ and $Y^{(\theta)}$ two absolutely continuous nonnegative random variables with survival functions $(\bar{F}_X)^\theta$ and $(\bar{F}_Y)^\theta$, respectively.
Theorem 8.
1. If $0 < \theta \le 1$, $X \le_{AVQE} Y$ and $(\delta_{X^{(\theta)}} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y^{(\theta)}})^{2-(\alpha+\beta)}$, then $X^{(\theta)} \le_{AVQE} Y^{(\theta)}$.
2. If $\theta \ge 1$, $X^{(\theta)} \le_{AVQE} Y^{(\theta)}$ and $(\delta_X \delta_{Y^{(\theta)}})^{2-(\alpha+\beta)} \le (\delta_{X^{(\theta)}} \delta_Y)^{2-(\alpha+\beta)}$, then $X \le_{AVQE} Y$.
Proof. 
We use the same notations as above, namely, for $X^{(\theta)}$ and $Y^{(\theta)}$ we denote the distribution functions by $F_{X^{(\theta)}}$ and $F_{Y^{(\theta)}}$, the right-continuous inverse functions by $F_{X^{(\theta)}}^{-1}$ and $F_{Y^{(\theta)}}^{-1}$, and the density functions by $f_{X^{(\theta)}}$ and $f_{Y^{(\theta)}}$, respectively.
For any $x \ge 0$ we have
$$f_{X^{(\theta)}}(x) = \theta (\bar{F}_X(x))^{\theta-1} f_X(x),$$
$$F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(x)) = F_Y^{-1}(F_X(x)),$$
$$f_{Y^{(\theta)}}(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(x))) = \theta (\bar{F}_X(x))^{\theta-1} f_Y(F_Y^{-1}(F_X(x))).$$
Hence
$$\frac{f_{X^{(\theta)}}(x)}{f_{Y^{(\theta)}}(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(x)))} = \frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}.$$
Then, for any $t \ge 0$, we have
$$E_Z\left[(f_{X^{(\theta)}}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X^{(\theta)}}(Z)}{f_{Y^{(\theta)}}(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X^{(\theta)}}}{\delta_{Y^{(\theta)}}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right]$$
$$= E_Z\left[\left(\theta (\bar{F}_X(Z))^{\theta-1}\right)^{\alpha+\beta-1} (f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X^{(\theta)}}}{\delta_{Y^{(\theta)}}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right].$$
By Theorem 1, we have $X^{(\theta)} \le_{AVQE} Y^{(\theta)}$ if and only if
$$E_Z\left[(f_{X^{(\theta)}}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X^{(\theta)}}(Z)}{f_{Y^{(\theta)}}(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X^{(\theta)}}}{\delta_{Y^{(\theta)}}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0$$
for any $t \ge 0$, and $X \le_{AVQE} Y$ if and only if
$$E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad \text{for any } t \ge 0.$$
1. We suppose that $0 < \theta \le 1$ and $X \le_{AVQE} Y$. Hence the function
$$[0,\infty) \ni x \mapsto \left(\theta (\bar{F}_X(x))^{\theta-1}\right)^{\alpha+\beta-1} \quad \text{is nonnegative and increasing}$$
and
$$E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (3)$$
for any $t \ge 0$.
Using the inequality $(\delta_{X^{(\theta)}} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y^{(\theta)}})^{2-(\alpha+\beta)}$, inequality (3) and Lemma 1, we conclude that $X^{(\theta)} \le_{AVQE} Y^{(\theta)}$.
2. We suppose that $\theta \ge 1$ and $X^{(\theta)} \le_{AVQE} Y^{(\theta)}$. Hence the function
$$[0,\infty) \ni x \mapsto \left(\theta (\bar{F}_X(x))^{\theta-1}\right)^{1-(\alpha+\beta)} \quad \text{is nonnegative and increasing}$$
and
$$E_Z\left[(f_{X^{(\theta)}}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X^{(\theta)}}(Z)}{f_{Y^{(\theta)}}(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X^{(\theta)}}}{\delta_{Y^{(\theta)}}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (4)$$
for any $t \ge 0$.
Using the inequality $(\delta_X \delta_{Y^{(\theta)}})^{2-(\alpha+\beta)} \le (\delta_{X^{(\theta)}} \delta_Y)^{2-(\alpha+\beta)}$, inequality (4) and Lemma 1, we obtain the conclusion. □
We consider the following proportional reversed hazard rate model (see [43]). For any $\theta > 0$, let $X^{(\theta)}$ and $Y^{(\theta)}$ be two absolutely continuous nonnegative random variables with distribution functions $(F_X)^\theta$ and $(F_Y)^\theta$.
Theorem 9.
1. If $\theta \ge 1$, $X \le_{AVQE} Y$ and $(\delta_{X^{(\theta)}} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y^{(\theta)}})^{2-(\alpha+\beta)}$, then $X^{(\theta)} \le_{AVQE} Y^{(\theta)}$.
2. If $0 < \theta \le 1$, $X^{(\theta)} \le_{AVQE} Y^{(\theta)}$ and $(\delta_X \delta_{Y^{(\theta)}})^{2-(\alpha+\beta)} \le (\delta_{X^{(\theta)}} \delta_Y)^{2-(\alpha+\beta)}$, then $X \le_{AVQE} Y$.
Proof. 
The proof is analogous to the proof of Theorem 8. □

5.2. Proportional Odds Model

We deal with the following proportional odds model (see [44]). Let $\theta > 0$ and let X and Y be two absolutely continuous nonnegative random variables having the survival functions $\bar{F}_X$ and $\bar{F}_Y$ and the density functions $f_X$ and $f_Y$, respectively. The proportional odds random variables $X_p$ and $Y_p$ are defined by the survival functions
$$\bar{F}_{X_p}(x) = \frac{\theta \bar{F}_X(x)}{1 - (1-\theta)\bar{F}_X(x)}$$
and
$$\bar{F}_{Y_p}(x) = \frac{\theta \bar{F}_Y(x)}{1 - (1-\theta)\bar{F}_Y(x)},$$
respectively.
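On the survival scale, both definitions apply the same odds transform $h(u) = \theta u / (1 - (1-\theta)u)$, which is why $F_{Y_p}^{-1}(F_{X_p}(x)) = F_Y^{-1}(F_X(x))$ (used in the proof of Theorem 10 below). A short numeric check with assumed exponential baselines:

```python
# Proportional odds transform on the survival scale and the identity
# F_{Y_p}^{-1}(F_{X_p}(x)) = F_Y^{-1}(F_X(x)); baselines are assumed.
import numpy as np

theta, eps1, eps2 = 2.5, 1.0, 2.0

def h(u):                                       # odds transform
    return theta * u / (1 - (1 - theta) * u)

def h_inv(v):                                   # its inverse
    return v / (theta + (1 - theta) * v)

x = np.linspace(0.1, 3.0, 6)
Fbar_X = np.exp(-eps1 * x)                      # X ~ Exp(eps1)
Fbar_Xp = h(Fbar_X)                             # survival function of X_p

# F_{Y_p}^{-1}(F_{X_p}(x)): undo h, then invert Fbar_Y(y) = exp(-eps2 * y)
y = -np.log(h_inv(Fbar_Xp)) / eps2
print(np.allclose(y, -np.log(Fbar_X) / eps2))   # equals F_Y^{-1}(F_X(x)): True
```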
Theorem 10.
1. If $\theta \ge 1$, $X \le_{AVQE} Y$ and $(\delta_{X_p} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y_p})^{2-(\alpha+\beta)}$, then $X_p \le_{AVQE} Y_p$.
2. If $0 < \theta \le 1$, $X_p \le_{AVQE} Y_p$ and $(\delta_X \delta_{Y_p})^{2-(\alpha+\beta)} \le (\delta_{X_p} \delta_Y)^{2-(\alpha+\beta)}$, then $X \le_{AVQE} Y$.
Proof. 
For any $\theta > 0$, let $h : [0,1] \to \mathbb{R}$, $h(u) = \dfrac{\theta u}{1 - (1-\theta)u}$.
We have:
1. If $\theta \ge 1$, then h is an increasing concave function on $[0,1]$.
2. If $0 < \theta \le 1$, then h is an increasing convex function on $[0,1]$.
We can see that
$$\bar{F}_{X_p}(x) = h(\bar{F}_X(x)) \quad \text{and} \quad \bar{F}_{Y_p}(x) = h(\bar{F}_Y(x)).$$
Hence
$$f_{X_p}(x) = h'(\bar{F}_X(x)) f_X(x) \quad \text{and} \quad f_{Y_p}(x) = h'(\bar{F}_Y(x)) f_Y(x)$$
for any $x \ge 0$.
One can prove that $F_{Y_p}^{-1}(F_{X_p}(x)) = F_Y^{-1}(F_X(x))$. We obtain that
$$\frac{f_{X_p}(x)}{f_{Y_p}(F_{Y_p}^{-1}(F_{X_p}(x)))} = \frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}.$$
Hence, for any $t \ge 0$,
$$E_Z\left[(f_{X_p}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X_p}(Z)}{f_{Y_p}(F_{Y_p}^{-1}(F_{X_p}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X_p}}{\delta_{Y_p}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right]$$
$$= E_Z\left[\left(h'(\bar{F}_X(Z))\right)^{\alpha+\beta-1} (f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X_p}}{\delta_{Y_p}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right].$$
We have (see Theorem 1): $X \le_{AVQE} Y$ if and only if
$$E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad \text{for any } t \ge 0,$$
and $X_p \le_{AVQE} Y_p$ if and only if
$$E_Z\left[(f_{X_p}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X_p}(Z)}{f_{Y_p}(F_{Y_p}^{-1}(F_{X_p}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X_p}}{\delta_{Y_p}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad \text{for any } t \ge 0.$$
1. If $X \le_{AVQE} Y$ and $\theta \ge 1$, then
$$E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (5)$$
for any $t \ge 0$, and the function
$$[0,\infty) \ni x \mapsto \left(h'(\bar{F}_X(x))\right)^{\alpha+\beta-1} \quad \text{is nonnegative and increasing};$$
hence, by the inequality $(\delta_{X_p} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{Y_p})^{2-(\alpha+\beta)}$, inequality (5) and Lemma 1, we get $X_p \le_{AVQE} Y_p$.
2. If $X_p \le_{AVQE} Y_p$ and $0 < \theta \le 1$, then
$$E_Z\left[(f_{X_p}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{X_p}(Z)}{f_{Y_p}(F_{Y_p}^{-1}(F_{X_p}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{X_p}}{\delta_{Y_p}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (6)$$
for any $t \ge 0$, and the function
$$[0,\infty) \ni x \mapsto \left(\frac{1}{h'(\bar{F}_X(x))}\right)^{\alpha+\beta-1} \quad \text{is nonnegative and increasing};$$
hence, by the inequality $(\delta_X \delta_{Y_p})^{2-(\alpha+\beta)} \le (\delta_{X_p} \delta_Y)^{2-(\alpha+\beta)}$, inequality (6) and Lemma 1, we get $X \le_{AVQE} Y$. □

5.3. Record Values Model

In the sequel we are concerned with the preservation of the Awad–Varma quantile entropy order in the record values model.
We consider $\{X_i \mid i \ge 1\}$ a sequence of independent and identically distributed random variables from the random variable X with survival function $\bar{F}_X$ and density function $f_X$. The nth record times are the random variables $T_n^X$ defined via $T_1^X = 1$ and $T_{n+1}^X = \min\{j > T_n^X \mid X_j > X_{T_n^X}\}$, $n \ge 1$.
We denote $R_n^X \stackrel{\mathrm{def}}{=} X_{T_n^X}$ and call them the nth record values. For more information about record values, the reader can consult [45].
Concerning $R_n^X$ we have, for any $x \ge 0$:
$$f_{R_n^X}(x) = \frac{1}{\Gamma(n)} (\Lambda_X(x))^{n-1} f_X(x)$$
and
$$\bar{F}_{R_n^X}(x) = \bar{F}_X(x) \sum_{j=0}^{n-1} \frac{(\Lambda_X(x))^j}{j!} = \bar{\Gamma}_n(\Lambda_X(x)),$$
where $\bar{\Gamma}_n$ is the survival function of a Gamma random variable with shape parameter n and scale parameter 1, and $\Lambda_X(x) = -\log \bar{F}_X(x)$ is the cumulative failure rate function of X.
Taking $\{Y_i \mid i \ge 1\}$ a sequence of independent and identically distributed random variables from the random variable Y, we have similar formulas for $R_n^Y$.
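These formulas are easy to verify numerically. The sketch below (assumed baseline $X \sim \mathrm{Exp}(\varepsilon)$, so $\Lambda_X(x) = \varepsilon x$) checks the partial-sum form of $\bar{F}_{R_n^X}$ against the Gamma survival function and checks the density against a numerical derivative.

```python
# Record-value formulas for an assumed Exp(eps) baseline.
import numpy as np
from math import factorial
from scipy.stats import gamma

eps, n = 1.5, 3                        # assumed rate and record index
x = np.linspace(0.01, 6.0, 2000)
Lam = eps * x                          # Lambda_X(x) = -log Fbar_X(x)
Fbar_X = np.exp(-Lam)

# Survival function of R_n in its two equivalent forms
Fbar_Rn = Fbar_X * sum(Lam ** j / factorial(j) for j in range(n))
print(np.allclose(Fbar_Rn, gamma.sf(Lam, a=n)))               # True

# Density f_{R_n}(x) = Lambda^{n-1} f_X(x) / Gamma(n), Gamma(n) = (n-1)!
f_Rn = Lam ** (n - 1) * (eps * Fbar_X) / factorial(n - 1)
d = -np.gradient(Fbar_Rn, x, edge_order=2)                    # -d/dx Fbar_Rn
print(np.allclose(f_Rn, d, atol=1e-3))                        # True (numerically)
```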
Theorem 11.
Let $m, n \in \mathbb{N} \stackrel{\mathrm{def}}{=} \{1, 2, \ldots\}$.
1. If $X \le_{AVQE} Y$ and $(\delta_{R_n^X} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{R_n^Y})^{2-(\alpha+\beta)}$, then $R_n^X \le_{AVQE} R_n^Y$.
2. We suppose that $n > m \ge 1$. If $R_m^X \le_{AVQE} R_m^Y$ and $(\delta_{R_n^X} \delta_{R_m^Y})^{2-(\alpha+\beta)} \le (\delta_{R_m^X} \delta_{R_n^Y})^{2-(\alpha+\beta)}$, then $R_n^X \le_{AVQE} R_n^Y$.
Proof. 
1. We suppose that $X \le_{AVQE} Y$. Then
$$E_Z\left[(f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_X}{\delta_Y}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (7)$$
for any $t \ge 0$.
We can see that
$$\frac{f_{R_n^X}(x)}{f_{R_n^Y}(F_{R_n^Y}^{-1}(F_{R_n^X}(x)))} = \frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}.$$
Then, for any $t \ge 0$,
$$E_Z\left[(f_{R_n^X}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{R_n^X}(Z)}{f_{R_n^Y}(F_{R_n^Y}^{-1}(F_{R_n^X}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{R_n^X}}{\delta_{R_n^Y}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right]$$
$$= E_Z\left[\left(\frac{1}{\Gamma(n)}(\Lambda_X(Z))^{n-1}\right)^{\alpha+\beta-1} (f_X(Z))^{\alpha+\beta-2}\left(\left(\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{R_n^X}}{\delta_{R_n^Y}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right].$$
Because the function
$$[0,\infty) \ni x \mapsto \left(\frac{1}{\Gamma(n)}(\Lambda_X(x))^{n-1}\right)^{\alpha+\beta-1} \quad \text{is nonnegative and increasing},$$
we obtain, by the inequality $(\delta_{R_n^X} \delta_Y)^{2-(\alpha+\beta)} \le (\delta_X \delta_{R_n^Y})^{2-(\alpha+\beta)}$, inequality (7) and Lemma 1, that
$$E_Z\left[(f_{R_n^X}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{R_n^X}(Z)}{f_{R_n^Y}(F_{R_n^Y}^{-1}(F_{R_n^X}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{R_n^X}}{\delta_{R_n^Y}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad \text{for any } t \ge 0,$$
i.e., $R_n^X \le_{AVQE} R_n^Y$.
2. Let $n > m \ge 1$. If $R_m^X \le_{AVQE} R_m^Y$, then
$$E_Z\left[(f_{R_m^X}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{R_m^X}(Z)}{f_{R_m^Y}(F_{R_m^Y}^{-1}(F_{R_m^X}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{R_m^X}}{\delta_{R_m^Y}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad (8)$$
for any $t \ge 0$.
Using the previous formulas we get
$$\frac{f_{R_m^X}(x)}{f_{R_m^Y}(F_{R_m^Y}^{-1}(F_{R_m^X}(x)))} = \frac{f_{R_n^X}(x)}{f_{R_n^Y}(F_{R_n^Y}^{-1}(F_{R_n^X}(x)))} = \frac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}$$
and
$$\frac{f_{R_n^X}(x)}{f_{R_m^X}(x)} = \frac{\Gamma(m)}{\Gamma(n)} (\Lambda_X(x))^{n-m}.$$
Hence, for any $t \ge 0$,
$$E_Z\left[(f_{R_n^X}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{R_n^X}(Z)}{f_{R_n^Y}(F_{R_n^Y}^{-1}(F_{R_n^X}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{R_n^X}}{\delta_{R_n^Y}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right]$$
$$= E_Z\left[\left(\frac{\Gamma(m)}{\Gamma(n)}(\Lambda_X(Z))^{n-m}\right)^{\alpha+\beta-1} (f_{R_m^X}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{R_m^X}(Z)}{f_{R_m^Y}(F_{R_m^Y}^{-1}(F_{R_m^X}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{R_n^X}}{\delta_{R_n^Y}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right].$$
Because the function
$$[0,\infty) \ni x \mapsto \left(\frac{\Gamma(m)}{\Gamma(n)}(\Lambda_X(x))^{n-m}\right)^{\alpha+\beta-1} \quad \text{is nonnegative and increasing},$$
using the inequality $(\delta_{R_n^X} \delta_{R_m^Y})^{2-(\alpha+\beta)} \le (\delta_{R_m^X} \delta_{R_n^Y})^{2-(\alpha+\beta)}$, inequality (8) and Lemma 1, we obtain that
$$E_Z\left[(f_{R_n^X}(Z))^{\alpha+\beta-2}\left(\left(\frac{f_{R_n^X}(Z)}{f_{R_n^Y}(F_{R_n^Y}^{-1}(F_{R_n^X}(Z)))}\right)^{2-(\alpha+\beta)} - \left(\frac{\delta_{R_n^X}}{\delta_{R_n^Y}}\right)^{2-(\alpha+\beta)}\right) \,\Big|\, Z > t\right] \ge 0 \quad \text{for any } t \ge 0,$$
i.e., $R_n^X \le_{AVQE} R_n^Y$. □
Concrete Example. The exponential distribution can be applied to describe the time to failure of a device.
We consider $\alpha, \beta \in \mathbb{R}$ such that $\beta \ge 1$ and $\beta - 1 < \alpha < \beta$. Let $\varepsilon > 0$ and $\gamma > 0$. Let X be an exponential absolutely continuous nonnegative random variable with density function $f_X$ given by
$$f_X(x) = \begin{cases} \varepsilon e^{-\varepsilon x}, & \text{if } x \ge 0, \\ 0, & \text{otherwise,} \end{cases}$$
and Y a truncated exponential absolutely continuous nonnegative random variable with density function $f_Y$ given by
$$f_Y(y) = \begin{cases} \dfrac{\varepsilon e^{-\varepsilon y}}{1 - e^{-\varepsilon \gamma}}, & \text{if } 0 \le y \le \gamma, \\ 0, & \text{otherwise.} \end{cases}$$
We have
$$H_{\alpha,\beta}^X(Q_X(u)) = \frac{1}{\beta-\alpha} \log E_U\left[\left(\frac{1-U}{1-u}\right)^{\alpha+\beta-2} \,\Big|\, u < U < 1\right]$$
and
$$H_{\alpha,\beta}^Y(Q_Y(u)) = \frac{1}{\beta-\alpha} \log E_U\left[\left(\frac{1-U(1-e^{-\varepsilon\gamma})}{1-u}\right)^{\alpha+\beta-2} \,\Big|\, u < U < 1\right].$$
If $\alpha + \beta \ge 2$, then $X \le_{AVQE} Y$.
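This claim is straightforward to reproduce numerically (the sketch below is only a check of the statement above, with illustrative parameter values):

```python
# Numerical check: for alpha + beta >= 2, H^X(Q_X(u)) <= H^Y(Q_Y(u)).
import numpy as np

rng = np.random.default_rng(3)
eps, gam, alpha, beta = 1.0, 2.0, 1.2, 1.5      # alpha + beta = 2.7 >= 2
p = alpha + beta - 2
c = 1.0 - np.exp(-eps * gam)                    # 1 - e^{-eps * gamma}

for u in (0.1, 0.5, 0.9):
    v = rng.uniform(u, 1.0, 400_000)            # U | u < U < 1
    H_X = np.log(np.mean(((1 - v) / (1 - u)) ** p)) / (beta - alpha)
    H_Y = np.log(np.mean(((1 - c * v) / (1 - u)) ** p)) / (beta - alpha)
    print(f"u = {u}: H^X = {H_X:.4f} <= H^Y = {H_Y:.4f}")
```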
System reliability can be improved by placing components in parallel: the system fails only when all the parallel components fail.
Applying the results proved above, we have $X_{n:n} \le_{AVQE} Y_{n:n}$ (by Theorem 4) and $X_{N:N} \le_{AVQE} Y_{N:N}$ (by Theorem 6) without any supplementary computations.
The study of record values is linked with the study of order statistics. Moreover, it is known that things that can be done relatively easily for order statistics are also feasible for records, while things that are hard for order statistics are, unfortunately, equally or more difficult for records.
Due to Theorem 11, we have $R_n^X \le_{AVQE} R_n^Y$.
If $\alpha + \beta \le 2$, then $Y \le_{AVQE} X$ etc.

6. Conclusions

Awad–Shannon entropy overcomes several drawbacks that appear in the case of other entropies (for example, in the case of Awad–Shannon entropy: completely different systems have different entropies, the entropy is nonnegative, the entropy of a continuous random variable is a natural extension of the entropy of a discrete random variable etc.). We considered a generalization of this entropy, namely the Awad–Varma entropy, and investigated some stochastic order properties of it.
More exactly, we studied closure and reversed closure properties of the Awad–Varma quantile entropy order under several reliability transformations, like linear transformations and parallel and series operations. Moreover, we applied the main result to some stochastic models, namely the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model, showing that the order defined via the Awad–Varma residual entropy is preserved in the aforementioned models.
We intend to continue this work, considering other generalizations of Awad–Shannon entropy.

Author Contributions

Conceptualization, R.-C.S., S.-C.S. and V.P.; validation, R.-C.S., S.-C.S. and V.P.; investigation, R.-C.S., S.-C.S. and V.P.; writing—original draft preparation, R.-C.S., S.-C.S. and V.P.; writing—review and editing, R.-C.S., S.-C.S. and V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors are very much indebted to the anonymous referees and to the editors for their most valuable comments and suggestions which improved the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Muhammad, Y.; Khan, R.; Raja, M.A.Z.; Ullah, F.; Chaudhary, N.I.; He, Y. Design of fractional swarm intelligent computing with entropy evolution for optimal power flow problems. IEEE Access 2020, 8, 111401–111419. [Google Scholar] [CrossRef]
  2. Wang, H.; Zariphopoulou, T.; Zhou, X.Y. Reinforcement learning in continuous time and space: A stochastic control approach. J. Mach. Learn. Res. 2020, 21, 1–34. [Google Scholar]
  3. Preda, V.; Dedu, S.; Gheorghe, C. New classes of Lorenz curves by maximizing Tsallis entropy under mean and Gini equality and inequality constraints. Physica A 2015, 436, 925–932. [Google Scholar] [CrossRef]
  4. Soares, A.D.; Moura, N.J., Jr.; Ribeiro, M.B. Tsallis statistics in the income distribution of Brazil. Chaos Solitons Fractals 2016, 88, 158–171. [Google Scholar] [CrossRef] [Green Version]
  5. Oikonomou, N.; Provata, A.; Tirnakli, U. Nonextensive statistical approach to non-coding human DNA. Physica A 2008, 387, 2653–2659. [Google Scholar] [CrossRef]
  6. Darooneh, A.H.; Dadashinia, C. Analysis of the spatial and temporal distributions between successive earthquakes: Nonextensive statistical mechanics viewpoint. Physica A 2008, 387, 3647–3654. [Google Scholar] [CrossRef]
  7. Jiang, Z.Q.; Chen, W.; Zhou, W.X. Scaling in the distribution of intertrade durations of Chinese stocks. Physica A 2008, 387, 5818–5825. [Google Scholar] [CrossRef] [Green Version]
  8. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy, divergence rates and weighted divergence rates for Markov chains. I: The alpha-gamma and beta-gamma case. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2017, 18, 293–301. [Google Scholar]
  9. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy and divergence rates for Markov chains. II: The weighted case. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2018, 19, 3–10. [Google Scholar]
  10. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy and divergence rates for Markov chains. III: The Cressie and Read case and applications. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2018, 19, 413–421. [Google Scholar]
  11. Di Crescenzo, A.; Di Gironimo, P.; Kayal, S. Analysis of the past lifetime in a replacement model through stochastic comparisons and differential entropy. Mathematics 2020, 8, 1203. [Google Scholar] [CrossRef]
  12. Gkelsinis, T.; Karagrigoriou, A. Theoretical aspects on measures of directed information with simulations. Mathematics 2020, 8, 587. [Google Scholar] [CrossRef] [Green Version]
  13. Toma, A. Model selection criteria using divergences. Entropy 2014, 16, 2686–2698. [Google Scholar] [CrossRef] [Green Version]
  14. Toma, A.; Karagrigoriou, A.; Trentou, P. Robust model selection criteria based on pseudodistances. Entropy 2020, 22, 304. [Google Scholar] [CrossRef] [Green Version]
  15. Preda, V.; Dedu, S.; Sheraz, M. New measure selection for Hunt-Devolder semi-Markov regime switching interest rate models. Physica A 2014, 407, 350–359. [Google Scholar] [CrossRef]
  16. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  17. Abe, S.; Suzuki, N. Itineration of the internet over nonequilibrium stationary states in Tsallis statistics. Phys. Rev. E 2003, 67, 016106. [Google Scholar] [CrossRef] [Green Version]
  18. Imani, M.; Ghoreishi, S.F.; Braga-Neto, U.M. Bayesian control of large MDPs with unknown dynamics in data-poor environments. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS 2018), Montreal, QC, Canada, 8–13 December 2018; pp. 8157–8167. [Google Scholar]
  19. Beck, C.; Cohen, E.G.D. Superstatistics. Physica A 2003, 322, 267–275. [Google Scholar] [CrossRef] [Green Version]
  20. Tsekouras, G.A.; Tsallis, C. Generalized entropy arising from a distribution of q indices. Phys. Rev. E 2005, 71, 046144. [Google Scholar] [CrossRef] [Green Version]
  21. Awad, A.M. A statistical information measure. Dirasat 1987, 12, 7–20. [Google Scholar]
  22. Awad, M.M. A review of entropy generation in microchannels. Adv. Mech. Eng. 2015, 7, 1–32. [Google Scholar] [CrossRef]
  23. Dey, S.; Maiti, S.S.; Ahmad, M. Comparison of different entropy measures. Pak. J. Statist. 2016, 32, 97–108. [Google Scholar]
  24. Hung, Y.-M. A comparative study of viscous dissipation effect on entropy generation in single-phase liquid flow in microchannels. Int. J. Therm. Sci. 2009, 48, 1026–1035. [Google Scholar] [CrossRef]
  25. Li, J.; Kleinstreuer, C. Entropy generation analysis for nanofluid flow in microchannels. J. Heat Transf. 2010, 132, 122401. [Google Scholar] [CrossRef]
  26. Mah, W.H.; Hung, Y.M.; Guo, M. Entropy generation of viscous dissipative nanofluid flow in microchannels. Int. J. Heat Mass. Tran. 2012, 55, 4169–4182. [Google Scholar] [CrossRef]
  27. Varma, R.S. Generalization of Rényi’s entropy of order α. J. Math. Sci. 1966, 1, 34–48. [Google Scholar]
  28. Liu, C.; Chang, C.; Chang, Z. Maximum Varma entropy distribution with conditional value at risk constraints. Entropy 2020, 22, 663. [Google Scholar] [CrossRef]
  29. Malhotra, G.; Srivastava, R.; Taneja, H.C. Calibration of the risk-neutral density function by maximization of a two-parameter entropy. Physica A 2019, 513, 45–54. [Google Scholar] [CrossRef]
  30. Preda, V.; Băncescu, I. Evolution of non-stationary processes and some maximum entropy principles. Analele Univ. de Vest din Timişoara Ser. Mat.-Inform. 2018, 2, 43–70. [Google Scholar] [CrossRef]
  31. Sati, M.M.; Gupta, N. On partial monotonic behaviour of Varma entropy and its application in coding theory. J. Indian Stat. Assoc. 2015, 53, 135–152. [Google Scholar]
  32. Awad, A.M.; Alawneh, A.J. Application of entropy to a life-time model. IMA J. Math. Control Inform. 1987, 4, 143–147. [Google Scholar] [CrossRef]
  33. Ebrahimi, N. How to measure uncertainty in the residual lifetime distribution. Sankhyā A 1996, 58, 48–56. [Google Scholar]
  34. Ebrahimi, N.; Pellerey, F. New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab. 1995, 32, 202–211. [Google Scholar] [CrossRef]
  35. Di Crescenzo, A.; Longobardi, M. Entropy-based measure of uncertainty in past lifetime distributions. J. Appl. Probab. 2002, 39, 434–440. [Google Scholar] [CrossRef]
  36. Di Crescenzo, A.; Longobardi, M. A measure of discrimination between past lifetime distributions. Statist. Probab. Lett. 2004, 67, 173–182. [Google Scholar] [CrossRef]
  37. Ebrahimi, N.; Kirmani, S.N.U.A. A measure of discrimination between two residual lifetime distributions and its applications. Ann. Inst. Statist. Math. 1996, 48, 257–265. [Google Scholar] [CrossRef]
  38. Kumar, V.; Taneja, H.C. Some characterization results on generalized cumulative residual entropy measure. Statist. Probab. Lett. 2011, 81, 1072–1077. [Google Scholar] [CrossRef]
  39. Nanda, A.K.; Paul, P. Some results on generalized past entropy. J. Statist. Plann. Inference 2006, 136, 3659–3674. [Google Scholar] [CrossRef]
  40. Sunoj, S.M.; Sankaran, P.G. Quantile based entropy function. Statist. Probab. Lett. 2012, 82, 1049–1053. [Google Scholar] [CrossRef]
  41. Nanda, A.K.; Sankaran, P.G.; Sunoj, S.M. Rényi’s residual entropy: A quantile approach. Statist. Probab. Lett. 2014, 85, 114–121. [Google Scholar] [CrossRef]
  42. Yan, L.; Kang, D.-t. Some new results on the Rényi quantile entropy ordering. Stat. Methodol. 2016, 33, 55–70. [Google Scholar] [CrossRef]
  43. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Science Business Media LLC: New York, NY, USA, 2007. [Google Scholar]
  44. Navarro, J.; del Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Statist. Plann. Inference 2010, 140, 310–322. [Google Scholar] [CrossRef]
  45. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. Records; John Wiley & Sons: New York, NY, USA, 1998. [Google Scholar]