1. Introduction
The concept of entropy has its roots in Communication Theory and was introduced by Shannon. More precisely, in this theory a data communication system consists of three elements: a source of data, a communication channel and a receiver. Shannon considered the following problem: based on the signal received through the channel, the receiver should be able to reconstruct the data generated by the source. Shannon took into account many methods to compress, encode and transmit messages from a data source and showed that entropy is an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. He generalized and strengthened this result considerably for noisy channels in his noisy-channel coding theorem.
Another domain in which entropy is very useful is Information Theory, where the concept is directly analogous to the entropy of Statistical Thermodynamics.
Entropy is also relevant in other areas of Mathematics, such as Combinatorics. The definition can be derived from a set of axioms which establish that entropy should be a measure of how surprising the average outcome of a random variable is. For a continuous random variable, the analogous notion is differential entropy.
The notion of Shannon entropy has multiple generalizations (Tsallis entropy, Rényi entropy, Varma entropy, Kaniadakis entropy, cumulative entropy, relative entropy, weighted entropy etc.), which are useful in many scientific areas like Physics, Communication Theory, Probability, Statistics, Economics etc. More exactly, there are specific areas where these entropies are used: optimal reactive power dispatch (see [1]), reinforcement learning (see [2]), income distribution (see [3,4]), non-coding human DNA (see [5]), earthquakes (see [6]), stock exchanges (see [7]), Markov chains (see [8,9,10]), biostatistics (see [11,12]), model selection (see [13,14]), statistical mechanics (see [15,16]), the internet (see [17]). These concepts can also be linked with Bayesian control (see [18]).
The idea of Tsallis was to consider another formula instead of the classical logarithm used in Shannon entropy. Moreover, starting from Tsallis entropy, many physically meaningful generalizations have been introduced. Among these generalizations we mention the following: superstatistics, introduced by Beck and Cohen (see [19]), and spectral statistics, introduced by Tsekouras and Tsallis (see [20]). At the basis of both these entropic forms are the Tsallis and Boltzmann–Gibbs statistics. It has been shown that spectral statistics generalize superstatistics, and it has been conjectured that they cover some additional cases.
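Recall the standard form of this replacement in the Tsallis literature (added here for orientation): the q-logarithm is
\[ \ln_q x = \frac{x^{1-q}-1}{1-q}, \qquad q \neq 1, \qquad \lim_{q \to 1} \ln_q x = \log x, \]
so that the Tsallis entropy of a discrete distribution \((p_1, \dots, p_n)\) becomes
\[ S_q = -\sum_{i=1}^{n} p_i^q \ln_q p_i = \frac{1}{q-1}\Bigl(1 - \sum_{i=1}^{n} p_i^q\Bigr), \]
which recovers Shannon entropy as \(q \to 1\).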
Awad (see [21]) generalized Shannon entropy, obtaining Awad–Shannon entropy. Using his ideas, some of the aforementioned entropies can be generalized. We apply this method to Varma entropy, defining Awad–Varma entropy, and study some properties concerning the ordering of this entropy. Awad–Shannon entropy is intensively studied, especially because of its applications in microchannels (see [22,23,24,25,26]). At the same time, Varma entropy, introduced in [27], is currently a very active topic (see [28,29,30,31]).
Awad–Shannon entropy has some advantages over other entropies. For example, when working with other entropies, completely different systems can have the same entropy, the entropy is not necessarily nonnegative, and the entropy of a continuous random variable is not a natural extension of the entropy of a discrete random variable, despite their analogous forms. None of these situations occurs in the case of Awad–Shannon entropy. For other properties of Awad–Shannon entropy, see [21].
As for the divergence measure derived from this new entropy, it coincides with the Kullback–Leibler divergence measure derived from Shannon entropy.
In this paper we work with a generalization of Awad–Shannon entropy, namely Awad–Varma entropy (the difference is that we work with Varma entropy instead of Shannon entropy). We define a stochastic order based on this entropy (more precisely, on Awad–Varma residual entropy) and study the closure and reversed closure properties of this order. Moreover, we show that this order is preserved in some stochastic models, such as the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model.
The rest of the paper is organized as follows. In Section 2, we present the main notions and notations used throughout the paper. In Section 3, we prove the main results concerning the stochastic order introduced on Awad–Varma residual entropy; we stress that Theorem 1 is crucial for the whole paper. In Section 4, we prove the closure and reversed closure properties of the aforementioned order under some reliability transforms, mainly including linear transformations and parallel and series operations. In Section 5, we deal with applications of the preceding results to some stochastic models, namely the preservation of this order in the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model. Finally, we give a concrete example and draw conclusions.
2. Preliminaries
Let X be a nonnegative random variable with absolutely continuous cumulative distribution function F, survival function \(\bar{F} = 1 - F\) and probability density function f (X represents a living thing or the lifetime of a device). We define the Shannon entropy of X by
\[ H(X) = -\int_0^{\infty} f(x) \log f(x)\, dx = \mathbb{E}\bigl(-\log f(Z)\bigr), \]
where “log” is the natural logarithm function and Z is a nonnegative random variable identically distributed like X.
In [21] (see also [32]), the so-called Awad–Shannon entropy was introduced. Let \(\delta = \sup_{x \in [0,\infty)} f(x)\). We assume in the whole paper that this supremum is in \((0, \infty)\) for any density function f. Awad–Shannon entropy is given by
\[ H^{A}(X) = \mathbb{E}\Bigl(-\log \frac{f(Z)}{\delta}\Bigr) = -\int_0^{\infty} f(x) \log \frac{f(x)}{\delta}\, dx. \]
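As a quick illustration (a worked example added here, not taken from the original): if X is exponentially distributed with rate \(\lambda\), i.e., \(f(x) = \lambda e^{-\lambda x}\), then \(\delta = f(0) = \lambda\) and
\[ H^{A}(X) = \mathbb{E}\bigl(-\log e^{-\lambda Z}\bigr) = \lambda\, \mathbb{E}(Z) = 1, \]
whereas the Shannon entropy \(H(X) = 1 - \log \lambda\) is negative for \(\lambda > e\). This illustrates the nonnegativity advantage mentioned in the Introduction.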
One of the generalizations of Shannon entropy is Varma entropy, introduced in [27].
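For the reader's convenience, we recall its standard form: for \(\beta - 1 < \alpha < \beta\), \(\beta \ge 1\),
\[ H_{\alpha,\beta}(X) = \frac{1}{\beta-\alpha} \log \int_0^{\infty} f^{\alpha+\beta-1}(x)\, dx; \]
for \(\beta = 1\) it reduces to Rényi entropy and, letting moreover \(\alpha \to 1\), to Shannon entropy.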
In this paper we consider Awad–Varma entropy. Let \(\alpha, \beta \in \mathbb{R}\) such that \(\beta - 1 < \alpha < \beta\) and \(\beta \ge 1\). Awad–Varma entropy is given by
\[ H^{A}_{\alpha,\beta}(X) = \frac{1}{\beta-\alpha} \log \mathbb{E}\Biggl[\biggl(\frac{f(Z)}{\delta}\biggr)^{\alpha+\beta-2}\Biggr]. \]
In [33,34], the notion of Shannon residual entropy was introduced as a dynamic measure of uncertainty. More precisely, for an absolutely continuous nonnegative random variable X, the residual life of X is \(X_t = [X - t \mid X > t]\) and the residual entropy of X at time t is
\[ H(X;t) = -\int_t^{\infty} \frac{f(x)}{\bar{F}(t)} \log \frac{f(x)}{\bar{F}(t)}\, dx. \]
Practically, the residual entropy of X measures the uncertainty of the residual life of X. The reader can find some interesting results concerning the residual entropy in [35,36,37,38,39,40,41,42] and in many other papers.
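A standard computation (added for illustration): for X exponentially distributed with rate \(\lambda\), memorylessness gives \(X_t \stackrel{d}{=} X\) for every t, hence
\[ H(X;t) = 1 - \log \lambda \quad \text{for all } t \ge 0, \]
i.e., the residual entropy of the exponential law is constant in time.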
We can define Awad–Varma residual entropy by
\[ H^{A}_{\alpha,\beta}(X;t) = \frac{1}{\beta-\alpha} \log \Biggl[\frac{1}{\delta^{\alpha+\beta-2}} \int_t^{\infty} \biggl(\frac{f(x)}{\bar{F}(t)}\biggr)^{\alpha+\beta-1} dx\Biggr]. \]
Clearly \(H^{A}_{\alpha,\beta}(X;0) = H^{A}_{\alpha,\beta}(X)\).
We recall the definition of the quantile function:
\[ Q(u) = F^{-1}(u) = \inf\{x \in [0,\infty) : F(x) \ge u\}, \qquad u \in [0,1]. \]
Many times the quantile function is called the right-continuous inverse function of F (or, in short, of X).
Differentiating with respect to u both sides of the equality \(F(Q(u)) = u\), we have \(f(Q(u)) Q'(u) = 1\). We denote \(q(u) = Q'(u)\) for any \(u \in (0,1)\) and we get \(f(Q(u)) q(u) = 1\) for any \(u \in (0,1)\).
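For instance (an illustration we add): for \(f(x) = \lambda e^{-\lambda x}\) we have \(Q(u) = -\frac{1}{\lambda}\log(1-u)\), hence \(q(u) = \frac{1}{\lambda(1-u)}\) and \(f(Q(u)) q(u) = \lambda(1-u) \cdot \frac{1}{\lambda(1-u)} = 1\), as claimed.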
A quantile version of Shannon residual entropy was introduced in [40], and this idea was generalized to Rényi residual entropy in [41,42]. We continue this work for Awad–Varma residual entropy, dealing with \(H^{A}_{\alpha,\beta}(X; Q(u))\) for any \(u \in (0,1)\).
In the whole paper, U is a random variable uniformly distributed on \([0,1]\).
The following lemma is very useful in this paper.
Lemma 1 (see [41]).
Let \(\varphi\) be a function with the property that
and let \(\psi\) be an increasing function. Then
provided the conditional expectations exist.

Throughout the paper, if X and Y are absolutely continuous nonnegative random variables, we denote their distribution functions by F, respectively, G, their survival functions by \(\bar{F}\), respectively, \(\bar{G}\), and their density functions by f, respectively, g.
3. Main Results
Definition 1. We say that X is smaller than Y in the Awad–Varma quantile entropy order (and denote \(X \le_{AV} Y\)) if and only if \(H^{A}_{\alpha,\beta}(X; F^{-1}(u)) \le H^{A}_{\alpha,\beta}(Y; G^{-1}(u))\) for any \(u \in (0,1)\).
Theorem 1. (i) The following assertions are equivalent:
.
for any .
(ii) The following assertions are equivalent:
.
for any .
Proof. It is sufficient to prove (i), the proof of (ii) being analogous.
We have
if and only if
Considering
in the preceding inequality we have the equivalences (for any
):
Putting in these equivalences we get the conclusion. □
Definition 2. X is smaller than Y in the dispersive order (and write \(X \le_{disp} Y\)) if
\[ F^{-1}(v) - F^{-1}(u) \le G^{-1}(v) - G^{-1}(u) \quad \text{for all } 0 < u \le v < 1, \]
which is equivalent to
\[ g(G^{-1}(u)) \le f(F^{-1}(u)) \quad \text{for all } u \in (0,1). \]
X is smaller than Y in the convex transform order (and write \(X \le_{c} Y\)) if the function
\[ x \mapsto G^{-1}(F(x)) \]
is convex, which is equivalent to the fact that the function
\[ u \mapsto \frac{f(F^{-1}(u))}{g(G^{-1}(u))} \]
is increasing on \((0,1)\).

Theorem 2. We suppose that and .
(i) If , then .
(ii) If , then .
Proof. It is sufficient to prove (i), the proof of (ii) being analogous.
If , then for any . We use the inequality and the conclusion follows by Theorem 1. □
Theorem 3. We suppose that and .
(i) If , then .
(ii) If , then .
Proof. It is sufficient to prove (i), the proof of (ii) being analogous.
If
, then the function
is nonnegative increasing, hence
The conclusion follows from Theorem 1, using the inequality .
□
4. Closure Properties
In the sequel, we study the closure and reversed closure properties of the Awad–Varma quantile entropy order under some reliability transforms, mainly including linear transformations and parallel and series operations.
We take \(X_1, X_2, \dots, X_n\) and \(Y_1, Y_2, \dots, Y_n\) independent and identically distributed (i.i.d.) copies of X, respectively, of Y, and consider the lifetimes of the corresponding parallel and series systems,
\[ X_{(n)} = \max\{X_1, \dots, X_n\}, \qquad X_{(1)} = \min\{X_1, \dots, X_n\}, \]
and, analogously, \(Y_{(n)}\) and \(Y_{(1)}\). The same notations as above are used for distribution functions, survival functions and density functions, i.e., \(F_{(n)}\), \(\bar{F}_{(n)}\), \(f_{(n)}\) etc.
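For i.i.d. components these distributions are standard; we recall them for convenience:
\[ F_{(n)}(x) = F(x)^n \quad \text{for the parallel system}, \qquad \bar{F}_{(1)}(x) = \bar{F}(x)^n \quad \text{for the series system}. \]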
Theorem 4. We suppose that and . Then .
Proof. If
, according to Theorem 1, we have
for any
.
One can see that .
Because the function
it follows, by inequality
, inequality (1) and Lemma 1, that
We use Theorem 1 and conclude that . □
In a manner similar to Theorem 4 we get:
Theorem 5. We suppose that and . Then .
We take \(\{X_i\}_{i \ge 1}\) and \(\{Y_i\}_{i \ge 1}\) sequences of independent and identically distributed copies of X, respectively, of Y. Let N be a positive integer random variable having the probability mass function \(p_N(n) = P(N = n)\), \(n = 1, 2, \dots\). We assume that N is independent of \(\{X_i\}_{i \ge 1}\) and \(\{Y_i\}_{i \ge 1}\). We take
\[ X_{(N)} = \max\{X_1, \dots, X_N\}, \qquad X_{(1,N)} = \min\{X_1, \dots, X_N\} \]
and, analogously, \(Y_{(N)}\) and \(Y_{(1,N)}\).
The following two theorems are extensions of Theorems 4 and 5 from a finite number n to a random variable N. We will prove only Theorem 7.
Theorem 6. We suppose that and . Then .
Theorem 7. We suppose that and . Then .
Proof. If
, then
for any
.
We can see that
and
where
,
is the probability mass function of
N.
It was proved in [43] that
Because the function
it follows, by inequality
, inequality (2) and Lemma 1, that
We use Theorem 1 and conclude that . □
5. Applications to Some Stochastic Models
For the remainder of the paper we present the preservation of the Awad–Varma quantile entropy order in the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model.
5.1. Proportional Hazard Rate Model and Proportional Reversed Hazard Rate Model
We work with the following proportional hazard rate model (see [43]). Let \(\theta > 0\). We take \(X^{(\theta)}\) and \(Y^{(\theta)}\) two absolutely continuous nonnegative random variables with survival functions \(\bar{F}^{\theta}\), respectively, \(\bar{G}^{\theta}\).
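For instance (an illustration we add): if X is exponential with rate \(\lambda\), then \(\bar{F}^{\theta}(x) = e^{-\theta\lambda x}\), so \(X^{(\theta)}\) is again exponential, with rate \(\theta\lambda\); the hazard rate of the baseline variable is multiplied by \(\theta\), which explains the name of the model.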
Theorem 8. If , and , then .
If , and , then .
Proof. We use the same notations as above, namely, for \(X^{(\theta)}\) and \(Y^{(\theta)}\), we denote the distribution functions by \(F_{\theta}\), respectively, \(G_{\theta}\), the right-continuous inverse functions by \(F_{\theta}^{-1}\), respectively, \(G_{\theta}^{-1}\), and the density functions by \(f_{\theta}\), respectively, \(g_{\theta}\).
By Theorem 1 we have
if and only if
for any
and
if and only if
We suppose that
and
. Hence the function
and
for any
.
Using the inequality , inequality (3) and Lemma 1, we conclude that .
We suppose that
and
. Hence the function
and
for any
.
Using the inequality , inequality (4) and Lemma 1, we obtain the conclusion. □
We consider the following proportional reversed hazard rate model (see [43]). For any \(\theta > 0\), let \(X^{(\theta)}\) and \(Y^{(\theta)}\) be two absolutely continuous nonnegative random variables with distribution functions \(F^{\theta}\) and \(G^{\theta}\).
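For instance (an illustration we add): if \(F(x) = 1 - e^{-\lambda x}\), then \(F^{\theta}(x) = (1 - e^{-\lambda x})^{\theta}\) is the exponentiated (generalized) exponential distribution; here the reversed hazard rate \(f/F\) of the baseline variable is multiplied by \(\theta\).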
Theorem 9. If , and , then .
If , and , then .
Proof. The proof is analogous with the proof of Theorem 8. □
5.2. Proportional Odds Model
We deal with the following proportional odds model (see [44]). Let \(\theta > 0\) and let X and Y be two absolutely continuous nonnegative random variables having the survival functions \(\bar{F}\), respectively, \(\bar{G}\) and the density functions f, respectively, g. The proportional odds random variables \(X_p\) and \(Y_p\) are defined by the survival functions
\[ \bar{F}_p(x) = \frac{\theta \bar{F}(x)}{1 - (1-\theta)\bar{F}(x)}, \]
respectively,
\[ \bar{G}_p(x) = \frac{\theta \bar{G}(x)}{1 - (1-\theta)\bar{G}(x)}. \]
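The name of the model comes from the following property (a remark we add for clarity, under the specification written above): the survival odds are scaled by \(\theta\), i.e.,
\[ \frac{\bar{F}_p(x)}{F_p(x)} = \theta\, \frac{\bar{F}(x)}{F(x)}. \]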
Theorem 10. If , and , then .
If , and , then .
Proof. For any \(u \in [0,1]\), let \(h(u) = \frac{\theta u}{1 - (1-\theta)u}\), so that \(\bar{F}_p = h \circ \bar{F}\) and \(\bar{G}_p = h \circ \bar{G}\).
We have:
If \(\theta \ge 1\), then h is an increasing concave function on \([0,1]\).
If \(0 < \theta \le 1\), then h is an increasing convex function on \([0,1]\).
Hence
and
for any
.
One can prove that .
We have (see Theorem 1):
if and only if
and
if and only if
If
and
, then
for any
and the function
hence, by inequality
, inequality (5) and Lemma 1, we get
If
and
, then
for any
and the function
hence, by inequality
, inequality (6) and Lemma 1, we get
□
5.3. Record Values Model
In the sequel we are concerned with the preservation of the Awad–Varma quantile entropy order in the record values model.
We consider \(\{X_i \mid i \ge 1\}\) a sequence of independent and identically distributed random variables from the random variable X, with survival function \(\bar{F}\) and density function f. The nth record times are the random variables \(T_n\) defined via \(T_1 = 1\) and \(T_{n+1} = \min\{j \mid j > T_n,\ X_j > X_{T_n}\}\), \(n \ge 1\).
We denote \(R_n = X_{T_n}\) and call them the nth record values. For more information about record values, the reader can consult [45].
Concerning \(R_n\) we have, for any \(x \ge 0\):
\[ \bar{F}_{R_n}(x) = \bar{\Gamma}_n(\Lambda(x)) \]
and
\[ f_{R_n}(x) = \frac{(\Lambda(x))^{n-1}}{(n-1)!}\, f(x), \]
where \(\bar{\Gamma}_n\) is the survival function of a Gamma random variable with the shape parameter n and the scale parameter 1 and \(\Lambda = -\log \bar{F}\) is the cumulative failure rate function of X.
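A useful particular case (added for illustration): if X is exponential with rate \(\lambda\), then \(\Lambda(x) = \lambda x\), and the formulas above show that \(R_n\) follows a Gamma distribution with shape n and scale \(1/\lambda\), i.e., the nth record value is distributed as a sum of n i.i.d. copies of X.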
Taking \(\{Y_i \mid i \ge 1\}\) a sequence of independent and identically distributed random variables from the random variable Y, we have similar formulas for the corresponding record values \(R'_n\).
Theorem 11. Let .
If and then .
We suppose that . If and , then .
Proof. We suppose that
. Then
for any
.
Because the function
we obtain, by inequality
, inequality (7) and Lemma 1, that
i.e.,
.
Let
. If
, then
for any
.
Using previous formulas we get
and
Because the function
using the inequality
, inequality (8) and Lemma 1, we obtain that
i.e.,
. □
Concrete Example. The exponential distribution can be applied to describe the time to failure of a device.
We consider \(\alpha, \beta\) such that \(\beta - 1 < \alpha < \beta\) and \(\beta \ge 1\). Let \(\lambda > 0\) and \(b > 0\). Let X be an exponential absolutely continuous nonnegative random variable with density function f given by
\[ f(x) = \lambda e^{-\lambda x}, \quad x \ge 0, \]
and Y a truncated exponential absolutely continuous nonnegative random variable with density function g given by
\[ g(x) = \frac{\lambda e^{-\lambda x}}{1 - e^{-\lambda b}}, \quad 0 \le x \le b. \]
If , then .
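A minimal numerical sketch of this example (our code, for illustration only; the rate lam and the truncation point b are assumed values, not taken from the paper), computing the supremum delta of each density and, for simplicity, the Awad–Shannon entropies of Section 2 by numerical integration:

import numpy as np
from scipy.integrate import quad

lam, b = 1.0, 2.0  # illustrative rate and truncation point (assumed values)

def f(x):
    # density of X ~ Exp(lam)
    return lam * np.exp(-lam * x)

def g(x):
    # density of Y, the exponential truncated to [0, b]
    return lam * np.exp(-lam * x) / (1.0 - np.exp(-lam * b))

# both densities are decreasing, so their supremum is attained at 0
delta_f, delta_g = f(0.0), g(0.0)

# Awad-Shannon entropy: H^A = -integral of f(x) * log(f(x)/delta) dx
H_A_X, _ = quad(lambda x: -f(x) * np.log(f(x) / delta_f), 0.0, np.inf)
H_A_Y, _ = quad(lambda x: -g(x) * np.log(g(x) / delta_g), 0.0, b)

print(H_A_X)  # equals 1 for the exponential law, whatever lam is
print(H_A_Y)  # strictly smaller: truncation reduces the uncertainty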
By placing components in parallel, system reliability can be improved: the only time the system fails is when all the parallel components fail.
Applying the results that we proved above we have: (by Theorem 4) and (by Theorem 6) without any supplementary computations.
The study of record values is linked with the study of order statistics. Moreover, it is known that things that can be done relatively easily for order statistics are also feasible for records, while things that are hard for order statistics are, unfortunately, equally or more difficult for records.
Due to Theorem 11 we have .
If , then etc.
6. Conclusions
Awad–Shannon entropy overcomes several drawbacks that appear in the case of other entropies (indeed, for Awad–Shannon entropy: completely different systems have different entropies, the entropy is nonnegative, the entropy of a continuous random variable is a natural extension of the entropy of a discrete random variable etc.). We considered a generalization of this entropy, namely Awad–Varma entropy, and investigated some stochastic order properties of it.
More exactly, we studied the closure and reversed closure properties of the Awad–Varma quantile entropy order under several reliability transformations, like linear transformations and parallel and series operations. Moreover, we applied the main result to some stochastic models, like the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model, showing that the order defined on Awad–Varma residual entropy is preserved in the aforementioned models.
We intend to continue this work, considering other generalizations of Awad–Shannon entropy.