
Weighted Cumulative Past Extropy and Its Inference

by Mohammad Reza Kazemi 1, Majid Hashempour 2 and Maria Longobardi 3,*

1 Department of Statistics, Faculty of Science, Fasa University, Fasa IS74, Iran
2 Department of Statistics, University of Hormozgan, Bandar Abbas 79177, Iran
3 Dipartimento di Biologia, Università degli Studi di Napoli Federico II, 80126 Naples, Italy
* Author to whom correspondence should be addressed.
Entropy 2022, 24(10), 1444; https://doi.org/10.3390/e24101444
Submission received: 14 September 2022 / Revised: 4 October 2022 / Accepted: 8 October 2022 / Published: 11 October 2022
(This article belongs to the Special Issue Measures of Information II)

Abstract: This paper introduces and studies a new generalization of cumulative past extropy, called weighted cumulative past extropy (WCPJ), for continuous random variables. We show that if the WCPJs of the last order statistic are equal for two distributions, then these two distributions are equal. We examine some properties of the WCPJ, and a number of inequalities involving bounds for the WCPJ are obtained. Studies related to reliability theory are discussed. Finally, the empirical version of the WCPJ is considered, and a test statistic is proposed. The critical cutoff points of the test statistic are computed numerically. Then, the power of this test is compared to that of a number of alternative approaches. In some situations, its power is superior to the rest, and in some other settings, it is somewhat weaker. The simulation study shows that the use of this test statistic can be satisfactory, with due attention to its simple form and the rich information content behind it.

1. Introduction

In recent years, there has been strong interest in the measurement of the uncertainty of probability distributions, which is quantified by entropy. The probabilistic concept of entropy was developed by Shannon [1]. For an absolutely continuous random variable X, the Shannon entropy is defined as

H(X) = -E[\log f(X)] = -\int_{-\infty}^{+\infty} f(x) \log f(x)\,dx,

where \log denotes the natural logarithm and f(x) is the probability density function (pdf) of the random variable X. Several applications of entropy in information theory, economics, communication theory, and physics are well developed in the literature (see Cover and Thomas [2]). Belis and Guiasu [3] and Guiasu [4] considered a weighted entropy measure,

H^w(X) = -E[X \log f(X)] = -\int_{-\infty}^{+\infty} x f(x) \log f(x)\,dx, (1)

where the weight x in (1), by assigning greater importance to larger values of X, emphasizes the occurrence of the event X = x. Reference [5] stated the necessity of the existence of weighted measures of uncertainty. In the Shannon entropy H(X), only the pdf of the random variable X is involved. Moreover, it is known that this information measure is shift-independent, in the sense that the information content of a random variable X equals that of X + b. However, some applied fields, such as neurobiology, call for shift-dependent rather than shift-independent measures. Further research was conducted to generalize the concept of entropy; for example, by replacing the pdf f(x) with the survival function \bar{F}(x), ref. [6] introduced the cumulative residual entropy (CRE) as

\mathcal{E}(X) = -\int_0^{+\infty} \bar{F}(x) \log \bar{F}(x)\,dx.
Moreover, recent generalizations include a more tractable measure of information, which is the dual of entropy, called extropy, introduced by [7], which has the following form:

J(X) = -\frac{1}{2}\int_0^{+\infty} f^2(x)\,dx = -\frac{1}{2}\int_0^1 f(F^{-1}(u))\,du. (3)
Subsequently, a number of researchers have worked to identify the behavior of this concept in more complex schemes. In fact, both entropy and extropy provide the information content associated with the random variable X. As stated before, extropy is a measure of uncertainty introduced as the dual of entropy; its most important advantage is that it is easy to compute. References [8,9,10] characterized the behavior of extropy and its generalizations in record values, order statistics schemes, and mixed systems, respectively. Moreover, the extropy properties of ranked set sampling were given in [11]. In addition to the work of [12], in which the concept of extropy was generalized to cumulative residual extropy, reference [13] investigated the properties of this measure in both theoretical and applied aspects based on a version of ranked set sampling. Moreover, Vaselabadi et al. [14], Buono and Longobardi [15], and Kazemi et al. [16] considered varextropy, Deng extropy, and fractional Deng extropy as generalizations of extropy. Furthermore, references [17,18,19] considered dynamic weighted extropy, the extropy of past lifetime distributions, and the extropy of k-records, respectively. For the problem of estimation and inference of extropy, see, for example, [20,21]. An alternative measure of the uncertainty of a random variable X, called cumulative residual extropy (CRJ), was introduced by [12] as

\mathcal{E}J(X) = -\frac{1}{2}\int_0^{+\infty} \bar{F}^2(x)\,dx = -\frac{1}{2}\int_0^1 \frac{u^2}{f(F^{-1}(1-u))}\,du.
As we see, this measure is a generalization of the extropy of [7], in which the survival function \bar{F}(x) plays the role of the pdf f(x) in (3). Since the pdf f(x) is the derivative of the cumulative distribution function (cdf) F(x), the cdf is more regular and often more convenient to work with; for this reason, some researchers prefer the CRJ to the extropy. In the following, the basic idea is to replace the pdf with the cdf in the extropy definition (3). As the dual measure for a random variable X, we can define the cumulative past extropy (CPJ) as

\bar{\mathcal{E}}J(X) = -\frac{1}{2}\int_0^{+\infty} F^2(x)\,dx = -\frac{1}{2}\int_0^1 \frac{u^2}{f(F^{-1}(u))}\,du.
The measure \bar{\mathcal{E}}J(X) is suitable for measuring information when uncertainty is related to the past, and the empirical version of the CPJ is more easily obtained than the empirical version of the extropy itself. So, one can explore applications of the CPJ in providing inferential methods. It is reasonable to define the CPJ only for random variables with bounded support, since this measure equals -\infty for all random variables with unbounded support. The rest of this paper is organized as follows. In Section 2, we introduce the weighted cumulative past extropy, analyze some of its properties, and present some examples. Section 3 considers the WCPJ of order statistics; furthermore, we show that when the WCPJs of the last order statistic are equal for two distributions, these two distributions are equal. In Section 4, some bounds and inequalities are obtained. Section 5 focuses on certain connections to reliability theory. Finally, in Section 6, an empirical version of the WCPJ is provided, and a hypothesis testing problem is carried out for a goodness-of-fit test of the standard uniform distribution.
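For intuition, the CPJ of a bounded distribution is easy to evaluate numerically. The following sketch (our illustration, not part of the paper) approximates \bar{\mathcal{E}}J(X) by trapezoidal quadrature; for X ~ U(0,1), where F(x) = x, the definition gives -1/6.

```python
import numpy as np

def cpj(cdf, lo, hi, n=200_001):
    """Cumulative past extropy: -(1/2) * integral of F(x)^2 over [lo, hi]."""
    x = np.linspace(lo, hi, n)
    y = cdf(x) ** 2
    dx = (hi - lo) / (n - 1)
    # trapezoid rule on a uniform grid
    return -0.5 * (y.sum() - 0.5 * (y[0] + y[-1])) * dx

print(cpj(lambda x: x, 0.0, 1.0))  # close to -1/6 for the standard uniform
```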

2. Weighted Cumulative Past Extropy

In this section, we introduce a new information measure called the weighted cumulative past extropy (WCPJ), which generalizes the cumulative past extropy. The main objective of the study is to extend the weighted extropy to random variables with continuous distributions.
Definition 1.
Let X be a nonnegative absolutely continuous random variable having cdf F(x). We define the WCPJ of X by

\bar{\mathcal{E}}^w J(X) = -\frac{1}{2}\int_0^{+\infty} x F^2(x)\,dx = -\frac{1}{2}\int_0^1 \frac{u^2 F^{-1}(u)}{f(F^{-1}(u))}\,du.
The following equality will be used in the sequel:

\bar{\mathcal{E}}^w J(X) = -\frac{1}{2}\int_0^{+\infty}\int_y^{+\infty} F_X^2(x)\,dx\,dy.
As stated in the introduction, similarly to the CPJ, the value of the WCPJ is -\infty for all random variables with unbounded support. So, our definition of the WCPJ should be restricted to random variables with bounded support. Let X be a nonnegative random variable with bounded support S; then, the WCPJ of X is defined as

\bar{\mathcal{E}}^w J(X) = -\frac{1}{2}\int_0^{\sup S} x F^2(x)\,dx. (9)
Now, we evaluate the WCPJ of some distributions.
Example 1.
Let X have the power distribution with cdf F(x) = (x/\beta)^\theta, x \in (0, \beta), \theta > 0. Then,

\bar{\mathcal{E}}J(X) = -\frac{\beta}{2(2\theta+1)},

and

\bar{\mathcal{E}}^w J(X) = -\frac{\beta^2}{4(\theta+1)}.

We conclude that \bar{\mathcal{E}}^w J(X) = \frac{2\theta+1}{2(\theta+1)}\,\beta\,\bar{\mathcal{E}}J(X). Since both quantities are negative, if \beta < \frac{2(\theta+1)}{2\theta+1}, then \bar{\mathcal{E}}^w J(X) > \bar{\mathcal{E}}J(X), and if \beta > \frac{2(\theta+1)}{2\theta+1}, then \bar{\mathcal{E}}^w J(X) < \bar{\mathcal{E}}J(X).
Example 2.
Let X be a uniform random variable, X \sim U(a, b). Then,

\bar{\mathcal{E}}J(X) = -\frac{b-a}{6},

and

\bar{\mathcal{E}}^w J(X) = -\frac{(b-a)(a+3b)}{24}.

We conclude that

\bar{\mathcal{E}}^w J(X) = \frac{a+3b}{4}\,\bar{\mathcal{E}}J(X) = \frac{E(X)+b}{2}\,\bar{\mathcal{E}}J(X).

Since both quantities are negative, if E(X) < 2 - b, then \bar{\mathcal{E}}^w J(X) > \bar{\mathcal{E}}J(X), and if E(X) > 2 - b, then \bar{\mathcal{E}}^w J(X) < \bar{\mathcal{E}}J(X).
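The closed forms of Examples 1 and 2 can be checked by quadrature. This is our own numerical sanity check, not part of the paper; the grid size and parameter values are arbitrary.

```python
import numpy as np

def wcpj(cdf, lo, hi, n=400_001):
    """Weighted cumulative past extropy: -(1/2) * integral of x F(x)^2."""
    x = np.linspace(lo, hi, n)
    y = x * cdf(x) ** 2
    dx = (hi - lo) / (n - 1)
    # trapezoid rule on a uniform grid
    return -0.5 * (y.sum() - 0.5 * (y[0] + y[-1])) * dx

# Power distribution F(x) = (x/beta)^theta on (0, beta)
beta, theta = 2.0, 1.5
print(wcpj(lambda x: (x / beta) ** theta, 0.0, beta))  # near -beta^2/(4(theta+1)) = -0.4

# Uniform on (a, b)
a, b = 1.0, 3.0
print(wcpj(lambda x: (x - a) / (b - a), a, b))         # near -(b-a)(a+3b)/24 = -5/6
```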
In the following, the effect of the linear transformation on the WCPJ will be studied.
Proposition 1.
Let X be a nonnegative random variable. If Y = aX + b, with a > 0 and b \geq 0, then

\bar{\mathcal{E}}^w J(Y) = a^2\,\bar{\mathcal{E}}^w J(X) + ab\,\bar{\mathcal{E}}J(X).
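Proposition 1 can be illustrated with the uniform closed forms of Example 2 (our check, not from the paper): take X ~ U(0,1) and a = 2, b = 1, so that Y = 2X + 1 ~ U(1,3).

```python
# Closed forms from Example 2: U(0,1) has WCPJ = -1/8 and CPJ = -1/6;
# U(1,3) has WCPJ = -(b-a)(a+3b)/24 = -5/6.
a_coef, b_shift = 2.0, 1.0
wcpj_X, cpj_X = -1.0 / 8.0, -1.0 / 6.0
wcpj_Y = -(3.0 - 1.0) * (1.0 + 3.0 * 3.0) / 24.0

lhs = wcpj_Y
rhs = a_coef ** 2 * wcpj_X + a_coef * b_shift * cpj_X  # Proposition 1
print(lhs, rhs)  # the two sides agree
```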
Theorem 1.
Let X be a nonnegative continuous random variable with bounded support S. Then, we have

(i)
\bar{\mathcal{E}}^w J(X) = -\frac{1}{2} E[\underline{H}_F(X)];

(ii)
\bar{\mathcal{E}}^w J(X) = -\frac{1}{2}\left( H_F - E[\bar{H}_F(X)] \right),

where \bar{H}_F(t) = \int_0^t x F(x)\,dx, \underline{H}_F(t) = \int_t^{\sup S} x F(x)\,dx, and H_F = \int_0^{\sup S} x F(x)\,dx.
Proof. 
From Equation (9) and by Fubini's theorem, we have

\bar{\mathcal{E}}^w J(X) = -\frac{1}{2}\int_0^{\sup S} x F^2(x)\,dx = -\frac{1}{2}\int_0^{\sup S} x F(x) \int_0^x f(t)\,dt\,dx = -\frac{1}{2}\int_0^{\sup S} f(t) \int_t^{\sup S} x F(x)\,dx\,dt = -\frac{1}{2} E\left[ \int_X^{\sup S} x F(x)\,dx \right], (16)

which proves part (i). On the other hand,

\int_t^{\sup S} x F(x)\,dx = \int_0^{\sup S} x F(x)\,dx - \int_0^t x F(x)\,dx. (17)

The proof of part (ii) then follows from the substitution of (17) in (16). □
In the following, we give an upper bound for the WCPJ in terms of the extropy.
Theorem 2.
Let J(X) be the extropy of the random variable X, and suppose f(x) \leq 1 for all x; then,

\bar{\mathcal{E}}^w J(X) \leq -D^* \exp\{2 J(X)\},

where D^* = \frac{1}{2}\exp\{E[\log(X F^2(X))]\}.
Proof. 
The proof is similar to that of Theorem 2.3 in [22]. □
Remark 1.
For a nonnegative and absolutely continuous random variable X with bounded support S, the weighted cumulative past extropy is nonpositive.

3. Some Characterization Results Based on the Order Statistics

In this section, for some characterization results, the following lemma is needed.
Lemma 1.
Let g be a continuous function on [0, 1] such that \int_0^1 g(y) y^m\,dy = 0 for all m \geq 0; then, g(y) = 0 for all y \in [0, 1].
In the following, we provide the WCPJ of the last and first order statistics. As before, we assume that the random variable X has bounded support S. The WCPJ of the last order statistic is
\bar{\mathcal{E}}^w J(X_{n:n}) = -\frac{1}{2}\int_0^{\sup S} x F_{X_{n:n}}^2(x)\,dx = -\frac{1}{2}\int_0^{\sup S} x F_X^{2n}(x)\,dx.

With the change of variable u = F_X(x), we can write

\bar{\mathcal{E}}^w J(X_{n:n}) = -\frac{1}{2}\int_0^1 u^{2n} \frac{F^{-1}(u)}{f(F^{-1}(u))}\,du.
Moreover, by using F_{X_{1:n}}(x) = 1 - \bar{F}_X^n(x), we have

\bar{\mathcal{E}}^w J(X_{1:n}) = -\frac{1}{2}\int_0^{\sup S} x \left(1 - \bar{F}^n(x)\right)^2 dx, (21)

and with another change of variable, u = \bar{F}(x), in (21), we have

\bar{\mathcal{E}}^w J(X_{1:n}) = -\frac{1}{2}\int_0^1 (1 - u^n)^2 \frac{F^{-1}(1-u)}{f(F^{-1}(1-u))}\,du.
Remark 2.
Let \Lambda^* = \bar{\mathcal{E}}^w J(X_{n:n}) - \bar{\mathcal{E}}^w J(X). Since \Lambda^* \geq 0, the uncertainty of X_{n:n} is not less than that of X, for all n; if n = 1, then \bar{\mathcal{E}}^w J(X_{n:n}) = \bar{\mathcal{E}}^w J(X).
Now, we evaluate the WCPJ of X_{n:n} for some distributions.
Example 3.
Let X have a power distribution with cdf F(x) = (x/\beta)^\theta, 0 < x < \beta, \theta > 0. Then, \bar{\mathcal{E}}J(X) = -\frac{\beta}{2(2\theta+1)}, \bar{\mathcal{E}}^w J(X) = -\frac{\beta^2}{4(\theta+1)}, \bar{\mathcal{E}}J(X_{n:n}) = -\frac{\beta}{2(2n\theta+1)}, and \bar{\mathcal{E}}^w J(X_{n:n}) = -\frac{\beta^2}{4(n\theta+1)}. In the sequel, \bar{\mathcal{E}}^w J(X_{n:n}) = \frac{\theta+1}{n\theta+1}\,\bar{\mathcal{E}}^w J(X).
Example 4.
Assume that X has a uniform distribution with support (a, b). Then, \bar{\mathcal{E}}J(X_{n:n}) = -\frac{b-a}{2(2n+1)}, \bar{\mathcal{E}}^w J(X_{n:n}) = -\frac{a(b-a)}{2(2n+1)} - \frac{(b-a)^2}{4(n+1)}, \bar{\mathcal{E}}J(X) = -\frac{b-a}{6}, and \bar{\mathcal{E}}^w J(X) = -\frac{(b-a)(a+3b)}{24}.
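As a numerical check (ours, not from the paper), the WCPJ of the sample maximum under U(a,b) can be computed directly from F_{X_{n:n}}^2 = F^{2n} inside the integral and compared with the closed form of Example 4.

```python
import numpy as np

def wcpj_max(cdf, lo, hi, n_sample, m=400_001):
    """WCPJ of the last order statistic X_{n:n}: -(1/2) * integral of x F(x)^(2n)."""
    x = np.linspace(lo, hi, m)
    y = x * cdf(x) ** (2 * n_sample)
    dx = (hi - lo) / (m - 1)
    # trapezoid rule on a uniform grid
    return -0.5 * (y.sum() - 0.5 * (y[0] + y[-1])) * dx

a, b, n = 1.0, 3.0, 2
numeric = wcpj_max(lambda x: (x - a) / (b - a), a, b, n)
closed = -a * (b - a) / (2 * (2 * n + 1)) - (b - a) ** 2 / (4 * (n + 1))
print(numeric, closed)  # the quadrature value matches the closed form
```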
Theorem 3.
Let X_1, …, X_n and Y_1, …, Y_n be random samples from nonnegative continuous cdfs F(x) and G(x) and pdfs f(x) and g(x), respectively, with a common bounded support. Then, F(x) = G(x) if and only if \bar{\mathcal{E}}^w J(X_{n:n}) = \bar{\mathcal{E}}^w J(Y_{n:n}) for all n.
Proof. 
The necessity is trivial; it remains to prove sufficiency. If \bar{\mathcal{E}}^w J(X_{n:n}) = \bar{\mathcal{E}}^w J(Y_{n:n}) for all n, then we have

-\frac{1}{2}\int_0^1 u^{2n}\left[\frac{F^{-1}(u)}{f(F^{-1}(u))} - \frac{G^{-1}(u)}{g(G^{-1}(u))}\right] du = 0.

By using Lemma 1, we obtain

\frac{F^{-1}(u)}{f(F^{-1}(u))} = \frac{G^{-1}(u)}{g(G^{-1}(u))}.

It follows that F^{-1}(u)\,dF^{-1}(u)/du = G^{-1}(u)\,dG^{-1}(u)/du for u \in [0, 1], since dF^{-1}(u)/du = 1/f(F^{-1}(u)). Integrating, we conclude that F^{-1}(u) = G^{-1}(u) for u \in [0, 1]. □
Theorem 4.
Suppose that X_1, …, X_n and Y_1, …, Y_n are random samples from nonnegative continuous cdfs F(x) and G(x) and pdfs f(x) and g(x), respectively, such that F(x^*) = G(x^*). Then, F(x) = G(x) for x < x^* if and only if

\bar{\mathcal{E}}^w J(X_{j:n} \mid X_{j+1:n} = x^*) = \bar{\mathcal{E}}^w J(Y_{j:n} \mid Y_{j+1:n} = x^*).
Proof. 
Suppose \bar{\mathcal{E}}^w J(X_{j:n} \mid X_{j+1:n} = x^*) = \bar{\mathcal{E}}^w J(Y_{j:n} \mid Y_{j+1:n} = x^*); that is, the WCPJs of the last order statistic for the two distributions F(x) and G(x) truncated at x^* are equal. Thus, by Theorem 3, these two truncated distributions are equal, which leads to F(x) = G(x) for x < x^*.

Conversely, if F(x) = G(x) for x < x^*, then, since F(x^*) = G(x^*), the distributions F(x) and G(x) truncated at x^* are equal for x < x^*; that is,

\frac{F(x)}{F(x^*)} = \frac{G(x)}{G(x^*)}, \quad x < x^*.

The distribution of X_{j:n}, given that X_{j+1:n} = x^*, is the same as the distribution of the last order statistic of a sample of size j from a population whose distribution F(x) is truncated on the right at x^*. For more details, see [23]. By Theorem 3, we conclude that \bar{\mathcal{E}}^w J(X_{j:n} \mid X_{j+1:n} = x^*) = \bar{\mathcal{E}}^w J(Y_{j:n} \mid Y_{j+1:n} = x^*). □

4. Some Inequalities

In this section, we obtain some upper and lower bounds for the WCPJ.
Proposition 2.
Let X be a nonnegative continuous random variable with cdf F_X(x) and bounded support S = [k, \sup S). Then, we obtain

\bar{\mathcal{E}}^w J(X) \leq k\,\bar{\mathcal{E}}J(X).
Corollary 1.
Let X be a continuous random variable with cdf F(x) and support [0, k]. Then,

(i)
k\,\bar{\mathcal{E}}J(X) \leq \bar{\mathcal{E}}^w J(X);

(ii)
\bar{\mathcal{E}}^w J(X) \leq -\frac{\bar{H}_F(k)}{2} \log\left[1 + \frac{2\bar{H}_F(k)}{k^2}\right],

where \bar{H}_F(k) = \int_0^k x F(x)\,dx.
In the following, stochastic orders of two distributions in terms of their characteristics are considered; for more details, see [24]. In the sequel, we show that the WCPJ order is implied by the usual stochastic order.
Definition 2.
A random variable X_1 is said to be smaller than X_2 in the usual stochastic order, denoted by X_1 \leq_{st} X_2, if P(X_1 > x) \leq P(X_2 > x) for all x.
Definition 3.
A random variable X_1 is said to be smaller than X_2 in the WCPJ order, denoted by X_1 \leq_{wcpj} X_2, if

\bar{\mathcal{E}}^w J(X_1) \leq \bar{\mathcal{E}}^w J(X_2).
Proposition 3.
Let X_1 and X_2 be nonnegative and continuous random variables. If X_1 \leq_{st} X_2, then X_1 \leq_{wcpj} X_2.
Example 5.
Let X and Y be two random variables with cdfs F_X(x) = x, x \in [0, 1], and F_Y(x) = x^2, x \in [0, 1], respectively. It is seen that X \leq_{st} Y; indeed, \bar{\mathcal{E}}^w J(X) = -1/8 \leq -1/12 = \bar{\mathcal{E}}^w J(Y), so X \leq_{wcpj} Y.
In the following, we find a lower bound for \bar{\mathcal{E}}^w J(X).
Remark 3.
Let X be a nonnegative random variable with cdf F(x) and bounded support S. Then,

\bar{\mathcal{E}}^w J(X) \geq \frac{1}{2} H_F \log\frac{H_F}{A} + K,

where K = -\frac{1}{2}\int_0^{\sup S} x^2 F^2(x)\,dx and A = \int_0^{\sup S} F(x)\,dx.
Proof. 
By using the log-sum inequality, we obtain

\int_0^{\sup S} x F(x) \log x\,dx \geq H_F \log\frac{H_F}{\int_0^{\sup S} F(x)\,dx}.

Using F^2(x) \leq F(x) and the inequality 1 - x \leq -\log x for x > 0, we obtain

\int_0^{\sup S} x (1 - x) F^2(x)\,dx \leq -H_F \log\frac{H_F}{\int_0^{\sup S} F(x)\,dx}. (28)

By multiplying both sides of (28) by -1/2, we have

\bar{\mathcal{E}}^w J(X) + \frac{1}{2}\int_0^{\sup S} x^2 F^2(x)\,dx \geq \frac{1}{2} H_F \log\frac{H_F}{\int_0^{\sup S} F(x)\,dx},

which completes the proof. □

5. Connections to Reliability Theory

In this section, the connection between the WCPJ and reliability theory will be considered. The inactivity time function is of interest in many fields, such as survival analysis, actuarial studies, economics, and reliability. The inactivity time is the duration of the time between the inspection time t and the failure time X, given that at time t the system was found to be down. If X is the lifetime of a system, then the inactivity time of the system is t - X \mid X \leq t, t \geq 0. Let X be a nonnegative continuous random variable with cdf F(x), such that E(X) is finite. The mean inactivity time (MIT) function of X is defined as

MIT(t) = E[t - X \mid X \leq t] = \int_0^t \frac{F(x)}{F(t)}\,dx, \quad t \geq 0.
This function has been used in various contexts of survival analysis and reliability theory involving characterization and stochastic orders of random lifetime. For more details, see [25,26,27,28,29,30]. In the following theorem, we prove that the WCPJ has a relation to the second moment of the inactivity time (SMIT) function.
Definition 4.
Let X be a nonnegative continuous random variable. Then, for all t \geq 0, we define the second moment of the inactivity time (SMIT) as

SMIT(t) = E[(t - X)^2 \mid X \leq t].

It can easily be seen that

SMIT(t) = 2t\,MIT(t) - \int_0^t \frac{2 x F(x)}{F(t)}\,dx.
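The identity above is easy to verify in a concrete case (our check, not from the paper). For X ~ U(0,1), MIT(t) = t/2, and both the direct definition and the identity should give SMIT(t) = t^2/3.

```python
import numpy as np

def trap(y, dx):
    # trapezoid rule on a uniform grid
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

t, m = 0.8, 200_001
x = np.linspace(0.0, t, m)
dx = t / (m - 1)

# Direct: E[(t - X)^2 | X <= t] = int_0^t (t - x)^2 dx / t, since f = 1, F(t) = t
smit_direct = trap((t - x) ** 2, dx) / t
# Identity: 2 t MIT(t) - int_0^t 2 x F(x) dx / F(t), with F(x) = x, MIT(t) = t/2
smit_identity = 2 * t * (t / 2) - trap(2 * x * x, dx) / t
print(smit_direct, smit_identity, t ** 2 / 3)  # all three agree
```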
Theorem 5.
Let X be a nonnegative continuous random variable with bounded support S, reversed hazard rate function r_h(x), SMIT function, and weighted cumulative past extropy \bar{\mathcal{E}}^w J(X). Then,

\bar{\mathcal{E}}^w J(X) \leq -\frac{1}{4} E[SMIT(X)] + C^*, (32)

where C^* = \frac{1}{2}\left( E[X \cdot MIT(X)] - H_F \right).
Proof. 
E[SMIT(X)] = 2 E[X \cdot MIT(X)] - 2\int_0^{\sup S} x F(x) \int_x^{\sup S} r_h(t)\,dt\,dx = 2 E[X \cdot MIT(X)] - 2\int_0^{\sup S} x F(x) \lvert \log F(x) \rvert\,dx \leq 2 E[X \cdot MIT(X)] - 2\int_0^{\sup S} x F(x)\,dx + 2\int_0^{\sup S} x F^2(x)\,dx = 2 E[X \cdot MIT(X)] - 2 H_F - 4\,\bar{\mathcal{E}}^w J(X).

In the sequel, we have

\bar{\mathcal{E}}^w J(X) \leq -\frac{1}{4} E[SMIT(X)] + \frac{1}{2}\left( E[X \cdot MIT(X)] - H_F \right),
and the proof is complete. □
Equation (32) is useful when we have some information about the SMIT or its behavior. An alternative bound to (32) can be given in terms of the hazard rate function, defined for a random variable X with pdf f(x) and survival function \bar{F}(x) as h(x) = f(x)/\bar{F}(x).
Proposition 4.
Let X be a nonnegative continuous random variable with bounded support S, hazard rate function h(x), and finite WCPJ. Then,

\bar{\mathcal{E}}^w J(X) \geq E[Q(X)],

where Q(t) = -\frac{1}{2}\int_t^{\sup S} x \int_0^x h(u)\,du\,dx.
Proof. 
\bar{\mathcal{E}}^w J(X) = -\frac{1}{2}\int_0^{\sup S} x F(x) \int_0^x f(t)\,dt\,dx = -\frac{1}{2}\int_0^{\sup S} f(t) \int_t^{\sup S} x F(x)\,dx\,dt \geq \frac{1}{2}\int_0^{\sup S} f(t) \int_t^{\sup S} x \log \bar{F}(x)\,dx\,dt = -\frac{1}{2}\int_0^{\sup S} f(t) \int_t^{\sup S} x \int_0^x h(u)\,du\,dx\,dt = E[Q(X)],

where the inequality uses F(x) = 1 - \bar{F}(x) \leq -\log \bar{F}(x). □

6. Empirical WCPJ

In this part, an estimator of the WCPJ is constructed by means of the empirical WCPJ. Suppose that X_1, …, X_n is a nonnegative, continuous, independent, and identically distributed random sample from a population having cdf F(x). By using the plug-in method, we define the empirical weighted cumulative past extropy as

\bar{\mathcal{E}}_n^w J(X) = -\frac{1}{2}\int_0^{+\infty} x F_n^2(x)\,dx,

where F_n(x) is the empirical distribution function. Let X_{(1)} \leq X_{(2)} \leq \cdots \leq X_{(n)} be the order statistics corresponding to the underlying random sample. Then, \bar{\mathcal{E}}_n^w J(X) can be rewritten in terms of the order statistics as

\bar{\mathcal{E}}_n^w J(X) = -\frac{1}{4}\sum_{i=1}^{n-1}\left( X_{(i+1)}^2 - X_{(i)}^2 \right)\left(\frac{i}{n}\right)^2. (34)
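Equation (34) is straightforward to implement. The following sketch (our code, not from the paper) sorts the sample and accumulates the sum; for a large uniform sample, the estimate should approach the population value -1/8 from Example 2.

```python
import numpy as np

def empirical_wcpj(sample):
    """Empirical WCPJ of Equation (34)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    i = np.arange(1, n)  # i = 1, ..., n-1
    return -0.25 * np.sum((x[1:] ** 2 - x[:-1] ** 2) * (i / n) ** 2)

rng = np.random.default_rng(0)
print(empirical_wcpj(rng.uniform(size=5000)))  # near -1/8 for U(0,1)
```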
In the following, we use \bar{\mathcal{E}}_n^w J(X) in (34) to test the uniformity of the random sample X_1, …, X_n. Before dealing with a test statistic, we give the following nice property of the uniform distribution among all distributions defined on the interval (0, 1). For a random variable X with cdf F and for p \in (0, 1], let \psi_p^J(F) be defined as

\psi_p^J(F) = -\frac{1}{2}\int_0^p x F^2(x)\,dx.

It is trivial that for the uniform random variable X on (0, 1) with cdf F_0(x) = x, we have \psi_p^J(F_0) = -p^4/8. Suppose that for a cdf F in the class of cdfs defined on (0, 1), \psi_p^J(F) = -p^4/8. This means that F and F_0 have the same measure based on \psi_p^J(\cdot), i.e., \psi_p^J(F) = \psi_p^J(F_0). So, one can see that
\int_0^p x\left[ F^2(x) - F_0^2(x) \right] dx = 0, \quad p \in (0, 1].
It is known that the intervals (0, p] generate the Borel σ-algebra of \Omega = (0, 1]. Therefore, one can write

\int_B x\left[ F^2(x) - F_0^2(x) \right] dx = 0, \quad B \in \mathcal{B}((0, 1]).

So, F(x) = F_0(x) almost everywhere. In this sense, the value -p^4/8 of \psi_p^J(\cdot) characterizes the standard uniform distribution: some cdfs on (0, 1) yield values lower than -p^4/8, some yield higher values, and only for the standard uniform distribution do we have \psi_p^J(F_0) = -p^4/8 for all p.

6.1. Uniform Goodness of Fit Test

Based on this last property, a test statistic can be designed for the uniform goodness-of-fit test. One can construct a test statistic based on \bar{\mathcal{E}}_n^w J(X) in (34), which is the sampling counterpart of the WCPJ measure. For this goodness-of-fit problem, we want to test whether the given random sample X_1, …, X_n comes from the standard uniform distribution; in other words, we test the hypothesis H_0: F = F_0 against the alternative H_1: F \neq F_0, where F_0 is the cdf of the standard uniform distribution. A simple nonparametric test statistic is \bar{\mathcal{E}}_n^w J(X) itself. In the next stage of our hypothesis testing, we should provide the critical region. One needs to determine two values, K_1(\alpha) and K_2(\alpha), where \alpha is a prespecified type I error rate; whenever \bar{\mathcal{E}}_n^w J(X) < K_1(\alpha) or \bar{\mathcal{E}}_n^w J(X) > K_2(\alpha), the null hypothesis of a standard uniform distribution is rejected in favor of the alternative. Since the distribution of \bar{\mathcal{E}}_n^w J(X) is not easy to derive, K_1(\alpha) and K_2(\alpha) can be estimated using the empirical quantiles of the test statistic under the standard uniform distribution. For a given type I error rate \alpha, we generate a random sample X_1, …, X_n from the standard uniform distribution, compute the value of \bar{\mathcal{E}}_n^w J(X), and repeat this step for a large number of runs, N = 100,000. We sort these N values of \bar{\mathcal{E}}_n^w J(X); then, K_1(\alpha) and K_2(\alpha) are estimated by the \alpha/2-th and (1 - \alpha/2)-th quantiles of the empirical distribution of \bar{\mathcal{E}}_n^w J(X), respectively.
In Table 1, we report the values of K_1(\alpha) and K_2(\alpha) for several sample sizes n.
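The Monte Carlo recipe above can be sketched as follows (our code; the run count N is reduced from the paper's 100,000 for speed, so the cutoffs only roughly reproduce Table 1).

```python
import numpy as np

def empirical_wcpj(x):
    # Empirical WCPJ of Equation (34)
    x = np.sort(x)
    n = x.size
    i = np.arange(1, n)
    return -0.25 * np.sum((x[1:] ** 2 - x[:-1] ** 2) * (i / n) ** 2)

rng = np.random.default_rng(1)
n, N, alpha = 20, 20_000, 0.05
# Simulate the test statistic under H0 (standard uniform)
stats = np.array([empirical_wcpj(rng.uniform(size=n)) for _ in range(N)])
k1, k2 = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
print(k1, k2)  # Table 1 reports about -0.1463 and -0.0668 for n = 20
```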

6.2. Power of the Test

In this part, the power of the proposed test statistic is compared with that of some competitors: the one-sample Kolmogorov–Smirnov test and the tests of Quesenberry and Miller [31] and Frosini [32]. To compute the p-values of these tests, the R package "uniftest" (R version 4.0.5) was used. The results for our proposed statistic \bar{\mathcal{E}}_n^w J(X), the Kolmogorov–Smirnov, the Quesenberry–Miller, and the Frosini tests are denoted by WCPJ, K-S, Q-M, and FRO, respectively.
To compute the power of the tests, a random sample taking all possible values in the interval (0, 1) was generated from a non-uniform distribution, such as the beta or Kumaraswamy distributions (see, for example, [33]), whose supports lie between 0 and 1. After that, the powers were estimated empirically. We considered the following alternative distributions to compute the tests' power:
(1) Beta distribution with pdf \frac{1}{B(a,b)} x^{a-1} (1-x)^{b-1}:
(i) Beta(1.5, 1.5)
(ii) Beta(0.5, 0.3)
(iii) Beta(10, 1);
(2) Kumaraswamy distribution with cdf 1 - (1 - x^a)^b:
(i) Kuma(0.5, 5)
(ii) Kuma(0.5, 0.3)
(iii) Kuma(10, 10);
(3) Piecewise distribution with cdf F(x) = 0.5 - 2^{k-1}(0.5 - x)^k for 0 \leq x \leq 0.5, and F(x) = 0.5 + 2^{k-1}(x - 0.5)^k for 0.5 < x \leq 1:
(i) Piec(2)
(ii) Piec(3.5)
(iii) Piec(5)
The results are depicted in Figure 1 for sample sizes n = 20, 30, 40, and 50.
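As an illustration of this power study (our sketch, with run counts reduced for speed), the rejection rate of the WCPJ test against a Kumaraswamy(0.5, 0.3) alternative can be estimated by simulation, sampling the alternative by inverting its cdf 1 - (1 - x^a)^b.

```python
import numpy as np

def empirical_wcpj(x):
    # Empirical WCPJ of Equation (34)
    x = np.sort(x)
    n = x.size
    i = np.arange(1, n)
    return -0.25 * np.sum((x[1:] ** 2 - x[:-1] ** 2) * (i / n) ** 2)

rng = np.random.default_rng(2)
n, alpha = 30, 0.05

# Cutoffs under H0 (standard uniform), by simulation
null_stats = np.array([empirical_wcpj(rng.uniform(size=n)) for _ in range(10_000)])
k1, k2 = np.quantile(null_stats, [alpha / 2, 1 - alpha / 2])

# Kumaraswamy(a, b) sampling by inverse cdf: F(x) = 1 - (1 - x^a)^b
a, b = 0.5, 0.3
def rkuma(size):
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

alt_stats = np.array([empirical_wcpj(rkuma(n)) for _ in range(2_000)])
power = np.mean((alt_stats < k1) | (alt_stats > k2))
print(power)  # estimated rejection rate under the alternative
```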
Figure 1 shows that the power of our proposed test based on the WCPJ was comparable to that of the others for the beta and Kumaraswamy distributions; in some cases for these distributions, its power was even superior to that of the other tests. For the third alternative distribution, the power of the WCPJ-based test was weaker than that of the rest; however, as the sample size n became larger, its power improved, and its ability to discriminate observations arising from the standard uniform distribution from those generated by nonuniform distributions increased. This test statistic can be satisfactory given its simple form and the rich information content behind it. Note that the plots comparing the powers of the test statistics are not shown in Figure 1 for Beta(10, 1), Kuma(0.5, 5), and Kuma(10, 10), because the powers of all the tests were equal to 1.

7. Conclusions

The use of the extropy measure and its generalizations has become widespread in many scientific fields. One recent generalization of this measure is the weighted extropy. In this paper, we introduced a new measure of uncertainty, related to cumulative extropy, named the weighted cumulative past extropy (WCPJ). The properties of the WCPJ and a number of results, including inequalities and various bounds on the WCPJ, were considered. Studies related to reliability theory were discussed. A topic that may attract the attention of researchers is the dynamic version of the extropy, in the sense that the uncertainty of the system depends on the time t; further research should investigate uncertainty measures based on the weighted dynamic cumulative past or residual extropy. As an application of the proposed method, the empirical WCPJ was proposed to estimate this new information measure, and a test statistic was provided for the goodness-of-fit test of the standard uniform distribution based on the proposed WCPJ. Several applications of extropy and its generalizations, such as in information theory, economics, communication theory, and physics, can be found in the literature. Ref. [34] studied the stock market in OECD countries based on a generalization of extropy known as negative cumulative extropy. Ref. [35] applied another version of extropy, the Tsallis extropy, to a pattern recognition problem. Ref. [16] explored an application of the fractional Deng extropy to a classification problem. Ref. [36] used some extropy measures for the problem of compressive sensing.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Maria Longobardi is a member of the research group GNAMPA of INdAM (Istituto Nazionale di Alta Matematica) and is partially supported by MIUR - PRIN 2017, project “Stochastic Models for Complex Systems”, no. 2017 JFFHSH. The present work was developed within the activities of the project 000009_ALTRI_CDA_75_2021_FRA_LINEA_B funded by “Programma per il finanziamento della ricerca di Ateneo-Linea B” of the University of Naples Federico II.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006.
3. Belis, M.; Guiasu, S. A quantitative-qualitative measure of information in cybernetic systems. IEEE Trans. Inf. Theory 1968, 14, 593–594.
4. Guiasu, S. Grouping data by using the weighted entropy. J. Stat. Plann. Inference 1986, 15, 63–69.
5. Di Crescenzo, A.; Longobardi, M. On weighted residual and past entropies. Sci. Math. Jpn. 2006, 64, 255–266.
6. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
7. Lad, F.; Sanfilippo, G.; Agrò, G. Extropy: Complementary dual of entropy. Stat. Sci. 2015, 30, 40–58.
8. Qiu, G.; Jia, K. The residual extropy of order statistics. Stat. Probab. Lett. 2018, 133, 15–22.
9. Qiu, G. The extropy of order statistics and record values. Stat. Probab. Lett. 2017, 120, 52–60.
10. Qiu, G.; Wang, L.; Wang, X. On extropy properties of mixed systems. Probab. Eng. Inf. Sci. 2019, 33, 471–486.
11. Raqab, M.Z.; Qiu, G. On extropy properties of ranked set sampling. Statistics 2019, 53, 210–226.
12. Jahanshahi, S.M.A.; Zarei, H.; Khammar, A. On cumulative residual extropy. Probab. Eng. Inf. Sci. 2020, 34, 605–625.
13. Kazemi, M.R.; Tahmasebi, S.; Calì, C.; Longobardi, M. Cumulative residual extropy of minimum ranked set sampling with unequal samples. Results Appl. Math. 2021, 10, 100156.
14. Vaselabadi, N.M.; Tahmasebi, S.; Kazemi, M.R.; Buono, F. Results on varextropy measure of random variables. Entropy 2021, 23, 356.
15. Buono, F.; Longobardi, M. A dual measure of uncertainty: The Deng extropy. Entropy 2020, 22, 582.
16. Kazemi, M.R.; Tahmasebi, S.; Buono, F.; Longobardi, M. Fractional Deng entropy and extropy and some applications. Entropy 2021, 23, 623.
17. Sathar, E.A.; Nair, R.D. On dynamic weighted extropy. J. Comput. Appl. Math. 2021, 393, 113507.
18. Kamari, O.; Buono, F. On extropy of past lifetime distribution. Ric. Mat. 2021, 70, 505–515.
19. Sathar, E.A.; Jose, J. Past extropy of k-records. Stoch. Qual. Control 2020, 35, 25–38.
20. Alizadeh Noughabi, H.; Jarrahiferiz, J. On the estimation of extropy. J. Nonparametr. Stat. 2019, 31, 88–99.
21. Al-Labadi, L.; Berry, S. Bayesian estimation of extropy and goodness of fit tests. J. Appl. Stat. 2022, 49, 357–370.
22. Hashempour, M.; Kazemi, M.R.; Tahmasebi, S. On weighted cumulative residual extropy: Characterization, estimation and testing. Statistics 2022, 56, 681–698.
23. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; John Wiley & Sons: New York, NY, USA, 1992.
24. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, NY, USA, 2007.
25. Li, X.; Lu, J. Stochastic comparisons on residual life and inactivity time of series and parallel systems. Probab. Eng. Inf. Sci. 2003, 17, 267–275.
26. Misra, N.; Gupta, N.; Dhariyal, I.D. Stochastic properties of residual life and inactivity time at a random time. Stoch. Model. 2008, 24, 89–102.
27. Ahmad, I.A.; Kayid, M. Characterizations of the RHR and MIT orderings and the DRHR and IMIT classes of life distributions. Probab. Eng. Inf. Sci. 2005, 19, 447–461.
28. Ahmad, I.A.; Kayid, M.; Pellerey, F. Further results involving the MIT order and IMIT class. Probab. Eng. Inf. Sci. 2005, 19, 377–395.
29. Kayid, M.; Ahmad, I.A. On the mean inactivity time ordering with reliability applications. Probab. Eng. Inf. Sci. 2004, 18, 395–409.
30. Nanda, A.K.; Singh, H.; Misra, N.; Paul, P. Reliability properties of reversed residual lifetime. Commun. Stat. Theory Methods 2003, 32, 2031–2042.
31. Quesenberry, C.P.; Miller, F.L., Jr. Power studies of some tests for uniformity. J. Stat. Comput. Simul. 1977, 5, 169–191.
32. Frosini, B.V. On the distribution and power of a goodness-of-fit statistic with parametric and nonparametric applications. In Goodness-of-Fit; Revesz, P., Sarkadi, K., Sen, P.K., Eds.; North-Holland: Amsterdam, The Netherlands, 1987; pp. 133–154.
33. Cordeiro, G.M.; de Castro, M. A new family of generalized distributions. J. Stat. Comput. Simul. 2011, 81, 883–898.
34. Tahmasebi, S.; Toomaj, A. On negative cumulative extropy with applications. Commun. Stat. Theory Methods 2022, 51, 5025–5047.
35. Balakrishnan, N.; Buono, F.; Longobardi, M. On Tsallis extropy with an application to pattern recognition. Stat. Probab. Lett. 2022, 180, 109241.
36. Tahmasebi, S.; Kazemi, M.R.; Keshavarz, A.; Jafari, A.A.; Buono, F. Compressive sensing using extropy measures of ranked set sampling. Math. Slovaca 2022, accepted for publication.
Figure 1. Power comparison of the WCPJ, K-S, Q-M, and FRO test statistics: above (left): beta (1.5, 1.5), (middle): beta (0.5, 0.3), and (right): kuma (0.5, 0.3); and below (left): piec (2), (middle): piec (3.5), and (right): piec (5) distributions.
Table 1. Values of K 1 ( α ) and K 2 ( α ) for α = 0.05 .
Cutoff Points      n = 20      n = 30      n = 40      n = 50
K_1(\alpha)       −0.1463     −0.1446     −0.1429     −0.1405
K_2(\alpha)       −0.0668     −0.0785     −0.0861     −0.0910