Article

Bayesian and E-Bayesian Estimations of Bathtub-Shaped Distribution under Generalized Type-I Hybrid Censoring

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(8), 934; https://doi.org/10.3390/e23080934
Submission received: 5 June 2021 / Revised: 8 July 2021 / Accepted: 17 July 2021 / Published: 22 July 2021

Abstract:
For the purpose of improving the statistical efficiency of estimators in life-testing experiments, generalized Type-I hybrid censoring has lately been implemented, as it guarantees that experiments terminate only after a certain number of failures appear. Given the wide application of the bathtub-shaped distribution in engineering and the recent introduction of the generalized Type-I hybrid censoring scheme, and considering that no work has combined this censoring model with a bathtub-shaped distribution, we consider parameter inference under generalized Type-I hybrid censoring. First, estimations of the unknown scale parameter and the reliability function are obtained by the Bayesian method based on LINEX and squared error loss functions with a conjugate gamma prior. A comparison of estimations under the E-Bayesian method for different prior distributions and loss functions is then presented. Additionally, Bayesian and E-Bayesian estimations with two unknown parameters are introduced. Furthermore, to verify the robustness of the estimations above, a Monte Carlo simulation study is performed. Finally, the application of the discussed inference in practice is illustrated by analyzing a real data set.

1. Introduction

1.1. Bathtub-Shaped Distribution

Chen [1] used the term ‘bathtub-shaped distribution’ to refer to a two-parameter lifetime distribution that possesses an increasing or bathtub-shaped hazard function. Because it can describe the lifetimes of many mechanical and electrical products, this distribution is widely used in practice, and there have been several further investigations into its study. Before [1] named the two-parameter lifetime distribution with the above hazard rate characteristics a bathtub-shaped distribution, a reliability distribution with a bathtub-shaped failure rate was proposed by [2], and ref. [3] employed an effective method to analyze data with a bathtub failure rate by introducing the exponentiated Weibull family.
Furthermore, ref. [4] considered Bayesian and maximum likelihood estimates of the two unknown parameters of a bathtub-shaped distribution. Additionally, a considerable amount of literature has been published on estimation under the bathtub-shaped distribution based on censoring schemes. The authors in [5] used the maximum likelihood method to calculate point estimators and derived an exact joint confidence region and confidence interval of the parameters based on a progressively Type-II censored sample. The researchers in [6] investigated the Fisher information matrix, maximum likelihood estimates, and confidence intervals for the unknown parameters under hybrid censored data.
The probability density function (pdf) and cumulative distribution function of a bathtub-shaped distribution take the forms, respectively,    
f(x; \lambda, \beta) = \lambda \beta x^{\beta-1} \exp\{\lambda(1 - e^{x^{\beta}}) + x^{\beta}\}, \quad \lambda, \beta > 0, \; x > 0,
F(x; \lambda, \beta) = 1 - \exp\{\lambda(1 - e^{x^{\beta}})\}, \quad \lambda, \beta > 0, \; x > 0.
The reliability function and hazard rate function are given by
R(t) = \exp\{\lambda(1 - e^{t^{\beta}})\}, \quad \lambda, \beta > 0, \; t > 0,
h(x) = \lambda \beta e^{x^{\beta}} x^{\beta-1}, \quad \lambda, \beta > 0, \; x > 0.
For simplicity, we denote the bathtub-shaped distribution with parameters (\lambda, \beta) as CHD(\lambda, \beta). The parameter \lambda has little influence on the shape of the hazard rate function, whereas \beta determines its shape: h(x) is bathtub-shaped if the shape parameter \beta < 1, and otherwise it is an increasing function. In particular, according to [7], when \beta = 1, CHD reduces to the exponential power distribution. Figure 1 and Figure 2 show the pdf, and Figure 3, Figure 4 and Figure 5 show h(x) of the bathtub-shaped distribution.
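The four functions above can be sketched in Python for numerical experimentation (a minimal illustration; the function names are ours):

```python
import math

def chen_pdf(x, lam, beta):
    """Density f(x) = lam*beta*x^(beta-1) * exp(lam*(1 - e^(x^beta)) + x^beta)."""
    return lam * beta * x ** (beta - 1) * math.exp(lam * (1 - math.exp(x ** beta)) + x ** beta)

def chen_cdf(x, lam, beta):
    """Distribution function F(x) = 1 - exp(lam*(1 - e^(x^beta)))."""
    return 1 - math.exp(lam * (1 - math.exp(x ** beta)))

def chen_reliability(t, lam, beta):
    """Reliability R(t) = exp(lam*(1 - e^(t^beta)))."""
    return math.exp(lam * (1 - math.exp(t ** beta)))

def chen_hazard(x, lam, beta):
    """Hazard h(x) = lam*beta*e^(x^beta)*x^(beta-1); bathtub-shaped when beta < 1."""
    return lam * beta * math.exp(x ** beta) * x ** (beta - 1)
```

As a consistency check, F(x) + R(x) = 1 and h(x) = f(x)/R(x) hold by construction.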

1.2. Generalized Hybrid Type-I Censoring Scheme

There is no doubt that estimations based on complete samples are more accurate. However, it is inevitable to use censoring for lifetime experiments due to time constraints and expense reduction. Type-I and Type-II censoring are usually considered as two fundamental methods to conduct lifetime experiments, where we terminate these experiments at a certain time point or upon the occurrence of a certain number of failures. With the rapid development of science and technology, products have higher reliability and longer life spans, resulting in a longer time of life-testing to obtain sufficient failure samples.
In order to cut down the life-testing duration, ref. [8] introduced a hybrid Type-I censoring scheme that can be considered a combination of the two fundamental censoring schemes discussed above. Under this scheme, both a time point and a number of failures are pre-fixed, and the test is terminated as soon as either is reached. However, this scheme also has a limitation: there is a possibility that extremely few failures occur before the pre-determined time, so it may be impractical to make statistical inferences under such a scheme.
In order to overcome this disadvantage and improve the efficiency of estimators in life-testing experiments—guaranteeing that a certain number of failures appear before the end of the experiment while still saving testing time and the cost resulting from failures of units—ref. [9] introduced a generalized hybrid Type-I censoring scheme. Generalized hybrid Type-I censoring assures a minimum number of failures, which mitigates the shortcoming of hybrid Type-I censoring. For simplicity, we denote this as Type-I GHCS.
We assume that X_{1:n}, \ldots, X_{n:n} are the n ordered observations of failure lifetimes. The values r and T are fixed in advance, where r represents the ideal number of failures and T is the time point. The three censoring models mentioned above can be expressed as
  • Type-I censoring: terminate at T.
  • Hybrid Type-I censoring: terminate at T^* = \min\{X_{r:n}, T\}.
  • Type-I GHCS: terminate at T^* = \max\{X_{k:n}, \min\{X_{r:n}, T\}\}, where k < r < n and k is the minimum acceptable number of failures, fixed before the experiment.
In this article, we focus on Type-I GHCS, and it can be divided into three cases:
  • Case I: X_{1:n} < \cdots < X_{k:n}, when X_{k:n} > T.
  • Case II: X_{1:n} < \cdots < X_{r:n}, when X_{r:n} < T.
  • Case III: X_{1:n} < \cdots < X_{d:n}, when X_{k:n} < T < X_{r:n}.
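The termination rule and the three cases above can be expressed as a small helper that, given the ordered failure times, returns the termination time and the observed sample (an illustrative sketch; `ghcs_type1_sample` is a hypothetical name):

```python
def ghcs_type1_sample(x_sorted, k, r, T):
    """Termination time and observed sample under Type-I GHCS:
    T* = max(X_{k:n}, min(X_{r:n}, T)); every unit failing by T* is observed."""
    assert 0 < k < r <= len(x_sorted)
    T_star = max(x_sorted[k - 1], min(x_sorted[r - 1], T))
    observed = [x for x in x_sorted if x <= T_star]
    return T_star, observed
```

With a small T the rule waits for the k-th failure (Case I); with a large T it stops at the r-th failure (Case II); in between it stops at T with k \le D < r failures (Case III).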
Ref. [9] introduced exact likelihood estimation of exponential lifetime distribution based on GHCS. Ref. [10] discussed inferential issues under hybrid censoring schemes and presented details on developments regarding generalized hybrid censoring. Ref. [11] studied estimations of a single parameter from a Burr-X distribution under Type-I GHCS. Furthermore, ref. [12] applied an acceptance sampling plan under Weibull distribution under GHCS.
Suppose that x_{i:n} is the i-th failure time based on samples from a bathtub-shaped distribution under Type-I GHCS. The likelihood function is given by
L(\lambda) = \begin{cases} \frac{n!}{(n-k)!}\lambda^{k}\beta^{k}\exp\{-(n-k)\lambda W_k\}\prod_{i=1}^{k} x_{i:n}^{\beta-1}(1+W_i)e^{-\lambda W_i}, & D < k, \\ \frac{n!}{(n-D)!}\lambda^{D}\beta^{D}\exp\{-(n-D)\lambda W_T\}\prod_{i=1}^{D} x_{i:n}^{\beta-1}(1+W_i)e^{-\lambda W_i}, & k \le D < r, \\ \frac{n!}{(n-r)!}\lambda^{r}\beta^{r}\exp\{-(n-r)\lambda W_r\}\prod_{i=1}^{r} x_{i:n}^{\beta-1}(1+W_i)e^{-\lambda W_i}, & D \ge r, \end{cases}
where W_i = e^{x_{i:n}^{\beta}} - 1, W_k = e^{x_{k:n}^{\beta}} - 1, W_T = e^{T^{\beta}} - 1, W_r = e^{x_{r:n}^{\beta}} - 1, and D represents the number of failures before the time point T.
According to (5), the likelihood function is translated into the following form,
L(\lambda) \propto \lambda^{n}\exp(-\lambda Q),
where
Q = \begin{cases} \sum_{i=1}^{k} W_i + (n-k)W_k, & D < k, \\ \sum_{i=1}^{D} W_i + (n-D)W_T, & k \le D < r, \\ \sum_{i=1}^{r} W_i + (n-r)W_r, & D \ge r. \end{cases}
The MLE of the parameter \lambda can be derived from the equation below,
\frac{\partial L}{\partial \lambda} = n\lambda^{n-1}\exp(-\lambda Q) - Q\lambda^{n}\exp(-\lambda Q) = 0.
From the equation above, the MLE of λ is obtained as
\hat{\lambda} = \frac{n}{Q}.
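A sketch of this MLE computation for known \beta, reading the exponent n in the proportional likelihood as the number of observed failures (the function name and case handling are our own illustration):

```python
import math

def lambda_mle(x_obs, n, k, r, D, T, beta):
    """MLE of lambda with beta known: lambda_hat = m / Q, where m is the number
    of observed failures (k, D, or r) and Q collects the W-terms of the observed
    and censored units, W(t) = e^{t^beta} - 1."""
    W = lambda t: math.exp(t ** beta) - 1
    m = len(x_obs)                      # equals k, D, or r depending on the case
    if D < k:                           # Case I: last observation is x_{k:n}
        Q = sum(map(W, x_obs)) + (n - k) * W(x_obs[-1])
    elif D < r:                         # Case III: censored at time T
        Q = sum(map(W, x_obs)) + (n - D) * W(T)
    else:                               # Case II: last observation is x_{r:n}
        Q = sum(map(W, x_obs)) + (n - r) * W(x_obs[-1])
    return m / Q
```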
By the same method, the MLE of parameter β can be derived by the equation,
\frac{\partial \ln L}{\partial \beta} = \begin{cases} \frac{k}{\beta} + \sum_{i=1}^{k}\ln(x_{i:n})\left[1 + x_{i:n}^{\beta}\left(1 - \lambda e^{x_{i:n}^{\beta}}\right)\right] - (n-k)\lambda e^{x_{k:n}^{\beta}} x_{k:n}^{\beta}\ln(x_{k:n}), & D < k, \\ \frac{D}{\beta} + \sum_{i=1}^{D}\ln(x_{i:n})\left[1 + x_{i:n}^{\beta}\left(1 - \lambda e^{x_{i:n}^{\beta}}\right)\right] - (n-D)\lambda e^{T^{\beta}} T^{\beta}\ln(T), & k \le D < r, \\ \frac{r}{\beta} + \sum_{i=1}^{r}\ln(x_{i:n})\left[1 + x_{i:n}^{\beta}\left(1 - \lambda e^{x_{i:n}^{\beta}}\right)\right] - (n-r)\lambda e^{x_{r:n}^{\beta}} x_{r:n}^{\beta}\ln(x_{r:n}), & D \ge r \end{cases} = 0.
Previous studies based on bathtub-shaped distribution have always dealt with censored samples under a typical statistical inference method—maximum likelihood estimation for instance. Ref. [13] investigated the estimation problems of unknown parameters, reliability, hazard rate functions, and their approximate confidence intervals under the maximum likelihood method and credible intervals under the Bayesian estimation method.
However, there has been no previous study to coalesce a generalized Type-I hybrid censoring scheme with a bathtub-shaped distribution under the E-Bayesian method. Therefore, our main purpose is to investigate estimations of the scale parameters and reliability function of bathtub-shaped distribution under E-Bayesian and Bayesian methods based on a generalized Type-I hybrid censoring scheme with the presupposition that the shape parameter is known.
The remainder of this paper is organized as follows. Section 2 investigates Bayesian estimations against squared error and LINEX loss functions under Type-I GHCS. Section 3 compares the E-Bayesian estimations derived from three different prior distributions. Section 4 introduces Bayesian and E-Bayesian estimations with two unknown parameters. Section 5 establishes the results of a Monte Carlo simulation study with the Metropolis–Hastings algorithm for the purpose of evaluating the effects of different methods and prior distributions on the estimators. Section 6 presents a numerical example from a real data set for the purpose of examining the theoretical inference discussed above.

2. Bayesian Estimation

Bayesian estimation measures the uncertainties of unknown parameters by connecting the prior information from a random sample with certain distributions. Prior distributions as well as loss functions affect the accuracy of estimation under the Bayesian method. In this section, under two different loss functions, we assume the parameter β is known and calculate the estimation of scale parameter λ and the reliability function under a bathtub-shaped distribution based on Type-I GHCS. Then, we derive Bayesian estimations.
First, we suppose that λ follows the gamma conjugate prior distribution given by:
\pi(\lambda) = \frac{b^{a}}{\Gamma(a)}\lambda^{a-1}e^{-b\lambda}, \quad a, b > 0.
On the basis of the Bayesian method, we multiply (10) by (5) to obtain the posterior distribution of λ
\pi(\lambda|\tilde{x}) = \begin{cases} \kappa^{-1}\lambda^{k+a-1}e^{-b\lambda}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}, & D < k, \\ \kappa^{-1}\lambda^{D+a-1}e^{-b\lambda}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}, & k \le D < r, \\ \kappa^{-1}\lambda^{r+a-1}e^{-b\lambda}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}, & D \ge r, \end{cases}
where x ˜ = ( x 1 : n , , x n : n ) and κ could be written in the following form
\kappa = \begin{cases} \int_0^{+\infty}\lambda^{k+a-1}e^{-b\lambda}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}\,d\lambda, & D < k, \\ \int_0^{+\infty}\lambda^{D+a-1}e^{-b\lambda}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}\,d\lambda, & k \le D < r, \\ \int_0^{+\infty}\lambda^{r+a-1}e^{-b\lambda}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}\,d\lambda, & D \ge r. \end{cases}
First, we adopt a symmetrical loss function called the squared error (SE) loss function, which lays weight equally on overestimation and underestimation. Based on this loss function, Bayesian estimations are equivalent to the posterior means, which could be obtained to be, respectively,
\hat{\lambda}_{BS} = E(\lambda|\tilde{x}) = \int_0^{+\infty}\lambda\,\pi(\lambda|\tilde{x})\,d\lambda = \begin{cases} \frac{k+a}{\sum_{i=1}^{k}W_i + (n-k)W_k + b}, & D < k, \\ \frac{D+a}{\sum_{i=1}^{D}W_i + (n-D)W_T + b}, & k \le D < r, \\ \frac{r+a}{\sum_{i=1}^{r}W_i + (n-r)W_r + b}, & D \ge r. \end{cases}
\hat{R(t)}_{BS} = E(R(t)|\tilde{x}) = \int_0^{+\infty}R(t)\,\pi(\lambda|\tilde{x})\,d\lambda = \begin{cases} \left(\frac{\sum_{i=1}^{k}W_i + (n-k)W_k + b}{\sum_{i=1}^{k}W_i + (n-k)W_k + b + P}\right)^{k+a}, & D < k, \\ \left(\frac{\sum_{i=1}^{D}W_i + (n-D)W_T + b}{\sum_{i=1}^{D}W_i + (n-D)W_T + b + P}\right)^{D+a}, & k \le D < r, \\ \left(\frac{\sum_{i=1}^{r}W_i + (n-r)W_r + b}{\sum_{i=1}^{r}W_i + (n-r)W_r + b + P}\right)^{r+a}, & D \ge r, \end{cases}
where P = e^{t^{\beta}} - 1.
Secondly, we consider the LINEX loss function, which has an asymmetric shape and is commonly used in practice, as it is often more realistic to penalize overestimation and underestimation unequally. The Bayesian estimation of \lambda against the LINEX loss function is given by
\hat{\lambda}_{BL} = -\frac{1}{h}\ln E(e^{-h\lambda}|\tilde{x}) = -\frac{1}{h}\ln\int_0^{+\infty}e^{-h\lambda}\pi(\lambda|\tilde{x})\,d\lambda = \begin{cases} \frac{k+a}{h}\ln\left(1 + \frac{h}{b + \sum_{i=1}^{k}W_i + (n-k)W_k}\right), & D < k, \\ \frac{D+a}{h}\ln\left(1 + \frac{h}{b + \sum_{i=1}^{D}W_i + (n-D)W_T}\right), & k \le D < r, \\ \frac{r+a}{h}\ln\left(1 + \frac{h}{b + \sum_{i=1}^{r}W_i + (n-r)W_r}\right), & D \ge r. \end{cases}
Similarly, the Bayesian estimator of R ( t ) , under a LINEX loss function, is derived in the following form:
\hat{R(t)}_{BL} = -\frac{1}{h}\ln E(e^{-hR(t)}|\tilde{x}) = \begin{cases} -\frac{1}{h}\ln\sum_{i=0}^{+\infty}\frac{(-h)^{i}}{i!}\left(\frac{P_k + b}{P_k + b + iP}\right)^{k+a}, & D < k, \\ -\frac{1}{h}\ln\sum_{i=0}^{+\infty}\frac{(-h)^{i}}{i!}\left(\frac{P_T + b}{P_T + b + iP}\right)^{D+a}, & k \le D < r, \\ -\frac{1}{h}\ln\sum_{i=0}^{+\infty}\frac{(-h)^{i}}{i!}\left(\frac{P_r + b}{P_r + b + iP}\right)^{r+a}, & D \ge r, \end{cases}
where P_k = \sum_{i=1}^{k}W_i + (n-k)W_k, P_T = \sum_{i=1}^{D}W_i + (n-D)W_T, and P_r = \sum_{i=1}^{r}W_i + (n-r)W_r.
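The closed-form Bayesian estimates under both loss functions can be sketched as small helpers (illustrative names of our own; `m` denotes the number of observed failures and `P_sum` the corresponding sum-of-W term in the denominator):

```python
import math

def bayes_lambda_se(m, P_sum, a, b):
    """Posterior mean of lambda under SE loss: (m + a) / (P_sum + b)."""
    return (m + a) / (P_sum + b)

def bayes_lambda_linex(m, P_sum, a, b, h):
    """LINEX estimate of lambda: ((m + a) / h) * ln(1 + h / (b + P_sum))."""
    return (m + a) / h * math.log(1 + h / (b + P_sum))

def bayes_reliability_se(t, beta, m, P_sum, a, b):
    """Posterior mean of R(t): ((P_sum + b)/(P_sum + b + P))^(m + a), P = e^{t^beta} - 1."""
    P = math.exp(t ** beta) - 1
    return ((P_sum + b) / (P_sum + b + P)) ** (m + a)
```

As h tends to 0 the LINEX estimate reduces to the SE estimate, and since ln(1 + x) < x, the LINEX estimate with h > 0 always lies below the posterior mean.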

3. E-Bayesian Estimation

Considering that the prior information may be deficient, the E-Bayesian method could be used to settle the uncertainty by introducing a class of priors. The authors in [14] demonstrated that, based on a specified prior distribution, the purpose of the E-Bayesian method is to estimate unknown parameters or to predict values of a sequence of random variables.
Under the SE and LINEX loss functions, we derive E-Bayesian estimators of \lambda and the reliability function. Additionally, for the purpose of perceiving the effects of prior distributions on E-Bayesian estimations, three different prior distributions are considered. The authors in [15] indicated that the hyperparameters a and b should be chosen so that \pi(\lambda|a,b) is decreasing in \lambda. The derivative of \pi(\lambda) is
\frac{d\pi(\lambda)}{d\lambda} = \frac{b^{a}}{\Gamma(a)}\lambda^{a-2}e^{-b\lambda}\left[(a-1) - b\lambda\right].
It is apparent that the prior distribution π ( λ ) is a decreasing function in λ when 0 < a < 1 and b > 0 . Assume that the bivariate density function in which a and b are independent is
π ( a , b ) = π 1 ( a ) π 2 ( b ) .
According to [16], when the parameter a is given, the tail of the prior distribution becomes thinner as b increases, which would likely reduce the robustness of Bayesian estimations. Therefore, b is selected to be smaller than a pre-determined constant c. In this case, for the parameter \lambda and R(t), the E-Bayesian estimations are obtained as
\hat{\lambda}_{EB} = E(\hat{\lambda}_{BS}|\tilde{x}) = \int_0^{1}\int_0^{c}\hat{\lambda}_{BS}(a,b)\,\pi(a,b)\,db\,da,
\hat{R(t)}_{EB} = E(\hat{R(t)}_{BS}|\tilde{x}) = \int_0^{1}\int_0^{c}\hat{R(t)}_{BS}(a,b)\,\pi(a,b)\,db\,da.
Next, for the purpose of exploring the influence of a prior distribution on an estimator under the E-Bayesian method, we derive the estimates under three different prior distributions. These three different prior distributions are selected as follows:
\pi_1(a,b) = \frac{1}{cB(u,v)}a^{u-1}(1-a)^{v-1}, \quad \pi_2(a,b) = \frac{2(c-b)}{c^{2}B(u,v)}a^{u-1}(1-a)^{v-1}, \quad \pi_3(a,b) = \frac{2b}{c^{2}B(u,v)}a^{u-1}(1-a)^{v-1}, \quad 0 < b < c, \; 0 < a < 1,
where B ( u , v ) is the beta function. π 1 ( a , b ) is a constant in b, while π 2 ( a , b ) is a decreasing function in b and π 3 ( a , b ) is an increasing function in b.
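These three hyperpriors can be written out and checked numerically (a sketch; the helper names are ours). Note that \pi_2 + \pi_3 = 2\pi_1, which is a convenient consistency check:

```python
import math

def beta_fn(u, v):
    """Beta function B(u, v)."""
    return math.gamma(u) * math.gamma(v) / math.gamma(u + v)

def pi1(a, b, c, u, v):
    """pi_1: constant in b."""
    return a ** (u - 1) * (1 - a) ** (v - 1) / (c * beta_fn(u, v))

def pi2(a, b, c, u, v):
    """pi_2: decreasing in b."""
    return 2 * (c - b) * a ** (u - 1) * (1 - a) ** (v - 1) / (c ** 2 * beta_fn(u, v))

def pi3(a, b, c, u, v):
    """pi_3: increasing in b."""
    return 2 * b * a ** (u - 1) * (1 - a) ** (v - 1) / (c ** 2 * beta_fn(u, v))
```

Each density integrates to 1 over (0, 1) × (0, c), which a midpoint-rule sum confirms.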

3.1. E-Bayesian Estimations Based on SE Loss Function

Based on the SE loss function, the E-Bayesian estimations of λ with the prior distribution π 1 ( a , b ) can be obtained from (13), (17) and (19) as
\hat{\lambda}_{EBS1} = \int_0^{1}\int_0^{c}\hat{\lambda}_{BS}(a,b)\,\pi_1(a,b)\,db\,da = \frac{1}{cB(u,v)}\int_0^{1}\int_0^{c}\frac{k+a}{P_k+b}\,a^{u-1}(1-a)^{v-1}\,db\,da = \begin{cases} \frac{1}{c}\left(k + \frac{u}{u+v}\right)\ln\left(1 + \frac{c}{P_k}\right), & D < k, \\ \frac{1}{c}\left(D + \frac{u}{u+v}\right)\ln\left(1 + \frac{c}{P_T}\right), & k \le D < r, \\ \frac{1}{c}\left(r + \frac{u}{u+v}\right)\ln\left(1 + \frac{c}{P_r}\right), & D \ge r, \end{cases}
where P_k = \sum_{i=1}^{k}W_i + (n-k)W_k, P_T = \sum_{i=1}^{D}W_i + (n-D)W_T, and P_r = \sum_{i=1}^{r}W_i + (n-r)W_r.
Likewise, the E-Bayesian estimations of λ under π 2 ( a , b ) and π 3 ( a , b ) could be written, respectively, in the following forms:
\hat{\lambda}_{EBS2} = \begin{cases} \frac{2}{c}\left(k + \frac{u}{u+v}\right)\left[\left(1 + \frac{P_k}{c}\right)\ln\left(1 + \frac{c}{P_k}\right) - 1\right], & D < k, \\ \frac{2}{c}\left(D + \frac{u}{u+v}\right)\left[\left(1 + \frac{P_T}{c}\right)\ln\left(1 + \frac{c}{P_T}\right) - 1\right], & k \le D < r, \\ \frac{2}{c}\left(r + \frac{u}{u+v}\right)\left[\left(1 + \frac{P_r}{c}\right)\ln\left(1 + \frac{c}{P_r}\right) - 1\right], & D \ge r. \end{cases}
\hat{\lambda}_{EBS3} = \begin{cases} \frac{2}{c}\left(k + \frac{u}{u+v}\right)\left[1 - \frac{P_k}{c}\ln\left(1 + \frac{c}{P_k}\right)\right], & D < k, \\ \frac{2}{c}\left(D + \frac{u}{u+v}\right)\left[1 - \frac{P_T}{c}\ln\left(1 + \frac{c}{P_T}\right)\right], & k \le D < r, \\ \frac{2}{c}\left(r + \frac{u}{u+v}\right)\left[1 - \frac{P_r}{c}\ln\left(1 + \frac{c}{P_r}\right)\right], & D \ge r. \end{cases}
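The three closed forms can be verified numerically (a sketch with hypothetical function names); in particular, since \pi_2 + \pi_3 = 2\pi_1, the identity \hat{\lambda}_{EBS2} + \hat{\lambda}_{EBS3} = 2\hat{\lambda}_{EBS1} must hold:

```python
import math

def ebs1(m, P, c, u, v):
    """E-Bayesian estimate of lambda under pi_1 and SE loss."""
    return (1 / c) * (m + u / (u + v)) * math.log(1 + c / P)

def ebs2(m, P, c, u, v):
    """Under pi_2 (decreasing in b)."""
    return (2 / c) * (m + u / (u + v)) * ((1 + P / c) * math.log(1 + c / P) - 1)

def ebs3(m, P, c, u, v):
    """Under pi_3 (increasing in b)."""
    return (2 / c) * (m + u / (u + v)) * (1 - (P / c) * math.log(1 + c / P))
```

A midpoint-rule double integral of (m + a)/(P + b) against \pi_1 with u = v = 2 reproduces `ebs1` to numerical accuracy.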

3.2. E-Bayesian Estimations Based on a LINEX Loss Function

Under a LINEX loss function, the E-Bayesian estimation of λ with the prior distribution π 1 ( a , b ) can be obtained from (14), (17), and (19) as
\hat{\lambda}_{EBL1} = \int_0^{1}\int_0^{c}\hat{\lambda}_{BL}\,\pi_1(a,b)\,db\,da = \begin{cases} \frac{1}{ch}\left(k + \frac{u}{u+v}\right)\left[c\ln\left(1 + \frac{h}{c+P_k}\right) + (P_k+h)\ln\left(1 + \frac{c}{P_k+h}\right) - P_k\ln\left(1 + \frac{c}{P_k}\right)\right], & D < k, \\ \frac{1}{ch}\left(D + \frac{u}{u+v}\right)\left[c\ln\left(1 + \frac{h}{c+P_T}\right) + (P_T+h)\ln\left(1 + \frac{c}{P_T+h}\right) - P_T\ln\left(1 + \frac{c}{P_T}\right)\right], & k \le D < r, \\ \frac{1}{ch}\left(r + \frac{u}{u+v}\right)\left[c\ln\left(1 + \frac{h}{c+P_r}\right) + (P_r+h)\ln\left(1 + \frac{c}{P_r+h}\right) - P_r\ln\left(1 + \frac{c}{P_r}\right)\right], & D \ge r. \end{cases}
Under the same method, the E-Bayesian estimations of λ under π 2 ( a , b ) and π 3 ( a , b ) could be written, respectively, as
\hat{\lambda}_{EBL2} = \int_0^{1}\int_0^{c}\hat{\lambda}_{BL}\,\pi_2(a,b)\,db\,da = \begin{cases} \left(k + \frac{u}{u+v}\right)\left[\frac{1}{h}\ln\left(\frac{P_k+h}{P_k}\right) - \frac{(P_k+c)^{2}}{c^{2}h}\ln\left(\frac{P_k+c}{P_k}\right) + \frac{(P_k+c+h)^{2}}{c^{2}h}\ln\left(\frac{P_k+h+c}{P_k+h}\right) - \frac{1}{c}\right], & D < k, \\ \left(D + \frac{u}{u+v}\right)\left[\frac{1}{h}\ln\left(\frac{P_T+h}{P_T}\right) - \frac{(P_T+c)^{2}}{c^{2}h}\ln\left(\frac{P_T+c}{P_T}\right) + \frac{(P_T+c+h)^{2}}{c^{2}h}\ln\left(\frac{P_T+h+c}{P_T+h}\right) - \frac{1}{c}\right], & k \le D < r, \\ \left(r + \frac{u}{u+v}\right)\left[\frac{1}{h}\ln\left(\frac{P_r+h}{P_r}\right) - \frac{(P_r+c)^{2}}{c^{2}h}\ln\left(\frac{P_r+c}{P_r}\right) + \frac{(P_r+c+h)^{2}}{c^{2}h}\ln\left(\frac{P_r+h+c}{P_r+h}\right) - \frac{1}{c}\right], & D \ge r. \end{cases}
\hat{\lambda}_{EBL3} = \int_0^{1}\int_0^{c}\hat{\lambda}_{BL}\,\pi_3(a,b)\,db\,da = \begin{cases} \left(k + \frac{u}{u+v}\right)\left[\frac{1}{h}\ln\left(1 + \frac{h}{c+P_k}\right) + \frac{P_k^{2}}{c^{2}h}\ln\left(1 + \frac{c}{P_k}\right) - \frac{(P_k+h)^{2}}{c^{2}h}\ln\left(1 + \frac{c}{P_k+h}\right) + \frac{1}{c}\right], & D < k, \\ \left(D + \frac{u}{u+v}\right)\left[\frac{1}{h}\ln\left(1 + \frac{h}{c+P_T}\right) + \frac{P_T^{2}}{c^{2}h}\ln\left(1 + \frac{c}{P_T}\right) - \frac{(P_T+h)^{2}}{c^{2}h}\ln\left(1 + \frac{c}{P_T+h}\right) + \frac{1}{c}\right], & k \le D < r, \\ \left(r + \frac{u}{u+v}\right)\left[\frac{1}{h}\ln\left(1 + \frac{h}{c+P_r}\right) + \frac{P_r^{2}}{c^{2}h}\ln\left(1 + \frac{c}{P_r}\right) - \frac{(P_r+h)^{2}}{c^{2}h}\ln\left(1 + \frac{c}{P_r+h}\right) + \frac{1}{c}\right], & D \ge r. \end{cases}
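A quick numerical sanity check of the LINEX closed forms (hypothetical helper names): as h \to 0 they should reduce to the corresponding SE estimates \hat{\lambda}_{EBS1} and \hat{\lambda}_{EBS3}:

```python
import math

def ebl1(m, P, c, u, v, h):
    """E-Bayesian estimate of lambda under pi_1 and LINEX loss."""
    M = m + u / (u + v)
    return M / (c * h) * (c * math.log(1 + h / (c + P))
                          + (P + h) * math.log(1 + c / (P + h))
                          - P * math.log(1 + c / P))

def ebl3(m, P, c, u, v, h):
    """E-Bayesian estimate of lambda under pi_3 and LINEX loss."""
    M = m + u / (u + v)
    return M * (math.log(1 + h / (c + P)) / h
                + P ** 2 / (c ** 2 * h) * math.log(1 + c / P)
                - (P + h) ** 2 / (c ** 2 * h) * math.log(1 + c / (P + h))
                + 1 / c)
```

Evaluating with a small h and comparing against the SE closed forms confirms the limits.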

3.3. E-Bayesian Estimations of R ( t )

The E-Bayesian estimation of R ( t ) under an SE loss function can be derived from (15), (17) and (19) by using the prior distribution π 1 ( a , b ) ,
\hat{R(t)}_{EBS1} = \int_0^{1}\int_0^{c}\hat{R}_{BS}(t)\,\pi_1(a,b)\,db\,da = \begin{cases} \frac{1}{c}\int_0^{c}\left(1 + \frac{P}{b+P_k}\right)^{-k}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_k}{b+P_k+P}\right)db, & D < k, \\ \frac{1}{c}\int_0^{c}\left(1 + \frac{P}{b+P_T}\right)^{-D}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_T}{b+P_T+P}\right)db, & k \le D < r, \\ \frac{1}{c}\int_0^{c}\left(1 + \frac{P}{b+P_r}\right)^{-r}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_r}{b+P_r+P}\right)db, & D \ge r, \end{cases}
where {}_1F_1(\cdot;\cdot;\cdot) is the generalized (confluent) hypergeometric function. For more details, one can refer to [17].
Under π 2 ( a , b ) and π 3 ( a , b ) , the E-Bayesian estimations of R ( t ) are written in the following forms under the same method.
\hat{R(t)}_{EBS2} = \int_0^{1}\int_0^{c}\hat{R}_{BS}(t)\,\pi_2(a,b)\,db\,da = \begin{cases} \frac{2}{c^{2}}\int_0^{c}(c-b)\left(1 + \frac{P}{b+P_k}\right)^{-k}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_k}{b+P_k+P}\right)db, & D < k, \\ \frac{2}{c^{2}}\int_0^{c}(c-b)\left(1 + \frac{P}{b+P_T}\right)^{-D}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_T}{b+P_T+P}\right)db, & k \le D < r, \\ \frac{2}{c^{2}}\int_0^{c}(c-b)\left(1 + \frac{P}{b+P_r}\right)^{-r}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_r}{b+P_r+P}\right)db, & D \ge r. \end{cases}
\hat{R(t)}_{EBS3} = \int_0^{1}\int_0^{c}\hat{R}_{BS}(t)\,\pi_3(a,b)\,db\,da = \begin{cases} \frac{2}{c^{2}}\int_0^{c}b\left(1 + \frac{P}{b+P_k}\right)^{-k}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_k}{b+P_k+P}\right)db, & D < k, \\ \frac{2}{c^{2}}\int_0^{c}b\left(1 + \frac{P}{b+P_T}\right)^{-D}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_T}{b+P_T+P}\right)db, & k \le D < r, \\ \frac{2}{c^{2}}\int_0^{c}b\left(1 + \frac{P}{b+P_r}\right)^{-r}{}_1F_1\!\left(u;\, u+v;\, \ln\frac{b+P_r}{b+P_r+P}\right)db, & D \ge r. \end{cases}
Under a LINEX loss function, the E-Bayesian estimation of R ( t ) with different prior distributions can be obtained from (16), (17) and (19) as follows,
\hat{R(t)}_{EBLi} = \int_0^{1}\int_0^{c}\hat{R}_{BL}(t)\,\pi_i(a,b)\,db\,da, \quad i = 1, 2, 3.
The integrals in (26), (27), and (28) do not have simple closed forms, and the integrals in (29) are likewise infeasible to derive analytically. Thus, to further evaluate the E-Bayesian estimates of R(t), numerical techniques must be introduced.
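One such numerical technique is a simple midpoint rule: writing the inner Beta expectation explicitly plays the role of the {}_1F_1 factor, so \hat{R(t)}_{EBS1} can be evaluated without special-function libraries (an illustrative sketch; the function name is ours):

```python
import math

def r_ebs1(t, beta, m, P_sum, c, u, v, n_grid=200):
    """Midpoint-rule evaluation of R_hat_EBS1: average over b in (0, c) of
    E_a[((b + P_sum) / (b + P_sum + P))^(m + a)], a ~ Beta(u, v), P = e^{t^beta} - 1.
    The inner Beta expectation corresponds to the 1F1 factor in the closed form."""
    P = math.exp(t ** beta) - 1
    B = math.gamma(u) * math.gamma(v) / math.gamma(u + v)
    total = 0.0
    for i in range(n_grid):                      # midpoint rule in b
        b = (i + 0.5) * c / n_grid
        z = (b + P_sum) / (b + P_sum + P)
        inner = 0.0
        for j in range(n_grid):                  # midpoint rule in a
            a = (j + 0.5) / n_grid
            inner += z ** (m + a) * a ** (u - 1) * (1 - a) ** (v - 1) / (B * n_grid)
        total += inner / n_grid                  # the 1/c and db weights cancel to 1/n_grid
    return total
```

As expected of a reliability estimate, the value lies in (0, 1), tends to 1 as t \to 0, and decreases in t.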

4. Estimation with Two Unknown Parameters

4.1. Bayesian Estimation

In this section, we assume that λ and β are independent and follow a gamma prior distribution:
\pi(\lambda) = \frac{b_1^{a_1}}{\Gamma(a_1)}\lambda^{a_1-1}e^{-b_1\lambda}, \quad a_1, b_1 > 0,
\pi(\beta) = \frac{b_2^{a_2}}{\Gamma(a_2)}\beta^{a_2-1}e^{-b_2\beta}, \quad a_2, b_2 > 0.
Thus, the joint prior distribution is obtained as
\pi(\lambda,\beta) = \frac{b_1^{a_1}}{\Gamma(a_1)}\frac{b_2^{a_2}}{\Gamma(a_2)}\lambda^{a_1-1}\beta^{a_2-1}e^{-(b_1\lambda + b_2\beta)}, \quad a_1, b_1, a_2, b_2 > 0.
On the basis of the Bayesian method, we multiply (32) by (5) to obtain the joint posterior distribution
\pi(\lambda,\beta|\tilde{x}) = \begin{cases} \kappa^{-1}\lambda^{k+a_1-1}\beta^{k+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}, & D < k, \\ \kappa^{-1}\lambda^{D+a_1-1}\beta^{D+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}, & k \le D < r, \\ \kappa^{-1}\lambda^{r+a_1-1}\beta^{r+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}, & D \ge r, \end{cases}
where x ˜ = ( x 1 : n , , x n : n ) and κ could be written in the following form
\kappa = \begin{cases} \int_0^{+\infty}\int_0^{+\infty}\lambda^{k+a_1-1}\beta^{k+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}\,d\lambda\,d\beta, & D < k, \\ \int_0^{+\infty}\int_0^{+\infty}\lambda^{D+a_1-1}\beta^{D+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}\,d\lambda\,d\beta, & k \le D < r, \\ \int_0^{+\infty}\int_0^{+\infty}\lambda^{r+a_1-1}\beta^{r+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}\,d\lambda\,d\beta, & D \ge r. \end{cases}
Similarly, we could obtain the Bayesian estimations of two unknown parameters and R ( t ) under SE and LINEX loss functions.
\hat{\lambda}_{BS2} = E(\lambda|\tilde{x}) = \int_0^{+\infty}\int_0^{+\infty}\lambda\,\pi(\lambda,\beta|\tilde{x})\,d\lambda\,d\beta = \begin{cases} \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{k+a_1}\beta^{k+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}\,d\lambda\,d\beta, & D < k, \\ \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{D+a_1}\beta^{D+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}\,d\lambda\,d\beta, & k \le D < r, \\ \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{r+a_1}\beta^{r+a_2-1}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}\,d\lambda\,d\beta, & D \ge r. \end{cases}
\hat{\lambda}_{BL2} = -\frac{1}{h}\ln E(e^{-h\lambda}|\tilde{x}) = -\frac{1}{h}\ln\int_0^{+\infty}\int_0^{+\infty}e^{-h\lambda}\pi(\lambda,\beta|\tilde{x})\,d\lambda\,d\beta = \begin{cases} -\frac{1}{h}\ln\left[\frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{k+a_1-1}\beta^{k+a_2-1}e^{-((b_1+h)\lambda + b_2\beta)}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}\,d\lambda\,d\beta\right], & D < k, \\ -\frac{1}{h}\ln\left[\frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{D+a_1-1}\beta^{D+a_2-1}e^{-((b_1+h)\lambda + b_2\beta)}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}\,d\lambda\,d\beta\right], & k \le D < r, \\ -\frac{1}{h}\ln\left[\frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{r+a_1-1}\beta^{r+a_2-1}e^{-((b_1+h)\lambda + b_2\beta)}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}\,d\lambda\,d\beta\right], & D \ge r. \end{cases}
\hat{\beta}_{BS2} = E(\beta|\tilde{x}) = \int_0^{+\infty}\int_0^{+\infty}\beta\,\pi(\lambda,\beta|\tilde{x})\,d\lambda\,d\beta = \begin{cases} \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{k+a_1-1}\beta^{k+a_2}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}\,d\lambda\,d\beta, & D < k, \\ \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{D+a_1-1}\beta^{D+a_2}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}\,d\lambda\,d\beta, & k \le D < r, \\ \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{r+a_1-1}\beta^{r+a_2}e^{-(b_1\lambda + b_2\beta)}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}\,d\lambda\,d\beta, & D \ge r. \end{cases}
\hat{\beta}_{BL2} = -\frac{1}{h}\ln E(e^{-h\beta}|\tilde{x}) = -\frac{1}{h}\ln\int_0^{+\infty}\int_0^{+\infty}e^{-h\beta}\pi(\lambda,\beta|\tilde{x})\,d\lambda\,d\beta = \begin{cases} -\frac{1}{h}\ln\left[\frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{k+a_1-1}\beta^{k+a_2-1}e^{-(b_1\lambda + (b_2+h)\beta)}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}\,d\lambda\,d\beta\right], & D < k, \\ -\frac{1}{h}\ln\left[\frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{D+a_1-1}\beta^{D+a_2-1}e^{-(b_1\lambda + (b_2+h)\beta)}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}\,d\lambda\,d\beta\right], & k \le D < r, \\ -\frac{1}{h}\ln\left[\frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{r+a_1-1}\beta^{r+a_2-1}e^{-(b_1\lambda + (b_2+h)\beta)}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}\,d\lambda\,d\beta\right], & D \ge r. \end{cases}
\hat{R(t)}_{BS2} = E(R(t)|\tilde{x}) = \int_0^{+\infty}\int_0^{+\infty}R(t)\,\pi(\lambda,\beta|\tilde{x})\,d\lambda\,d\beta = \begin{cases} \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{k+a_1-1}\beta^{k+a_2-1}e^{-(b_1\lambda + b_2\beta) + \lambda(1-e^{t^{\beta}})}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}\,d\lambda\,d\beta, & D < k, \\ \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{D+a_1-1}\beta^{D+a_2-1}e^{-(b_1\lambda + b_2\beta) + \lambda(1-e^{t^{\beta}})}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}\,d\lambda\,d\beta, & k \le D < r, \\ \frac{1}{\kappa}\int_0^{+\infty}\int_0^{+\infty}\lambda^{r+a_1-1}\beta^{r+a_2-1}e^{-(b_1\lambda + b_2\beta) + \lambda(1-e^{t^{\beta}})}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}\,d\lambda\,d\beta, & D \ge r. \end{cases}
We cannot directly calculate the above integrals in simple closed form, but approximate Bayesian estimators can be derived by Lindley's approximation. For more details, one can refer to [18].

4.2. E-Bayesian Estimation

Following the E-Bayesian estimation of the single unknown parameter \lambda, we select the prior distributions for the hyperparameters of \lambda and \beta as follows,
\pi_{11}(a_1,b_1) = \frac{1}{c_1 B(u_1,v_1)}a_1^{u_1-1}(1-a_1)^{v_1-1}, \quad \pi_{12}(a_1,b_1) = \frac{2(c_1-b_1)}{c_1^{2}B(u_1,v_1)}a_1^{u_1-1}(1-a_1)^{v_1-1}, \quad \pi_{13}(a_1,b_1) = \frac{2b_1}{c_1^{2}B(u_1,v_1)}a_1^{u_1-1}(1-a_1)^{v_1-1}, \quad 0 < b_1 < c_1, \; 0 < a_1 < 1,
\pi_{21}(a_2,b_2) = \frac{1}{c_2 B(u_2,v_2)}a_2^{u_2-1}(1-a_2)^{v_2-1}, \quad \pi_{22}(a_2,b_2) = \frac{2(c_2-b_2)}{c_2^{2}B(u_2,v_2)}a_2^{u_2-1}(1-a_2)^{v_2-1}, \quad \pi_{23}(a_2,b_2) = \frac{2b_2}{c_2^{2}B(u_2,v_2)}a_2^{u_2-1}(1-a_2)^{v_2-1}, \quad 0 < b_2 < c_2, \; 0 < a_2 < 1,
where B ( u 1 , v 1 ) and B ( u 2 , v 2 ) are beta functions. Under an SE loss function, the E-Bayesian estimations can be obtained from (40), (41), (35), and (37) as,
\hat{\lambda}_{EBSi} = \int_0^{1}\int_0^{c_1}\hat{\lambda}_{BS}\,\pi_{1i}(a_1,b_1)\,db_1\,da_1,
\hat{\beta}_{EBSi} = \int_0^{1}\int_0^{c_2}\hat{\beta}_{BS}\,\pi_{2i}(a_2,b_2)\,db_2\,da_2.
Under a LINEX loss function, the E-Bayesian estimations can be obtained from (40), (41), (36), and (38) as,
\hat{\lambda}_{EBLi} = \int_0^{1}\int_0^{c_1}\hat{\lambda}_{BL}\,\pi_{1i}(a_1,b_1)\,db_1\,da_1,
\hat{\beta}_{EBLi} = \int_0^{1}\int_0^{c_2}\hat{\beta}_{BL}\,\pi_{2i}(a_2,b_2)\,db_2\,da_2.
Similarly, we could use the MCMC method to compute E-Bayesian estimations.

5. MCMC Method and Simulation Study

The Markov Chain Monte Carlo (MCMC) algorithm allows us to approximate integrals that cannot be evaluated explicitly in multidimensional problems; it is therefore a widely used and effective method for drawing samples from complex posterior distributions. In this section, we apply Monte Carlo simulation under Type-I GHCS to compute E-Bayesian estimates of \lambda and R(t) against different prior distributions and loss functions.
According to (11), the full conditional posterior probability density function of the parameter λ is written as,
\pi(\lambda|x) \propto \begin{cases} \lambda^{k+a-1}e^{-b\lambda}\prod_{i=1}^{k}e^{-\lambda W_i}\exp\{-\lambda(n-k)W_k\}, & D < k, \\ \lambda^{D+a-1}e^{-b\lambda}\prod_{i=1}^{D}e^{-\lambda W_i}\exp\{-\lambda(n-D)W_T\}, & k \le D < r, \\ \lambda^{r+a-1}e^{-b\lambda}\prod_{i=1}^{r}e^{-\lambda W_i}\exp\{-\lambda(n-r)W_r\}, & D \ge r. \end{cases}
As the conditional posterior PDF of λ is complex, we introduce the MCMC method to obtain random samples by considering a normal distribution as the proposal distribution.
The MCMC approach is shown in Algorithm 1. We could refer to [19,20] for more details regarding the implementation of MCMC algorithm.
Algorithm 1 MCMC algorithm.
1: Set the initial value \lambda^{(0)} to be equal to the MLE \hat{\lambda}.
2: Set v = 1.
3: repeat
4:  Take N(\hat{\lambda}, Var(\hat{\lambda})) as the proposal distribution and generate a candidate \lambda^{(*)} from it at iteration v.
5:  Generate u from the Uniform(0,1) distribution.
6:  Compute the acceptance probability p(\lambda^{(v-1)}, \lambda^{(*)}) = \min\left\{\frac{\pi(\lambda^{(*)}|\tilde{x})}{\pi(\lambda^{(v-1)}|\tilde{x})}, 1\right\}.
7:  if u < p then
8:   \lambda^{(v)} = \lambda^{(*)};
9:  else
10:   \lambda^{(v)} = \lambda^{(v-1)};
11:  end if
12:  Compute R^{(v)}(t) = \exp\{\lambda^{(v)}(1 - e^{t^{\beta}})\}.
13: until v = N
14: Set \delta = N/10 as the burn-in period.
15: Based on the SE loss function,
\hat{\lambda}_{BS} = \frac{1}{N-\delta}\sum_{v=\delta+1}^{N}\lambda^{(v)}, \qquad \hat{R(t)}_{BS} = \frac{1}{N-\delta}\sum_{v=\delta+1}^{N}R^{(v)}(t).
16: Based on the LINEX loss function,
\hat{\lambda}_{BL} = -\frac{1}{h}\ln\left[\frac{1}{N-\delta}\sum_{v=\delta+1}^{N}e^{-h\lambda^{(v)}}\right], \qquad \hat{R(t)}_{BL} = -\frac{1}{h}\ln\left[\frac{1}{N-\delta}\sum_{v=\delta+1}^{N}e^{-hR^{(v)}(t)}\right].
17: Compute the credible intervals of the Bayesian and E-Bayesian estimations,
(\lambda_{[N(\gamma/2)]}, \lambda_{[N(1-\gamma/2)]}) and (R(t)_{[N(\gamma/2)]}, R(t)_{[N(1-\gamma/2)]}),
where [a] denotes the smallest integer not less than a, \gamma is the significance level, and N is the number of draws.
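A minimal Python sketch of Algorithm 1 follows. We use a symmetric random-walk proposal so that the plain posterior ratio in step 6 is the correct acceptance probability, and we exploit the fact that the single-parameter posterior is Gamma(m + a, Q + b) to sanity-check the chain (the function names are our own):

```python
import math
import random

def mh_lambda(m, Q, a, b, N=11000, sd=0.5, seed=7):
    """Random-walk Metropolis sampler for pi(lambda|x) proportional to
    lambda^(m+a-1) * exp(-(Q+b)*lambda), i.e. a Gamma(m+a, Q+b) posterior."""
    random.seed(seed)
    def log_post(lam):
        # log-posterior up to an additive constant
        if lam <= 0:
            return -math.inf
        return (m + a - 1) * math.log(lam) - (Q + b) * lam
    lam = m / Q                                  # start at the MLE
    chain = []
    for _ in range(N):
        prop = lam + random.gauss(0, sd)         # symmetric proposal
        if random.random() < math.exp(min(0.0, log_post(prop) - log_post(lam))):
            lam = prop                           # accept the candidate
        chain.append(lam)
    return chain[N // 10:]                       # drop the burn-in period

def se_estimates(chain, t, beta):
    """Posterior means of lambda and R(t) under SE loss."""
    lam_hat = sum(chain) / len(chain)
    r_hat = sum(math.exp(l * (1 - math.exp(t ** beta))) for l in chain) / len(chain)
    return lam_hat, r_hat
```

Because the target here is a known Gamma distribution, the chain mean can be compared against the exact posterior mean (m + a)/(Q + b).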
For the Bayesian and E-Bayesian methods, in order to evaluate and compare the performance of estimators against different loss functions, we perform simulation comparisons with data generated under different scenarios. We assume that the parameter \beta is fixed at the constant 1. Given a particular value of c, a and b can be obtained according to (19). The algorithm for generating and analyzing data based on Type-I GHCS under the bathtub-shaped distribution is shown in Algorithm 2.
Algorithm 2 The algorithm of generating and analyzing data.
1: Generate samples under Type-I GHCS from CHD(\lambda, \beta):
  • Generate n independent variables U = (U_1, U_2, \ldots, U_n) from the Uniform(0,1) distribution.
  • For given r, k, T, set the number of observed failures m as
m = \begin{cases} k, & D < k, \\ d, & k \le D < r, \\ r, & D \ge r. \end{cases}
  • The generalized Type-I hybrid censored sample X = (X_{1:n}, \ldots, X_{m:n}) is derived by the inverse transform method: X = \left[\ln\left(1 - \frac{\ln(1-U)}{\lambda}\right)\right]^{1/\beta}.
2: Repeat the simulation N = 11,000 times.
3: Discard the first \delta = 1000 iterations (the burn-in) of each chain.
4: Use the values of the parameter \lambda from the sampler to generate Bayesian and E-Bayesian estimates.
5: Compare the effectiveness of the different methods according to the mean square errors (MSE) of \lambda and R(t), where
MSE(\hat{\lambda}) = \frac{1}{N-\delta}\sum_{v=\delta+1}^{N}(\hat{\lambda}_v - \lambda)^{2}, \qquad MSE(\hat{R}(t)) = \frac{1}{N-\delta}\sum_{v=\delta+1}^{N}(\hat{R}(t)_v - R(t))^{2}.
Steps 2–4 correspond to steps 1–16 in Algorithm 1.
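Step 1 of Algorithm 2—inverse-transform generation followed by Type-I GHCS truncation—can be sketched as follows (the function name is a hypothetical illustration):

```python
import math
import random

def generate_ghcs(n, k, r, T, lam, beta, seed=3):
    """Draw n lifetimes from CHD(lam, beta) by inversion,
    X = [ln(1 - ln(1-U)/lam)]^(1/beta), then truncate per Type-I GHCS."""
    random.seed(seed)
    xs = sorted((math.log(1 - math.log(1 - random.random()) / lam)) ** (1 / beta)
                for _ in range(n))
    D = sum(1 for x in xs if x <= T)   # failures observed before T
    if D < k:
        m = k       # Case I: wait until the k-th failure
    elif D < r:
        m = D       # Case III: stop at time T with D failures
    else:
        m = r       # Case II: stop at the r-th failure
    return xs[:m], D
```

For a fixed U, applying the CDF F to the generated X recovers U exactly, confirming the inversion.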
Under different configurations (n, (r,k), T, \beta), we draw samples in every simulation and compute the corresponding MLEs and Bayesian estimations.
In order to facilitate the simulation, according to [21], we take the special case
\pi(a,b) = \frac{1}{c}, \quad 0 < a < 1, \; 0 < b < c,
where c > 0 .
In this case, the E M S E of λ ^ could be obtained as,
EMSE(\hat{\lambda}_{EBS}) = \int_0^{1}\int_0^{c}MSE(\hat{\lambda}_{BS})\,\pi(a,b)\,da\,db = \begin{cases} \frac{2k+1}{2Q(Q+c)}, & D < k, \\ \frac{2D+1}{2Q(Q+c)}, & k \le D < r, \\ \frac{2r+1}{2Q(Q+c)}, & D \ge r. \end{cases}
EMSE(\hat{\lambda}_{EBL}) = \int_0^{1}\int_0^{c}MSE(\hat{\lambda}_{BL})\,\pi(a,b)\,da\,db = \begin{cases} \int_0^{1}\int_0^{c}\frac{(a+k)^{2}}{h^{2}}\left[\frac{h^{2}(a+k+1)}{(a+k)(Q+b)^{2}} + \frac{2h}{b+Q}\ln\left(\frac{Q+b}{h+Q+b}\right) + \ln^{2}\left(\frac{Q+b}{h+Q+b}\right)\right]\pi(a,b)\,da\,db, & D < k, \\ \int_0^{1}\int_0^{c}\frac{(a+D)^{2}}{h^{2}}\left[\frac{h^{2}(a+D+1)}{(a+D)(Q+b)^{2}} + \frac{2h}{b+Q}\ln\left(\frac{Q+b}{h+Q+b}\right) + \ln^{2}\left(\frac{Q+b}{h+Q+b}\right)\right]\pi(a,b)\,da\,db, & k \le D < r, \\ \int_0^{1}\int_0^{c}\frac{(a+r)^{2}}{h^{2}}\left[\frac{h^{2}(a+r+1)}{(a+r)(Q+b)^{2}} + \frac{2h}{b+Q}\ln\left(\frac{Q+b}{h+Q+b}\right) + \ln^{2}\left(\frac{Q+b}{h+Q+b}\right)\right]\pi(a,b)\,da\,db, & D \ge r. \end{cases}
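Reading MSE(\hat{\lambda}_{BS}) as the posterior expected squared error (a + k)/(Q + b)^2, the closed form for the SE case can be cross-checked by direct numerical integration (a sketch; the function names are ours):

```python
def emse_ebs_closed(m, Q, c):
    """Closed form: EMSE(lambda_EBS) = (2m + 1) / (2 Q (Q + c))."""
    return (2 * m + 1) / (2 * Q * (Q + c))

def emse_ebs_numeric(m, Q, c, n_grid=300):
    """Midpoint-rule double integral of MSE(lambda_BS) = (m + a) / (Q + b)^2
    against the uniform hyper-prior pi(a, b) = 1/c on (0, 1) x (0, c)."""
    total = 0.0
    for i in range(n_grid):
        a = (i + 0.5) / n_grid
        for j in range(n_grid):
            b = (j + 0.5) * c / n_grid
            total += (m + a) / (Q + b) ** 2 * (1 / c) * (1 / n_grid) * (c / n_grid)
    return total
```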
The sample size configurations (n, (r,k), T) under Type-I GHCS are set to n = 50, 80, 120 with the three sets of fixed numbers (r,k) = (40,30), (60,40), (90,60), respectively, for each size. Simultaneously, for the purpose of studying the reliability function under the bathtub-shaped distribution, we set T = 0.2, 0.4.
Based on the tabulated estimates and the mean square errors of the estimations, computed with the R software, the following conclusions can be drawn from Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9, in which the true value of \lambda is 4.37 and 2.5, respectively, and EMSE denotes the mean square error of the E-Bayesian estimations. Table 1, Table 2, Table 3 and Table 4 show the results of the estimations when \lambda = 4.37, \beta = 1 in detail. Table 5, Table 6, Table 7, Table 8 and Table 9 confirm the robustness of the conclusions. See more details on the MCMC outputs in Appendix A.
1. Both estimates are close to their theoretical values under the different methods and loss functions.
2. The mean square errors of the E-Bayesian estimations of the parameter \lambda and R(t) are smaller than those of the Bayesian estimations. Therefore, the efficiency of the E-Bayesian method is higher in the sense of a smaller MSE.
3. The MSEs of estimations under the SE loss function are smaller than those based on the LINEX loss function. Thus, the SE loss function is more efficient for generating estimates.
4. As (n, (r,k), T) increases, the MSE of the estimates decreases, and the average length of the credible intervals shrinks. In conclusion, the performance of the estimates improves as the sample size increases.

6. Illustrative Example

In order to clarify the algorithms and examine the accuracy and robustness of the theoretical results discussed above, we analyze a real data set of the number of cycles to failure for electrical devices given by [22]. The authors in [4] divided each data point by 1000 for computational convenience, and they tested and concluded that the hazard rate of this data set was bathtub-shaped. Table 10 presents the electrical lifetime data in detail; the unit of the data is the number of cycles.
When analyzing this data set under Type-I GHCS, we assume that ( n , ( r , k ) , T ) = ( 60 , ( 18 , 15 ) , 2 ) and evaluate the estimates under the SE and LINEX loss functions.
Under this assumption, X k : n = X 15 : 60 = 0.917 < T .
Thus, the termination time is T* = min ( X r : n = 1.064 , T = 2 ) = 1.064 , and the number of observed failures is 18. According to (8), the MLE of the parameter λ is obtained as λ̂ = n / Σ_{i=1}^{r} W_i = 0.3870718 .
Table 11 and Table 12 confirm the good performance of the Bayesian and E-Bayesian estimations under the different loss functions; the estimates are consistent with the real data set. As in the simulation, the MSEs of λ and R ( t ) under the E-Bayesian approach are smaller than those under the Bayesian approach, and the SE loss function again yields the more efficient estimates. These findings agree with the statistical inference and numerical simulation results, so it is reasonable to conclude that the theoretical results discussed above are accurate and robust.
Figure 6 and Figure 7 show the relationship between c, λ̂ , and MSE ( λ̂ ) : as c increases, both λ̂_EB and EMSE ( λ̂_EB ) decrease.
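The monotone behaviour in Figures 6 and 7 can be checked numerically. Below is a minimal sketch assuming, as is common in E-Bayesian analysis and consistent with the role of c here, a uniform hyperprior b ~ U(0, c) on the gamma rate hyperparameter; the values of d, S, and a are illustrative only, not taken from the paper.

```python
import numpy as np

def bayes_se(d, S, a, b):
    # Posterior mean of lambda under a Gamma(a, b) prior: the posterior is
    # Gamma(a + d, b + S), so the SE-loss Bayes estimate is (a + d)/(b + S).
    return (a + d) / (b + S)

def e_bayes_se(d, S, a, c):
    # E-Bayesian estimate: average the Bayes estimate over b ~ U(0, c).
    # Closed form: (1/c) * Int_0^c (a + d)/(b + S) db = ((a + d)/c) * ln(1 + c/S).
    return (a + d) / c * np.log(1.0 + c / S)

d, S, a = 18, 46.0, 0.6119     # illustrative failure count, statistic, hyperparameter
for c in (0.25, 0.5, 1.0, 1.5):
    print(c, e_bayes_se(d, S, a, c))
```

Because averaging over larger b values shrinks the estimate, λ̂_EBS decreases as c grows, matching the trend in the figures and in Table 11.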

7. Conclusions

The bathtub-shaped distribution plays a crucial role in mechanical and electronic reliability research. Moreover, Type-I GHCS makes parameter estimation more efficient in practical product-testing situations, since it saves both testing time and the cost caused by unit failures.
In this article, the Bayesian and E-Bayesian methods were applied to estimation under a bathtub-shaped distribution. Bayesian theory allows statistical inference to incorporate prior information: assuming a conjugate gamma prior, we derived estimates of the parameter and the reliability function under different loss functions.
We applied the MCMC method in the simulation study and to a real lifetime data set of electrical devices to illustrate the statistical inferences discussed above. Across different sample sizes, parameter values, and loss functions, the E-Bayesian method and the SE loss function proved more efficient in terms of mean square error. Our study offers experimenters an efficient way to examine the quality of industrial products, and this research can be further developed to address practical problems based on multiple censoring schemes.

Author Contributions

Investigation, Y.Z.; Investigation, K.L.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are openly available in [22].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. MCMC Outputs

According to Figure A1, the sample density of λ is approximately normal. Figure A2, Figure A3 and Figure A4 show the stationarity of the Markov chain.
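The autocorrelation diagnostics plotted in Figures A3 and A4 can be reproduced with a few lines. The helper below is our own sketch of the standard sample-autocorrelation computation applied to a chain of draws.

```python
import numpy as np

def autocorr(chain, max_lag=20):
    """Sample autocorrelation of an MCMC chain at lags 0..max_lag,
    of the kind plotted in Figure A3 to assess stationarity."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)                     # lag-0 (unnormalised) variance
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# A well-mixed chain should look like white noise: lag-0 equals 1 and
# higher lags are negligible.
demo = np.random.default_rng(42).normal(size=10_000)
print(autocorr(demo)[:3])
```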
Figure A1. The sample density of λ when N = 10 , 000 .
Figure A2. The sequence of white noise in the draws.
Figure A3. The autocorrelation coefficient of λ in the draws.
Figure A4. The partial autocorrelation coefficient of λ in the draws.

Appendix B. The Robustness of the Simulation with Different h

Table A1 and Table A2 show that the differences between the estimates obtained with different values of h are relatively small, and the results still follow the statistical inference discussed above. Therefore, the robustness of the simulations with respect to h is verified.
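The weak dependence on h seen in Tables A1 and A2 is also visible analytically: for a Gamma(α, θ) posterior (shape α, rate θ), the LINEX Bayes estimate is (α/h) ln(1 + h/θ), which decreases slowly in h and tends to the posterior mean α/θ as h → 0. A small sketch with illustrative α and θ (our own values, not from the tables):

```python
import numpy as np

def linex_estimate(alpha, theta, h):
    # LINEX Bayes estimate for a Gamma(alpha, rate=theta) posterior:
    # -(1/h) * ln E[exp(-h*lambda)] = (alpha/h) * ln(1 + h/theta).
    return (alpha / h) * np.log(1.0 + h / theta)

alpha, theta = 18.6119, 46.15        # illustrative posterior shape and rate
for h in (0.5, 1.5, 3.0):
    print(h, linex_estimate(alpha, theta, h))
```

This mirrors the tables: the h = 0.5 estimates sit close to the SE-loss values, while h = 3 pulls them down only modestly.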
Table A1. Estimates of parameter λ = 2.5 for different loss functions when β = 1 , h = 3 , a = 0.6119 , and b = 0.1523 .
n | (r, k) | T | c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
50(40,30)0.20.52.60540.22922.57420.22452.31340.32512.28780.3166
0.4 2.55410.18322.52870.18012.31350.24502.29200.2399
80(60,40)0.2 2.57960.16652.55670.16402.35850.21932.33870.2152
0.4 2.53770.11422.52170.11302.38040.14022.36610.1384
120(90,60)0.2 2.54810.10852.53290.10742.39810.13242.38440.1308
0.4 2.52210.07512.51150.07452.41580.08672.40600.0860
50(40,30)0.212.60920.22992.57790.22512.31640.32622.29080.3177
0.4 2.55520.18332.52980.18022.31450.24512.29300.2401
80(60,40)0.2 2.56450.16462.54180.16212.34570.21632.32610.2123
0.4 2.53850.11432.52260.11312.38120.14032.36680.1385
120(90,60)0.2 2.55230.10892.53710.10782.40180.13292.38810.1313
0.4 2.52430.07522.51370.07472.41790.08692.40800.0862
50(40,30)0.21.52.60170.22852.57060.22382.31050.32372.28500.3153
0.4 2.55700.18352.53160.18052.31610.24562.29450.2406
80(60,40)0.2 2.57780.16632.55480.16382.35690.21902.33720.2150
0.4 2.53920.11442.52330.11322.38180.14042.36750.1386
120(90,60)0.2 2.54160.10802.52660.10692.39240.13162.37880.1300
0.4 2.52580.07522.51530.07472.41930.08692.40950.0863
Table A2. Estimates of parameter λ = 2.5 for different loss functions when β = 1 , h = 0.5 , a = 0.6119 , and b = 0.1523 .
n | (r, k) | T | c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
50(40,30)0.20.52.60140.22852.57030.22382.54600.23202.51600.2272
0.4 2.55720.18352.53180.18042.51240.18572.48770.1825
80(60,40)0.2 2.57210.16562.54930.16312.53160.16742.50940.1648
0.4 2.53730.11422.52130.11302.50920.11502.49350.1138
120(90,60)0.2 2.55590.10922.54070.10812.52900.11002.51400.1088
0.4 2.52810.07542.51750.07482.50950.07572.49900.0752
50(40,30)0.212.61680.23122.58540.22652.56070.23482.53040.2299
0.4 2.55950.18372.53410.18062.51460.18592.49000.1827
80(60,40)0.2 2.57470.16592.55190.16342.53420.16772.51190.1651
0.4 2.53610.11412.52010.11292.50790.11502.49230.1137
120(90,60)0.2 2.55270.10892.53760.10782.52590.10972.51100.1086
0.4 2.52160.07502.51110.07452.50300.07542.49260.0748
50(40,30)0.21.52.60690.22972.57570.22492.55120.23322.52110.2283
0.4 2.55430.18312.52900.18012.50960.18532.48500.1821
80(60,40)0.2 2.57650.16612.55360.16362.53590.16792.51360.1653
0.4 2.53090.11382.51490.11262.50280.11462.48720.1134
120(90,60)0.2 2.54590.10842.53080.10732.51920.10912.50440.1080
0.4 2.52830.07542.51770.07492.50960.07572.49910.0752

Appendix C. MCMC Method for Two Unknown Parameters

Algorithm A1 The MCMC algorithm for two unknown parameters.
1:
Set the initial values λ^(0), β^(0) equal to the MLEs λ̂, β̂.
2:
Set v = 1 .
3:
repeat
4:
 Take N(λ̂, Var(λ̂)) and N(β̂, Var(β̂)) as the proposal distributions and generate λ^(∗) and β^(∗) from them at iteration v.
5:
 Generate u from the Uniform(0,1) distribution.
6:
 Compute the acceptance probabilities: p1(λ^(v−1) | λ^(∗)) = min{π(λ^(∗) | x̃) / π(λ^(v−1) | x̃), 1}, p2(β^(v−1) | β^(∗)) = min{π(β^(∗) | x̃) / π(β^(v−1) | x̃), 1}.
7:
if   u < p 1 and u < p 2 then
8:
 λ^(v) = λ^(∗), β^(v) = β^(∗);
9:
else
10:
 λ^(v) = λ^(v−1), β^(v) = β^(v−1);
11:
end if
12:
 Compute the value of R^(v)(t) as R^(v)(t) = exp{λ^(v)(1 − e^{t^{β^(v)}})}.
13:
until   v = N
14:
Set δ = N/10 as the burn-in period.
15:
Based on an SE loss function,
λ̂_BS = (1/(N − δ)) Σ_{v=δ+1}^{N} λ^(v), β̂_BS = (1/(N − δ)) Σ_{v=δ+1}^{N} β^(v), R̂(t)_BS = (1/(N − δ)) Σ_{v=δ+1}^{N} R^(v)(t).
16:
Based on a LINEX loss function,
λ̂_BL = −(1/h) ln[(1/(N − δ)) Σ_{v=δ+1}^{N} e^{−h λ^(v)}], β̂_BL = −(1/h) ln[(1/(N − δ)) Σ_{v=δ+1}^{N} e^{−h β^(v)}], R̂(t)_BL = −(1/h) ln[(1/(N − δ)) Σ_{v=δ+1}^{N} e^{−h R^(v)(t)}].
17:
Obtain a1, a2, b1, and b2 from Beta(u1, v1), Beta(u2, v2), U(u1, v1), and U(u2, v2), respectively. Based on steps 15–16 and (42)–(45), we can obtain the E-Bayesian estimations.
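A runnable sketch of Algorithm A1 for complete data is given below. It is our own simplification: we use a Gamma(a, b) prior on λ, a flat prior on β (an assumption), and, following the algorithm as stated, omit the independence-proposal density correction, so this is illustrative rather than an exact sampler.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_post(lam, beta, x, a, b):
    # Log-posterior sketch for the bathtub-shaped (Chen) density
    # f(x) = lam*beta*x**(beta-1)*exp(x**beta)*exp(lam*(1 - exp(x**beta))),
    # with a Gamma(a, b) prior on lam and a flat prior on beta (assumption).
    if lam <= 0 or beta <= 0:
        return -np.inf
    xb = x ** beta
    loglik = (len(x) * (np.log(lam) + np.log(beta))
              + np.sum((beta - 1.0) * np.log(x) + xb + lam * (1.0 - np.exp(xb))))
    return loglik + (a - 1.0) * np.log(lam) - b * lam

def mh_two_params(x, lam0, beta0, sd_lam, sd_beta, a=1.0, b=1.0, N=5000):
    # Normal proposals centred at the MLEs, as in Algorithm A1.
    lam, beta = lam0, beta0
    draws = np.empty((N, 2))
    for v in range(N):
        lam_s = rng.normal(lam0, sd_lam)
        beta_s = rng.normal(beta0, sd_beta)
        if np.log(rng.uniform()) < log_post(lam_s, beta_s, x, a, b) - log_post(lam, beta, x, a, b):
            lam, beta = lam_s, beta_s
        draws[v] = lam, beta
    return draws[N // 10:]               # drop the burn-in period delta = N/10

# Synthetic complete sample from the distribution with lam = 2, beta = 1,
# generated via the inverse CDF (beta = 1, so no outer root is needed).
u = rng.uniform(size=200)
x = np.log(1.0 - np.log(1.0 - u) / 2.0)
draws = mh_two_params(x, lam0=2.0, beta0=1.0, sd_lam=0.3, sd_beta=0.2)
lam_bs, beta_bs = draws.mean(axis=0)     # SE-loss Bayes estimates (step 15)
print(lam_bs, beta_bs)
```

The LINEX estimates of step 16 follow by replacing the means with −(1/h) ln of the averaged exp(−h · draw) values.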

References

  1. Chen, Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Stat. Probab. Lett. 2000, 49, 155–161.
  2. Hjorth, U. A reliability distribution with increasing, decreasing, constant and bathtub-shaped failure rates. Technometrics 1980, 22, 99–107.
  3. Mudholkar, G.S.; Srivastava, D.K. Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Trans. Reliab. 1993, 42, 299–302.
  4. Sarhan, A.M.; Hamilton, D.C.; Smith, B. Parameter estimation for a two-parameter bathtub-shaped lifetime distribution. Appl. Math. Model. 2012, 36, 5380–5392.
  5. Wu, S. Estimation of the two-parameter bathtub-shaped lifetime distribution with progressive censoring. J. Appl. Stat. 2008, 35, 1139–1150.
  6. Rastogi, M.K.; Tripathi, Y.M. Estimation using hybrid censored data from a two-parameter distribution with bathtub shape. Comput. Stat. Data Anal. 2013, 67, 268–281.
  7. Smith, R.M.; Bain, L.J. An exponential power life-testing distribution. Commun. Stat. Theory Methods 1975, 4, 469–481.
  8. Epstein, B. Truncated life tests in the exponential case. Ann. Math. Stat. 1954, 25, 555–564.
  9. Chandrasekar, B.; Childs, A.; Balakrishnan, N. Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring. Nav. Res. Logist. 2004, 51, 994–1004.
  10. Balakrishnan, N.; Kundu, D. Hybrid censoring: Models, inferential results and applications. Comput. Stat. Data Anal. 2013, 57, 166–209.
  11. Rabie, A.; Li, J. E-Bayesian estimation for Burr-X distribution based on generalized Type-I hybrid censoring scheme. Am. J. Math. Manag. Sci. 2018, 41–55.
  12. Sen, T.; Bhattacharya, R.; Tripathi, Y.M. Generalized hybrid censored reliability acceptance sampling plans for the Weibull distribution. Am. J. Math. Manag. Sci. 2018, 37, 324–343.
  13. Nassar, M.; Dobbah, S.A. Analysis of reliability characteristics of bathtub-shaped distribution under adaptive Type-I progressive hybrid censoring. IEEE Access 2020, 8, 181796–181806.
  14. Kiapour, A. Bayes, E-Bayes and robust Bayes premium estimation and prediction under the squared log error loss function. J. Iran. Stat. Soc. 2018, 17, 33–47.
  15. Han, M. The structure of hierarchical prior distribution and its applications. Chin. Oper. Res. Manag. Sci. 1997, 6, 31–40.
  16. Berger, J.O. Statistical Decision Theory and Bayesian Analysis, 2nd ed.; Springer: New York, NY, USA, 1985.
  17. Dziok, J.; Srivastava, H.M. Classes of analytic functions associated with the generalized hypergeometric function. Appl. Math. Comput. 1999, 103, 1–13.
  18. Lavanya, A.; Alexander, T.L. Estimation of parameters using Lindley's method. Int. J. Adv. Res. 2016, 4, 1767–1778.
  19. Kozumi, H.; Kobayashi, G. Gibbs sampling methods for Bayesian quantile regression. J. Stat. Comput. Simul. 2011, 81, 1565–1578.
  20. Ahmed, E.A. Bayesian estimation based on progressive Type-II censoring from two-parameter bathtub-shaped lifetime model: A Markov chain Monte Carlo approach. J. Appl. Stat. 2014, 41, 752–768.
  21. Han, M. The E-Bayesian estimation and its E-MSE of Pareto distribution parameter under different loss functions. J. Stat. Comput. Simul. 2020, 90, 1834–1848.
  22. Lawless, J.F. Statistical Models and Methods for Lifetime Data, 2nd ed.; Wiley: Hoboken, NJ, USA, 2003.
Figure 1. pdf of CHD when β = 0.5 .
Figure 2. pdf of CHD when β = 2 .
Figure 3. h ( x ) of CHD when λ 1 , β = 2 .
Figure 4. h ( x ) of CHD when β < 1 , λ = 2 .
Figure 5. h ( x ) of CHD when β = 1 .
Figure 6. The relationship between c and λ ^ E B S , λ ^ E B L .
Figure 7. The relationship between c and E M S E ( λ ^ E B S ) , E M S E ( λ ^ E B L ) .
Table 1. Estimations of λ = 4.37 under an SE loss function when β = 1 , h = 1.5 , c = 1 , a = 0.6119 , and b = 0.1523 .
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
λ̂_BS | 4.4335 | 4.4768 | 4.3911 | 4.4303 | 4.3831 | 4.4187
MSE(λ̂_BS) | 0.6119 | 0.5065 | 0.3862 | 0.3293 | 0.2570 | 0.2179
min | 1.7563 | 2.5385 | 2.9202 | 2.9999 | 3.4522 | 3.5749
max | 7.1331 | 6.2912 | 5.9283 | 5.7578 | 5.3641 | 5.2460
length | 5.3767 | 3.7527 | 3.0081 | 2.7579 | 1.9120 | 1.6711
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
λ̂_EBS | 4.3613 | 4.4168 | 4.3447 | 4.3908 | 4.3519 | 4.3923
EMSE(λ̂_EBS) | 0.5942 | 0.4943 | 0.3790 | 0.3240 | 0.2538 | 0.2155
min | 2.1555 | 2.5336 | 2.9889 | 3.1245 | 3.3145 | 3.5509
max | 6.3399 | 5.8610 | 5.5518 | 5.4765 | 5.3042 | 5.0982
length | 4.1844 | 3.3274 | 2.5628 | 2.3520 | 1.9897 | 1.5473
Table 2. Estimations of R ( t ) under an SE loss function when β = 1 , h = 1.5 , c = 1 , a = 0.6119 , b = 0.1523 , and t = 0.07 .
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
R̂(t)_BS | 0.73977 | 0.73287 | 0.74154 | 0.74467 | 0.74416 | 0.74291
MSE(R̂(t)_BS) | 0.00109 | 0.00088 | 0.00057 | 0.00038 | 0.00019 | 0.00012
min | 0.61717 | 0.65334 | 0.66958 | 0.67735 | 0.69564 | 0.70122
max | 0.88796 | 0.84219 | 0.82072 | 0.81631 | 0.7917 | 0.78515
length | 0.27079 | 0.18885 | 0.15113 | 0.13895 | 0.09607 | 0.08394
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
R̂(t)_EBS | 0.74913 | 0.74328 | 0.74888 | 0.74962 | 0.74867 | 0.74657
EMSE(R̂(t)_EBS) | 0.00096 | 0.00077 | 0.00047 | 0.00033 | 0.00017 | 0.00011
min | 0.65119 | 0.67264 | 0.68686 | 0.69037 | 0.69846 | 0.70826
max | 0.864300 | 0.84247 | 0.81691 | 0.80945 | 0.79911 | 0.78643
length | 0.213100 | 0.16983 | 0.13005 | 0.11908 | 0.10065 | 0.07817
Table 3. Estimations of λ = 4.37 under a LINEX loss function when β = 1 , h = 1.5 , c = 1 , a = 0.6119 , and b = 0.1523 .
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
λ̂_BL | 4.0436 | 4.1358 | 4.1247 | 4.2005 | 4.2009 | 4.2630
MSE(λ̂_BL) | 0.7860 | 0.6341 | 0.4601 | 0.3856 | 0.2911 | 0.2432
min | 1.2176 | 1.2062 | 2.2857 | 2.5972 | 2.9827 | 3.2330
max | 7.2472 | 7.137 | 5.8513 | 5.6587 | 5.2302 | 5.0823
length | 6.0296 | 5.9308 | 3.5656 | 3.0615 | 2.2475 | 1.8493
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
λ̂_EBL | 3.9688 | 4.0837 | 4.0831 | 4.1645 | 4.1720 | 4.2382
EMSE(λ̂_EBL) | 0.7590 | 0.6160 | 0.4502 | 0.3785 | 0.2870 | 0.2403
min | 0.8753 | 1.5263 | 2.3519 | 2.5144 | 2.9413 | 3.3233
max | 6.3150 | 5.9493 | 5.5490 | 5.6569 | 5.2711 | 5.0406
length | 5.4397 | 4.4230 | 3.1970 | 3.1425 | 2.3298 | 1.7173
Table 4. Estimations of R ( t ) under a LINEX loss function when β = 1 , h = 1.5 , c = 1 , a = 0.6119 , b = 0.1523 , and t = 0.07 .
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
R̂(t)_BL | 0.76178 | 0.75368 | 0.75179 | 0.75539 | 0.75372 | 0.75066
MSE(R̂(t)_BL) | 0.00173 | 0.00128 | 0.00073 | 0.00055 | 0.00028 | 0.00016
min | 0.61242 | 0.61700 | 0.67308 | 0.68191 | 0.70197 | 0.70903
max | 0.92092 | 0.92163 | 0.85672 | 0.83885 | 0.81725 | 0.80353
length | 0.30850 | 0.30463 | 0.18364 | 0.15694 | 0.11529 | 0.09450
n | 50 | 80 | 120
(r, k) | (30,40) | (60,40) | (90,60)
T | 0.2 | 0.4 | 0.2 | 0.4 | 0.2 | 0.4
R̂(t)_EBL | 0.77050 | 0.76209 | 0.75857 | 0.76036 | 0.75788 | 0.75433
EMSE(R̂(t)_EBL) | 0.00141 | 0.00107 | 0.00064 | 0.00047 | 0.00026 | 0.00015
min | 0.65229 | 0.66863 | 0.68699 | 0.68199 | 0.70003 | 0.71103
max | 0.94250 | 0.90189 | 0.85289 | 0.84356 | 0.81954 | 0.79863
length | 0.29021 | 0.23325 | 0.16590 | 0.16157 | 0.11952 | 0.08761
Table 5. Estimates of parameter λ = 4.37 for different loss functions when β = 2 , h = 1.5 , a = 0.6119 , and b = 0.1523 .
n | (r, k) | T | c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
50(40,30)0.20.54.50250.68474.42260.66274.05750.90783.99100.8723
0.4 4.48630.66704.40810.64624.05090.87463.98560.8418
80(60,40)0.2 4.47710.50604.41720.49374.13650.63304.08440.6150
0.4 4.40970.44564.35640.43604.10600.54204.05900.5286
120(90,60)0.2 4.43690.33044.39720.32504.20640.38704.17030.3799
0.4 4.39570.29764.35980.29334.18660.34274.15360.3371
50(40,30)0.214.50570.68574.42570.66374.06010.90933.99350.8738
0.4 4.48670.66694.40850.64624.05130.87483.98600.8419
80(60,40)0.2 4.46450.50374.40490.49164.12540.63034.07350.6123
0.4 4.40410.44484.35090.43534.10090.54094.05400.5275
120(90,60)0.2 4.43020.32914.39070.32384.20050.38524.16460.3781
0.4 4.38780.29694.35190.29264.17910.34174.14620.3362
50(40,30)0.21.54.47950.67794.40030.65634.03860.89733.97260.8624
0.4 4.48290.66574.40490.64504.04840.87293.98310.8401
80(60,40)0.2 4.46810.50384.40850.49174.12890.62984.07700.6119
0.4 4.41990.44704.36650.43744.11520.54404.06810.5305
120(90,60)0.2 4.43410.32984.39450.32454.20400.38614.16800.3791
0.4 4.40510.29864.36910.29434.19530.34404.16220.3384
Table 6. Estimates of parameter λ = 4.37 for different loss functions when β = 3 , h = 1.5 , a = 0.6119 , and b = 0.1523 .
n | (r, k) | T | c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
50(40,30)0.20.54.50970.68694.42950.66484.06330.91143.99660.8757
0.4 4.49960.68374.41980.66184.05520.90633.98870.8709
80(60,40)0.2 4.45230.50044.39300.48844.11520.62494.06370.6071
0.4 4.46920.50414.40950.49204.12980.63034.07790.6123
120(90,60)0.2 4.43060.32924.39110.32394.20090.38544.16490.3783
0.4 4.42200.32804.38260.32274.19310.38374.15730.3767
50(40,30)0.214.51680.68954.43630.66734.06880.91614.00200.8802
0.4 4.49530.68264.41560.66074.05150.90443.98520.8691
80(60,40)0.2 4.46420.50324.40460.49104.12540.62904.07360.6111
0.4 4.46630.50384.40660.49164.12710.63014.07520.6121
120(90,60)0.2 4.42280.32824.38340.32294.19380.38404.15800.3770
0.4 4.43830.33064.39860.32524.20770.38734.17160.3802
50(40,30)0.21.54.50300.68474.42310.66284.05790.90783.99140.8723
0.4 4.49500.68214.41540.66034.05150.90333.98520.8681
80(60,40)0.2 4.47380.50534.41400.49314.13360.63214.08160.6140
0.4 4.46210.50294.40250.49084.12340.62894.07170.6110
120(90,60)0.2 4.43040.32924.39080.32394.20070.38534.16470.3782
0.4 4.44100.33104.40130.32574.21010.38794.17400.3808
Table 7. Estimates of parameter λ = 2.5 for different loss functions when β = 1 , h = 1.5 , a = 0.6119 , and b = 0.1523 .
n | (r, k) | T | c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
50(40,30)0.20.52.60020.22842.56910.22372.4430.25642.4150.2505
0.4 2.56230.18432.53680.18122.43340.20212.410.1984
80(60,40)0.2 2.57580.1662.55290.16352.4590.18082.43780.1778
0.4 2.5380.11442.5220.11312.45590.12142.44080.1201
120(90,60)0.2 2.5490.10872.53390.10762.47090.11512.45650.1139
0.4 2.52780.07532.51720.07482.4730.07842.46270.0779
50(40,30)0.212.59730.22792.56630.22322.44050.25582.41250.25
0.4 2.55450.18312.52910.18012.42630.20072.40310.1971
80(60,40)0.2 2.57080.16542.54790.16292.45430.18012.43320.1772
0.4 2.53380.1142.51780.11282.4520.1212.43690.1196
120(90,60)0.2 2.54940.10872.53420.10762.47120.11512.45690.1139
0.4 2.5290.07542.51840.07492.47410.07852.46390.0779
50(40,30)0.21.52.60510.22912.57390.22442.44740.25732.41940.2514
0.4 2.54850.18262.52320.17962.42070.20012.39750.1965
80(60,40)0.2 2.57430.16582.55140.16332.45750.18062.43640.1777
0.4 2.54020.11442.52430.11322.45810.12152.4430.1202
120(90,60)0.2 2.55160.10882.53640.10772.47330.11532.45890.1141
0.4 2.52170.0752.51110.07452.4670.07812.45690.0776
Table 8. Estimates of parameter λ = 2.5 for different loss functions when β = 2 , h = 1.5 , a = 0.6119 , and b = 0.1523 .
n | (r, k) | T | c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
50(40,30)0.20.52.60320.22882.57200.22402.44570.25682.41770.2510
0.4 2.60710.22982.57580.22502.44900.25832.42090.2524
80(60,40)0.2 2.58130.16832.55810.16572.46290.18382.44150.1807
0.4 2.58170.16822.55860.16562.46340.18352.44200.1805
120(90,60)0.2 2.54660.10882.53150.10772.46840.11542.45400.1141
0.4 2.55560.10962.54030.10852.47680.11632.46230.1150
50(40,30)0.212.59970.22832.56860.22362.44260.25622.41460.2504
0.4 2.60850.22992.57730.22522.45030.25832.42220.2524
80(60,40)0.2 2.57140.16702.54840.16452.45390.18232.43260.1792
0.4 2.57180.16702.54870.16442.45430.18212.43300.1791
120(90,60)0.2 2.55440.10952.53910.10842.47560.11622.46120.1149
0.4 2.55410.10942.53890.10832.47540.11602.46100.1148
50(40,30)0.21.52.59660.22792.56550.22322.43970.25602.41180.2501
0.4 2.60420.22942.57300.22462.44640.25782.41830.2519
80(60,40)0.2 2.57660.16782.55350.16522.45860.18312.43730.1801
0.4 2.58420.16872.56100.16612.46550.18432.44410.1812
120(90,60)0.2 2.54790.10892.53270.10782.46960.11552.45520.1142
0.4 2.54900.10902.53390.10792.47070.11562.45630.1143
Table 9. Estimates of parameter λ = 2.5 for different loss functions when β = 3 , h = 1.5 , a = 0.6119 , and b = 0.1523 .
n | (r, k) | T | c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
50(40,30)0.20.52.60630.22952.57500.22472.44840.25782.42030.2519
0.4 2.60060.22862.56950.22392.44330.25682.41530.2509
80(60,40)0.2 2.57500.16752.55190.16492.45720.18272.43590.1797
0.4 2.57930.16812.55620.16552.46110.18352.43980.1805
120(90,60)0.2 2.56100.11012.54560.10902.48180.11682.46730.1155
0.4 2.55650.10972.54120.10862.47760.11632.46320.1151
50(40,30)0.212.60650.22962.57530.22492.44850.25812.42040.2522
0.4 2.61470.23112.58330.22632.45580.25982.42750.2538
80(60,40)0.2 2.57580.16752.55270.16502.45790.18282.43660.1798
0.4 2.58100.16822.55790.16562.46270.18362.44130.1806
120(90,60)0.2 2.55320.10942.53790.10832.47460.11602.46010.1147
0.4 2.55050.10912.53530.10802.47210.11572.45770.1145
50(40,30)0.21.52.60660.22962.57530.22492.44850.25812.42040.2522
0.4 2.61050.23062.57920.22582.45190.25932.42370.2534
80(60,40)0.2 2.57520.16732.55220.16482.45750.18262.43620.1795
0.4 2.57880.16802.55570.16542.46070.18332.43930.1803
120(90,60)0.2 2.54430.10862.52910.10752.46620.11512.45180.1139
0.4 2.55310.10942.53780.10832.47450.11602.46000.1147
Table 10. Real data set of 60 observations of electrical appliances.
0.014 0.034 0.059 0.061 0.069 0.080 0.123 0.142 0.165 0.210
0.381 0.464 0.479 0.556 0.574 0.839 0.917 0.969 0.991 1.064
1.088 1.091 1.174 1.270 1.275 1.355 1.397 1.477 1.578 1.649
1.702 1.893 1.932 2.001 2.161 2.292 2.326 2.337 2.628 2.785
2.811 2.886 2.993 3.122 3.248 3.715 3.790 3.857 3.912 4.100
4.106 4.116 4.315 4.510 4.580 5.267 5.299 5.583 6.065 9.701
Table 11. The results of estimates of λ for the real data set.
c | λ̂_BS | MSE(λ̂_BS) | λ̂_EBS | EMSE(λ̂_EBS) | λ̂_BL | MSE(λ̂_BL) | λ̂_EBL | EMSE(λ̂_EBL)
0.25 | 0.3931 | 0.003801 | 0.3921 | 0.003796 | 0.3903 | 0.003812 | 0.3893 | 0.003804
0.50 | 0.3931 | 0.003801 | 0.3916 | 0.003787 | 0.3903 | 0.003812 | 0.3888 | 0.003795
1.00 | 0.3931 | 0.003801 | 0.3912 | 0.003778 | 0.3903 | 0.003812 | 0.3884 | 0.003785
1.25 | 0.3931 | 0.003801 | 0.3907 | 0.003769 | 0.3903 | 0.003812 | 0.3879 | 0.003776
1.50 | 0.3931 | 0.003801 | 0.3902 | 0.003759 | 0.3903 | 0.003812 | 0.3874 | 0.003771
Table 12. The results of estimates of R ( t ) for the real data set.
c | R(t)_BS | R(t)_BL | R(t)_EBS | R(t)_EBL
0.25 | 0.730308 | 0.731945 | 0.730892 | 0.732531
0.50 | 0.730308 | 0.731945 | 0.731185 | 0.732823
1.00 | 0.730308 | 0.731945 | 0.731419 | 0.733058
1.25 | 0.730308 | 0.731945 | 0.731711 | 0.733351
1.50 | 0.730308 | 0.731945 | 0.732004 | 0.733644
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
