Estimation and Prediction for Gompertz Distribution under General Progressive Censoring

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(5), 858; https://doi.org/10.3390/sym13050858
Submission received: 24 April 2021 / Revised: 3 May 2021 / Accepted: 7 May 2021 / Published: 12 May 2021
(This article belongs to the Special Issue Probability, Statistics and Applied Mathematics)

Abstract

In this article, we discuss the estimation of the parameters of the Gompertz distribution, and the corresponding prediction problems, under general progressive Type-II censoring. Based on the Expectation–Maximization algorithm, we calculate the maximum likelihood estimates. Bayesian estimates are considered under symmetric, asymmetric and balanced loss functions. An approximate method due to Tierney and Kadane is used to derive these estimates, and the Metropolis–Hastings (MH) algorithm is applied to obtain the Bayesian estimates as well. From the Fisher information matrix, we acquire asymptotic confidence intervals, and bootstrap intervals are also established. Furthermore, we build the highest posterior density intervals through the sample generated by the MH algorithm. Then, Bayesian predictive intervals and estimates for future samples are provided. Finally, to evaluate the quality of the approaches, a numerical simulation study is implemented. In addition, we analyze two real datasets.

1. Introduction

The Gompertz distribution has wide applications in describing human mortality, establishing actuarial tables, and other fields. It was originally introduced by Gompertz (see Reference [1]). The probability density function (PDF) and cumulative distribution function (CDF) of the Gompertz distribution are defined as
\[ f(x\mid\alpha,\beta)=\alpha\beta e^{\beta x}e^{-\alpha(e^{\beta x}-1)},\qquad 0<x<+\infty, \tag{1} \]
and
\[ F(x\mid\alpha,\beta)=1-e^{-\alpha(e^{\beta x}-1)},\qquad 0<x<+\infty, \tag{2} \]
where the unknown parameters α and β are positive.
The Gompertz distribution possesses a unimodal PDF and an increasing hazard function. Many researchers have contributed to the properties of the Gompertz distribution. In recent years, Reference [2] studied the relations between the Gompertz distribution and other distributions, for instance, the Type I extreme value and Weibull distributions. Reference [3] obtained weighted and unweighted least squares estimates under censored and complete samples. Reference [4] calculated the maximum likelihood estimates (MLEs) and established the exact confidence interval and joint confidence region based on progressive Type-II censoring. Reference [5] studied statistical inference for the Gompertz distribution under generalized progressively hybrid censoring; they derived the MLEs by Newton's iteration method, used the Markov chain Monte Carlo method to obtain Bayes estimates under the generalized entropy and other loss functions, and provided Bayesian predictions via one- and two-sample predictive approaches, comparing the proposed methods by simulation. Reference [6] obtained the MLEs and Bayesian estimates of the parameters under progressive first-failure censoring, as well as estimates of the hazard rate and reliability functions of the Gompertz distribution; approximate and exact confidence intervals were constructed, conjugate and discrete prior distributions for the parameters were proposed, and a numerical example was reported. One may also refer to References [7,8] for extensions of the Gompertz distribution.
In life tests and reliability analyses, censoring has attracted more and more attention due to time and cost savings. Several censoring schemes have been proposed in the literature, among which Type-I and Type-II censoring are the most widely used. The former allows an experiment to be stopped at a fixed time point, with the number of observed failed units random, while the latter stops the life test when a prespecified number of units fail, with the duration of the experiment random. Both the traditional Type-I and Type-II censoring methods have the limitation that surviving experimental units may only be withdrawn at the terminal point. In this regard, progressive Type-II censoring has better practicability and flexibility because it allows removal of surviving units after any failure occurs. However, other situations also arise in testing, such as unobserved failures at the starting point of the test, which leads to a more general form of censoring. In this article, following Reference [9], we concentrate on general progressive Type-II censoring. Assume that a life test contains n experimental units. The first r failures occur at the time points X_1, …, X_r, respectively, and are unobserved. When the (r+1)-th failure is observed at the time point X_{r+1}, R_{r+1} of the surviving experimental units are withdrawn, and so forth. When the (r+i)-th failure takes place at the time point X_{r+i} (i = 1, …, m−r), R_{r+i} surviving experimental units are removed at random. Eventually, when the m-th failure is observed at the time point X_m, the remaining R_m = n − R_{r+1} − ⋯ − R_{m−1} − m units are removed. Here R = (R_{r+1}, …, R_m) is prefixed and is referred to as the censoring scheme; moreover, X = (X_{r+1}, …, X_m) is known as the general progressive censored sample, which denotes the observed failure times of size m − r.
Several scholars have discussed various lifetime distributions using general progressive censored data. Among others, Reference [10] derived both classical and Bayesian estimates using general progressive censored data drawn from the exponential distribution. Reference [11] used general progressive censored samples to discuss Bayesian estimation of the two parameters of the inverse Weibull distribution and the associated prediction problems, with a gamma prior on the scale parameter and a log-concave prior density on the shape parameter. Reference [12] obtained Bayesian prediction estimates for future samples from the Weibull distribution using asymmetric and symmetric loss functions under general progressive censoring. Other studies can be found in References [13,14,15], and so forth.
In this article, using general progressive censoring, we discuss the estimation and prediction problems on Gompertz distribution.
This paper proceeds as follows: In Section 2, we calculate the MLEs by the Expectation–Maximization (EM) method and acquire the Fisher information matrix; in the same section, we also derive the bootstrap intervals. In Section 3, we discuss the Bayesian estimates under different loss functions. An approximate method, Tierney and Kadane (TK), is proposed to calculate these estimates. Furthermore, we apply the MH algorithm to obtain Bayesian estimates and to establish highest posterior density (HPD) intervals from the sample the algorithm generates. In Section 4, Bayesian point and interval prediction estimates for future samples are provided. A numerical simulation is executed in Section 5 to evaluate the quality of these approaches, and two real datasets are analyzed there as well. Finally, Section 6 concludes the paper.

2. Maximum Likelihood Estimation

Let R = ( R r + 1 , , R m ) be the censoring scheme, under which X = ( X r + 1 , X r + 2 , , X m ) denotes the corresponding general progressive censored sample drawn from Gompertz distribution. Then, the likelihood function is derived as the following expression:
\[ l(\alpha,\beta\mid\tilde x)=Q\,[F(x_{r+1}\mid\alpha,\beta)]^{\,r}\prod_{i=r+1}^{m}[1-F(x_i\mid\alpha,\beta)]^{R_i}f(x_i\mid\alpha,\beta), \tag{3} \]
where Q = \binom{n}{r}(n-r)\prod_{j=r+2}^{m}\bigl(n-\sum_{i=r+1}^{j-1}R_i+1-j\bigr), and x̃ = (x_{r+1}, …, x_m) denotes an observed value of X = (X_{r+1}, …, X_m). In addition, \binom{n}{r} is the binomial coefficient, that is, n!/[(n−r)!\,r!].
Substituting (1) and (2) into (3), the likelihood can be written in the following form:
\[ l(\alpha,\beta\mid\tilde x)=Q\,\alpha^{m-r}\beta^{m-r}\bigl[1-e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigr]^{r}\prod_{i=r+1}^{m}e^{\beta x_i}\,e^{-\alpha(R_i+1)(e^{\beta x_i}-1)}. \tag{4} \]

2.1. Point Estimation with EM Algorithm

A classical method for obtaining the MLE is the Newton–Raphson method, which requires the second-order partial derivatives of the log-likelihood function; under censoring, these derivatives are usually complicated. Therefore, it is worth seeking other methods. Following Reference [16], we use the EM algorithm to derive the MLEs. This algorithm is powerful for handling incomplete-data problems because only the pseudo-log-likelihood function of the complete data needs to be maximized. It is an iterative method: the E-step uses the current parameter estimates to compute the expected log-likelihood, with the censored data replaced by their conditional expectations, and the M-step maximizes this expectation to obtain the next estimates.
We use Z = (Z_r, Z_{r+1}, …, Z_m) to represent the censored sample, where Z_r is a 1 × r vector Z_r = (Z_{r1}, …, Z_{rr}) of the first r unobserved failures, and Z_i, i = r+1, …, m, denotes a 1 × R_i vector Z_i = (Z_{i1}, …, Z_{iR_i}) of the data censored after X_i failed. The observed and complete samples are denoted by X = (X_{r+1}, …, X_m) and K, respectively, so that K = (X, Z). Let z̃_r = (z_{r1}, …, z_{rr}), z̃_i = (z_{i1}, …, z_{iR_i}) (i = r+1, …, m) and x̃ = (x_{r+1}, …, x_m) represent the corresponding observations. For the complete data, we can express the log-likelihood function by
\[
\begin{aligned}
L_c(\alpha,\beta,K)={}&n\ln(\alpha\beta)+\beta\sum_{i=r+1}^{m}x_i-\alpha\sum_{i=r+1}^{m}(e^{\beta x_i}-1)+\beta\sum_{i=r+1}^{m}\sum_{k=1}^{R_i}z_{ik}\\
&-\alpha\sum_{i=r+1}^{m}\sum_{k=1}^{R_i}(e^{\beta z_{ik}}-1)-\alpha\sum_{k=1}^{r}(e^{\beta z_{rk}}-1)+\beta\sum_{k=1}^{r}z_{rk}.
\end{aligned}\tag{5}
\]
  • E-step
To conduct the E-step, we first compute the expectation of (5); the pseudo-log-likelihood function is then expressed as
\[
\begin{aligned}
L_E(\alpha,\beta;\tilde x)={}&n\ln(\alpha\beta)+\beta\sum_{i=r+1}^{m}x_i-\alpha\sum_{i=r+1}^{m}(e^{\beta x_i}-1)+\beta\sum_{k=1}^{r}E(z_{rk}\mid z_{rk}<x_{r+1})\\
&+\beta\sum_{i=r+1}^{m}\sum_{k=1}^{R_i}E(z_{ik}\mid z_{ik}>x_i)-\alpha\sum_{k=1}^{r}E\bigl((e^{\beta z_{rk}}-1)\mid z_{rk}<x_{r+1}\bigr)\\
&-\alpha\sum_{i=r+1}^{m}\sum_{k=1}^{R_i}E\bigl((e^{\beta z_{ik}}-1)\mid z_{ik}>x_i\bigr),
\end{aligned}\tag{6}
\]
where
\[ E(z_{ik}\mid z_{ik}>x_i)=\frac{1}{1-F(x_i\mid\alpha,\beta)}\int_{x_i}^{+\infty}\alpha\beta e^{\beta t}e^{-\alpha(e^{\beta t}-1)}\,t\,dt=\frac{\alpha}{\beta}\,e^{\alpha(e^{\beta x_i}-1)}\int_{e^{\beta x_i}-1}^{+\infty}\ln(t+1)\,e^{-\alpha t}\,dt=E_1(x_i,\alpha,\beta), \tag{7} \]
\[ E\bigl((e^{\beta z_{ik}}-1)\mid z_{ik}>x_i\bigr)=\frac{1}{1-F(x_i\mid\alpha,\beta)}\int_{x_i}^{+\infty}\alpha\beta e^{\beta t}e^{-\alpha(e^{\beta t}-1)}(e^{\beta t}-1)\,dt=e^{\beta x_i}-1+\frac{1}{\alpha}=E_2(x_i,\alpha,\beta), \tag{8} \]
\[ E(z_{rk}\mid z_{rk}<x_{r+1})=\frac{1}{F(x_{r+1}\mid\alpha,\beta)}\int_{0}^{x_{r+1}}\alpha\beta e^{\beta t}e^{-\alpha(e^{\beta t}-1)}\,t\,dt=\frac{\alpha}{\beta F(x_{r+1}\mid\alpha,\beta)}\int_{0}^{e^{\beta x_{r+1}}-1}e^{-\alpha t}\ln(t+1)\,dt=E_3(x_{r+1},\alpha,\beta), \tag{9} \]
and
\[ E\bigl((e^{\beta z_{rk}}-1)\mid z_{rk}<x_{r+1}\bigr)=\frac{1}{F(x_{r+1}\mid\alpha,\beta)}\int_{0}^{x_{r+1}}\alpha\beta e^{\beta t}e^{-\alpha(e^{\beta t}-1)}(e^{\beta t}-1)\,dt=\frac{1-e^{\beta x_{r+1}}e^{-\alpha(e^{\beta x_{r+1}}-1)}}{F(x_{r+1}\mid\alpha,\beta)}-1+\frac{1}{\alpha}=E_4(x_{r+1},\alpha,\beta). \tag{10} \]
  • M-step
Suppose that the s-th estimate of (α, β) is denoted by (α_{(s)}^*, β_{(s)}^*). The M-step then maximizes (6) with α_{(s)}^* and β_{(s)}^* substituted into E_1, E_2, E_3 and E_4, yielding the (s+1)-th estimate. Therefore, the next task is to maximize the function
\[ L_M(\alpha,\beta;\tilde x)=n\ln(\alpha\beta)+\beta\sum_{i=r+1}^{m}x_i-\alpha\sum_{i=r+1}^{m}(e^{\beta x_i}-1)+\beta\sum_{i=r+1}^{m}R_iE_1^{*}-\alpha\sum_{i=r+1}^{m}R_iE_2^{*}+r\beta E_3^{*}-r\alpha E_4^{*}, \tag{11} \]
where E_1^{*}, E_2^{*}, E_3^{*}, E_4^{*} denote E_1(x_i, α_{(s)}^*, β_{(s)}^*), E_2(x_i, α_{(s)}^*, β_{(s)}^*), E_3(x_{r+1}, α_{(s)}^*, β_{(s)}^*), E_4(x_{r+1}, α_{(s)}^*, β_{(s)}^*), respectively. The corresponding likelihood equations are
\[ \frac{\partial L_M}{\partial\alpha}=\frac{n}{\alpha}-\sum_{i=r+1}^{m}\bigl(e^{\beta x_i}-1+R_iE_2^{*}\bigr)-rE_4^{*}=0, \tag{12} \]
and
\[ \frac{\partial L_M}{\partial\beta}=\frac{n}{\beta}+\sum_{i=r+1}^{m}\bigl(x_i-\alpha x_ie^{\beta x_i}+R_iE_1^{*}\bigr)+rE_3^{*}=0. \tag{13} \]
Since (12) and (13) cannot be solved analytically, we use a numerical technique to obtain α_{(s+1)}^* and β_{(s+1)}^*. From (12), the estimate of α can be described as the following function of β:
\[ \hat\alpha=\frac{n}{\sum_{i=r+1}^{m}\bigl(e^{\beta x_i}-1+R_iE_2^{*}\bigr)+rE_4^{*}}. \tag{14} \]
By replacing α with α ^ , Equation (13) can be transformed into the equivalent form β = F F ( β ) , where
\[ FF(\beta)=\frac{n}{\sum_{i=r+1}^{m}\bigl(\hat\alpha x_ie^{\beta x_i}-x_i-R_iE_1^{*}\bigr)-rE_3^{*}}. \tag{15} \]
Then, β ( s + 1 ) * can be acquired using the fixed-point iterative procedure:
\[ \beta_{j+1}=FF(\beta_j). \tag{16} \]
When |β_{j+1} − β_j| is smaller than a given tolerance, the iteration stops. Once β_{(s+1)}^* is obtained, α_{(s+1)}^* can be computed directly from (14) as α_{(s+1)}^* = α̂(β_{(s+1)}^*). Repeat the E-step and M-step until the procedure converges; the resulting limits are the MLEs of α and β.
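As an illustration of the E-step quantities and the α-update above, the following Python sketch (ours, not the authors' code; the helper names E1, E2 and alpha_update are assumptions) evaluates E_1 by one-dimensional quadrature, uses the closed form for E_2, and applies the α-update in the special case r = 0, where the E_3 and E_4 terms vanish.

```python
# Sketch of the E-step expectations and the alpha-update for r = 0.
# Illustrative only; the helper names are ours, not the paper's.
import numpy as np
from scipy import integrate

def E1(x, a, b):
    """E[z | z > x] for Gompertz(a, b), computed by quadrature."""
    lo = np.exp(b * x) - 1.0
    # e^{a*lo} * int_lo^inf ln(t+1) e^{-a t} dt, folded together for stability
    val, _ = integrate.quad(lambda t: np.log(t + 1.0) * np.exp(-a * (t - lo)),
                            lo, np.inf)
    return (a / b) * val

def E2(x, a, b):
    """E[e^{b z} - 1 | z > x]; this conditional expectation has a closed form."""
    return np.exp(b * x) - 1.0 + 1.0 / a

def alpha_update(beta, x, R, n, a_s, b_s):
    """alpha-hat as a function of beta for r = 0; (a_s, b_s) are the
    current EM iterates plugged into E2*."""
    x = np.asarray(x, dtype=float)
    R = np.asarray(R, dtype=float)
    return n / np.sum(np.exp(beta * x) - 1.0 + R * E2(x, a_s, b_s))
```

The fixed-point loop then alternates β ← FF(β) with E_1*, E_2* refreshed from the current iterates until |β_{j+1} − β_j| falls below the tolerance.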

2.2. Asymptotic Confidence Interval

Now, we acquire the Fisher information matrix and establish 100(1−γ)% asymptotic confidence intervals (ACIs). When the EM method is used to derive MLEs in incomplete-sample problems, the observed information can be extracted by means of a procedure proposed by Reference [17]. Let θ denote the unknown parameter vector (α, β), and let I_K(θ), I_X(θ) and I_{K|X}(θ) denote the complete, observed and missing information, respectively. The main idea of this procedure is the missing information principle
\[ I_X(\theta)=I_K(\theta)-I_{K|X}(\theta), \tag{17} \]
where I K ( θ ) can be derived by
\[ I_K(\theta)=E\left[-\frac{\partial^2L_c(\theta,K)}{\partial\theta^2}\right]=\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}. \tag{18} \]
I_{K|X}(θ) is the expected information of the distribution of Z = (Z_r, Z_{r+1}, …, Z_m) given X = (X_{r+1}, …, X_m). According to Reference [18], given the observed general progressive Type-II censored sample, the distribution of Z is
\[ f(Z\mid X,\theta)=\prod_{k=1}^{r}\frac{f(z_{rk}\mid\theta)}{F(x_{r+1}\mid\theta)}\prod_{i=r+1}^{m}\prod_{k=1}^{R_i}\frac{f(z_{ik}\mid\theta)}{1-F(x_i\mid\theta)}, \tag{19} \]
and then I_{K|X}(θ) is
\[ I_{K|X}(\theta)=E\left[-\frac{\partial^2\ln f(Z\mid X,\theta)}{\partial\theta^2}\right]. \tag{20} \]
Let f_{Z_0}(z | x_{r+1}, θ) = f(z | θ)/F(x_{r+1} | θ) and f_{Z_i}(z | x_i, θ) = f(z | θ)/[1 − F(x_i | θ)]. Following Reference [18], we know that, given the observed sample (X_{r+1}, …, X_m) = (x_{r+1}, …, x_m), the components of Z_r are independent of each other with PDF f_{Z_0}(z_{rk} | x_{r+1}, θ), k = 1, …, r. Similarly, the components of Z_i, i = r+1, …, m, are independent of each other with PDF f_{Z_i}(z_{ik} | x_i, θ), k = 1, …, R_i. Therefore, I_{K|X}(θ) can be restated as
\[ I_{K|X}(\theta)=r\,I^{*}_{K|X}(\theta)+\sum_{i=r+1}^{m}R_i\,I^{(i)}_{K|X}(\theta), \tag{21} \]
where
\[ I^{*}_{K|X}(\theta)=E\left[-\frac{\partial^2\ln f_{Z_0}(z\mid x_{r+1},\theta)}{\partial\theta^2}\right]=\begin{pmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{pmatrix}, \tag{22} \]
and
\[ I^{(i)}_{K|X}(\theta)=E\left[-\frac{\partial^2\ln f_{Z_i}(z\mid x_i,\theta)}{\partial\theta^2}\right]=\begin{pmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\end{pmatrix}. \tag{23} \]
The elements of the above matrices can be worked out as follows:
\[ A_{11}=\frac{n}{\alpha^2},\qquad A_{21}=A_{12}=\frac{n\alpha}{\beta}\int_0^{+\infty}e^{-\alpha t}(t+1)\ln(t+1)\,dt, \]
\[ A_{22}=\frac{n}{\beta^2}+\frac{n\alpha^2}{\beta^2}\int_0^{+\infty}e^{-\alpha t}(t+1)[\ln(t+1)]^2\,dt, \]
\[ B_{11}=\frac{1}{\alpha^2}-h_1,\qquad B_{12}=B_{21}=\frac{\alpha}{\beta F(x_{r+1}\mid\theta)}\int_0^{e^{\beta x_{r+1}}-1}e^{-\alpha t}(t+1)\ln(t+1)\,dt+h_2, \]
\[ B_{22}=\frac{1}{\beta^2}+\frac{\alpha^2}{\beta^2F(x_{r+1}\mid\theta)}\int_0^{e^{\beta x_{r+1}}-1}e^{-\alpha t}(t+1)[\ln(t+1)]^2\,dt+h_3, \]
\[ C_{11}=\frac{1}{\alpha^2},\qquad C_{12}=C_{21}=\frac{\alpha}{\beta\,[1-F(x_i\mid\theta)]}\int_{e^{\beta x_i}-1}^{+\infty}e^{-\alpha t}(t+1)\ln(t+1)\,dt-x_ie^{\beta x_i}, \]
\[ C_{22}=\frac{\alpha^2}{\beta^2[1-F(x_i\mid\theta)]}\int_{e^{\beta x_i}-1}^{+\infty}e^{-\alpha t}(t+1)[\ln(t+1)]^2\,dt+\frac{1}{\beta^2}-\alpha x_i^2e^{\beta x_i}, \]
where
\[ h_1=\frac{(e^{\beta x_{r+1}}-1)^2\,e^{-\alpha(e^{\beta x_{r+1}}-1)}}{[F(x_{r+1}\mid\theta)]^2}, \]
\[ h_2=\frac{x_{r+1}e^{\beta x_{r+1}}e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigl[1-\alpha(e^{\beta x_{r+1}}-1)-e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigr]}{[F(x_{r+1}\mid\theta)]^2}, \]
and
\[ h_3=\frac{\alpha x_{r+1}^2e^{\beta x_{r+1}}e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigl[1-\alpha e^{\beta x_{r+1}}-e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigr]}{[F(x_{r+1}\mid\theta)]^2}. \]
Further, using the asymptotic normality of the MLE θ̂ = (α̂, β̂), namely θ̂ ≈ N(θ, I⁻¹(θ̂)), the 100(1−γ)% ACIs for the two unknown parameters are obtained as
\[ \left(\hat\alpha-\eta_{\gamma/2}\sqrt{\mathrm{Var}(\hat\alpha)},\ \hat\alpha+\eta_{\gamma/2}\sqrt{\mathrm{Var}(\hat\alpha)}\right)\quad\text{and}\quad\left(\hat\beta-\eta_{\gamma/2}\sqrt{\mathrm{Var}(\hat\beta)},\ \hat\beta+\eta_{\gamma/2}\sqrt{\mathrm{Var}(\hat\beta)}\right), \]
where η_{γ/2} is the upper γ/2 quantile of the standard normal distribution, and Var(α̂) and Var(β̂) denote the principal diagonal elements of I⁻¹(θ̂), respectively.

2.3. Bootstrap Confidence Interval

As is widely known, the asymptotic confidence interval based on the MLE requires a large sample to guarantee its accuracy but, in many practical cases, the sample size is not large enough. Reference [19] proposed the bootstrap method for constructing confidence intervals (CIs), which is more suitable for small samples. In this part, the parametric bootstrap method is employed to establish percentile bootstrap (bootstrap-p) and bootstrap-t CIs for a parameter λ (here λ is α or β). Interested readers may refer to References [20,21] for more information about the bootstrap, and to Reference [22] for the algorithm generating general progressive Type-II censored samples.
Parametric bootstrap-p
(1)
Calculate the MLEs α ^ 0 and β ^ 0 based on the existing general progressive censored data and censoring scheme R = ( R r + 1 , , R m ) .
(2)
Generate B_m from Beta(n − r, r + 1).
(3)
Generate independent U r + k from Uniform ( 0 , 1 ) , k = 1 , 2 , , m r 1 .
(4)
Set B_{r+k} = U_{r+k}^{1/\xi_{r+k}}, where ξ_{r+k} = k + \sum_{i=m-k+1}^{m}R_i, k = 1, 2, …, m − r − 1.
(5)
Set Z_{r+k} = 1 − B_{m-k+1}B_{m-k+2}\cdots B_m, k = 1, 2, …, m − r.
(6)
Set X_k = F^{-1}(Z_k), k = r+1, …, m, where F(·) is the CDF of the Gompertz distribution with parameters α̂_0 and β̂_0. Then X_k, k = r+1, …, m, form the general progressive censored sample (the bootstrap sample).
(7)
Compute the MLEs α ^ * and β ^ * using the updated bootstrap sample.
(8)
Repeat steps (2)–(7) D times. Acquire the estimates: ( α ^ 1 * , α ^ 2 * , , α ^ D * ), ( β ^ 1 * , β ^ 2 * , , β ^ D * ).
(9)
Set F̂_λ(x) = P(λ̂* ≤ x) as the CDF of λ̂*. For a given value of x, define λ̂_p(x) = F̂_λ^{-1}(x). The 100(1−γ)% bootstrap-p CI of the parameter λ is obtained as
\[ \left(\hat\lambda_p\bigl(\tfrac{\gamma}{2}\bigr),\ \hat\lambda_p\bigl(1-\tfrac{\gamma}{2}\bigr)\right). \]
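Steps (2)–(6) and (9) can be sketched directly in code. The snippet below is our illustration (function names are assumptions, not the paper's); the inverse CDF follows from inverting (2), and the percentile step simply takes empirical quantiles of the D bootstrap estimates.

```python
# Sketch of bootstrap-p steps (2)-(6) and (9); helper names are ours.
import numpy as np

def gompertz_ppf(z, alpha, beta):
    # Inverse of the Gompertz CDF: x = ln(1 - ln(1 - z)/alpha) / beta
    return np.log(1.0 - np.log(1.0 - z) / alpha) / beta

def gen_sample(n, r, R, alpha, beta, rng):
    """One general progressive Type-II censored sample X_{r+1},...,X_m,
    where R = (R_{r+1},...,R_m) is the censoring scheme."""
    m = r + len(R)
    B = np.empty(m - r)
    B[-1] = rng.beta(n - r, r + 1)                # step (2): B_m
    for k in range(1, m - r):                     # steps (3)-(4)
        xi = k + sum(R[len(R) - k:])              # xi_{r+k} = k + sum of last k R_i
        B[k - 1] = rng.uniform() ** (1.0 / xi)    # B_{r+k} = U^{1/xi}
    # step (5): Z_{r+k} = 1 - B_{m-k+1} * ... * B_m
    Z = np.array([1.0 - np.prod(B[m - r - k:]) for k in range(1, m - r + 1)])
    return gompertz_ppf(Z, alpha, beta)           # step (6)

# Step (9): given D bootstrap MLEs collected in an array est, the
# bootstrap-p CI is, for instance,
#   (np.quantile(est, g / 2), np.quantile(est, 1 - g / 2)).
```

Step (7), computing the MLEs from each bootstrap sample, reuses the EM routine of Section 2.1.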
Parametric bootstrap-t
(1)–(7)
The same as the bootstrap-p above.
(8)
Obtain the statistics T λ * that
\[ T_\lambda^{*}=\frac{\hat\lambda^{*}-\hat\lambda}{\sqrt{\widehat{\mathrm{Var}}(\hat\lambda^{*})}}. \]
(9)
Repeat steps (2)–(8) D times.
(10)
Set F̂_{T_λ}(x) = P(T_λ^{*} ≤ x) as the CDF of T_λ^{*}. For a given value of x, define λ̂_t(x) = λ̂ + \sqrt{\widehat{\mathrm{Var}}(\hat\lambda^{*})}\,F̂_{T_λ}^{-1}(x). The 100(1−γ)% bootstrap-t CI for the parameter λ is given by
\[ \left(\hat\lambda_t\bigl(\tfrac{\gamma}{2}\bigr),\ \hat\lambda_t\bigl(1-\tfrac{\gamma}{2}\bigr)\right). \]

3. Bayesian Estimation

Bayesian statistics differs from traditional statistics in that it allows subjective prior knowledge about lifetime parameters to be incorporated into the inferential procedure in reliability analysis. Therefore, for the same quality of inference, Bayesian methods tend to require fewer sample data than traditional statistical methods do, which makes them extremely important in expensive life tests.
We investigate the Bayesian estimates in this section. Suppose that α and β independently have gamma prior distributions with the parameters ( a , b ) and ( c , d ) . Afterwards, we can obtain their joint prior distribution, that is
\[ \pi(\alpha,\beta)=\frac{b^ad^c}{\Gamma(a)\Gamma(c)}\,\alpha^{a-1}e^{-b\alpha}\beta^{c-1}e^{-d\beta},\qquad 0<\alpha,\beta<+\infty, \]
where the positive constants a , b , c and d are hyperparameters. Let x ˜ = ( x r + 1 , , x m ) be an observed value of X = ( X r + 1 , , X m ) . Based on the joint prior distribution and likelihood function, the joint posterior function is
\[ \pi(\alpha,\beta\mid\tilde x)=\frac{\pi(\alpha,\beta)\,l(\alpha,\beta\mid\tilde x)}{\int_0^{+\infty}\!\!\int_0^{+\infty}l(\alpha,\beta\mid\tilde x)\,\pi(\alpha,\beta)\,d\alpha\,d\beta}\propto\alpha^{m+a-r-1}\beta^{m+c-r-1}e^{-(b\alpha+d\beta)}\bigl[1-e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigr]^{r}\prod_{i=r+1}^{m}e^{\beta x_i}e^{-\alpha(R_i+1)(e^{\beta x_i}-1)}. \tag{26} \]
It is clear that (26) is analytically intractable. Furthermore, the Bayesian estimate of any function of α and β is also intractable because it involves a ratio of two integrals. To handle such ratios, several approximate approaches have been presented in the literature. Among them, the TK method was proposed by Reference [23] to obtain approximate posterior expectations. In addition, the MH algorithm is a simulation method widely applied to sampling from posterior density functions. In this article, we use the TK method and the MH algorithm to derive approximate explicit forms for the Bayesian estimates.

3.1. Loss Functions

In Bayesian statistics, the selection of the loss function is a fundamental step. Among the many symmetric loss functions, the squared error loss (SEL) function is well known for its good mathematical properties. Let δ be a Bayesian estimate of θ. The SEL function has the form
\[ L_1(\delta,\theta)=(\delta-\theta)^2, \tag{27} \]
then under the SEL function the Bayesian estimate of θ is the posterior mean, δ_S = E(θ | X).
However, in many practical situations, overestimation and underestimation lead to different losses, and the consequences may be quite serious if a symmetric loss function is used indiscriminately. In such cases, asymmetric loss functions are considered more suitable. Many asymmetric loss functions have been used in the literature; among them, the LINEX loss function is dominant, and it can be expressed as
\[ L_2(\delta,\theta)=\zeta\bigl(e^{h(\delta-\theta)}-h(\delta-\theta)-1\bigr),\qquad h\neq0,\ \zeta>0. \tag{28} \]
Without loss of generality, we take ζ = 1. The Bayesian estimate of θ is then given by δ_L = −(1/h) ln{E(e^{−hθ} | X)}.
Later, Reference [24] proposed a balanced loss function that has a more generalized form
\[ L_3(\delta,\theta)=\sigma\rho(\delta,\delta_0)+(1-\sigma)\rho(\delta,\theta),\qquad 0\le\sigma\le1, \tag{29} \]
where δ_0 is a known estimate of θ, such as the MLE, and ρ is an arbitrarily selected loss function. Choosing ρ to be the SEL function given by (27) transforms (29) into the balanced squared error loss (BSEL) function, under which the Bayesian estimate is δ_BS = (1 − σ)E(θ | X) + σδ_0.
Clearly, the balanced loss function is more general, since it includes the MLE and the symmetric and asymmetric loss functions as special cases. For instance, taking δ_0 to be the MLE of the parameter, the Bayesian estimate under the BSEL function equals the MLE exactly when σ = 1, and it reduces to the Bayesian estimate under the SEL function when σ = 0. Similarly, if ρ is chosen as the LINEX loss function given by (28), L_3(·) is called the BLINEX loss function; when σ = 1 and σ = 0, the Bayesian estimates under the BLINEX loss function correspondingly reduce to the MLE and to the LINEX case.
In this article, we derive Bayesian estimates under the SEL, BSEL and LINEX loss functions, respectively. Next, the TK method is suggested to deal with the ratio-of-integrals problem in posterior expectation estimation.

3.2. TK Method

We assume that u ( α , β ) denotes an arbitrary function of ( α , β ) . Following Reference [23], the posterior expectation for u ( α , β ) is written as
\[ E(u(\alpha,\beta))=\frac{\int_0^{+\infty}\!\!\int_0^{+\infty}u(\alpha,\beta)\,\pi(\alpha,\beta)\,e^{L(\alpha,\beta\mid\tilde x)}\,d\alpha\,d\beta}{\int_0^{+\infty}\!\!\int_0^{+\infty}e^{L(\alpha,\beta\mid\tilde x)}\,\pi(\alpha,\beta)\,d\alpha\,d\beta}, \tag{30} \]
where π ( α , β ) denotes the prior density, and L ( α , β | x ˜ ) represents the logarithm of (4). We set:
\[ \varphi(\alpha,\beta)=\frac{\ln\pi(\alpha,\beta)+L(\alpha,\beta\mid\tilde x)}{n}\quad\text{and}\quad\varphi_u^{*}(\alpha,\beta)=\varphi(\alpha,\beta)+\frac{\ln u(\alpha,\beta)}{n}. \tag{31} \]
Maximizing φ(α,β) and φ*_u(α,β) individually, we obtain (α̂_1, β̂_1) and (α̂_u, β̂_u). Then, the approximate posterior expectation of u(α,β) obtained by applying the TK method is
\[ \hat E(u(\alpha,\beta))=\sqrt{\frac{|\Sigma_u^{*}|}{|\Sigma|}}\,\exp\bigl\{n\bigl[\varphi_u^{*}(\hat\alpha_u,\hat\beta_u)-\varphi(\hat\alpha_1,\hat\beta_1)\bigr]\bigr\}, \tag{32} \]
where |Σ| and |Σ*_u| denote the determinants of the negative inverse Hessian matrices of φ(α,β) and φ*_u(α,β), evaluated at (α̂_1, β̂_1) and (α̂_u, β̂_u), respectively. Next, ignoring the constant term, we note that
\[ \varphi(\alpha,\beta)=\frac{1}{n}\Bigl[(m-r)\ln(\alpha\beta)+\beta\sum_{i=r+1}^{m}x_i+r\ln\bigl(1-e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigr)-\alpha\sum_{i=r+1}^{m}(R_i+1)(e^{\beta x_i}-1)-b\alpha+(a-1)\ln\alpha-d\beta+(c-1)\ln\beta\Bigr]. \tag{33} \]
Now, we compute the partial derivatives of φ :
\[ \frac{\partial\varphi}{\partial\alpha}=\frac{1}{n}\Bigl[\frac{m-r}{\alpha}-\sum_{i=r+1}^{m}(R_i+1)(e^{\beta x_i}-1)-b+\frac{a-1}{\alpha}+\frac{r(e^{\beta x_{r+1}}-1)e^{-\alpha(e^{\beta x_{r+1}}-1)}}{F(x_{r+1}\mid\alpha,\beta)}\Bigr], \tag{34} \]
and
\[ \frac{\partial\varphi}{\partial\beta}=\frac{1}{n}\Bigl[\frac{m-r}{\beta}+\sum_{i=r+1}^{m}\bigl(x_i-\alpha(R_i+1)x_ie^{\beta x_i}\bigr)+\frac{c-1}{\beta}-d+\frac{r\alpha x_{r+1}e^{\beta x_{r+1}}e^{-\alpha(e^{\beta x_{r+1}}-1)}}{F(x_{r+1}\mid\alpha,\beta)}\Bigr]. \tag{35} \]
Similarly, the second derivatives can be derived as
\[ \frac{\partial^2\varphi}{\partial\alpha^2}=\frac{1}{n}\Bigl[-\frac{m+a-r-1}{\alpha^2}-\frac{r(e^{\beta x_{r+1}}-1)^2e^{-\alpha(e^{\beta x_{r+1}}-1)}}{F^2(x_{r+1}\mid\alpha,\beta)}\Bigr], \tag{36} \]
\[ \frac{\partial^2\varphi}{\partial\beta^2}=\frac{1}{n}\Bigl[-\frac{m+c-r-1}{\beta^2}-\alpha\sum_{i=r+1}^{m}(R_i+1)x_i^2e^{\beta x_i}+\frac{r\alpha x_{r+1}^2e^{\beta x_{r+1}}e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigl[1-\alpha e^{\beta x_{r+1}}-e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigr]}{F^2(x_{r+1}\mid\alpha,\beta)}\Bigr], \tag{37} \]
and
\[ \frac{\partial^2\varphi}{\partial\beta\,\partial\alpha}=\frac{\partial^2\varphi}{\partial\alpha\,\partial\beta}=\frac{1}{n}\Bigl[-\sum_{i=r+1}^{m}(R_i+1)x_ie^{\beta x_i}+\frac{rx_{r+1}e^{\beta x_{r+1}}e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigl[1-\alpha(e^{\beta x_{r+1}}-1)-e^{-\alpha(e^{\beta x_{r+1}}-1)}\bigr]}{F^2(x_{r+1}\mid\alpha,\beta)}\Bigr]. \tag{38} \]
Through (36)–(38), |Σ| is obtained as
\[ |\Sigma|=\left[\frac{\partial^2\varphi}{\partial\alpha^2}\,\frac{\partial^2\varphi}{\partial\beta^2}-\frac{\partial^2\varphi}{\partial\beta\,\partial\alpha}\,\frac{\partial^2\varphi}{\partial\alpha\,\partial\beta}\right]^{-1}_{\alpha=\hat\alpha_1,\,\beta=\hat\beta_1}. \tag{39} \]
For |Σ*|, we first compute
\[ \frac{\partial\varphi^{*}}{\partial\alpha}=\frac{\partial\varphi}{\partial\alpha}+\frac{u_\alpha}{n\,u(\alpha,\beta)}, \tag{40} \]
\[ \frac{\partial\varphi^{*}}{\partial\beta}=\frac{\partial\varphi}{\partial\beta}+\frac{u_\beta}{n\,u(\alpha,\beta)}, \tag{41} \]
\[ \frac{\partial^2\varphi^{*}}{\partial\alpha^2}=\frac{\partial^2\varphi}{\partial\alpha^2}+\frac{1}{n}\,\frac{u(\alpha,\beta)\,u_{\alpha\alpha}-(u_\alpha)^2}{[u(\alpha,\beta)]^2}, \tag{42} \]
\[ \frac{\partial^2\varphi^{*}}{\partial\beta^2}=\frac{\partial^2\varphi}{\partial\beta^2}+\frac{1}{n}\,\frac{u(\alpha,\beta)\,u_{\beta\beta}-(u_\beta)^2}{[u(\alpha,\beta)]^2}, \tag{43} \]
and
\[ \frac{\partial^2\varphi^{*}}{\partial\beta\,\partial\alpha}=\frac{\partial^2\varphi^{*}}{\partial\alpha\,\partial\beta}=\frac{\partial^2\varphi}{\partial\alpha\,\partial\beta}+\frac{1}{n}\,\frac{u(\alpha,\beta)\,u_{\alpha\beta}-u_\alpha u_\beta}{[u(\alpha,\beta)]^2}. \tag{44} \]
As a result, |Σ*| is
\[ |\Sigma^{*}|=\left[\frac{\partial^2\varphi^{*}}{\partial\alpha^2}\,\frac{\partial^2\varphi^{*}}{\partial\beta^2}-\frac{\partial^2\varphi^{*}}{\partial\beta\,\partial\alpha}\,\frac{\partial^2\varphi^{*}}{\partial\alpha\,\partial\beta}\right]^{-1}_{\alpha=\hat\alpha_u,\,\beta=\hat\beta_u}. \tag{45} \]
Finally, setting u(α,β) = α and u(α,β) = β in the above calculations, the estimates based on the SEL function are given by
\[ \hat\alpha_S=\sqrt{\frac{|\Sigma_\alpha^{*}|}{|\Sigma|}}\,\exp\bigl\{n\bigl[\varphi_\alpha^{*}(\hat\alpha_\alpha,\hat\beta_\alpha)-\varphi(\hat\alpha_1,\hat\beta_1)\bigr]\bigr\} \tag{46} \]
and
\[ \hat\beta_S=\sqrt{\frac{|\Sigma_\beta^{*}|}{|\Sigma|}}\,\exp\bigl\{n\bigl[\varphi_\beta^{*}(\hat\alpha_\beta,\hat\beta_\beta)-\varphi(\hat\alpha_1,\hat\beta_1)\bigr]\bigr\}. \tag{47} \]
Further, the Bayesian estimates based on the BSEL function follow from δ_BS = σδ_0 + (1−σ)δ_S for different σ, 0 ≤ σ ≤ 1.
Similarly, by taking u_1(α,β) = e^{−hα} and u_2(α,β) = e^{−hβ}, the estimates of the unknown parameters under the LINEX loss function are given by
\[ \hat\alpha_L=-\frac{1}{h}\ln\Bigl\{\sqrt{\frac{|\Sigma_{u_1}^{*}|}{|\Sigma|}}\,\exp\bigl\{n\bigl[\varphi_{u_1}^{*}(\hat\alpha_{u_1},\hat\beta_{u_1})-\varphi(\hat\alpha_1,\hat\beta_1)\bigr]\bigr\}\Bigr\} \tag{48} \]
and
\[ \hat\beta_L=-\frac{1}{h}\ln\Bigl\{\sqrt{\frac{|\Sigma_{u_2}^{*}|}{|\Sigma|}}\,\exp\bigl\{n\bigl[\varphi_{u_2}^{*}(\hat\alpha_{u_2},\hat\beta_{u_2})-\varphi(\hat\alpha_1,\hat\beta_1)\bigr]\bigr\}\Bigr\}. \tag{49} \]
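To make the TK recipe concrete, the sketch below (our illustration, not the authors' code) maximizes φ and φ*_u numerically, forms the negative inverse Hessians by finite differences, and applies the TK approximation, specialized to the complete-sample case r = 0 with all R_i = 0; all function names and the hyperparameter values are assumptions for this sketch.

```python
# Sketch of the Tierney-Kadane approximation for a posterior expectation,
# complete-sample case (r = 0, all R_i = 0). Illustrative only.
import numpy as np
from scipy import optimize

A, B, C, D = 0.2, 7.8, 0.1, 3.7    # gamma hyperparameters (a, b, c, d)

def log_post(th, x):
    """Unnormalized log-posterior: log-likelihood plus log gamma priors."""
    al, be = th
    if al <= 0 or be <= 0:
        return -np.inf
    n = len(x)
    ll = n * np.log(al * be) + be * x.sum() - al * np.sum(np.exp(be * x) - 1.0)
    lp = (A - 1) * np.log(al) - B * al + (C - 1) * np.log(be) - D * be
    return ll + lp

def tk_expectation(x, u):
    """TK approximation to E[u(alpha, beta) | x]."""
    n = len(x)
    phi = lambda th: log_post(th, x) / n
    def phiu(th):
        if th[0] <= 0 or th[1] <= 0:
            return -np.inf
        return phi(th) + np.log(u(th)) / n
    opt = lambda f: optimize.minimize(lambda th: -f(th), [0.5, 1.0],
                                      method="Nelder-Mead",
                                      options={"xatol": 1e-10, "fatol": 1e-12}).x
    def det_sigma(f, th, h=1e-5):
        # determinant of the negative inverse Hessian, by central differences
        H = np.empty((2, 2))
        for i in range(2):
            for j in range(2):
                ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
                H[i, j] = (f(th + ei + ej) - f(th + ei - ej)
                           - f(th - ei + ej) + f(th - ei - ej)) / (4 * h * h)
        return 1.0 / np.linalg.det(-H)
    t1, tu = opt(phi), opt(phiu)
    return np.sqrt(det_sigma(phiu, tu) / det_sigma(phi, t1)) \
        * np.exp(n * (phiu(tu) - phi(t1)))
```

For example, `tk_expectation(x, lambda th: th[0])` approximates the posterior mean of α.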

3.3. MH Algorithm

We derive the Bayesian estimates of α and β by the MH algorithm (see Reference [20]). We take a bivariate normal distribution as the proposal distribution for θ = (α, β); the MH algorithm then generates candidate draws from this proposal and produces a convergent sample from the posterior distribution. Using this sample, we first compute the Bayesian estimates under the different loss functions, and thereafter establish the HPD intervals. The MH algorithm can be summarized as follows:
(1)
Begin with an initial value θ 0 = ( α 0 , β 0 ) , set n = 1 .
(2)
Generate a proposal θ′ = (α′, β′) from the bivariate normal distribution N_2(θ_{n−1}, Σ_1), where θ_{n−1} = (α_{n−1}, β_{n−1}) and Σ_1 denotes the variance–covariance matrix, usually taken to be the inverse of the Fisher information matrix.
(3)
Calculate the acceptance probability q = min{π(θ′ | X)/π(θ_{n−1} | X), 1}, where π(· | X) is the corresponding joint posterior distribution.
(4)
Generate μ from Uniform ( 0 , 1 ) .
(5)
If μ ≤ q, set θ_n = θ′; otherwise, set θ_n = θ_{n−1}.
(6)
Set n = n + 1 .
(7)
Repeat steps (2)–(6) D times to obtain a sample of the required size.
Discarding the first D_0 iterative values as burn-in, the Bayesian estimates under the SEL function are derived as
\[ \tilde\alpha_S=\frac{1}{D-D_0}\sum_{n=D_0+1}^{D}\alpha_n \tag{50} \]
and
\[ \tilde\beta_S=\frac{1}{D-D_0}\sum_{n=D_0+1}^{D}\beta_n. \tag{51} \]
Proceeding similarly, the desired estimates under the BSEL function can be obtained easily. Further, the Bayesian estimates under the LINEX loss function can be computed as
\[ \tilde\alpha_L=-\frac{1}{h}\ln\Bigl\{\frac{1}{D-D_0}\sum_{n=D_0+1}^{D}e^{-h\alpha_n}\Bigr\} \tag{52} \]
and
\[ \tilde\beta_L=-\frac{1}{h}\ln\Bigl\{\frac{1}{D-D_0}\sum_{n=D_0+1}^{D}e^{-h\beta_n}\Bigr\}. \tag{53} \]
Now, we can establish the 100(1−γ)% HPD interval (see Reference [25]) for the unknown parameter α. Sort the remaining D − D_0 values in ascending order as α_{(1)}, α_{(2)}, …, α_{(D−D_0)}. The 100(1−γ)% HPD interval of α is given by
\[ \bigl(\alpha_{(w^{*})},\ \alpha_{(w^{*}+[(1-\gamma)\times(D-D_0)])}\bigr), \tag{54} \]
where w^{*} is selected so that
\[ \alpha_{(w^{*}+[(1-\gamma)\times(D-D_0)])}-\alpha_{(w^{*})}=\min_{1\le w\le(D-D_0)-[(1-\gamma)\times(D-D_0)]}\bigl(\alpha_{(w+[(1-\gamma)\times(D-D_0)])}-\alpha_{(w)}\bigr), \tag{55} \]
and [ ( 1 γ ) × ( D D 0 ) ] denotes the integer part of ( 1 γ ) × ( D D 0 ) . Likewise, the HPD interval of β can be obtained.

4. Bayesian Prediction

Now we obtain prediction estimates for a future sample on the basis of the available sample, together with the corresponding predictive intervals. Bayesian prediction of future samples is a fundamental subject in many fields, such as medical, agricultural and engineering experiments. Interested readers may refer to Reference [11].
Suppose that the existing X = (X_{r+1}, …, X_m) is a set of general progressive censored data observed from a population with a Gompertz distribution. Let Y_1 ≤ Y_2 ≤ ⋯ ≤ Y_W denote the ordered failure times of a future sample of size W, also drawn from the same Gompertz distribution. We aim to obtain their predictive estimates (two-sample prediction). Suppose that Y_v (1 ≤ v ≤ W) represents the v-th failure time of the future sample. Then, for given α and β, the density function of Y_v is
\[ g(y_v\mid\alpha,\beta)=v\binom{W}{v}[1-F(y_v\mid\alpha,\beta)]^{W-v}[F(y_v\mid\alpha,\beta)]^{v-1}f(y_v\mid\alpha,\beta)=v\binom{W}{v}\sum_{j=0}^{v-1}\binom{v-1}{j}(-1)^j[1-F(y_v\mid\alpha,\beta)]^{W-v+j}f(y_v\mid\alpha,\beta). \tag{56} \]
Consequently, the posterior predictive density function for Y v is derived as
\[ g^{*}(y_v\mid X)=\int_0^{+\infty}\!\!\int_0^{+\infty}g(y_v\mid\alpha,\beta)\,\pi(\alpha,\beta\mid X)\,d\alpha\,d\beta. \tag{57} \]
It is infeasible to compute (57) analytically. Using the MH algorithm, we can obtain the approximation
\[ g^{*}(y_v\mid X)\approx\frac{1}{D-D_0}\sum_{i=D_0+1}^{D}g(y_v\mid\alpha_i,\beta_i). \tag{58} \]
Further, the survival function is computed as
\[ S(y_v\mid\alpha,\beta)=\int_{y_v}^{+\infty}g(z\mid\alpha,\beta)\,dz. \tag{59} \]
The posterior predictive survival function for Y v can be derived by
\[ S_1(y_v\mid X)=\int_0^{+\infty}\!\!\int_0^{+\infty}S(y_v\mid\alpha,\beta)\,\pi(\alpha,\beta\mid X)\,d\alpha\,d\beta\approx\frac{1}{D-D_0}\sum_{i=D_0+1}^{D}\int_{y_v}^{+\infty}g(z\mid\alpha_i,\beta_i)\,dz=\frac{v}{D-D_0}\binom{W}{v}\sum_{i=D_0+1}^{D}\sum_{j=0}^{v-1}\binom{v-1}{j}(-1)^j\frac{[1-F(y_v\mid\alpha_i,\beta_i)]^{W+j-v+1}}{W+j-v+1}. \tag{60} \]
Then, we construct the 100(1−γ)% Bayesian predictive interval (L_0, U_0) of Y_v by solving the equations
\[ S_1(L_0\mid X)=1-\frac{\gamma}{2}\quad\text{and}\quad S_1(U_0\mid X)=\frac{\gamma}{2}. \tag{61} \]
Further, it is convenient to derive the predictive estimate of the future v-th ordered lifetime, which is given by
\[ \hat y_v=E(y_v\mid X)=\int_0^{+\infty}y_v\,g^{*}(y_v\mid X)\,dy_v=\int_0^{+\infty}\!\!\int_0^{+\infty}H(\alpha,\beta)\,\pi(\alpha,\beta\mid X)\,d\alpha\,d\beta, \tag{62} \]
where H(α,β) = \int_0^{+\infty}y_v\,g(y_v\mid\alpha,\beta)\,dy_v is obtained as
\[ H(\alpha,\beta)=\frac{v\alpha}{\beta}\binom{W}{v}\sum_{j=0}^{v-1}\binom{v-1}{j}(-1)^j\int_0^{+\infty}e^{-\alpha(W+1-v+j)t}\ln(1+t)\,dt. \tag{63} \]
Using the MH algorithm described in the previous section, the prediction estimate of Y_v is derived as
\[ \hat y_v\approx\frac{1}{D-D_0}\sum_{i=D_0+1}^{D}H(\alpha_i,\beta_i). \tag{64} \]
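The integral defining H(α,β) above is one-dimensional and straightforward to evaluate numerically. The sketch below is our illustration (the function name is an assumption); as a sanity check, for v = W = 1 the quantity reduces to the ordinary Gompertz mean, and the minimum of two i.i.d. Gompertz(α, β) variables is Gompertz(2α, β).

```python
# Numerical evaluation of H(alpha, beta); illustrative code, names are ours.
import numpy as np
from math import comb
from scipy import integrate

def H(alpha, beta, v, W):
    """E[Y_v | alpha, beta] for the v-th order statistic of W future units."""
    total = 0.0
    for j in range(v):
        c = comb(v - 1, j) * (-1.0) ** j
        val, _ = integrate.quad(
            lambda t: np.exp(-alpha * (W + 1 - v + j) * t) * np.log(1.0 + t),
            0.0, np.inf)
        total += c * val
    return v * (alpha / beta) * comb(W, v) * total

# The prediction estimate is then the average of H over the retained MH draws:
#   y_hat = np.mean([H(a_i, b_i, v, W) for a_i, b_i in chain[D0:]])
```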

5. Simulation and Data Analysis

To evaluate the quality of the approaches, a numerical simulation study is carried out. In addition, we analyze two real datasets for further illustration.

5.1. Simulation Study

For the simulation, we first generate general progressive censored samples with the algorithm discussed by Reference [22]. The procedure is as follows:
(1)
Generate B_m from Beta(n − r, r + 1).
(2)
Generate independent U_{r+k} from Uniform(0, 1), k = 1, 2, …, m − r − 1.
(3)
Set B_{r+k} = U_{r+k}^{1/\xi_{r+k}}, where ξ_{r+k} = k + \sum_{i=m-k+1}^{m}R_i, k = 1, 2, …, m − r − 1.
(4)
Set Z_{r+k} = 1 − B_{m-k+1}B_{m-k+2}\cdots B_m, k = 1, 2, …, m − r.
(5)
Set X_k = F^{-1}(Z_k), k = r+1, …, m, where F(x) is the CDF of the Gompertz distribution.
Then we obtain the desired general progressive censored data X_i, i = r+1, …, m, drawn from the Gompertz distribution. In our experiment, the true values of (α, β) are taken to be (0.3, 1.2). The MLEs of the two parameters are calculated by means of the EM algorithm. For Bayesian estimation and prediction, (0.2, 7.8, 0.1, 3.7) are chosen as the values of the hyperparameters (a, b, c, d), respectively. Moreover, Bayesian estimates are obtained by the TK and MH methods under the different loss functions. The results are compared in terms of mean-square error (MSE).
For convenience, we use a simplified notation for the censoring schemes (CS) that also encodes r; for example, (2, 0*3, 5) denotes the case where r = 2, m = 6, n = 10 and the censoring scheme is (0, 0, 0, 5). The schemes used in the simulation study are: H1 = (0*2, 3, 0*3, 4, 2, 0*4), H2 = (2, 0*3, 2, 0*5, 5, 0), H3 = (2, 2, 0*5, 8, 0*7, 3), H4 = (5, 0*2, 1, 2, 0*3, 7, 0*7), H5 = (3, 5, 0*4, 2, 0*8, 10, 0*5), H6 = (5, 0*4, 5, 0*3, 4, 0*4, 6, 0*6), H7 = (0*4, 6, 0*3, 7, 0*9, 2, 0*7), H8 = (3, 0*2, 2, 0*4, 10, 0*4, 5, 0*12), H9 = (2, 0*5, 10, 0*5, 6, 0*7, 2, 0*10), H10 = (5, 0, 3, 0*8, 8, 0*6, 4, 0*12).
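The shorthand can be unpacked mechanically. The helper below is ours, not from the paper: it expands each "a*k" token to the value a repeated k times, reads the first expanded entry as r and the rest as the scheme; for schemes H1 to H10 the total sample size is then recovered as n = r + (m − r) + ΣR_i.

```python
def expand_scheme(tokens):
    # tokens like ("2", "0*3", "5"); "a*k" means the value a repeated k times.
    flat = []
    for tok in tokens:
        if "*" in tok:
            a, k = tok.split("*")
            flat.extend([int(a)] * int(k))
        else:
            flat.append(int(tok))
    return flat[0], tuple(flat[1:])   # (r, censoring scheme R)

r, R = expand_scheme(("2", "0*3", "2", "0*5", "5", "0"))   # scheme H2
n = r + len(R) + sum(R)                                    # recovers n = 20
```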
Table 1 reports the average estimates and corresponding MSEs for the parameters. In this table, for a given censoring scheme, the average estimates of α and β are placed on the first and third rows, respectively, and the second and fourth rows give the corresponding MSEs. Overall, the MH estimates are observed to have smaller MSEs than the estimates obtained by the TK method. Furthermore, the Bayes estimates under the LINEX loss outperform those based on the SEL and BSEL functions in terms of MSE, although the Bayesian estimates under the SEL function are closer to the true values. For the MLEs, larger m − r and n yield better estimates, where m − r and n are the sizes of the observed and complete samples, respectively. On the whole, the Bayesian estimates have an advantage over the corresponding MLEs.
Furthermore, different interval estimates have been constructed, including ACIs based on the Fisher information matrix, parametric bootstrap intervals, and HPD intervals based on the sample generated by the MH algorithm. Table 2 presents their average lengths (ALs) and coverage probabilities (CPs). The tabulated values indicate that the HPD intervals have the shortest ALs among all the interval estimates, while the ACIs perform best in terms of CPs. In general, the bootstrap-t and bootstrap-p intervals behave similarly, and their CPs tend to fall below the 95% confidence level. Table 3 lists the results of point prediction and 95% interval prediction for y_3, y_7 and y_10 in a future sample of size 10. The prediction intervals become wider as v increases.

5.2. Data Analysis

Dataset 1: First we analyze a real dataset on the breaking stress of carbon fibers (in GPa) with n = 66 (see Reference [26]). It is listed as follows:
3.70 , 2.74 , 2.73 , 2.50 , 3.60 , 3.11 , 3.27 , 2.87 , 1.47 , 3.11 , 3.56 , 4.42 , 2.41 , 3.19 , 3.22 , 1.69 , 3.28 , 3.09 , 1.87 , 3.15 , 4.90 , 1.57 , 2.67 , 2.93 , 3.22 , 3.39 , 2.81 , 4.20 , 3.33 , 2.55 , 3.31 , 3.31 , 2.85 , 1.25 , 4.38 , 1.84 , 0.39 , 3.68 , 2.48 , 0.85 , 1.61 , 2.79 , 4.70 , 2.03 , 1.89 , 2.88 , 2.82 , 2.05 , 3.65 , 3.75 , 2.43 , 2.95 , 2.97 , 3.39 , 2.96 , 2.35 , 2.55 , 2.59 , 2.03 , 1.61 , 2.12 , 3.15 , 1.08 , 2.56 , 1.80 , 2.53 .
In order to analyze these data, we calculate the MLEs of the two parameters and then assess the goodness of fit of the Gompertz distribution using the Kolmogorov-Smirnov (K-S) statistic, together with model selection criteria such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). For comparison, the goodness of fit of several other lifetime distributions is also tested, namely the Generalized Exponential (GE), Inverse Weibull and Exponential distributions. Their PDFs have the following forms, respectively:
(1) The PDF of the GE distribution:
$$f_{GE}(x \mid \alpha, \beta) = \alpha\beta(1 - e^{-\beta x})^{\alpha - 1} e^{-\beta x}, \quad 0 < x < +\infty, \; 0 < \alpha, \beta < +\infty;$$
(2) The PDF of the Inverse Weibull distribution:
$$f_{IW}(x \mid \alpha, \beta) = \alpha\beta x^{-\alpha - 1} e^{-\beta x^{-\alpha}}, \quad 0 < x < +\infty, \; 0 < \alpha, \beta < +\infty;$$
(3) The PDF of the Exponential distribution:
$$f_{E}(x \mid \beta) = \beta e^{-\beta x}, \quad 0 < x < +\infty, \; 0 < \beta < +\infty.$$
The test results are presented in Table 4 together with the MLEs. Note that smaller K-S, AIC and BIC values indicate a better fit. Comparing the values, we conclude that the Gompertz distribution is the most appropriate for these data.
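The quantities in Table 4 are straightforward to compute once the MLEs are in hand. An illustrative sketch (the helper names are ours, and the code again assumes the Gompertz form F(x) = 1 − exp(−α(e^{βx} − 1))):

```python
import numpy as np
from scipy import stats

def gompertz_cdf(x, alpha, beta):
    # Assumed parameterization: F(x) = 1 - exp(-alpha*(e^{beta*x} - 1))
    return 1.0 - np.exp(-alpha * np.expm1(beta * x))

def gof_summary(data, alpha, beta, k=2):
    x = np.asarray(data, dtype=float)
    # log-likelihood of the matching density f = alpha*beta*e^{beta*x}*exp(-alpha*(e^{beta*x}-1))
    loglik = np.sum(np.log(alpha) + np.log(beta) + beta * x - alpha * np.expm1(beta * x))
    ks = stats.kstest(x, lambda t: gompertz_cdf(t, alpha, beta)).statistic
    aic = 2 * k - 2 * loglik           # k = number of fitted parameters
    bic = k * np.log(len(x)) - 2 * loglik
    return ks, aic, bic

# e.g. on a few of the carbon-fiber observations with plausible parameter values
ks, aic, bic = gof_summary([3.70, 2.74, 2.73, 2.50, 3.60, 3.11, 3.27, 2.87, 1.47, 3.11],
                           alpha=0.035, beta=1.07)
```

For a fixed number of parameters k, BIC exceeds AIC whenever ln n > 2, which explains the consistent AIC < BIC pattern in Table 4.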
To illustrate the proposed methods, three groups of general progressive censored data have been randomly drawn from the parent sample as follows:
Scheme 1: (–,–,–,1.25, 1.47, 1.57, 1.61, 1.61, 1.69, 1.80, 1.87, 2.03, 2.05, 2.35, 2.41, 2.43, 2.48, 2.50, 2.53, 2.55, 2.56, 2.67, 2.73, 2.97, 3.11, 3.15, 3.22, 4.42), r = 3 , m r = 25 , R 6 = 11 , R 15 = 8 , R 19 = 14 , R 26 = 5 , R i = 0 , others;
Scheme 2: (–,–,–,–,–,1.57, 1.61, 1.61, 1.69, 1.80, 1.84, 1.87, 2.03, 2.03, 2.05, 2.12, 2.43, 2.48, 2.55, 2.59, 2.67, 2.73, 2.82, 2.87, 2.88, 2.96, 3.09, 3.11, 3.11, 3.15, 3.19, 3.60, 3.75, 4.42, 4.70), r = 5 , m r = 30 , R 6 = 4 , R 12 = 13 , R 16 = 10 , R 30 = 4 , R i = 0 , others;
Scheme 3: (–,–,1.08, 1.25, 1.47, 1.57, 1.61, 1.61, 1.69, 1.80, 1.84, 1.89, 2.03, 2.03, 2.05, 2.12, 2.35, 2.41, 2.48, 2.50, 2.53, 2.55, 2.79, 2.82, 2.93, 2.95, 3.19, 3.22, 3.27, 3.31, 3.33, 3.39, 3.60, 3.68, 3.75, 4.20, 4.90), r = 2 , m r = 35 , R 4 = 7 , R 17 = 9 , R 22 = 13 , R i = 0 , others.
With the EM algorithm, we calculate the MLEs, and the corresponding Bayesian estimates are derived by the TK and MH methods. Since no prior information is available, all the hyperparameters are set close to zero. The MLEs and Bayesian estimates are listed in Table 5 and Table 6. In Table 7, the 90% interval estimates are tabulated, namely the ACIs, parametric bootstrap and HPD intervals. Finally, Table 8 presents the point prediction and 95% interval prediction of y_1 and y_6 in a future sample of size 6.
Dataset 2: Reference [27] presented a dataset on the tumor-free days of 30 rats fed an unsaturated diet, listed below:
112, 68, 84, 109, 153, 143, 60, 70, 98, 164, 63, 63, 77, 91, 91, 66, 70, 77, 63, 66, 66, 94, 101, 105, 108, 112, 115, 126, 161, 178.
In order to analyze these data, Reference [28] assumed that the number of tumor-free days follows the Gompertz distribution. To illustrate the methods discussed, we also suppose that these data follow a Gompertz distribution with parameters (α, β). Setting m − r = 20, we consider two censoring schemes, s1 = (0*4, 5, 0*7, 2, 0*7) with r = 3 and s2 = (0*2, 4, 0*6, 5, 0*10) with r = 1. The sample obtained under s1 is:
63 , 66 , 66 , 66 , 68 , 70 , 70 , 77 , 77 , 84 , 91 , 91 , 94 , 98 , 101 , 105 , 108 , 112 , 115 , 178 ,
and the sample under s 2 :
63 , 63 , 63 , 66 , 66 , 66 , 68 , 70 , 70 , 77 , 77 , 84 , 91 , 91 , 94 , 101 , 109 , 112 , 115 , 178 .
Tables 9 and 10 report the MLEs and the Bayesian estimates derived by the TK method and the MH algorithm, respectively. The interval estimates, including ACIs, parametric bootstrap and HPD intervals, are presented in Table 11. Finally, Table 12 presents the point prediction and the 95% interval prediction of y_1 and y_5 with W = 5.

6. Conclusions

In summary, we discuss classical and Bayesian inference for the Gompertz distribution under general progressive censoring. First, the MLEs are obtained via the Expectation–Maximization algorithm. Then, using the asymptotic normality of the MLEs and the missing-information principle, we provide asymptotic confidence intervals, along with parametric percentile bootstrap and bootstrap-t intervals. On the Bayesian side, three loss functions are considered: symmetric, asymmetric and balanced. Since the posterior expectations are intractable in closed form, the TK method is employed to compute approximate Bayesian estimates, and the Metropolis-Hastings algorithm is applied to obtain Bayesian estimates and establish HPD intervals. Furthermore, we derive the prediction estimates for future samples. Finally, a numerical simulation is executed to appraise the quality of the approaches, and two real datasets are analyzed. The results indicate that these approaches perform well. The methods in this article can also be extended to other distributions.

Author Contributions

Investigation, Y.W.; Supervision, W.G. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202110004106 of the Beijing Training Program of Innovation and Entrepreneurship for Undergraduates.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gompertz, B. On the nature of the function expressive of the law of human mortality and on a new mode of determining life contingencies. Philos. Trans. R. Soc. Lond. 1825, 115, 513–585.
  2. Willekens, F. Gompertz in context: The Gompertz and related distributions. In Forecasting Mortality in Developed Countries: Insights from a Statistical, Demographic and Epidemiological Perspective; Springer: Berlin/Heidelberg, Germany, 2001; Volume 9, pp. 105–126.
  3. Wu, J.-W.; Hung, W.-L.; Tsai, C.-H. Estimation of parameters of the Gompertz distribution using the least squares method. Appl. Math. Comput. 2004, 158, 133–147.
  4. Chang, S.; Tsai, T. Point and interval estimations for the Gompertz distribution under progressive Type-II censoring. Metron 2003, 61, 403–418.
  5. Mohie El-Din, M.M.; Nagy, M.; Abu-Moussa, M.H. Estimation and prediction for Gompertz distribution under the generalized progressive hybrid censored data. Ann. Data Sci. 2019, 6, 673–705.
  6. Soliman, A.A.; Abd-Ellah, A.H.; Abou-Elheggag, N.A.; Abd-Elmougod, G.A. Estimation of the parameters of life for Gompertz distribution using progressive first-failure censored data. Comput. Stat. Data Anal. 2012, 56, 2471–2485.
  7. Bakouch, H.S.; El-Bar, A. A new weighted Gompertz distribution with applications to reliability data. Appl. Math. 2017, 62, 269–296.
  8. Ghitany, M.; Alqallaf, F.; Balakrishnan, N. On the likelihood estimation of the parameters of Gompertz distribution based on complete and progressively Type-II censored samples. J. Stat. Comput. Simul. 2014, 84, 1803–1812.
  9. Balakrishnan, N.; Sandhu, R. Best linear unbiased and maximum likelihood estimation for exponential distributions under general progressive Type-II censored samples. Sankhyā Indian J. Stat. Ser. B 1996, 58, 1–9.
  10. Fernandez, A.J. On estimating exponential parameters with general Type-II progressive censoring. J. Stat. Plan. Inference 2004, 121, 135–147.
  11. Peng, X.Y.; Yan, Z.Z. Bayesian estimation and prediction for the Inverse Weibull distribution under general progressive censoring. Commun. Stat. Theory Methods 2016, 45, 621–635.
  12. Soliman, A.A.; Al-Hossain, A.Y.; Al-Harbi, M.M. Predicting observables from Weibull model based on general progressive censored data with asymmetric loss. Stat. Methodol. 2011, 8, 451–461.
  13. Kim, C.; Han, K. Estimation of the scale parameter of the Rayleigh distribution under general progressive censoring. J. Korean Stat. Soc. 2009, 38, 239–246.
  14. Soliman, A.A. Estimations for Pareto model using general progressive censored data and asymmetric loss. Commun. Stat. Theory Methods 2008, 37, 1353–1370.
  15. Wang, B.X. Exact interval estimation for the scale family under general progressive Type-II censoring. Commun. Stat. Theory Methods 2012, 41, 4444–4452.
  16. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1977, 39, 1–38.
  17. Louis, T.A. Finding the observed information matrix when using the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1982, 44, 226–233.
  18. Wang, J.; Wang, X.R. The EM algorithm for the estimation of parameters under the general Type-II progressive censoring data. J. Anhui Norm. Univ. (Nat. Sci.) 2014, 37, 524–529.
  19. Efron, B.; Tibshirani, R. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat. Sci. 1986, 1, 54–75.
  20. Kayal, T.; Tripathi, Y.M.; Singh, D.P.; Rastogi, M.K. Estimation and prediction for Chen distribution with bathtub shape under progressive censoring. J. Stat. Comput. Simul. 2017, 87, 348–366.
  21. Kundu, D.; Kannan, N.; Balakrishnan, N. Analysis of progressively censored competing risks data. In Advances in Survival Analysis, Handbook of Statistics; Elsevier: New York, NY, USA, 2004; Volume 23, pp. 331–348.
  22. Aggarwala, R.; Balakrishnan, N. Some properties of progressively censored order statistics from arbitrary and uniform distributions with applications to inference and simulation. J. Stat. Plan. Inference 1998, 70, 35–49.
  23. Tierney, L.; Kadane, J.B. Accurate approximations for posterior moments and marginal densities. J. Am. Stat. Assoc. 1986, 81, 82–86.
  24. Jozani, M.J.; Marchand, É.; Parsian, A. Bayesian and robust Bayesian analysis under a general class of balanced loss functions. Stat. Pap. 2012, 53, 51–60.
  25. Bai, X.; Shi, Y.; Liu, Y.; Liu, B. Reliability estimation of stress–strength model using finite mixture distributions under progressively interval censoring. J. Comput. Appl. Math. 2019, 348, 509–524.
  26. Nichols, M.D.; Padgett, W. A bootstrap control chart for Weibull percentiles. Qual. Reliab. Eng. Int. 2006, 22, 141–151.
  27. Hand, D.J.; Daly, F.; McConway, K.; Lunn, D.; Ostrowski, E. A Handbook of Small Data Sets; Chapman & Hall: London, UK, 1994.
  28. Chen, Z. Parameter estimation of the Gompertz population. Biom. J. 1997, 39, 117–124.
Table 1. Estimates and MSEs under general progressive censoring schemes. For each scheme, the first and third rows give the average estimates of α and β, and the second and fourth rows give the corresponding MSEs (in parentheses).

n (r, m − r) CS | MLE | TK: SEL, BSEL (σ = 0.3), BSEL (σ = 0.7), LINEX (h = −2), LINEX (h = 2) | MH: SEL, BSEL (σ = 0.3), BSEL (σ = 0.7), LINEX (h = −2), LINEX (h = 2)
20 (0,11) H1 | 0.3658 | 0.3108, 0.3273, 0.3493, 0.3589, 0.2613 | 0.3130, 0.3288, 0.3500, 0.3294, 0.2894
 | (0.3800) | (0.004819), (0.04809), (0.1982), (0.03838), (0.009074) | (0.0078843), (0.003863), (0.0007096), (0.008166), (0.005391)
 | 1.581 | 1.180, 1.461, 1.443, 1.329, 1.048 | 1.180, 1.300, 1.461, 1.353, 1.104
 | (0.5589) | (0.03178), (0.1174), (0.3282), (0.07918), (0.02627) | (0.03099), (0.01518), (0.002789), (0.002789), (0.002788)
20 (2,11) H2 | 0.3642 | 0.3188, 0.3324, 0.3505, 0.4262, 0.2172 | 0.3236, 0.3358, 0.3520, 0.3328, 0.2864
 | (0.2977) | (0.004574), (0.03937), (0.1566), (0.0388), (0.007151) | (0.005837), (0.002860), (0.0005253), (0.006824), (0.005360)
 | 1.602 | 1.170, 1.300, 1.472, 1.412, 0.9907 | 1.157, 1.291, 1.469, 1.372, 1.067
 | (0.6573) | (0.02902), (0.1272), (0.3785), (0.07533), (0.02982) | (0.03075), (0.01507), (0.002768), (0.002768), (0.002768)
30 (2,15) H3 | 0.3541 | 0.3231, 0.3324, 0.3448, 0.3455, 0.2880 | 0.3212, 0.3311, 0.3442, 0.3414, 0.3013
 | (0.2357) | (0.005198), (0.03374), (0.1259), (0.03671), (0.005695) | (0.007422), (0.003637), (0.0006680), (0.007797), (0.004011)
 | 1.479 | 1.193, 1.279, 1.393, 1.330, 1.080 | 1.186, 1.274, 1.391, 1.311, 1.104
 | (0.3951) | (0.03355), (0.09828), (0.2429), (0.05929), (0.03756) | (0.03513), (0.01721), (0.003161), (0.003161), (0.003161)
30 (5,15) H4 | 0.3375 | 0.3189, 0.3245, 0.3319, 0.3846, 0.2314 | 0.3124, 0.3199, 0.3300, 0.3315, 0.2863
 | (0.1270) | (0.006982), (0.02480), (0.07280), (0.04221), (0.005941) | (0.007436), (0.003644), (0.0006692), (0.009107), (0.006218)
 | 1.408 | 1.225, 1.280, 1.353, 1.387, 1.089 | 1.216, 1.274, 1.351, 1.316, 1.175
 | (0.2745) | (0.04216), (0.08951), (0.1825), (0.05769), (0.06132) | (0.03791), (0.01858), (0.003412), (0.003412), (0.003412)
40 (3,20) H5 | 0.3260 | 0.3225, 0.3236, 0.3250, 0.2999, 0.3370 | 0.3210, 0.3225, 0.3245, 0.3263, 0.3030
 | (0.08369) | (0.007040), (0.01979), (0.05045), (0.03917), (0.004908) | (0.007466), (0.003658), (0.0006720), (0.01124), (0.008610)
 | 1.382 | 1.208, 1.260, 1.330, 1.265, 1.161 | 1.206, 1.259, 1.329, 1.319, 1.147
 | (0.2233) | (0.03806), (0.07655), (0.1507), (0.05024), (0.05376) | (0.03845), (0.01884), (0.003460), (0.003460), (0.003460)
40 (5,20) H6 | 0.3270 | 0.3239, 0.3248, 0.3261, 0.3360, 0.2854 | 0.3140, 0.3180, 0.3231, 0.3268, 0.2942
 | (0.07880) | (0.007134), (0.01915), (0.04781), (0.04173), (0.004668) | (0.009779), (0.004792), (0.0008801), (0.007355), (0.007396)
 | 1.350 | 1.211, 1.253, 1.308, 1.305, 1.135 | 1.233, 1.268, 1.315, 1.292, 1.169
 | (0.1814) | (0.03611), (0.06730), (0.1254), (0.04621), (0.06150) | (0.04281), (0.02098), (0.003853), (0.004146), (0.004146)
40 (0,25) H7 | 0.3153 | 0.3096, 0.3113, 0.3136, 0.4376, 0.1756 | 0.3026, 0.3064, 0.3115, 0.3075, 0.3023
 | (0.05276) | (0.007417), (0.01595), (0.03409), (0.05085), (0.004505) | (0.007637), (0.003742), (0.0006873), (0.007945), (0.006896)
 | 1.341 | 1.238, 1.269, 1.310, 1.415, 1.045 | 1.240, 1.271, 1.311, 1.326, 1.153
 | (0.1537) | (0.03891), (0.06481), (0.1107), (0.03442), (0.06736) | (0.04114), (0.02016), (0.003702), (0.003702), (0.003702)
40 (3,25) H8 | 0.3202 | 0.3189, 0.3192, 0.3198, 0.2732, 0.3418 | 0.3010, 0.3067, 0.3144, 0.3339, 0.3050
 | (0.03832) | (0.007207), (0.01349), (0.02594), (0.04694), (0.003930) | (0.008018), (0.003929), (0.0007216), (0.01114), (0.006569)
 | 1.299 | 1.216, 1.241, 1.274, 1.244, 1.216 | 1.215, 1.240, 1.274, 1.306, 1.159
 | (0.1220) | (0.03458), (0.05478), (0.08973), (0.03643), (0.06051) | (0.03461), (0.01696), (0.003115), (0.003115), (0.003115)
50 (2,30) H9 | 0.3130 | 0.3123, 0.3125, 0.3128, 0.3868, 0.2266 | 0.3166, 0.3155, 0.3141, 0.3200, 0.3026
 | (0.04112) | (0.008172), (0.01486), (0.02804), (0.04575), (0.003879) | (0.01427), (0.006994), (0.001285), (0.01180), (0.01074)
 | 1.310 | 1.234, 1.257, 1.287, 1.364, 1.104 | 1.223, 1.249, 1.284, 1.299, 1.182
 | (0.1154) | (0.03679), (0.05546), (0.08691), (0.02824), (0.07838) | (0.05966), (0.02923), (0.005369), (0.005369), (0.005369)
50 (5,30) H10 | 0.3079 | 0.3129, 0.3114, 0.3094, 0.3298, 0.3012 | 0.3120, 0.3108, 0.3091, 0.3127, 0.2990
 | (0.03577) | (0.007717), (0.01351), (0.02473), (0.04378), (0.002980) | (0.006585), (0.003226), (0.0005926), (0.008346), (0.007515)
 | 1.313 | 1.238, 1.261, 1.291, 1.280, 1.173 | 1.219, 1.247, 1.285, 1.311, 1.194
 | (0.1103) | (0.03658), (0.05427), (0.08375), (0.02626), (0.06414) | (0.03330), (0.01631), (0.002997), (0.002997), (0.002997)
Table 2. Interval estimates with confidence level of 95% for α and β. For each scheme, the first row corresponds to α and the second to β.

n (r, m − r) CS | Asymptotic AL, CP | Bootstrap-p AL, CP | Bootstrap-t AL, CP | HPD AL, CP
20 (0,11) H1 | 2.060, 0.77 | 1.570, 0.85 | 1.551, 0.84 | 0.4837, 0.83
 | 2.458, 0.94 | 2.839, 0.79 | 2.911, 0.80 | 1.388, 0.78
20 (2,11) H2 | 3.153, 0.73 | 2.202, 0.88 | 1.641, 0.84 | 0.4582, 0.87
 | 2.907, 0.95 | 3.189, 0.82 | 2.903, 0.85 | 1.186, 0.81
30 (2,15) H3 | 3.391, 0.78 | 1.691, 0.83 | 1.979, 0.82 | 0.6035, 0.87
 | 2.346, 0.94 | 2.552, 0.87 | 2.664, 0.84 | 1.139, 0.84
30 (5,15) H4 | 1.006, 0.83 | 0.9996, 0.84 | 0.9396, 0.80 | 0.4921, 0.84
 | 1.753, 0.95 | 1.944, 0.81 | 1.885, 0.86 | 0.8988, 0.86
40 (3,20) H5 | 1.001, 0.83 | 0.9348, 0.84 | 0.8464, 0.85 | 0.6066, 0.85
 | 1.755, 0.95 | 1.874, 0.81 | 1.740, 0.83 | 0.5879, 0.84
40 (5,20) H6 | 0.8690, 0.83 | 0.8693, 0.84 | 0.9051, 0.88 | 0.4502, 0.86
 | 1.596, 0.95 | 1.735, 0.86 | 1.770, 0.84 | 0.9282, 0.83
40 (0,25) H7 | 0.9111, 0.85 | 0.8762, 0.87 | 0.9412, 0.83 | 0.2881, 0.74
 | 1.450, 0.93 | 1.520, 0.82 | 1.488, 0.85 | 0.8589, 0.91
40 (3,25) H8 | 0.7631, 0.84 | 0.7806, 0.79 | 0.8429, 0.85 | 0.3531, 0.93
 | 1.354, 0.93 | 1.431, 0.85 | 1.465, 0.80 | 0.8237, 0.92
50 (2,30) H9 | 0.7248, 0.86 | 0.7310, 0.88 | 0.5984, 0.82 | 0.4184, 0.99
 | 1.257, 0.95 | 1.315, 0.87 | 1.283, 0.84 | 0.6921, 0.96
50 (5,30) H10 | 0.6995, 0.85 | 0.7021, 0.85 | 0.7254, 0.84 | 0.4984, 0.83
 | 1.227, 0.94 | 1.285, 0.83 | 1.308, 0.88 | 0.8684, 0.87
Table 3. Point prediction and 95% prediction interval with W = 10.

n (r, m − r) CS | v | Point Prediction | Interval Prediction
20 (0,11) H1 | 3 | 0.6797 | (7.802 × 10^−5, 1.991)
 | 7 | 1.437 | (0.6615, 3.181)
 | 10 | 2.407 | (1.954, 4.649)
20 (2,11) H2 | 3 | 0.7784 | (6.630 × 10^−5, 2.222)
 | 7 | 1.665 | (1.007, 3.402)
 | 10 | 2.717 | (2.345, 4.952)
30 (2,15) H3 | 3 | 0.6801 | (4.705 × 10^−5, 2.063)
 | 7 | 1.474 | (0.8233, 3.029)
 | 10 | 2.349 | (2.157, 4.415)
30 (5,15) H4 | 3 | 0.6176 | (6.116 × 10^−5, 1.881)
 | 7 | 1.325 | (0.9165, 2.787)
 | 10 | 2.155 | (2.011, 3.876)
40 (3,20) H5 | 3 | 0.6123 | (4.757 × 10^−5, 1.925)
 | 7 | 1.369 | (0.8673, 2.851)
 | 10 | 2.094 | (2.035, 3.937)
40 (5,20) H6 | 3 | 0.7092 | (4.432 × 10^−5, 2.074)
 | 7 | 1.445 | (1.059, 2.968)
 | 10 | 2.263 | (2.221, 4.123)
40 (0,25) H7 | 3 | 0.6147 | (5.143 × 10^−5, 1.905)
 | 7 | 1.318 | (0.8493, 2.812)
 | 10 | 2.089 | (2.032, 3.951)
40 (3,25) H8 | 3 | 0.6341 | (5.932 × 10^−5, 1.986)
 | 7 | 1.320 | (1.021, 2.838)
 | 10 | 2.096 | (2.106, 3.902)
50 (2,30) H9 | 3 | 0.6090 | (7.831 × 10^−5, 1.914)
 | 7 | 1.325 | (1.033, 2.783)
 | 10 | 2.025 | (2.092, 3.867)
50 (5,30) H10 | 3 | 0.5322 | (4.538 × 10^−5, 1.859)
 | 7 | 1.210 | (1.021, 2.651)
 | 10 | 1.918 | (2.025, 3.665)
Table 4. The MLEs and goodness-of-fit test results in Dataset 1.

Distribution | α̂_ML | β̂_ML | K-S | AIC | BIC
Gompertz | 0.0348201 | 1.07068 | 0.11122 | 180.177 | 184.556
GE | 9.19911 | 1.00755 | 0.15472 | 194.745 | 199.124
Inverse Weibull | 1.64805 | 3.22624 | 0.23042 | 246.390 | 250.769
Exponential | - | 0.362379 | 0.28615 | 267.989 | 270.178
Table 5. Estimates for α and β by the EM and TK methods in Dataset 1. For each CS, the first row gives estimates of α and the second estimates of β.

CS | MLE | SEL | BSEL σ=0.2 | σ=0.4 | σ=0.6 | σ=0.8 | LINEX h=−5 | h=−3 | h=5 | h=7
1 | 0.01946 | 0.02609 | 0.02476 | 0.02344 | 0.02211 | 0.02079 | 0.02346 | 0.02923 | 0.006137 | 0.008611
 | 1.227 | 1.199 | 1.205 | 1.210 | 1.216 | 1.221 | 1.290 | 1.142 | 1.734 | 1.671
2 | 0.03102 | 0.03786 | 0.03649 | 0.03512 | 0.03376 | 0.03239 | 0.03635 | 0.04326 | 0.01559 | 0.01855
 | 1.091 | 1.072 | 1.076 | 1.079 | 1.083 | 1.087 | 1.126 | 1.035 | 1.397 | 1.358
3 | 0.02936 | 0.03555 | 0.03431 | 0.03307 | 0.03183 | 0.03060 | 0.03419 | 0.04052 | 0.01519 | 0.01790
 | 1.091 | 1.074 | 1.077 | 1.081 | 1.084 | 1.088 | 1.123 | 1.039 | 1.377 | 1.340
Table 6. Estimates for α and β by the MH algorithm in Dataset 1. For each CS, the first row gives estimates of α and the second estimates of β.

CS | SEL | BSEL σ=0.2 | σ=0.4 | σ=0.6 | σ=0.8 | LINEX h=−5 | h=−3 | h=5 | h=7
1 | 0.03833 | 0.03456 | 0.03078 | 0.02701 | 0.02323 | 0.02529 | 0.02813 | 0.02342 | 0.02547
 | 1.061 | 1.095 | 1.128 | 1.161 | 1.194 | 1.264 | 1.228 | 1.145 | 1.090
2 | 0.04482 | 0.04206 | 0.03930 | 0.03654 | 0.03378 | 0.03903 | 0.03765 | 0.03707 | 0.03750
 | 1.013 | 1.029 | 1.044 | 1.060 | 1.075 | 1.115 | 1.095 | 1.020 | 0.9947
3 | 0.05173 | 0.04725 | 0.04278 | 0.03831 | 0.03383 | 0.03625 | 0.03591 | 0.03631 | 0.03775
 | 0.9878 | 1.008 | 1.029 | 1.050 | 1.071 | 1.114 | 1.101 | 1.022 | 0.9948
Table 7. Interval estimates with confidence level of 90% for α and β in Dataset 1. For each CS, the first row corresponds to α and the second to β.

CS | Asymptotic | Bootstrap-p | Bootstrap-t | HPD
1 | (0.001019, 0.03790) | (0.004187, 0.04863) | (0.004368, 0.04738) | (0.01946, 0.06289)
 | (0.9007, 1.554) | (0.9560, 1.740) | (0.9548, 1.743) | (0.7067, 1.227)
2 | (0.004108, 0.05794) | (0.009230, 0.06913) | (0.01021, 0.07103) | (0.02092, 0.09859)
 | (0.8435, 1.338) | (0.8692, 1.464) | (0.8682, 1.439) | (0.7160, 1.197)
3 | (0.004407, 0.05431) | (0.009369, 0.06255) | (0.009445, 0.06360) | (0.01217, 0.04114)
 | (0.8534, 1.329) | (0.8871, 1.436) | (0.8900, 1.434) | (0.9851, 1.321)
Table 8. Point prediction and 95% interval prediction with W = 6 in Dataset 1.

CS | v | Point Prediction | Interval Prediction
1 | 1 | 1.309 | (7.982 × 10^−5, 3.763)
 | 6 | 3.985 | (3.898, 5.566)
2 | 1 | 1.414 | (4.707 × 10^−5, 3.619)
 | 6 | 4.078 | (4.004, 5.696)
3 | 1 | 1.457 | (4.672 × 10^−5, 3.666)
 | 6 | 3.833 | (3.817, 6.542)
Table 9. Estimates for α and β by the EM and TK methods in Dataset 2. For each CS, the first row gives estimates of α and the second estimates of β.

CS | MLE | SEL | BSEL σ=0.2 | σ=0.4 | σ=0.6 | σ=0.8 | LINEX h=−3 | h=−1 | h=5 | h=7
s1 | 0.08360 | 0.1125 | 0.1067 | 0.1009 | 0.09516 | 0.08938 | 0.07764 | 0.1013 | 0.05868 | 0.06071
 | 0.02461 | 0.02383 | 0.02398 | 0.02414 | 0.02430 | 0.02446 | 0.02546 | 0.02360 | 0.02696 | 0.02680
s2 | 0.07455 | 0.09943 | 0.09446 | 0.08948 | 0.08450 | 0.07953 | 0.06870 | 0.08973 | 0.05189 | 0.05369
 | 0.02526 | 0.02449 | 0.02464 | 0.02480 | 0.02495 | 0.02510 | 0.02612 | 0.02426 | 0.02761 | 0.02745
Table 10. Estimates for α and β by the MH algorithm in Dataset 2. For each CS, the first row gives estimates of α and the second estimates of β.

CS | SEL | BSEL σ=0.2 | σ=0.4 | σ=0.6 | σ=0.8 | LINEX h=−3 | h=−1 | h=5 | h=7
s1 | 0.1171 | 0.1104 | 0.1037 | 0.09701 | 0.09031 | 0.09774 | 0.1044 | 0.09331 | 0.09782
 | 0.02331 | 0.02357 | 0.02383 | 0.02409 | 0.02435 | 0.02446 | 0.02401 | 0.02426 | 0.02381
s2 | 0.1045 | 0.09850 | 0.09251 | 0.08653 | 0.08054 | 0.09351 | 0.09069 | 0.08440 | 0.08858
 | 0.02405 | 0.02429 | 0.02453 | 0.02478 | 0.02502 | 0.02428 | 0.02447 | 0.02477 | 0.02445
Table 11. Interval estimates with confidence level of 90% for α and β in Dataset 2. For each CS, the first row corresponds to α and the second to β.

CS | Asymptotic | Bootstrap-p | Bootstrap-t | HPD
s1 | (0.001083, 0.1661) | (0.02024, 0.2246) | (0.01836, 0.2146) | (0.007255, 0.1017)
 | (0.01715, 0.03207) | (0.01784, 0.03746) | (0.01766, 0.03789) | (0.02008, 0.04409)
s2 | (0.0008985, 0.1482) | (0.01639, 0.1995) | (0.01641, 0.2031) | (0.02321, 0.1732)
 | (0.01781, 0.03270) | (0.01810, 0.03904) | (0.01820, 0.03875) | (0.01878, 0.03419)
Table 12. Point prediction and 95% interval prediction with W = 5 in Dataset 2.

CS | v | Point Prediction | Interval Prediction
s1 | 1 | 41.53805 | (6.196145 × 10^−5, 79.12604)
 | 5 | 136.6431 | (74.30713, 188.8303)
s2 | 1 | 43.12672 | (5.896611 × 10^−5, 94.66528)
 | 5 | 126.0958 | (86.76296, 169.7341)