1. Introduction
Censored data are a common feature of reliability and life testing studies. Experimenters often face test situations with time or cost constraints in which the removal of units is planned in advance, before failure. Time censoring (Type-I) and failure censoring (Type-II) are the most frequently used censoring schemes in life testing and reliability studies. A major weakness of these schemes is that they do not permit items to be withdrawn from the experiment at any point other than its end. As a result, the progressive Type-II censoring scheme (PT-II-CS), a more flexible and widely used censoring system, is employed.
In the PT-II-CS, $n$ items are placed on a test, and $m$ is a prefixed number of items to fail. At the time of the first failure, $R_1$ items are randomly withdrawn from the $n - 1$ surviving items. Likewise, at the time of the second failure, $R_2$ items of the remaining $n - R_1 - 2$ items are randomly withdrawn, and so on. At the time of the $m$th failure, all the remaining $R_m = n - m - R_1 - R_2 - \cdots - R_{m-1}$ items are removed; see Balakrishnan [
1] for more details.
Kundu and Joarder [
2] suggested a progressive Type-I hybrid censoring scheme (PT-I-HCS), in which
$n$ identical items are tested using a specified progressive censoring scheme and the test is ended at the random time $T^{*} = \min\{X_{m:m:n}, T\}$, where
$T$ is a predetermined time. The PT-I-HCS has the disadvantage that the effective sample size is random and might turn out to be very small; as a result, the statistical inference method may not be efficient. Ng et al. [
3] proposed an adaptive progressive Type-II hybrid censoring scheme (AP-II-HCS) to increase the efficiency of the statistical analysis. In the AP-II-HCS, the number of failures
$m$ is predetermined in advance, and the testing time is permitted to run over the time
$T$. Moreover, we have the progressive censoring scheme $(R_1, R_2, \ldots, R_m)$, but the values of some of the $R_i$ may be adjusted accordingly during the test. If the
$m$th failure happens before time
$T$, the test stops at this time and we have the usual PT-II-CS. On the other hand, if $X_{D:m:n} < T < X_{D+1:m:n}$, where $0 \le D < m$ and $X_{D:m:n}$ is the
$D$th failure time occurring before time
$T$, then we do not withdraw any surviving item from the test, by putting $R_{D+1} = \cdots = R_{m-1} = 0$ and $R_m^{*} = n - m - \sum_{i=1}^{D} R_i$. This setting ensures that we terminate the experiment when we reach the preferred number of failures
$m$, and that the total test time will not be too far from the ideal time
$T$. Let $X_{1:m:n} < X_{2:m:n} < \cdots < X_{m:m:n}$ be an adaptive progressive Type-II hybrid censored sample from a continuous population with probability density function (PDF) $f(x)$ and cumulative distribution function (CDF) $F(x)$ with progressive censoring scheme $(R_1, R_2, \ldots, R_m)$; then the likelihood function of the observed data takes the form

$$L \propto C \prod_{i=1}^{m} f(x_{i:m:n}) \prod_{i=1}^{D} \left[ 1 - F(x_{i:m:n}) \right]^{R_i} \left[ 1 - F(x_{m:m:n}) \right]^{R_m^{*}},$$
where
C is a constant that is independent of the parameters. Various studies based on the AP-II-HCS have been conducted; readers can refer to the findings of Hemmati and Khorram [
4], Nassar and Abo-Kasem [
5], Ateya and Mohammed [
6], Nassar et al. [
7], and Nassar et al. [
8] among many others.
The alpha power exponential (APE) distribution was introduced by Mahdavi and Kundu [
9] as a novel extension of the exponential distribution. They studied the APE distribution’s main characteristics and used the method of maximum likelihood to estimate the unknown parameters. They argued that the APE distribution has many desirable qualities: its PDF and hazard rate function (HRF) behave similarly to those of the Weibull, gamma, and exponentiated exponential distributions, so it may be regarded as an alternative to these well-known distributions. Furthermore, because the APE distribution’s CDF can be expressed in explicit form, it can be used to analyze censored data very easily. If
$X$ is a random variable that follows the APE distribution, its PDF and CDF may be represented, for $\alpha, \lambda > 0$ and $\alpha \neq 1$, as

$$f(x) = \frac{\lambda \log \alpha}{\alpha - 1}\, e^{-\lambda x}\, \alpha^{1 - e^{-\lambda x}}, \quad x > 0,$$

and

$$F(x) = \frac{\alpha^{1 - e^{-\lambda x}} - 1}{\alpha - 1}, \quad x > 0.$$

Here, $\alpha$ and $\lambda$ are the shape and scale parameters, respectively. The APE distribution’s reliability function (RF) and HRF are calculated as follows:

$$R(x) = 1 - F(x) = \frac{\alpha - \alpha^{1 - e^{-\lambda x}}}{\alpha - 1}$$

and

$$h(x) = \frac{f(x)}{R(x)} = \frac{\lambda \log \alpha\, e^{-\lambda x}\, \alpha^{1 - e^{-\lambda x}}}{\alpha - \alpha^{1 - e^{-\lambda x}}}.$$
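As a quick numerical illustration of the four functions above, the following sketch (in Python; the function names are our own and not part of the paper) evaluates the APE PDF, CDF, RF, and HRF and checks their internal consistency:

```python
import math

def ape_pdf(x, alpha, lam):
    """APE density f(x) = (lam*log(alpha)/(alpha-1)) * exp(-lam*x) * alpha**(1-exp(-lam*x))."""
    u = math.exp(-lam * x)
    return (lam * math.log(alpha) / (alpha - 1.0)) * u * alpha ** (1.0 - u)

def ape_cdf(x, alpha, lam):
    """APE distribution function F(x) = (alpha**(1-exp(-lam*x)) - 1) / (alpha - 1)."""
    return (alpha ** (1.0 - math.exp(-lam * x)) - 1.0) / (alpha - 1.0)

def ape_rf(x, alpha, lam):
    """Reliability function R(x) = 1 - F(x)."""
    return 1.0 - ape_cdf(x, alpha, lam)

def ape_hrf(x, alpha, lam):
    """Hazard rate function h(x) = f(x) / R(x)."""
    return ape_pdf(x, alpha, lam) / ape_rf(x, alpha, lam)

# Quick consistency checks at an arbitrary point (alpha = 2, lambda = 0.5):
a, l, t = 2.0, 0.5, 1.2
assert abs(ape_cdf(t, a, l) + ape_rf(t, a, l) - 1.0) < 1e-12
# F' should agree with f; verify by a central finite difference:
eps = 1e-6
deriv = (ape_cdf(t + eps, a, l) - ape_cdf(t - eps, a, l)) / (2 * eps)
assert abs(deriv - ape_pdf(t, a, l)) < 1e-6
```

The finite-difference check confirms that the PDF above is indeed the derivative of the explicit CDF, which is the property that makes censored-data likelihoods for this model easy to evaluate.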
Nassar et al. [
10] studied different classical estimation methods of the APE distribution using a complete sample. Salah [
11] investigated the estimation problems of the APE distribution under PT-II-CS using the maximum likelihood approach. Salah et al. [
12] used the maximum likelihood approach to study the point and interval estimates of the APE distribution based on Type-II hybrid censored data. These studies concentrated on estimating the APE distribution via classical approaches only, utilizing complete samples or some conventional censoring schemes. Investigating the estimation problems of the APE distribution using both classical and Bayesian procedures under a more flexible censoring scheme is the main core of the present study.
The originality of this study comes from the fact that, to the best of our knowledge, this is the first time the estimation problems of the APE distribution have been explored under an AP-II-HCS. Further, despite the various studies utilizing the APE distribution, none has investigated the Bayesian estimation of its parameters and reliability indices. For more information about the importance of estimating the reliability characteristics, one may refer to Xu et al. [
13], Luo et al. [
14], Hu and Chen [
15], and Chen and Ye [
16]. The key contribution of this study is threefold. First, we consider the estimation problems of the APE distribution under the AP-II-HCS using both classical and Bayesian approaches, filling the gap left by previous studies that utilized only classical approaches. Accordingly, the point and interval estimates of the unknown parameters as well as of the RF and HRF are investigated. The second aim is to find the optimal sampling scheme for the adaptive progressive Type-II hybrid censored APE distribution. The third is to create a policy for selecting the most suitable estimation method for the APE distribution based on the AP-II-HCS, as well as the optimal sampling scheme. In the Bayesian estimation, the estimators are acquired by employing the squared error loss function, the most commonly used symmetric loss function, which treats overestimation and underestimation equally. To evaluate the results, we perform a simulation study to test the behavior of the suggested approaches, and two data sets are used as examples.
The rest of the article is organized as follows: The classical inference of the APE distribution is discussed in
Section 2. The Bayesian estimation method is discussed in
Section 3.
Section 4 presents the results of a simulation investigation. In
Section 5, we provide different approaches for determining the best censoring scheme.
Section 6 examines two real data sets, and
Section 7 concludes the paper.
3. Bayesian Estimation
The Bayesian estimators of the unknown parameters $\alpha$ and $\lambda$, as well as of the RF and HRF, are derived in this section. The related credible intervals are also studied in addition to the point estimates. Compared with the maximum likelihood method, the Bayesian approach offers several advantages in statistical analysis. The Bayesian technique is very effective in reliability studies and many other fields where one of the significant challenges is the restricted availability of data. The Bayesian estimates are investigated in this paper under the assumption that the unknown parameters are independent and follow gamma prior distributions, i.e., $\alpha \sim \mathrm{Gamma}(a, b)$ and $\lambda \sim \mathrm{Gamma}(c, d)$. In this case, we can write the joint prior distribution of $\alpha$ and $\lambda$ as

$$\pi(\alpha, \lambda) \propto \alpha^{a-1} \lambda^{c-1} e^{-(b\alpha + d\lambda)}, \quad \alpha, \lambda > 0,$$

where $a$, $b$, $c$, and $d$ are the hyperparameters. The posterior distribution is the most significant part of the Bayesian analysis: it retains all the knowledge obtainable regarding the unknown parameters after observing the data. Based on the likelihood function in (
6) and the joint prior distribution in (
12), we can express the joint posterior distribution of $\alpha$ and $\lambda$ as follows

$$\pi(\alpha, \lambda \mid \mathbf{x}) = A^{-1}\, \pi(\alpha, \lambda)\, L(\alpha, \lambda \mid \mathbf{x}),$$
where
$A$ is the normalizing constant, given by

$$A = \int_{0}^{\infty} \int_{0}^{\infty} \pi(\alpha, \lambda)\, L(\alpha, \lambda \mid \mathbf{x})\, d\alpha\, d\lambda.$$
Based on a specific loss function, the Bayesian estimator of any function of $\alpha$ and $\lambda$, say $g(\alpha, \lambda)$, may be expressed as

$$\tilde g(\alpha, \lambda) = E\left[ g(\alpha, \lambda) \mid \mathbf{x} \right] = \int_{0}^{\infty} \int_{0}^{\infty} g(\alpha, \lambda)\, \pi(\alpha, \lambda \mid \mathbf{x})\, d\alpha\, d\lambda.$$
It is clear that calculating (
14) analytically is not attainable. As a result, we recommend using the Markov chain Monte Carlo (MCMC) approach to obtain the Bayesian estimates and, subsequently, to construct the Bayesian credible intervals. The full conditional posterior distributions of the unknown parameters are naturally required to produce samples using the MCMC approach. From (
13), the full conditional distributions for $\alpha$ and $\lambda$ may be stated as follows

$$\pi_1(\alpha \mid \lambda, \mathbf{x}) \propto \alpha^{a-1} e^{-b\alpha}\, L(\alpha, \lambda \mid \mathbf{x})$$

and

$$\pi_2(\lambda \mid \alpha, \mathbf{x}) \propto \lambda^{c-1} e^{-d\lambda}\, L(\alpha, \lambda \mid \mathbf{x}).$$

It is noted that the full conditional distributions of $\alpha$ and $\lambda$ in (
15) and (
16), respectively, cannot be expressed as well-known densities; therefore, generating $\alpha$ and $\lambda$ from these densities is not attainable by the standard methods. In this case, we generate the unknown parameters by using the Metropolis–Hastings algorithm, in which we consider the normal distribution as the proposal distribution in order to obtain the Bayesian estimates and to construct the credible intervals for the unknown parameters. To generate samples from (
15) and (
16), we offer the following steps of the Metropolis–Hastings algorithm:
- Step 1.
Set the start values of $(\alpha, \lambda)$, say $(\alpha^{(0)}, \lambda^{(0)})$.
- Step 2.
Put $j = 1$.
- Step 3.
Simulate a candidate $\alpha^{*}$ for (
15) from the normal proposal distribution $N(\alpha^{(j-1)}, \mathrm{var}(\hat\alpha))$.
- Step 4.
Compute the acceptance ratio:

$$r = \min\left[ 1, \frac{\pi_1(\alpha^{*} \mid \lambda^{(j-1)}, \mathbf{x})}{\pi_1(\alpha^{(j-1)} \mid \lambda^{(j-1)}, \mathbf{x})} \right].$$
- Step 5.
Simulate $u$, where $u \sim U(0, 1)$.
- Step 6.
If $u \le r$, put $\alpha^{(j)} = \alpha^{*}$; else, put $\alpha^{(j)} = \alpha^{(j-1)}$.
- Step 7.
Redo Steps 3–6 for $\lambda$ to obtain $\lambda^{(j)}$ from (
16).
- Step 8.
Obtain $R^{(j)}(t)$ and $h^{(j)}(t)$ as

$$R^{(j)}(t) = \frac{\alpha^{(j)} - \left(\alpha^{(j)}\right)^{1 - e^{-\lambda^{(j)} t}}}{\alpha^{(j)} - 1}$$

and

$$h^{(j)}(t) = \frac{\lambda^{(j)} \log\left(\alpha^{(j)}\right) e^{-\lambda^{(j)} t} \left(\alpha^{(j)}\right)^{1 - e^{-\lambda^{(j)} t}}}{\alpha^{(j)} - \left(\alpha^{(j)}\right)^{1 - e^{-\lambda^{(j)} t}}}.$$
- Step 9.
Set $j = j + 1$.
- Step 10.
Repeat Steps 3–9
$M$ times to get $\left(\alpha^{(j)}, \lambda^{(j)}, R^{(j)}(t), h^{(j)}(t)\right)$ for $j = 1, 2, \ldots, M$.
- Step 11.
Compute the Bayesian estimates of
$\alpha$,
$\lambda$,
$R(t)$, and
$h(t)$ under the squared error loss function, after a burn-in period of size $M_0$, as

$$\tilde\alpha = \frac{1}{M - M_0} \sum_{j = M_0 + 1}^{M} \alpha^{(j)},$$

and similarly for $\tilde\lambda$, $\tilde R(t)$, and $\tilde h(t)$.
- Step 12.
To obtain the highest posterior density (HPD) credible intervals of
$\alpha$,
$\lambda$,
$R(t)$, and
$h(t)$: First, order the MCMC samples of
$\alpha^{(j)}$,
$\lambda^{(j)}$,
$R^{(j)}(t)$, and
$h^{(j)}(t)$ for
$j = M_0 + 1, \ldots, M$ after burn-in as
$\alpha_{(1)} < \cdots < \alpha_{(M - M_0)}$;
$\lambda_{(1)} < \cdots < \lambda_{(M - M_0)}$;
$R_{(1)}(t) < \cdots < R_{(M - M_0)}(t)$, and
$h_{(1)}(t) < \cdots < h_{(M - M_0)}(t)$, respectively. Then, applying the approach proposed by Chen and Shao [
17], the
$100(1 - \gamma)\%$ two-sided HPD credible interval for the unknown parameter
$\alpha$ is given by

$$\left( \alpha_{(j^{*})},\; \alpha_{\left(j^{*} + \left[(1 - \gamma)(M - M_0)\right]\right)} \right),$$

where
$j^{*}$ is chosen such that

$$\alpha_{\left(j^{*} + \left[(1 - \gamma)(M - M_0)\right]\right)} - \alpha_{(j^{*})} = \min_{1 \le j \le (M - M_0) - \left[(1 - \gamma)(M - M_0)\right]} \left( \alpha_{\left(j + \left[(1 - \gamma)(M - M_0)\right]\right)} - \alpha_{(j)} \right).$$

The largest integer less than or equal to $x$ is denoted by $[x]$. The resulting interval is the HPD credible interval with the shortest length. The HPD credible intervals of $\lambda$, $R(t)$, and $h(t)$ may be easily obtained in a similar way.
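The steps above can be sketched programmatically. The following Python sketch (our own illustration, not the authors' R code) implements the componentwise random-walk Metropolis–Hastings updates of Steps 1–10 for a generic two-parameter log-posterior, together with the Chen–Shao shortest-interval HPD computation of Step 12; the stand-in log-posterior used in the demonstration is a product of two independent standard normals, which in practice would be replaced by the logarithm of the joint posterior in (13):

```python
import math
import random

def mh_two_param(log_post, start, prop_sd, M, rng):
    """Componentwise random-walk Metropolis-Hastings (Steps 1-10):
    each parameter is updated in turn with a normal proposal."""
    cur = list(start)
    chain = []
    for _ in range(M):
        for k in range(len(cur)):                         # Steps 3-7: alpha, then lambda
            prop = cur[:]
            prop[k] = rng.gauss(cur[k], prop_sd[k])       # normal proposal
            log_r = log_post(*prop) - log_post(*cur)      # Step 4, on the log scale
            if math.log(rng.random() + 1e-300) <= log_r:  # Steps 5-6: accept w.p. min(1, r)
                cur = prop
        chain.append(tuple(cur))
    return chain

def hpd_interval(samples, gamma=0.05):
    """Chen-Shao HPD (Step 12): shortest interval containing
    100(1-gamma)% of the ordered MCMC draws."""
    s = sorted(samples)
    M = len(s)
    k = int((1.0 - gamma) * M)                            # [(1-gamma)M]
    j_star = min(range(M - k), key=lambda j: s[j + k] - s[j])
    return s[j_star], s[j_star + k]

# Demonstration with a stand-in log-posterior (two independent N(0,1) factors).
rng = random.Random(2023)
chain = mh_two_param(lambda a, l: -0.5 * (a * a + l * l),
                     start=(0.0, 0.0), prop_sd=(1.0, 1.0), M=12000, rng=rng)
alpha_draws = [a for a, _ in chain[2000:]]                # burn-in of 2000, as in Section 4
lo, hi = hpd_interval(alpha_draws)
```

Working on the log scale avoids numerical underflow in the acceptance ratio, which matters when the likelihood involves products of many small density values as in (6).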
4. Monte Carlo Simulation
In this section, a Monte Carlo simulation study is used to examine the behavior of the suggested estimators of $\alpha$ and $\lambda$, as well as of $R(t)$ and $h(t)$. Based on given actual values of the parameters $(\alpha, \lambda)$, a large number (1000) of adaptive progressive Type-II hybrid censored samples are generated from the APE distribution using various combinations of $n$ (total test units), $m$ (effective sample size), and $T$ (threshold time point). The corresponding actual values of the RF and HRF are computed at a required mission time $t$. Further, different values of $n$ up to 100 are taken for each specified threshold time $T$. The test is ended when the number of failed subjects reaches a particular value $m$, determined by different choices of the failure ratio $m/n$.
Briefly, for given values of n, m and T, we clarify the procedure of generating adaptive Type-II progressive hybrid censored samples as follows:
- Step 1:
Using the algorithm outlined by Balakrishnan and Sandhu [
18], generate an ordinary progressive Type-II censored sample as follows:
- (a)
Create independent observations of size $m$ as $W_1, W_2, \ldots, W_m$ from the uniform $U(0, 1)$ distribution.
- (b)
For specific $n$, $m$, $T$, and $(R_1, R_2, \ldots, R_m)$, put
$V_i = W_i^{1 / \left(i + R_m + R_{m-1} + \cdots + R_{m-i+1}\right)}$, $i = 1, 2, \ldots, m$.
- (c)
Let $U_i = 1 - V_m V_{m-1} \cdots V_{m-i+1}$ for $i = 1, 2, \ldots, m$. Then, $U_1, U_2, \ldots, U_m$ is a PT-II-CS sample of size $m$ from the $U(0, 1)$ distribution.
- (d)
Set $X_i = F^{-1}(U_i; \alpha, \lambda)$, $i = 1, 2, \ldots, m$; then $X_1, X_2, \ldots, X_m$ is the generated progressively Type-II censored sample from the APE distribution.
- Step 2:
Determine $D$, where $X_{D:m:n} < T < X_{D+1:m:n}$, and remove the staying sample $X_{D+2:m:n}, \ldots, X_{m:m:n}$.
- Step 3:
From the truncated distribution $f(x) / \left[1 - F(x_{D+1:m:n})\right]$, generate the first $m - D - 1$ order statistics with sample size $n - D - 1 - \sum_{i=1}^{D} R_i$ as $X_{D+2:m:n}, \ldots, X_{m:m:n}$.
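The three steps above can be sketched as follows (a Python illustration under our own naming; the paper's computations were done in R). The APE quantile function needed in Step 1(d) follows from inverting the CDF: $F^{-1}(u) = -\lambda^{-1} \log\{1 - \log[1 + u(\alpha - 1)] / \log \alpha\}$:

```python
import math
import random

def ape_cdf(x, alpha, lam):
    return (alpha ** (1.0 - math.exp(-lam * x)) - 1.0) / (alpha - 1.0)

def ape_quantile(u, alpha, lam):
    # Inverse of the APE CDF (alpha != 1)
    return -math.log(1.0 - math.log(1.0 + u * (alpha - 1.0)) / math.log(alpha)) / lam

def progressive_t2_uniform(m, R, rng):
    """Step 1(a)-(c): Balakrishnan-Sandhu PT-II-CS sample from U(0,1)."""
    W = [rng.random() for _ in range(m)]
    V = [W[i - 1] ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    U = []
    prod = 1.0
    for i in range(1, m + 1):
        prod *= V[m - i]        # multiplies V_m, V_{m-1}, ..., V_{m-i+1}
        U.append(1.0 - prod)
    return U                    # increasing in i

def ape_adaptive_t2_hybrid(n, m, R, T, alpha, lam, rng):
    """Steps 1-3: adaptive progressive Type-II hybrid censored APE sample."""
    x = [ape_quantile(u, alpha, lam) for u in progressive_t2_uniform(m, R, rng)]
    if x[-1] <= T:
        return x                                  # usual PT-II-CS case
    D = sum(1 for xi in x if xi < T)              # failures observed before T
    x = x[:D + 1]                                 # Step 2: discard x_{D+2..m}
    n_rem = n - (D + 1) - sum(R[:D])              # survivors after (D+1)th failure
    FT = ape_cdf(x[-1], alpha, lam)               # truncation point F(x_{D+1})
    tail = sorted(ape_quantile(FT + rng.random() * (1.0 - FT), alpha, lam)
                  for _ in range(n_rem))
    return x + tail[:m - D - 1]                   # Step 3: first m-D-1 order stats
```

Note that the returned sample always has size $m$: the observations beyond $x_{D+1:m:n}$ are regenerated from the left-truncated APE distribution, mirroring Step 3.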
To see the effects of the priors on the Bayesian inference, besides the noninformative prior, say Prior 0: $a = b = c = d = 0$, we have used two different informative sets of the hyperparameters $(a, b, c, d)$, namely Prior 1 and Prior 2. Here, the hyperparameter values are selected in such a way that the prior mean becomes the expected value of the corresponding model parameter. It is clear that, when $a = b = c = d = 0$, the posterior distribution is proportional to the corresponding likelihood function; therefore, if one does not have prior information on the unknown parameters of interest, it is better to use the frequentist estimates instead of the Bayesian estimates because the latter are computationally more expensive. Using the Metropolis–Hastings algorithm described in
Section 3, we generate 12,000 MCMC samples with the first 2000 iterations discarded as a burn-in period. Thus, using the remaining 10,000 MCMC samples, the average Bayesian estimates and the associated 95% HPD credible intervals of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ are calculated.
To evaluate the performance of the removal designs, for each
$n$ and
$m$, we assume three different censoring schemes, denoted Schemes I–III, which allocate the $n - m$ planned removals at different stages of the test.
The performances of the different point estimates are evaluated based on the root mean squared error (RMSE) and the relative absolute bias (RAB), while the performances of the
$100(1 - \gamma)\%$ two-sided ACI/HPD credible interval estimates are examined using the average interval lengths (AILs). The average point estimate of any function of the unknown APE parameters
$\alpha$ and
$\lambda$ (say
$\varphi$), along with its RMSE and RAB, is calculated numerically as follows:

$$\overline{\hat\varphi} = \frac{1}{Q} \sum_{q=1}^{Q} \hat\varphi^{(q)}, \qquad \mathrm{RMSE}(\hat\varphi) = \sqrt{\frac{1}{Q} \sum_{q=1}^{Q} \left( \hat\varphi^{(q)} - \varphi \right)^{2}}, \qquad \mathrm{RAB}(\hat\varphi) = \frac{1}{Q} \sum_{q=1}^{Q} \frac{\left| \hat\varphi^{(q)} - \varphi \right|}{\varphi},$$

and

$$\mathrm{AIL}(\varphi) = \frac{1}{Q} \sum_{q=1}^{Q} \left( U_{\varphi}^{(q)} - L_{\varphi}^{(q)} \right),$$

where
$\hat\varphi^{(q)}$ is the desired estimate of the parametric function
$\varphi$ obtained from the
$q$th simulated sample,
$Q$ is the number of generated sequence data,
$\varphi$ stands for
$\alpha$,
$\lambda$,
$R(t)$, or
$h(t)$, and
$L_{\varphi}^{(q)}$ and
$U_{\varphi}^{(q)}$ refer to the lower and upper interval limits, respectively, of the
$100(1 - \gamma)\%$ asymptotic (or credible) interval of
$\varphi$.
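For concreteness, the three performance measures can be coded directly (a minimal Python sketch under our own naming; `estimates` holds the Q simulated point estimates and `intervals` the corresponding (lower, upper) pairs):

```python
import math

def rmse(estimates, true_value):
    """Root mean squared error of the Q simulated estimates."""
    return math.sqrt(sum((e - true_value) ** 2 for e in estimates) / len(estimates))

def rab(estimates, true_value):
    """Relative absolute bias: average of |estimate - true| / true."""
    return sum(abs(e - true_value) for e in estimates) / (len(estimates) * true_value)

def ail(intervals):
    """Average interval length of (lower, upper) interval estimates."""
    return sum(u - l for l, u in intervals) / len(intervals)
```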
All numerical computations were achieved using the R 4.0.4 software with two helpful packages, namely the ‘coda’ package suggested by Plummer et al. [
19] and the ‘maxLik’ package offered by Henningsen and Toomet [
20]. Recently, these packages were also recommended by Elshahhat and Nassar [
21] and Elshahhat and Rastogi [
22]. The average estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ with their RMSEs and RABs are obtained and displayed in
Table 1 and
Table 2. Moreover, the associated AILs are presented in
Table 3 and
Table 4.
We may make the following observations based on
Table 1,
Table 2,
Table 3 and
Table 4. In terms of minimum RMSEs, RABs, and AILs, the suggested estimates of the unknown parameters and reliability characteristics are generally very good. Furthermore, the behavior of the different estimates improves as
$n$ (or
$m$) grows. The same performance pattern is also seen when the failure ratio $m/n$ increases. Furthermore, when the total number of progressively censored items decreases, the RMSEs, RABs, and AILs of all estimates tend to decrease for fixed
$n$. As
$T$ increases, the RMSEs, RABs, and AILs associated with some of the estimates increase, whereas those associated with the others decrease.
Comparing Schemes I–III, it is observed that the RMSEs, RABs, and AILs of some of the estimates are greater under Scheme-I than under Scheme-III, whereas those of the others are smaller under Scheme-I than under Scheme-III. This result is due to the fact that the expected duration of the experiment using Scheme-I, where the remaining live items are removed at the first failure stage, is greater than under any other scheme; therefore, the data collected under Scheme-I provide more information about the unknown parameters than those acquired by Schemes II and III.
In terms of the smallest RMSEs, RABs, and AILs, the Bayesian estimates using the gamma informative priors perform better than the frequentist estimates since they incorporate prior knowledge. Furthermore, because Prior 2 has a smaller variance than Prior 1, the Bayesian point and interval estimates based on Prior 2 perform better than those based on Prior 1, while both are more informative than Prior 0. This result is due to the fact that, if prior information on $\alpha$ and $\lambda$ is not available, the posterior PDF reduces to being proportional to the corresponding likelihood function. In summary, the Bayesian inference of the unknown parameters of the APE lifetime model using the Metropolis–Hastings method is recommended.
5. Optimal Progressive Censoring Plan
In recent years, the statistical literature has focused on finding the best censoring scheme; see, for example, Chapter 10 of Balakrishnan and Aggarwala [
23], Ng et al. [
24], Balasooriya and Balakrishnan [
25], Balasooriya et al. [
26], and Pradhan and Kundu [
For specified
$n$ and
$m$, the possible censoring schemes refer to all likely combinations of $(R_1, R_2, \ldots, R_m)$ such that $\sum_{i=1}^{m} R_i = n - m$, and selecting the optimal sampling approach consists of choosing the progressive censoring scheme that gives the most information about the unknown parameters among all possible progressive censoring schemes. Practically, we would like to pick the censoring scheme that delivers the maximum information about the unknown parameters; see Elshahhat and Rastogi [
22] and Alotaibi et al. [
28] for more information. In our case, several widely used measures, offered in
Table 5, help us choose the optimal progressive censoring scheme.
Regarding the first criterion, our goal is to maximize the observed Fisher information value. In addition, regarding the second and third criteria, our goal is to minimize the determinant and the trace of the inverse of the observed Fisher information matrix, $I^{-1}(\hat\alpha, \hat\lambda)$, respectively. When dealing with single-parameter distributions, comparing multiple criteria is easy; however, when the distribution has several unknown parameters, the comparison of two Fisher information matrices is more difficult, since the determinant and trace criteria are not scale invariant—see Gupta and Kundu [
29]; however, the optimal censoring scheme of multi-parameter distributions can be chosen using the scale-invariant fourth and fifth criteria.
It is clear that the fourth criterion, which depends on the choice of $p$, tends to minimize the variance of the logarithm of the MLE of the
$p$th quantile, $\log \hat T_p$, where $0 < p < 1$. According to the fifth criterion, the weight function $w(p)$ is a non-negative function satisfying $\int_{0}^{1} w(p)\, dp = 1$; also, $\log \hat T_p$ is the same as in the fourth criterion. Without loss of generality, the weight function can be taken as $w(p) = 1$ for $0 < p < 1$. Hence, the logarithm of $\hat T_p$ for the APE distribution is given by

$$\log \hat T_p = \log\left\{ -\frac{1}{\hat\lambda} \log\left[ 1 - \frac{\log\left(1 + p(\hat\alpha - 1)\right)}{\log \hat\alpha} \right] \right\}, \quad 0 < p < 1.$$
From (
3), the delta method is used to obtain the approximated variance for $\log \hat T_p$ of the APE distribution as

$$\widehat{\mathrm{Var}}\left( \log \hat T_p \right) = \left[ \nabla \log \hat T_p \right]^{\top} I^{-1}(\hat\alpha, \hat\lambda) \left[ \nabla \log \hat T_p \right],$$

where
$\nabla \log \hat T_p = \left[ \frac{\partial \log T_p}{\partial \alpha}, \frac{\partial \log T_p}{\partial \lambda} \right]^{\top}$, evaluated at $(\hat\alpha, \hat\lambda)$. Finally, the optimal progressive censoring scheme corresponds to the highest value of the first criterion and the lowest values of the remaining criteria.
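To make the comparison of criteria concrete, the sketch below (Python; the helper names are our own) computes the determinant and trace criteria from a 2×2 observed Fisher information matrix, and the delta-method variance of $\log \hat T_p$ using a numerical gradient in place of the analytical partial derivatives:

```python
import math

def criteria(info):
    """Scheme-comparison criteria from a 2x2 observed Fisher information
    matrix I: maximize trace(I); minimize det(I^-1) and trace(I^-1)."""
    (a, b), (c, d) = info
    det_I = a * d - b * c
    return {"trace_I": a + d,             # to be maximized
            "det_inv": 1.0 / det_I,       # det(I^-1) = 1/det(I), minimized
            "trace_inv": (a + d) / det_I} # trace(I^-1), minimized

def log_Tp(alpha, lam, p):
    """Logarithm of the APE p-th quantile T_p."""
    return math.log(-math.log(1.0 - math.log(1.0 + p * (alpha - 1.0))
                              / math.log(alpha)) / lam)

def var_log_Tp(alpha, lam, p, inv_info, eps=1e-6):
    """Delta-method variance grad^T I^-1 grad, with a central-difference
    numerical gradient of log T_p with respect to (alpha, lambda)."""
    g1 = (log_Tp(alpha + eps, lam, p) - log_Tp(alpha - eps, lam, p)) / (2 * eps)
    g2 = (log_Tp(alpha, lam + eps, p) - log_Tp(alpha, lam - eps, p)) / (2 * eps)
    (v11, v12), (v21, v22) = inv_info
    return g1 * g1 * v11 + g1 * g2 * (v12 + v21) + g2 * g2 * v22
```

Under these conventions, the optimal scheme is the one with the largest `trace_I` and the smallest values of the remaining quantities.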