Article

Generalized Bayes Prediction Study Based on Joint Type-II Censoring

1. Department of Mathematics, College of Science, Taibah University, Al-Madinah Al-Munawarah 30002, Saudi Arabia
2. Department of Mathematics, Faculty of Science, Al-Azhar University, Nasr City 11884, Egypt
3. Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
4. Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Axioms 2023, 12(7), 716; https://doi.org/10.3390/axioms12070716
Submission received: 17 June 2023 / Revised: 12 July 2023 / Accepted: 20 July 2023 / Published: 23 July 2023

Abstract

In this paper, the problem of predicting future failure times based on a jointly type-II censored sample from $k$ exponential populations is considered. Bayesian prediction intervals and point predictors are then obtained. Generalized Bayes is a Bayesian analysis based on a learning rate parameter, and this study investigates the effect of the learning rate parameter on the prediction results. The squared error, Linex, and general entropy loss functions are used for the point predictors. Monte Carlo simulations were performed to show the effectiveness of the learning rate parameter in improving the results of the prediction intervals and point predictors.

1. Introduction and Motivations

Generalized Bayes is a Bayesian study based on a learning rate parameter $\eta > 0$ that enters as a power of the likelihood function $L(\theta; \text{data})$. The traditional Bayes framework is obtained for $\eta = 1$, and we demonstrate the effect of the learning rate parameter on the prediction results. That is, if the prior distribution of the parameter $\theta$ is $\pi(\theta)$, then the generalized Bayes posterior distribution of $\theta$ is
$$\pi^*(\theta \mid \text{data}) \propto L^{\eta}(\theta; \text{data})\,\pi(\theta), \qquad \theta \in \Theta, \quad \eta > 0.$$
For more details on the generalized Bayes method and the choice of the learning rate parameter, we refer the reader to [1,2,3,4,5,6,7,8,9,10,11]. In particular, the choice of the learning rate $\eta$ was studied in [3,4,5,6] via the so-called safe Bayes algorithm, which is based on minimizing a sequential risk measure. Another selection method considers the two information-matching strategies proposed in [7,8]. In addition, generalized Bayes estimation based on a jointly type-II censored sample from $k$ exponential populations, using different values of the learning rate parameter, was studied in [11]. An exact inference method based on maximum likelihood estimates (MLEs) was developed in [12], and its performance was compared with that of approximate, Bayesian, and bootstrap methods. Joint progressive type-II censoring, and the expected number of failures for two populations under this scheme, were introduced and studied in [13]. Exact likelihood inference for two exponential populations under joint progressive type-II censoring was studied in [14], and some exact results were obtained based on the maximum likelihood estimates developed in [15]. Exact likelihood inference for two populations of two-parameter exponential distributions under joint type-II censoring was studied in [16].
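The update rule above can be illustrated numerically on a discrete grid (a minimal sketch of our own, not part of the cited works; the function `generalized_posterior_grid` and the exponential-gamma example are hypothetical):

```python
import math

def generalized_posterior_grid(loglik, logprior, grid, eta):
    """Normalized generalized posterior pi*(theta) proportional to
    L(theta)^eta * pi(theta), evaluated on a grid of theta values.
    Computed in log space for numerical stability."""
    logw = [eta * loglik(t) + logprior(t) for t in grid]
    m = max(logw)                       # subtract the max before exponentiating
    w = [math.exp(v - m) for v in logw]
    s = sum(w)
    return [x / s for x in w]

# Exponential likelihood with M observed failures and total time on test u,
# combined with a Gam(a, b) prior on the rate theta (illustrative values).
M, u, a, b = 5, 10.0, 2.0, 1.0
loglik = lambda t: M * math.log(t) - u * t
logprior = lambda t: (a - 1.0) * math.log(t) - b * t
grid = [0.01 * i for i in range(1, 301)]
p1 = generalized_posterior_grid(loglik, logprior, grid, eta=1.0)  # traditional Bayes
p5 = generalized_posterior_grid(loglik, logprior, grid, eta=5.0)
```

Raising the likelihood to $\eta > 1$ concentrates the posterior around the data, which matches the closed-form $\mathrm{Gam}(\eta M + a,\ \eta u + b)$ posterior derived in Section 2.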
One might be interested in predicting future failures using a jointly type-II censored sample. To accomplish this, point predictors or prediction intervals must be determined. Bayesian prediction bounds for future observations based on certain distributions have been discussed by several authors. Bayesian estimation and prediction based on a jointly type-II censored sample from two exponential populations were presented in [17]. Various classical and Bayesian point predictors for future failures in the Weibull distribution under hybrid censoring were studied in [18]. Bayesian prediction based on generalized order statistics with multiply type-II censoring was developed in [19].
The main objective of this study is to predict future failures based on a joint type-II censoring scheme for $k$ exponential populations, where censoring is applied to the $k$ samples in a combined manner. Suppose that products from $k$ different lines are produced in the same factory, and $k$ independent samples of sizes $n_h$, $1 \le h \le k$, are selected from these lines and simultaneously placed on a life-testing experiment. To reduce the cost and duration of the experiment, the experimenter may decide to stop the life test when a certain number $r$ of failures has occurred. The nature of the problem and the distributions used in our study are presented below.
Suppose $\{X_{hn_h},\ h = 1, \dots, k\}$ are $k$ samples, where $X_{hn_h} = \{X_{h1}, X_{h2}, \dots, X_{hn_h}\}$ are the lifetimes of the $n_h$ units from product line $A_h$, assumed to be independent and identically distributed (iid) random variables from a population with probability density function (pdf) $f_h(x)$ and cumulative distribution function (cdf) $F_h(x)$.
Furthermore, let $N = \sum_{h=1}^{k} n_h$ be the total sample size and $r$ the total number of observed failures. Let $W_1 \le \cdots \le W_N$ denote the order statistics of the $N$ random variables $\{X_{hn_h},\ h = 1, \dots, k\}$. Under the joint type-II censoring scheme for the $k$ samples, the observable data then consist of $(\delta, W)$, where $W = (W_1, \dots, W_r)$, $W_i \in \{X_{h_i n_{h_i}},\ h_i = 1, \dots, k\}$, with $r < N$ a prefixed integer, and $\delta = (\delta_1^{(h)}, \dots, \delta_r^{(h)})$, associated with $(h_1, \dots, h_r)$, is defined by
$$\delta_i^{(h)} = \begin{cases} 1, & \text{if } h = h_i, \\ 0, & \text{otherwise.} \end{cases}$$
Letting $M_r^{(h)} = \sum_{i=1}^{r} \delta_i^{(h)}$ denote the number of $X_h$-failures in $W$, so that $r = \sum_{h=1}^{k} M_r^{(h)}$, the joint density function of $(\delta, w)$ is given by
$$f(\delta, w) = \prod_{h=1}^{k} c_r \left(\bar{F}_h(w_r)\right)^{n_h - M_r^{(h)}} \cdot \prod_{i=1}^{r} \prod_{h=1}^{k} \left(f_h(w_i)\right)^{\delta_i^{(h)}},$$
where $\bar{F}_h = 1 - F_h$ is the survival function of the $h$th population and $c_r = \dfrac{n_h!}{(n_h - M_r^{(h)})!}$.
For any continuous variables $Y_1 \le \cdots \le Y_n$, the joint density function of $Y_1, \dots, Y_r, Y_s$, $r < s \le n$, is given by
$$f(y_1, \dots, y_r, y_s) = \frac{n!}{(s-r-1)!\,(n-s)!} \left[\bar{F}(y_r) - \bar{F}(y_s)\right]^{s-r-1} \left(\bar{F}(y_s)\right)^{n-s} f(y_s) \prod_{i=1}^{r} f(y_i).$$
Here, $(W_1, \dots, W_r, W_s) \equiv (W, W_s)$ is linked with the discrete variables $(\delta_1, \dots, \delta_r, \delta_s) \equiv (\delta, \delta_s)$.
Then the joint density function of $(\delta, \delta_s, W, W_s)$, $r < s \le N$, is given by
$$f(\delta, \delta_s, w, w_s) = \sum_{Q_s} \prod_{h=1}^{k} c_{1h} \left\{\bar{F}_h(w_r) - \bar{F}_h(w_s)\right\}^{M_{rs}^{(h)}} \left(\bar{F}_h(w_s)\right)^{\bar{n}_{hs}} \left(f_h(w_s)\right)^{\delta_s^{(h)}} \prod_{i=1}^{r} \prod_{h=1}^{k} \left(f_h(w_i)\right)^{\delta_i^{(h)}},$$
where
$$w_r < w_s \le w_N, \quad M_{rs}^{(h)} = M_{s-1}^{(h)} - M_r^{(h)}, \quad \bar{n}_{hs} = n_h - M_s^{(h)}, \quad c_{1h} = \frac{n_h!}{M_{rs}^{(h)}!\,\bar{n}_{hs}!},$$
and $\sum_{Q_s} = \sum_{\delta_{r+1}=0}^{1} \cdots \sum_{\delta_s=0}^{1}$ with $Q_s = \{\delta^{(h)} = (\delta_{r+1}, \dots, \delta_s),\ \text{for } 1 \le h \le k\}$.
The conditional density function of $W_s$ given $(\delta, W) = (\delta, w)$ is
$$f(w_s \mid \delta, w) = \sum_{Q_s} \prod_{h=1}^{k} c_h \left(f_h(w_s)\right)^{\delta_s^{(h)}} \frac{\left\{\bar{F}_h(w_r) - \bar{F}_h(w_s)\right\}^{M_{rs}^{(h)}} \left(\bar{F}_h(w_s)\right)^{\bar{n}_{hs}}}{\left(\bar{F}_h(w_r)\right)^{\bar{n}_{hr}}} = \sum_{Q_s} \prod_{h=1}^{k} c_h \left(f_h(w_s)\right)^{\delta_s^{(h)}} \sum_{l_h=0}^{M_{rs}^{(h)}} a_{l_h} \frac{\left(\bar{F}_h(w_s)\right)^{\bar{n}_{hs}+l_h}}{\left(\bar{F}_h(w_r)\right)^{\bar{n}_{h(s-1)}+l_h}},$$
where
$$c_h = \frac{\bar{n}_{hr}!}{M_{rs}^{(h)}!\,\bar{n}_{hs}!}, \qquad a_{l_h} = (-1)^{l_h} \binom{M_{rs}^{(h)}}{l_h}.$$
In addition, when the $k$ populations are exponential, the pdf and cdf are given by
$$f_h(w) = \theta_h \exp(-\theta_h w), \qquad F_h(w) = 1 - \exp(-\theta_h w), \qquad w > 0,\ \theta_h > 0,\ 1 \le h \le k.$$
Then, the likelihood function in (3) becomes
$$f(\Theta, \delta, w) = \prod_{h=1}^{k} c_r \left\{\exp(-\theta_h w_r)\right\}^{\bar{n}_{hr}} \prod_{i=1}^{r} \prod_{h=1}^{k} \left\{\theta_h \exp(-\theta_h w_i)\right\}^{\delta_i^{(h)}} = \prod_{h=1}^{k} c_r\, \theta_h^{M_r^{(h)}} \exp\{-\theta_h u_h\},$$
where $\Theta = (\theta_1, \dots, \theta_k)$ and $u_h = \sum_{i=1}^{r} w_i \delta_i^{(h)} + w_r \bar{n}_{hr}$.
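In code, the counts $M_r^{(h)}$ and the statistics $u_h$ can be computed directly from the ordered sample and its population labels (a sketch; the function name and argument layout are our own):

```python
def sufficient_stats(w, labels, n):
    """Compute M_r^(h) and u_h = sum_i w_i * delta_i^(h) + w_r * (n_h - M_r^(h))
    for each of the k populations from a jointly type-II censored sample.
    w      : ordered observed failure times w_1 <= ... <= w_r
    labels : population index h_i in {1, ..., k} of each observed failure
    n      : list of the k sample sizes [n_1, ..., n_k]
    """
    k = len(n)
    M = [labels.count(h + 1) for h in range(k)]              # M_r^(h)
    u = [sum(wi for wi, h in zip(w, labels) if h == j + 1)   # observed X_j failures
         + w[-1] * (n[j] - M[j])                             # + w_r per survivor
         for j in range(k)]
    return M, u
```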
Substituting (6) into (5), we obtain the conditional density function of $W_s$ given $(\delta, W) = (\delta, w)$,
$$f(w_s \mid \delta, w) = \sum_{Q_s} \prod_{h=1}^{k} \theta_h^{\delta_s^{(h)}} \sum_{l_h=0}^{M_{rs}^{(h)}} C_h \exp\{-\theta_h D_h (w_s - w_r)\},$$
where $C_h = c_h a_{l_h}$, $D_h = \bar{n}_{h(s-1)} + l_h$, and $\bar{n}_{hs} + \delta_s^{(h)} = \bar{n}_{h(s-1)}$.
Some special cases of the conditional density function are described as follows:
  • Case 1:
Suppose that $k-1$ samples satisfy $M_r^{(h)} = n_h$, and only one of the $k$ samples, say $\{X_{qn_q}\}$, satisfies $M_r^{(q)} < n_q$, i.e., $\{w_r, \dots, w_N\} \subset \{X_{qn_q}\}$.
Under Case 1, the conditional density function of $W_s$ given $(\delta, W) = (\delta, w)$ becomes
$$f_1(w_s \mid \delta, w) = \frac{\bar{n}_{qr}!}{(\bar{n}_{qr} - s + r)!} \sum_{l_q=0}^{s-r-1} a_{l_q}\, \theta_q \exp\{-\theta_q D_q (w_s - w_r)\},$$
where $a_{l_q} = (-1)^{l_q} \binom{s-r-1}{l_q}$, $D_q = \bar{n}_{qr} - s + r + l_q + 1$, and $w_r < w_s \le x_{qn_q}$.
  • Case 2:
Suppose that $k-R$ samples satisfy $M_r^{(h_i)} = n_{h_i}$, while $R < k$ samples satisfy $M_r^{(h_j)} < n_{h_j}$, with $h_i \ne h_j$; equivalently, $W_r > \max\{X_{h_i n_{h_i}}\}$ for the exhausted samples, but $W_r \in \{X_{h_j n_{h_j}},\ h_j = 1, \dots, k;\ h_i \ne h_j\}$.
Under Case 2, considering only the $R$ samples, $q = 1, \dots, R$, the conditional density function of $W_s$ given $(\delta, W) = (\delta, w)$ becomes
$$f_2(w_s \mid \delta, w) = \sum_{Q_s} \prod_{q=1}^{R} \theta_q^{\delta_s^{(q)}} \sum_{l_q=0}^{M_{rs}^{(q)}} C_q \exp\{-\theta_q D_q (w_s - w_r)\},$$
where $Q_s = \{\delta^{(q)} = (\delta_{r+1}, \dots, \delta_s)\ \text{for } 1 \le q \le R\}$, $C_q = c_q a_{l_q}$, $D_q = \bar{n}_{q(s-1)} + l_q$, and $\bar{n}_{qs} + \delta_s^{(q)} = \bar{n}_{q(s-1)}$.
The remainder of this article is organized as follows: Section 2 presents the generalized Bayesian and Bayesian prediction points and intervals using squared error, Linex, and general entropy loss functions in the point predictor. A numerical study of the results from Section 2 is presented in Section 3. Finally, we conclude the paper in Section 4.

2. Generalized Bayes Prediction

In this section, we introduce generalized Bayesian prediction, that is, Bayesian prediction under the influence of a learning rate parameter $\eta > 0$. We first give a brief description of generalized Bayes and then present a scheme for predicting future failures based on jointly type-II censored samples from $k$ exponential distributions. The main goal is to obtain the point predictors and prediction intervals given at the end of this section.

2.1. Generalized Bayes

The parameters $\Theta$ are assumed to be unknown. We consider conjugate priors for $\Theta$, namely independent gamma prior distributions $\theta_h \sim \mathrm{Gam}(a_h, b_h)$. Hence, the joint prior distribution of $\Theta$ is given by
$$\pi(\Theta) = \prod_{h=1}^{k} \pi_h(\theta_h),$$
where
$$\pi_h(\theta_h) = \frac{b_h^{a_h}}{\Gamma(a_h)}\, \theta_h^{a_h - 1} \exp\{-b_h \theta_h\},$$
and $\Gamma(\cdot)$ denotes the complete gamma function.
Combining (7) and (11) after raising (7) to the power $\eta$, the joint posterior density function of $\Theta$ is then
$$\pi^*(\Theta \mid \delta, w) = \prod_{h=1}^{k} \frac{(u_h \eta + b_h)^{\eta M_r^{(h)} + a_h}}{\Gamma(\eta M_r^{(h)} + a_h)}\, \theta_h^{\eta M_r^{(h)} + a_h - 1} \exp\{-\theta_h (u_h \eta + b_h)\} = \prod_{h=1}^{k} \frac{\xi_h^{\mu_h}}{\Gamma(\mu_h)}\, \theta_h^{\mu_h - 1} \exp\{-\theta_h \xi_h\},$$
where $\xi_h = u_h \eta + b_h$ and $\mu_h = \eta M_r^{(h)} + a_h$.
Since $\pi_h$ is a conjugate prior with $\theta_h \sim \mathrm{Gam}(a_h, b_h)$, it follows that the posterior distribution of $(\theta_h \mid \delta, w)$ is $\mathrm{Gam}(\eta M_r^{(h)} + a_h,\ u_h \eta + b_h)$.
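The gamma posterior update under a learning rate $\eta$ is then a two-line computation (a sketch with hypothetical names):

```python
def generalized_gamma_posterior(M_h, u_h, a_h, b_h, eta):
    """Parameters of the generalized posterior theta_h | data ~ Gam(mu_h, xi_h):
    shape mu_h = eta * M_r^(h) + a_h and rate xi_h = eta * u_h + b_h, obtained
    by raising the exponential likelihood to the power eta before applying
    Bayes' rule with the Gam(a_h, b_h) prior."""
    mu = eta * M_h + a_h
    xi = eta * u_h + b_h
    return mu, xi

mu, xi = generalized_gamma_posterior(5, 10.0, 2.0, 1.0, eta=1.0)  # traditional Bayes
```

The posterior mean of $\theta_h$ is simply `mu / xi`, which for $\eta = 1$ reduces to the standard conjugate update.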

2.2. One Sample Prediction

A one-sample prediction scheme for jointly type-II censored samples from two exponential distributions was studied in [17], and three cases for the future failures were derived: in the first case, the future failure surely belongs to the $X_{1n_1}$ failures if $M_r^{(1)} < n_1$ and $M_r^{(2)} = n_2$; in the second case, it surely belongs to the $X_{2n_2}$ failures if $M_r^{(2)} < n_2$ and $M_r^{(1)} = n_1$; and in the third case, the sample to which the future failure belongs is unknown. Here, we generalize the results reported in [17] and examine two special cases in addition to the general case.
In the general case, every sample size exceeds the corresponding number of observed failures; that is, $M_r^{(h)} < n_h \Leftrightarrow w_r < x_{hn_h}$ for $h = 1, \dots, k$. The first special case arises when all future values (predictors) belong to only one sample and the observations of the remaining $k-1$ samples are all less than $w_r$. The second special case arises when all future values belong to some of the samples and all observations of the other samples are less than $w_r$. The forms of all functions related to the second special case are similar to those of the general case; therefore, we introduce only the general case and the first special case.
For the general case, to predict $W_s$ for $r < s \le N$ based on the observed data $(\delta, w)$, we use the conditional density function (9). Let us define the following integral:
$$I_h^{\delta_s^{(h)}} = \frac{\xi_h^{\mu_h}}{\Gamma(\mu_h)} \int_0^{\infty} \theta_h^{\mu_h - 1 + \delta_s^{(h)}} \exp\{-\theta_h [\xi_h + D_h (w_s - w_r)]\}\, d\theta_h = \begin{cases} \dfrac{\mu_h}{\xi_h}\left(1 + \dfrac{D_h (w_s - w_r)}{\xi_h}\right)^{-(\mu_h + 1)}, & \text{for } \delta_s^{(h)} = 1, \\[2ex] \left(1 + \dfrac{D_h (w_s - w_r)}{\xi_h}\right)^{-\mu_h}, & \text{for } \delta_s^{(h)} = 0, \end{cases}$$
since
$$\frac{a^b}{\Gamma(b)} \int_0^{\infty} x^{b} \exp\{-x(a+c)\}\, dx = \frac{a^b}{\Gamma(b)}\, \frac{\Gamma(b+1)}{(a+c)^{b+1}} = \frac{b}{a} \left(1 + \frac{c}{a}\right)^{-(b+1)},$$
$$\frac{a^b}{\Gamma(b)} \int_0^{\infty} x^{b-1} \exp\{-x(a+c)\}\, dx = \frac{a^b}{\Gamma(b)}\, \frac{\Gamma(b)}{(a+c)^{b}} = \left(1 + \frac{c}{a}\right)^{-b}.$$
Using (8), (13), and (14), the Bayesian predictive density function of $W_s$ given $(\delta, W) = (\delta, w)$ becomes
$$f_B(w_s \mid \delta, w) = \int_{\Theta} f(w_s \mid \delta, w)\, \pi^*(\Theta \mid \delta, w)\, d\Theta = \sum_{Q_s} \prod_{h=1}^{k} \sum_{l_h=0}^{M_{rs}^{(h)}} C_h\, \frac{\xi_h^{\mu_h}}{\Gamma(\mu_h)} \int_0^{\infty} \theta_h^{\mu_h - 1 + \delta_s^{(h)}} \exp\{-\theta_h [\xi_h + D_h (w_s - w_r)]\}\, d\theta_h = \sum_{Q_s} \prod_{h=1}^{k} \sum_{l_h=0}^{M_{rs}^{(h)}} C_h \left\{\sum_{\upsilon=1}^{k} \left(I_{\upsilon}^{1} \prod_{q=1,\, q \ne \upsilon}^{k} I_q^{0}\right)\right\},$$
where
$$\prod_{h=1}^{k} \int_0^{\infty} f(\theta_h)\, d\theta_h = \int_0^{\infty} \cdots \int_0^{\infty} f(\theta_1) \cdots f(\theta_k)\, d\theta_1 \cdots d\theta_k.$$
Under Case 1, the Bayesian predictive density function of $W_s$ given $(\delta, W) = (\delta, w)$ becomes
$$f_{1B}(w_s \mid \delta, w) = \frac{\bar{n}_{qr}!\, \mu_q}{(\bar{n}_{qr} - s + r)!\, \xi_q} \sum_{l_q=0}^{s-r-1} a_{l_q} \left(1 + \frac{D_q (w_s - w_r)}{\xi_q}\right)^{-(\mu_q + 1)},$$
where $\xi_q = u_q \eta + b_q$, $\mu_q = \eta M_r^{(q)} + a_q$, and $w_r < w_s \le x_{qn_q}$.
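The Case 1 predictive density is a finite alternating sum and can be evaluated directly; the following sketch (hypothetical function name, mirroring the formula term by term) may be useful:

```python
import math

def case1_pred_density(ws, wr, s, r, nbar, mu, xi):
    """Evaluate f_1B(w_s | delta, w) for Case 1, where nbar = n_q - M_r^(q) is
    the number of surviving units in sample q and (mu, xi) are the generalized
    posterior gamma parameters mu_q, xi_q."""
    const = math.factorial(nbar) * mu / (math.factorial(nbar - s + r) * xi)
    total = 0.0
    for l in range(s - r):                        # l = 0, ..., s - r - 1
        a_l = (-1) ** l * math.comb(s - r - 1, l)
        D = nbar - s + r + l + 1
        total += a_l * (1.0 + D * (ws - wr) / xi) ** (-(mu + 1.0))
    return const * total
```

For $s = r + 1$ the sum has a single term and the density reduces to a Lomax (Pareto type II) form, so a numerical check that it integrates to one over $(w_r, \infty)$ is straightforward.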

2.3. Bayesian Point Predictors

For the point predictor, we considered three types of loss functions:
(i).
The squared error (SE) loss function, which is symmetric, is given by
$$L_{SE}(\varphi^*, \varphi) \propto (\varphi^* - \varphi)^2,$$
where $\varphi^*$ is an estimate of $\varphi$.
(ii).
The Linex loss function, which is asymmetric, is given by
$$L_{L}(\varphi^*, \varphi) \propto e^{\tau(\varphi^* - \varphi)} - \tau(\varphi^* - \varphi) - 1, \qquad \tau \ne 0.$$
(iii).
The general entropy (GE) loss function is
$$L_{GE}(\varphi^*, \varphi) \propto \left(\frac{\varphi^*}{\varphi}\right)^c - c \ln\left(\frac{\varphi^*}{\varphi}\right) - 1, \qquad c \ne 0.$$
It is worth noting that the Bayes estimates under the GE loss function coincide with those under the SE loss function when $c = -1$, whereas for $c = 1$ and $c = 2$ the Bayes estimates under GE become those under the weighted squared error loss function and the precautionary loss function, respectively.
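Given draws from the predictive distribution of $W_s$, the three predictors can also be approximated by Monte Carlo averages (a sketch of our own; `draws` is a hypothetical sample):

```python
import math

def point_predictors(draws, tau, c):
    """Monte Carlo versions of the three Bayes predictors of W_s:
    SE    -> posterior predictive mean,
    Linex -> -(1/tau) * ln E[exp(-tau * W_s)],
    GE    -> [E(W_s^(-c))]^(-1/c)."""
    m = len(draws)
    w_sp = sum(draws) / m
    w_lp = -math.log(sum(math.exp(-tau * w) for w in draws) / m) / tau
    w_ep = (sum(w ** (-c) for w in draws) / m) ** (-1.0 / c)
    return w_sp, w_lp, w_ep
```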
Now, the Bayesian point predictors of $W_s$, $r < s \le N$, under the SE, Linex, and GE loss functions can be obtained using the predictive density function (15); they are denoted by $W_{SP}$, $W_{LP}$, and $W_{EP}$, respectively, and are given as follows:
$$W_{SP} = E(W_s \mid \delta, w) = \int_0^{\infty} w_s f_B(w_s \mid \delta, w)\, dw_s = \sum_{Q_s} \prod_{h=1}^{k} \sum_{l_h=0}^{M_{rs}^{(h)}} C_h \left\{\sum_{\upsilon=1}^{k} \int_0^{\infty} w_s \left(I_{\upsilon}^{1} \prod_{q=1,\, q \ne \upsilon}^{k} I_q^{0}\right) dw_s\right\},$$
$$W_{LP} = -\frac{1}{\tau} \ln\left[E\left(e^{-\tau W_s} \mid \delta, w\right)\right] = -\frac{1}{\tau} \ln\left[\int_0^{\infty} e^{-\tau w_s} f_B(w_s \mid \delta, w)\, dw_s\right] = -\frac{1}{\tau} \ln\left[\sum_{Q_s} \prod_{h=1}^{k} \sum_{l_h=0}^{M_{rs}^{(h)}} C_h \left\{\sum_{\upsilon=1}^{k} \int_0^{\infty} e^{-\tau w_s} \left(I_{\upsilon}^{1} \prod_{q=1,\, q \ne \upsilon}^{k} I_q^{0}\right) dw_s\right\}\right],$$
$$W_{EP} = \left[E\left(W_s^{-c} \mid \delta, w\right)\right]^{-1/c} = \left[\int_0^{\infty} w_s^{-c} f_B(w_s \mid \delta, w)\, dw_s\right]^{-1/c} = \left[\sum_{Q_s} \prod_{h=1}^{k} \sum_{l_h=0}^{M_{rs}^{(h)}} C_h \left\{\sum_{\upsilon=1}^{k} \int_0^{\infty} w_s^{-c} \left(I_{\upsilon}^{1} \prod_{q=1,\, q \ne \upsilon}^{k} I_q^{0}\right) dw_s\right\}\right]^{-1/c}.$$
Under Case 1, $W_{SP}$, $W_{LP}$, and $W_{EP}$ are, respectively, given by
$$W_{SP} = \frac{\bar{n}_{qr}!\, \mu_q}{(\bar{n}_{qr} - s + r)!\, \xi_q} \sum_{l_q=0}^{s-r-1} a_{l_q} \int_0^{\infty} w_s \left(1 + \frac{D_q (w_s - w_r)}{\xi_q}\right)^{-(\mu_q + 1)} dw_s,$$
$$W_{LP} = -\frac{1}{\tau} \ln\left[\frac{\bar{n}_{qr}!\, \mu_q}{(\bar{n}_{qr} - s + r)!\, \xi_q} \sum_{l_q=0}^{s-r-1} a_{l_q} \int_0^{\infty} e^{-\tau w_s} \left(1 + \frac{D_q (w_s - w_r)}{\xi_q}\right)^{-(\mu_q + 1)} dw_s\right],$$
$$W_{EP} = \left[\frac{\bar{n}_{qr}!\, \mu_q}{(\bar{n}_{qr} - s + r)!\, \xi_q} \sum_{l_q=0}^{s-r-1} a_{l_q} \int_0^{\infty} w_s^{-c} \left(1 + \frac{D_q (w_s - w_r)}{\xi_q}\right)^{-(\mu_q + 1)} dw_s\right]^{-1/c}.$$
The above equations are solved numerically to obtain the predictors W S P , W L P , and   W E P .
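As an illustration of such a numerical solution, the Case 1 predictor $W_{SP}$ can be obtained by composite-trapezoid integration of $w_s f_{1B}(w_s \mid \delta, w)$ (a self-contained sketch; the names and the truncation span are our own choices):

```python
import math

def w_sp_case1(wr, s, r, nbar, mu, xi, span=1000.0, n=100000):
    """Squared-error point predictor W_SP = E(W_s | data) under Case 1, by
    trapezoid integration of w_s * f_1B over (wr, wr + span).
    nbar = n_q - M_r^(q); (mu, xi) are the posterior gamma parameters."""
    const = math.factorial(nbar) * mu / (math.factorial(nbar - s + r) * xi)

    def f1B(ws):                                  # Case 1 predictive density
        tot = 0.0
        for l in range(s - r):
            a_l = (-1) ** l * math.comb(s - r - 1, l)
            D = nbar - s + r + l + 1
            tot += a_l * (1.0 + D * (ws - wr) / xi) ** (-(mu + 1.0))
        return const * tot

    h = span / n
    total = 0.5 * (wr * f1B(wr) + (wr + span) * f1B(wr + span))
    for i in range(1, n):
        w = wr + i * h
        total += w * f1B(w)
    return total * h
```

For $s = r + 1$ and $\mu_q > 1$ the integral has the closed form $w_r + \xi_q / (\bar{n}_{qr}(\mu_q - 1))$, which makes a convenient check of the quadrature.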

2.4. Prediction Interval

The predictive survival function of $W_s$ is given by
$$\bar{F}_B(t) = P(W_s > t \mid \delta, w) = \int_t^{\infty} f_B(w_s \mid \delta, w)\, dw_s = \sum_{Q_s} \prod_{h=1}^{k} \sum_{l_h=0}^{M_{rs}^{(h)}} C_h \left[\sum_{\upsilon=1}^{k} \int_t^{\infty} \left(I_{\upsilon}^{1} \prod_{q=1,\, q \ne \upsilon}^{k} I_q^{0}\right) dw_s\right].$$
Numerical integration is required to obtain the predictive survival function in Equation (23).
In Case 1, the predictive survival function of $W_s$ is given by
$$\bar{F}_{1B}(t) = P(W_s > t \mid \delta, w) = \int_t^{\infty} f_{1B}(w_s \mid \delta, w)\, dw_s = \frac{\bar{n}_{qr}!}{(\bar{n}_{qr} - s + r)!} \sum_{l_q=0}^{s-r-1} \frac{a_{l_q}}{D_q} \left(1 + \frac{D_q (t - w_r)}{\xi_q}\right)^{-\mu_q}.$$
The Bayesian predictive bounds of a two-sided equi-tailed $100(1-\gamma)\%$ interval for $W_s$, $r < s \le N$, can be obtained by solving the following two equations numerically:
$$\bar{F}(L \mid \delta, w) = 1 - \frac{\gamma}{2}, \qquad \bar{F}(U \mid \delta, w) = \frac{\gamma}{2}.$$
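Under Case 1, where the predictive survival function (24) is available in closed form, the two equations can be solved by bisection (a sketch; the function names are hypothetical):

```python
import math

def case1_pred_bounds(wr, s, r, nbar, mu, xi, gamma=0.05):
    """Equi-tailed 100(1 - gamma)% Bayesian prediction bounds (L, U) for W_s
    under Case 1, solving F1B(L) = 1 - gamma/2 and F1B(U) = gamma/2, where
    F1B is the Case 1 predictive survival function."""
    const = math.factorial(nbar) / math.factorial(nbar - s + r)

    def surv(t):                                  # P(W_s > t | data), decreasing in t
        tot = 0.0
        for l in range(s - r):
            a_l = (-1) ** l * math.comb(s - r - 1, l)
            D = nbar - s + r + l + 1
            tot += (a_l / D) * (1.0 + D * (t - wr) / xi) ** (-mu)
        return const * tot

    def solve(p, lo, hi):                         # bisection on a decreasing function
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if surv(mid) > p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    hi = wr + 1.0
    while surv(hi) > gamma / 2.0:                 # expand until the tail is bracketed
        hi *= 2.0
    return solve(1.0 - gamma / 2.0, wr, hi), solve(gamma / 2.0, wr, hi)
```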

3. Numerical Study

In this section, a Monte Carlo simulation study is conducted to evaluate the performance of the prediction methods derived in the previous section, and an example is presented to illustrate them.

3.1. Simulation Study

We considered three samples from three populations with $(n_1, n_2, n_3, r)$ equal to $(10, 10, 10, 25)$ and $(15, 15, 15, 40)$. In Case 1, we chose the exponential parameters $(\theta_1, \theta_2, \theta_3) = (2, 1, 0.1)$ based on the hyperparameters $\Delta = (a_1, b_1, a_2, b_2, a_3, b_3)$, where $\Delta = \Delta_1 = (2, 1, 2, 2, 1, 10)$.
In the general case, we chose the exponential parameters $(\theta_1, \theta_2, \theta_3) = (2, 2.5, 3)$ based on the hyperparameters $\Delta = \Delta_2 = (2, 1, 5, 2, 3, 1)$.
For the generalized Bayesian study, three values of the learning rate parameter were chosen, $\eta = 1, 2, 5$, and 10,000 repetitions were used for the Monte Carlo simulations.
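The simulation step that generates one jointly type-II censored sample can be sketched as follows (our implementation; the function name is hypothetical):

```python
import random

def joint_type2_sample(theta, n, r, rng=random):
    """Draw one jointly type-II censored sample from k exponential populations:
    generate n_h lifetimes with rate theta_h for each population, pool and sort
    all N lifetimes, and keep the first r order statistics with their labels."""
    pooled = []
    for h, (rate, size) in enumerate(zip(theta, n), start=1):
        pooled += [(rng.expovariate(rate), h) for _ in range(size)]
    pooled.sort()
    w = [t for t, _ in pooled[:r]]
    labels = [h for _, h in pooled[:r]]
    return w, labels
```

Averaging the resulting order statistics over repeated draws yields tables of mean observed values such as Table 1, Table 2, Table 3 and Table 4.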
The mean observed values of the three generated samples $X_1$, $X_2$, and $X_3$, and their joint sample $W$, over 10,000 repetitions, are presented in Table 1, Table 2, Table 3 and Table 4, where the underlined values are greater than $w_r$.
We notice from Table 1 and Table 2 that the future values come only from sample $X_3$, whereas Table 3 and Table 4 show that the future values come from all three samples.
For $(n_1, n_2, n_3, r) = (10, 10, 10, 25)$ under Case 1, we use (20), (21), and (22) to calculate the mean squared prediction errors (MSPEs) of the point predictors $W_{SP}$, $W_{LP}$, and $W_{EP}$ for $s = 26, \dots, 30$, with $\tau = 0.1, 0.5$ and $c = 0.1, 0.5$; the results are presented in Table 5.
The MSPEs for the general case are calculated using (17), (18), and (19) and are shown in Table 6.
For $(n_1, n_2, n_3, r) = (10, 10, 10, 25)$ and $(15, 15, 15, 40)$, the prediction bounds of $W_s$ for $s = 26, \dots, 30$ and $s = 41, \dots, 45$, respectively, are calculated using (24) and (25) under Case 1 and are presented in Table 7.
Table 8 presents the prediction bounds for the general case, computed using (23) and (25).

3.2. Illustrative Example

To illustrate the usefulness of the results developed in the previous sections, we consider three samples of size $n_1 = n_2 = n_3 = 10$ from Nelson's data (groups 1, 4, and 5) corresponding to the breakdown of an insulating fluid subjected to a high-stress load (see [20], p. 462). These breakdown times, referred to here as samples $X_i$, $i = 1, 2, 3$, are jointly type-II censored in the form $(w, h_i)$ with $r = 24$ and are shown in Table 9.
Using (17), (18), (19), (23), and (25), the MSPEs of the point predictors and the prediction intervals of $w_s$, $s = 25, \dots, 30$, are calculated and presented in Table 10, using $\eta = 1, 2, 5$, $\Delta = \Delta_3 = (1, 2.6, 1, 2, 1, 3)$, $\tau = 0.1, 0.5$, and $c = 0.1, 0.5$.

4. Conclusions

In this study, we examined the effect of the learning rate parameter on prediction results and used Monte Carlo simulations to show its effectiveness in improving prediction intervals and point predictors. Formally, we considered a joint type-II censoring scheme in which the lifetimes of the three populations have exponential distributions. We determined the MSPEs of the point predictors and the prediction intervals using different values of the learning rate parameter $\eta$ and of the loss parameters, in both the simulation study and the illustrative example. All tables in this prediction study show that the results improve as the loss parameters $c$ and $\tau$ and the learning rate parameter $\eta$ increase. In the simulation study, a comparison of Table 5 and Table 6 shows that the results in Table 5 are better, and the prediction intervals in Table 8 are shorter than those in Table 7, because the observed values used in Table 7 are larger than those used in Table 8. The results of the illustrative example likewise improve with larger values of the loss parameters and the learning rate parameter. We therefore conclude that the prediction results improve as the learning rate parameter increases. However, in both studies, the lengths of the prediction intervals increase for larger future lifetimes. It would be interesting to examine this work under a different type of censoring.

Author Contributions

Conceptualization, Y.A.-A.; methodology, Y.A.-A. and M.K.; software, G.A.; validation, G.A.; formal analysis, M.K.; resources, Y.A.-A.; writing—original draft, Y.A.-A. and M.K.; writing—review & editing, G.A.; supervision, M.K.; project administration, Y.A.-A. and G.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data used to support the findings of this study are included in the article.

Acknowledgments

The authors thank the three anonymous reviewers and the editor for their constructive criticism and valuable suggestions, which have greatly improved the presentation and explanations in this article. The authors extend their sincere appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bissiri, P.G.; Holmes, C.C.; Walker, S.G. General framework for updating belief distributions. J. R. Stat. Soc. Ser. B Stat. Methodol. 2016, 78, 1103–1130.
  2. Miller, J.W.; Dunson, D.B. Robust Bayesian inference via coarsening. J. Am. Stat. Assoc. 2019, 114, 1113–1125.
  3. Grünwald, P. The safe Bayesian: Learning the learning rate via the mixability gap. In Algorithmic Learning Theory; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7568, pp. 169–183.
  4. Grünwald, P.; van Ommen, T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Anal. 2017, 12, 1069–1103.
  5. Grünwald, P. Safe probability. J. Stat. Plan. Inference 2018, 195, 47–63.
  6. De Heide, R.; Kirichenko, A.; Grünwald, P.; Mehta, N. Safe-Bayesian generalized linear regression. Proc. Mach. Learn. Res. 2020, 106, 2623–2633.
  7. Holmes, C.C.; Walker, S.G. Assigning a value to a power likelihood in a general Bayesian model. Biometrika 2017, 104, 497–503.
  8. Lyddon, S.P.; Holmes, C.C.; Walker, S.G. General Bayesian updating and the loss-likelihood bootstrap. Biometrika 2019, 106, 465–478.
  9. Martin, R. Invited comment on the article by van der Pas, Szabó, and van der Vaart. Bayesian Anal. 2017, 12, 1254–1258.
  10. Martin, R.; Ning, B. Empirical priors and coverage of posterior credible sets in a sparse normal mean model. Sankhya Ser. A 2020, 82, 477–498.
  11. Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes estimation based on a joint type-II censored sample from k-exponential populations. Mathematics 2023, 11, 2190.
  12. Balakrishnan, N.; Rasouli, A. Exact likelihood inference for two exponential populations under joint Type-II censoring. Comput. Stat. Data Anal. 2008, 52, 2725–2738.
  13. Parsi, S.; Bairamov, I. Expected values of the number of failures for two populations under joint Type-II progressive censoring. Comput. Stat. Data Anal. 2009, 53, 3560–3570.
  14. Rasouli, A.; Balakrishnan, N. Exact likelihood inference for two exponential populations under joint progressive type-II censoring. Commun. Stat. Theory Methods 2010, 39, 2172–2191.
  15. Su, F. Exact Likelihood Inference for Multiple Exponential Populations under Joint Censoring. Ph.D. Thesis, McMaster University, Hamilton, ON, Canada, 2013.
  16. Abdel-Aty, Y. Exact likelihood inference for two populations from two-parameter exponential distributions under joint Type-II censoring. Commun. Stat. Theory Methods 2017, 46, 9026–9041.
  17. Shafay, A.R.; Balakrishnan, N.; Abdel-Aty, Y. Bayesian inference based on a jointly type-II censored sample from two exponential populations. J. Stat. Comput. Simul. 2014, 84, 2427–2440.
  18. Asgharzadeh, A.; Valiollahi, R.; Kundu, D. Prediction for future failures in Weibull distribution under hybrid censoring. J. Stat. Comput. Simul. 2013, 85, 824–838.
  19. Abdel-Aty, Y.; Franz, J.; Mahmoud, M.A.W. Bayesian prediction based on generalized order statistics using multiply type-II censoring. Statistics 2007, 41, 495–504.
  20. Nelson, W. Applied Life Data Analysis; Wiley: New York, NY, USA, 1982.
Table 1. The mean observations values for ( n 1 , n 2 , n 3 , r ) = ( 10 ,   10 ,   10 ,   25 ) , Δ = Δ 1 (Case 1).
Sample | Data
X 1 0.0498, 0.1058, 0.1674, 0.2391, 0.3225, 0.4226, 0.5477, 0.7159, 0.9664, 1.4685.
X 2 0.1005, 0.2130, 0.3363, 0.4780, 0.6444, 0.8434, 1.0905, 1.4248, 1.9253, 2.9195.
X 3 1.01284, 2.1473, 3.3872, 4.8117, 6.4598, 8.4947, 10.9570, 14.3184, 19.3712, 29.4384.
Ordered data ( w , h i ) , r = 25 .
(0.0498, 1), (0.1005, 2), (0.1058, 1), (0.1674, 1), (0.2130, 2), (0.3225, 1), (0.3225, 1), (0.3363, 2), (0.4226, 1), (0.4780, 2), (0.5477, 1), (0.6444, 2), (0.7159, 1), (0.8434, 2), (0.9664, 1), (1.01284, 3), (1.0905, 2), (1.4248, 2), (1.4685, 1), (1.9253, 2), (2.1473, 3), (2.9195, 2), (3.3872, 3), (4.8117, 3), (6.4598, 3).
Table 2. The mean observations values for ( n 1 , n 2 , n 3 , r ) = ( 15 ,   15 ,   15 ,   40 ) , Δ = Δ 1 (Case 1).
Sample | Data
X 1 0.0333, 0.0693, 0.1078, 0.1487, 0.1944, 0.2449, 0.3004, 0.3646, 0.4363, 0.5206, 0.6208, 0.7479, 0.9146, 1.1641, 1.6557.
X 2 0.0663, 0.1388, 0.2164, 0.2991, 0.3916, 0.4914, 0.6029, 0.7289, 0.8690, 1.0349, 1.2367, 1.4884, 1.8181, 2.316, 3.3304.
X 3 0.6610, 1.3818, 2.1570, 2.9962, 3.9136, 4.9102, 6.0317, 7.2834, 8.7249, 10.3833, 12.3990, 14.9019, 18.2287, 23.2551, 33.3370.
Ordered data ( w , h i ) ,   r = 40
(0.0333,1), (0.0663,2), (0.0693,1), (0.1078,1), (0.1388,2), (0.1487,1), (0.1944,1), (0.2164,2), (0.2449,1), (0.2991,2), (0.3004,1), (0.3646,1), (0.3916,2), (0.4363,1), (0.4914,2), (0.5206,1), (0.6029,2), (0.6208,1), (0.6610,3), (0.7289,2), (0.7479,1), (0.8690,2), (0.9146,1), (1.0349,2), (1.1641,1), (1.2367,2), (1.3818,3), (1.4884,2), (1.6557,1), (1.8181,2), (2.1570,3), (2.3160,2), (2.9962,3), (3.3304,2), (3.9136,3), (4.9102,3), (6.0317,3), (7.2834,3), (8.7249,3), (10.3833,3).
Table 3. The mean observations values for ( n 1 , n 2 , n 3 , r ) = ( 10 ,   10 ,   10 ,   25 ) , Δ = Δ 2 , (general case).
Sample | Data
X 1 0.0508, 0.1062, 0.1693, 0.2417, 0.3241, 0.4219, 0.5451, 0.7110, 0.9583, 1.4663.
X 2 0.0401, 0.0844, 0.1338, 0.1907, 0.2575, 0.3384, 0.4395, 0.5727, 0.7732, 1.1655.
X 3 0.0334, 0.0703, 0.1127, 0.1604, 0.2167, 0.2839, 0.3671, 0.4778, 0.6474, 0.9823.
Ordered data ( w , h i ) ,   r = 25
(0.0334, 3), (0.0401, 2), (0.0508, 1), (0.0703, 3), (0.0844, 2), (0.1062, 1), (0.1127, 3), (0.1338, 2), (0.1604, 3), (0.1693, 1), (0.1907,2), (0.2167, 3), (0.2417, 1), (0.2575, 2), (0.2839, 3), (0.3241, 1), (0.3384, 2), (0.3671, 3), (0.4219, 1), (0.4395, 2), (0.4778, 3), (0.5451, 1), (0.5727, 2), (0.6474, 3), (0.7110, 3).
Table 4. The mean observations values for ( n 1 , n 2 , n 3 , r ) = ( 15 ,   15 ,   15 ,   40 ) , Δ = Δ 2 , (general case).
Sample | Data
X 1 0.0326, 0.0681, 0.1068, 0.1487, 0.1944, 0.2449, 0.3003, 0.3632, 0.4344, 0.5186, 0.6195, 0.7433, 0.9125, 1.1605, 1.6591.
X 2 0.0269, 0.0556, 0.0864, 0.1193, 0.1557, 0.1965, 0.2414, 0.2913, 0.3484, 0.4137, 0.4943, 0.5941, 0.7264, 0.9289, 1.3301.
X 3 0.0220, 0.0459, 0.0713, 0.0988, 0.1295, 0.1624, 0.1989, 0.2399, 0.2872, 0.3424, 0.4084, 0.4915, 0.6032, 0.7706, 1.1045.
Ordered data ( w , h i ) ,   r = 40
(0.0220,3), (0.0269,2), (0.0326,1), (0.0459,3), (0.0556,2), (0.0681,1), (0.0713,3), (0.0864,2), (0.0988,3), (0.1068,1), (0.1193,2), (0.1295,3), (0.1487,1), (0.1557,2), (0.1624,3), (0.1944,1), (0.1965,2), (0.1989,3), (0.2399,3), (0.2414,2), (0.2449,1), (0.2872,3), (0.2913,2), (0.3003,1), (0.3424,3), (0.3484,2), (0.3632,1), (0.4084,3), (0.4137,2), (0.4344,1), (0.4915,3), (0.4943,2), (0.5186,1), (0.5941,2), (0.6032,3), (0.6195,1), (0.7264,2), (0.7433,1), (0.7706,3), (0.9125,1).
Table 5. MSPE of point predictions for η = 1 , 2 , 5 ; Δ = Δ 1 in Case 1.
η = 1, (n1, n2, n3, r) = (10, 10, 10, 25)
s | SP | LP (τ = 0.1) | LP (τ = 0.5) | EP (c = 0.1) | EP (c = 0.5)
26 | 0.0238 | 0.0215 | 0.0211 | 0.0193 | 0.0175
27 | 0.2623 | 0.2425 | 0.2383 | 0.2179 | 0.1916
28 | 0.8924 | 0.8914 | 0.8767 | 0.8620 | 0.8125
29 | 1.4967 | 1.4687 | 1.3263 | 1.2579 | 1.2191
30 | 2.1610 | 2.0341 | 1.9864 | 1.7451 | 1.5218
η = 2
26 | 0.0213 | 0.0203 | 0.0198 | 0.0191 | 0.0171
27 | 0.2620 | 0.2319 | 0.2303 | 0.2094 | 0.1815
28 | 0.8781 | 0.8723 | 0.8627 | 0.8205 | 0.7685
29 | 1.4687 | 1.4514 | 1.3041 | 1.2256 | 1.1873
30 | 2.0245 | 1.9341 | 1.8561 | 1.6381 | 1.2552
η = 5
26 | 0.0183 | 0.0174 | 0.0171 | 0.0154 | 0.0144
27 | 0.2423 | 0.2253 | 0.2230 | 0.2201 | 0.1796
28 | 0.8165 | 0.8064 | 0.8137 | 0.7905 | 0.7125
29 | 1.3987 | 1.3782 | 1.3585 | 1.2682 | 1.0671
30 | 2.0201 | 1.9152 | 1.8610 | 1.4373 | 1.1782
Table 6. MSPEs of point predictions for η = 1 , 2 , 5 ;   Δ = Δ 2 .
η = 1
s | SP | LP (τ = 0.1) | LP (τ = 0.5) | EP (c = 0.1) | EP (c = 0.5)
26 | 0.0112 | 0.0115 | 0.0113 | 0.0110 | 0.0097
27 | 0.1342 | 0.1233 | 0.1132 | 0.0872 | 0.0821
28 | 0.2752 | 0.2571 | 0.2168 | 0.2205 | 0.2064
29 | 1.3567 | 1.3125 | 1.3061 | 1.2143 | 1.2083
30 | 1.5984 | 1.6213 | 1.5783 | 1.4437 | 1.2981
η = 2
26 | 0.0101 | 0.0112 | 0.0086 | 0.0088 | 0.0078
27 | 0.1245 | 0.1156 | 0.1012 | 0.0789 | 0.0689
28 | 0.2234 | 0.2233 | 0.2087 | 0.2015 | 0.2001
29 | 1.4010 | 1.4111 | 1.3021 | 1.2182 | 1.1892
30 | 1.6125 | 1.5987 | 1.5654 | 1.4127 | 1.2678
η = 5
26 | 0.0097 | 0.0072 | 0.0027 | 0.0025 | 0.0012
27 | 0.1024 | 0.1003 | 0.0998 | 0.0775 | 0.0567
28 | 0.1892 | 0.1566 | 0.1026 | 0.0876 | 0.0278
29 | 1.2346 | 1.3271 | 1.2987 | 1.1765 | 1.0482
30 | 1.8762 | 1.5987 | 1.5654 | 1.2354 | 1.1567
Table 7. Lower and upper 95% prediction bounds for W_s in Case 1, for different choices of ( n 1 , n 2 , n 3 , r ) and Δ = Δ 1 .
| (n1, n2, n3, r) | s | L (η = 1) | U (η = 1) | L (η = 2) | U (η = 2) | L (η = 5) | U (η = 5) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| (10, 10, 10, 25) | 26 | 7.1426 | 9.3742 | 7.1517 | 9.3268 | 7.1546 | 9.2985 |
| | 27 | 9.8765 | 13.6496 | 9.9164 | 13.6412 | 9.9843 | 13.6191 |
| | 28 | 12.6289 | 21.3289 | 12.6714 | 21.2692 | 12.8921 | 21.2148 |
| | 29 | 16.7643 | 29.1496 | 16.8653 | 28.9896 | 16.9225 | 29.9641 |
| | 30 | 25.7658 | 49.5654 | 25.8585 | 49.5154 | 25.9765 | 48.3654 |
| (15, 15, 15, 40) | 41 | 11.6539 | 13.8764 | 11.9152 | 13.8472 | 11.9584 | 13.8174 |
| | 42 | 12.9876 | 17.6824 | 13.0256 | 17.1934 | 13.2876 | 17.1264 |
| | 43 | 15.2879 | 23.1859 | 15.5429 | 23.0674 | 15.7698 | 22.5429 |
| | 44 | 20.3289 | 33.9126 | 20.9721 | 33.3126 | 21.9968 | 32.8952 |
| | 45 | 29.1289 | 51.1610 | 29.5289 | 51.0326 | 29.7853 | 50.4761 |
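Prediction bounds of this kind are equal-tail limits of the predictive distribution of W_s. When the predictive distribution is explored by Monte Carlo, as in the simulations of this study, such bounds can be approximated by empirical quantiles of the predictive draws. A generic sketch of that approximation (not the authors' code, which may use the closed-form predictive distribution):

```python
def equal_tail_bounds(draws, level=0.95):
    """Empirical equal-tail prediction bounds from predictive draws."""
    xs = sorted(draws)
    n = len(xs)
    k = int(round((1.0 - level) / 2.0 * n))  # draws trimmed from each tail
    return xs[k], xs[n - k - 1]

# Toy predictive draws: 0.001, 0.002, ..., 1.000
draws = [i / 1000.0 for i in range(1, 1001)]
print(equal_tail_bounds(draws))  # (0.026, 0.975)
```

With real predictive draws of W_s, the returned pair plays the role of the (L, U) columns above.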
Table 8. Lower and upper 95% prediction bounds for W_s, for different choices of ( n 1 , n 2 , n 3 , r ) and Δ = Δ 2 .
| (n1, n2, n3, r) | s | L (η = 1) | U (η = 1) | L (η = 2) | U (η = 2) | L (η = 5) | U (η = 5) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| (10, 10, 10, 25) | 26 | 0.6528 | 0.8889 | 0.6814 | 0.8592 | 0.6920 | 0.8342 |
| | 27 | 0.7589 | 1.1096 | 0.7768 | 1.2033 | 0.7879 | 1.2191 |
| | 28 | 0.8934 | 1.9583 | 0.9214 | 1.9462 | 0.9520 | 1.9321 |
| | 29 | 0.9789 | 3.5696 | 0.9901 | 3.3733 | 1.0127 | 3.2934 |
| | 30 | 1.1289 | 5.5610 | 1.2541 | 5.3451 | 1.2964 | 5.2218 |
| (15, 15, 15, 40) | 41 | 0.7902 | 1.8846 | 0.7968 | 1.8732 | 0.8079 | 1.8841 |
| | 42 | 0.8674 | 2.2354 | 0.8776 | 2.2125 | 0.8841 | 2.1254 |
| | 43 | 0.9282 | 3.7583 | 0.9245 | 3.7483 | 0.9582 | 3.6783 |
| | 44 | 0.9949 | 4.5610 | 1.0237 | 4.5516 | 1.0263 | 4.4712 |
| | 45 | 1.2367 | 5.9810 | 1.2568 | 5.9736 | 1.3166 | 5.9523 |
Table 9. The failure time data for X 1 , X 2 , and X 3 , and their joint order ( w , h i ) , where δ_{h_i} = 1.
| Sample | Data |
| --- | --- |
| X1 | 1.89, 4.03, 1.54, 0.31, 0.66, 1.7, 2.17, 1.82, 9.99, 2.24 |
| X2 | 1.17, 3.87, 2.8, 0.7, 3.82, 0.02, 0.5, 3.72, 0.06, 3.57 |
| X3 | 8.11, 3.17, 5.55, 0.80, 0.20, 1.13, 6.63, 1.08, 2.44, 0.78 |

Ordered data (w, h_i):
(0.02, 2), (0.06, 2), (0.20, 3), (0.31, 1), (0.50, 2), (0.66, 1), (0.70, 2), (0.78, 3), (0.80, 3), (1.08, 3), (1.13, 3), (1.17, 2), (1.54, 1), (1.70, 1), (1.82, 1), (1.89, 1), (2.17, 1), (2.24, 1), (2.44, 3), (2.80, 2), (3.17, 3), (3.57, 2), (3.72, 2), (3.82, 2).
Table 10. Point predictor values and 95% prediction intervals of W_s, for η = 1 ,   2 ,   5 ;   Δ = Δ 3 .
η = 1

| s | (w_s, h_i) exact value | W_SP | W_LP (τ = 0.1) | W_LP (τ = 0.5) | W_EP (c = 0.1) | W_EP (c = 0.5) | (L, U) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 25 | (3.87, 2) | 3.5678 | 3.6728 | 3.7876 | 3.7789 | 3.8102 | (3.8476, 4.7658) |
| 26 | (4.03, 1) | 4.6453 | 4.5364 | 4.4573 | 4.4653 | 4.1653 | (3.8645, 5.7653) |
| 27 | (5.55, 3) | 5.8972 | 5.4676 | 5.3864 | 5.4943 | 5.5127 | (3.9785, 7.8946) |
| 28 | (6.63, 3) | 7.1236 | 6.9456 | 6.8757 | 6.9764 | 6.7685 | (4.5632, 12.5467) |
| 29 | (8.11, 3) | 10.2738 | 9.9594 | 9.5734 | 9.4876 | 9.2236 | (5.8762, 18.3765) |
| 30 | (9.99, 1) | 13.6758 | 12.8654 | 12.1765 | 12.5638 | 12.1128 | (6.4657, 30.4687) |

η = 2

| s | (w_s, h_i) exact value | W_SP | W_LP (τ = 0.1) | W_LP (τ = 0.5) | W_EP (c = 0.1) | W_EP (c = 0.5) | (L, U) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 25 | (3.87, 2) | 3.6132 | 3.6527 | 3.7964 | 3.8967 | 3.8662 | (3.8499, 4.5473) |
| 26 | (4.03, 1) | 4.3567 | 4.3384 | 4.2582 | 4.2765 | 4.1234 | (3.8764, 5.7564) |
| 27 | (5.55, 3) | 5.7653 | 5.4765 | 5.3876 | 5.5123 | 5.5742 | (3.9967, 7.8125) |
| 28 | (6.63, 3) | 7.1168 | 6.9174 | 6.8542 | 6.8789 | 6.7125 | (4.7842, 12.5165) |
| 29 | (8.11, 3) | 10.1375 | 9.9287 | 9.6213 | 9.2134 | 8.9984 | (5.8923, 17.8964) |
| 30 | (9.99, 1) | 13.8234 | 12.6753 | 12.3476 | 12.1227 | 11.8657 | (6.8973, 30.1374) |

η = 5

| s | (w_s, h_i) exact value | W_SP | W_LP (τ = 0.1) | W_LP (τ = 0.5) | W_EP (c = 0.1) | W_EP (c = 0.5) | (L, U) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 25 | (3.87, 2) | 3.6954 | 3.7135 | 3.8217 | 3.8378 | 3.8675 | (3.8564, 4.4623) |
| 26 | (4.03, 1) | 4.2765 | 4.2187 | 4.2071 | 4.2135 | 4.1098 | (3.8976, 5.7245) |
| 27 | (5.55, 3) | 5.7321 | 5.4876 | 5.452 | 5.5216 | 5.5731 | (3.9986, 7.8087) |
| 28 | (6.63, 3) | 7.1065 | 6.9276 | 6.8628 | 6.7522 | 6.7081 | (4.8569, 12.5097) |
| 29 | (8.11, 3) | 10.0879 | 9.9134 | 9.5675 | 9.1561 | 8.9786 | (5.9872, 17.2876) |
| 30 | (9.99, 1) | 13.5437 | 12.5674 | 12.3652 | 12.1135 | 11.8543 | (6.9965, 29.4567) |
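The W_SP, W_LP, and W_EP columns are the Bayes point predictors under the three loss functions named in the abstract: the predictive mean for squared error, −(1/τ) ln E[e^{−τW}] for Linex, and (E[W^{−c}])^{−1/c} for general entropy. Given Monte Carlo draws from the predictive distribution of W_s, these standard formulas can be approximated as in the following sketch (generic code, not the authors' implementation):

```python
import math

def sel_predictor(draws):
    """Squared-error loss: the predictive mean."""
    return sum(draws) / len(draws)

def linex_predictor(draws, tau):
    """Linex loss: -(1/tau) * ln E[exp(-tau * W)]."""
    m = sum(math.exp(-tau * w) for w in draws) / len(draws)
    return -math.log(m) / tau

def ge_predictor(draws, c):
    """General entropy loss: (E[W**(-c)])**(-1/c)."""
    m = sum(w ** (-c) for w in draws) / len(draws)
    return m ** (-1.0 / c)

draws = [3.2, 3.9, 4.4, 5.1, 6.0]  # illustrative predictive draws
print(sel_predictor(draws))         # predictive mean, ~4.52
print(linex_predictor(draws, 0.5))  # below the mean for tau > 0 (Jensen)
print(ge_predictor(draws, 0.5))     # also below the mean for c > 0
```

This ordering (Linex and general-entropy predictors shrinking below the squared-error predictor for positive τ and c) is consistent with the pattern across the columns of Tables 5, 6, and 10.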
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes Prediction Study Based on Joint Type-II Censoring. Axioms 2023, 12, 716. https://doi.org/10.3390/axioms12070716
