Article

Improved Bayesian Inferences for Right-Censored Birnbaum–Saunders Data

by
Kalanka P. Jayalath
Department of Mathematics and Statistics, University of Houston—Clear Lake, Houston, TX 77058, USA
Mathematics 2024, 12(6), 874; https://doi.org/10.3390/math12060874
Submission received: 10 January 2024 / Revised: 13 March 2024 / Accepted: 14 March 2024 / Published: 16 March 2024
(This article belongs to the Special Issue New Trends in Stochastic Processes, Probability and Statistics)

Abstract

This work focuses on making Bayesian inferences for the two-parameter Birnbaum–Saunders (BS) distribution in the presence of right-censored data. A flexible Gibbs sampler is employed to handle the censored BS data in this Bayesian work that relies on Jeffrey’s and Achcar’s reference priors. A comprehensive simulation study is conducted to compare estimates under various parameter settings, sample sizes, and levels of censoring. Further comparisons are drawn with real-world examples involving Type-II, progressively Type-II, and randomly right-censored data. The study concludes that the suggested Gibbs sampler enhances the accuracy of Bayesian inferences, and both the amount of censoring and the sample size are identified as influential factors in such analyses.

1. Introduction

The Birnbaum–Saunders (BS) distribution is a two-parameter lifetime distribution that was originally introduced by [1] to model failure times due to the growth of a dominant crack under cyclic stress, where failure occurs once the crack reaches a threshold level. The BS distribution has gone through various developments and generalizations and has been found suitable for life-testing applications. Because it was originally derived to model the fatigue life of metals subject to periodic stress, it is sometimes referred to as the fatigue life distribution. Interestingly, it can also be obtained by applying a monotone transformation to the standard normal distribution [2]. Moreover, as [3] indicated, the BS distribution can be viewed as an equal mixture of an inverse Gaussian (IG) distribution and its reciprocal. These relations are useful in deriving important properties of the BS distribution from well-known properties of the normal and IG distributions. Ref. [4] showed that the BS distribution can be used as an approximation of the IG distribution. In practice, both the BS and IG distributions are often considered very competitive lifetime models for right-skewed data [5,6].
The distribution function of the BS failure time T with parameters α and β, denoted by T ∼ BS(α, β), is given by

F_T(t) = \Phi\left[ \frac{1}{\alpha}\left( \sqrt{\frac{t}{\beta}} - \sqrt{\frac{\beta}{t}} \right) \right],    (1)

where 0 < t < ∞, and α > 0 and β > 0 are the shape and scale parameters, respectively. Here, Φ(·) represents the distribution function of the standard normal distribution. Since F_T(β) = Φ(0) = 0.5, the scale parameter β is the median of the BS distribution. The probability density function (pdf) of the BS(α, β) is given by

f_T(t) = \frac{1}{2\sqrt{2\pi}\,\alpha\beta} \left[ \left(\frac{\beta}{t}\right)^{1/2} + \left(\frac{\beta}{t}\right)^{3/2} \right] \exp\left\{ -\frac{1}{2\alpha^2}\left( \frac{t}{\beta} + \frac{\beta}{t} - 2 \right) \right\}.    (2)

It can easily be shown that E(T) = β(1 + α²/2) and Var(T) = (αβ)²(1 + 5α²/4). Interestingly, Ref. [7] indicates that T⁻¹ ∼ BS(α, β⁻¹), and therefore the reciprocal variable T⁻¹ also belongs to the same family.
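As a quick illustration of these relationships, the following is a minimal R sketch of the density, distribution, quantile, and random-generation functions of BS(α, β), written directly from Equations (1) and (2); the names dbs, pbs, qbs, and rbs are ours for illustration and do not refer to any package cited in this article.

# Density, CDF, quantile, and random generation for BS(alpha, beta),
# written directly from Equations (1) and (2).
dbs <- function(t, alpha, beta) {
  z    <- (sqrt(t / beta) - sqrt(beta / t)) / alpha            # argument of Phi in Eq. (1)
  dzdt <- ((beta / t)^0.5 + (beta / t)^1.5) / (2 * alpha * beta)
  dnorm(z) * dzdt
}
pbs <- function(t, alpha, beta) pnorm((sqrt(t / beta) - sqrt(beta / t)) / alpha)
qbs <- function(p, alpha, beta) {
  z <- qnorm(p)
  beta * (alpha * z / 2 + sqrt((alpha * z / 2)^2 + 1))^2       # invert the monotone transformation
}
rbs <- function(n, alpha, beta) qbs(runif(n), alpha, beta)

# Quick checks against the moment formulas above
set.seed(1)
x <- rbs(1e5, alpha = 0.5, beta = 1)
c(mean(x), 1 * (1 + 0.5^2 / 2))                # sample mean vs. beta * (1 + alpha^2 / 2)
c(var(x),  (0.5 * 1)^2 * (1 + 5 * 0.5^2 / 4))  # sample variance vs. (alpha*beta)^2 * (1 + 5*alpha^2/4)

These helper functions are reused in the sketches that follow later sections.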
Parameter estimation for the BS distribution, including maximum likelihood estimation (MLE), has been discussed extensively in the literature. For complete data, Ref. [8] derived the MLEs of the BS parameters. Ref. [9] introduced modified moment estimators (MMEs), a bias-reduction method, and a jackknife technique to reduce the bias of both the MMEs and MLEs. Ref. [10] introduced alternative estimators with smaller bias than those of Ref. [9]. Point and interval estimation of the BS parameters under Type-II censoring is discussed in [11]. Ref. [12] suggested a modified censored moment estimation method to estimate the parameters under random censoring. Ref. [13] suggested using fiducial inference on the BS parameters for right-censored data.
Bayesian approaches have also been used to make inferences on the BS parameters. Ref. [14] used both Jeffrey's prior and a reference prior to derive marginal posteriors using Laplace's approximation, while [15] employed only the reference priors and considered an approximate Bayesian approach using Lindley's method. Ref. [16] showed that Jeffrey's reference prior results in an improper posterior for the scale parameter and suggested employing reference priors that incorporate some partial information. In this situation, they suggested applying the slice sampling method to obtain a proper posterior for the case of censored data. A work by [17] adopted inverse-gamma priors for the shape and scale parameters and proposed an efficient sampling algorithm using the generalized ratio-of-uniforms method to compute Bayesian estimates. Ref. [18] also adopted inverse-gamma priors for both BS parameters and applied Markov Chain Monte Carlo (MCMC)-based conditional and joint sampling methods to handle censored data.
Censored data appear in lifetime experiments for various reasons, and the nature of the censoring plays a vital role in the analysis. In this study, we focus on right-censored data, which occur when the test start time of each unit is known but the test end time is unknown. This includes the random right, Type-II, and progressively Type-II censoring schemes. The progressively Type-II censoring scheme allows one to remove a pre-specified number of surviving units from the remaining experimental units at the observed failure times [19]. As such, it is a more general form of Type-II censoring, where censoring takes place progressively in r stages. In this scheme, a total of n units are placed on a life test, only r are completely observed until failure, and the remaining n − r units are right-censored. At the time of the first failure, say t_(1), R_1 of the n − 1 surviving units are randomly withdrawn from the experiment; at the time of the next failure, say t_(2), R_2 of the n − 2 − R_1 surviving units are censored, and so on. At the time of the last (rth) failure, say t_(r), all of the remaining R_r = n − r − Σ_{j=1}^{r−1} R_j surviving units are censored. Therefore, in progressively Type-II censoring experiments with pre-specified r and {R_1, R_2, …, R_r}, the data take the form {(t_(1), R_1), (t_(2), R_2), …, (t_(r), R_r)}.
In this work, we focus on estimating both BS parameters in the presence of right-censored units, as well as the average remaining test time T̄ of the censored units. For instance, consider n non-repairable units and assume we observe failures in r progressively censored stages with observed failure times y = (t_(1), t_(2), …, t_(r)). If the experiment were to continue so that all n − r censored values could be observed, we let ỹ_i = (t_(i:1), t_(i:2), …, t_(i:R_i)) be the set of true values of the censored units at the ith progressive stage. Then, the total remaining test time for these R_i censored elements is ỹ_i 1 − t_(i) 1′1, where 1 is a column vector of ones of length R_i. As such, the estimated average remaining test time for all the censored units from all r progressive stages is

\bar{T} = \frac{1}{n-r} \sum_{i=1}^{r} \left( \tilde{y}_i \mathbf{1} - t_{(i)} \mathbf{1}^{\prime}\mathbf{1} \right).    (3)
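To make Equation (3) concrete, here is a small R sketch that computes T̄ from the observed failure times and a hypothetical set of completed lifetimes for the censored units; the function name and the toy numbers are purely illustrative.

# Average remaining test time (Eq. (3)) for progressively censored units,
# given observed failure times t_obs and, hypothetically, the completed
# values y_tilde (a list with one vector of length R_i per progressive stage).
avg_remaining_time <- function(t_obs, y_tilde) {
  stopifnot(length(t_obs) == length(y_tilde))
  per_stage <- mapply(function(y_i, t_i) sum(y_i - t_i), y_tilde, t_obs)
  sum(per_stage) / sum(lengths(y_tilde))   # divide by n - r, the number of censored units
}

# Toy illustration (values are made up for demonstration only)
t_obs   <- c(2.1, 3.4, 5.0)                         # observed failure times t_(1), t_(2), t_(3)
y_tilde <- list(c(2.9, 4.1), c(3.8), c(6.2, 7.5))   # hypothetical completed lifetimes per stage
avg_remaining_time(t_obs, y_tilde)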
The rest of the article is organized as follows: In Section 2, we discuss the parameter estimation of the BS distribution using both the maximum likelihood method and the Bayesian method. Section 3 covers the Gibbs sampling procedure for handling censored data. In Section 4, we conduct a Monte Carlo simulation study to compare the performance of the aforementioned methods. Illustrative examples are included in Section 5, and we conclude with remarks and recommendations in Section 6.

2. BS Parameter Estimation

In this section, we focus on Bayesian parameter estimation with two different prior specifications for the BS parameters α and β: Jeffrey's prior and Achcar's reference prior. We discuss some of the practical challenges of these procedures while summarizing their methodological foundations.
First, recall the maximum likelihood estimation of the BS parameters α and β, which has been discussed extensively in the literature; see [2,8] for details. Consider an experiment with n random failure times T = {t_1, t_2, …, t_n} that follow the BS distribution. Then, its log-likelihood function, without the additive constant, becomes
l(\alpha, \beta \mid T) = -n \ln(\alpha\beta) + \sum_{i=1}^{n} \ln\left[ \left(\frac{\beta}{t_i}\right)^{1/2} + \left(\frac{\beta}{t_i}\right)^{3/2} \right] - \frac{1}{2\alpha^2} \sum_{i=1}^{n} \left( \frac{t_i}{\beta} + \frac{\beta}{t_i} - 2 \right).    (4)
By differentiating Equation (4) with respect to α and setting the derivative to zero, one can obtain

\alpha^2 = \frac{s}{\beta} + \frac{\beta}{q} - 2,    (5)

where s = \frac{1}{n}\sum_{i=1}^{n} t_i and q = \left[ \frac{1}{n}\sum_{i=1}^{n} t_i^{-1} \right]^{-1} are the sample arithmetic and harmonic means of the observed data. Next, differentiating Equation (4) with respect to β and substituting α² from Equation (5), the following equation determines the MLE of β:

\beta^2 - \beta\left[ 2q + K(\beta) \right] + q\left[ s + K(\beta) \right] = 0,    (6)

where K(\beta) = \left[ \frac{1}{n}\sum_{i=1}^{n} (\beta + t_i)^{-1} \right]^{-1}. The MLE β̂ of β is the unique positive root of Equation (6), which satisfies q < β̂ < s. With this estimate, the MLE of α becomes α̂ = (s/β̂ + β̂/q − 2)^{1/2}.
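The following is a brief R sketch of this computation for complete data, solving Equation (6) for β̂ with a root finder on (q, s) and then applying Equation (5); bs_mle is an illustrative name, not the censored-data tool cited later.

# MLEs of (alpha, beta) from complete BS data, following Equations (5)-(6).
bs_mle <- function(t) {
  n <- length(t)
  s <- mean(t)             # arithmetic mean
  q <- 1 / mean(1 / t)     # harmonic mean
  K <- function(b) 1 / mean(1 / (b + t))
  # Equation (6): beta^2 - beta*(2q + K(beta)) + q*(s + K(beta)) = 0,
  # whose unique positive root lies in (q, s).
  g <- function(b) b^2 - b * (2 * q + K(b)) + q * (s + K(b))
  beta_hat  <- uniroot(g, lower = q, upper = s)$root
  alpha_hat <- sqrt(s / beta_hat + beta_hat / q - 2)   # Equation (5)
  c(alpha = alpha_hat, beta = beta_hat)
}

# Example with simulated data (rbs() as sketched in Section 1):
# set.seed(2); bs_mle(rbs(100, alpha = 0.5, beta = 2))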

2.1. Bayesian Inference

Here, we consider the Bayesian framework originally suggested by [14], which employs non-informative priors: Jeffrey's prior and Achcar's reference prior. Jeffrey's prior density for α and β is given by

\pi(\alpha, \beta) \propto \sqrt{\det I(\alpha, \beta)},

where

I(\alpha, \beta) = \begin{pmatrix} \dfrac{2n}{\alpha^2} & 0 \\ 0 & \dfrac{n\left[ 1 + \alpha\, g(\alpha)/\sqrt{2\pi} \right]}{\alpha^2 \beta^2} \end{pmatrix}

is the Fisher information matrix of the BS distribution, g(α) = α√(π/2) − π exp{2/α²}[1 − Φ(2/α)], and Φ is the standard normal distribution function.
Using the Laplace approximation, it can be shown that Jeffrey’s prior takes the following form
\pi(\alpha, \beta) \propto \frac{1}{\alpha\beta} \left[ H(\alpha^2) \right]^{1/2}, \quad \alpha > 0, \; \beta > 0,

where H(\alpha^2) = \dfrac{1}{\alpha^2} + \dfrac{1}{4}.
Assuming independence between α and β , [14] suggested a reference prior that takes the following form
\pi(\alpha, \beta) \propto \frac{1}{\alpha\beta}, \quad \alpha > 0, \; \beta > 0.
In our discussion, we call this Achcar's reference prior.

2.2. Posterior Inference

For Jeffrey’s prior, the joint posterior distribution of α and β becomes [14]
\pi(\alpha, \beta \mid T) \propto \left[ H(\alpha^2) \right]^{1/2} \frac{\prod_{i=1}^{n} (\beta + t_i)\, \exp\{-Q(\beta)/\alpha^2\}}{\alpha^{n+1} \beta^{(n/2)+1}},    (7)

where Q(\beta) = \dfrac{ns}{2\beta} + \dfrac{n\beta}{2q} - n.
Then, using the Laplace approximation (see Appendix A), the approximate marginal posterior distributions of α and β for Jeffrey’s prior can be written as
\pi(\alpha \mid T) \propto \alpha^{-(n+1)} (4 + \alpha^2)^{1/2} \exp\left\{ -\frac{n}{\alpha^2}\left( \sqrt{s/q} - 1 \right) \right\}, \quad \alpha > 0,    (8)

and

\pi(\beta \mid T) \propto \frac{\prod_{i=1}^{n} (\beta + t_i) \left\{ 4 + \left[ 2n/(n+2) \right]\left[ s/(2\beta) + \beta/(2q) - 1 \right] \right\}^{1/2}}{\beta^{(n/2)+1} \left[ s/(2\beta) + \beta/(2q) - 1 \right]^{(n+1)/2}}, \quad \beta > 0,    (9)
respectively.
Then, for Achcar's reference prior, the joint posterior becomes the same as Equation (7) except that H(α²) = 1, and the approximate marginal posterior distributions of α and β become
\pi(\alpha \mid T) \propto \alpha^{-n} \exp\left\{ -\frac{n}{\alpha^2}\left( \sqrt{s/q} - 1 \right) \right\}, \quad \alpha > 0,    (10)

and

\pi(\beta \mid T) \propto \frac{\prod_{i=1}^{n} (\beta + t_i)}{\beta^{(n/2)+1} \left[ s/(2\beta) + \beta/(2q) - 1 \right]^{n/2}}, \quad \beta > 0,    (11)
respectively.
As the posteriors based on both Jeffrey's and Achcar's priors do not have closed-form distributions, the Bayes estimates of α and β cannot be obtained in explicit form. However, [14] proposed that the modes of the corresponding posteriors may be used as the Bayes estimates of α and β.
The work in [16] showed that the above posterior based on Achcar's reference prior, given in Equation (11), becomes improper as β → ∞. In practice, both posteriors given in Equations (9) and (11) are numerically intractable for large β and n values because the products in their numerators grow rapidly. However, since F_T(t; α, β) = F_T(t/β; α, 1), the parameter β in the BS distribution is solely a scale parameter that represents the median. Therefore, we suggest a simple and computationally efficient scale transformation t_new = t/β̂ to reduce this inflation and to avoid the situation where β → ∞. As a result, the median of the transformed data, and hence the posterior of β, will be centered around one.
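For reference, here is a minimal R sketch of the unnormalized marginal log-posteriors of α and β implied by Equations (8)–(11); it assumes the data vector t has already been rescaled as t_new = t/β̂, and the function and argument names are ours for illustration only.

# Unnormalized (approximate) marginal log-posteriors from Equations (8)-(11).
log_post_alpha <- function(alpha, t, prior = c("jeffreys", "achcar")) {
  prior <- match.arg(prior)
  n <- length(t); s <- mean(t); q <- 1 / mean(1 / t)
  lp <- -n * log(alpha) - (n / alpha^2) * (sqrt(s / q) - 1)              # Eq. (10)
  if (prior == "jeffreys") lp <- lp - log(alpha) + 0.5 * log(4 + alpha^2) # extra factor in Eq. (8)
  lp
}
log_post_beta <- function(beta, t, prior = c("jeffreys", "achcar")) {    # scalar beta
  prior <- match.arg(prior)
  n <- length(t); s <- mean(t); q <- 1 / mean(1 / t)
  A  <- s / (2 * beta) + beta / (2 * q) - 1
  lp <- sum(log(beta + t)) - (n / 2 + 1) * log(beta)
  if (prior == "jeffreys") {
    lp + 0.5 * log(4 + (2 * n / (n + 2)) * A) - ((n + 1) / 2) * log(A)   # Eq. (9)
  } else {
    lp - (n / 2) * log(A)                                                # Eq. (11)
  }
}

These functions are reused for grid-based posterior draws in the Gibbs sketch of Section 3.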

3. Application of Gibbs Sampler

In this section, we introduce a Gibbs sampling procedure that can be used to estimate the parameters of the BS distribution in the presence of censored data. The procedure uses Markov Chain Monte Carlo (MCMC) techniques to generate data samples that replace the censored portion of the data set.
Here, we propose using Bayesian inference for BS parameters α and β , employing marginal posteriors obtained using both Jeffrey’s and Achcar’s priors via the Gibbs sampler. Moreover, upon sampling from a BS distribution for unknown realizations of censored units, the remaining average lifetime is also estimated.
The Gibbs sampler requires suitable initial values of α and β to achieve convergence. The MLEs computed from the observed data under censoring are often preferred for this purpose. Ignoring the additive constant, the BS log-likelihood function for progressively Type-II-censored data of the form {(t_(1), R_1), (t_(2), R_2), …, (t_(r), R_r)} can be written as

l(\alpha, \beta \mid T) = \sum_{i=1}^{r} \left\{ \ln f(t_{(i)}; \alpha, \beta) + R_i \ln\left[ 1 - \Phi\left( g(t_{(i)}; \alpha, \beta) \right) \right] \right\},

where f(·) is the pdf of the BS distribution given in Equation (2), and g(t; \alpha, \beta) = \frac{1}{\alpha}\left( \sqrt{t/\beta} - \sqrt{\beta/t} \right).
The MLEs of the BS parameters cannot be obtained in closed form for this censoring scheme. Using the property that the BS distribution can be written as an equal mixture of an IG distribution and its reciprocal, Ref. [20] outlined an EM algorithm to obtain the MLEs. In this work, we use a computational tool introduced in [21], freely available in [22], which can be used to obtain MLEs of the BS parameters under all major censoring schemes.
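As a generic numerical alternative to those tools, the censored log-likelihood above can also be maximized directly; the following R sketch does so with optim(), reusing the illustrative dbs() and pbs() functions from Section 1. It is not the EM algorithm of [20] or the package routine of [21], only a sketch under those assumptions.

# Log-likelihood of the progressively Type-II censored form above for data
# {(t_(i), R_i)}, maximized numerically.
loglik_prog <- function(par, t_obs, R) {
  alpha <- par[1]; beta <- par[2]
  if (alpha <= 0 || beta <= 0) return(-1e10)          # keep the search in the valid region
  sum(log(dbs(t_obs, alpha, beta)) + R * log(1 - pbs(t_obs, alpha, beta)))
}
bs_mle_prog <- function(t_obs, R, start = c(0.5, median(t_obs))) {
  fit <- optim(start, loglik_prog, t_obs = t_obs, R = R,
               control = list(fnscale = -1))          # fnscale = -1 turns optim into a maximizer
  c(alpha = fit$par[1], beta = fit$par[2])
}

# Ordinary Type-II censoring is the special case R = (0, ..., 0, n - r).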
Below, we outline the major steps of the Gibbs sampler, which employs progressively Type-II-censored BS data.
  • Calculate the MLEs α̂_MLE and β̂_MLE from the available right-censored data. Set α_1^(0) = α̂_MLE and β_1^(0) = β̂_MLE.
  • Generate R_i random variates from a uniform distribution bounded by the BS CDF (F_T) value of the respective censored observation and one. Then, use the inverse CDF (F_T^{-1}) of the newly sampled random variate to replace the censored value. For instance, for the jth censored observation in (t_(i), R_i),
    • Generate: u_(j:i) ~ U( F_T(t_(i)), 1 ), where F_T(t_{(i)}) = \Phi\left[ \frac{1}{\alpha_1^{(0)}}\left( \sqrt{t_{(i)}/\beta_1^{(0)}} - \sqrt{\beta_1^{(0)}/t_{(i)}} \right) \right].
    • Then, set: t_(j:i)^(0) = F_T^{-1}( u_(j:i); α_1^(0), β_1^(0) ).
  • Repeat Step 2 for all censored units in all r censored stages. The censored data will be replaced by the newly simulated data t_(j:i)^(0) (> t_(i)), j = 1, 2, …, R_i for each i = 1, 2, …, r, and will be combined with the observed failure times t_(1), t_(2), …, t_(r) to form an updated and complete sample of size n.
  • Using the updated sample in Step 3, sample α_1^(1) and β_1^(1) from their respective posterior distributions.
  • Repeat Steps 2–4 starting with the newly sampled parameters, α_1^(1) and β_1^(1). This procedure continues for k total iterations and concludes with the results α_1^(k) and β_1^(k). A new set of simulated BS observations should be drawn in the same manner as in Step 3 using α_1^(k) and β_1^(k) as the newly updated parameters.
  • At the conclusion of Step 5, the average remaining life of the censored units defined in Equation (3) is calculated using the newly sampled data and is denoted by T̄_1^(k).
  • Repeat Steps 2–6 a large number of times, say m total replications. This results in:
    ( α_1^(k), α_2^(k), …, α_m^(k) ),  ( β_1^(k), β_2^(k), …, β_m^(k) ),  ( T̄_1^(k), T̄_2^(k), …, T̄_m^(k) ).
In the Gibbs sampler, we assess the convergence of the sampled chains using both numerical and graphical summaries. This includes monitoring the scalar summary ψ and the scale reduction statistic R̂ defined in [23]. As suggested in [24], we confirm that this scale reduction statistic is well below 1.1 and that the trace plots behave appropriately to ensure the convergence of the Gibbs samples in all situations considered. After confirming convergence, we report both point and interval estimates. This includes the posterior mean and its standard error estimates, as well as the 95% equal-tailed credible intervals for all the parameters, including T̄. Moreover, we use kernel density estimation to make visual comparisons between estimation methods. A sample R code exhibiting this algorithm is included in the Supplementary Materials.
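To make the steps above concrete, the following is a compact R sketch of the sampler for ordinary Type-II right censoring (the n − r largest lifetimes censored at t_(r)). It draws α and β by simple grid sampling from the unnormalized marginal log-posteriors sketched in Section 2.2 and reuses the illustrative pbs(), qbs(), and bs_mle() functions from earlier sketches; it is a sketch under those assumptions, not the code in the Supplementary Materials.

# Draw one value from a grid, with weights proportional to exp(log-posterior).
grid_draw <- function(logpost, grid) {
  w <- exp(logpost - max(logpost))
  sample(grid, 1, prob = w / sum(w))
}

gibbs_bs_typeII <- function(t_obs, n, k = 2000, prior = "achcar") {
  r <- length(t_obs); cens_at <- max(t_obs)
  start <- bs_mle(t_obs)       # crude start from the observed part only (Step 1);
  alpha <- start["alpha"]      # the article uses censored-data MLEs instead
  beta  <- start["beta"]
  for (it in 1:k) {
    # Steps 2-3: impute the n - r censored lifetimes from the truncated BS
    u <- runif(n - r, pbs(cens_at, alpha, beta), 1)
    t_full <- c(t_obs, qbs(u, alpha, beta))
    # Step 4: draw alpha, beta from their approximate marginal posteriors
    a_grid <- seq(0.2, 3.0, length.out = 400) * start["alpha"]
    b_grid <- seq(0.5, 2.0, length.out = 400) * start["beta"]
    alpha  <- grid_draw(log_post_alpha(a_grid, t_full, prior), a_grid)
    beta   <- grid_draw(sapply(b_grid, log_post_beta, t = t_full, prior = prior), b_grid)
  }
  # Steps 5-6: re-impute with the final (alpha, beta) and compute T-bar
  u    <- runif(n - r, pbs(cens_at, alpha, beta), 1)
  Tbar <- mean(qbs(u, alpha, beta) - cens_at)
  c(alpha = unname(alpha), beta = unname(beta), Tbar = Tbar)
}

# Step 7: replicate gibbs_bs_typeII() m times and summarize the m retained draws
# (means, standard errors, and 2.5%/97.5% quantiles for the credible intervals).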

4. Monte Carlo Simulation

We conduct a simulation study to compare the performance of the discussed Bayesian estimates. The data are generated from the BS(α, β) distribution with four different sample sizes n = 10, 20, 30, 50 and four different Type-II right-censoring percentages (CEP) of 10% to 40% in increments of 10%. Without loss of generality, we kept the scale parameter β fixed at 1.0 while varying the shape parameter α = 0.10, 0.30, 0.50, 1.00, 2.00. In each experimental condition, we repeatedly generated 2000 BS data sets and applied the proposed Gibbs sampler.
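A single simulation cell can be set up along the following lines in R, reusing the illustrative rbs() generator from Section 1; the helper name and defaults are ours for illustration only.

# One simulation cell: generate a BS(alpha, 1) sample of size n and apply
# Type-II right censoring at censoring percentage cep (e.g., 0.10-0.40).
make_typeII_sample <- function(n, alpha, beta = 1, cep = 0.2) {
  t_all <- sort(rbs(n, alpha, beta))
  r <- n - floor(cep * n)              # number of fully observed failures
  list(t_obs    = t_all[1:r],          # observed order statistics t_(1), ..., t_(r)
       censored = t_all[(r + 1):n])    # true (unobserved) lifetimes of the censored units
}

# e.g., one replicate with n = 20, alpha = 0.5, 30% censoring:
# dat <- make_typeII_sample(20, alpha = 0.5, cep = 0.3)
# gibbs_bs_typeII(dat$t_obs, n = 20, prior = "jeffreys")   # sampler sketched in Section 3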
We noticed that the Gibbs sampler converges within k = 3000 iterations for both Bayesian priors, and the scale reduction factor R̂ for both parameters is less than 1.1. After assessing convergence, we replicate the Gibbs sampler m = 1000 times to obtain the point estimates and 95% equal-tailed credible intervals for α, β, and T̄ for each generated data set. Then, for each parameter, the overall average of the posterior mean estimates (PE), their standard errors (SE), and the coverage probabilities (CP) for 1000 randomly generated BS samples are obtained. To compare posterior point estimates, we refer to the observed bias as the difference between the true BS parameter and its PE. These results are shown in Table 1, Table 2, Table 3 and Table 4.
As shown in Table 1, for n = 10, Jeffrey's method slightly underestimates the true α value, and the size of the bias increases with the censoring percentage. The difference is more apparent for higher α values. Achcar's method slightly overestimates the true value of α regardless of the censoring percentage, except for the highly censored α = 2 cases. The standard errors of the α estimates are quite similar for both methods. Interestingly, Achcar's prior maintains the coverage probability at the nominal 95% level, while Jeffrey's prior becomes slightly liberal, as its coverage probability is around 93%. The β estimates of the two methods are fairly consistent for all α values. The coverage probability comparison for the β estimates is quite similar to that for α.
For the average remaining time, Achcar's estimates are somewhat higher than Jeffrey's estimates. Again, the differences are greater for larger α values than for smaller ones. As far as the standard error is concerned, both methods are equally good, and the standard errors are proportional to the true α value. The coverage probability comparison is quite similar to that of the α and β comparisons.
Based on the estimates shown in Table 2, when n = 20, the comparisons we made earlier are still valid for all estimates, but the differences between estimates and the effects of high censoring are not as pronounced as in the n = 10 cases, and their standard errors are now lower. When the sample size increases to n = 30 and 50 (see Table 3 and Table 4), both methods provide better results with increasing precision. The differences between the point estimates for lower α values narrow further, and the coverage probabilities of all estimates approach the nominal 95% level, showing greater precision in interval estimation for large samples.

5. Illustrative Examples

In this section, we consider three examples to illustrate the Gibbs sampler procedure described in Section 3. These examples exhibit parameter estimation under randomly right-, Type-II-, and progressively Type-II-censored data.
  • Example 01 (Cancer Patients Data): This data set was originally presented in [25] and consists of the lifetimes (in months) of 20 cancer patients who received a new treatment. Complete lifetimes were recorded for only 17 patients; the remaining three patients were right-censored and are denoted by "+" in the following data set.
3, 5, 6, 7, 8, 9, 10, 10+, 12, 15,
15+, 18, 19, 20, 22, 25, 28, 30, 40, 45+
The Kolmogorov–Smirnov goodness-of-fit test indicates that these data adequately follow a BS distribution, and the MLEs are α̂_MLE = 0.805 and β̂_MLE = 14.899. For these data, T̄ represents the average remaining lifetime of the three patients censored during the experiment, i.e., the time until they die. With only three of the 20 observations being censored, k = 2000 iterations were found to be sufficient to ensure convergence, and m = 10,000 Gibbs sample chains were used for the parameter estimation. The resulting estimates are shown in Table 5.
In addition, in the lower portion of Table 5, we report the point and interval estimates obtained in the Bayesian work of [25] (A-M 2010), as well as the MLE and Bayesian results of [18] (S-N MLE and S-N Bayesian), where a generalized Birnbaum–Saunders distribution was applied to the same data.
We note that both of the initial MLEs, α̂_MLE = 0.805 and β̂_MLE = 14.899, fall well within all the corresponding 95% credible interval bounds (see Table 5). Jeffrey's and Achcar's estimates compare favorably to one another. The credible intervals for α are somewhat narrower than those in the [18,25] results. The estimated average remaining lifetime for the censored patients ranges from 18 to 20 months after their observation period was completed.
  • Example 02 (Fatigue Life): This example consists of the fatigue life of 6061-T6 aluminum coupons cut parallel to the direction of rolling and oscillated at 18 cycles per second, with a maximum stress per cycle of 31,000 psi, as reported in [8]. We reconfirmed that these data can be adequately modeled using the BS distribution, and the MLEs for the complete data are α̂ = 0.170 and β̂ = 131.819.
70, 90, 96, 97, 99, 100, 103, 104, 104, 105, 107, 108, 108, 108, 109,
109, 112, 112, 113, 114, 114, 114, 116, 119, 120, 120, 120, 121, 121, 123,
124, 124, 124, 124, 124, 128, 128, 129, 129, 130, 130, 130, 131, 131, 131,
131, 131, 132, 132, 132, 133, 134, 134, 134, 134, 134, 136, 136, 137, 138,
138, 138, 139, 139, 141, 141, 142, 142, 142, 142, 142, 142, 144, 144, 145,
146, 148, 148, 149, 151, 151, 152, 155, 156, 157, 157, 157, 157, 158, 159,
162, 163, 163, 164, 166, 166, 168, 170, 174, 196, 212
We applied the Type-II right-censoring scheme with censoring percentages (CEP) of 10% to 60% in increments of 10% to these data, and the MLEs at the different censoring levels are shown in Table 6. Due to the relatively large β and sample size, we first transform these data using the scale transformation t/β̂ suggested in Section 2.2 and adjust the MLEs accordingly before using them in the Gibbs sampler. We observed that the Gibbs sampler adequately converged within k = 2000 iterations, and we obtained m = 10,000 Gibbs sample chains to compute the estimates.
Also, Figure 1 shows the kernel density estimates of the parameters under Jeffrey's and Achcar's priors at the 10%, 30%, and 60% censoring levels. The plots appear adequate, and both methods provide very similar estimates. However, as [26] indicated, the Gibbs output may not detect improper posteriors; the scale transformation we suggested should have scaled down β enough to prevent such possible divergences.
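A comparison of this kind can be produced with base R along the following lines, assuming the objects alpha_jeff and alpha_achcar hold the m retained posterior draws of α from the sampler sketched in Section 3 (the object names are illustrative).

# Overlay kernel density estimates of the alpha draws under the two priors.
plot(density(alpha_jeff), main = "Posterior of alpha", xlab = expression(alpha))
lines(density(alpha_achcar), lty = 2)
legend("topright", c("Jeffrey's", "Achcar's"), lty = c(1, 2))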
The resulting point estimates, along with the widths of the 95% credible intervals for both priors, are reported in Table 7. Interestingly, both the α̂ and β̂ estimates of the two methods at the lower to mid censoring percentages (10%, 20%, and 30%) are very close to the uncensored MLEs for the complete data (α̂ = 0.170 and β̂ = 131.819). However, as the censoring percentage increases, both α̂ and β̂ overestimate these values. As expected, the average remaining time T̄ also increases with the censoring percentage. It is also noted that all six T̄ estimates overestimate the true average remaining times reported in Table 6, with the amount of overestimation growing as the censoring percentage increases.
  • Example 03 (Ball Bearings' Data): This data set was originally presented in [27] and provides the fatigue life, in hours, of ten ball bearings of a certain type:
152.7, 172.0, 172.5, 173.3, 193.0, 204.7, 216.5, 234.9, 262.6, 422.6
Ref. [9] fitted the BS distribution to the full data set and reported that the unbiased MLEs of α and β are 0.314 and 211.528, respectively. Ref. [20] used these data to generate three different progressively Type-II-censored samples and estimated the BS parameters. We use somewhat similar progressively Type-II-censored samples, as shown below; an illustrative way of generating such samples is sketched after the schemes.
  • Scheme I: n = 10, m = 6, R_1 = 4, R_2 = ⋯ = R_6 = 0;
  • Scheme II: n = 10, m = 6, R_1 = 0, R_2 = 2, R_3 = R_4 = R_5 = 0, R_6 = 2.
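The following R sketch shows one way to draw a progressively Type-II censored sample from a complete sample given a removal pattern R; the function name is ours, and the random withdrawals make each generated sample differ from the exact samples analyzed below.

# Draw a progressively Type-II censored sample from a complete sample x
# with removal pattern R = (R_1, ..., R_r), e.g., Scheme I: R = c(4, 0, 0, 0, 0, 0).
prog_typeII_censor <- function(x, R) {
  alive <- sort(x); t_obs <- numeric(length(R))
  for (i in seq_along(R)) {
    t_obs[i] <- alive[1]                      # next observed failure among survivors
    alive <- alive[-1]
    if (R[i] > 0)                             # randomly withdraw R_i surviving units
      alive <- alive[-sample(length(alive), R[i])]
  }
  t_obs
}

# Ball-bearing data (hours) with Scheme I:
bearings <- c(152.7, 172.0, 172.5, 173.3, 193.0, 204.7, 216.5, 234.9, 262.6, 422.6)
set.seed(3)
prog_typeII_censor(bearings, R = c(4, 0, 0, 0, 0, 0))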
The resulting parameter estimates, along with their 95% credible intervals, are reported in Table 8. Due to the censoring in this small data set, both Bayesian priors underestimate the unbiased MLEs. However, the credible intervals adequately capture these values. Achcar's estimates are slightly better, as they are closer to the unbiased MLEs obtained from the complete data. This example indicates that the suggested method can be used effectively even for small data sets, yielding reasonable results.

6. Conclusions

This study reveals that the suggested Gibbs sampler performs reasonably well with both Bayesian priors. Achcar's prior appears to provide better coverage probability than Jeffrey's prior in the cases considered in this simulation study. Additionally, Achcar's prior tends to slightly overestimate the true parameter values, while Jeffrey's tends to underestimate them. The amount of censoring and the sample size have an impact on the performance of both methods, and therefore, one should be aware of this limitation in practice. With an increase in sample size, all methods perform better, although the amount of censoring still slightly affects the estimates. Care must be taken regarding the size of the β parameter and the sample size when applying non-informative priors. The suggested scale transformation may need to be adopted to guarantee proper posteriors when using Achcar's reference prior. Also, because the marginal posterior distributions rely on the Laplace approximation, there may be limitations in estimating the average lifetime, because the BS density of T ∼ BS(α̂, β̂) is only an approximation to the true underlying distribution. However, this study reveals that the Gibbs sampler is capable of providing accurate remaining average lifetime estimates.
The simulation results indicate that the considered method shows some improvements in point estimates and coverage probabilities when compared to the Bayesian results of [15]. In particular, our algorithm shows no substantial effect of the amount of censoring on the coverage probability. Also, the marginal posterior distributions discussed here have tractable analytical forms that require no partial or hyper-prior information. Moreover, our results are consistent with regard to bias and coverage probability for all parameter combinations we considered; this shows a clear improvement when compared to the simulation results shown in [18].
With the Gibbs sampler, there is less restriction on the type of prior distribution that can be chosen. However, caution must be exercised in programming to ensure the well-behaved nature of both prior and posterior distributions. If posterior distributions, whether conditional or otherwise, cannot be precisely determined, asymptotic distributions may be employed. The Gibbs sampler procedures offer a high degree of flexibility in implementation, allowing the adjustment of the number of iterations based on the trade-off between the speed and desired accuracy. Undoubtedly, the Gibbs sampler finds its place in developing complex models, particularly when dealing with censored data. Its computation involves a series of calculations that are easy to understand, and its implementation is relatively straightforward.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math12060874/s1. Supplementary File: “R code for Fatigue Life data Analysis.R”.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article and Supplementary Materials.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Laplace's method provides an approximation to integrals of the form

I = \int f(\theta) \exp\{ n\, h(\theta) \}\, d\theta,

where h is a smooth function of θ having its maximum at θ̂. Then, the Laplace approximation to the integral I becomes

\hat{I} \approx \sqrt{\frac{2\pi}{n}}\, \sigma\, f(\hat{\theta}) \exp\{ n\, h(\hat{\theta}) \},

where σ = [−h''(θ̂)]^{-1/2}.
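As a quick numerical sanity check of this approximation, the short R sketch below compares it with direct numerical integration in a toy Gaussian case where the two agree exactly; all objects are illustrative.

# Toy check: f(theta) = 1, h(theta) = -(theta - 1)^2 / 2, maximized at theta_hat = 1.
nn <- 25
f0 <- function(th) rep(1, length(th))
h0 <- function(th) -(th - 1)^2 / 2
exact   <- integrate(function(th) f0(th) * exp(nn * h0(th)), -Inf, Inf)$value
sigma   <- 1                                # (-h0''(1))^(-1/2) with h0''(1) = -1
laplace <- sqrt(2 * pi / nn) * sigma * f0(1) * exp(nn * h0(1))
c(exact = exact, laplace = laplace)         # both equal sqrt(2*pi/25) in this Gaussian case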
Now, as outlined in [14], assuming Jeffrey’s prior, the joint posterior distribution of α and β becomes
\pi(\alpha, \beta \mid T) \propto \left[ H(\alpha^2) \right]^{1/2} \frac{\prod_{i=1}^{n} (\beta + t_i)\, \exp\{ -Q(\beta)/\alpha^2 \}}{\alpha^{n+1} \beta^{(n/2)+1}},    (A1)

where Q(\beta) = \dfrac{ns}{2\beta} + \dfrac{n\beta}{2q} - n and H(\alpha^2) = \dfrac{1}{\alpha^2} + \dfrac{1}{4}.
For Achcar's prior, H(α²) = 1, and the marginal posterior of α can be written as

\pi(\alpha \mid T) \propto \frac{\exp\{ n/\alpha^2 \}}{\alpha^{n+1}} \int_{0}^{\infty} f(\beta) \exp\{ -n\, h(\beta) \}\, d\beta,

where f(\beta) = \dfrac{\prod_{i=1}^{n} (\beta + t_i)}{\beta^{(n/2)+1}} and h(\beta) = \dfrac{s}{2\beta\alpha^2} + \dfrac{\beta}{2q\alpha^2}. Since the integrand involves exp{−n h(β)}, the relevant stationary point is the minimum of h(β), which occurs at β̂ = √(sq); therefore, h(β̂) = \dfrac{\sqrt{s/q}}{\alpha^2} and h''(β̂) = \dfrac{1}{\alpha^2 q \sqrt{sq}}.
Then, using the Laplace approximation, the integral I ( α ) = 0 f ( β ) exp { n h ( β ) } d β can be approximated by
\hat{I}(\alpha) \approx \sqrt{\frac{2\pi}{n}}\, \alpha \sqrt{q\sqrt{sq}}\; \frac{\prod_{i=1}^{n} (\sqrt{sq} + t_i)}{(sq)^{n/4 + 1/2}} \exp\left\{ -\frac{n\sqrt{s/q}}{\alpha^2} \right\}.
By retaining only the terms involving α in Î(α), the approximate marginal posterior distribution of α becomes

\pi(\alpha \mid T) \propto \alpha^{-n} \exp\left\{ -\frac{n}{\alpha^2}\left( \sqrt{s/q} - 1 \right) \right\}, \quad \alpha > 0.
To obtain the marginal posterior of β, we integrate the joint posterior in Equation (A1) with respect to α:

\pi(\beta \mid T) \propto \frac{\prod_{i=1}^{n} (\beta + t_i)}{\beta^{(n/2)+1}} \int_{0}^{\infty} \frac{\exp\{ -Q(\beta)/\alpha^2 \}}{\alpha^{n+1}}\, d\alpha
 \propto \frac{\prod_{i=1}^{n} (\beta + t_i)}{\beta^{(n/2)+1}} \cdot \frac{\Gamma(n/2)}{2\,[Q(\beta)]^{n/2}}
 \propto \frac{\prod_{i=1}^{n} (\beta + t_i)}{\beta^{(n/2)+1} \left[ s/(2\beta) + \beta/(2q) - 1 \right]^{n/2}}, \quad \beta > 0.
Using similar arguments, the marginal posteriors based on Jeffrey's prior given in Equations (8) and (9) can be obtained.

References

  1. Birnbaum, Z.W.; Saunders, S.C. A new family of life distributions. J. Appl. Probab. 1969, 6, 319–327.
  2. Balakrishnan, N.; Kundu, D. Birnbaum-Saunders distribution: A review of models, analysis, and applications. Appl. Stoch. Model. Bus. Ind. 2019, 35, 4–49.
  3. Desmond, A.F. On the relationship between two fatigue-life models. IEEE Trans. Reliab. 1986, 35, 167–169.
  4. Bhattacharyya, G.; Fries, A. Fatigue Failure Models—Birnbaum-Saunders vs. Inverse Gaussian. IEEE Trans. Reliab. 1982, 31, 439–441.
  5. Owen, W.J.; Ng, H.K.T. Revisit of relationships and models for the Birnbaum-Saunders and inverse-Gaussian distributions. J. Stat. Distrib. Appl. 2015, 2, 11.
  6. Ng, H.K.T. Discussion of "Birnbaum-Saunders distribution: A review of models, analysis, and applications". Appl. Stoch. Model. Bus. Ind. 2019, 35, 64–71.
  7. Saunders, S.C. A family of random variables closed under reciprocation. J. Am. Stat. Assoc. 1974, 69, 533–539.
  8. Birnbaum, Z.W.; Saunders, S.C. Estimation for a family of life distributions with applications to fatigue. J. Appl. Probab. 1969, 6, 328–347.
  9. Ng, H.; Kundu, D.; Balakrishnan, N. Modified moment estimation for the two-parameter Birnbaum–Saunders distribution. Comput. Stat. Data Anal. 2003, 43, 283–298.
  10. Balakrishnan, N.; Zhu, X. An improved method of estimation for the parameters of the Birnbaum–Saunders distribution. J. Stat. Comput. Simul. 2014, 84, 2285–2294.
  11. Ng, H.; Kundu, D.; Balakrishnan, N. Point and interval estimation for the two-parameter Birnbaum–Saunders distribution based on Type-II censored samples. Comput. Stat. Data Anal. 2006, 50, 3222–3242.
  12. Wang, Z.; Desmond, A.F.; Lu, X. Modified censored moment estimation for the two-parameter Birnbaum–Saunders distribution. Comput. Stat. Data Anal. 2006, 50, 1033–1051.
  13. Jayalath, K.P. Fiducial Inference on the Right Censored Birnbaum–Saunders Data via Gibbs Sampler. Stats 2021, 4, 385–399.
  14. Achcar, J.A. Inferences for the Birnbaum–Saunders fatigue life model using Bayesian methods. Comput. Stat. Data Anal. 1993, 15, 367–380.
  15. Xu, A.; Tang, Y. Reference analysis for Birnbaum–Saunders distribution. Comput. Stat. Data Anal. 2010, 54, 185–192.
  16. Xu, A.; Tang, Y. Bayesian analysis of Birnbaum–Saunders distribution with partial information. Comput. Stat. Data Anal. 2011, 55, 2324–2333.
  17. Wang, M.; Sun, X.; Park, C. Bayesian analysis of Birnbaum–Saunders distribution via the generalized ratio-of-uniforms method. Comput. Stat. 2016, 31, 207–225.
  18. Sha, N.; Ng, T.L. Bayesian inference for Birnbaum–Saunders distribution and its generalization. J. Stat. Comput. Simul. 2017, 87, 2411–2429.
  19. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring. In Statistics for Industry and Technology; Springer: Berlin/Heidelberg, Germany, 2014.
  20. Pradhan, B.; Kundu, D. Inference and optimal censoring schemes for progressively censored Birnbaum–Saunders distribution. J. Stat. Plan. Inference 2013, 143, 1098–1108.
  21. Delignette-Muller, M.; Dutang, C. An R Package for Fitting Distributions. J. Stat. Softw. 2015, 61, 1–34.
  22. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2019.
  23. Brooks, S.P.; Gelman, A. General methods for monitoring convergence of iterative simulations. J. Comput. Graph. Stat. 1998, 7, 434–455.
  24. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis; Chapman and Hall/CRC: Boca Raton, FL, USA, 2013.
  25. Achcar, J.A.; Moala, F.A. Use of MCMC methods to obtain Bayesian inferences for the Birnbaum-Saunders distribution in the presence of censored data and covariates. Adv. Appl. Stat. 2010, 17, 1–27.
  26. Hobert, J.P.; Casella, G. The effect of improper priors on Gibbs sampling in hierarchical linear mixed models. J. Am. Stat. Assoc. 1996, 91, 1461–1473.
  27. McCool, J. Inferential Techniques for Weibull Populations; Aerospace Research Laboratories Report, Technical Report ARL TR 74-0180; Wright-Patterson AFB: Fairborn, OH, USA, 1974.
Figure 1. Kernel density estimates of Achcar's and Jeffrey's priors for censored fatigue life data. Top, middle, and bottom panels are for the 10%, 30%, and 60% censoring schemes, respectively.
Table 1. Mean and standard error of the point estimates and probability coverages of 95% credible intervals based on Monte Carlo simulation (n = 10). Each cell gives the point estimate (PE), standard error (SE), and coverage probability (CP).

α | CEP% | α̂ Jeffrey's (PE SE CP) | α̂ Achcar's (PE SE CP) | β̂ Jeffrey's (PE SE CP) | β̂ Achcar's (PE SE CP) | T̄ Jeffrey's (PE SE CP) | T̄ Achcar's (PE SE CP)
0.1 | 10 | 0.100 0.025 0.939 | 0.108 0.027 0.957 | 1.003 0.034 0.927 | 1.001 0.031 0.952 | 0.063 0.019 0.940 | 0.071 0.021 0.942
0.1 | 20 | 0.101 0.029 0.942 | 0.110 0.030 0.949 | 1.002 0.033 0.932 | 1.001 0.032 0.955 | 0.072 0.024 0.936 | 0.080 0.027 0.940
0.1 | 30 | 0.101 0.031 0.934 | 0.113 0.035 0.956 | 1.002 0.033 0.916 | 1.006 0.034 0.951 | 0.078 0.028 0.925 | 0.091 0.033 0.954
0.1 | 40 | 0.099 0.035 0.922 | 0.112 0.036 0.965 | 1.002 0.037 0.920 | 1.007 0.038 0.945 | 0.085 0.034 0.921 | 0.101 0.039 0.951
0.3 | 10 | 0.299 0.074 0.933 | 0.316 0.075 0.953 | 1.013 0.099 0.933 | 1.008 0.097 0.949 | 0.256 0.091 0.946 | 0.280 0.096 0.941
0.3 | 20 | 0.293 0.079 0.932 | 0.319 0.082 0.960 | 1.008 0.099 0.931 | 1.014 0.100 0.947 | 0.265 0.102 0.928 | 0.301 0.108 0.943
0.3 | 30 | 0.289 0.083 0.933 | 0.317 0.088 0.959 | 1.007 0.101 0.928 | 1.020 0.104 0.944 | 0.277 0.110 0.944 | 0.319 0.122 0.948
0.3 | 40 | 0.289 0.091 0.918 | 0.320 0.094 0.952 | 1.013 0.110 0.922 | 1.026 0.117 0.938 | 0.297 0.125 0.923 | 0.346 0.137 0.944
0.5 | 10 | 0.489 0.115 0.938 | 0.517 0.123 0.945 | 1.032 0.159 0.941 | 1.026 0.166 0.936 | 0.543 0.218 0.940 | 0.592 0.244 0.951
0.5 | 20 | 0.487 0.124 0.937 | 0.523 0.121 0.962 | 1.033 0.162 0.938 | 1.038 0.171 0.949 | 0.559 0.236 0.938 | 0.617 0.234 0.960
0.5 | 30 | 0.475 0.125 0.949 | 0.515 0.128 0.960 | 1.025 0.172 0.927 | 1.046 0.168 0.950 | 0.552 0.235 0.942 | 0.629 0.247 0.945
0.5 | 40 | 0.466 0.141 0.930 | 0.509 0.132 0.973 | 1.018 0.175 0.920 | 1.039 0.187 0.951 | 0.562 0.266 0.934 | 0.638 0.266 0.961
1 | 10 | 1.002 0.261 0.939 | 1.056 0.260 0.953 | 1.088 0.306 0.936 | 1.085 0.297 0.959 | 1.849 1.058 0.938 | 1.929 1.021 0.946
1 | 20 | 1.002 0.289 0.924 | 1.073 0.291 0.947 | 1.085 0.307 0.943 | 1.104 0.304 0.961 | 1.899 1.188 0.948 | 2.072 1.180 0.940
1 | 30 | 1.000 0.315 0.929 | 1.062 0.314 0.953 | 1.089 0.319 0.931 | 1.108 0.320 0.951 | 1.917 1.235 0.928 | 2.053 1.238 0.939
1 | 40 | 0.983 0.337 0.930 | 1.079 0.344 0.957 | 1.067 0.318 0.922 | 1.116 0.328 0.956 | 1.873 1.246 0.929 | 2.141 1.339 0.961
2 | 10 | 1.961 0.465 0.946 | 2.013 0.473 0.948 | 1.111 0.416 0.954 | 1.109 0.420 0.965 | 6.634 4.577 0.918 | 6.923 4.658 0.917
2 | 20 | 1.944 0.519 0.924 | 2.014 0.512 0.938 | 1.098 0.420 0.945 | 1.103 0.419 0.962 | 6.186 4.583 0.904 | 6.172 4.344 0.923
2 | 30 | 1.868 0.545 0.927 | 1.943 0.541 0.938 | 1.065 0.425 0.949 | 1.103 0.433 0.957 | 5.193 3.955 0.921 | 5.529 4.439 0.922
2 | 40 | 1.755 0.588 0.904 | 1.898 0.522 0.957 | 1.022 0.434 0.912 | 1.059 0.437 0.954 | 4.437 3.616 0.876 | 4.833 3.530 0.934
Table 2. Mean and standard error of the point estimates and probability coverages of 95% credible intervals based on Monte Carlo simulation (n = 20). Each cell gives the point estimate (PE), standard error (SE), and coverage probability (CP).

α | CEP% | α̂ Jeffrey's (PE SE CP) | α̂ Achcar's (PE SE CP) | β̂ Jeffrey's (PE SE CP) | β̂ Achcar's (PE SE CP) | T̄ Jeffrey's (PE SE CP) | T̄ Achcar's (PE SE CP)
0.1 | 10 | 0.099 0.018 0.939 | 0.103 0.018 0.943 | 0.999 0.022 0.943 | 1.002 0.023 0.944 | 0.059 0.013 0.947 | 0.062 0.013 0.938
0.1 | 20 | 0.100 0.019 0.936 | 0.104 0.020 0.951 | 1.000 0.023 0.940 | 1.002 0.024 0.944 | 0.066 0.015 0.937 | 0.071 0.016 0.951
0.1 | 30 | 0.099 0.022 0.933 | 0.104 0.022 0.963 | 0.999 0.024 0.945 | 1.001 0.024 0.937 | 0.073 0.018 0.929 | 0.078 0.019 0.950
0.1 | 40 | 0.100 0.024 0.936 | 0.105 0.025 0.957 | 1.000 0.025 0.934 | 1.003 0.026 0.945 | 0.081 0.022 0.929 | 0.087 0.023 0.946
0.3 | 10 | 0.302 0.054 0.944 | 0.310 0.054 0.950 | 1.004 0.066 0.939 | 1.007 0.068 0.949 | 0.245 0.066 0.954 | 0.257 0.068 0.954
0.3 | 20 | 0.303 0.059 0.937 | 0.313 0.060 0.954 | 1.006 0.070 0.938 | 1.005 0.069 0.948 | 0.263 0.075 0.944 | 0.277 0.078 0.953
0.3 | 30 | 0.299 0.062 0.946 | 0.308 0.064 0.948 | 1.006 0.071 0.938 | 1.010 0.076 0.934 | 0.274 0.081 0.944 | 0.288 0.086 0.949
0.3 | 40 | 0.299 0.069 0.945 | 0.322 0.070 0.953 | 1.002 0.076 0.935 | 1.015 0.080 0.949 | 0.291 0.093 0.935 | 0.325 0.099 0.957
0.5 | 10 | 0.504 0.089 0.938 | 0.522 0.088 0.959 | 1.016 0.116 0.935 | 1.018 0.113 0.948 | 0.540 0.170 0.961 | 0.571 0.170 0.959
0.5 | 20 | 0.499 0.096 0.937 | 0.513 0.093 0.953 | 1.019 0.114 0.954 | 1.020 0.115 0.939 | 0.548 0.179 0.945 | 0.573 0.179 0.959
0.5 | 30 | 0.494 0.100 0.940 | 0.509 0.096 0.961 | 1.019 0.122 0.937 | 1.020 0.124 0.939 | 0.556 0.192 0.937 | 0.582 0.189 0.952
0.5 | 40 | 0.501 0.110 0.941 | 0.519 0.111 0.951 | 1.017 0.125 0.952 | 1.029 0.136 0.945 | 0.586 0.212 0.945 | 0.625 0.222 0.962
1 | 10 | 0.998 0.177 0.947 | 1.029 0.182 0.947 | 1.049 0.218 0.942 | 1.050 0.220 0.951 | 1.841 0.745 0.948 | 1.956 0.798 0.954
1 | 20 | 1.003 0.200 0.939 | 1.038 0.208 0.939 | 1.052 0.220 0.950 | 1.073 0.236 0.955 | 1.847 0.844 0.938 | 2.002 0.963 0.948
1 | 30 | 1.004 0.218 0.942 | 1.051 0.234 0.943 | 1.051 0.230 0.945 | 1.070 0.255 0.951 | 1.843 0.917 0.945 | 2.054 1.129 0.957
1 | 40 | 1.008 0.248 0.952 | 1.063 0.265 0.941 | 1.070 0.259 0.953 | 1.093 0.264 0.958 | 1.933 1.138 0.951 | 2.155 1.236 0.946
2 | 10 | 2.021 0.364 0.935 | 2.055 0.361 0.942 | 1.103 0.328 0.952 | 1.106 0.329 0.957 | 7.592 3.552 0.945 | 7.841 3.705 0.955
2 | 20 | 1.988 0.395 0.928 | 2.025 0.374 0.951 | 1.103 0.360 0.934 | 1.118 0.365 0.943 | 7.070 3.777 0.944 | 7.367 3.807 0.951
2 | 30 | 1.972 0.425 0.953 | 2.032 0.412 0.957 | 1.101 0.358 0.955 | 1.122 0.369 0.959 | 6.813 3.815 0.949 | 7.250 3.888 0.946
2 | 40 | 1.997 0.458 0.950 | 2.046 0.443 0.966 | 1.106 0.381 0.953 | 1.136 0.376 0.963 | 6.844 4.108 0.957 | 7.219 3.982 0.963
Table 3. Mean and standard error of the point estimates and probability coverages of 95% credible intervals based on Monte Carlo simulation (n = 30). Each cell gives the point estimate (PE), standard error (SE), and coverage probability (CP).

α | CEP% | α̂ Jeffrey's (PE SE CP) | α̂ Achcar's (PE SE CP) | β̂ Jeffrey's (PE SE CP) | β̂ Achcar's (PE SE CP) | T̄ Jeffrey's (PE SE CP) | T̄ Achcar's (PE SE CP)
0.1 | 10 | 0.099 0.014 0.943 | 0.101 0.014 0.956 | 1.000 0.018 0.946 | 0.999 0.019 0.938 | 0.058 0.010 0.949 | 0.060 0.010 0.954
0.1 | 20 | 0.101 0.015 0.952 | 0.102 0.015 0.948 | 1.000 0.019 0.947 | 1.001 0.019 0.936 | 0.066 0.012 0.950 | 0.068 0.012 0.951
0.1 | 30 | 0.100 0.017 0.937 | 0.103 0.018 0.939 | 1.001 0.020 0.938 | 1.000 0.020 0.955 | 0.073 0.015 0.953 | 0.076 0.015 0.949
0.1 | 40 | 0.100 0.020 0.933 | 0.103 0.019 0.955 | 1.000 0.021 0.932 | 1.001 0.020 0.958 | 0.080 0.018 0.925 | 0.083 0.017 0.963
0.3 | 10 | 0.301 0.043 0.948 | 0.306 0.042 0.956 | 1.006 0.056 0.940 | 1.003 0.055 0.943 | 0.239 0.052 0.943 | 0.245 0.052 0.940
0.3 | 20 | 0.298 0.046 0.941 | 0.308 0.047 0.957 | 1.001 0.056 0.940 | 1.005 0.057 0.957 | 0.251 0.057 0.940 | 0.265 0.059 0.956
0.3 | 30 | 0.300 0.051 0.926 | 0.309 0.053 0.952 | 1.003 0.060 0.944 | 1.004 0.060 0.948 | 0.269 0.066 0.945 | 0.282 0.068 0.960
0.3 | 40 | 0.295 0.057 0.933 | 0.311 0.058 0.949 | 1.002 0.061 0.938 | 1.006 0.062 0.959 | 0.281 0.075 0.934 | 0.302 0.081 0.947
0.5 | 10 | 0.502 0.069 0.956 | 0.506 0.075 0.938 | 1.013 0.093 0.946 | 1.010 0.096 0.934 | 0.526 0.128 0.948 | 0.533 0.142 0.935
0.5 | 20 | 0.502 0.075 0.952 | 0.513 0.084 0.932 | 1.010 0.093 0.951 | 1.013 0.095 0.948 | 0.537 0.140 0.952 | 0.560 0.159 0.932
0.5 | 30 | 0.500 0.086 0.935 | 0.518 0.086 0.947 | 1.012 0.098 0.938 | 1.017 0.098 0.956 | 0.552 0.162 0.948 | 0.583 0.165 0.954
0.5 | 40 | 0.502 0.094 0.946 | 0.517 0.092 0.962 | 1.015 0.106 0.937 | 1.015 0.104 0.955 | 0.574 0.177 0.925 | 0.599 0.179 0.962
1 | 10 | 0.994 0.145 0.945 | 1.024 0.149 0.956 | 1.038 0.175 0.952 | 1.042 0.180 0.937 | 1.785 0.593 0.942 | 1.895 0.660 0.936
1 | 20 | 1.004 0.162 0.935 | 1.030 0.161 0.948 | 1.028 0.181 0.943 | 1.037 0.181 0.954 | 1.755 0.650 0.951 | 1.850 0.668 0.946
1 | 30 | 1.002 0.179 0.946 | 1.024 0.186 0.938 | 1.040 0.205 0.937 | 1.053 0.200 0.945 | 1.755 0.763 0.945 | 1.840 0.781 0.953
1 | 40 | 1.005 0.210 0.938 | 1.043 0.211 0.951 | 1.049 0.216 0.945 | 1.075 0.232 0.944 | 1.799 0.921 0.940 | 1.954 0.973 0.949
2 | 10 | 2.010 0.301 0.933 | 2.027 0.295 0.941 | 1.074 0.286 0.939 | 1.075 0.267 0.958 | 7.219 2.993 0.946 | 7.284 2.882 0.949
2 | 20 | 2.008 0.324 0.945 | 2.024 0.336 0.929 | 1.078 0.299 0.937 | 1.066 0.295 0.943 | 6.914 3.202 0.948 | 6.970 3.344 0.937
2 | 30 | 2.028 0.373 0.947 | 2.052 0.374 0.946 | 1.114 0.339 0.934 | 1.117 0.325 0.953 | 7.069 3.779 0.951 | 7.232 3.739 0.936
2 | 40 | 1.996 0.411 0.943 | 2.053 0.409 0.953 | 1.101 0.347 0.959 | 1.148 0.355 0.962 | 6.656 3.848 0.941 | 7.221 4.098 0.955
Table 4. Mean and standard error of the point estimates and probability coverages of 95% credible intervals based on Monte Carlo simulation (n = 50). Each cell gives the point estimate (PE), standard error (SE), and coverage probability (CP).

α | CEP% | α̂ Jeffrey's (PE SE CP) | α̂ Achcar's (PE SE CP) | β̂ Jeffrey's (PE SE CP) | β̂ Achcar's (PE SE CP) | T̄ Jeffrey's (PE SE CP) | T̄ Achcar's (PE SE CP)
0.1 | 10 | 0.100 0.011 0.950 | 0.101 0.011 0.949 | 1.000 0.014 0.945 | 0.999 0.014 0.958 | 0.057 0.007 0.950 | 0.058 0.008 0.944
0.1 | 20 | 0.100 0.012 0.935 | 0.101 0.012 0.941 | 0.999 0.014 0.942 | 1.000 0.015 0.943 | 0.065 0.009 0.947 | 0.066 0.009 0.945
0.1 | 30 | 0.100 0.013 0.934 | 0.102 0.013 0.953 | 1.000 0.015 0.940 | 1.000 0.016 0.936 | 0.072 0.011 0.935 | 0.073 0.011 0.932
0.1 | 40 | 0.099 0.014 0.945 | 0.102 0.015 0.945 | 1.000 0.016 0.939 | 1.000 0.016 0.952 | 0.078 0.013 0.944 | 0.081 0.013 0.938
0.3 | 10 | 0.302 0.033 0.946 | 0.304 0.033 0.948 | 1.003 0.042 0.945 | 1.003 0.042 0.949 | 0.236 0.039 0.950 | 0.239 0.039 0.935
0.3 | 20 | 0.299 0.035 0.952 | 0.306 0.034 0.954 | 1.003 0.045 0.928 | 1.000 0.042 0.949 | 0.249 0.043 0.945 | 0.256 0.042 0.952
0.3 | 30 | 0.298 0.039 0.942 | 0.305 0.041 0.949 | 1.002 0.046 0.941 | 1.002 0.044 0.951 | 0.263 0.049 0.946 | 0.272 0.051 0.942
0.3 | 40 | 0.301 0.045 0.930 | 0.306 0.045 0.945 | 1.004 0.049 0.944 | 1.002 0.049 0.947 | 0.282 0.059 0.946 | 0.289 0.060 0.939
0.5 | 10 | 0.500 0.056 0.943 | 0.506 0.055 0.946 | 1.004 0.071 0.937 | 1.007 0.069 0.959 | 0.511 0.103 0.954 | 0.522 0.103 0.933
0.5 | 20 | 0.502 0.057 0.967 | 0.504 0.059 0.950 | 1.004 0.072 0.939 | 1.007 0.074 0.942 | 0.527 0.107 0.949 | 0.532 0.111 0.944
0.5 | 30 | 0.500 0.067 0.944 | 0.507 0.067 0.942 | 1.007 0.076 0.945 | 1.014 0.075 0.946 | 0.540 0.125 0.963 | 0.554 0.126 0.947
0.5 | 40 | 0.502 0.076 0.942 | 0.511 0.071 0.959 | 1.011 0.082 0.948 | 1.009 0.085 0.942 | 0.561 0.143 0.951 | 0.574 0.139 0.947
1 | 10 | 0.994 0.106 0.955 | 1.011 0.109 0.954 | 1.021 0.130 0.952 | 1.020 0.129 0.960 | 1.730 0.422 0.943 | 1.780 0.439 0.947
1 | 20 | 1.006 0.118 0.963 | 1.016 0.120 0.953 | 1.013 0.137 0.946 | 1.022 0.144 0.939 | 1.699 0.460 0.934 | 1.744 0.484 0.934
1 | 30 | 1.003 0.138 0.940 | 1.018 0.141 0.942 | 1.023 0.148 0.952 | 1.030 0.158 0.941 | 1.669 0.533 0.950 | 1.729 0.580 0.947
1 | 40 | 1.008 0.169 0.919 | 1.017 0.159 0.944 | 1.035 0.169 0.941 | 1.043 0.171 0.942 | 1.694 0.684 0.941 | 1.724 0.644 0.957
2 | 10 | 1.992 0.227 0.936 | 2.017 0.231 0.945 | 1.033 0.207 0.948 | 1.050 0.218 0.949 | 6.746 2.200 0.943 | 7.021 2.326 0.947
2 | 20 | 2.005 0.251 0.945 | 2.030 0.252 0.956 | 1.054 0.227 0.954 | 1.069 0.237 0.953 | 6.582 2.434 0.939 | 6.816 2.534 0.962
2 | 30 | 2.008 0.299 0.939 | 2.031 0.301 0.937 | 1.070 0.262 0.953 | 1.074 0.286 0.938 | 6.447 3.002 0.953 | 6.594 3.095 0.938
2 | 40 | 2.016 0.337 0.945 | 2.079 0.359 0.927 | 1.087 0.311 0.936 | 1.121 0.312 0.937 | 6.377 3.361 0.936 | 6.976 3.685 0.941
Table 5. Parameter estimates on cancer data.

Method | α PE | α 95% CI | Width | β PE | β 95% CI | Width | T̄ PE | T̄ 95% CI
Jeffrey's | 0.849 | (0.603, 1.211) | 0.608 | 15.335 | (10.481, 22.127) | 11.646 | 18.560 | (3.113, 53.284)
Achcar's | 0.874 | (0.614, 1.284) | 0.670 | 15.393 | (10.376, 22.648) | 12.272 | 19.393 | (3.076, 59.307)
S-N MLE (2017) | 0.974 | (0.127, 1.821) | 1.693 | 15.629 | (9.614, 21.644) | 12.030 | — | —
S-N Bayesian (2017) | 0.962 | (0.604, 1.510) | 0.907 | 15.411 | (10.489, 21.696) | 11.207 | — | —
A-M (2010) | 0.885 | (0.610, 1.295) | 0.685 | 16.030 | (10.930, 24.360) | 13.430 | — | —
Table 6. Initial parameter estimates on Type-II-censored fatigue life data.

CEP | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6
α̂_MLE^(0) | 0.169 | 0.174 | 0.172 | 0.182 | 0.184 | 0.210
β̂_MLE^(0) | 131.489 | 131.900 | 131.804 | 132.940 | 133.225 | 137.270
True T̄ | 10.20 | 11.55 | 15.00 | 14.85 | 17.28 | 16.25
Table 7. Point estimates (α̂, β̂, and T̄) and widths of the corresponding 95% credible intervals for different censoring percentages.

DOC% | Param | Jeffrey's PE | Jeffrey's Width | Achcar's PE | Achcar's Width
10 | α | 0.169 | 0.051 | 0.170 | 0.051
10 | β | 131.582 | 8.926 | 131.501 | 8.796
10 | T̄ | 14.348 | 17.628 | 14.519 | 18.105
20 | α | 0.174 | 0.058 | 0.175 | 0.058
20 | β | 131.900 | 9.386 | 131.994 | 9.356
20 | T̄ | 16.318 | 15.297 | 16.421 | 15.277
30 | α | 0.172 | 0.063 | 0.174 | 0.063
30 | β | 131.830 | 9.601 | 131.883 | 9.876
30 | T̄ | 17.759 | 14.654 | 17.951 | 14.705
40 | α | 0.182 | 0.072 | 0.184 | 0.074
40 | β | 132.993 | 10.921 | 133.101 | 11.091
40 | T̄ | 20.405 | 16.228 | 20.691 | 16.533
50 | α | 0.185 | 0.082 | 0.187 | 0.082
50 | β | 133.276 | 12.267 | 133.469 | 12.271
50 | T̄ | 22.541 | 17.959 | 22.881 | 18.037
60 | α | 0.211 | 0.106 | 0.214 | 0.108
60 | β | 137.281 | 15.897 | 137.569 | 16.052
60 | T̄ | 28.768 | 23.985 | 29.227 | 24.213
Table 8. Parameter estimates on Type-II progressively censored ball bearings' data.

Scheme | Method | α PE | α 95% CI | β PE | β 95% CI | T̄ PE | T̄ 95% CI
Scheme I | Jeffrey's | 0.182 | (0.107, 0.327) | 201.939 | (175.660, 236.367) | 57.921 | (21.599, 116.568)
Scheme I | Achcar's | 0.199 | (0.113, 0.368) | 201.993 | (174.342, 236.251) | 60.627 | (21.944, 126.152)
Scheme II | Jeffrey's | 0.173 | (0.095, 0.337) | 201.058 | (178.361, 233.527) | 36.467 | (9.779, 94.211)
Scheme II | Achcar's | 0.195 | (0.102, 0.400) | 201.686 | (177.561, 235.060) | 41.159 | (10.182, 109.039)