
Analysis of the Stress–Strength Model Using Uniform Truncated Negative Binomial Distribution under Progressive Type-II Censoring

by Rashad M. EL-Sagheer 1,2, Mohamed S. Eliwa 3,4,*, Mahmoud El-Morshedy 5,6,*, Laila A. Al-Essa 7, Afrah Al-Bossly 5 and Amel Abd-El-Monem 8
1 Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Egypt
2 High Institute of Computer and Management Information System, First Statement, New Cairo 11865, Egypt
3 Department of Statistics and Operation Research, College of Science, Qassim University, Buraydah 51482, Saudi Arabia
4 Department of Statistics and Computer Science, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
5 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
6 Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
7 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
8 Department of Mathematics, Faculty of Education, Ain-Shams University, Cairo 11341, Egypt
* Authors to whom correspondence should be addressed.
Axioms 2023, 12(10), 949; https://doi.org/10.3390/axioms12100949
Submission received: 26 July 2023 / Revised: 28 September 2023 / Accepted: 4 October 2023 / Published: 6 October 2023
(This article belongs to the Section Mathematical Analysis)

Abstract

In this study, we introduce a novel estimation technique for assessing the reliability parameter R = P ( Y < X ) of the uniform truncated negative binomial distribution (UTNBD) in the context of stress–strength analysis. We base our inferences on the assumption that both the strength (X) and stress (Y) random variables follow a UTNBD with identical first shape and scale parameters. In the presence of a progressive type-II censoring scheme, we employ maximum likelihood, two parametric bootstrap methods, and Bayesian estimation approaches to derive the estimators. Due to the complexity introduced by censoring, the estimators are not available in explicit forms and are instead obtained through numerical approximation techniques. Furthermore, we compute the highest posterior density credible intervals and determine the asymptotic variance-covariance matrix. To assess the performance of our proposed estimators, we conduct a Monte Carlo simulation study and provide a comparative analysis. Finally, we illustrate the practical applicability of our study with an engineering application.

1. Introduction

Owing to its relevance in several fields, including engineering, economics, and quality control, and its countless applications to medical and engineering problems in recent years, the stress–strength model has long drawn the attention of statisticians. Stress–strength patterns associated with any system or piece of equipment are often analyzed using stress–strength reliability (SSR) models in the discipline of mechanical engineering. The SSR model evaluates the performance of a system with strength X subjected to an applied stress Y; the mechanism fails if the applied stress exceeds the strength. At present, the SSR model is frequently applied to evaluate the probability that one device fails before the other in a life-testing experiment in which X and Y denote the lifetimes of two devices. The system behaves as intended when X exceeds Y, and the SSR parameter is R = P(Y < X). In this setting, it is essential to estimate a component's reliability characteristics; this helps us assess how effectively a product's operating procedure works and enables preventative measures that avoid interruptions of the production process. The monograph by Kotz et al. [1] describes in detail the various SSR models developed before 2001. Inference for stress–strength models under complete samples has received a great deal of attention (see, for instance, [2,3,4,5,6,7,8,9,10]), and much focus has also been placed on the situation where the data are record values; see [11,12,13,14,15,16,17].
Censoring occurs in a life-testing experiment when the exact lifetimes are known only for a portion of the items, and the remaining lifetimes are known only to exceed specific values. Type-I and type-II censoring schemes are the two most widely used censoring techniques in the literature. Type-I censoring ends the experiment after a predetermined amount of time, whereas type-II censoring ends the experiment after a predetermined number of failures. However, these censoring strategies do not allow the removal of units from the test at points other than the final termination point. A more comprehensive censoring method, termed the progressive censoring scheme, which permits the removal of units from the test at points other than the final termination point, serves this goal. It can be described as follows: consider a scenario where n units are tested but only m failures are fully observed. At the time of the first failure, X_{1:m:n}, R_1 of the surviving units are randomly selected and removed from the remaining (n − 1) units. When the second failure, X_{2:m:n}, is observed, R_2 of the surviving units are randomly selected and removed from the remaining (n − 2 − R_1) units. Finally, at the mth stage, when X_{m:m:n} is observed, all R_m remaining surviving units are removed from the experiment. The censored sample of size m obtained in this way is known as a progressive type-II censored (PTIIC) sample with censoring scheme (R_1, R_2, …, R_m). For more details, we refer the reader to Balakrishnan and Sandhu [15]. The issue of estimating the SSR parameter for various sampling schemes and distributions of X and Y has been the subject of extensive research by several statistical scholars, including [19,20,21,22].
The uniform truncated negative binomial distribution is frequently used in reliability analysis. When contrasted with the well-known families of distributions, such as the Weibull, gamma, and generalized exponential, it can occasionally serve as a good substitute. Kamel et al. [23] first proposed the UTNBD and studied some of its statistical and reliability characteristics, including the shape behavior of the density and hazard rate functions, the mean residual life and moment generating functions, the limiting distribution of sample extremes, quantiles, kurtosis, skewness, entropies, and stochastic orderings. They also conducted a straightforward investigation of maximum likelihood estimation using censored real data. In this article, the UTNBD is examined in a stress–strength study under a progressive type-II censoring scheme. The probability density function (PDF) and cumulative distribution function (CDF) of the UTNBD are given by
f(x;\alpha,\theta,\lambda)=\frac{\lambda\,\alpha^{\lambda}(1-\alpha)}{\theta\left(1-\alpha^{\lambda}\right)}\left[\alpha+\frac{(1-\alpha)}{\theta}\,x\right]^{-(\lambda+1)},\qquad \alpha,\lambda>0,\ 0<x<\theta,   (1)
and
F(x;\alpha,\theta,\lambda)=1-\frac{\alpha^{\lambda}}{1-\alpha^{\lambda}}\left\{\left[\alpha+\frac{(1-\alpha)}{\theta}\,x\right]^{-\lambda}-1\right\},\qquad \alpha,\lambda>0,\ 0<x<\theta,   (2)
where α and λ are the shape parameters and θ is the scale parameter. The UTNBD contains the uniform and Marshall–Olkin extended uniform (MOEU) distributions as sub-models: as α → 1, UTNB(α, θ, λ) reduces to U(0, θ), and when λ = 1 the MOEU distribution is obtained. Let X and Y denote two independent strength and stress random variables drawn from UTNB(α, θ, λ1) and UTNB(α, θ, λ2), respectively. The SSR parameter is assessed under the assumption that the models share the same first shape and scale parameters but have distinct second shape parameters, i.e., X ∼ UTNB(α, θ, λ1) and Y ∼ UTNB(α, θ, λ2). In light of this, the SSR parameter R is
R=P(Y<X)=\int_{0}^{\theta}P(X>Y\mid Y=y)\,f_{2}(y;\alpha,\theta,\lambda_{2})\,dy=\int_{0}^{\theta}\bar{F}_{1}(y;\alpha,\theta,\lambda_{1})\,f_{2}(y;\alpha,\theta,\lambda_{2})\,dy=\frac{\lambda_{2}+\lambda_{1}\alpha^{\lambda_{1}+\lambda_{2}}-(\lambda_{1}+\lambda_{2})\,\alpha^{\lambda_{1}}}{(\lambda_{1}+\lambda_{2})\left(\alpha^{\lambda_{1}}-1\right)\left(\alpha^{\lambda_{2}}-1\right)}=\Omega(\alpha,\lambda_{1},\lambda_{2}).   (3)
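For readers who wish to sanity-check Equation (3) numerically, the short Python sketch below evaluates Ω(α, λ1, λ2) and reproduces the two population values of R that are later used as "true" values in the simulation study of Section 5. The function name `omega` is our own illustration, not code from the paper.

```python
import numpy as np

def omega(alpha, lam1, lam2):
    """Stress-strength reliability R = P(Y < X) from Equation (3).

    X ~ UTNB(alpha, theta, lam1) and Y ~ UTNB(alpha, theta, lam2); the common
    scale parameter theta cancels and does not appear in R.
    """
    a1 = alpha ** lam1
    a2 = alpha ** lam2
    a12 = alpha ** (lam1 + lam2)
    num = lam2 + lam1 * a12 - (lam1 + lam2) * a1
    den = (lam1 + lam2) * (a1 - 1.0) * (a2 - 1.0)
    return num / den

# The two parameter sets used in Section 5:
print(round(omega(3.0, 1.5, 5.0), 6))    # -> 0.282586, matching the first true value of R
print(round(omega(0.022, 1.0, 4.5), 6))  # -> 0.814092, matching the second true value of R
```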
The article's general structure is as follows. The maximum likelihood estimates and asymptotic confidence intervals are covered in Section 2. In Section 3, two parametric bootstrap techniques are suggested. Section 4 examines estimation in a Bayesian framework. A simulation study comparing the suggested techniques is carried out in Section 5. A case study using actual data is given in Section 6 to show how the suggested inference procedures can be applied. In Section 7, a summary of the findings is presented.

2. Maximum Likelihood Estimation

Maximum likelihood estimation (MLE) is a popular method for parameter estimation in statistical models. It is a technique for determining the unknown parameters of a statistical model from observed data: the set of parameter values that maximizes the probability of obtaining the observed data is taken as the most likely value of the unknown parameters. MLE is a well-known and frequently used method because it provides a straightforward way to estimate the characteristics of a population from a sample in many scientific fields, and it is computationally simple to apply. Let us assume that x_{1:m1:n1}, x_{2:m1:n1}, …, x_{m1:m1:n1} and y_{1:m2:n2}, y_{2:m2:n2}, …, y_{m2:m2:n2} are two PTIIC samples of the strength X and the stress Y under the schemes (n1, m1, S1, S2, …, S_{m1}) and (n2, m2, T1, T2, …, T_{m2}), respectively. Then, in this reliability scheme, the likelihood function (LF) of the observed samples is given by (see [15])
L(\alpha,\theta,\lambda_{1},\lambda_{2}\mid\underline{x},\underline{y})=C\prod_{i=1}^{m_{1}}f(x_{i};\alpha,\theta,\lambda_{1})\left[1-F(x_{i};\alpha,\theta,\lambda_{1})\right]^{S_{i}}\times\prod_{j=1}^{m_{2}}f(y_{j};\alpha,\theta,\lambda_{2})\left[1-F(y_{j};\alpha,\theta,\lambda_{2})\right]^{T_{j}},   (4)
where x_i = x_{i:m1:n1} and y_j = y_{j:m2:n2} to simplify the notation, and
C=C_{S}\,C_{T},\quad C_{S}=n_{1}\left(n_{1}-1-S_{1}\right)\left(n_{1}-2-S_{1}-S_{2}\right)\cdots\left(n_{1}-m_{1}+1-S_{1}-\cdots-S_{m_{1}-1}\right),\quad C_{T}=n_{2}\left(n_{2}-1-T_{1}\right)\left(n_{2}-2-T_{1}-T_{2}\right)\cdots\left(n_{2}-m_{2}+1-T_{1}-\cdots-T_{m_{2}-1}\right),
or, equivalently,
L(\alpha,\theta,\lambda_{1},\lambda_{2})\propto\lambda_{1}^{m_{1}}\lambda_{2}^{m_{2}}\left(\frac{1-\alpha}{\theta}\right)^{m_{1}+m_{2}}\left(\frac{\alpha^{\lambda_{1}}}{1-\alpha^{\lambda_{1}}}\right)^{m_{1}}\left(\frac{\alpha^{\lambda_{2}}}{1-\alpha^{\lambda_{2}}}\right)^{m_{2}}\times\exp\left[-(\lambda_{1}+1)\sum_{i=1}^{m_{1}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)-(\lambda_{2}+1)\sum_{j=1}^{m_{2}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)\right]\times\exp\left[\sum_{i=1}^{m_{1}}S_{i}\ln\frac{\alpha^{\lambda_{1}}}{1-\alpha^{\lambda_{1}}}+\sum_{i=1}^{m_{1}}S_{i}\ln\left(\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-\lambda_{1}}-1\right)\right]\times\exp\left[\sum_{j=1}^{m_{2}}T_{j}\ln\frac{\alpha^{\lambda_{2}}}{1-\alpha^{\lambda_{2}}}+\sum_{j=1}^{m_{2}}T_{j}\ln\left(\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-\lambda_{2}}-1\right)\right].   (5)
Consequently, the natural logarithm of the likelihood function (5), without the additive constant, is
\ell(\alpha,\theta,\lambda_{1},\lambda_{2})\propto m_{1}\ln\lambda_{1}+m_{2}\ln\lambda_{2}+(m_{1}+m_{2})\ln\frac{1-\alpha}{\theta}+\left(m_{1}+\sum_{i=1}^{m_{1}}S_{i}\right)\ln\frac{\alpha^{\lambda_{1}}}{1-\alpha^{\lambda_{1}}}+\left(m_{2}+\sum_{j=1}^{m_{2}}T_{j}\right)\ln\frac{\alpha^{\lambda_{2}}}{1-\alpha^{\lambda_{2}}}-(\lambda_{1}+1)\sum_{i=1}^{m_{1}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)+\sum_{i=1}^{m_{1}}S_{i}\ln\left(\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-\lambda_{1}}-1\right)-(\lambda_{2}+1)\sum_{j=1}^{m_{2}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)+\sum_{j=1}^{m_{2}}T_{j}\ln\left(\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-\lambda_{2}}-1\right).   (6)
Moreover, θ is a known common scale parameter. The MLEs of the parameters α, λ1, and λ2 are obtained by differentiating expression (6) with respect to each parameter and equating the derivatives to zero. The resulting likelihood equations are
\frac{\partial\ell}{\partial\alpha}=-\frac{m_{1}+m_{2}}{1-\alpha}+\frac{\left(m_{1}+\sum_{i=1}^{m_{1}}S_{i}\right)\lambda_{1}}{\alpha\left(1-\alpha^{\lambda_{1}}\right)}+\frac{\left(m_{2}+\sum_{j=1}^{m_{2}}T_{j}\right)\lambda_{2}}{\alpha\left(1-\alpha^{\lambda_{2}}\right)}-\sum_{i=1}^{m_{1}}\frac{S_{i}\,\lambda_{1}\left(1-\frac{x_{i}}{\theta}\right)\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-(\lambda_{1}+1)}}{\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-\lambda_{1}}-1}-\sum_{j=1}^{m_{2}}\frac{T_{j}\,\lambda_{2}\left(1-\frac{y_{j}}{\theta}\right)\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-(\lambda_{2}+1)}}{\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-\lambda_{2}}-1}-(\lambda_{1}+1)\sum_{i=1}^{m_{1}}\frac{1-\frac{x_{i}}{\theta}}{\alpha+\frac{(1-\alpha)}{\theta}x_{i}}-(\lambda_{2}+1)\sum_{j=1}^{m_{2}}\frac{1-\frac{y_{j}}{\theta}}{\alpha+\frac{(1-\alpha)}{\theta}y_{j}},   (7)
\frac{\partial\ell}{\partial\lambda_{1}}=\frac{m_{1}}{\lambda_{1}}+\frac{\left(m_{1}+\sum_{i=1}^{m_{1}}S_{i}\right)\ln\alpha}{1-\alpha^{\lambda_{1}}}-\sum_{i=1}^{m_{1}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)-\sum_{i=1}^{m_{1}}\frac{S_{i}\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-\lambda_{1}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)}{\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-\lambda_{1}}-1},   (8)
and
\frac{\partial\ell}{\partial\lambda_{2}}=\frac{m_{2}}{\lambda_{2}}+\frac{\left(m_{2}+\sum_{j=1}^{m_{2}}T_{j}\right)\ln\alpha}{1-\alpha^{\lambda_{2}}}-\sum_{j=1}^{m_{2}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)-\sum_{j=1}^{m_{2}}\frac{T_{j}\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-\lambda_{2}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)}{\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-\lambda_{2}}-1}.   (9)
To obtain the MLEs of the parameters, Equations (7)–(9) must be solved simultaneously. Unfortunately, these equations cannot be solved analytically, so the MLEs are evaluated using a numerical iterative technique; in this case, a non-linear maximization method was used (see [24]). By the invariance property of the MLEs, substituting the estimates into (3) yields the MLE of R, denoted by R̂_ML:
\hat{R}_{ML}=\frac{\hat{\lambda}_{2}+\hat{\lambda}_{1}\hat{\alpha}^{\hat{\lambda}_{1}+\hat{\lambda}_{2}}-(\hat{\lambda}_{1}+\hat{\lambda}_{2})\,\hat{\alpha}^{\hat{\lambda}_{1}}}{(\hat{\lambda}_{1}+\hat{\lambda}_{2})\left(\hat{\alpha}^{\hat{\lambda}_{1}}-1\right)\left(\hat{\alpha}^{\hat{\lambda}_{2}}-1\right)}=\Omega(\hat{\alpha},\hat{\lambda}_{1},\hat{\lambda}_{2}).   (10)
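Because Equations (7)–(9) have no closed-form solution, the MLEs must be found numerically. The following Python sketch is our own illustration (not the Mathematica routine used by the authors): it evaluates the PTIIC log-likelihood (6), written so that it is valid on either side of α = 1, and maximizes it with a derivative-free optimizer. Here `x, S` and `y, T` are the censored samples and their removal vectors, and θ is treated as known; the function names are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def ptiic_loglik(params, x, S, y, T, theta):
    """Log-likelihood (6) for PTIIC samples from UTNB(alpha, theta, lam1) and UTNB(alpha, theta, lam2)."""
    alpha, lam1, lam2 = params
    if alpha <= 0 or abs(alpha - 1.0) < 1e-8 or lam1 <= 0 or lam2 <= 0:
        return -np.inf

    def log_f(t, lam):      # log density; (1-alpha)/(1-alpha**lam) > 0 for alpha < 1 and alpha > 1
        u = alpha + (1.0 - alpha) * t / theta
        return (np.log(lam) + lam * np.log(alpha) + np.log((1.0 - alpha) / (1.0 - alpha**lam))
                - np.log(theta) - (lam + 1.0) * np.log(u))

    def log_surv(t, lam):   # log of 1 - F, again positive-ratio form on both sides of alpha = 1
        u = alpha + (1.0 - alpha) * t / theta
        return lam * np.log(alpha) + np.log((u**(-lam) - 1.0) / (1.0 - alpha**lam))

    return (np.sum(log_f(x, lam1) + S * log_surv(x, lam1))
            + np.sum(log_f(y, lam2) + T * log_surv(y, lam2)))

def fit_mle(x, S, y, T, theta, start=(0.5, 1.0, 1.0)):
    """Numerical MLEs of (alpha, lam1, lam2); trying a few starting values is advisable."""
    res = minimize(lambda p: -ptiic_loglik(p, x, S, y, T, theta),
                   x0=np.asarray(start), method="Nelder-Mead")
    return res.x

# R_hat then follows from the invariance property, e.g. R_hat = omega(*fit_mle(...)),
# where omega is the helper sketched after Equation (3).
```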

Asymptotic Confidence Interval

Although the expression for R in Equation (3) has an explicit form, determining its exact distribution is difficult. The asymptotic distribution of R is therefore considered in order to produce an asymptotic confidence interval (ACI) for R. Based on the asymptotic properties of MLEs under standard regularity conditions (see Casella and Berger [25]), the asymptotic distribution of ϑ = (α, λ1, λ2) can be obtained. The MLE is asymptotically normally distributed in large samples, i.e.,
\left(\hat{\vartheta}-\vartheta\right)\rightarrow N\left(0,\,I^{-1}(\hat{\vartheta})\right),   (11)
where I(ϑ̂) is the observed Fisher information matrix, which can be expressed as
I(\hat{\vartheta})=\begin{pmatrix}\Lambda_{11}&\Lambda_{12}&\Lambda_{13}\\ \Lambda_{21}&\Lambda_{22}&\Lambda_{23}\\ \Lambda_{31}&\Lambda_{32}&\Lambda_{33}\end{pmatrix}=-E\begin{pmatrix}\ell_{\alpha\alpha}&\ell_{\alpha\lambda_{1}}&\ell_{\alpha\lambda_{2}}\\ \ell_{\lambda_{1}\alpha}&\ell_{\lambda_{1}\lambda_{1}}&\ell_{\lambda_{1}\lambda_{2}}\\ \ell_{\lambda_{2}\alpha}&\ell_{\lambda_{2}\lambda_{1}}&\ell_{\lambda_{2}\lambda_{2}}\end{pmatrix}\Bigg|_{\vartheta=\hat{\vartheta}}.   (12)
It is clear that, in order to calculate an asymptotic confidence interval for R, its variance must be known. The delta method from Xu and Long [26] is employed for this purpose. The delta method is a statistical technique that derives an approximate probability distribution for a function of an asymptotically normal estimator using a Taylor series approximation. According to the delta method, the variance of R can be expressed as follows:
\sigma^{2}_{\hat{R}}=\left(\frac{\partial R}{\partial\alpha}\right)^{2}\Lambda_{11}^{-1}+\left(\frac{\partial R}{\partial\lambda_{1}}\right)^{2}\Lambda_{22}^{-1}+\left(\frac{\partial R}{\partial\lambda_{2}}\right)^{2}\Lambda_{33}^{-1}+2\,\frac{\partial R}{\partial\alpha}\frac{\partial R}{\partial\lambda_{1}}\Lambda_{12}^{-1}+2\,\frac{\partial R}{\partial\alpha}\frac{\partial R}{\partial\lambda_{2}}\Lambda_{13}^{-1}+2\,\frac{\partial R}{\partial\lambda_{1}}\frac{\partial R}{\partial\lambda_{2}}\Lambda_{23}^{-1}.   (13)
The first partial derivatives appearing in (13) are straightforward to compute. According to Slutsky's theorem, it is easy to verify that
z_{R}=\frac{\hat{R}_{ML}-R}{\sqrt{\sigma^{2}_{\hat{R}}}}\rightarrow N(0,1).   (14)
As a result, the two-sided 100(1 − δ)% approximate confidence interval (ACI) for R is given by
\left(\hat{R}_{ML}-z_{\frac{\delta}{2}}\sqrt{\sigma^{2}_{\hat{R}}},\ \hat{R}_{ML}+z_{\frac{\delta}{2}}\sqrt{\sigma^{2}_{\hat{R}}}\right).   (15)
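A minimal sketch of the delta-method interval (15) is given below, under the assumption that a function returning the negative of the log-likelihood (6) and the point estimates are already available (for instance by wrapping the hypothetical `ptiic_loglik` and `omega` helpers sketched above). Instead of the analytic second derivatives, the observed information is approximated by a central finite-difference Hessian.

```python
import numpy as np
from scipy.stats import norm

def num_hessian(f, p, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at the point p."""
    p = np.asarray(p, dtype=float)
    k = p.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4.0 * h * h)
    return H

def num_grad(f, p, h=1e-6):
    """Central finite-difference gradient of a scalar function f at the point p."""
    p = np.asarray(p, dtype=float)
    g = np.zeros(p.size)
    for i in range(p.size):
        e = np.zeros(p.size); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

def delta_method_ci(negloglik, R_func, theta_hat, delta=0.05):
    """ACI (15): R_hat +/- z_{delta/2} * sqrt(grad' I^{-1} grad), evaluated at the MLE vector theta_hat."""
    info = num_hessian(negloglik, theta_hat)   # approximate observed information matrix
    cov = np.linalg.inv(info)                  # asymptotic covariance of the MLEs
    grad = num_grad(R_func, theta_hat)         # gradient of Omega(alpha, lam1, lam2)
    var_R = float(grad @ cov @ grad)
    z = norm.ppf(1.0 - delta / 2.0)
    R_hat = R_func(theta_hat)
    return R_hat - z * np.sqrt(var_R), R_hat + z * np.sqrt(var_R)
```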

3. Parametric Bootstrap

Normal approximations perform well when the sample size is large, as mentioned in the previous section. For small sample sizes, however, the normality assumption may not hold. In this situation, confidence intervals can be approximated more accurately using resampling techniques such as the bootstrap. Because it offers a strong and trustworthy way of evaluating the accuracy of a given model, bootstrapping has grown in popularity in recent years. Bootstrapping entails resampling data in order to estimate quantities such as the mean and variance of the population more precisely, which allows researchers to evaluate a model's accuracy more reliably. The bootstrap results can also be examined for potential sources of bias and variability, and by repeatedly resampling the data researchers can identify possible areas for model improvement; this is particularly helpful when the model is used for prediction. Especially for small samples, bootstrap resampling is used to produce more precise confidence intervals. In this section, two bootstrap resampling methods are suggested to estimate confidence intervals for the SSR parameter.

3.1. Percentile Bootstrap

Efron [27] was the first to establish the bootstrap procedure (see Davison and Hinkley [28] for further information). In this part, we follow the techniques outlined in DiCiccio and Efron [29] to obtain a widely used confidence interval for R. The percentile bootstrap (BP) is described by the following algorithm:
  • Generate PTIIC samples x_{1:m1:n1}, x_{2:m1:n1}, …, x_{m1:m1:n1} and y_{1:m2:n2}, y_{2:m2:n2}, …, y_{m2:m2:n2} from UTNB(α, θ0, λ1) and UTNB(α, θ0, λ2) under the schemes (n1, m1, S1, S2, …, S_{m1}) and (n2, m2, T1, T2, …, T_{m2}), respectively, and compute the MLEs α̂, λ̂1 and λ̂2.
  • Use α̂, λ̂1 and λ̂2 to generate independent bootstrap samples x*_{1:m1:n1}, x*_{2:m1:n1}, …, x*_{m1:m1:n1} and y*_{1:m2:n2}, y*_{2:m2:n2}, …, y*_{m2:m2:n2} from UTNB(α̂, θ0, λ̂1) and UTNB(α̂, θ0, λ̂2) under the same schemes. Compute the MLEs of the unknown parameters from the bootstrap samples, denoted α̂*, λ̂1* and λ̂2*.
  • Compute the bootstrap estimate of R from (10) and denote it R̂*.
  • Repeat Steps 2 and 3 N times and order the values as R̂*(1) ≤ R̂*(2) ≤ … ≤ R̂*(N).
  • The 100(1 − δ)% BP confidence interval of R is given by
    \left(\hat{R}^{*}_{BP}\left(N\tfrac{\delta}{2}\right),\ \hat{R}^{*}_{BP}\left(N\left(1-\tfrac{\delta}{2}\right)\right)\right).
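Steps 1 and 2 require simulating progressive type-II censored UTNB samples. The sketch below combines the Balakrishnan–Sandhu [15] algorithm with the UTNB quantile function (obtained by inverting Equation (2)) and the percentile interval of Step 5. The estimator passed in as `fit_R` is an assumption of the sketch: it is expected to return R̂* for a bootstrap sample, for example by combining the MLE and Ω helpers sketched in Section 2.

```python
import numpy as np

def utnb_quantile(p, alpha, theta, lam):
    """Inverse CDF of UTNB(alpha, theta, lam), obtained by inverting Equation (2)."""
    u = alpha * (1.0 - (1.0 - alpha**lam) * np.asarray(p)) ** (-1.0 / lam)
    return theta * (u - alpha) / (1.0 - alpha)

def ptiic_sample(n, m, R, quantile, rng):
    """Balakrishnan-Sandhu [15] algorithm: ordered PTIIC sample under removal scheme R."""
    R = np.asarray(R)
    if n != m + R.sum():
        raise ValueError("scheme must satisfy n = m + sum(R)")
    W = rng.uniform(size=m)
    expo = np.arange(1, m + 1) + np.cumsum(R[::-1])   # i + R_m + ... + R_{m-i+1}
    V = W ** (1.0 / expo)
    U = 1.0 - np.cumprod(V[::-1])                     # PTIIC order statistics from U(0, 1)
    return quantile(U)

def bp_interval(fit_R, alpha_hat, lam1_hat, lam2_hat, theta0,
                scheme_x, scheme_y, N=1000, delta=0.05, seed=1):
    """Percentile bootstrap interval for R (Steps 2-5); scheme_x = (n1, m1, S), scheme_y = (n2, m2, T)."""
    rng = np.random.default_rng(seed)
    (n1, m1, S), (n2, m2, T) = scheme_x, scheme_y
    boot = []
    for _ in range(N):
        xb = ptiic_sample(n1, m1, S, lambda p: utnb_quantile(p, alpha_hat, theta0, lam1_hat), rng)
        yb = ptiic_sample(n2, m2, T, lambda p: utnb_quantile(p, alpha_hat, theta0, lam2_hat), rng)
        boot.append(fit_R(xb, S, yb, T, theta0))      # re-estimate R from the bootstrap sample
    return tuple(np.quantile(boot, [delta / 2.0, 1.0 - delta / 2.0]))
```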

3.2. Bootstrap-t

The bootstrap-t (BT) approach, as explained by Efron and Tibshirani [30], enables the determination of the confidence interval for the parameters of interest when the sample size is small. BT confidence intervals with parametric data can be produced using the next algorithm.
1–3.
Similar to the BP algorithm mentioned above.
4.
Compute the following statistic:
\Psi^{*}_{R}=\frac{\hat{R}^{*}-\hat{R}}{\sqrt{\sigma^{2}_{\hat{R}^{*}}}}.
5.
Repeat Steps 2 through 4 N times.
6.
Let G(z) = P(Ψ*_R ≤ z) denote the CDF of Ψ*_R, and define R̂_BT(z) = R̂ + √(σ²_{R̂}) G^{-1}(z). The approximate 100(1 − δ)% BT confidence interval of R is given by
\left(\hat{R}_{BT}\left(N\tfrac{\delta}{2}\right),\ \hat{R}_{BT}\left(N\left(1-\tfrac{\delta}{2}\right)\right)\right).
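Given the N bootstrap replications, the BT interval only needs the studentized values Ψ*_R. A compact sketch, with hypothetical variable names, is shown below; `sigma_boot` holds the delta-method standard error of each bootstrap replicate and `sigma_hat` that of the original sample.

```python
import numpy as np

def bt_interval(R_hat, sigma_hat, R_boot, sigma_boot, delta=0.05):
    """Bootstrap-t interval: studentize each replicate, then invert its empirical CDF G."""
    psi = (np.asarray(R_boot) - R_hat) / np.asarray(sigma_boot)      # Psi*_R for each replicate
    q_lo, q_hi = np.quantile(psi, [delta / 2.0, 1.0 - delta / 2.0])  # empirical G^{-1}
    return R_hat + sigma_hat * q_lo, R_hat + sigma_hat * q_hi
```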

4. Bayesian Estimation Using MCMC

Due to its capacity to take prior information into account, Bayesian estimation has a number of benefits over conventional maximum likelihood estimation methods. Additionally, it provides an assessment of the level of uncertainty surrounding each parameter. Because of these benefits, Bayesian estimation is becoming common in a variety of applications, including signal processing, machine learning, artificial intelligence, and therapeutic protocols. For instance, Bayesian estimation can be used to find the parameters of a linear system given a series of observations. In machine learning, it can be used to identify the most plausible hypothesis given a set of facts. Similarly, it can be applied in artificial intelligence to forecast the outcome of an event given a collection of circumstances, and in medicine to predict the outcome of a therapeutic protocol for a particular disease in light of a limited set of observed side effects.
Bayesian inference requires a proper selection of priors for the parameters. According to Arnold and Press [31], from a strictly Bayesian perspective one cannot claim that one prior is superior to all others; one must accept one's own subjective priors with all their imperfections. However, using informative priors, which are undoubtedly favored over all other options, is preferable if we have sufficient knowledge of the parameter(s). Otherwise, noninformative or vague priors may be appropriate; for further information, see Upadhyay et al. [32]. According to Kundu and Howlader [33], the family of gamma distributions is straightforward and adaptable enough to accommodate a wide range of the experimenter's prior beliefs. Consider a situation where the unknown parameters α, λ1 and λ2 have independent gamma priors, i.e., α ∼ gamma(a1, b1), λ1 ∼ gamma(a2, b2) and λ2 ∼ gamma(a3, b3), respectively. Thus, the joint prior density of α, λ1 and λ2 can be expressed as follows:
\pi(\alpha,\lambda_{1},\lambda_{2})\propto\alpha^{a_{1}-1}\lambda_{1}^{a_{2}-1}\lambda_{2}^{a_{3}-1}\,e^{-(b_{1}\alpha+b_{2}\lambda_{1}+b_{3}\lambda_{2})},\qquad a_{i},b_{i}>0,\ i=1,2,3.   (16)
The joint posterior distribution π(α, λ1, λ2 | x̲, y̲) = π(ϑ) can therefore be obtained by combining Equations (5) and (16); the resulting expression is
\pi(\vartheta)\propto\alpha^{a_{1}-1}\lambda_{1}^{m_{1}+a_{2}-1}\lambda_{2}^{m_{2}+a_{3}-1}\,e^{-b_{1}\alpha}\exp\left[-\lambda_{1}\left(b_{2}+\sum_{i=1}^{m_{1}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)\right)\right]\times\exp\left[-\lambda_{2}\left(b_{3}+\sum_{j=1}^{m_{2}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)\right)\right]\prod_{i=1}^{m_{1}}\frac{(1-\alpha)\,\alpha^{\lambda_{1}}}{\theta\left(1-\alpha^{\lambda_{1}}\right)}\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-1}\times\prod_{j=1}^{m_{2}}\frac{(1-\alpha)\,\alpha^{\lambda_{2}}}{\theta\left(1-\alpha^{\lambda_{2}}\right)}\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-1}\prod_{i=1}^{m_{1}}\left[\frac{\alpha^{\lambda_{1}}}{1-\alpha^{\lambda_{1}}}\left(\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-\lambda_{1}}-1\right)\right]^{S_{i}}\times\prod_{j=1}^{m_{2}}\left[\frac{\alpha^{\lambda_{2}}}{1-\alpha^{\lambda_{2}}}\left(\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-\lambda_{2}}-1\right)\right]^{T_{j}}\propto\pi_{1}(\alpha\mid\lambda_{1},\lambda_{2},\underline{x},\underline{y})\,\pi_{2}(\lambda_{1}\mid\alpha,\lambda_{2},\underline{x},\underline{y})\,\pi_{3}(\lambda_{2}\mid\alpha,\lambda_{1},\underline{x},\underline{y})\,h(\alpha,\lambda_{1},\lambda_{2}\mid\underline{x},\underline{y}),   (17)
where
\pi_{1}(\alpha\mid\lambda_{1},\lambda_{2},\underline{x},\underline{y})\propto\alpha^{a_{1}-1}e^{-b_{1}\alpha}\prod_{i=1}^{m_{1}}\frac{1-\alpha}{\theta}\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-(\lambda_{1}+1)}\times\prod_{j=1}^{m_{2}}\frac{1-\alpha}{\theta}\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-(\lambda_{2}+1)},
\pi_{2}(\lambda_{1}\mid\alpha,\lambda_{2},\underline{x},\underline{y})\propto\lambda_{1}^{m_{1}+a_{2}-1}\exp\left[-\lambda_{1}\left(b_{2}+\sum_{i=1}^{m_{1}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)\right)\right]\sim gamma\left(m_{1}+a_{2},\ b_{2}+\sum_{i=1}^{m_{1}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)\right),
\pi_{3}(\lambda_{2}\mid\alpha,\lambda_{1},\underline{x},\underline{y})\propto\lambda_{2}^{m_{2}+a_{3}-1}\exp\left[-\lambda_{2}\left(b_{3}+\sum_{j=1}^{m_{2}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)\right)\right]\sim gamma\left(m_{2}+a_{3},\ b_{3}+\sum_{j=1}^{m_{2}}\ln\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)\right),
and
h(\alpha,\lambda_{1},\lambda_{2}\mid\underline{x},\underline{y})\propto\prod_{i=1}^{m_{1}}\frac{\alpha^{\lambda_{1}}}{1-\alpha^{\lambda_{1}}}\prod_{j=1}^{m_{2}}\frac{\alpha^{\lambda_{2}}}{1-\alpha^{\lambda_{2}}}\times\prod_{i=1}^{m_{1}}\left[\frac{\alpha^{\lambda_{1}}}{1-\alpha^{\lambda_{1}}}\left(\left(\alpha+\frac{(1-\alpha)}{\theta}x_{i}\right)^{-\lambda_{1}}-1\right)\right]^{S_{i}}\times\prod_{j=1}^{m_{2}}\left[\frac{\alpha^{\lambda_{2}}}{1-\alpha^{\lambda_{2}}}\left(\left(\alpha+\frac{(1-\alpha)}{\theta}y_{j}\right)^{-\lambda_{2}}-1\right)\right]^{T_{j}}.
When the losses from overestimation and underestimation are equally important, symmetric loss functions are used in practice. One such function is the squared error (SE) loss, which is well known for its convenient mathematical properties and may be written as
L_{SE}(\hat{\mu},\mu)=\left(\hat{\mu}-\mu\right)^{2},
where μ̂ is an estimator of μ. Under the squared error loss function, the Bayes estimate of R, denoted by R̂_Bayes, is the posterior mean
\hat{R}_{Bayes}=\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\Omega(\alpha,\lambda_{1},\lambda_{2})\,\pi(\alpha,\lambda_{1},\lambda_{2}\mid\underline{x},\underline{y})\,d\alpha\,d\lambda_{1}\,d\lambda_{2}.
Now, we use the Markov chain Monte Carlo (MCMC) technique to derive the Bayes estimate and the corresponding credible interval of R, because the posterior distribution in Equation (17) cannot be handled analytically. A sequence of samples is produced by an importance sampling procedure based on the full conditional distributions; see Chen and Shao [34]. The importance sampling method is effective when the full conditional distributions are easy to sample from, and it can generate random samples from any target distribution, of any dimension, that is known up to a normalizing constant. Because the posterior conditional distribution of α does not have a well-known form, the Metropolis–Hastings (M–H) algorithm is used to generate random numbers from this distribution; see Hastings [35]. In this case, the proposal density is based on the normal distribution. The sample generation procedure for the MCMC technique therefore includes the following steps:
  • Start with an initial guess (α^(0), λ1^(0), λ2^(0)) and set k = 1.
  • Generate λ1^(k) from gamma(m1 + a2, b2 + Σ_{i=1}^{m1} ln(α^(k−1) + (1 − α^(k−1))x_i/θ)).
  • Generate λ2^(k) from gamma(m2 + a3, b3 + Σ_{j=1}^{m2} ln(α^(k−1) + (1 − α^(k−1))y_j/θ)).
  • Using the M–H algorithm, generate α^(k) from π1(α | λ1, λ2, x̲, y̲) with the proposal distribution N(α^(k−1), Λ11), where Λ11 is the variance of α̂.
  • Compute R^(k) = Ω(α^(k), λ1^(k), λ2^(k)).
  • Set k = k + 1.
  • Repeat Steps 2–6 M times.
  • The Bayesian estimate of R can be obtained using
    \hat{R}_{Bayes}=\frac{\frac{1}{M-M_{0}}\sum_{j=M_{0}+1}^{M}R^{(j)}\,h\left(\alpha^{(j)},\lambda_{1}^{(j)},\lambda_{2}^{(j)}\mid\underline{x},\underline{y}\right)}{\frac{1}{M-M_{0}}\sum_{j=M_{0}+1}^{M}h\left(\alpha^{(j)},\lambda_{1}^{(j)},\lambda_{2}^{(j)}\mid\underline{x},\underline{y}\right)},
    where M0 is the burn-in period.
The HPD credible intervals (CRIs) for R are obtained by sorting the R^(j), j = M0 + 1, M0 + 2, …, M, in ascending order as R^(1) < R^(2) < … < R^(M−M0). The 100(1 − δ)% symmetric CRI of R is then
\left(\hat{R}_{Bayes}^{\left((M-M_{0})\frac{\delta}{2}\right)},\ \hat{R}_{Bayes}^{\left((M-M_{0})\left(1-\frac{\delta}{2}\right)\right)}\right).
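The mechanics of Steps 1–7 can be sketched in Python as follows. This is a simplified Metropolis-within-Gibbs illustration, not the authors' implementation, and it rests on two labeled simplifications: (i) it restricts α to (1, ∞), the region where the gamma rate parameters in Steps 2–3 are guaranteed to be positive and all factors of the decomposition keep a constant sign (handling α < 1 requires extra bookkeeping); (ii) it averages the retained R^(k) draws directly and reports an equal-tailed interval, omitting the reweighting by the function h in the final step, through which the censoring-scheme terms would enter. It is therefore not expected to reproduce the paper's numbers.

```python
import numpy as np

def omega(a, l1, l2):
    # Equation (3)
    return (l2 + l1 * a**(l1 + l2) - (l1 + l2) * a**l1) / ((l1 + l2) * (a**l1 - 1.0) * (a**l2 - 1.0))

def log_pi1(a, l1, l2, x, y, theta, a1, b1):
    """Log of pi_1(alpha | lam1, lam2, data) up to a constant, written for alpha > 1."""
    u = a + (1.0 - a) * x / theta
    v = a + (1.0 - a) * y / theta
    return ((a1 - 1.0) * np.log(a) - b1 * a
            + (x.size + y.size) * np.log((a - 1.0) / theta)   # |1 - alpha| = alpha - 1 here
            - (l1 + 1.0) * np.sum(np.log(u)) - (l2 + 1.0) * np.sum(np.log(v)))

def mcmc_R(x, y, theta, hyper, M=12000, burn=2000, prop_sd=0.1, seed=7):
    """Simplified sampler for R; hyper = (a1, b1, a2, b2, a3, b3)."""
    a1, b1, a2, b2, a3, b3 = hyper
    x, y = np.asarray(x, float), np.asarray(y, float)
    m1, m2 = x.size, y.size
    rng = np.random.default_rng(seed)
    alpha = 1.5                       # initial value inside the assumed alpha > 1 region
    draws = []
    for _ in range(M):
        # Steps 2-3: gamma conditionals (ln(u), ln(v) > 0 for alpha > 1, so the rates are positive)
        lam1 = rng.gamma(m1 + a2, 1.0 / (b2 + np.sum(np.log(alpha + (1.0 - alpha) * x / theta))))
        lam2 = rng.gamma(m2 + a3, 1.0 / (b3 + np.sum(np.log(alpha + (1.0 - alpha) * y / theta))))
        # Step 4: Metropolis-Hastings move for alpha with a normal random-walk proposal
        cand = rng.normal(alpha, prop_sd)
        if cand > 1.0:
            log_ratio = (log_pi1(cand, lam1, lam2, x, y, theta, a1, b1)
                         - log_pi1(alpha, lam1, lam2, x, y, theta, a1, b1))
            if np.log(rng.uniform()) < log_ratio:
                alpha = cand
        draws.append(omega(alpha, lam1, lam2))    # Step 5
    draws = np.asarray(draws[burn:])
    R_hat = draws.mean()                          # unweighted average of the retained draws
    lo, hi = np.quantile(draws, [0.025, 0.975])   # equal-tailed 95% credible interval
    return R_hat, (lo, hi)
```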

5. Numerical Explorations

Because it is impossible to compare the performance of the various estimation methods theoretically, in this section we conduct a Monte Carlo simulation study to do so. Employing an informative gamma prior, we compare the mean squared errors (MSEs) of the ML, BP, BT, and Bayes estimates under the SE loss function. Additionally, we contrast several confidence intervals, namely the asymptotic interval, the two varieties of bootstrap confidence intervals, and the HPD credible interval based on the informative prior, in terms of their average widths (AWs) and coverage probabilities (CPs). Our attention focuses on three sample sizes, n1 = n2 ∈ {30, 50, 100}, with effective sample sizes m1 = m2 ∈ {15, 30, 55, 70}, giving the combinations (n, m) = (30, 15), (50, 30), (100, 55), (100, 70) used in Tables 1–4, for two sets of true parameter values, (α, λ1, λ2) = (3, 1.5, 5) and (0.022, 1, 4.5), with corresponding actual values R = 0.282586 and 0.814092, where the common scale parameter is θ0 = 202. The censoring schemes R = (S_i, T_i), i = 1, …, m, are produced according to the respective choices of (n, m) = (n1, m1), (n2, m2). To locate the removals, we use three systematic censoring schemes that produce, respectively, fast failure (left censoring), moderate failure (the usual progressive type-II pattern), and late failure (conventional type-II censoring), as follows: Scheme I: R1 = n − m and R_i = 0 for i ≠ 1; Scheme II: R1 = R2 = ⋯ = R_{n−m} = 1 and R_{n−m+1} = ⋯ = R_m = 0; Scheme III: R_m = n − m and R_i = 0 for i ≠ m (a short code sketch translating these schemes into removal vectors follows the list below). The NMaximize command of the Mathematica 13 package is used to solve the non-linear equations and obtain the MLEs of the parameters; the invariance property of the MLE then yields R̂_ML. The study involves 1000 replications, with 1000 bootstrap (BP and BT) samples used in each replication. In the Bayesian framework, Bayes estimates (BEs) and the corresponding highest posterior density CRIs are computed from 12,000 MCMC samples, with the first 2000 values discarded as "burn-in". Additionally, we consider informative gamma priors with the hyperparameter values a1 = 2, b1 = 1, a2 = 1.5, b2 = 1, a3 = 5, b3 = 3.5. The informative priors' parameters are selected so that their mean equals the actual parameter values. The results of the simulation study are shown in Table 1, Table 2, Table 3 and Table 4, from which the following conclusions can be drawn:
  • It is clear that the MSEs and AWs decrease with increasing sample size n 1 , n 2 and effective sample size m 1 , m 2 for both Bayesian and non-Bayesian (ML, BP, and BT) estimation methods. This verifies the consistency characteristics of every estimation technique.
  • Because the related MSEs are relatively small, all point estimates are generally fully accurate. With rising n 1 , n 2 and m 1 , m 2 , MSEs tend to zero out.
  • The MSEs and AWs are dropping in tandem with a rise in the real value of R.
  • The outcomes of the simulation show that the Bayes estimates outperform the other estimates. The Bayes estimates have smaller MSEs than any other estimates (see Table 1 and Table 2).
  • The AWs and CPs for all confidence intervals (see Table 3 and Table 4) show that the Bayes credible intervals offer smaller widths and a higher coverage probability than other methods. Therefore, we recommend using the Bayesian technique for interval estimations.
  • For fixed sample sizes and numbers of observed failures, the first scheme (I, I) performs the best in terms of reduced MSEs and AWs.
  • With schemes (I,II), (I,III), and (II,III), neither MSEs nor AWs exhibit regular behavior (increasing or decreasing).
  • When removals are postponed, MSEs and AWs both rise.
  • In terms of MSEs and AWs, the bootstrap approaches outperform the ML method for estimating R. Additionally, BT outperforms BP in terms of MSEs and AWs.
  • In addition to having ACIs with high CPs (about 0.95), the estimates generated by the ML, bootstrap, and Bayesian techniques are quite similar.
  • The simulation findings demonstrate that all point and interval estimation approaches are effective, even though the Bayes estimators outperform all the other estimators. If sufficient prior knowledge is available, one may opt for the Bayes approach; otherwise, the bootstrap approaches, which rely largely on the MLEs, are preferable.
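As mentioned before the list, the three censoring schemes translate into removal vectors as in the following sketch (our own helper; Scheme II assumes n − m ≤ m, which holds for all configurations used here):

```python
import numpy as np

def censoring_scheme(n, m, kind):
    """Removal vectors for Schemes I-III of Section 5 (n units on test, m observed failures)."""
    R = np.zeros(m, dtype=int)
    if kind == "I":            # all n - m removals at the first failure (fast/left censoring)
        R[0] = n - m
    elif kind == "II":         # one removal at each of the first n - m failures
        R[: n - m] = 1
    elif kind == "III":        # all removals at the last failure (conventional type-II censoring)
        R[-1] = n - m
    else:
        raise ValueError("kind must be 'I', 'II' or 'III'")
    return R

# Example: the (100, 55) configuration under Scheme II
print(censoring_scheme(100, 55, "II"))
```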

6. Application to Jute Fiber

To illustrate the significance of the theoretical results discussed in the preceding sections, this section details an application to jute fibre data. The analysis of this real-world data set lends support to the suggested point and interval estimates of the SSR parameter R. We consider the breaking strengths of jute fibre at two different gauge lengths, reported by Xia et al. [36]; Saracoglu et al. [18] also analyzed these data. The data sets are as follows. Breaking strength of jute fibre of gauge length 10 mm (strength, X): 693.73, 704.66, 323.83, 778.17, 123.06, 637.66, 383.43, 151.48, 108.94, 50.16, 671.49, 183.16, 257.44, 727.23, 291.27, 101.15, 376.42, 163.40, 141.38, 700.74, 262.90, 353.24, 422.11, 43.93, 590.48, 212.13, 303.90, 506.60, 530.55, 177.25. Breaking strength of jute fibre of gauge length 20 mm (stress, Y): 71.46, 419.02, 284.64, 585.57, 456.60, 113.85, 187.85, 688.16, 662.66, 45.58, 578.62, 756.70, 594.29, 166.49, 99.72, 707.36, 765.14, 187.13, 145.96, 350.70, 547.44, 116.99, 375.81, 581.60, 119.86, 48.01, 200.16, 36.75, 244.53, 83.55. Figure 1 and Figure 2 illustrate the nonparametric kernel density (KD) estimates used to examine the initial density shape; the estimated densities are asymmetric, unimodal for data set I and bimodal for data set II. The quantile–quantile (Q–Q) and TTT plots in Figure 1 and Figure 2 are used to assess normality and the shape of the hazard rate, while the box and violin plots reveal the extremes and show that several extreme observations are present. For the goodness-of-fit test, the Kolmogorov–Smirnov (K–S) distance between the fitted distribution function and the empirical distribution function was computed: it is 0.095255 with a p-value of 0.9244 for set I, and 0.12041 with a p-value of 0.7325 for set II. Since both p-values are large, it is safe to say that the UTNBD fits the two real data sets well. The empirical plots in Figure 3 and Figure 4 confirm that the UTNBD clearly fits the data.
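The goodness-of-fit check can be reproduced along the following lines. This sketch fits the UTNBD to each complete sample by maximum likelihood with the scale held fixed and then calls a standard K–S test. The choice of scale is an assumption of the sketch (the paper does not state the value used for the complete-data fit; here θ is simply set just above the sample maximum), so the resulting statistics need not match the quoted 0.095255 and 0.12041 exactly.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

set1 = np.array([693.73, 704.66, 323.83, 778.17, 123.06, 637.66, 383.43, 151.48,
                 108.94, 50.16, 671.49, 183.16, 257.44, 727.23, 291.27, 101.15,
                 376.42, 163.40, 141.38, 700.74, 262.90, 353.24, 422.11, 43.93,
                 590.48, 212.13, 303.90, 506.60, 530.55, 177.25])   # gauge length 10 mm (X)
set2 = np.array([71.46, 419.02, 284.64, 585.57, 456.60, 113.85, 187.85, 688.16,
                 662.66, 45.58, 578.62, 756.70, 594.29, 166.49, 99.72, 707.36,
                 765.14, 187.13, 145.96, 350.70, 547.44, 116.99, 375.81, 581.60,
                 119.86, 48.01, 200.16, 36.75, 244.53, 83.55])      # gauge length 20 mm (Y)

def utnb_cdf(t, alpha, theta, lam):
    u = alpha + (1.0 - alpha) * t / theta
    return (1.0 - alpha**lam * u**(-lam)) / (1.0 - alpha**lam)      # Equation (2)

def fit_complete(data, theta):
    """Complete-sample MLE of (alpha, lambda) with the scale theta held fixed."""
    def nll(p):
        a, lam = p
        if a <= 0 or abs(a - 1.0) < 1e-8 or lam <= 0:
            return np.inf
        u = a + (1.0 - a) * data / theta
        return -np.sum(np.log(lam) + lam * np.log(a) + np.log((1.0 - a) / (1.0 - a**lam))
                       - np.log(theta) - (lam + 1.0) * np.log(u))
    return minimize(nll, x0=np.array([0.5, 1.0]), method="Nelder-Mead").x

for name, data in [("set I", set1), ("set II", set2)]:
    theta = data.max() + 1.0            # placeholder choice for the known scale
    a_hat, lam_hat = fit_complete(data, theta)
    print(name, kstest(data, lambda t: utnb_cdf(t, a_hat, theta, lam_hat)))
```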
By using the censoring scheme S = (2, 0, 2, 0, 1, 0, 3, 0, 2, 0, 0, 1, 0, 1, 0, 0, 0, 0), we generate a progressively type-II censored (PTIIC) sample of size m1 = 18 from set I with n1 = 30. The obtained PTIIC sample is 43.93, 50.16, 101.15, 108.94, 141.38, 151.48, 163.4, 177.25, 183.16, 212.13, 257.44, 353.24, 376.42, 383.43, 422.11, 590.48, 700.74, and 727.23. For set II, suppose that the censoring scheme is given by T = (1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 0, 0, 0, 0); then a PTIIC sample of size m2 = 15 out of n2 = 30 items is obtained as 36.75, 45.58, 48.01, 71.46, 83.55, 145.96, 166.49, 187.13, 187.85, 200.16, 578.62, 581.6, 585.57, 662.66, and 756.7.
The MLE of R and its associated ACI, computed from the PTIIC samples of the strength variable X and the stress variable Y, are R̂_ML = 0.477182 and (0.37526, 0.58538), with interval width 0.21012. Additionally, using the iterative techniques discussed in Section 3, the BP and BT estimates are 0.473537 and 0.445765, and their associated confidence intervals are (0.346753, 0.552464) and (0.357472, 0.519763), with widths of 0.205711 and 0.162291, respectively. To examine the uniqueness of the estimates, we generated log-likelihood profiles and observed that all of them exhibit unimodal shapes; see Figure 5 for an illustration of the model estimators.
It is now necessary to specify the prior distributions of the parameters α, λ1 and λ2 in order to obtain the Bayesian estimate of R. Since we lack prior knowledge, we assume noninformative gamma priors for α, λ1 and λ2, i.e., hyperparameters a_i = 0.0001 and b_i = 0.0001, i = 1, 2, 3. To run the MCMC procedure outlined in Section 4, the initial values of the parameters α, λ1 and λ2 were taken to be their MLEs, with θ0 = 765.14. Furthermore, 12,000 MCMC samples were produced, and the first 2000 samples were removed as "burn-in" to eliminate the effect of the initial values. As a result, the Bayesian estimate of R and its corresponding CRI are R̂_MC = 0.46876 and (0.399967, 0.561374), with interval width 0.161407. Figure 6 shows the 12,000 chain values of R, and the kernel density estimate and histogram of R are displayed in Figure 7.

7. Summary Findings

In this article, we explored the estimation of the stress–strength reliability (SSR) parameter R = P(Y < X) under progressive type-II censoring (PTIIC) for two independent random variables: strength X and stress Y. Both X and Y followed a uniform truncated negative binomial distribution with identical first shape and scale parameters. We employed various estimation methods in this study. First, we derived maximum likelihood estimators (MLEs) and asymptotic confidence intervals for the SSR parameter R using the observed Fisher information matrix. Additionally, we proposed two parametric bootstrap methods for constructing confidence intervals, with the finding that one of them, referred to as BT, remained highly effective even with small effective sample sizes. Furthermore, we explored the Bayesian estimation of R under the squared error loss function, utilizing independent gamma priors. Because the Bayes estimators involved ratios of integrals that could not be solved analytically, we employed importance sampling techniques coupled with the Metropolis–Hastings algorithm to compute Bayes estimates along with credible intervals. To assess the performance of these estimation methods, we conducted a comprehensive simulation study that considered various sample sizes (n_i, m_i), i = 1, 2, censoring schemes (I, II, and III), and combinations of the unknown parameters α, λ1, λ2. This empirical comparison was necessary because a theoretical comparison was not feasible. The simulation results led us to two key conclusions. First, in cases where PTIIC data from multiple uniform truncated negative binomial distributions are available, the Bayesian technique can be effectively employed to estimate the SSR parameter R and generate approximate confidence intervals. Second, the importance sampling approach consistently outperformed the maximum likelihood and bootstrap methods, demonstrating commendable performance. Overall, this study underscores the utility of the uniform truncated negative binomial distribution in accurately modeling real-world data, particularly in medical and engineering applications.

Author Contributions

Conceptualization, R.M.E.-S. and M.S.E.; methodology, R.M.E.-S., M.E.-M., and A.A.-E.-M.; software, R.M.E.-S., M.S.E. and A.A.-E.-M.; validation, A.A.-B. and L.A.A.-E.; formal analysis, M.S.E., M.E.-M. and A.A.-E.-M.; resources, M.E.-M. and A.A.-B.; data curation, M.E.-M. and L.A.A.-E.; Writing—original draft, A.A.-E.-M.; Writing—review and editing, R.M.E.-S. and M.S.E. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project and Prince Sattam bin Abdulaziz Universities under project numbers (PNURSP2023R443) and (PSAU/2023/R/1444), respectively.

Data Availability Statement

The data sets are available in the paper.

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R443), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This study is supported via funding from Prince Sattam bin Abdulaziz University, project number (PSAU/2023/R/1444).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kotz, S.; Lumelskii, Y.; Pensky, M. The Stress-Strength Model and Its Generalization: Theory and Applications; World Scientific: Singapore, 2003. [Google Scholar]
  2. Church, J.D.; Harris, B. The estimation of the reliability from stress–strength relationships. Technometrics 1970, 12, 49–54. [Google Scholar] [CrossRef]
  3. Surles, J.G.; Padgett, W.J. Inference for P(Y < X) in the Burr Type X model. J. Appl. Stat. Sci. 1998, 7, 225–238. [Google Scholar]
  4. Surles, J.G.; Padgett, W.J. Inference for reliability and stress–strength for a scaled Burr-Type X distribution. Lifetime Data Anal. 2001, 7, 187–200. [Google Scholar] [CrossRef]
  5. Kundu, D.; Gupta, R.D. Estimation of P(Y < X) for generalized exponential distribution. Metrika 2005, 61, 291–308. [Google Scholar]
  6. Raqab, M.Z.; Kundu, D. Comparison of different estimators of P(Y < X) for a scaled Burr type X distribution. Commun. Stat. Simul. Comput. 2005, 34, 465–483. [Google Scholar]
  7. Kundu, D.; Gupta, R.D. Estimation of P(Y < X) for Weibull Distribution, IEEE Trans. Reliab. 2006, 55, 270–280. [Google Scholar]
  8. Nadar, M.; Kizilaslan, F.; Papadopoulos, A. Classical and Bayesian estimation of P(Y < X) for Kumaraswamy distribution. Stat. Comput. Simul. 2014, 84, 1505–1529. [Google Scholar]
  9. Sharma, V.K.; Singh, S.K.; Singh, U.; Agiwal, V. The inverse Lindley distribution: A stress-strength reliability model with application to head and neck cancer data. J. Ind. Prod. 2015, 32, 162–173. [Google Scholar] [CrossRef]
  10. Ahmed, A.; Batah, F. On the estimation of stress-strength model reliability parameter of power rayleigh distribution. Iraqi J. Sci. 2023, 64, 809–822. [Google Scholar] [CrossRef]
  11. Baklizi, A. Estimation of P(Y < X) using record values in the one and two parameter exponential distributions. Commun. Stat. Theory Methods 2008, 37, 692–698. [Google Scholar]
  12. Nadar, M.; Kizilaslan, F. Classical and Bayesian estimation of P[Y < X] using upper record values from Kumaraswamy distribution. Stat. Pap. 2014, 55, 751–783. [Google Scholar]
  13. Mahmoud, A.W.M.; EL-Sagheer, R.M.; Soliman, A.A.; Abd Ellah, A.H. Bayesian estimation of P[Y < X] based on record values from the Lomax distribution and MCMC technique. J. Mod. Appl. Stat. 2016, 15, 488–510. [Google Scholar]
  14. Hasan, A.S.; Abd-Allah, M.; Nagy, H.F. Estimation of P(Y < X) using record values from the generalized inverted exponential distribution. Pak. J. Stat. Oper. Res. 2018, XIV, 645–660. [Google Scholar]
  15. Balakrishnan, N.; Sandhu, R.A. A simple simulation algorithm for generating progressive type-II censored samples. Am. Stat. 1995, 49, 229–230. [Google Scholar]
  16. Yu, Y.; Wang, L.; Dey, S.; Liu, J. Estimation of stress-strength reliability from unit-Burr III distribution under records data. Math. Biosci. Eng. 2023, 20, 12360–12379. [Google Scholar] [CrossRef]
  17. Koul, S.; Chaturvedi, A. Estimation and testing procedures for the reliability functions of one parameter generalized exponential distribution. Thail. Stat. 2023, 21, 268–290. [Google Scholar]
  18. Saracoglu, B.; Kinaci, I.; Kundu, D. On estimation of R = P(Y < X) for exponential distribution under progressive type II censoring. Stat. Comput. Simul. 2012, 82, 729–744. [Google Scholar]
  19. Valiollahi, R.; Asgharzadeh, A.; Raqab, M.Z. Estimation of P(Y < X) for Weibull distribution under progressive type II censoring. Commun. Stat. Theory Methods 2013, 42, 4476–4498. [Google Scholar]
  20. Kumar, K.; Krishna, H.; Garg, R. Estimation of P(Y < X) in Lindley distribution using progressively first failure censoring. Int. J. Syst. Assur. Eng. Manag. 2015, 6, 330–341. [Google Scholar]
  21. Krishna, H.; Dube, M.; Garg, R. Estimation of P(Y < X) for progressively first failure censored generalized inverted exponential distribution. Stat. Comput. Simul. 2017, 87, 2274–2289. [Google Scholar]
  22. EL-Sagheer, R.M.; Mansour, M.M.M. The efficacy measurement of treatment methods: An application to stress-strength model. Appl. Math. Inf. Sci. 2020, 14, 487–492. [Google Scholar]
  23. Kamel, B.I.; Abo Youssef, S.E.; Sief, M.G. The Uniform truncated negative binomial distribution and its properties. J. Math. Stat. 2016, 12, 290–301. [Google Scholar] [CrossRef]
  24. EL-Sagheer, R.M. Estimation of parameters of Weibull-Gamma distribution based on progressively censored data. Stat. Pap. 2018, 59, 725–757. [Google Scholar] [CrossRef]
  25. Casella, G.; Berger, R.L. Statistical Inference, 2nd ed.; Duxbury Press: Pacific Grove, CA, USA, 2002. [Google Scholar]
  26. Xu, J.; Long, J.S. Using the delta method to construct confidence intervals for predicted probabilities, rates, and discrete changes. Stata J. 2005, 5, 537–559. [Google Scholar] [CrossRef]
  27. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar] [CrossRef]
  28. Davison, A.C.; Hinkley, D.V. Bootstrap Methods and Their Application; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  29. DiCiccio, T.J.; Efron, B. Bootstrap confidence intervals. Stat. Sci. 1996, 11, 189–212. [Google Scholar] [CrossRef]
  30. Efron, B.; Tibshirani, R. An Introduction to the Bootstrap; Chapman & Hall: New York, NY, USA, 1994. [Google Scholar]
  31. Arnold, B.C.; Press, S.J. Bayesian inference for Pareto populations. J. Econom. 1983, 21, 287–306. [Google Scholar] [CrossRef]
  32. Upadhyay, S.K.; Vasistha, N.; Smith, A.F.M. Bayes inference in life testing and reliability via Markov chain Monte Carlo simulation. Sankhya A 2001, 63, 15–40. [Google Scholar]
  33. Kundu, D.; Howlader, H. Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Comput. Stat. Data Anal. 2010, 54, 1547–1558. [Google Scholar] [CrossRef]
  34. Chen, M.-H.; Shao, Q.-M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar]
  35. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  36. Xia, Z.P.; Yu, J.Y.; Cheng, L.D.; Liu, L.F.; Wang, W.M. Study on the breaking strength of jute fibres using modified Weibull distribution. Compos. Part Appl. Sci. Manuf. 2009, 40, 54–59. [Google Scholar] [CrossRef]
Figure 1. The KD, box, TTT, QQ, and violin plots for data set I.
Figure 2. The KD, box, TTT, QQ, and violin plots for set II.
Figure 3. Empirical, PP, and SF plots for set I.
Figure 4. Empirical, PP, and SF plots for set II.
Figure 5. The log-likelihood profiles.
Figure 6. MCMC trace plot of R.
Figure 7. Histogram of R.
Table 1. MSEs for R when the true value of R = 0.282586.
(n1, m1), (n2, m2) | (Si, Ti) | ML | BP | BT | Bayes
(30, 15), (30, 15) | (I, I) | 0.00854 | 0.00815 | 0.00795 | 0.00772
 | (II, II) | 0.00923 | 0.00964 | 0.00836 | 0.00795
 | (III, III) | 0.00976 | 0.00996 | 0.00943 | 0.00837
 | (I, II) | 0.00867 | 0.00835 | 0.00815 | 0.00784
 | (I, III) | 0.00884 | 0.00854 | 0.00834 | 0.00805
 | (II, I) | 0.00859 | 0.00847 | 0.00826 | 0.00799
 | (II, III) | 0.00896 | 0.00879 | 0.00844 | 0.00816
 | (III, I) | 0.00887 | 0.00879 | 0.00857 | 0.00825
 | (III, II) | 0.00896 | 0.00878 | 0.00849 | 0.00820
(50, 30), (50, 30) | (I, I) | 0.00747 | 0.00715 | 0.00688 | 0.00657
 | (II, II) | 0.00796 | 0.00778 | 0.00745 | 0.00698
 | (III, III) | 0.00839 | 0.00805 | 0.00776 | 0.00729
 | (I, II) | 0.00765 | 0.00748 | 0.00724 | 0.00678
 | (I, III) | 0.00779 | 0.00766 | 0.00741 | 0.00696
 | (II, I) | 0.00766 | 0.00758 | 0.00739 | 0.00711
 | (II, III) | 0.00822 | 0.00805 | 0.00773 | 0.00732
 | (III, I) | 0.00785 | 0.00774 | 0.00756 | 0.00718
 | (III, II) | 0.00819 | 0.00798 | 0.00778 | 0.00745
(100, 55), (100, 55) | (I, I) | 0.00667 | 0.00655 | 0.00612 | 0.00589
 | (II, II) | 0.00697 | 0.00673 | 0.00639 | 0.00605
 | (III, III) | 0.00733 | 0.00707 | 0.00669 | 0.00634
 | (I, II) | 0.00678 | 0.00667 | 0.00647 | 0.00618
 | (I, III) | 0.00715 | 0.00699 | 0.00656 | 0.00627
 | (II, I) | 0.00680 | 0.00677 | 0.00654 | 0.00639
 | (II, III) | 0.00704 | 0.00698 | 0.00687 | 0.00644
 | (III, I) | 0.00717 | 0.00692 | 0.00666 | 0.00632
 | (III, II) | 0.00724 | 0.00708 | 0.00687 | 0.00644
(100, 70), (100, 70) | (I, I) | 0.00557 | 0.00539 | 0.00515 | 0.00489
 | (II, II) | 0.00596 | 0.00578 | 0.00559 | 0.00526
 | (III, III) | 0.00622 | 0.00606 | 0.00586 | 0.00553
 | (I, II) | 0.00568 | 0.00547 | 0.00524 | 0.00501
 | (I, III) | 0.00576 | 0.00568 | 0.00543 | 0.00512
 | (II, I) | 0.00603 | 0.00589 | 0.00564 | 0.00519
 | (II, III) | 0.00619 | 0.00599 | 0.00587 | 0.00537
 | (III, I) | 0.00586 | 0.00578 | 0.00553 | 0.00522
 | (III, II) | 0.00618 | 0.00597 | 0.00577 | 0.00535
Table 2. MSEs for R when the true value of R = 0.814092.
(n1, m1), (n2, m2) | (Si, Ti) | ML | BP | BT | Bayes
(30, 15), (30, 15) | (I, I) | 0.00469 | 0.00427 | 0.00399 | 0.00368
 | (II, II) | 0.00496 | 0.00458 | 0.00427 | 0.00395
 | (III, III) | 0.00523 | 0.00496 | 0.00468 | 0.00437
 | (I, II) | 0.00475 | 0.00446 | 0.00415 | 0.00379
 | (I, III) | 0.00485 | 0.00469 | 0.00438 | 0.00415
 | (II, I) | 0.00479 | 0.00456 | 0.00425 | 0.00389
 | (II, III) | 0.00499 | 0.00478 | 0.00465 | 0.00428
 | (III, I) | 0.00487 | 0.00468 | 0.00434 | 0.00417
 | (III, II) | 0.00498 | 0.00488 | 0.00475 | 0.00438
(50, 30), (50, 30) | (I, I) | 0.00364 | 0.00325 | 0.00296 | 0.00258
 | (II, II) | 0.00398 | 0.00368 | 0.00335 | 0.00287
 | (III, III) | 0.00425 | 0.00398 | 0.00378 | 0.00329
 | (I, II) | 0.00378 | 0.00336 | 0.00315 | 0.00268
 | (I, III) | 0.00387 | 0.00344 | 0.00326 | 0.00278
 | (II, I) | 0.00369 | 0.00347 | 0.00325 | 0.00285
 | (II, III) | 0.00396 | 0.00356 | 0.00338 | 0.00301
 | (III, I) | 0.00385 | 0.00345 | 0.00327 | 0.00279
 | (III, II) | 0.00399 | 0.00357 | 0.00348 | 0.00311
(100, 55), (100, 55) | (I, I) | 0.00295 | 0.00278 | 0.00246 | 0.00199
 | (II, II) | 0.00325 | 0.00318 | 0.00289 | 0.00235
 | (III, III) | 0.00356 | 0.00338 | 0.00318 | 0.00274
 | (I, II) | 0.00301 | 0.00289 | 0.00257 | 0.00208
 | (I, III) | 0.00325 | 0.00297 | 0.00271 | 0.00229
 | (II, I) | 0.00304 | 0.00288 | 0.00267 | 0.00218
 | (II, III) | 0.00335 | 0.00328 | 0.00299 | 0.00245
 | (III, I) | 0.00326 | 0.00298 | 0.00275 | 0.00231
 | (III, II) | 0.00338 | 0.00329 | 0.00298 | 0.00264
(100, 70), (100, 70) | (I, I) | 0.00235 | 0.00215 | 0.00196 | 0.00152
 | (II, II) | 0.00258 | 0.00236 | 0.00214 | 0.00178
 | (III, III) | 0.00293 | 0.00279 | 0.00258 | 0.00213
 | (I, II) | 0.00245 | 0.00225 | 0.00206 | 0.00162
 | (I, III) | 0.00255 | 0.00234 | 0.00211 | 0.00173
 | (II, I) | 0.00243 | 0.00222 | 0.00204 | 0.00165
 | (II, III) | 0.00283 | 0.00269 | 0.00248 | 0.00203
 | (III, I) | 0.00257 | 0.00238 | 0.00221 | 0.00183
 | (III, II) | 0.00287 | 0.00257 | 0.00251 | 0.00212
Table 3. AWs and CPs for R when the true value of R = 0.282586.
(n1, m1), (n2, m2) | (Si, Ti) | ML AWs | ML CPs | BP AWs | BP CPs | BT AWs | BT CPs | Bayes AWs | Bayes CPs
(30, 15), (30, 15) | (I, I) | 0.5278 | 0.925 | 0.4934 | 0.929 | 0.4256 | 0.941 | 0.3745 | 0.947
 | (II, II) | 0.5568 | 0.924 | 0.5179 | 0.927 | 0.4568 | 0.942 | 0.3974 | 0.951
 | (III, III) | 0.6124 | 0.919 | 0.5534 | 0.937 | 0.4967 | 0.941 | 0.4378 | 0.954
 | (I, II) | 0.5378 | 0.918 | 0.5034 | 0.934 | 0.4356 | 0.939 | 0.3846 | 0.961
 | (I, III) | 0.5667 | 0.915 | 0.5278 | 0.941 | 0.4669 | 0.938 | 0.4173 | 0.962
 | (II, I) | 0.5397 | 0.920 | 0.5045 | 0.926 | 0.4358 | 0.937 | 0.3844 | 0.957
 | (II, III) | 0.6025 | 0.925 | 0.5336 | 0.927 | 0.4868 | 0.941 | 0.4279 | 0.949
 | (III, I) | 0.5699 | 0.927 | 0.5258 | 0.929 | 0.4643 | 0.951 | 0.4167 | 0.948
 | (III, II) | 0.6126 | 0.924 | 0.5437 | 0.923 | 0.49687 | 0.942 | 0.4478 | 0.955
(50, 30), (50, 30) | (I, I) | 0.4465 | 0.931 | 0.4175 | 0.941 | 0.3987 | 0.938 | 0.3345 | 0.960
 | (II, II) | 0.4763 | 0.934 | 0.4457 | 0.939 | 0.4365 | 0.954 | 0.3647 | 0.962
 | (III, III) | 0.5136 | 0.929 | 0.4768 | 0.938 | 0.4567 | 0.947 | 0.3899 | 0.957
 | (I, II) | 0.4565 | 0.927 | 0.4275 | 0.937 | 0.4078 | 0.938 | 0.3448 | 0.958
 | (I, III) | 0.4863 | 0.931 | 0.4557 | 0.937 | 0.4465 | 0.937 | 0.3748 | 0.956
 | (II, I) | 0.4567 | 0.940 | 0.4279 | 0.927 | 0.4179 | 0.941 | 0.3547 | 0.952
 | (II, III) | 0.5037 | 0.928 | 0.4667 | 0.929 | 0.4468 | 0.947 | 0.3798 | 0.960
 | (III, I) | 0.4865 | 0.923 | 0.4558 | 0.926 | 0.4564 | 0.938 | 0.3847 | 0.962
 | (III, II) | 0.5136 | 0.919 | 0.4765 | 0.925 | 0.4668 | 0.937 | 0.3997 | 0.957
(100, 55), (100, 55) | (I, I) | 0.3547 | 0.941 | 0.3285 | 0.951 | 0.2997 | 0.951 | 0.2658 | 0.958
 | (II, II) | 0.3745 | 0.939 | 0.3489 | 0.949 | 0.3258 | 0.952 | 0.2974 | 0.956
 | (III, III) | 0.3997 | 0.938 | 0.3658 | 0.954 | 0.3457 | 0.949 | 0.3178 | 0.952
 | (I, II) | 0.3647 | 0.938 | 0.3385 | 0.948 | 0.3199 | 0.939 | 0.2759 | 0.960
 | (I, III) | 0.3846 | 0.937 | 0.3587 | 0.949 | 0.3359 | 0.937 | 0.3075 | 0.962
 | (II, I) | 0.3648 | 0.927 | 0.3389 | 0.939 | 0.3187 | 0.936 | 0.2768 | 0.957
 | (II, III) | 0.3897 | 0.926 | 0.3558 | 0.939 | 0.3357 | 0.941 | 0.3078 | 0.970
 | (III, I) | 0.3847 | 0.934 | 0.3555 | 0.927 | 0.3367 | 0.940 | 0.3087 | 0.952
 | (III, II) | 0.3899 | 0.928 | 0.3658 | 0.929 | 0.3455 | 0.938 | 0.3177 | 0.951
(100, 70), (100, 70) | (I, I) | 0.3257 | 0.951 | 0.2978 | 0.950 | 0.2689 | 0.952 | 0.2147 | 0.971
 | (II, II) | 0.3465 | 0.950 | 0.3125 | 0.949 | 0.2867 | 0.951 | 0.2346 | 0.969
 | (III, III) | 0.3599 | 0.949 | 0.3346 | 0.951 | 0.3022 | 0.949 | 0.2647 | 0.958
 | (I, II) | 0.3357 | 0.948 | 0.3178 | 0.939 | 0.2789 | 0.938 | 0.2245 | 0.957
 | (I, III) | 0.3564 | 0.949 | 0.3226 | 0.941 | 0.2968 | 0.936 | 0.2447 | 0.949
 | (II, I) | 0.3359 | 0.934 | 0.3169 | 0.940 | 0.2787 | 0.941 | 0.2346 | 0.947
 | (II, III) | 0.3499 | 0.928 | 0.3246 | 0.928 | 0.2922 | 0.939 | 0.2548 | 0.952
 | (III, I) | 0.3568 | 0.935 | 0.3227 | 0.939 | 0.2969 | 0.938 | 0.2449 | 0.954
 | (III, II) | 0.3498 | 0.927 | 0.3247 | 0.931 | 0.3025 | 0.935 | 0.2549 | 0.953
Table 4. AWs and CPs for R when the true value of R = 0.814092.
(n1, m1), (n2, m2) | (Si, Ti) | ML AWs | ML CPs | BP AWs | BP CPs | BT AWs | BT CPs | Bayes AWs | Bayes CPs
(30, 15), (30, 15) | (I, I) | 0.4377 | 0.929 | 0.4157 | 0.939 | 0.3769 | 0.941 | 0.2857 | 0.960
 | (II, II) | 0.4562 | 0.924 | 0.4368 | 0.937 | 0.3974 | 0.942 | 0.3145 | 0.962
 | (III, III) | 0.4936 | 0.919 | 0.4697 | 0.941 | 0.4478 | 0.940 | 0.3567 | 0.957
 | (I, II) | 0.4474 | 0.915 | 0.4256 | 0.929 | 0.3867 | 0.939 | 0.2955 | 0.958
 | (I, III) | 0.4663 | 0.932 | 0.4469 | 0.927 | 0.4095 | 0.951 | 0.3246 | 0.956
 | (II, I) | 0.4475 | 0.918 | 0.4258 | 0.928 | 0.3868 | 0.950 | 0.2959 | 0.952
 | (II, III) | 0.4762 | 0.917 | 0.4568 | 0.926 | 0.4174 | 0.948 | 0.3345 | 0.960
 | (III, I) | 0.4664 | 0.916 | 0.4468 | 0.925 | 0.4096 | 0.946 | 0.3247 | 0.962
 | (III, II) | 0.4761 | 0.922 | 0.4569 | 0.919 | 0.4175 | 0.945 | 0.3343 | 0.957
(50, 30), (50, 30) | (I, I) | 0.3174 | 0.939 | 0.2658 | 0.938 | 0.2267 | 0.939 | 0.1855 | 0.954
 | (II, II) | 0.3561 | 0.934 | 0.2954 | 0.928 | 0.2543 | 0.941 | 0.2146 | 0.961
 | (III, III) | 0.3798 | 0.925 | 0.3456 | 0.918 | 0.2867 | 0.946 | 0.2574 | 0.971
 | (I, II) | 0.3274 | 0.931 | 0.2758 | 0.917 | 0.2367 | 0.948 | 0.1955 | 0.969
 | (I, III) | 0.3371 | 0.927 | 0.2857 | 0.928 | 0.2465 | 0.936 | 0.2056 | 0.962
 | (II, I) | 0.3257 | 0.926 | 0.2766 | 0.925 | 0.2347 | 0.938 | 0.1957 | 0.957
 | (II, III) | 0.3694 | 0.921 | 0.3352 | 0.924 | 0.2765 | 0.954 | 0.2475 | 0.952
 | (III, I) | 0.3372 | 0.924 | 0.2858 | 0.923 | 0.2466 | 0.951 | 0.2157 | 0.955
 | (III, II) | 0.3595 | 0.910 | 0.3351 | 0.933 | 0.2664 | 0.947 | 0.2373 | 0.960
(100, 55), (100, 55) | (I, I) | 0.2475 | 0.938 | 0.2257 | 0.940 | 0.1969 | 0.951 | 0.1553 | 0.947
 | (II, II) | 0.2784 | 0.934 | 0.2578 | 0.939 | 0.2236 | 0.949 | 0.1874 | 0.951
 | (III, III) | 0.2978 | 0.932 | 0.2863 | 0.937 | 0.2647 | 0.947 | 0.2089 | 0.954
 | (I, II) | 0.2575 | 0.929 | 0.2357 | 0.929 | 0.2069 | 0.944 | 0.1654 | 0.961
 | (I, III) | 0.2684 | 0.930 | 0.2478 | 0.931 | 0.2137 | 0.950 | 0.1775 | 0.962
 | (II, I) | 0.2576 | 0.924 | 0.2358 | 0.930 | 0.2067 | 0.936 | 0.1653 | 0.957
 | (II, III) | 0.2871 | 0.921 | 0.2764 | 0.928 | 0.2546 | 0.939 | 0.1988 | 0.949
 | (III, I) | 0.2685 | 0.920 | 0.2479 | 0.918 | 0.2138 | 0.938 | 0.1776 | 0.948
 | (III, II) | 0.2875 | 0.923 | 0.2669 | 0.921 | 0.2448 | 0.941 | 0.1984 | 0.955
(100, 70), (100, 70) | (I, I) | 0.1978 | 0.941 | 0.1752 | 0.940 | 0.1564 | 0.951 | 0.1256 | 0.958
 | (II, II) | 0.2178 | 0.940 | 0.1968 | 0.939 | 0.1745 | 0.951 | 0.1466 | 0.956
 | (III, III) | 0.2465 | 0.941 | 0.2255 | 0.938 | 0.2014 | 0.950 | 0.1748 | 0.952
 | (I, II) | 0.2078 | 0.939 | 0.1852 | 0.928 | 0.1664 | 0.949 | 0.1355 | 0.960
 | (I, III) | 0.2177 | 0.938 | 0.1969 | 0.919 | 0.1746 | 0.948 | 0.1465 | 0.962
 | (II, I) | 0.2075 | 0.924 | 0.1851 | 0.923 | 0.1661 | 0.939 | 0.1353 | 0.957
 | (II, III) | 0.2365 | 0.926 | 0.2155 | 0.922 | 0.1914 | 0.939 | 0.1648 | 0.960
 | (III, I) | 0.2176 | 0.920 | 0.1993 | 0.934 | 0.1741 | 0.934 | 0.1463 | 0.962
 | (III, II) | 0.2265 | 0.918 | 0.2156 | 0.927 | 0.1999 | 0.935 | 0.1715 | 0.967
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

