Article

Reliability and Inference for Multi State Systems: The Generalized Kumaraswamy Case

by Vlad Stefan Barbu 1, Alex Karagrigoriou 2,* and Andreas Makrides 2

1 Laboratoire de Mathématiques Raphaël Salem, Université de Rouen-Normandie, UMR 6085, Avenue de l’Université, BP.12, F76801 Saint-Étienne-du-Rouvray, France
2 Department of Statistics and Actuarial-Financial Mathematics, University of the Aegean, GR-83200 Samos, Greece
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(16), 1834; https://doi.org/10.3390/math9161834
Submission received: 23 June 2021 / Revised: 27 July 2021 / Accepted: 30 July 2021 / Published: 4 August 2021
(This article belongs to the Special Issue Advances in Reliability Modeling, Optimization and Applications)

Abstract
Semi-Markov processes are typical tools for modeling multi-state systems, since they allow several distributions for the sojourn times. In this work, we focus on a general class of distributions based on an arbitrary parent continuous distribution function G, with Kumaraswamy as the baseline distribution, and discuss some of its properties, including the advantageous property of being closed under minima. In addition, an estimate is provided for the so-called stress–strength reliability parameter, which measures the performance of a system in mechanical engineering. The sojourn times of the multi-state system are considered to follow a distribution with two shape parameters, which belongs to the proposed general class of distributions. Furthermore, for a multi-state system, we provide parameter estimates for the above general class, which are assumed to vary over the states of the system. The theoretical part of the work also includes the asymptotic theory for the proposed estimators, with and without censoring, as well as expressions for classical reliability characteristics. The performance and effectiveness of the proposed methodology are investigated via simulations, which show remarkable results with the help of statistical tools (for the parameter estimates) and graphical tools (for the reliability parameter estimate).

1. Introduction

The Kumaraswamy distribution is a well-known distribution, especially to those familiar with the hydrological literature [1]. Kumaraswamy’s densities are unimodal and uniantimodal and, depending on the parameter values chosen, are either increasing, decreasing or constant functions. Note that most, if not all, of the above characteristics are shared by both the Kumaraswamy and Beta distributions (see [2,3,4]). In fact, the Kumaraswamy and Beta distributions share numerous characteristics, although some of them are much more readily available, from the mathematical point of view, for the Kumaraswamy distribution. The Kumaraswamy distribution is appropriate for the modeling of bounded natural and physical phenomena, such as atmospheric temperatures or hydrological measurements [5,6], record data, such as tests, games or sports [7], economic observations [8], or empirical data with an increasing failure rate [9]. It is also appropriate in situations where one considers a distribution with infinite lower and/or upper bounds to fit data when, in fact, the bounds are finite, which makes Kumaraswamy useful in preventive maintenance. Some specific examples discussed in the literature include failure and running times of devices [10] and deterioration or fatigue failure [11]. Furthermore, due to the closed form of both its distribution function and its quantile function (inverse cumulative distribution), the Kumaraswamy distribution is advantageous from the quantile modeling perspective [12,13]. These characteristics make Kumaraswamy useful and easily applicable in reliability theory.
A general class of distributions with Kumaraswamy as the baseline distribution is considered in this work by using a parent continuous distribution function $G(\cdot)$. The Kumaraswamy distribution is viewed as the baseline distribution of the proposed G-class because it arises in the trivial case in which $G(x)$ is the identity function, which corresponds to the $U(0,1)$ parent distribution. For an arbitrary continuous parent distribution, one can generate a general subclass of distributions whose support differs from that of the baseline (Kumaraswamy) distribution, which, like the Beta distribution, is defined on $[0,1]$. The general form of the G-class of distributions based on an arbitrary parent cdf $G(\cdot)$, with Kumaraswamy as the baseline distribution, is defined by the following (see [2,14,15,16]):
$$F(t; a) = 1 - \left(1 - G(t)^c\right)^a, \quad c, a > 0,$$
where both c and a are shape parameters, associated with the skewness and the tail weight of $F(\cdot)$. Note that additional structural parameters associated with $F(\cdot)$ (such as the shape parameter c) and/or distributional parameters associated with the parent distribution $G(\cdot)$ may also be involved in (1). Due to the fact that the distribution function in (1) is written in closed form, it can be effectively used for both uncensored and censored data in reliability theory as well as in survival analysis.
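Because both the cdf in (1) and its inverse are available in closed form, sampling from any member of the G-class reduces to a single inverse-transform step. The following minimal Python sketch illustrates this; the Exponential parent, the parameter values and the function names are illustrative assumptions, not part of the original text.

```python
import numpy as np

def gclass_cdf(t, a, c, G):
    """cdf of the G-class: F(t; a) = 1 - (1 - G(t)**c)**a."""
    return 1.0 - (1.0 - G(t) ** c) ** a

def gclass_rvs(n, a, c, G_inv, seed=None):
    """Inverse-transform sampling: F^{-1}(u) = G^{-1}((1 - (1-u)**(1/a))**(1/c))."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    return G_inv((1.0 - (1.0 - u) ** (1.0 / a)) ** (1.0 / c))

# Illustrative parent: Exponential(lam), G(t) = 1 - exp(-lam*t)
lam = 2.0
G = lambda t: 1.0 - np.exp(-lam * t)
G_inv = lambda p: -np.log(1.0 - p) / lam

sample = gclass_rvs(10_000, a=1.5, c=0.9, G_inv=G_inv, seed=42)
# The empirical cdf at a test point should be close to the theoretical value
t0 = 0.5
print(np.mean(sample <= t0), gclass_cdf(t0, 1.5, 0.9, G))
```

The same two functions work for any parent distribution, provided its quantile function G_inv is supplied.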
The statistical inference, including parameter estimation in the context of reliability modeling, is of vital importance. In addition to the use of a proper distribution, such as (1), for modeling purposes, one may also be interested in evaluating the performance of the reliability system involved. Indeed, for instance, the problem of performance of a system is of great importance in mechanical engineering and refers to a component of strength X, which is subject to stress Y. The system stays in operation as long as the stress is less than the strength, so the system performance is associated with the probability of exceedance, usually denoted by R. The quantity of interest in such cases is the stress–strength reliability or reliability parameter which is a measure of reliability defined by the following:
$$R = P(Y < X) = E\left[P(Y < X \mid X)\right].$$
The concept of stress–strength reliability has been investigated extensively in the literature for various lifetime distributions. The reliability parameter has been obtained for several distributions that typically appear in reliability analyses, such as Exponential, Gamma, Weibull, Burr, Marshall–Olkin extended Lomax distribution and inverse Rayleigh distribution. Such distributions were considered for at least one of the two variables of interest, namely X and Y (see [17,18,19,20,21]).
In this work, we focus on the general class of distributions of the form (1), using a parent continuous distribution function, and discuss some of its properties, including the stress–strength reliability. In addition, and for a multi-state system, we provide parameter estimates for the class given in (1), which are assumed to vary over the states of the system. The theoretical part of the work also includes the asymptotics for the proposed estimators. The performance of the proposed methodology is investigated with simulated results.
The paper is organized as follows: In Section 2, we propose and discuss the G-class of distributions (1). In Section 3 and Section 4, the semi-Markov setting and the parameter estimation are provided. Reliability characteristics are discussed in Section 5, while applications are analyzed in the final section.

2. The G-Class of Distributions

Consider the family of distributions with shape parameter a and distribution function, given by the following:
$$F(t; a) := 1 - \left[1 - F(t; 1)\right]^a,$$
which is absolutely continuous w.r.t. the Lebesgue measure, with density function $f(t; a)$ and with $F(t; 1)$ being the standard family member obtained for $a = 1$. Classical reliability distributions, such as the Exponential, Rayleigh and Weibull distributions, are members of (3). One of the special features of (3) is that the distribution of the minimum, i.e., of the order statistic $X_{(1)}$ of a random sample $X_1, X_2, \dots, X_n$ from (3), falls within this class (see [22,23]).
Notice that the general G-class of distributions in (1) with a parent continuous distribution $G(\cdot)$ builds a new general class of distributions with each member lying within the class given in (3), with
$$F(t; 1) = G^c(t)$$
and Kumaraswamy as the baseline distribution of the entire class. It should be noted that the baseline (Kumaraswamy) distribution is obtained for the Uniform distribution on $[0, 1]$, i.e., for the identity function $G(x) = x$, with cdf provided by the following expression:
$$F_K(t; a) := 1 - \left(1 - t^c\right)^a, \quad t \in (0, 1).$$
Note that the Kumaraswamy distribution is a member of the G-class (1) with $F_K(t; 1) = t^c$. The Kumaraswamy distribution, which, due to the form of the function G, can be viewed as a Generalized Uniform distribution, is easy to handle in the sense that it has simple explicit expressions for the distribution and quantile functions as well as for the L-moments and the moments of order statistics [2]. Furthermore, it has a simple formula for the generation of random samples. The proposed general G-class, though, goes beyond the Kumaraswamy distribution since, for any continuous distribution chosen as the parent distribution G (e.g., Exponential, Gamma, Weibull, Gumbel, Rayleigh, or Inverse Gaussian), a new specific general (sub)class of densities arises (Generalized Exponential, Gamma, Weibull, Gumbel, Rayleigh or Inverse Gaussian, etc.). Observe that each of these specific general subclasses offers additional flexibility to the researcher for accommodating complex reliability phenomena. Observe further that the G-class in (1) generates a family of distributions with support that goes beyond the restrictive support $[0, 1]$ of the baseline distribution in (4) and, in fact, coincides with the support of the parent distribution $G(\cdot)$. This characteristic extends even further the applicability of the G-class of distributions, covering, among others, classical reliability and survival analysis problems, where the time-to-event is the main feature to be investigated (see, for example, [24]).

2.1. Basic Characteristics of the G-Class of Distributions

The basic functions, including the pdf of the G-class of distributions (1), are given in the lemma below.
Lemma 1.
Let X be a random variable from the G-class of distributions (1) with G ( · ) being absolutely continuous with respect to the Lebesgue measure. Then, the density function, f ( t ) , the hazard (failure) rate function, h ( t ) , the reversed hazard rate function, r ( t ) , and the cumulative hazard rate function, H ( t ) , are the following:
$$f(t) = c\, a\, g(t)\, G(t)^{c-1}\left(1 - G(t)^c\right)^{a-1},$$
$$h(t) = c\, a\, g(t)\, G(t)^{c-1}\left(1 - G(t)^c\right)^{-1},$$
$$r(t) = \frac{c\, a\, g(t)\, G(t)^{c-1}\left(1 - G(t)^c\right)^{a-1}}{1 - \left(1 - G(t)^c\right)^{a}},$$
$$H(t) = -a \log\left(1 - G(t)^c\right),$$
where $g(t) = \frac{dG(t)}{dt}$ is the pdf associated with $G(\cdot)$.
Proof. 
The results follow immediately using the following standard definitions:
$$f(t) = \frac{dF(t)}{dt}, \quad h(t) = \frac{f(t)}{1 - F(t)}, \quad r(t) = \frac{f(t)}{F(t)}, \quad H(t) = \int_0^t h(y)\, dy,$$
where F is the cdf of the random variable involved, given in the case at hand, by (1). □
Having as the baseline distribution of (1) the Kumaraswamy distribution given in (4), it is easily seen that the associated pdf is given by the following:
$$f_K(t) = c\, a\, t^{c-1}\left(1 - t^c\right)^{a-1}.$$
Taking as a parent distribution the Exponential distribution with $G(t; \lambda) = 1 - e^{-\lambda t}$, we have the following:
$$F(t; a, c, \lambda) = 1 - \left[1 - \left(1 - e^{-\lambda t}\right)^c\right]^a,$$
and
$$f(t; a, c, \lambda) = c\, a\, \lambda\, e^{-\lambda t}\left(1 - e^{-\lambda t}\right)^{c-1}\left[1 - \left(1 - e^{-\lambda t}\right)^c\right]^{a-1},$$
while for the Weibull distribution with $G(t) = 1 - e^{-\lambda t^{\beta}}$ as a parent distribution, we have the following:
$$F(t; a, c, \beta, \lambda) = 1 - \left[1 - \left(1 - e^{-\lambda t^{\beta}}\right)^c\right]^a,$$
and
$$f(t; a, c, \beta, \lambda) = c\, a\, \lambda\, \beta\, t^{\beta - 1} e^{-\lambda t^{\beta}}\left(1 - e^{-\lambda t^{\beta}}\right)^{c-1}\left[1 - \left(1 - e^{-\lambda t^{\beta}}\right)^c\right]^{a-1}.$$
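As a quick numerical sanity check of Lemma 1 and of the Exponential-parent member just displayed, the short Python snippet below evaluates f, h and H for an assumed parameter choice and verifies the defining identities $h = f/(1 - F)$ and $H = -\log(1 - F)$; the numerical values are purely illustrative.

```python
import numpy as np

lam, a, c = 2.0, 1.5, 0.9
G = lambda t: 1.0 - np.exp(-lam * t)            # parent cdf (Exponential, assumed)
g = lambda t: lam * np.exp(-lam * t)            # parent pdf
F = lambda t: 1.0 - (1.0 - G(t) ** c) ** a      # G-class cdf (1)
f = lambda t: c * a * g(t) * G(t) ** (c - 1) * (1.0 - G(t) ** c) ** (a - 1)
h = lambda t: c * a * g(t) * G(t) ** (c - 1) / (1.0 - G(t) ** c)
H = lambda t: -a * np.log(1.0 - G(t) ** c)

t = np.linspace(0.05, 3.0, 5)
assert np.allclose(h(t), f(t) / (1.0 - F(t)))   # hazard = f / (1 - F)
assert np.allclose(H(t), -np.log(1.0 - F(t)))   # cumulative hazard = -log(1 - F)
```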
For the baseline Kumaraswamy distribution $K(a, c)$, observe that if the random variable $X \sim K(a, 1)$, then $1 - X \sim K(1, a)$ and $-\ln(X) \sim Exp(a)$. However, if $X \sim K(1, c)$, then $1 - X \sim K(c, 1)$ and $-\ln(1 - X) \sim Exp(c)$. In addition, if $X \sim K(1, c)$, then $X \sim Beta(1, c)$. In general, the parameters c and a control the skewness and the tail of the distribution, so that the G-class becomes ideal for fitting skewed data, which cannot otherwise be described.
As expected, irrespective of the parent distribution, the resulting distribution is a member of (1) as summarized below.
Lemma 2.
Let G be a specific distribution function with a k-dimensional distributional parameter $\theta \in \mathbb{R}^k$. Then, for this specific parent continuous distribution $G(x)$, the resulting $F(\cdot)$ is a member of the G-class of distributions (1).
Proof. 
Consider a cdf $G(x)$ and define $F(t; 1) \equiv G^c(t)$, so that $1 - F(t; a) = \left(1 - G^c(t)\right)^a$. Then, the resulting F satisfies expression (1) and the result is immediate. □

Reliability Characteristics

In this section, we provide some basic reliability characteristics, including the expression for the reliability parameter in the case of two random variables with distributions in the G-class of distributions, with different shape parameters.
Theorem 1.
Let T be a random variable with cdf belonging to the G-class. Then, the reliability and hazard functions of the random variable T are given, respectively, by the following:
$$R(t; a) := \left(1 - G(t)^c\right)^a$$
and
$$h(t; a) = c\, a\, g(t)\, G(t)^{c-1}\left(1 - G(t)^c\right)^{-1}.$$
Proof. 
The result is immediate from the definitions of the reliability and hazard functions and the expressions (1) and (5). □
Theorem 2.
Let X, Y be independent random variables from the G-class with shape parameters $a_1$ and $a_2$, respectively, and common shape parameter c. Then, the reliability parameter R given in (2) is a constant that depends only on the shape parameters $a_1$ and $a_2$.
Proof. 
Let $X \sim F(x) = 1 - \left(1 - G(x)^c\right)^{a_1}$ and $Y \sim H(y) = 1 - \left(1 - G(y)^c\right)^{a_2}$, with $G(\cdot)$ the common parent distribution, which may or may not depend on distributional parameters. The reliability parameter can be written as the following (here $h_Y$ and $H_Y$ denote the density and the cdf of Y, and the integrals in x are taken over the support of $G(\cdot)$):
$$R = P(X > Y) = E\left[\int_0^{X} h_Y(y)\, dy\right] = E\left[H_Y(X)\right] = \int f_X(x) \int_0^{x} h_Y(y)\, dy\, dx = \int f_X(x)\, H_Y(x)\, dx = \int c\, a_1\, g(x)\, G(x)^{c-1}\left(1 - G(x)^c\right)^{a_1 - 1}\left[1 - \left(1 - G(x)^c\right)^{a_2}\right] dx.$$
Setting $u = 1 - G(x)^c$, so that $du = -c\, G(x)^{c-1} g(x)\, dx$, the above integration becomes:
$$-a_1 \int_1^0 u^{a_1 - 1}\, du + a_1 \int_1^0 u^{a_1 + a_2 - 1}\, du = 1 - \frac{a_1}{a_1 + a_2} = \frac{a_2}{a_1 + a_2}.$$
Remark 1.
Consider X and Y, two random variables having the baseline Kumaraswamy distribution of the G-class. In this case and under the setting of Theorem 2, the reliability parameter between X and Y is the following:
$$R = \int_0^1 c\, a_1\, x^{c-1}\left(1 - x^c\right)^{a_1 - 1}\left[1 - \left(1 - x^c\right)^{a_2}\right] dx.$$
Setting $u = 1 - x^c$, so that $du = -c\, x^{c-1}\, dx$, we have the following:
$$R = \int_0^1 a_1\, u^{a_1 - 1}\left(1 - u^{a_2}\right) du = \frac{1}{1 + a_1/a_2}.$$
Remark 2.
If we consider as the parent distribution the Exponential distribution $G(x) = 1 - e^{-\lambda x}$, $x > 0$, then, under the setting of Theorem 2, the reliability parameter associated with X and Y is the following:
$$R = \int_0^{\infty} c\, a_1\, \lambda\, e^{-\lambda x}\left(1 - e^{-\lambda x}\right)^{c-1}\left[1 - \left(1 - e^{-\lambda x}\right)^c\right]^{a_1 - 1}\left[1 - \left(1 - \left(1 - e^{-\lambda x}\right)^c\right)^{a_2}\right] dx,$$
which, for $u = 1 - \left(1 - e^{-\lambda x}\right)^c$ and $du = -c\left(1 - e^{-\lambda x}\right)^{c-1}\lambda\, e^{-\lambda x}\, dx$, takes the following form:
$$R = \int_0^1 a_1\, u^{a_1 - 1}\left(1 - u^{a_2}\right) du = \frac{a_2}{a_1 + a_2}.$$
Remark 3.
If $R = 1/2$, then the two distributions of the G-class (for any continuous $G(\cdot)$) share a common shape parameter, i.e., $a_1 = a_2$. In general, for $\delta = a_1/a_2$, we have
$$R = \frac{1}{1 + \delta},$$
so that $R < 1/2$ if $a_1 > a_2$, while $R > 1/2$ if $a_2 > a_1$. Thus, R increases as $a_2$ increases relative to $a_1$, and decreases otherwise.
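A quick Monte Carlo check of Theorem 2 and Remark 3 is sketched below for the baseline Kumaraswamy case; the shape parameters and sample size are illustrative assumptions.

```python
import numpy as np

def kumaraswamy_rvs(n, a, c, rng):
    """Sample from F_K(t; a) = 1 - (1 - t**c)**a via inverse transform."""
    u = rng.uniform(size=n)
    return (1.0 - (1.0 - u) ** (1.0 / a)) ** (1.0 / c)

rng = np.random.default_rng(0)
a1, a2, c = 2.0, 3.0, 1.7                    # illustrative shape parameters
x = kumaraswamy_rvs(200_000, a1, c, rng)     # strength X
y = kumaraswamy_rvs(200_000, a2, c, rng)     # stress Y
print(np.mean(y < x))                        # Monte Carlo estimate of R = P(Y < X)
print(a2 / (a1 + a2))                        # closed form from Theorem 2
```

With $a_1 = 2$ and $a_2 = 3$, both the simulated proportion and the closed form should be close to $a_2/(a_1 + a_2) = 0.6$.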

2.2. Ordered Statistics and Distribution of the Minimum

In this section, we establish that the G-class is closed under minima, a significant property that plays a pivotal role in inferential statistics under the multi-state setting of the following section.
Theorem 3.
If $X_1, \dots, X_n$ are independent and identically distributed random variables from the G-class, then the distribution function $F_{min}$ of the minimum order statistic $X_{(1)}$ is given by the following:
$$F_{min}(x) = 1 - \left(1 - G(x)^c\right)^{na}$$
and belongs to the G-class.
Proof. 
It is straightforward that
$$F_{min}(x) = P\left(X_{(1)} \le x\right) = 1 - \prod_{i=1}^{n} P(X_i > x) = 1 - \left[1 - P(X_i \le x)\right]^n = 1 - \left[\left(1 - G(x)^c\right)^{a}\right]^n = 1 - \left(1 - G(x)^c\right)^{na},$$
which belongs to the G-class with shape parameters $c$ and $na$. □
Lemma 3.
Let $X_1, \dots, X_n$ be i.i.d. random variables from the G-class of distributions (1), where G is the Exponential distribution. Then the distribution function $F_{min}$ of the first order statistic $X_{(1)}$ falls into the G-class.
Proof. 
The result arises naturally from the previous theorem. In fact, by substituting $G(x) = 1 - e^{-\lambda x}$, the distribution of the minimum becomes the following:
$$F(x; a, c, \lambda) = 1 - \left[1 - \left(1 - e^{-\lambda x}\right)^c\right]^{na}.$$
Remark 4.
The results of this section can be generalized by dropping the assumption of identically distributed random variables. Indeed, if one considers the case of independent but not necessarily identically distributed (inid) r.v.’s and assumes a random sample $X_1, \dots, X_n$ with the cdf of $X_i$, $i = 1, \dots, n$, given by
$$F(t; a_i) := 1 - \left[1 - G(t; \theta)^c\right]^{a_i},$$
then Theorem 3 still holds true with $F_{min}$ belonging to the G-class (1) with parameters c and $\sum_{i=1}^{n} a_i$, namely the following:
$$F_{min}(x; a_1, \dots, a_n) = 1 - \left(1 - G(x)^c\right)^{\sum_{i=1}^{n} a_i}.$$
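This closure property is easy to verify by simulation. The sketch below, with assumed (illustrative) shape parameters, draws inid Kumaraswamy-baseline variables and compares the empirical cdf of their minimum with the G-class cdf whose shape parameter is $\sum_i a_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 2.0
a = np.array([0.8, 1.5, 2.2])          # illustrative shape parameters a_1, a_2, a_3

def kuma_rvs(size, a_shape, c_shape):
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / a_shape)) ** (1.0 / c_shape)

# independent but not identically distributed Kumaraswamy variables, per replication
samples = np.column_stack([kuma_rvs(100_000, ai, c) for ai in a])
mins = samples.min(axis=1)

# Remark 4: the minimum is again in the G-class, with shape parameters c and sum(a)
t0 = 0.4
print(np.mean(mins <= t0))                       # empirical cdf of the minimum at t0
print(1.0 - (1.0 - t0 ** c) ** a.sum())          # theoretical F_min(t0)
```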
The next section concentrates on inid r.v.’s under the multi-state setting, with the sojourn times $T_{ij}$ (the time spent in state i before moving to state j) having a distribution $F_{ij}(\cdot; a_{ij})$ belonging to (1), with shape parameter $a_{ij}$ and a common parameter c for any $i, j = 1, \dots, N$, where N represents the finite number of system states. From a practical point of view, such a setting is quite meaningful, since the transition from one state to another in multi-state systems is not necessarily described by the iid framework. Thus, for instance, the waiting time in a state i (before the system moves to state j) could be properly described by, for example, the Exponential distribution, while the distribution of the waiting time in state i, or even in state j, before moving to state k may have a heavier or lighter tail than the Exponential distribution. Such situations are tractable within our inid framework by allowing the parameter controlling the tail part of the distribution, i.e., the parameter a, to vary according to the specific current and next visited states. The case where both the a and c parameters vary is a complex mathematical problem that is left for future work.

3. The Semi-Markov Model and Multi-State Systems

A multi-state model is a continuous time stochastic process with values in a discrete set. Diverting from the standard class of Markov processes to the semi-Markov processes, we abandon the restriction of memoryless state sojourn times but, at the same time, we retain the treatment of the data as jump processes in continuous time. In fact, for semi-Markov processes, the Markov property is assumed only for the embedded chain of distinct visited states and we also have a Markov property that acts on random time instants, i.e., on the jump time instants. Such characteristics allow for a great applicability of semi-Markov processes in fields such as economics, finance, survival analysis, reliability, health care systems, etc. [25,26,27,28,29,30,31].
Consider a stochastic jump process $Z = (Z_t)_{t \in \mathbb{R}_+}$ with state space $E = \{1, \dots, N\}$, $N < \infty$. We denote by $S = (S_n)_{n \in \mathbb{N}}$ the jump times, by $J = (J_n)_{n \in \mathbb{N}}$ the visited states at these times and by $X = (X_n)_{n \in \mathbb{N}}$ the sojourn times, $X_n = S_n - S_{n-1}$, $n \in \mathbb{N}^*$, $X_0 = S_0 = 0$.
We assume that $Z = (Z_t)_{t \in \mathbb{R}_+}$ is a semi-Markov process (SMP) and that $(J, S) = (J_n, S_n)_{n \in \mathbb{N}}$ is a Markov renewal process (MRP) associated to the SMP (cf. [29]). It immediately follows that $(J_n)_{n \in \mathbb{N}}$ is a Markov chain, called the embedded Markov chain. Let us also denote by
$$N(t) := \max\{n \in \mathbb{N} \mid S_n \le t\}, \quad t \in \mathbb{R}_+,$$
the counting process of the number of jumps in the time interval $(0, t]$.
We assume that the SMP is regular, that is, $P_i(N(t) < \infty) = 1$ for all $t > 0$ and every state $i \in E$.
An SMP is defined by the initial law
$$\alpha = (\alpha_1, \dots, \alpha_N), \quad \alpha_j := P(J_0 = j), \ j \in E,$$
and the semi-Markov kernel
$$Q_{ij}(t) := P(J_n = j, X_n \le t \mid J_{n-1} = i).$$
We can also define the transition probabilities of $(J_n)_{n \in \mathbb{N}}$, as follows:
$$p_{ij} := P(J_n = j \mid J_{n-1} = i) = \lim_{t \to \infty} Q_{ij}(t),$$
and the conditional sojourn time distribution functions as the following:
$$W_{ij}(t) := P(S_n - S_{n-1} \le t \mid J_{n-1} = i, J_n = j),$$
so that $Q_{ij}(t) = p_{ij} W_{ij}(t)$.
The time spent in state i before moving directly to state j is denoted by $T_{ij}$; let $F_{ij}(t; a_{ij})$ be the corresponding cumulative distribution function and $f_{ij}(t; a_{ij})$ the density function with respect to the Lebesgue measure.
The model we consider assumes that the next state to be visited after i is the one for which $T_{ij}$ is the minimum. Under this condition, we have the following:
$$Q_{ij}(t) = P\left(\min_{l} T_{il} \le t \ \text{and the minimum occurs for } j \mid J_{n-1} = i\right) = p_{ij}\, W_i(t),$$
where
$$p_{ij} = P(J_n = j \mid J_{n-1} = i) = P\left(T_{ij} \le T_{il},\ \forall l \mid J_{n-1} = i\right)$$
and
$$W_{ij}(t) = P\left(\min_{l} T_{il} \le t \mid J_{n-1} = i\right) =: W_i(t), \quad \text{independent of } j.$$
We denote by $f_i(t)$ the density of $W_i(t)$ w.r.t. the Lebesgue measure. Note that $\sum_{j} Q_{ij}(t) = W_i(t)$.
The following proposition holds under the G-class.
Proposition 1.
$$Q_{ij}(t) = \frac{a_{ij}}{\sum_{k \in E} a_{ik}}\left[1 - \left(1 - G(t)^c\right)^{\sum_{k \in E} a_{ik}}\right],$$
$$p_{ij} = \frac{a_{ij}}{\sum_{k \in E} a_{ik}},$$
$$W_i(t) = 1 - \left(1 - G(t)^c\right)^{\sum_{j=1}^{N} a_{ij}}$$
and
$$f_i(t) = \sum_{j=1}^{N} a_{ij}\left(1 - G(t)^c\right)^{\sum_{j=1}^{N} a_{ij}} c\, g(t)\, G(t)^{c-1}\left(1 - G(t)^c\right)^{-1}.$$
Remark 5.
The model considered in this work assumes that the next state to be visited after state i is the one for which $T_{ij}$ is the minimum. In many cases, especially in reliability applications, the optimal choice for the next state to be visited is the one with the “lowest cost” or the “minimum distance”. This can be achieved by setting up the system so that j is chosen such that the potential time spent in state i before moving directly to state j is minimal over all states in the state space. The definition of $T_{ij}$ can be adjusted accordingly if one focuses on the cost instead of the time, in which case $T_{ij}$ is the potential cost for the route from state i to state j.
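The competing-minima construction above translates directly into a simulation recipe: from state i, draw one potential sojourn time $T_{ij}$ for every reachable state j, move to the state attaining the minimum, and repeat. The Python sketch below does this for the Kumaraswamy baseline ($G(x) = x$), using for illustration the $a_{ij}$ values of Table 1 with $c = 2$; the function names and the time horizon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 2.0
# a_ij values of Table 1 (a_ii = 0 means no self-transition); states are 0-indexed here
a = np.array([[0.0, 0.9, 2.1],
              [1.5, 0.0, 0.3],
              [1.2, 1.8, 0.0]])

def kuma_rvs(a_shape, c_shape):
    u = rng.uniform()
    return (1.0 - (1.0 - u) ** (1.0 / a_shape)) ** (1.0 / c_shape)

def simulate_path(i0, horizon):
    """Simulate visited states and sojourn times until `horizon` (last sojourn censored)."""
    states, sojourns, i, t = [i0], [], i0, 0.0
    while True:
        # competing potential times T_ij; the smallest one decides the next state
        times = {j: kuma_rvs(a[i, j], c) for j in range(a.shape[0]) if a[i, j] > 0}
        j, x = min(times.items(), key=lambda kv: kv[1])
        if t + x > horizon:
            sojourns.append(horizon - t)          # right-censored last sojourn
            return states, sojourns
        states.append(j)
        sojourns.append(x)
        i, t = j, t + x

path_states, path_sojourns = simulate_path(i0=0, horizon=50.0)
```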

4. Inference with and without Censoring

We proceed now to the statistical inference with and without censoring. More specifically, for the latter case, the sojourn times are fully observed, while, for the former, right censoring is observed at the last visited state.
Let M be the total observation time. Then, the likelihood function for the case without censoring and for the sample paths
$$\left(j_0^{(l)}, x_1^{(l)}, j_1^{(l)}, x_2^{(l)}, \dots, j_{N_l(M)}^{(l)}\right), \quad l = 1, \dots, L,$$
is given by the following:
$$\mathcal{L} = \prod_{l=1}^{L} \alpha_{j_0^{(l)}}\, p_{j_0^{(l)} j_1^{(l)}} \cdots p_{j_{N_l(M)-1}^{(l)} j_{N_l(M)}^{(l)}}\, f_{j_0^{(l)}}\!\left(x_1^{(l)}\right) \cdots f_{j_{N_l(M)-1}^{(l)}}\!\left(x_{N_l(M)}^{(l)}\right) = \prod_{i \in E} \alpha_i^{N_{i,0}(L)} \prod_{i,j \in E} p_{ij}^{\sum_{l=1}^{L} N_{ij}^{(l)}(M)} \times \prod_{l=1}^{L} \prod_{i \in E} \prod_{k=1}^{N_i^{(l)}(M)} f_i\!\left(x_i^{(l,k)}\right),$$
where
  • $N_{i,0}(L) := \sum_{l=1}^{L} \mathbf{1}_{\{J_0^{(l)} = i\}}$,
  • $N_i^{(l)}(M)$: the number of visits to state i up to time M of the lth trajectory, $l = 1, \dots, L$,
  • $N_{ij}^{(l)}(M)$: the number of transitions from state i to state j up to time M during the lth trajectory, $l = 1, \dots, L$,
  • $N_{ij}(L, M) := \sum_{l=1}^{L} N_{ij}^{(l)}(M)$,
  • $x_i^{(l,k)}$: the sojourn time in state i during the kth visit, $k = 1, \dots, N_i^{(l)}(M)$, of the lth trajectory, $l = 1, \dots, L$.
Using the relevant expressions associated with the G-class, Equation (22) becomes the following:
$$\mathcal{L} = \prod_{i \in E} \alpha_i^{N_{i,0}(L)} \prod_{l=1}^{L} \prod_{i,j \in E} a_{ij}^{N_{ij}^{(l)}(M)} \times \prod_{l,i,k} \left(1 - G\!\left(x_i^{(l,k)}\right)^c\right)^{\sum_{j \in E} a_{ij}} c\, g\!\left(x_i^{(l,k)}\right) G\!\left(x_i^{(l,k)}\right)^{c-1}\left(1 - G\!\left(x_i^{(l,k)}\right)^c\right)^{-1}.$$
Maximizing the above function, the maximum likelihood estimators of the parameters $a_{ij}$ and of the initial law $\alpha_i$ are obtained:
$$\widehat{a}_{ij}(L, M) = \frac{N_{ij}(L, M)}{\sum_{l=1}^{L} B_i^{(l)}(M)}$$
and
$$\widehat{\alpha}_i(L, M) = \frac{N_{i,0}(L)}{L},$$
where
$$B_i^{(l)}(M) = -\sum_{k=1}^{N_i^{(l)}(M)} \log\left(1 - G\!\left(X_i^{(l,k)}\right)^c\right).$$
Observe that the results above hold for any number of trajectories. For the special case of a single sample path, the estimator $\widehat{a}_{ij}(1, M)$ can be simplified as follows:
$$\widehat{a}_{ij}(M) = \frac{N_{ij}(1, M)}{B_i^{(1)}(M)} \equiv \frac{N_{ij}(M)}{B_i(M)}, \quad i, j \in E.$$
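In code, the uncensored estimator above amounts to counting transitions and accumulating the $-\log\left(1 - G(x)^c\right)$ terms per departure state. A minimal Python sketch follows; the toy path, the value of c and the helper name are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def estimate_aij(states, sojourns, c, G=lambda x: x):
    """MLE a_hat_ij = N_ij / B_i from one uncensored path, where
    B_i = -sum_k log(1 - G(x_i^(k))**c); G defaults to the Kumaraswamy baseline G(x) = x."""
    N = defaultdict(int)      # N_ij: number of i -> j transitions
    B = defaultdict(float)    # B_i : cumulated -log(1 - G(x)^c) over sojourns in i
    for n in range(len(states) - 1):
        i, j, x = states[n], states[n + 1], sojourns[n]
        N[(i, j)] += 1
        B[i] += -np.log(1.0 - G(x) ** c)
    return {(i, j): nij / B[i] for (i, j), nij in N.items()}

# toy path: visited states and the sojourn time spent before each transition
states = [0, 1, 0, 2, 1]
sojourns = [0.31, 0.12, 0.55, 0.08]
a_hat = estimate_aij(states, sojourns, c=2.0)
```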
Turning now to the case with censoring at time M, we consider L censored sample paths denoted by $\left(j_0^{(l)}, x_1^{(l)}, j_1^{(l)}, x_2^{(l)}, \dots, j_{N_l(M)}^{(l)}, u_M^{(l)}\right)$, $l = 1, \dots, L$. Then, the associated likelihood function is equal to the following:
$$\mathcal{L} = \prod_{i \in E} \alpha_i^{N_{i,0}(L)} \prod_{i,j \in E} p_{ij}^{\sum_{l=1}^{L} N_{ij}^{(l)}(M)} \times \prod_{l=1}^{L} \prod_{i \in E} \prod_{k=1}^{N_i^{(l)}(M)} f_i\!\left(x_i^{(l,k)}\right) \prod_{i \in E} \prod_{k=1}^{N_{i,M}(L)} \left(1 - W_i\!\left(u_i^{(k)}\right)\right),$$
where
  • $u_M^{(l)} := M - S_{N_l(M)}$ is the observed censored time of the lth trajectory;
  • $N_{i,M}(L) = \sum_{l=1}^{L} \mathbf{1}_{\{J_{N_l(M)}^{(l)} = i\}}$ is the number of visits to state i, as last visited state, over the L trajectories; note that $\sum_{i \in E} N_{i,M}(L) = L$;
  • $u_i^{(k)}$ is the observed censored sojourn time in state i during the kth visit, $k = 1, \dots, N_{i,M}(L)$.
It should be pointed out that, if M happens to be a jump time, then, for the associated path(s), we have $u_M^{(l)} = 0$ and thus the corresponding likelihood term equals 1. If this happens to occur for all values of $l = 1, 2, \dots, L$, then the censoring case described above collapses to the uncensored case discussed earlier.
For the G-class, the likelihood with censoring for the case of L trajectories given in (26) takes the following form:
$$\mathcal{L} = \prod_{i \in E} \alpha_i^{N_{i,0}(L)} \prod_{l=1}^{L} \prod_{i,j \in E} a_{ij}^{N_{ij}^{(l)}(M)} \times \prod_{l,i,k} \left(1 - G\!\left(x_i^{(l,k)}\right)^c\right)^{\sum_{j \in E} a_{ij}} c\, g\!\left(x_i^{(l,k)}\right) G\!\left(x_i^{(l,k)}\right)^{c-1}\left(1 - G\!\left(x_i^{(l,k)}\right)^c\right)^{-1} \times \prod_{i \in E} \prod_{k=1}^{N_{i,M}(L)} \left(1 - G\!\left(u_i^{(k)}\right)^c\right)^{\sum_{j \in E} a_{ij}}.$$
The resulting estimators $\widehat{a}_{ij}(L, M)$ of the parameters and $\widehat{\alpha}_i(L, M)$ of the initial law are provided by the following expressions:
$$\widehat{a}_{ij}(L, M) = \frac{N_{ij}(L, M)}{\sum_{l=1}^{L} B_i^{(l)}(M) - \sum_{k=1}^{N_{i,M}(L)} \log\left(1 - G\!\left(U_i^{(k)}\right)^c\right)}$$
and
$$\widehat{\alpha}_i(L, M) = \frac{N_{i,0}(L)}{L}.$$
Note that the above results hold for any number of trajectories. For the special case of $L = 1$, the parameter estimates take the following simplified form:
$$\widehat{a}_{ij}(1, M) \equiv \widehat{a}_{ij}(M) = \frac{N_{ij}(1, M)}{B_i^{(1)}(M) - \log\left(1 - G(U_M)^c\right)}, \quad i, j \in E,$$
where $U_M = M - S_{N(M)}$ represents the last censored sojourn time.
Choosing the appropriate estimator from those obtained in this section, we can now introduce the estimators of the main components $p_{ij}$, $W_i$ and $Q_{ij}$ of the semi-Markov model:
$$\widehat{p}_{ij}(M) = \frac{\widehat{a}_{ij}(L, M)}{\sum_{l \in E} \widehat{a}_{il}(L, M)} = \frac{N_{ij}(M)}{N_i(M)},$$
$$\widehat{W}_i(t, M) = 1 - \left(1 - G(t)^c\right)^{\sum_{j \in E} \widehat{a}_{ij}(L, M)}$$
and
$$\widehat{Q}_{ij}(t, M) = \frac{\widehat{a}_{ij}(L, M)}{\sum_{k \in E} \widehat{a}_{ik}(L, M)}\left[1 - \left(1 - G(t)^c\right)^{\sum_{k \in E} \widehat{a}_{ik}(L, M)}\right].$$
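Once the $\widehat{a}_{ij}$ are available, the three plug-in estimators just displayed reduce to a few lines of array arithmetic. A possible Python sketch is given below; the fitted values reuse the Table 1 layout purely for illustration.

```python
import numpy as np

def semi_markov_estimates(a_hat, c, t, G=lambda x: x):
    """Plug-in estimators of p_ij, W_i(t) and Q_ij(t) from fitted shape parameters.
    `a_hat` is an (N, N) array of estimated a_ij (0 on the diagonal)."""
    a_hat = np.asarray(a_hat, dtype=float)
    row_sums = a_hat.sum(axis=1, keepdims=True)          # sum_k a_ik per state i
    p = a_hat / row_sums                                  # embedded-chain transitions
    W = 1.0 - (1.0 - G(t) ** c) ** row_sums.ravel()       # sojourn cdf per state i
    Q = p * (1.0 - (1.0 - G(t) ** c) ** row_sums)         # semi-Markov kernel at t
    return p, W, Q

a_hat = np.array([[0.0, 0.9, 2.1],
                  [1.5, 0.0, 0.3],
                  [1.2, 1.8, 0.0]])
p_hat, W_hat, Q_hat = semi_markov_estimates(a_hat, c=2.0, t=0.5)
```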

Parameter Estimation for Kumaraswamy Distribution

The estimator of the parameter $a_{ij}$ for the Kumaraswamy distribution, i.e., for $G(x) = x$ in (1), and without censoring takes the following form:
$$\widehat{a}_{ij}(L, M) = \frac{N_{ij}(L, M)}{-\sum_{l=1}^{L}\sum_{k=1}^{N_i^{(l)}(M)} \log\left(1 - \left(X_i^{(l,k)}\right)^c\right)}.$$
Note that the maximum likelihood estimator $\widehat{c}$ of the shape parameter c is obtained by solving the equation given by the following:
$$\frac{\partial \log \mathcal{L}}{\partial c} = \sum_{l,i,j} N_{ij}^{(l)}(M)\left[\sum_{k=1}^{N_i^{(l)}(M)} \log\left(1 - \left(x_i^{(l,k)}\right)^c\right)\right]^{-1} \sum_{k=1}^{N_i^{(l)}(M)} \frac{\left(x_i^{(l,k)}\right)^c \log x_i^{(l,k)}}{1 - \left(x_i^{(l,k)}\right)^c} + \sum_{l,i,k} \frac{\left(x_i^{(l,k)}\right)^c \log x_i^{(l,k)}}{1 - \left(x_i^{(l,k)}\right)^c} + \sum_{l,i,k} \frac{1 + c \log x_i^{(l,k)}}{c} = 0.$$
Finally, note that, in the censored case, the estimator of $a_{ij}$ becomes:
$$\widehat{a}_{ij}(L, M) = \frac{N_{ij}(L, M)}{-\sum_{l=1}^{L}\sum_{k=1}^{N_i^{(l)}(M)} \log\left(1 - \left(X_i^{(l,k)}\right)^c\right) - \sum_{k=1}^{N_{i,M}(L)} \log\left(1 - \left(U_i^{(k)}\right)^c\right)}.$$
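Since the score equation in c has no closed-form solution, it must be solved numerically. One hedged way to sketch this is to profile the log-likelihood: for each candidate c, plug in $\widehat{a}_{ij}(c) = N_{ij}/B_i(c)$ and evaluate the Kumaraswamy log-likelihood, then pick the maximizing c. The Python sketch below uses a crude grid search on a toy path; the data, grid and function names are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np
from collections import defaultdict

# toy uncensored path (Kumaraswamy sojourn times, so the x's lie in (0, 1))
states = [0, 1, 0, 2, 1, 0]
sojourns = [0.31, 0.12, 0.55, 0.08, 0.47]

def profile_loglik(c, states, sojourns):
    """Kumaraswamy (G(x)=x) log-likelihood at shape c, with each a_ij replaced
    by its conditional MLE a_hat_ij(c) = N_ij / B_i(c)."""
    N, B, obs = defaultdict(int), defaultdict(float), defaultdict(list)
    for n in range(len(states) - 1):
        i, j, x = states[n], states[n + 1], sojourns[n]
        N[(i, j)] += 1
        B[i] += -np.log(1.0 - x ** c)
        obs[i].append(x)
    ll = 0.0
    for (i, j), nij in N.items():
        ll += nij * np.log(nij / B[i])                     # sum_{i,j} N_ij log a_hat_ij(c)
    for i, xs in obs.items():
        xs = np.asarray(xs)
        a_i = sum(nij for (ii, _), nij in N.items() if ii == i) / B[i]   # sum_j a_hat_ij(c)
        ll += np.sum(a_i * np.log(1.0 - xs ** c)           # (sum_j a_ij) log(1 - x^c)
                     + np.log(c) + (c - 1.0) * np.log(xs)  # log c + (c-1) log x
                     - np.log(1.0 - xs ** c))              # -log(1 - x^c)
    return ll

# crude grid search; in practice the score equation would be solved with a root finder
grid = np.linspace(0.2, 6.0, 300)
c_hat = grid[np.argmax([profile_loglik(c, states, sojourns) for c in grid])]
```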

5. Transition Matrix and Reliability Approach of Semi-Markov Processes

For the purpose of this section, the Markov renewal function $\Psi_{ij}(t)$, $i, j \in E$, $t \ge 0$, is defined as the following [29]:
$$\Psi_{ij}(t) = E_i[N_j(t)] = \sum_{n=1}^{\infty} Q_{ij}^{(n)}(t),$$
where $Q_{ij}^{(n)}(t)$ is the nth convolution of Q by itself. For $i, j \in E$, the semi-Markov transition matrix is defined as follows [29]:
$$P_{ij}(t) = P(Z_t = j \mid Z_0 = i) = \int_0^t \Psi_{ij}(ds)\left(1 - \sum_{k \in E} Q_{jk}(t - s)\right).$$
The main idea for a reliability analysis is that the space E is divided into two subsets: U, which contains the functioning states, and D, which contains the failure states, such that $E = U \cup D$ and $U \cap D = \emptyset$, i.e., $U = \{1, \dots, n\}$ and $D = \{n+1, \dots, N\}$. We also consider the corresponding restrictions of any matrix or matrix-valued function, for example f, denoted by $f_{UU}$, $f_{UD}$, $f_{DU}$, $f_{DD}$, respectively. We denote by $P(t)$ the semi-Markov transition matrix, by $P_{UU}(t)$ and $P_{DD}(t)$ the restrictions of $P(t)$ induced by the corresponding restrictions of the semi-Markov kernel to the sets U and D, respectively (attention: $P_{UU}(t)$ and $P_{DD}(t)$ are not the restrictions of $P(t)$ to the sets U and D), and by $\alpha_U$ and $\alpha_D$ the restrictions of the initial law $\alpha$ to the sets U and D, respectively. Finally, let $p_{UU}$ be the restriction of the embedded Markov chain transition matrix to the set U. Having this in mind, one can furnish several reliability indices such as those derived below. Indeed, for instance, the reliability function and the failure rate are given respectively by the following:
$$R(t) = P(Z_s \in U,\ \forall s \le t) = \alpha_U\, P_{UU}(t)\, \mathbf{1}_n$$
and
$$\lambda(t) = -\frac{R'(t)}{R(t)}, \quad \text{for } t > 0.$$
Furthermore, the availability $A(t)$ and the maintainability $M(t)$ at time t for a semi-Markov system are defined respectively by the following (for details, see [29,32]):
$$A(t) = P(Z_t \in U) = \alpha\, P(t)\, \mathbf{1}_{N;n}, \qquad M(t) = 1 - P(Z_s \in D,\ \forall s \le t) = 1 - \alpha_D\, P_{DD}(t)\, \mathbf{1}_{N-n},$$
where $\mathbf{1}_{N;n} = (\underbrace{1, \dots, 1}_{n}, \underbrace{0, \dots, 0}_{N-n})$ and $\mathbf{1}_m = (\underbrace{1, \dots, 1}_{m})$.
Finally, the MTTF (mean time to failure) is given by the following:
$$E[T_D] = \alpha_U\left(I_n - p_{UU}\right)^{-1} m_U,$$
where $m_U$ is the restriction to the set U of the vector of mean sojourn times $m_i$, which can be estimated by the following:
$$\widehat{m}_i^{(1)}(M) := \int_0^{\infty}\left(1 - \widehat{W}_i(t, M)\right) dt = \int_0^{\infty}\left(1 - G(t)^c\right)^{\sum_{j=1}^{N} \widehat{a}_{ij}(M)} dt$$
or
$$\widehat{m}_i^{(2)}(M) := \frac{\sum_{k=1}^{N_i(M)} X_i^{(k)}}{N_i(M)}.$$
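Putting the pieces together, the MTTF formula above is a small linear-algebra computation once $p_{UU}$, $\alpha_U$ and the mean sojourn times have been estimated. The Python sketch below uses made-up numbers for a three-state system with one failure state; all values are illustrative assumptions.

```python
import numpy as np

# Illustrative 3-state system: states {0, 1} are up (U), state {2} is down (D)
p = np.array([[0.0, 0.3, 0.7],      # embedded-chain transition matrix p_ij
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
alpha = np.array([0.6, 0.4, 0.0])   # initial law
m = np.array([1.2, 0.8, 2.0])       # mean sojourn times m_i (estimated beforehand)

U = [0, 1]                          # indices of the up states
p_UU = p[np.ix_(U, U)]
alpha_U = alpha[U]
m_U = m[U]

# MTTF = alpha_U (I_n - p_UU)^{-1} m_U
mttf = alpha_U @ np.linalg.solve(np.eye(len(U)) - p_UU, m_U)
print(mttf)
```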

6. Simulations

The proposed methodology is evaluated via simulations for both the case of one sample path and the case of several sample paths. In addition, censoring is also taken into account in some of the above cases. More precisely, in the first part, we consider the case of a single sample path for several values of the observation time M; in the second part, we assume L sample paths where $T_{ij} \sim K(a_{ij}, 2)$ and the observation time M is set to 1000. The true values of the parameters $a_{ij}$ and of the transition probabilities of the embedded Markov chain are given in Table 1.

6.1. Single Sample Path

Table 2 provides the squared errors of the estimators for several values of the observation time M in order to study their accuracy. As expected, the squared errors of the estimators decrease as M increases ($M = 10, 50, 100$ and $1000$).

6.2. Several Sample Paths

The results for the estimated transition probabilities of the semi-Markov process $P_{ij}(t)$ are provided in this subsection for various times t and for several sample paths, together with the associated squared errors. Table 3, Table 4 and Table 5 and Figure 1 refer to the uncensored case, while Table 6, Table 7 and Table 8 and Figure 2 refer to the censored case.
According to the results in Table 4 and Table 7, the estimators become more accurate as the number of trajectories increases. Furthermore, the estimator of the initial law (Table 3 and Table 6) seems to be very close to the target value for both the censored and the uncensored case.
According to the above results, observe that when the time is close to the lower limit of the domain (i.e., close to 0), it is most likely that the process remains in the same state (see Table 9 and Table 10 and Figure 1 and Figure 2). However, as time goes on, this changes. More specifically, for $t = 0.7$, if the semi-Markov process is in state 1 at time 0, the most probable transition is to state 3. If the semi-Markov process starts at time 0 from state 2, the most probable transition is to state 1. If starting from state 3, the most probable transition is to state 2. For $t = 0.99$, the semi-Markov process will most likely transition to state 2, given that at time 0 it started in state 1 or 3. Finally, if it was in state 2, the process will most likely move to state 1. For the overall behavior of the estimators of the transition probabilities, see Table 5 and Table 8, where the associated squared errors are provided.

6.3. Reliability Parameter Estimation

In engineering, one of the main problems of interest is whether the system remains in a working state during the whole observation period. As mentioned in the introduction, the performance of a system in mechanical engineering is evaluated via the reliability parameter. Such a system stays in a functioning condition as long as the stress (pressure) Y is at a lower level than the strength X. In general, in a multi-state system, the transition from one state to another may be defined according to whether a component of the system fails because the stress exceeds the strength. In this section, we examine the performance of the estimator of the reliability parameter when X and Y follow the Kumaraswamy distribution with shape parameters $a_1$ and $a_2$, respectively. Observe that in Figure 3 and Figure 4, the estimated values (in blue) almost coincide with the true values (in green). We provide here the simulated results for the reliability parameter, using, for illustrative purposes, selected states i and j from the previous section. The notation $R_{ij}$ refers to the reliability parameter for i and j.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to express their appreciation to the anonymous reviewers and the academic editor for their valuable and constructive comments that greatly improved the quality of the manuscript. This work was completed while the third author was a postdoctoral researcher at the Laboratory of Statistics and Data Analysis of the University of the Aegean.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
K      Kumaraswamy distribution
U      Uniform distribution
Exp    Exponential distribution
SMP    Semi-Markov process
MRP    Markov renewal process
MTTF   Mean time to failure
S.E.   Squared errors

References

  1. Kumaraswamy, P. A generalized probability density function for double-bounded random processes. J. Hydrol. 1980, 46, 79–88. [Google Scholar] [CrossRef]
  2. Jones, M.C. Kumaraswamy’s distribution: A beta-type distribution with some tractability advantages. Stat. Methodol. 2008, 6, 70–81. [Google Scholar] [CrossRef]
  3. Kohansal, A. On estimation of reliability in a multicomponent stress-strength model for a Kumaraswamy distribution based on progressively censored sample. Stat. Pap. 2019, 60, 2185–2224. [Google Scholar] [CrossRef]
  4. Mitnik, P.A. New Properties of the Kumaraswamy Distribution. Comm. Stat. Theory Methods 2013, 42, 741–755. [Google Scholar] [CrossRef]
  5. Fletcher, S.G.; Ponnambalam, K. Estimation of reservoir yield and storage distribution using moments analysis. J. Hydrol. 1996, 182, 259–275. [Google Scholar] [CrossRef]
  6. Ganji, A.; Ponnambalam, K.; Khalili, D.; Karamouz, M. Grain yield reliability analysis with crop water demand uncertainty. Stoch. Environ. Res. Risk Assess. 2006, 2, 259–277. [Google Scholar] [CrossRef]
  7. Nadar, M.; Papadopoulos, A.; Kizilaslan, F. Statistical analysis for Kumaraswamy’s distribution based on record data. Stat. Pap. 2013, 54, 355–369. [Google Scholar] [CrossRef]
  8. Courard-Hauri, D. Using Monte Carlo analysis to investigate the relationship between overconsumption and uncertain access to one’s personal utility function. Ecol. Econ. 2007, 64, 152–162. [Google Scholar] [CrossRef]
  9. Seifi, A.; Ponnambalam, K.; Vlach, J. Maximization of manufacturing yield of systems with arbitrary distributions of component values. Ann. Oper. Res. 2000, 99, 373–383. [Google Scholar] [CrossRef]
  10. Cordeiro, G.M.; Ortega, E.M.M.; Nadarajah, S. The Kumaraswamy Weibull distribution with application to failure data. J. Franklin Inst. 2010, 347, 1399–1429. [Google Scholar] [CrossRef]
  11. Ghosh, I. On the reliability for some bivariate dependent Beta and Kumaraswamy distributions: A brief survey. Stoch. Qual. Control 2019, 34, 115–121. [Google Scholar] [CrossRef]
  12. Gilchrist, W.G. Statistical Modelling with Quantile Functions; Chapman & Hall/CRC: Boca Raton, FL, USA, 2001. [Google Scholar]
  13. Mitnik, P.A.; Baek, S. The Kumaraswamy distribution: Median-dispersion re-parameterizations for regression modeling and simulation-based estimation. Stat. Pap. 2013, 54, 177–192. [Google Scholar] [CrossRef]
  14. Cordeiro, G.M.; Castro, M. A new family of generalized distributions. J. Stat. Comput. Simul. 2011, 81, 883–898. [Google Scholar] [CrossRef]
  15. Eugene, N.; Lee, C.; Famoye, F. Beta-normal distribution and its applications. Comm. Stat. Theory Methods 2002, 31, 497–512. [Google Scholar] [CrossRef]
  16. Jones, M.C. Families of distributions arising from distributions of order statistics (with discussion). Test 2004, 13, 1–43. [Google Scholar] [CrossRef]
  17. Awad, A.M.; Azzam, M.M.; Hamdan, M.A. Some inference results on Pr(X < Y) in the bivariate exponential model. Comm. Stat. Theory Methods 1981, 10, 2515–2524. [Google Scholar]
  18. Church, J.D.; Harris, B. The estimation of reliability from stress strength relationships. Technometrics 1970, 12, 49–54. [Google Scholar] [CrossRef]
  19. Constantine, K.; Karson, M. Estimation of P(Y < X) in gamma case. Comm. Stat. Simul. Comput. 1986, 15, 365–388. [Google Scholar]
  20. Kotz, S.; Lumelskii, Y.; Pensky, M. The Stress-Strength Model and Its Generalizations. Theory and Applications; World Scientific: Singapore, 2003. [Google Scholar]
  21. Weerahandi, S.; Johnson, R.A. Testing reliability in a stress-strength model when X and Y are normally distributed. Technometrics 1992, 34, 83–91. [Google Scholar] [CrossRef]
  22. Balasubramanian, K.; Beg, M.I.; Bapat, R.B. On families of distributions closed under extrema. Sankhya A 1991, 53, 375–388. [Google Scholar]
  23. Barbu, V.S.; Karagrigoriou, A.; Makrides, A. Semi-Markov Modelling for Multi-State Systems. Methodol. Comput. Appl. Probab. 2017, 19, 1011–1028. [Google Scholar] [CrossRef]
  24. Slud, E.V.; Vonta, F. Efficient semiparametric estimators via modified profile likelihood. J. Stat. Plan. Inference 2005, 129, 339–367. [Google Scholar] [CrossRef]
  25. Asanjarani, A.; Liquet, B.; Nazarathy, Y. Estimation of semi-Markov multi-state models: A comparison of the sojourn times and transition intensities approaches. Int. J. Biostat. 2020. [Google Scholar] [CrossRef]
  26. Barbu, V.; Boussemart, M.; Limnios, N. Discrete-time semi-Markov model for reliability and survival analysis. Comm. Stat. Theory Methods 2004, 33, 2833–2868. [Google Scholar] [CrossRef]
  27. D’Amico, G.; Manca, R.; Petroni, F.; Selvamuthu, D. On the Computation of Some Interval Reliability Indicators for Semi-Markov Systems. Mathematics 2021, 9, 575. [Google Scholar] [CrossRef]
  28. Janssen, J.; Manca, R. Semi-Markov Risk Models for Finance, Insurance and Reliability; Springer: New York, NY, USA, 2007. [Google Scholar]
  29. Limnios, N.; Oprişan, G. Semi-Markov Processes and Reliability; Birkhäuser: Boston, MA, USA, 2001. [Google Scholar]
  30. Lisnianski, A.; Frenkel, I.; Karagrigoriou, A. Recent Advances in Multi-State Reliability, Theory and Applications; Springer Series in Reliability Engineering; Springer: Berlin, Germany, 2018. [Google Scholar]
  31. Lisnianski, A.; Levitin, G. Multi-State System Reliability: Assessment, Optimization and Applications; World Scientific: Singapore, 2003. [Google Scholar]
  32. Ouhbi, B.; Limnios, N. Non-parametric estimation for semi-Markov kernels with application to reliability analysis. Appl. Stoch. Models Data Anal. 1996, 12, 209–220. [Google Scholar] [CrossRef]
Figure 1. Real and estimated semi-Markov transition probabilities in the case of uncensored trajectories.
Figure 2. Real and estimated semi-Markov transition probabilities in the case of censored trajectories.
Figure 3. Reliability parameters for the case of uncensored trajectories.
Figure 4. Reliability parameters for the case of censoring at the beginning and/or at the end.
Table 1. Real values for the parameters $a_{ij}$ and transition probabilities $p_{ij}$.
a_ij     1      2      3
1        0      0.9    2.1
2        1.5    0      0.3
3        1.2    1.8    0

p_ij     1       2       3
1        0       0.3     0.7
2        0.833   0       0.167
3        0.4     0.6     0
Table 2. Squared errors (S.E.) for the estimators of $a_{ij}$ and $p_{ij}$ in the case of no censoring, for various values of M.
M                                  10            50            100           1000
S.E. of $\widehat{a}_{ij}(M)$      1.41          1.036         2.82 × 10^-1  8.97 × 10^-3
S.E. of $\widehat{p}_{ij}(M)$      1.76 × 10^-1  5.06 × 10^-2  7 × 10^-3     1.49 × 10^-6
Table 3. The estimated values of the initial law $\alpha_i$ in the case of L = 1000 trajectories without censoring.
i                              1        2        3
$\widehat{\alpha}_i(L, M)$     0.319    0.355    0.326
Table 4. Squared errors (S.E.) for the estimators of $a_{ij}$ and $p_{ij}$ in the case of no censoring, for various values of L.
L                                   5             10            100           1000
S.E. of $\widehat{a}_{ij}(L, M)$    5.71 × 10^-3  1.05 × 10^-3  7 × 10^-4     4.35 × 10^-5
S.E. of $\widehat{p}_{ij}(L, M)$    2.6 × 10^-4   1.06 × 10^-4  1.47 × 10^-5  7.58 × 10^-7
Table 5. Squared errors (S.E.) for the estimators of the semi-Markov transition probabilities in the case of no censoring.
t                                      0.1           0.2           0.5           0.9
S.E. of $\widehat{P}_{ij}(t; L, M)$    2.72 × 10^-6  3.75 × 10^-5  4.47 × 10^-4  1.34 × 10^-4
Table 6. The estimated values of the initial law $\alpha_i$ in the case of L = 1000 trajectories with censoring at the beginning and/or at the end.
i                              1        2        3
$\widehat{\alpha}_i(L, M)$     0.328    0.328    0.344
Table 7. Squared errors (S.E.) for the estimators of $a_{ij}$ and $p_{ij}$ in the case of censoring at the beginning and/or at the end, for various values of L.
L                                   5             10            100           1000
S.E. of $\widehat{a}_{ij}(L, M)$    1.13 × 10^-2  6.18 × 10^-3  2.79 × 10^-4  3.27 × 10^-5
S.E. of $\widehat{p}_{ij}(L, M)$    2 × 10^-4     9.24 × 10^-5  1.13 × 10^-5  6.91 × 10^-7
Table 8. Squared errors for the estimators of the semi-Markov transition probabilities in the case of censoring at the beginning and/or at the end.
t                                      0.1           0.2           0.5           0.9
S.E. of $\widehat{P}_{ij}(t; L, M)$    1.69 × 10^-6  2.39 × 10^-5  3.91 × 10^-4  5.37 × 10^-4
Table 9. Estimators for the transition probabilities of the semi-Markov process for $t = 0.1$ and $t = 0.5$.
$\widehat{P}_{ij}(t = 0.1)$    1        2        3
1                              0.970    0.009    0.021
2                              0.014    0.983    0.003
3                              0.012    0.017    0.971

$\widehat{P}_{ij}(t = 0.5)$    1        2        3
1                              0.456    0.200    0.358
2                              0.288    0.625    0.096
3                              0.226    0.326    0.460
Table 10. Estimators for the transition probabilities of the semi-Markov process for $t = 0.7$ and $t = 0.99$.
$\widehat{P}_{ij}(t = 0.7)$    1        2        3
1                              0.254    0.334    0.470
2                              0.459    0.383    0.202
3                              0.345    0.470    0.239

$\widehat{P}_{ij}(t = 0.99)$   1        2        3
1                              0.276    0.387    0.337
2                              0.466    0.207    0.327
3                              0.365    0.408    0.227
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
