Article

A Continuous-Time Semi-Markov System Governed by Stepwise Transitions

by Vlad Stefan Barbu 1, Guglielmo D’Amico 2 and Andreas Makrides 3,*
1 Laboratory of Mathematics Raphaël Salem, University of Rouen-Normandy, UMR 6085, Avenue de l’Université, BP.12, F-76801 Saint-Étienne-du-Rouvray, France
2 Department of Economics, University “G. d’Annunzio” of Chieti-Pescara, Viale Pindaro 42, 65127 Pescara, Italy
3 Department of Statistics and Actuarial-Financial Mathematics, University of the Aegean, GR-83200 Samos, Greece
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2745; https://doi.org/10.3390/math10152745
Submission received: 30 June 2022 / Revised: 26 July 2022 / Accepted: 27 July 2022 / Published: 3 August 2022
(This article belongs to the Special Issue Probability, Statistics and Their Applications 2021)

Abstract

In this paper, we introduce a class of stochastic processes in continuous time, called step semi-Markov processes. The main idea is to bring an additional insight into the classical semi-Markov process: the transition between two states is accomplished through two or more steps. This is an extension of a previous work on discrete-time step semi-Markov processes. After defining the models and the main characteristics of interest, we derive the recursive evolution equations for two-step semi-Markov processes.

1. Introduction

This article is concerned with continuous-time semi-Markov processes (SMPs). They represent an important class of stochastic processes widely studied and applied in several fields, such as reliability, survival analysis, financial mathematics, DNA modelling, manpower planning, etc.; see, e.g., [1,2,3,4,5]. The interest in using this class of stochastic processes in applications comes from the fact that the sojourn time in a state can be arbitrarily distributed, as compared to Markov processes, where the sojourn time is geometrically distributed (in discrete time) or exponentially distributed (in continuous time).
To be more specific, in this work we introduce a class of stochastic processes, called continuous-time step semi-Markov processes, which generalizes classical continuous-time semi-Markov processes. The main feature of this new type of SMP is that the sojourn time in a state is the sum of two or more durations that correspond to different physical causes. A typical example can be encountered in biomedical investigations of the time evolution of a disease, where the sojourn time is the sum of the incubation time of the disease and the waiting time before a change of state occurs. Another example arises in manpower planning: the sojourn time in this context can be seen as the sum of a first duration after entering a position in a company and a second duration corresponding to a training period undertaken for a potential upgrade (change) of position (see, e.g., [6,7,8,9,10,11]). This type of phenomenon can also be modelled by introducing a new state for each distinct duration (see [12]), but this requires additional parameters to describe the system, thus increasing the modelling complexity.
The present article extends the discrete-time step semi-Markov processes introduced in [13] to the continuous-time setting. It is important to stress from the beginning that the passage from the discrete-time case to the continuous-time case, and vice versa, is not at all obvious in such semi-Markov frameworks.
The need to move from discrete-time models to continuous ones has already been suggested by several authors and is of great importance in many scientific fields, see, e.g., [14]. One of the main advantages of continuous-time models is due to their greater flexibility as compared to their discrete-time counterparts. Indeed, while the discrete-time model needs the adoption of a specific time scale over which data are observed, the continuous-time setting avoids this problem and considers the exact time of observation of the events, see, e.g., [15]. A detailed description of the benefits of a continuous-time framework is presented in [12] where illustrative examples are also given based on real data and practical applications.
It is also worth noticing that one problem researchers face when using continuous-time stochastic processes is the need for discretization. In a semi-Markov setting, continuous-time semi-Markov processes can be numerically solved by means of discrete-time semi-Markov processes, as can be found in [16,17,18,19,20].
Our article is structured as follows: in the next section we first introduce classical semi-Markov notations and we define the continuous-time step semi-Markov process. In Section 3, we study the recursive evolution equations for step semi-Markov processes. A particular case of the continuous-time step semi-Markov process is presented in Section 4. Some elements of statistical estimation are also presented in Section 5. We end the article with some concluding remarks.

2. System Settings

A continuous-time stochastic process $(Z_t)_{t \in \mathbb{R}_+}$ evolving in a discrete finite state space $E = \{1, \ldots, N\}$ is considered. Several associated processes have to be defined for modelling the time evolution of this system: $T = (T_n)_{n \in \mathbb{N}}$, the successive time points at which a change of the system state occurs, and $J = (J_n)_{n \in \mathbb{N}}$, the successive states visited at these time points. Let us also consider the sojourn times in each state, denoted by $X_n = T_n - T_{n-1}$, $n \in \mathbb{N}^*$, where $X_0 = 0$.
If the following relation is satisfied
$$P(J_{n+1} = j, T_{n+1} - T_n \le t \mid J_0, \ldots, J_n; T_0, \ldots, T_n) = P(J_{n+1} = j, T_{n+1} - T_n \le t \mid J_n), \tag{1}$$
then the stochastic process $(Z_t)_{t \in \mathbb{R}_+}$ is a semi-Markov process associated with the Markov renewal chain $(J_n, T_n)_{n \in \mathbb{N}}$, with $(J_n)_{n \in \mathbb{N}}$ being the so-called embedded Markov chain associated with $(Z_t)_{t \in \mathbb{R}_+}$, where $Z_t = J_{N(t)}$ and $J_n = Z_{T_n}$, with $N(t) := \max\{n \in \mathbb{N} \mid T_n \le t\}$ representing the number of jumps up to the time point $t$.
The time behaviour of the SMP is governed by the semi-Markov kernel
$$\widetilde{Q}_{ij}(t) = P(J_{n+1} = j, X_{n+1} \le t \mid J_n = i).$$
Equivalently, the SMP is defined by the transition probabilities of the embedded Markov chain $(J_n)$, i.e.,
$$\widetilde{p}_{ij} = P(J_{n+1} = j \mid J_n = i), \quad i, j \in E,\ n \in \mathbb{N},$$
and the conditional cumulative distribution function of the sojourn time X n + 1 , defined by
$$\widetilde{F}_{ij}(t) = P(X_{n+1} \le t \mid J_n = i, J_{n+1} = j) = \int_0^t \widetilde{f}_{ij}(s)\, ds, \quad t \in \mathbb{R}_+,$$
where $\widetilde{f}_{ij}(t) = d\widetilde{F}_{ij}(t)/dt$ is the corresponding density with respect to the Lebesgue measure, assumed to exist.
Let us denote by $\alpha$ the initial distribution of the process, $\alpha(i) = P(J_0 = i)$. We also define the sojourn time distribution function in state $i$ as
$$\widetilde{H}_i(t) = P(X_{n+1} \le t \mid J_n = i) = \sum_{j \in E} \widetilde{Q}_{ij}(t) = \int_0^t \widetilde{h}_i(s)\, ds, \quad t \in \mathbb{R}_+,$$
where $\widetilde{h}_i(t) = d\widetilde{H}_i(t)/dt$ is the density of the sojourn time distribution in state $i$ with respect to the Lebesgue measure, also assumed to exist. It is clear that the semi-Markov kernel can be written as
$$\widetilde{Q}_{ij}(t) = \widetilde{p}_{ij}\, \widetilde{F}_{ij}(t), \quad i, j \in E.$$
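To fix ideas, the short sketch below builds these objects for a toy three-state semi-Markov process with exponential conditional sojourn times; the state space, the rates and the embedded transition matrix are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Toy classical SMP (illustrative assumptions): E = {0, 1, 2}, embedded chain
# p_tilde and exponential conditional sojourn times F_tilde_ij(t) = 1 - exp(-lam[i, j] t).
p_tilde = np.array([[0.0, 0.7, 0.3],
                    [0.5, 0.0, 0.5],
                    [0.4, 0.6, 0.0]])
lam = np.array([[1.0, 2.0, 0.5],
                [1.5, 1.0, 3.0],
                [0.8, 1.2, 1.0]])

def F_tilde(i, j, t):
    """Conditional sojourn-time c.d.f. F_tilde_ij(t)."""
    return 1.0 - np.exp(-lam[i, j] * t)

def Q_tilde(i, j, t):
    """Semi-Markov kernel Q_tilde_ij(t) = p_tilde_ij * F_tilde_ij(t)."""
    return p_tilde[i, j] * F_tilde(i, j, t)

def H_tilde(i, t):
    """Sojourn-time c.d.f. in state i: H_tilde_i(t) = sum_j Q_tilde_ij(t)."""
    return sum(Q_tilde(i, j, t) for j in range(3))

print(Q_tilde(0, 1, 1.5), H_tilde(0, 1.5))
```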
Following the procedure proposed in [13] for discrete-time SMPs, the sojourn time $X_{n+1}$ between two consecutive states $J_n$ and $J_{n+1}$ can be seen as the sum of two different times, say $U_{n+1}$ and $V_{n+1}$, that is, $X_{n+1} = U_{n+1} + V_{n+1}$. Under this setting, one can rewrite the semi-Markov condition (1) in the following form
$$P(J_{n+1} = j, U_{n+1} \le u, V_{n+1} \le v \mid J_0, \ldots, J_n; U_0, \ldots, U_n, V_0, \ldots, V_n) = P(J_{n+1} = j, U_{n+1} \le u, V_{n+1} \le v \mid J_n).$$
With this new setting in mind, the semi-Markov kernel takes the following form for $t \in \mathbb{R}_+$ (a numerical illustration is sketched after the list of definitions below):
$$\begin{aligned}
\widetilde{Q}_{ij}(t) &= P(J_{n+1} = j,\, U_{n+1} + V_{n+1} \le t \mid J_n = i) \\
&= \int_0^t P(J_{n+1} = j,\, V_{n+1} \le t - u,\, U_{n+1} \in du \mid J_n = i) \\
&= \int_0^t P(V_{n+1} \le t - u \mid U_{n+1} = u, J_n = i, J_{n+1} = j) \times P(J_{n+1} = j \mid J_n = i, U_{n+1} = u)\, P(U_{n+1} \in du \mid J_n = i) \\
&= \int_0^t F_{iu;j}(t - u)\, p_{iu;j}\, g_i(u)\, du,
\end{aligned} \tag{6}$$
where
  • $F_{iu;j}(v) = P(V_{n+1} \le v \mid U_{n+1} = u, J_n = i, J_{n+1} = j)$;
  • $p_{iu;j} = P(J_{n+1} = j \mid J_n = i, U_{n+1} = u)$, with $\sum_{j \in E} p_{iu;j} = 1$; in other words, $(p_{iu;j})_{i,j \in E}$ is a stochastic matrix for every $u \in \mathbb{R}_+$;
  • $g_i(u)\, du = P(u \le U_{n+1} \le u + du \mid J_n = i)$, i.e., $g_i$ is the density of the random variable $U_{n+1}$, given that the currently occupied state is $J_n = i$.
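As a concrete illustration of (6), the following sketch evaluates the two-step kernel by numerical quadrature for an assumed model in which, from state $i$, $U_{n+1}$ is exponential with rate $a_i$, the next state is chosen independently of $u$ with probability $p_{ij}$, and $V_{n+1}$ is exponential with rate $b_i$; in this special case $U_{n+1} + V_{n+1}$ has a known (hypoexponential) distribution, which serves as a check. All numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

# Assumed two-step model: from state i, U ~ Exp(a[i]), the next state j is
# drawn with probability p[i, j] (independent of u), and V ~ Exp(b[i]).
a = np.array([1.0, 2.0])
b = np.array([3.0, 1.5])
p = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def g(i, u):
    """Density of U_{n+1} given J_n = i."""
    return a[i] * np.exp(-a[i] * u)

def F(i, u, j, v):
    """C.d.f. of V_{n+1} given U_{n+1} = u, J_n = i, J_{n+1} = j (here u- and j-independent)."""
    return 1.0 - np.exp(-b[i] * v) if v > 0 else 0.0

def Q_tilde(i, j, t):
    """Two-step kernel (6): integral over [0, t] of F_{iu;j}(t - u) p_{iu;j} g_i(u) du."""
    val, _ = quad(lambda u: F(i, u, j, t - u) * p[i, j] * g(i, u), 0.0, t)
    return val

def Q_check(i, j, t):
    """Closed-form check: U + V is hypoexponential(a_i, b_i) when a_i != b_i."""
    ai, bi = a[i], b[i]
    return p[i, j] * (1.0 - (bi * np.exp(-ai * t) - ai * np.exp(-bi * t)) / (bi - ai))

print(Q_tilde(0, 1, 2.0), Q_check(0, 1, 2.0))   # the two values should agree closely
```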
We can summarize all this discussion by introducing the new concept of two-step SMP, for which the holding time is the sum of two different holding times. As mentioned in the Introduction, this setting can be of interest in several applications, such as biomedical sciences or manpower planning.
Definition 1.
If a semi-Markov kernel is of the form given in (6), then $(J, T)$ is called a two-step Markov renewal chain and the associated semi-Markov process $Z = (Z_t)_{t \in \mathbb{R}_+}$ is called a continuous-time two-step semi-Markov process.
An example of this type of process can be encountered for patients in a hospital whose status may change depending on their medical results and who then require the corresponding medical attention. Let $U$ be the time until the change in the patient’s clinical condition occurs and $V$ the time that then elapses until the operator acts to change the patient’s status. This problem can be described by a continuous-time two-step semi-Markov model. The main interest here is focused on computing the transition probabilities between states.
It is important to define the following quantities with respect to the first passage time in state $i$, given that $U_{n+1} = u$.
Definition 2.
Under the previous setting, let us also introduce the following notations:
$$Q_{iu;j}(v) := P(J_{n+1} = j, V_{n+1} \le v \mid J_n = i, U_{n+1} = u) = P(V_{n+1} \le v \mid J_n = i, J_{n+1} = j, U_{n+1} = u) \times P(J_{n+1} = j \mid J_n = i, U_{n+1} = u) = p_{iu;j}\, F_{iu;j}(v),$$
$$F_{iu;j}(v) := P(V_{n+1} \le v \mid J_n = i, J_{n+1} = j, U_{n+1} = u),$$
$$H_{iu}(v) := P(V_{n+1} \le v \mid J_n = i, U_{n+1} = u) = \sum_{j \in E} Q_{iu;j}(v),$$
with the corresponding density $h_{iu}(v) = dH_{iu}(v)/dv$ assumed to exist, and
$$G_i(u) := P(U_{n+1} \le u \mid J_n = i),$$
with the corresponding density $g_i(u) = dG_i(u)/du$ assumed to exist.
Proposition 1.
The following relations hold true for any states $i, j \in E$ and $t \in \mathbb{R}_+$:
1. 
$$\widetilde{p}_{ij} = \int_0^{\infty} p_{iu;j}\, g_i(u)\, du,$$
2. 
$$\widetilde{Q}_{ij}(t) = \int_0^t Q_{iu;j}(t - u)\, G_i(du).$$
Proof. 
We immediately obtain the results as follows.
  • $\widetilde{p}_{ij} = P(J_{n+1} = j \mid J_n = i) = \int_0^{\infty} P(J_{n+1} = j, U_{n+1} \in du \mid J_n = i) = \int_0^{\infty} P(J_{n+1} = j \mid J_n = i, U_{n+1} = u)\, P(U_{n+1} \in du \mid J_n = i) = \int_0^{\infty} p_{iu;j}\, g_i(u)\, du.$
  • According to expression (6) we have
$$\widetilde{Q}_{ij}(t) = \int_0^t F_{iu;j}(t - u)\, p_{iu;j}\, g_i(u)\, du = \int_0^t Q_{iu;j}(t - u)\, g_i(u)\, du.$$
 □
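The first relation of Proposition 1 is easy to check numerically. The sketch below uses an assumed model in which the conditional transition probabilities genuinely depend on $u$: from state $0$, $p_{0u;1} = 1/(1+u)$ and $p_{0u;2} = u/(1+u)$, while $U_{n+1}$ is exponential with rate $1$; these choices are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

def g0(u):
    """Assumed density of U_{n+1} given J_n = 0 (Exp(1))."""
    return np.exp(-u)

def p0(u, j):
    """Assumed u-dependent row (p_{0u;j})_{j=1,2}, summing to 1 for every u."""
    return 1.0 / (1.0 + u) if j == 1 else u / (1.0 + u)

# Proposition 1 (item 1): p_tilde_{0j} = integral over [0, inf) of p_{0u;j} g_0(u) du
p_tilde_01, _ = quad(lambda u: p0(u, 1) * g0(u), 0.0, np.inf)
p_tilde_02, _ = quad(lambda u: p0(u, 2) * g0(u), 0.0, np.inf)

print(p_tilde_01, p_tilde_02, p_tilde_01 + p_tilde_02)   # the sum should equal 1
```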
Thus, we have defined the main quantities that characterize such a process, and we can now focus on the more complex matter of the time behaviour of the process.

3. Recurrence Evolution Behaviour

In this section, the recursive evolution behaviour of a two-step semi-Markov process $Z_t$ in the continuous-time framework is investigated.
There are different types of evolution equations that may be considered. First, we consider the case with the initial backward of the process, where the transition probability function is the matrix-valued function ${}^{b}\Phi = \big({}^{b}\phi_{iu_1;j}(l; t);\ i, j \in E,\ u_1, l, t \in \mathbb{R}_+\big) \in \mathcal{M}_E(\mathbb{R}_+ \times \mathbb{R}_+)$ defined by
$${}^{b}\phi_{iu_1;j}(l, t) := P(Z_t = j \mid Z_0 = i, B_0 = l, U_1 = u_1),$$
where the left superscript $b$ stands for the initial backward, $B_t := t - T_{N(t)}$ represents the backward time process associated with the SMP, and $\mathcal{M}_E(\mathbb{R}_+ \times \mathbb{R}_+)$ denotes the set of matrix-valued functions defined on $\mathbb{R}_+ \times \mathbb{R}_+$ with values in $\mathcal{M}_E$, the set of real matrices on $E \times E$.
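For completeness, the backward time $B_t = t - T_{N(t)}$ is straightforward to compute from the jump times of a trajectory; the small helper below (with hypothetical names) does exactly that.

```python
import numpy as np

def backward_time(jump_times, t):
    """B_t = t - T_{N(t)}, with N(t) = max{n : T_n <= t}.
    `jump_times` is the increasing array (T_0, T_1, ...), with T_0 = 0."""
    T = np.asarray(jump_times)
    n_t = np.searchsorted(T, t, side="right") - 1   # index N(t)
    return t - T[n_t]

print(backward_time([0.0, 1.2, 3.4, 5.0], 4.0))     # time elapsed since the last jump: 0.6
```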
Theorem 1. 
(a) 
The following recursive expression holds true for all $i, j \in E$ and $u_1, l, t \in \mathbb{R}_+$ such that $u_1 < l$:
$${}^{b}\phi_{iu_1;j}(l, t) = \delta_{ij}\, \frac{\overline{H}_{iu_1}(t + l - u_1)}{\overline{H}_{iu_1}(l - u_1)} + \int_{m=0}^{t} \Bigg( \sum_{k \in E} \int_{u_2=0}^{t-m} {}^{b}\phi_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q_{iu_1;k}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}\, du_2 + \overline{G}_j(t - m)\, \frac{q_{iu_1;j}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)} \Bigg)\, dm. \tag{8}$$
(b) 
Similarly, if $u_1 \ge l$, then
$${}^{b}\phi_{iu_1;j}(l, t) = \delta_{ij}\, \overline{H}_{iu_1}(t + l - u_1) + \int_{m=u_1-l}^{t} \Bigg( \sum_{k \in E} \int_{u_2=0}^{t-m} {}^{b}\phi_{ku_2;j}(0, t - m)\, g_k(u_2)\, q_{iu_1;k}(m + l - u_1)\, du_2 + \overline{G}_j(t - m)\, q_{iu_1;j}(m + l - u_1) \Bigg)\, dm, \tag{9}$$
where $\overline{G}(\cdot) = 1 - G(\cdot)$ denotes the survival function; similarly, $\overline{H}_{iu}(\cdot) = 1 - H_{iu}(\cdot)$, and $q_{iu;j}(v) := dQ_{iu;j}(v)/dv$ denotes the density of the kernel $Q_{iu;j}$.
The proof of this result is given in Appendix A.
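Equations (8) and (9) are integral equations and, in general, call for a numerical treatment. On small examples, the transition function with initial backward can also be approximated by plain Monte Carlo, which provides a useful sanity check. The sketch below does this for the case $u_1 < l$ under an assumed toy model (two states, exponential $U$ and $V$, a $u$-independent embedded matrix); the memoryless property of the exponential $V$ makes the conditioning on $\{V_1 > l - u_1\}$ trivial to sample. None of the numerical values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy two-step SMP: from state i, U ~ Exp(a[i]), the next state is
# drawn from row p[i] (independently of u), and V ~ Exp(b[i]).
a = np.array([1.0, 2.0])
b = np.array([3.0, 1.5])
p = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def state_at_time(i, first_residual, t):
    """State occupied at time t, starting in state i with `first_residual`
    time remaining before the first jump out of i."""
    state, next_jump = i, first_residual
    while next_jump <= t:
        state = rng.choice(2, p=p[state])        # J_{n+1}
        u = rng.exponential(1.0 / a[state])      # first-step time U in the new state
        v = rng.exponential(1.0 / b[state])      # second-step time V in the new state
        next_jump += u + v                       # next jump epoch
    return state

def b_phi_mc(i, j, u1, l, t, n_paths=100_000):
    """Monte Carlo estimate of b_phi_{i,u1;j}(l, t) for u1 < l. Given B_0 = l and
    U_1 = u1 < l, necessarily V_1 > l - u1; by memorylessness of the exponential,
    the residual time until the first jump is again Exp(b[i])."""
    hits = 0
    for _ in range(n_paths):
        residual = rng.exponential(1.0 / b[i])
        if state_at_time(i, residual, t) == j:
            hits += 1
    return hits / n_paths

print(b_phi_mc(i=0, j=1, u1=0.3, l=0.8, t=2.0))
```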

4. Step SMP with Minimum Sojourn Time

In this section, we investigate a special case of a two-step semi-Markov process, where the next state to be visited is the one associated with the minimum of the potential transition times.

4.1. System Setting

Let us consider the model proposed in Section 2. In addition to the quantities defined above, let us consider the random variables $U_{ij}$ as the potential times spent in state $i$ before moving directly to state $j$. This type of framework was considered in [21,22], with a special interest in reliability modelling.
The semi-Markov kernel becomes
$$\begin{aligned}
\widetilde{Q}_{ij}(t) &= P\big(\min_{l}\{U_{il}\} + V_{n+1} \le t \ \text{and the minimum occurs for } j \mid J_n = i\big) \\
&= P\big(V_{n+1} \le t - U_{ij},\ U_{ij} \le U_{il}\ \forall l \mid J_n = i\big) \\
&= \int_0^t P\big(V_{n+1} \le t - u,\ U_{ij} \le U_{ir}\ \forall r,\ \min_{r}\{U_{ir}\} \in du \mid J_n = i\big) \\
&= \int_0^t P\big(V_{n+1} \le t - u \mid \min_{r}\{U_{ir}\} = u, J_n = i, J_{n+1} = j\big) \times P\big(J_{n+1} = j \mid J_n = i, \min_{r}\{U_{ir}\} = u\big)\, P\big(\min_{r}\{U_{ir}\} \in du \mid J_n = i\big) \\
&= \int_0^t F_{iu;j}(t - u)\, p_{iu;j}\, g_i^{(1)}(u)\, du,
\end{aligned}$$
where
$$F_{iu;j}(v) = P\big(V_{n+1} \le v \mid \min_{r}\{U_{ir}\} = u, J_n = i, J_{n+1} = j\big) = P\big(V_{n+1} \le v \mid \min_{r}\{U_{ir}\} = u, J_n = i\big) = F_{iu}(v), \quad \text{independent of } j,$$
$$g_i^{(1)}(u)\, du = P\big(\min_{r}\{U_{ir}\} \in du \mid J_n = i\big).$$
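In the exponential special case, the quantities above take a familiar form: if the potential times $U_{il}$ are independent exponentials with rates $\lambda_{il}$, then $\min_r\{U_{ir}\}$ is exponential with rate $\sum_r \lambda_{ir}$ and the next state is $j$ with probability $\lambda_{ij} / \sum_r \lambda_{ir}$, independently of $u$. The sketch below simulates this competing-risks mechanism and compares the empirical selection frequencies with these probabilities; all rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative potential transition rates out of state i = 0 towards states 1, 2, 3:
# U_{0l} ~ Exp(lam[l]), independent across l.
lam = np.array([0.5, 1.0, 2.0])

n = 200_000
U = rng.exponential(1.0 / lam, size=(n, 3))    # one row of potential times per replication
winner = U.argmin(axis=1)                      # index of the minimal time -> next state
u_min = U.min(axis=1)                          # realized first-step time min_r U_{0r}

print("empirical selection frequencies:", np.bincount(winner, minlength=3) / n)
print("theoretical lambda_j / sum_r   :", lam / lam.sum())
print("mean of min_r U_{0r}:", u_min.mean(), "vs 1 / sum_r lambda_r =", 1.0 / lam.sum())
```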

4.2. Recurrence Evolution of the Step SMP with Minimum Sojourn Time

The transition function with the initial backward process under the model proposed in this section is described by the following result.
Theorem 2. 
(a) 
Under the model setting of this section, the following formula holds true for all $i, j \in E$ and $u_1, l, t \in \mathbb{R}_+$, in the case $u_1 < l$:
$${}^{b}\phi^{(1)}_{iu_1;j}(l, t) = \delta_{ij}\, \frac{\overline{H}^{(1)}_{iu_1}(t + l - u_1)}{\overline{H}^{(1)}_{iu_1}(l - u_1)} + \int_{m=0}^{t} \Bigg( \sum_{k \in E} \int_{u_2=0}^{t-m} {}^{b}\phi^{(1)}_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q^{(1)}_{iu_1;k}(m + l - u_1)}{\overline{H}^{(1)}_{iu_1}(l - u_1)}\, du_2 + \overline{G}_j(t - m)\, \frac{q^{(1)}_{iu_1;j}(m + l - u_1)}{\overline{H}^{(1)}_{iu_1}(l - u_1)} \Bigg)\, dm.$$
(b) 
In the case that $u_1 \ge l$, the transition function is
$${}^{b}\phi^{(1)}_{iu_1;j}(l, t) = \delta_{ij}\, \overline{H}^{(1)}_{iu_1}(t + l - u_1) + \int_{m=u_1-l}^{t} \Bigg( \sum_{k \in E} \int_{u_2=0}^{t-m} {}^{b}\phi^{(1)}_{ku_2;j}(0, t - m)\, g_k(u_2)\, q^{(1)}_{iu_1;k}(m + l - u_1)\, du_2 + \overline{G}_j(t - m)\, q^{(1)}_{iu_1;j}(m + l - u_1) \Bigg)\, dm,$$
where the superscript $(1)$ stands for the minimum order statistic.
The proof of this result is given in Appendix A.

5. Associated Estimation Procedures

Several researchers have investigated statistical inference topics for continuous-time semi-Markov processes; the interested reader can see, e.g., [21,23,24,25,26,27,28] and the references therein.
Let us consider a sample path of a two-step semi-Markov model censored at a fixed arbitrary time $M \in \mathbb{N}^*$,
$$\big(J_0, U_1, V_1, J_1, U_2, V_2, \ldots, J_{N(M)-1}, U_{N(M)}, V_{N(M)}, J_{N(M)}, B_M\big),$$
where $N(M) := \max\{n \mid T_n \le M\}$ is the counting process of the number of jumps in $[1, M]$ and $B_M := M - T_{N(M)}$ is the censored sojourn time in the last visited state $J_{N(M)}$.
Starting from this sample, one can obtain estimators of the quantities of interest defining our model. Two main procedures may be considered:
  • Empirical estimation, as considered, for instance, in [26,27].
  • Nonparametric kernel estimation, as recently proposed in [23].
On the one hand, it should be noted that, following classical arguments, it is easy to check that the empirical estimators are strongly consistent as the sample size $M \to \infty$. On the other hand, similar arguments as those used in [23] show that the nonparametric kernel estimators are also strongly consistent as the sample size $M \to \infty$.
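As an illustration of the empirical approach, the sketch below computes two natural estimators from one observed path stored as plain arrays: the empirical embedded transition matrix $\hat{p}_{ij} = N_{ij}(M) / N_i(M)$ and the empirical distribution function of the first-step times $U$ observed in state $i$. The data layout and function names are hypothetical; kernel-smoothed versions in the spirit of [23] would replace the indicator averages by kernel averages.

```python
import numpy as np

def empirical_estimators(states, U_obs, n_states):
    """Empirical estimators from one censored path J_0, U_1, V_1, J_1, ...:
    `states` has length n + 1 (visited states), `U_obs` has length n
    (observed first-step times); the censored term B_M is not used here."""
    states = np.asarray(states)
    U_obs = np.asarray(U_obs)

    # transition counts N_ij(M) and N_i(M)
    N_ij = np.zeros((n_states, n_states))
    for k in range(len(U_obs)):
        N_ij[states[k], states[k + 1]] += 1
    N_i = N_ij.sum(axis=1)
    p_hat = np.divide(N_ij, N_i[:, None],
                      out=np.zeros_like(N_ij), where=N_i[:, None] > 0)

    def G_hat(i, t):
        """Empirical c.d.f. of U in state i, evaluated at t."""
        u_i = U_obs[states[:-1] == i]
        return np.nan if u_i.size == 0 else np.mean(u_i <= t)

    return p_hat, G_hat

# toy usage on a hypothetical short path
p_hat, G_hat = empirical_estimators([0, 1, 0, 1, 0, 1], [0.4, 1.1, 0.2, 0.9, 0.7], 2)
print(p_hat)
print(G_hat(0, 0.5))
```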
One can also consider several (say $K$) sample paths of a two-step semi-Markov model censored at a fixed arbitrary time $M \in \mathbb{N}^*$,
$$\big(J_0^l, U_1^l, V_1^l, J_1^l, U_2^l, V_2^l, \ldots, J_{N(M)-1}^l, U_{N(M)}^l, V_{N(M)}^l, J_{N(M)}^l, B_M^l\big), \quad l = 1, \ldots, K.$$
Similarly to the above, one can propose corresponding empirical estimators, taking into account the counting over the $K$ sample paths; as before, these estimators are strongly consistent as $K \to \infty$.
Let us give here a Monte Carlo algorithm in order to simulate a trajectory of a given two-step SMP in the time interval $[0, M]$. The output of the algorithm consists of the successive visited states $(J_0, \ldots, J_{N(M)})$ and the successive holding times $(U_1, V_1, \ldots, U_{N(M)}, V_{N(M)})$ up to the time $M$, i.e., a sample path of the process up to any arbitrary time $M$. This algorithm is an adaptation to our framework of the one in [1]; a Python sketch implementing these steps is given after the list below.
Algorithm
  1. Set $k = 0$, $T_0 = 0$ and sample $J_0$ from the initial distribution $\alpha$;
  2. Sample the random variable $U \sim g_{J_k}(\cdot)$ and set $U_{k+1} = U(\omega)$;
  3. Sample the random variable $J \sim p_{J_k U;\,\cdot}$ and set $J_{k+1} = J(\omega)$;
  4. Sample the random variable $V \sim F_{J_k U; J_{k+1}}(\cdot)$ and set $V_{k+1} = V(\omega)$;
  5. Set $X_{k+1} = U + V$ and $T_{k+1} = T_k + X_{k+1}$;
  6. If $T_{k+1} \ge M$, then end;
  7. Else, set $k = k + 1$ and continue to step 2.
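A minimal Python implementation of these steps is given below for an assumed parametric specification (exponential $U$ and $V$, a $u$-independent embedded matrix $p$); replacing the three samplers by any other choices of $g_{J_k}$, $p_{J_k U;\,\cdot}$ and $F_{J_k U; J_{k+1}}$ adapts the routine to a concrete model. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed specification: from state i, U ~ Exp(a[i]); the next state is drawn from
# row p[i] (here independent of U); V ~ Exp(b[i]) (here independent of U and J_{k+1}).
alpha = np.array([0.5, 0.5])          # initial distribution
a = np.array([1.0, 2.0])
b = np.array([3.0, 1.5])
p = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def simulate_path(M):
    """Simulate one trajectory on [0, M]; returns the visited states
    (J_0, ..., J_{N(M)}) and the holding components (U_1, V_1, ..., U_{N(M)}, V_{N(M)})."""
    T = 0.0
    states = [rng.choice(2, p=alpha)]              # step 1: sample J_0 ~ alpha
    holding = []
    while True:
        i = states[-1]
        U = rng.exponential(1.0 / a[i])            # step 2: U ~ g_{J_k}
        J_next = rng.choice(2, p=p[i])             # step 3: J_{k+1} ~ p_{J_k U; .}
        V = rng.exponential(1.0 / b[i])            # step 4: V ~ F_{J_k U; J_{k+1}}
        X = U + V                                  # step 5: sojourn time
        if T + X >= M:                             # step 6: stop once the horizon is reached
            return states, holding
        T += X                                     # step 7: record the jump and iterate
        states.append(J_next)
        holding.extend([U, V])

states, holding = simulate_path(M=10.0)
print(states)
print(holding)
```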

6. Conclusions

In this paper, we studied step semi-Markov models in continuous time. The interest in these models comes from the fact that they are very flexible and general, and that they add a new source of randomness as compared to classical semi-Markov processes, namely, the sojourn times are considered as the sum of two (or possibly several) random times. This type of stochastic model could be very promising in the study of reliability problems, survival analysis and queuing theory.
An important estimation topic could also be developed. It consists of the estimation of the main functions defining the model when the variables $U_n$ and $V_n$ are latent and the only available observations are the sojourn time lengths $X_n$. Another important development is represented by extending the model to open systems where several individuals are considered. This would lead to a generalization of the so-called Markov/semi-Markov systems, which have generated a large body of literature (see, e.g., [29,30] and the references therein).

Author Contributions

Methodology, V.S.B., G.D. and A.M.; Writing—review & editing, V.S.B., G.D. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1. 
(a)
For the case where u 1 < l , the transition function can be written as
$$\begin{aligned}
{}^{b}\phi_{iu_1;j}(l, t) &= P(Z_t = j \mid Z_0 = i, B_0 = l, U_1 = u_1) \\
&= P(Z_t = j, T_1 > t \mid Z_0 = i, B_0 = l, U_1 = u_1) & (\mathrm{A1}) \\
&\quad + P(Z_t = j, T_1 \le t \mid Z_0 = i, B_0 = l, U_1 = u_1). & (\mathrm{A2})
\end{aligned}$$
Let us study separately each term of the preceding formula. The first term (A1) can be expressed in the following way
$$\begin{aligned}
P(Z_t = j, T_1 > t \mid Z_0 = i, B_0 = l, U_1 = u_1) &= P(Z_t = j \mid T_1 > t, Z_0 = i, B_0 = l, U_1 = u_1)\, P(T_1 > t \mid Z_0 = i, B_0 = l, U_1 = u_1) \\
&= \delta_{ij}\, P(T_1 > t \mid Z_0 = i, B_0 = l, U_1 = u_1) \\
&= \delta_{ij}\, P(T_1 - T_0 > t - T_0 \mid J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1) \\
&= \delta_{ij}\, P(U_1 + V_1 > t + l \mid J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1) \\
&= \delta_{ij}\, P(V_1 > t + l - u_1 \mid J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1) \\
&= \delta_{ij}\, \frac{P(V_1 > t + l - u_1, T_1 > 0 \mid J_0 = i, T_0 = -l, U_1 = u_1)}{P(T_1 > 0 \mid J_0 = i, T_0 = -l, U_1 = u_1)}. & (\mathrm{A3})
\end{aligned}$$
Let us consider now the numerator of (A3)
$$\begin{aligned}
P(V_1 > t + l - u_1, T_1 > 0 \mid J_0 = i, T_0 = -l, U_1 = u_1) &= P(V_1 > t + l - u_1, V_1 > l - u_1 \mid J_0 = i, T_0 = -l, U_1 = u_1) \\
&= P(V_1 > t + l - u_1 \mid J_0 = i, U_1 = u_1) \\
&= \overline{H}_{iu_1}(t + l - u_1). & (\mathrm{A4})
\end{aligned}$$
As for the denominator of (A3) we have
$$\begin{aligned}
P(T_1 > 0 \mid J_0 = i, T_0 = -l, U_1 = u_1) &= P(T_1 - T_0 > 0 - T_0 \mid J_0 = i, U_1 = u_1, T_0 = -l) \\
&= P(U_1 + V_1 > l \mid J_0 = i, U_1 = u_1, T_0 = -l) \\
&= P(V_1 > l - u_1 \mid J_0 = i, U_1 = u_1) = \overline{H}_{iu_1}(l - u_1). & (\mathrm{A5})
\end{aligned}$$
Combining (A3)–(A5) yields
$$P(Z_t = j, T_1 > t \mid Z_0 = i, B_0 = l, U_1 = u_1) = \delta_{ij}\, \frac{\overline{H}_{iu_1}(t + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}.$$
Let us move to the second term (A2), i.e.,
$$\begin{aligned}
& P(Z_t = j, T_1 \le t \mid Z_0 = i, B_0 = l, U_1 = u_1) \\
&\quad = \sum_{k \in E} \int_{m=0}^{t} \int_{u_2=0}^{\infty} P\big(Z_t = j, T_1 \in [m, m + dm], J_1 = k, U_2 \in [u_2, u_2 + du_2] \mid Z_0 = i, B_0 = l, U_1 = u_1\big) \\
&\quad = \int_{m=0}^{t} \int_{u_2=0}^{\infty} \sum_{k \in E} P\big(Z_t = j \mid T_1 = m, J_1 = k, U_2 = u_2, J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big) \\
&\qquad \times P\big(U_2 \in [u_2, u_2 + du_2] \mid T_1 = m, J_1 = k\big) \times P\big(T_1 \in [m, m + dm], J_1 = k \mid J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big) \\
&\quad = \int_{m=0}^{t} \int_{u_2=0}^{\infty} \sum_{k \in E} P\big(Z_t = j \mid T_1 = m, J_1 = k, U_2 = u_2, J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big) \\
&\qquad \times P\big(U_2 \in [u_2, u_2 + du_2] \mid T_1 = m, J_1 = k\big) \times \frac{P\big(T_1 - T_0 \in [m + l, m + l + dm], J_1 = k, T_1 > 0 \mid J_0 = i, T_0 = -l, U_1 = u_1\big)}{P\big(T_1 - T_0 > 0 + l \mid J_0 = i, T_0 = -l, U_1 = u_1\big)} \\
&\quad = \int_{m=0}^{t} \int_{u_2=0}^{\infty} \sum_{k \in E} P\big(Z_t = j \mid T_1 = m, J_1 = k, U_2 = u_2, J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big) \\
&\qquad \times P\big(U_2 \in [u_2, u_2 + du_2] \mid T_1 = m, J_1 = k\big) \times \frac{P\big(V_1 \in [m + l - u_1, m + l - u_1 + dm], J_1 = k \mid J_0 = i, U_1 = u_1\big)}{P\big(V_1 > l - u_1 \mid J_0 = i, U_1 = u_1\big)} \\
&\quad = \int_{m=0}^{t} \int_{u_2=0}^{\infty} \sum_{k \in E} {}^{b}\phi_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q_{iu_1;k}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}\, du_2\, dm \\
&\quad = \sum_{k \in E} \int_{m=0}^{t} \Bigg( \int_{u_2=0}^{t-m} {}^{b}\phi_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q_{iu_1;k}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}\, du_2 + \int_{u_2=t-m}^{\infty} {}^{b}\phi_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q_{iu_1;k}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}\, du_2 \Bigg)\, dm \\
&\quad = \sum_{k \in E} \int_{m=0}^{t} \Bigg( \int_{u_2=0}^{t-m} {}^{b}\phi_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q_{iu_1;k}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}\, du_2 + \int_{u_2=t-m}^{\infty} \delta_{kj}\, g_k(u_2)\, \frac{q_{iu_1;k}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}\, du_2 \Bigg)\, dm \\
&\quad = \int_{m=0}^{t} \Bigg( \sum_{k \in E} \int_{u_2=0}^{t-m} {}^{b}\phi_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q_{iu_1;k}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)}\, du_2 + \overline{G}_j(t - m)\, \frac{q_{iu_1;j}(m + l - u_1)}{\overline{H}_{iu_1}(l - u_1)} \Bigg)\, dm.
\end{aligned}$$
(b)
Similarly as before, one can prove (9) in the case $u_1 \ge l$. In this case, (A1) can be written as
$$\begin{aligned}
P(Z_t = j, T_1 > t \mid Z_0 = i, B_0 = l, U_1 = u_1) &= \delta_{ij}\, P(T_1 - T_0 > t - T_0 \mid J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1) \\
&= \delta_{ij}\, \frac{P(V_1 > t + l - u_1 \mid J_0 = i, U_1 = u_1)}{P(T_1 > 0 \mid J_0 = i, T_0 = -l, U_1 = u_1)} \\
&= \delta_{ij}\, \overline{H}_{iu_1}(t + l - u_1).
\end{aligned}$$
As for (A2), it takes the form
$$\begin{aligned}
& P(Z_t = j, T_1 \le t \mid Z_0 = i, B_0 = l, U_1 = u_1) \\
&\quad = \sum_{k \in E} \int_{m=u_1-l}^{t} \int_{u_2=0}^{\infty} P\big(Z_t = j \mid T_1 = m, J_1 = k, U_2 = u_2, J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big) \\
&\qquad \times P\big(U_2 \in [u_2, u_2 + du_2] \mid T_1 = m, J_1 = k\big) \times P\big(T_1 \in [m, m + dm], J_1 = k \mid J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big).
\end{aligned}$$
A similar procedure to the one used in the case $u_1 < l$ leads to the desired result.
Proof of Theorem 2. 
(a)
The transition function, in the case $u_1 < l$, takes the form
$$\begin{aligned}
{}^{b}\phi^{(1)}_{iu_1;j}(l, t) &= P\big(Z_t = j \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) \\
&= P\big(Z_t = j, T_1 > t \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) & (\mathrm{A8}) \\
&\quad + P\big(Z_t = j, T_1 \le t \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big). & (\mathrm{A9})
\end{aligned}$$
The term (A8) of the above formula is written as
$$\begin{aligned}
P\big(Z_t = j, T_1 > t \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) &= P\big(Z_t = j \mid T_1 > t, Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) \times P\big(T_1 > t \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) \\
&= \delta_{ij}\, P\big(T_1 > t \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) \\
&= \delta_{ij}\, \frac{P\big(V_1 > t + l - u_1, T_1 > 0 \mid J_0 = i, T_0 = -l, \min_{r}\{U_{ir}\} = u_1\big)}{P\big(T_1 > 0 \mid J_0 = i, T_0 = -l, \min_{r}\{U_{ir}\} = u_1\big)} \\
&= \delta_{ij}\, \frac{\overline{H}^{(1)}_{iu_1}(t + l - u_1)}{\overline{H}^{(1)}_{iu_1}(l - u_1)}.
\end{aligned}$$
As for the term (A9), it is written as
$$\begin{aligned}
& P\big(Z_t = j, T_1 \le t \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) \\
&\quad = \sum_{k \in E} \int_{m=0}^{t} \int_{u_2=0}^{\infty} P\big(Z_t = j, T_1 \in [m, m + dm], J_1 = k, \min_{s}\{U_{js}\} \in du_2 \mid Z_0 = i, B_0 = l, \min_{r}\{U_{ir}\} = u_1\big) \\
&\quad = \int_{m=0}^{t} \int_{u_2=0}^{\infty} \sum_{k \in E} P\big(Z_t = j \mid T_1 = m, J_1 = k, \min_{s}\{U_{js}\} = u_2, J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big) \\
&\qquad \times P\big(\min_{s}\{U_{js}\} \in du_2 \mid T_1 = m, J_1 = k\big)\, P\big(T_1 \in [m, m + dm], J_1 = k \mid J_0 = i, T_0 = -l, T_1 > 0, U_1 = u_1\big) \\
&\quad = \int_{m=0}^{t} \int_{u_2=0}^{\infty} \sum_{k \in E} P\big(Z_t = j \mid T_1 = m, J_1 = k, U_2 = u_2, J_0 = i, T_0 = -l, T_1 > 0, \min_{r}\{U_{ir}\} = u_1\big) \\
&\qquad \times P\big(\min_{s}\{U_{js}\} \in du_2 \mid T_1 = m, J_1 = k\big)\, \frac{P\big(V_1 \in [m + l - u_1, m + l - u_1 + dm], J_1 = k \mid J_0 = i, \min_{r}\{U_{ir}\} = u_1\big)}{P\big(V_1 > l - u_1 \mid J_0 = i, \min_{r}\{U_{ir}\} = u_1\big)} \\
&\quad = \int_{m=0}^{t} \int_{u_2=0}^{\infty} \sum_{k \in E} {}^{b}\phi^{(1)}_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q^{(1)}_{iu_1;k}(m + l - u_1)}{\overline{H}^{(1)}_{iu_1}(l - u_1)}\, du_2\, dm \\
&\quad = \int_{m=0}^{t} \Bigg( \sum_{k \in E} \int_{u_2=0}^{t-m} {}^{b}\phi^{(1)}_{ku_2;j}(0; t - m)\, g_k(u_2)\, \frac{q^{(1)}_{iu_1;k}(m + l - u_1)}{\overline{H}^{(1)}_{iu_1}(l - u_1)}\, du_2 + \overline{G}_j(t - m)\, \frac{q^{(1)}_{iu_1;j}(m + l - u_1)}{\overline{H}^{(1)}_{iu_1}(l - u_1)} \Bigg)\, dm.
\end{aligned}$$
(b)
Following similar steps as before, one may obtain the corresponding recursive formula; we omit the details here.

References

  1. Barbu, V.S.; Limnios, N. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications—Their Use in Reliability and DNA Analysis; Lecture Notes in Statistics; Springer: New York, NY, USA, 2008; Volume 191.
  2. Janssen, J.; Manca, R. Semi-Markov Risk Models for Finance, Insurance and Reliability; Springer: New York, NY, USA, 2007.
  3. Limnios, N.; Oprişan, G. Semi-Markov Processes and Reliability; Birkhäuser: Boston, MA, USA, 2001.
  4. Papadopoulou, A.A.; Vassiliou, P.-C.G. Continuous time non-homogeneous semi-Markov systems. In Semi-Markov Models and Applications; Janssen, J., Limnios, N., Eds.; Kluwer Academic Publishing: Boston, MA, USA, 1999; pp. 241–251.
  5. Silvestrov, D.; Silvestrov, S. Nonlinearly Perturbed Semi-Markov Processes; Springer: Cham, Switzerland, 2017.
  6. D’Amico, G.; Biase, G.D.; Gismondi, F.; Manca, R. The evaluation of generalized Bernoulli processes for salary lines construction by means of continuous time generalized non-homogeneous semi-Markov processes. Commun. Stat. Theory Methods 2013, 42, 2889–2901.
  7. Feyter, T.D.; Guerry, M. Evaluating recruitment strategies using fuzzy set theory in stochastic manpower planning. Stoch. Anal. Appl. 2009, 27, 1148–1162.
  8. Ernst, A.T.; Jiang, H.; Krishnamoorthy, M.; Sier, D. Staff scheduling and rostering: A review of applications, methods and models. Eur. J. Oper. Res. 2004, 153, 3–27.
  9. Janssen, J.; Manca, R. Salary cost evaluation by means of non-homogeneous semi-Markov processes. Stoch. Model. 2002, 18, 7–23.
  10. McClean, S.I. Semi-Markov models for manpower planning. In Semi-Markov Models: Theory and Applications; Janssen, J., Ed.; Springer: Boston, MA, USA, 1986.
  11. McClean, S.I.; Montgomery, E. Estimation for semi-Markov manpower models in a stochastic environment. In Semi-Markov Models and Applications; Janssen, J., Limnios, N., Eds.; Kluwer: Boston, MA, USA, 1999.
  12. Garnier, H.; Peter, C.Y. The advantages of directly identifying continuous-time transfer function models in practical applications. Int. J. Control 2014, 87, 1319–1338.
  13. Barbu, V.S.; D’Amico, G.; Manca, R.; Petroni, P. Step semi-Markov models and application to manpower management. ESAIM Probab. Stat. 2016, 20, 555–571.
  14. Garnier, H.; Liuping, W.; Peter, C.Y. Direct identification of continuous-time models from sampled data: Issues, basic solutions and relevance. In Identification of Continuous-Time Models from Sampled Data; Garnier, H., Wang, L., Eds.; Advances in Industrial Control; Springer: London, UK, 2008; pp. 1–29.
  15. de Haan-Rietdijk, S.; Voelkle, M.C.; Keijsers, L.; Hamaker, E.L. Discrete- vs. continuous-time modeling of unequally spaced experience sampling method data. Front. Psychol. 2017, 8, 1849.
  16. Blasi, A.; Janssen, J.; Manca, R. Numerical treatment of homogeneous and non-homogeneous semi-Markov reliability models. Commun. Stat. Theory Methods 2004, 33, 697–714.
  17. Corradi, G.; Janssen, J.; Manca, R. Numerical treatment of homogeneous semi-Markov processes in transient case - a straightforward approach. Methodol. Comput. Appl. Probab. 2004, 6, 233–246.
  18. D’Amico, G. Age-usage semi-Markov models. Appl. Math. Model. 2011, 35, 4354–4366.
  19. Moura, M.D.C.; Droguett, E.L. Mathematical formulation and numerical treatment based on transition frequency densities and quadrature methods for non-homogeneous semi-Markov processes. Reliab. Eng. Syst. Saf. 2009, 94, 342–349.
  20. Wu, B.; Maya, B.I.G.; Limnios, N. Using semi-Markov chains to solve semi-Markov processes. Methodol. Comput. Appl. Probab. 2021, 23, 1419–1431; Erratum in Methodol. Comput. Appl. Probab. 2021, 23, 1433–1434.
  21. Barbu, V.S.; Karagrigoriou, A.; Makrides, A. Estimation and reliability for a special type of semi-Markov process. J. Math. Stat. 2019, 15, 259–272.
  22. Barbu, V.S.; Karagrigoriou, A.; Makrides, A. Semi-Markov modelling for multi-state systems. Methodol. Comput. Appl. Probab. 2016, 19, 1011–1028.
  23. Ayhar, C.; Barbu, V.S.; Mokhtari, F.; Rahmani, S. On the asymptotic properties of some kernel estimators for continuous semi-Markov processes. J. Nonparametric Stat. 2022, 34, 299–318.
  24. Barbu, V.S.; Beltaief, S.; Pergamenshchikov, S. Adaptive efficient estimation for generalized semi-Markov big data models. Ann. Inst. Stat. Math. 2022.
  25. Barbu, V.S.; Beltaief, S.; Pergamenshchikov, S. Robust adaptive efficient estimation for semi-Markov nonparametric regression models. Stat. Inference Stoch. Process. 2018, 22, 187–231.
  26. Limnios, N.; Ouhbi, B. Nonparametric estimation of some important indicators in reliability for semi-Markov processes. Stat. Methodol. 2006, 3, 341–350.
  27. Ouhbi, B.; Limnios, N. Nonparametric reliability estimation of semi-Markov processes. J. Stat. Plan. Inference 2003, 109, 155–165.
  28. Votsi, I.; Gayraud, G.; Barbu, V.S.; Limnios, N. Hypotheses testing and posterior concentration rates for semi-Markov processes. Stat. Inference Stoch. Process. 2021, 24, 707–732.
  29. Vassiliou, P.-C.G. Limiting distributions of a non-homogeneous Markov system in a stochastic environment in continuous time. Mathematics 2022, 10, 1214.
  30. Dimitriou, V.A.; Tsantas, N. The augmented semi-Markov system in continuous time. Commun. Stat. Theory Methods 2012, 41, 88–107.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
