Article

Discrete-Time Semi-Markov Random Evolutions in Asymptotic Reduced Random Media with Applications

by Nikolaos Limnios 1,* and Anatoliy Swishchuk 2
1 Sorbonne University Alliance, Université de Technologie de Compiègne, 60203 Compiègne, France
2 Department of Mathematics and Statistics, Faculty of Science, University of Calgary, Calgary, AB T2N 1N4, Canada
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(6), 963; https://doi.org/10.3390/math8060963
Submission received: 27 May 2020 / Revised: 5 June 2020 / Accepted: 9 June 2020 / Published: 12 June 2020
(This article belongs to the Special Issue New Trends in Random Evolutions and Their Applications)

Abstract
This paper deals with discrete-time semi-Markov random evolutions (DTSMRE) in reduced random media. The reduction can be done for ergodic and non-ergodic media. Asymptotic approximations of random evolutions living in reducible random media (random environments) are obtained; namely, averaging, diffusion approximation, and normal deviation (diffusion approximation with equilibrium) are obtained by the martingale weak convergence method. Applications of these results to additive functionals and dynamical systems in discrete time produce the corresponding three types of asymptotic results.

1. Introduction

In order to simplify the analysis of complex systems, we consider stochastic approximation methods that simplify not only the system but also the random medium. These methods exploit the fact that some subsets of the state space are weakly connected with the other subsets, that is, the transition probabilities between subsets are very small compared to the transition probabilities inside the subsets. This fact allows one to proceed to an asymptotic reduction of the state space of the system and also of the random medium.
In fact, the random medium, which perturbs the considered system, or equivalently the random evolution, can explore some subsets of its state space in fast time, while other subsets of states are explored in slow time. On the one hand, on the fast time scale, transitions to the slowly explored subsets appear as rare events. On the other hand, on the slow time scale, each fast subset can be considered as a single merged state, since states inside a fast subset are indistinguishable from the point of view of the slow time scale.
Of course, the different kinds of stochastic approximation of random evolutions give different kinds of results. Namely, the average approximation keeps the structure of the system, but with a simpler state space and structure for the random medium. In the diffusion approximation, the system is simplified to a switched diffusion process, and the switching random medium again has a simpler state space and structure. In the normal deviation, or equivalently the merging with equilibrium, the considered process is the difference between the initial process and the mean process obtained in the averaging scheme, and the limit is a switched diffusion process.
Concerning the state space of the random medium, we may consider a finite, or even uncountable, factor space of the state space, on which we consider a supporting Markov chain; the original process is then considered as a perturbation of this supporting Markov chain by a signed transition kernel.
Results of this kind in continuous time have been presented in several works, including those by the authors of the present paper. Such results in the semi-Markov setting were first presented by V.S. Koroliuk and his collaborators [1,2,3,4,5]. Asymptotic merging, called consolidation, was also studied by Anisimov [6]. See also results by Yin and Zhang [7]. Some recent papers are also dedicated to the above problems: for Markov switching models, see, for example, References [8,9], and for non-equilibrium Markov processes, see, for example, References [10,11].
Discrete-time semi-Markov random evolutions have already been studied via the embedded Markov process of semi-Markov processes, where they in fact turn out to be Markov chain random evolutions; see, for example, References [1,2]. Discrete (calendar-time) Markov random evolutions were first introduced by Keepler in Reference [12]. In the semi-Markov setting they were introduced in Reference [13] and studied in depth in Reference [14]. This paper presents new results as a continuation of those in Reference [14]; they differ in that the random media there were on a fixed state space and not reducible, as in the present case. Nevertheless, for the first part, the merging of the semi-Markov chain, we use here a different technique to obtain the merging of the state space and the asymptotic results for stochastic systems, based on the compensating operator of the semi-Markov chain; see, for example, References [1,15].
The limit results obtained here are in the weak functional sense in the Skorokhod topology; see, for example, References [15,16,17,18,19,20,21]. For works on random evolutions, see, for example, References [1,2,3,5,12,22,23] and the references therein. For reference works on Markov chains, see, for example, References [24,25,26,27,28]. For semi-Markov processes, see, for example, References [29,30,31,32], and for discrete time, see, for example, References [13,14,33]. Useful results in Banach spaces can be found in, for example, References [14,34,35,36,37,38,39]. For applications of the type of results presented here to real problems, see, for example, References [4,16,40,41,42].
The paper is organized as follows. Section 2 includes the semi-Markov chain setting needed in the sequel. Section 3 includes the merged state space definition and results of asymptotic merging in the ergodic and non-ergodic cases. Section 4 includes the discrete-time semi-Markov random evolution (DTSMRE) definition and preliminary results. Section 5 presents the main results of this paper, that is, averaging, diffusion approximation, and diffusion with equilibrium (normal deviation) results for DTSMRE with merging. Section 6 presents averaging, diffusion, and diffusion with equilibrium approximation results for particular systems: integral functionals and dynamical systems. Section 7 presents the proofs of the theorems. Finally, Section 8 contains concluding remarks.

2. Semi-Markov Chains with Merging

Let $(E, \mathcal{E})$ be a measurable space with countably generated $\sigma$-algebra, and let $(\Omega, \mathcal{F}, (\mathcal{F}_n)_{n \in \mathbb{N}}, P)$ be a stochastic basis on which we consider a Markov renewal process (MRP) $(x_n, \tau_n, n \in \mathbb{N})$ in discrete time $k \in \mathbb{N}$, with state space $(E, \mathcal{E})$. It is worth noticing that $k$ is the calendar time, while $n$ is the number of jumps; both are $\mathbb{N}$-valued, where $\mathbb{N}$ denotes the set of non-negative integers. The semi-Markov kernel $q$ is defined by (see, e.g., Reference [33])
$$q(x, B, k) := P(x_{n+1} \in B,\ \tau_{n+1} - \tau_n = k \mid x_n = x), \qquad x \in E,\ B \in \mathcal{E},\ k, n \in \mathbb{N}.$$
We will also denote $q(x, B, \Gamma) = \sum_{k \in \Gamma} q(x, B, k)$, where $\Gamma \subset \mathbb{N}$. The process $(x_n)$ is the embedded Markov chain (EMC) of the MRP $(x_n, \tau_n)$, with transition kernel $P(x, dy)$ on the state space $(E, \mathcal{E})$. The semi-Markov kernel $q$ is written as
$$q(x, dy, k) = P(x, dy)\, f_{xy}(k),$$
where $f_{xy}(k) := P(\tau_{n+1} - \tau_n = k \mid x_n = x,\ x_{n+1} = y)$ is the conditional distribution of the sojourn time in state $x$ given that the next visited state is $y$. We set $q(\cdot, \cdot, 0) \equiv 0$.
Here, for simplicity, we do not consider dependence of the function $f_{xy}$ on the second state $y$; that is, the sojourn time distribution in state $x$ is independent of the arrival state $y$, and we denote it by $f_x$. In fact, any semi-Markov process with dependence on both $x$ and $y$ can be transformed into one with dependence only on $x$; see, for example, Reference [29]. So, there is no loss of generality.
Let $\nu_k = \max\{n : \tau_n \le k\}$ be the process counting the jumps of the EMC $(x_n)$ in the time interval $[0, k] \cap \mathbb{N}$, and define the discrete-time semi-Markov chain $z_k$ by $z_k = x_{\nu_k}$, for $k \in \mathbb{N}$. Define now the backward recurrence time process $\gamma_k := k - \tau_{\nu_k}$, $k \ge 0$, and the filtration $\mathcal{F}_k := \sigma(z_\ell, \gamma_\ell;\ \ell \le k)$, $k \ge 0$.
The Markov chain $(z_k, \gamma_k)$, $k \ge 0$, has the following transition probability operator on the real bounded measurable functions defined on $E \times \mathbb{N}$:
$$P\varphi(x, k) = \frac{1}{\overline{F}_x(k)} \int_{E \setminus \{x\}} q(x, dy, k+1)\, \varphi(y, 0) + \frac{\overline{F}_x(k+1)}{\overline{F}_x(k)}\, \varphi(x, k+1).$$
It is worth noticing that the above relation (2) can also be written in another interesting form, as follows:
$$P\varphi(x, k) = \varphi(x, k+1) + \lambda_x(k+1)\,\big[P\varphi(x, 0) - \varphi(x, k+1)\big],$$
where $\lambda_x(k+1)$ is the exit rate of the SMC from state $x \in E$ at time $k+1$, given by $P_x(\tau_1 = k+1 \mid \tau_1 > k)$. Of course, as usual, the transition rate in discrete time is a probability, and not a positive real-valued function as in continuous time. The above relation is similar to the generator of the process $(z_t, \gamma_t)$ in continuous time; see, for example, References [1,2,3].
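To make relation (2) concrete, the following sketch builds the transition matrix of the chain $(z_k, \gamma_k)$ for a toy two-state semi-Markov chain and checks that its rows are stochastic. All kernels and numbers below are hypothetical, chosen only for illustration (the EMC has no self-transitions, matching the integral over $E \setminus \{x\}$), and the age variable $\gamma_k$ is truncated where the survival function vanishes.

```python
import numpy as np

# Hypothetical 2-state SMC: EMC kernel with no self-transitions, and
# sojourn-time pmfs f_x(k) on k = 0, 1, ... with f_x(0) = 0 (q(.,.,0) = 0).
P_emc = np.array([[0.0, 1.0], [1.0, 0.0]])
f = [np.array([0.0, 0.4, 0.6]),             # f_0(k), k = 0, 1, 2
     np.array([0.0, 0.5, 0.3, 0.2])]        # f_1(k), k = 0, ..., 3

def Fbar(x, k):
    """Survival function of the sojourn time in state x."""
    return f[x][k + 1:].sum()

# States (x, k) of (z, gamma) with positive survival probability.
states = [(x, k) for x in (0, 1) for k in range(len(f[x]) - 1)
          if Fbar(x, k) > 0]
T = np.zeros((len(states), len(states)))
for i, (x, k) in enumerate(states):
    for j, (y, l) in enumerate(states):
        if y != x and l == 0:               # jump: q(x, y, k+1) / Fbar_x(k)
            T[i, j] += P_emc[x, y] * f[x][k + 1] / Fbar(x, k)
        if y == x and l == k + 1:           # no jump: the age grows by one
            T[i, j] += Fbar(x, k + 1) / Fbar(x, k)
```

Each row of `T` sums to one: the chain either jumps to a new state with the age reset to 0, or stays in place with its age increased by one.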
The stationary distribution of the process $(z_k, \gamma_k)$, if it exists, is given by
$$\pi(dx \times \{k\}) = \rho(dx)\, \overline{F}_x(k) / m,$$
where $m(x)$ is the mean sojourn time in state $x \in E$, and
$$m := \int_E \rho(dx)\, m(x), \qquad m(x) = \sum_{k \ge 0} \overline{F}_x(k),$$
and $\rho(dx)$ is the stationary distribution of the EMC $(x_n)$, $F_x(k) := q(x, E, [0, k])$, and $\overline{F}_x(k) := 1 - F_x(k) = q(x, E, [k+1, \infty))$. The probability measure $\pi$ defined by $\pi(B) = \pi(B \times \mathbb{N})$ is the stationary probability of the SMC $(z_k)$. From the above equality we get the following useful relation,
$$\pi(dx) = \rho(dx)\, m(x) / m,$$
which connects the stationary distribution of the semi-Markov chain with the stationary distribution of the embedded Markov chain, when they exist.
Define also the $r$-th moment of the holding time in state $x \in E$,
$$m_r(x) := \sum_{k \ge 1} k^r\, q(x, E, k), \qquad r = 1, 2, \dots$$
Of course, $m(x) = m_1(x)$, for any $x \in E$.
Define now the uniform integrability of the $r$-th moments of the sojourn times by
$$\lim_{M \to \infty}\ \sup_{x \in E}\ \sum_{k \ge M} k^r f_x(k) = 0,$$
for any $r \ge 1$.
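The objects of this section (the EMC, the sojourn times, and the chain $z_k = x_{\nu_k}$) can be sketched with a small simulation. The kernel and sojourn distributions below are hypothetical and serve only to illustrate the construction.

```python
import numpy as np

def simulate_smc(P, sojourn_pmfs, x0, horizon, rng):
    """Simulate a discrete-time semi-Markov chain z_k, k = 0..horizon, from
    an EMC with transition matrix P and per-state sojourn pmfs on {1, 2, ...}
    (support starts at 1, matching q(., ., 0) = 0)."""
    z = np.empty(horizon + 1, dtype=int)
    k, x = 0, x0
    while k <= horizon:
        # Draw a sojourn time in state x, then fill z over that interval.
        s = rng.choice(len(sojourn_pmfs[x]), p=sojourn_pmfs[x]) + 1
        z[k:min(k + s, horizon + 1)] = x
        k += s
        x = rng.choice(len(P), p=P[x])      # next state of the EMC
    return z

rng = np.random.default_rng(0)
P = np.array([[0.0, 1.0], [0.5, 0.5]])      # hypothetical EMC kernel
f = [np.array([0.5, 0.5]),                  # state 0: sojourn 1 or 2
     np.array([0.2, 0.3, 0.5])]             # state 1: sojourn 1, 2 or 3
z = simulate_smc(P, f, x0=0, horizon=50, rng=rng)
```

Here `z` is one trajectory of the SMC on the calendar-time grid $k = 0, \dots, 50$; the jump epochs $\tau_n$ are implicit in the cumulated sojourn draws.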

3. Merging of Semi-Markov Chains

We present here the two cases of the semi-Markov chain in the merging scheme: ergodic and non-ergodic.

3.1. The Ergodic Case

Let us consider a family of ergodic semi-Markov chains $z_k^\varepsilon$, $k \ge 0$, $\varepsilon > 0$, with semi-Markov kernels $q^\varepsilon$ and a fixed state space $(E, \mathcal{E})$, a measurable space.
Let us consider the following partition (split) of the state space:
$$E = \bigcup_{j=1}^d E_j, \qquad E_i \cap E_j = \emptyset, \quad i \ne j.$$
Let us also consider the trace of the $\sigma$-algebra $\mathcal{E}$ on $E_j$, denoted by $\mathcal{E}_j$, for $j = 1, \dots, d$.
The semi-Markov kernels have the following representation:
$$q^\varepsilon(x, B, k) = P^\varepsilon(x, B)\, f_x(k),$$
where the transition kernel of the EMC $x_n^\varepsilon$, $n \ge 0$, has the representation
$$P^\varepsilon(x, B) = P(x, B) + \varepsilon P_1(x, B).$$
The transition kernel $P$ determines a supporting Markov chain, say $x_n^0$, $n \ge 0$, and satisfies the following relations:
$$P(x, E_j) = \mathbf{1}_j(x) \equiv \mathbf{1}_{E_j}(x) = \begin{cases} 1 & \text{if } x \in E_j \\ 0 & \text{if } x \notin E_j, \end{cases}$$
for $j = 1, \dots, d$. Of course, the signed perturbing kernel $P_1$ satisfies the relations $P_1(x, E) = 0$ and $P^\varepsilon(x, E) = P(x, E) = 1$.
The perturbing signed transition kernel $P_1$ provides the transition probabilities between merged states.
Let $v : E \to \widehat{E}$ be the merging (onto) function defined by $v(x) = j$, if $x \in E_j$, where $\widehat{E} = \{1, \dots, d\}$.
Set $k := [t/\varepsilon]$, where $[x]$ denotes the integer part of the positive real number $x$, and define the split family of processes
$$\widehat{x}_t^\varepsilon := v(z_{[t/\varepsilon]}^\varepsilon), \qquad t \ge 0,\ \varepsilon > 0.$$
Define also the projection operator $\Pi$ onto the null space $N(Q)$ of the operator $Q := P - I$ by
$$\Pi\varphi(x) = \widehat{\varphi}(v(x)), \qquad \text{where } \widehat{\varphi}(j) := \int_{E_j} \rho_j(dx)\, \varphi(x).$$
This operator satisfies the equations
$$\Pi Q = Q \Pi = 0.$$
The potential operator of $Q$, denoted by $R_0$, is defined by
$$R_0 := (Q + \Pi)^{-1} - \Pi = \sum_{k \ge 0}\, [P^k - \Pi].$$
Let us now consider the following assumptions, needed in the sequel.
C1:
The transition kernel $P^\varepsilon(x, B)$ of the embedded Markov chain $x_n^\varepsilon$ has the representation (7).
C2:
The supporting Markov chain $(x_n^0)$ with transition kernel $P$ is uniformly ergodic in each class $E_j$, with stationary distribution $\rho_j(dx)$, $j \in \widehat{E}$, that is,
$$\rho_j(B) = \int_{E_j} \rho_j(dx)\, P(x, B), \quad \text{and} \quad \rho_j(E_j) = 1, \qquad B \in \mathcal{E}_j.$$
C3:
The average exit probabilities of the initial embedded Markov chain $(x_n^\varepsilon)$ are positive, that is,
$$\widehat{p}_j := \int_{E_j} \rho_j(dx)\, P_1(x, E \setminus E_j) > 0.$$
C4:
The mean merged values are positive and bounded, that is,
$$0 < m_j := \int_{E_j} \rho_j(dx)\, m(x) < \infty.$$
From relation (3), we get directly
$$\pi_j(dx)\, q(x) = q_j\, \rho_j(dx),$$
where $q(x) := 1/m(x)$ and $q_j := 1/m_j$, with $m_j := \int_{E_j} \rho_j(dx)\, m(x)$.
Theorem 1.
Under assumptions C1-C4, the following weak convergence takes place:
$$\widehat{x}_t^\varepsilon \Rightarrow \widehat{x}_t, \qquad \text{as } \varepsilon \to 0,$$
where the limit merged process $\widehat{x}_t$ is a continuous-time Markov process determined on the state space $\widehat{E} = \{1, \dots, d\}$ by the intensity matrix
$$\widehat{Q} = (\widehat{q}_{ij};\ i, j \in \widehat{E}),$$
where
$$\widehat{q}_{ij} = \begin{cases} q_i\, \widehat{p}_{ij}, & j \ne i \\ q_i\, \widehat{p}_{ii}, & j = i, \end{cases}$$
and $\widehat{p}_{ij} := \int_{E_i} \rho_i(dx)\, P_1(x, E_j)$, with $i, j \in \widehat{E}$, and $q_i := \int_{E_i} \pi_i(dx)\, q(x)$.
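For a finite state space, the merged intensity matrix $\widehat{Q}$ of Theorem 1 can be computed directly from $P$, $P_1$, $\rho_j$ and the mean sojourn times; note that here $q_i = 1/m_i$, since $\pi_i(dx) = \rho_i(dx)\, m(x)/m_i$ and $q(x) = 1/m(x)$. The following sketch uses an invented four-state example split into two classes (all matrices and sojourn means are hypothetical).

```python
import numpy as np

# Toy split E = E_1 u E_2 with E_1 = {0, 1}, E_2 = {2, 3}.
classes = [[0, 1], [2, 3]]
# Supporting kernel P: block-diagonal, each class closed (condition C1).
P = np.array([[0.3, 0.7, 0.0, 0.0],
              [0.6, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.2, 0.8],
              [0.0, 0.0, 0.9, 0.1]])
# Perturbing signed kernel P_1 with zero row sums (P_1(x, E) = 0).
P1 = np.array([[-0.1, 0.0, 0.1, 0.0],
               [0.0, -0.2, 0.1, 0.1],
               [0.2, 0.0, -0.2, 0.0],
               [0.0, 0.1, 0.0, -0.1]])
m = np.array([2.0, 1.5, 1.0, 3.0])          # mean sojourn times m(x)

def stationary(Pblock):
    """Stationary row vector of an ergodic stochastic matrix."""
    w, v = np.linalg.eig(Pblock.T)
    rho = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return rho / rho.sum()

d = len(classes)
Qhat = np.zeros((d, d))
for i, Ei in enumerate(classes):
    rho_i = stationary(P[np.ix_(Ei, Ei)])
    m_i = rho_i @ m[Ei]                     # C4: mean merged value m_i
    q_i = 1.0 / m_i                         # q_i = 1/m_i (see above)
    for j, Ej in enumerate(classes):
        # Averaged perturbation p_hat_ij = int rho_i(dx) P_1(x, E_j).
        p_ij = rho_i @ P1[np.ix_(Ei, Ej)].sum(axis=1)
        Qhat[i, j] = q_i * p_ij
```

Since the rows of $P_1$ sum to zero, `Qhat` automatically has zero row sums, as an intensity matrix must.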

3.2. The Non-Ergodic Case

Let us consider a family of semi-Markov chains $z_k^\varepsilon$, $k \ge 0$, $\varepsilon > 0$, with semi-Markov kernels $q^\varepsilon$ and a fixed state space $(E, \mathcal{E})$, a measurable space, which includes an absorbing state, say 0. Of course, state 0 can also represent a final class, say $E_0$, and the analysis presented here is the same.
Let us consider the following partition of the state space:
$$E = E' \cup \{0\}, \qquad E' = \bigcup_{j=1}^d E_j, \qquad E_i \cap E_j = \emptyset, \quad i \ne j.$$
Let $v : E \to \widehat{E}_0$ be the merging (onto) function defined by $v(x) = j$, if $x \in E_j$, where $\widehat{E}_0 = \{0, 1, \dots, d\}$, and $v(0) = 0$.
We now need the following condition.
C5:
The average transition probabilities of the initial embedded Markov chain $(x_n^\varepsilon)$ to state 0 are positive, that is,
$$\widehat{p}_{j0} := \int_{E_j} \rho_j(dx)\, P_1(x, \{0\}) > 0,$$
with the partition of the state space as defined by (13).
Let us also define the absorption time at state 0, $\zeta^\varepsilon$, for any $\varepsilon > 0$:
$$\zeta^\varepsilon := \inf\{t \ge 0 : z_{[t/\varepsilon]}^\varepsilon = 0\}.$$
Theorem 2.
Under assumptions C1-C4 and C5, the following weak convergence takes place:
$$\widehat{x}_t^\varepsilon \Rightarrow \widehat{x}_t, \qquad \text{as } \varepsilon \to 0,$$
where the limit merged process $\widehat{x}_t$, $0 \le t \le \widehat{\zeta}$, is a continuous-time Markov process determined on the state space $\widehat{E}_0 = \{0, 1, \dots, d\}$ by the intensity matrix
$$\widehat{Q} = (\widehat{q}_{ij};\ i, j \in \widehat{E}_0),$$
where
$$\widehat{q}_{ij} = \begin{cases} q_i\, \widehat{p}_{ij}, & j \ne i,\ i \ne 0 \\ q_i\, \widehat{p}_{ii}, & j = i,\ i \ne 0 \\ 0, & i = 0, \end{cases}$$
and $\widehat{\zeta} := \inf\{t \ge 0 : \widehat{x}_t = 0\}$.

4. Semi-Markov Random Evolution

Let us consider a separable Banach space $B$ of real-valued measurable functions defined on $E$, endowed with the sup norm $\|\cdot\|$, and denote by $\mathcal{B}$ its Borel $\sigma$-algebra. Suppose given a family of bounded contraction operators $D(x)$, $x \in E$, defined on $B$, where the maps $D(x)\varphi : E \to B$ are $\mathcal{E}$-measurable, $\varphi \in B$. Denote by $I$ the identity operator on $B$. For a discrete generator $Q$ on $B$, let $\Pi B = N(Q)$ be the null space, and $(I - \Pi)B = R(Q)$ the range space of the operator $Q$. We suppose here that the Markov chain $(x_n, n \in \mathbb{N})$, with discrete generator $Q = P - I$, is uniformly ergodic, that is, $\|(P^n - \Pi)\varphi\| \to 0$, as $n \to \infty$, for any $\varphi \in B$. In that case, the transition operator is reducible-invertible on $B$; thus, we have $B = N(Q) \oplus R(Q)$, the direct sum of the two subspaces. The domain of an operator $A$ on $B$ is $D(A) := \{\varphi \in B : A\varphi \in B\}$.
Let us now define a discrete-time semi-Markov random evolution (DTSMRE).
Let us define a (forward) discrete-time semi-Markov random evolution $\Phi_k$, $k \in \mathbb{N}$, on $B$, by ([13,14]):
$$\Phi_k\varphi = D(z_k)\, D(z_{k-1}) \cdots D(z_2)\, D(z_1)\,\varphi, \quad k \ge 1, \quad \text{and} \quad \Phi_0 = I,$$
for any $\varphi \in B_0 := \bigcap_{x \in E} D(D(x))$. Thus, we have $\Phi_k = D(z_k)\,\Phi_{k-1}$.
For example, consider an additive functional of the SMC $(z_k)$, that is, $\alpha_k := u + \sum_{\ell=1}^k a(z_\ell)$, for $k \ge 1$, and $\alpha_0 = u$. Define now a family of operators $D(x)$, $x \in E$, on $B$ by $D(x)\varphi(u) = \varphi(u + a(x))$. Then we can write $\Phi_k\varphi(u) = \prod_{\ell=1}^k D(z_\ell)\,\varphi(u) = \varphi\big(u + \sum_{\ell=1}^k a(z_\ell)\big) = \varphi(\alpha_k)$.
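The additive-functional example can be checked mechanically: composing the shift operators $D(z_k) \cdots D(z_1)$ accumulates the increments, so $\Phi_k\varphi(u) = \varphi(\alpha_k)$. A small sketch, with hypothetical increments $a(x)$ and a hypothetical sample path:

```python
import numpy as np

a = {0: 1.0, 1: -0.5}                 # increment function a(x) (invented)
path = [0, 0, 1, 0, 1, 1]             # a sample path z_1, ..., z_6

def D(x):
    """Shift operator D(x) phi(u) = phi(u + a(x))."""
    return lambda phi: (lambda u: phi(u + a[x]))

phi = np.cos                          # any test function
Phi = phi
for x in path:                        # Phi_k = D(z_k) ... D(z_1)
    Phi = D(x)(Phi)

u0 = 0.3
alpha = u0 + sum(a[x] for x in path)  # additive functional alpha_k
```

By construction `Phi(u0)` equals `phi(alpha)`, illustrating that the random evolution of this family of operators is exactly the additive functional in disguise.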
The process $M_k$ defined by
$$M_k := \Phi_k - I - \sum_{\ell=0}^{k-1} \mathbb{E}\big[\Phi_{\ell+1} - \Phi_\ell \mid \mathcal{F}_\ell\big],$$
on $B$, is an $\mathcal{F}_k$-martingale. The random evolution $\Phi_k$ can be written as follows:
$$\Phi_k = I + \sum_{\ell=0}^{k-1} \big[D(z_{\ell+1}) - I\big]\,\Phi_\ell,$$
and then the martingale (17) can be written as follows:
$$M_k = \Phi_k - I - \sum_{\ell=0}^{k-1} \mathbb{E}\big[(D(z_{\ell+1}) - I)\,\Phi_\ell \mid \mathcal{F}_\ell\big],$$
or
$$M_k = \Phi_k - I - \sum_{\ell=0}^{k-1} \big[\mathbb{E}(D(z_{\ell+1}) \mid \mathcal{F}_\ell) - I\big]\,\Phi_\ell.$$
Finally, as $\mathbb{E}\big[D(z_{\ell+1})\,\Phi_\ell\varphi \mid \mathcal{F}_\ell\big] = (P D(\cdot)\,\Phi_\ell\varphi)(z_\ell, u)$, one gets
$$M_k = \Phi_k - I - \sum_{\ell=0}^{k-1} \big[P D(\cdot) - I\big]\,\Phi_\ell.$$
Let us now define the averaged random evolution $u_k(x)$, $x \in E$, $k \in \mathbb{N}$, by
$$u_k(x) := \mathbb{E}_x\big[\Phi_k\varphi(z_k)\big].$$
Theorem 3.
The random evolution $u_k(x)$ satisfies the following Markov renewal equation:
$$u_k(x) = \overline{F}_x(k)\, D(x)\varphi(x) + \sum_{l=0}^{k} \int_E q(x, dy, l)\, D(y)\, u_{k-l}(y).$$

5. Average and Diffusion Approximation with Merging

In this section we present averaging and diffusion approximation results for the discrete-time semi-Markov random evolution, as well as the diffusion approximation with equilibrium, or normal deviation.

5.1. Averaging

Let us consider the continuous-time process $M_t^\varepsilon$:
$$M_t^\varepsilon := M_{[t/\varepsilon]} = \Phi_{[t/\varepsilon]}^\varepsilon - I - \sum_{\ell=0}^{[t/\varepsilon]-1} \big[P D^\varepsilon(\cdot) - I\big]\,\Phi_\ell^\varepsilon.$$
We will prove here asymptotic results for this process as ε 0 .
The following assumptions are needed for averaging.
A1:
The MC $(z_k, \gamma_k, k \in \mathbb{N})$ is uniformly ergodic in each class $E_j$, with ergodic distribution $\pi_j(B \times \{k\})$, $B \in \mathcal{E}_j$, $k \in \mathbb{N}$, and the projection operator $\Pi$ is defined by relation (10).
A2:
The moments $m_2(x)$, $x \in E$, are uniformly integrable, that is, relation (4) holds for $r = 2$.
A3:
Let us assume that the perturbed operators $D^\varepsilon(x)$ have the following representation in $B$:
$$D^\varepsilon(x) = I + \varepsilon D_1(x) + \varepsilon D_0^\varepsilon(x),$$
where the operators $D_1(x)$ on $B$ are closed and $B_0 := \bigcap_{x \in E} D(D_1(x))$ is dense in $B$, $\overline{B}_0 = B$. The operators $D_0^\varepsilon(x)$ are negligible, that is, $\lim_{\varepsilon \to 0} \|D_0^\varepsilon(x)\varphi\| = 0$ for $\varphi \in B_0$.
A4:
We have $\int_E \pi(dx)\, \|D_1(x)\varphi\|^2 < \infty$.
A5:
There exist Hilbert spaces $H$ and $H^*$ compactly embedded in the Banach spaces $B$ and $B^*$, respectively, where $B^*$ is the dual space of $B$.
A6:
The operators $D^\varepsilon(z)$ and $(D^\varepsilon)^*(z)$ are contractive on the Hilbert spaces $H$ and $H^*$, respectively.
We note that if $B = C_0(\mathbb{R})$, the space of continuous functions on $\mathbb{R}$ vanishing at infinity, then $H = W^{l,2}(\mathbb{R})$ is a Sobolev space, $W^{l,2}(\mathbb{R}) \subset C_0(\mathbb{R})$, and this embedding is compact (see References [34,43]). For the spaces $B = L_2(\mathbb{R})$ and $H = W^{l,2}(\mathbb{R})$ the situation is the same.
Theorem 4.
Under assumptions A1-A6 and C1-C4, the following weak convergence takes place:
$$\Phi_{[t/\varepsilon]}^\varepsilon \Rightarrow \widehat{\Phi}(t), \qquad \varepsilon \to 0,$$
where the limit random evolution $\widehat{\Phi}(t)$ is determined by the following equation:
$$\widehat{\Phi}(t)\,\widehat{\varphi}(\widehat{x}_t) - \widehat{\varphi}(u) - \int_0^t \widehat{L}\,\widehat{\Phi}(s)\,\widehat{\varphi}(\widehat{x}_s)\, ds = 0, \qquad 0 \le t \le T,\ \varphi \in B_0,$$
with the generator $\widehat{L}$ defined by
$$\widehat{L}\Pi = \Pi D_1 \Pi + \Pi Q_1 \Pi,$$
and acting on test functions $\varphi(x, v(x))$. The operator $Q_1$ is defined by
$$Q_1\varphi(x, k) = \lambda_x(k+1) \int_E P(x, dy)\, \varphi(y, 0).$$
Let us consider the averaged random evolution defined by $\Lambda_x(t) := \mathbb{E}_x[\widehat{\Phi}(t)\,\widehat{\varphi}(u)]$, $x \in E$. Set $\widehat{D}_1\Pi = \Pi D_1 \Pi$ and $\widehat{Q}\Pi = \Pi Q_1 \Pi$. For a detailed description of the operator $\widehat{Q}$, see Theorem 1. Then we have the following straightforward result.
Corollary 1.
The averaged random evolution $\Lambda_x(t)$ satisfies the following Cauchy problem:
$$\frac{d\Lambda_x}{dt}(t) = (\widehat{Q} + \widehat{D}_1)\,\Lambda_x(t), \qquad \Lambda_x(0) = \widehat{\varphi}(u).$$

5.2. Diffusion Approximation

For the diffusion approximation we consider a different time scaling and some additional assumptions. In this case, we replace relation (7) by the following one:
$$P^\varepsilon(x, B) = P(x, B) + \varepsilon^2 P_1(x, B).$$
D1:
Let us assume that the perturbed operators $D^\varepsilon(x)$ have the following representation in $B$:
$$D^\varepsilon(x) = I + \varepsilon D_1(x) + \varepsilon^2 D_2(x) + \varepsilon^2 D_0^\varepsilon(x),$$
where the operators $D_2(x)$ on $B$ are closed and $B_0 := \bigcap_{x \in E} D(D_2(x))$ is dense in $B$, $\overline{B}_0 = B$; the operators $D_0^\varepsilon(x)$ are negligible, that is, $\lim_{\varepsilon \to 0} \|D_0^\varepsilon(x)\varphi\| = 0$.
D2:
The following balance condition holds:
$$\Pi D_1(x) \Pi = 0.$$
D3:
The moments $m_3(x)$, $x \in E$, are uniformly integrable, that is, relation (4) holds for $r = 3$.
Theorem 5.
Under assumptions A1, A5-A6 and D1-D3, the following weak convergence takes place:
$$\Phi_{[t/\varepsilon^2]}^\varepsilon \Rightarrow \Phi^0(t), \qquad \varepsilon \to 0,$$
where the limit random evolution $\Phi^0(t)$ is a diffusion random evolution determined by the following generator $\widehat{L}$:
$$\widehat{L}\Pi = \Pi D_2 \Pi + \Pi D_1 R_0 D_1 \Pi - \Pi D_1^2 \Pi + \widehat{Q}\Pi,$$
where the operator $\widehat{Q}$ is defined in Theorem 1.

5.3. Normal Deviation with Merging

We note that the averaged semi-Markov random evolution can be considered as a first approximation to the initial evolution. The diffusion approximation of the semi-Markov random evolution determines a second approximation to the initial evolution, since the first approximation under the balance condition turns out to be trivial.
Here we consider the construction of the first and second approximations in the case when the balance condition of the diffusion approximation scheme is not fulfilled. We introduce the deviated semi-Markov random evolution as the normalized difference between the initial and averaged evolutions. In the limit, we obtain the diffusion approximation with equilibrium of the initial evolution from the averaged one.
Let us consider the discrete-time semi-Markov random evolution $\Phi_{[t/\varepsilon]}^\varepsilon$, the averaged evolution $\widehat{\Phi}(t)$ (see Section 5.1), and the deviated evolution
$$\Psi_t^\varepsilon := \varepsilon^{-1/2}\big[\Phi_{[t/\varepsilon]}^\varepsilon - \widehat{\Phi}(t)\big].$$
Theorem 6.
Under assumptions A1, A5-A6 and D3, with the operators $D^\varepsilon(x)$ as in A3 instead of D1, the deviated semi-Markov random evolution $\Psi_t^\varepsilon$ converges weakly, as $\varepsilon \to 0$, to the diffusion random evolution $\Psi_t^0$ defined by the following generator:
$$\widehat{L}\Pi = \Pi\,(D_1 - \widehat{D}_1)\, R_0\, (D_1 - \widehat{D}_1)\,\Pi + \widehat{Q}\Pi,$$
where the operator $\widehat{Q}$ is defined in Theorem 1.

6. Application to Particular Systems

In this section we will apply the above Theorems 4–6 to obtain limit results for particular stochastic systems, namely, additive functionals of semi-Markov chains and discrete-time dynamical systems perturbed by semi-Markov chains.

6.1. Integral Functionals

The integral functional of a semi-Markov chain considered here is defined by
$$y_k := \sum_{l=0}^{k} a(z_l), \quad k \ge 0, \qquad y_0 = u,$$
where $a$ is a real-valued measurable function defined on the state space $E$.
The perturbing operator $D^\varepsilon(x)$ is defined as follows:
$$D^\varepsilon(x)\varphi(u) = \varphi(u + \varepsilon a(x)).$$
The perturbed operator $D^\varepsilon(x)$ has the asymptotic expansions (20), for averaging, and (25), for diffusion approximation, with $D_1(x)\varphi(u) = a(x)\,\varphi'(u)$ and $D_2(x)\varphi(u) = \frac{1}{2} a^2(x)\,\varphi''(u)$.

6.1.1. Average Approximation

In the averaging scheme the additive functional has the representation
$$y_t^\varepsilon := \varepsilon \sum_{l=0}^{[t/\varepsilon]} a(z_l), \quad t \ge 0,\ \varepsilon > 0, \qquad y_0^\varepsilon = u.$$
Theorem 7.
Under conditions C1-C4 and A1-A2, the following weak convergence holds:
$$y_t^\varepsilon \Rightarrow \widehat{y}_t, \qquad \text{as } \varepsilon \to 0,$$
where the limit process is an integral functional, defined by
$$\widehat{y}_t := u + \int_0^t \widehat{a}(\widehat{x}_s)\, ds,$$
with $\widehat{a}(j) = \int_{E_j} \pi_j(dx)\, a(x)$. The Markov process $\widehat{x}_t$ is defined on the state space $\widehat{E}$, as in the previous section, by the generator $\widehat{Q}$ defined in Theorem 1.
It is worth noticing here that the initial processes (28) are switched by an SMC, while the limit process is switched by a continuous-time Markov process on a finite state space, which is much simpler.
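The averaging effect of Theorem 7 can be illustrated numerically in the simplest case $d = 1$ (a single ergodic class, so $\widehat{x}_t$ is constant and $\widehat{y}_t = u + t\,\widehat{a}$), with geometric sojourn times. All numbers below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.1, 0.9], [0.8, 0.2]])      # EMC kernel (invented)
mean_sojourn = np.array([1.5, 2.0])         # m(x); geometric sojourns below
a = np.array([1.0, -1.0])

# Stationary law rho of the EMC, then pi(x) ~ rho(x) m(x) for the SMC.
rho = np.array([8.0, 9.0]); rho /= rho.sum()
pi = rho * mean_sojourn; pi /= pi.sum()     # pi = (0.4, 0.6)
a_hat = pi @ a                              # pi-average of a

def y_eps(eps, t, u=0.0):
    """One realization of y_t^eps = u + eps * sum_{l <= t/eps} a(z_l)."""
    n = int(t / eps)
    z = np.empty(n + 1, dtype=int)
    k, x = 0, 0
    while k <= n:
        s = rng.geometric(1.0 / mean_sojourn[x])   # sojourn, mean m(x)
        z[k:min(k + s, n + 1)] = x
        k += s
        x = rng.choice(2, p=P[x])
    return u + eps * a[z].sum()

approx = y_eps(eps=1e-4, t=1.0)
exact = 0.0 + 1.0 * a_hat                   # averaged limit u + t * a_hat
```

For small `eps` the simulated functional fluctuates around the deterministic averaged value, with deviations of order $\sqrt{\varepsilon}$ (the normal-deviation scale of Section 6.1.3).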

6.1.2. Diffusion Approximation

In the diffusion approximation the additive functional is time-rescaled as follows:
$$\xi_t^\varepsilon := \varepsilon \sum_{l=0}^{[t/\varepsilon^2]} a(z_l), \quad t \ge 0,\ \varepsilon > 0, \qquad \xi_0^\varepsilon = u.$$
Then we have the following result.
Theorem 8.
Under conditions C1-C4, A1-A6 and D1-D2, the following weak convergence holds:
$$\xi_t^\varepsilon \Rightarrow \xi_t^0, \qquad \text{as } \varepsilon \to 0,$$
where the limit process is a switched diffusion process,
$$d\xi_t^0 = b(\widehat{x}_t)\, dW_t, \qquad \text{and} \qquad \xi_0^0 = u.$$
The process $W_t$, $t \ge 0$, is a standard Brownian motion, and $b^2(j) := \widehat{a}_0(j) - \frac{1}{2}\widehat{a}_2(j)$. The coefficients are
$$\widehat{a}_0(j) := \int_{E_j} \pi_j(dx)\, a(x)\, R_0\, a(x), \qquad \text{and} \qquad \widehat{a}_2(j) := \int_{E_j} \pi_j(dx)\, a^2(x).$$
It is worth noticing that the generator of the diffusion $\xi^0(t)$ can be written as follows:
$$L\varphi(u, x) = \frac{1}{2} \sum_{j=1}^{d} b^2(j)\, \widehat{\varphi}_j''(u)\, \mathbf{1}_j(x).$$
In fact, the limit process is switched by the merged Markov process $\widehat{x}_t$, defined in Theorem 1.
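For a finite Markov case (unit sojourn times, so $\pi = \rho$ on a single class), the coefficient $b^2(j) = \widehat{a}_0(j) - \frac{1}{2}\widehat{a}_2(j)$ can be computed with the discrete potential $R_0 = (Q + \Pi)^{-1} - \Pi$ of Section 3. The kernel and function $a$ below are invented for illustration; $a$ is centered so that the balance condition D2 holds.

```python
import numpy as np

P = np.array([[0.2, 0.8], [0.6, 0.4]])    # hypothetical ergodic kernel
pi = np.array([3.0, 4.0]) / 7.0           # its stationary law
a = np.array([4.0, -3.0])                 # centered: pi @ a = 0 (D2)

# Potential operator R_0 = (I - P + Pi)^{-1} - Pi, with Pi = 1 pi^T.
Pi = np.ones((2, 1)) @ pi[None, :]
R0 = np.linalg.inv(np.eye(2) - P + Pi) - Pi

a0 = pi @ (a * (R0 @ a))                  # a_hat_0 = pi(a R_0 a)
a2 = pi @ a**2                            # a_hat_2 = pi(a^2)
b2 = a0 - 0.5 * a2                        # variance coefficient b^2
```

In this two-state example $a$ happens to be an eigenvector of $P$ with eigenvalue $-0.4$, so $R_0 a = a / 1.4$ and one can verify $b^2 = 12/1.4 - 6 = 18/7$ by hand.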

6.1.3. Normal Deviation

The diffusion approximation with equilibrium is realized without the balance condition D2. Let us consider the stochastic processes $\zeta_t^\varepsilon$, $t \ge 0$, $\varepsilon > 0$:
$$\zeta_t^\varepsilon := \varepsilon^{-1/2}\,(y_t^\varepsilon - \widehat{y}_t).$$
The process $\widehat{y}_t$ is the limit process in the averaging scheme.
Then we have the following weak convergence result.
Theorem 9.
Under conditions C1-C4 and A1-A2, the following weak convergence holds:
$$\zeta_t^\varepsilon \Rightarrow \zeta_t^0, \qquad \text{as } \varepsilon \to 0,$$
where the limit process is a switched diffusion process,
$$d\zeta_t^0 = c(\widehat{x}_t)\, dW_t, \qquad \text{and} \qquad \zeta_0^0 = 0.$$
The process $W_t$, $t \ge 0$, is a standard Brownian motion, and
$$c^2(j) := \int_{E_j} \pi_j(dx)\,\big(\widehat{a}(j) - a(x)\big)\, R_0\, \big(\widehat{a}(j) - a(x)\big), \qquad 1 \le j \le d.$$
The limit process here is also switched by the merged Markov process $\widehat{x}_t$.

6.2. Discrete Dynamical Systems

Let us consider the family of difference equations
$$y_{k+1}^\varepsilon = y_k^\varepsilon + \varepsilon\, C(y_k^\varepsilon; z_{k+1}), \quad k \ge 0, \qquad \text{and} \qquad y_0^\varepsilon = u,$$
switched by the SMC $(z_k)$.
The perturbed operators $D^\varepsilon(x)$, $x \in E$, are now defined by
$$D^\varepsilon(x)\varphi(u) = \varphi(u + \varepsilon\, C(u, x)).$$
The perturbed operator $D^\varepsilon(x)$ has the asymptotic expansions (20), for averaging, and (25), for diffusion approximation, with $D_1(x)\varphi(u) = C(u, x)\,\varphi'(u)$ and $D_2(x)\varphi(u) = \frac{1}{2} C^2(u, x)\,\varphi''(u)$.

6.2.1. Average Approximation

The time-scaled system considered here is
$$y_t^\varepsilon := y_{[t/\varepsilon]+1}^\varepsilon = y_{[t/\varepsilon]}^\varepsilon + \varepsilon\, C(y_{[t/\varepsilon]}^\varepsilon; z_{[t/\varepsilon]+1}), \quad t \ge 0, \qquad \text{and} \qquad y_0^\varepsilon = u.$$
Theorem 10.
Under conditions C1-C4 and A1-A2, the following weak convergence holds:
$$y_t^\varepsilon \Rightarrow \widehat{y}_t, \qquad \text{as } \varepsilon \to 0,$$
where the limit process is a continuous-time dynamical system, defined by
$$\frac{d}{dt}\,\widehat{y}_t = \widehat{C}(\widehat{y}_t; \widehat{x}_t), \qquad \text{and} \qquad \widehat{y}_0 = u,$$
with $\widehat{C}(u, j) = \int_{E_j} \pi_j(dx)\, C(u, x)$.
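Theorem 10 can be illustrated for a single ergodic class (so $\widehat{x}_t$ is constant): with $C(u, x) = -\lambda(x)\, u$, the averaged limit is $\widehat{y}_t = u\, e^{-\widehat{\lambda} t}$. The kernel and rates below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.5, 0.5], [0.5, 0.5]])     # EMC kernel (unit sojourns)
lam = np.array([0.5, 1.5])                 # switched decay rates lambda(x)
pi = np.array([0.5, 0.5])                  # stationary law of P
lam_hat = pi @ lam                         # averaged rate = 1.0

def y_eps(eps, t, u=1.0):
    """Difference equation y_{k+1} = y_k + eps * C(y_k; z_{k+1})."""
    n = int(t / eps)
    x, y = 0, u
    for _ in range(n):
        x = rng.choice(2, p=P[x])          # next switching state z_{k+1}
        y = y + eps * (-lam[x] * y)
    return y

approx = y_eps(eps=1e-4, t=1.0)
exact = np.exp(-lam_hat)                   # averaged ODE solution at t = 1
```

The Euler-like switched recursion stays close to the averaged exponential flow, with deviations of order $\sqrt{\varepsilon}$ as in the normal-deviation scheme of Section 6.2.3.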

6.2.2. Diffusion Approximation

The time-scaled dynamical system considered here is
$$\xi_t^\varepsilon := y_{[t/\varepsilon^2]+1}^\varepsilon = y_{[t/\varepsilon^2]}^\varepsilon + \varepsilon\, C(y_{[t/\varepsilon^2]}^\varepsilon; z_{[t/\varepsilon^2]+1}), \quad t \ge 0, \qquad \text{and} \qquad y_0^\varepsilon = u.$$
Theorem 11.
Under conditions C1-C4, A1-A2 and D2, the following weak convergence holds:
$$\xi_t^\varepsilon \Rightarrow \xi_t^0, \qquad \text{as } \varepsilon \to 0,$$
where the limit process is a switched diffusion process,
$$d\xi_t^0 = c(\widehat{x}_t)\, dW_t, \qquad \text{and} \qquad \xi_0^0 = u.$$
The process $W_t$, $t \ge 0$, is a standard Brownian motion, and $c^2(j) := \frac{1}{2}\widehat{C}_2(u, j) + \widehat{C}_0(u, j) - \widehat{C}_2(u, j)$.
The coefficients are
$$\widehat{C}(u, j) := \int_{E_j} \pi_j(dx)\, C(u, x), \qquad \widehat{C}_0(u, j) := \int_{E_j} \pi_j(dx)\, C(u, x)\, R_0\, C(u, x),$$
$$\text{and} \qquad \widehat{C}_2(u, j) := \int_{E_j} \pi_j(dx)\, C^2(u, x).$$

6.2.3. Normal Deviation

The time-scaled system considered here is
$$\zeta_t^\varepsilon := \varepsilon^{-1/2}\,\big(y_{[t/\varepsilon]}^\varepsilon - \widehat{y}_t\big), \quad t \ge 0, \qquad \text{and} \qquad \zeta_0^\varepsilon = 0.$$
Theorem 12.
Under conditions C1-C4 and A1-A2, the following weak convergence holds:
$$\zeta_t^\varepsilon \Rightarrow \zeta_t^0, \qquad \text{as } \varepsilon \to 0,$$
where the limit process is a switched diffusion process,
$$d\zeta_t^0 = c_e(\widehat{x}_t)\, dW_t, \qquad \text{and} \qquad \zeta_0^0 = 0.$$
The process $W_t$, $t \ge 0$, is a standard Brownian motion, and
$$c_e^2(j) := \int_{E_j} \pi_j(dx)\,\big[C(u, x) - \widehat{C}(u, j)\big]\, R_0\, \big[C(u, x) - \widehat{C}(u, j)\big].$$

7. Proofs

As the state space $\widehat{E}$ of the merged switching process is a finite set, we do not need to treat the new component $v(z_k)$ separately, and the proofs of tightness in Reference [14] are also valid here. So, we will only prove here the convergence of the finite-dimensional distributions, which concerns the transition kernels.

7.1. Proof of Theorem 1

Proof. 
Let us consider the extended Markov renewal process
$$x_n^\varepsilon,\ v(x_n^\varepsilon),\ \tau_n^\varepsilon, \qquad t \ge 0,\ \varepsilon > 0,$$
where $n := [t/\varepsilon]$, $x_n^\varepsilon = z^\varepsilon(\tau_n^\varepsilon)$ and $\tau_{n+1}^\varepsilon = \tau_n^\varepsilon + [\varepsilon\,\theta_{n+1}^\varepsilon]$.
The compensating operator of this process is defined by the following relation (see Reference [1]):
$$L^\varepsilon\varphi(x, v(x), k) = \varepsilon^{-1} q(x)\,\Big\{\mathbb{E}\big[\varphi(x_1^\varepsilon, v(x_1^\varepsilon), \tau_1^\varepsilon) \mid x_0^\varepsilon = x,\ v(x_0^\varepsilon) = j,\ \tau_0^\varepsilon = k\big] - \varphi(x, j, k)\Big\}.$$
The compensating operator $L^\varepsilon$, acting on test functions $\varphi(x, v(x))$, $x \in E$, can be written as follows:
$$L^\varepsilon\varphi(x, v(x)) = \varepsilon^{-1} q(x) \int_E P^\varepsilon(x, dy)\,\big[\varphi(y, v(y)) - \varphi(x, v(x))\big].$$
And now, from (7), the operator $L^\varepsilon$ can be written as follows:
$$L^\varepsilon = \varepsilon^{-1} Q + Q_1,$$
where
$$Q\varphi(x, v(x)) = q(x) \int_E P(x, dy)\,\big[\varphi(y, v(y)) - \varphi(x, v(x))\big],$$
and
$$Q_1\varphi(x, v(x)) = q(x) \int_E P_1(x, dy)\, \varphi(y, v(y)).$$
Now, from the following singular perturbation problem, on test functions $\varphi^\varepsilon(x, v(x)) = \widehat{\varphi}(v(x)) + \varepsilon\varphi_1(x, v(x))$,
$$L^\varepsilon\varphi^\varepsilon(x, v(x)) = L\varphi(x) + \varepsilon\,\theta^\varepsilon(x),$$
and from Proposition 5.1 in Reference [1], we get the limit operator $L$, whose contracted form $\widehat{L}$, defined by the relation
$$\Pi Q_1 \Pi = \widehat{L}\Pi,$$
provides us directly with the generator $\widehat{Q} \equiv \widehat{L}$ of the limit process $\widehat{x}(t)$, and the proof is complete. □
The proof of Theorem 2 is the same as the previous one.

7.2. Proof of Theorem 3

Proof. 
We have $\mathbb{E}_x\big[\Phi_k\varphi(z_k)\,\mathbf{1}_{\{\tau_1 > k\}}\big] = \overline{F}_x(k)\, D(x)\varphi(x)$, and $\mathbb{E}_x\big[\Phi_k\varphi(z_k)\,\mathbf{1}_{\{\tau_1 \le k\}}\big] = \sum_{l=0}^{k} \int_E q(x, dy, l)\, \mathbb{E}_x\big[\Phi_k\varphi(z_k) \mid x_1 = y,\ \tau_1 = l\big] = \sum_{l=0}^{k} \int_E q(x, dy, l)\, D(y)\, \mathbb{E}_y\big[\Phi_{k-l}\varphi(z_{k-l})\big]$, and the result follows. □

7.3. Proof of Theorem 4

Proof. 
The perturbed semi-Markov kernel $q^\varepsilon$ has the representation $q^\varepsilon(x, B, k) = q(x, B, k) + \varepsilon q_1(x, B, k)$, where $q_1(x, dy, k) := P_1(x, dy)\, f_x(k)$.
The discrete generators of the four-component family of processes $\Phi_{[t/\varepsilon]}^\varepsilon\varphi,\ z_{[t/\varepsilon]}^\varepsilon,\ v(z_{[t/\varepsilon]}^\varepsilon),\ \gamma_{[t/\varepsilon]}^\varepsilon$, $t \ge 0$, $\varepsilon > 0$, are
$$L^\varepsilon\varphi(u, x, v(x), k) = \varepsilon^{-1} \int_E P(x, k; dy, k+1)\,\big[D^\varepsilon(y)\,\varphi(u, y, v(y), k+1) - \varphi(u, x, v(x), k)\big].$$
The asymptotic representation of the above operator, acting on test functions $\varphi(u, x, v(x), k)$, is given by
$$L^\varepsilon\varphi(u, x, v(x), k) = \big[\varepsilon^{-1} Q^{\cdot,\varepsilon} + P^{\cdot,\varepsilon} D_1(\cdot) + P^{\cdot,\varepsilon} D_0^\varepsilon(\cdot)\big]\,\varphi(u, x, v(x), k),$$
where $Q^{\cdot,\varepsilon} := P^{\cdot,\varepsilon} - I$.
Now, from (2), the transition operator $P^{\cdot,\varepsilon}$ can be written as $P^{\cdot,\varepsilon} = P^{\cdot} + \varepsilon Q_1$, where the operator $Q_1$ is defined by relation (23).
Finally, the asymptotic representation of the operator $L^\varepsilon$ can be written as
$$L^\varepsilon(x) = \varepsilon^{-1} Q + Q_1 + P D_1(x) + \theta^\varepsilon(x),$$
where $Q := P - I$, and the negligible operator $\theta^\varepsilon(x)$ is given by $\theta^\varepsilon(x) := P^{\cdot,\varepsilon} D_0^\varepsilon(x) + \varepsilon Q_1 \big[D_1(x) + D_0^\varepsilon(x)\big]$.
From the singular perturbation problem $L^\varepsilon\varphi(u, x) = L\varphi(v(x)) + \theta^\varepsilon(x)$, with test functions $\varphi^\varepsilon = \varphi + \varepsilon\varphi_1$, we get the limit operator defined by $\widehat{L}\Pi = \Pi\,(P D_1(x) + Q_1)\,\Pi$; see, for example, Proposition 5.1 in Reference [1]. From this representation, we get
$$\widehat{L} = \widehat{D}_1 + \widehat{Q},$$
where $\widehat{D}_1\Pi = \Pi D_1(x)\Pi$ and $\widehat{Q}\Pi = \Pi Q_1 \Pi$. □

7.4. Proof of Theorem 5

Proof. 
The discrete generators of the four-component family of processes $\Phi_{[t/\varepsilon^2]}^\varepsilon\varphi,\ z_{[t/\varepsilon^2]}^\varepsilon,\ v(z_{[t/\varepsilon^2]}^\varepsilon),\ \gamma_{[t/\varepsilon^2]}^\varepsilon$, $t \ge 0$, $\varepsilon > 0$, are
$$L^\varepsilon(x) = \varepsilon^{-2} Q + \varepsilon^{-1} P D_1(x) + Q_1 + P D_2(x) + \Theta^\varepsilon(x).$$
Solving the singular perturbation problem $L^\varepsilon(x)\,\varphi(x, k) = L\varphi(v(x)) + \theta^\varepsilon(x, k)$, with test functions $\varphi^\varepsilon = \varphi + \varepsilon\varphi_1 + \varepsilon^2\varphi_2$ (see, for example, Proposition 5.2 in Reference [1]), we get the desired result. □
The proof of Theorem 6 is similar to the previous ones.
Finally, the proofs of Theorems 7 to 12 follow directly, as corollaries, from Theorems 4 to 6.

8. Concluding Remarks

In this paper, we presented semi-Markov random evolutions in reduced random media; the main results were given in Section 4 and Section 5, and their applications to integral functionals of semi-Markov chains and to dynamical systems perturbed by semi-Markov chains were given in Section 6. These kinds of results should be extended to many other stochastic systems, such as hidden semi-Markov chains, controlled systems, epidemiological systems, and so forth.
For simplicity, we considered fixed initial conditions for the processes, that is, conditions independent of ε. This entails no loss of generality, since ε-dependent initial conditions can be handled without difficulty; see, for example, References [1,2].
It is worth noting that the theory of discrete-time semi-Markov chains deserves to be developed in parallel with that of continuous-time semi-Markov processes, just as discrete-time Markov chains have been developed alongside continuous-time Markov processes.

Author Contributions

N.L. and A.S. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We are indebted to three anonymous referees for their useful comments that improved the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Koroliuk, V.S.; Limnios, N. Stochastic Systems in Merging Phase Space; World Scientific: Singapore, 2005. [Google Scholar]
  2. Korolyuk, V.S.; Swishchuk, A. Evolution of System in Random Media; CRC Press: Boca Raton, FL, USA, 1995. [Google Scholar]
  3. Korolyuk, V.S.; Turbin, A.F. Mathematical Foundations of the State Lumping of Large Systems; Kluwer Academic Publisher: Dordrecht, The Netherlands, 1993. [Google Scholar]
  4. Swishchuk, A.; Wu, J. Evolution of Biological Systems in Random Media: Limit Theorems and Stability; Kluwer: Dordrecht, The Netherlands, 2003. [Google Scholar]
  5. Swishchuk, A.V. Random Evolutions and their Applications; Kluwer: Dordrecht, The Netherlands, 1995. [Google Scholar]
  6. Anisimov, V.V. Switching Processes in Queueing Models; ISTE: Washington, DC, USA; J. Wiley: London, UK, 2008. [Google Scholar]
  7. Yin, G.G.; Zhang, Q. Discrete-Time Markov Chains. Two-Time-Scale Methods and Applications; Springer: New York, NY, USA, 2005. [Google Scholar]
  8. Endres, S.; Stübinger, J. A flexible regime switching model with pairs trading application to the S&P 500 high-frequency stock returns. Quant. Financ. 2019, 19, 1727–1740. [Google Scholar]
  9. Yang, J.W.; Tsai, S.Y.; Shyu, S.D.; Chang, C.C. Pairs trading: The performance of a stochastic spread model with regime switching-evidence from the S&P 500. Int. Rev. Econ. Financ. 2016, 43, 139–150. [Google Scholar]
  10. Chetrite, R.; Touchette, H. Nonequilibrium Markov processes conditioned on large deviations. Ann. Henri Poincaré 2015, 16, 2005–2057. [Google Scholar] [CrossRef] [Green Version]
  11. Touchette, H. Introduction to dynamical large deviations of Markov processes. Phys. A Statist. Mech. Appl. 2018, 504, 5–19. [Google Scholar] [CrossRef] [Green Version]
  12. Keepler, M. Random evolutions processes induced by discrete time Markov chains. Port. Math. 1998, 55, 391–400. [Google Scholar]
  13. Limnios, N. Discrete-time semi-Markov random evolutions—Average and diffusion approximation of difference equations and additive functionals. Commun. Statist. Theor. Methods 2011, 40, 3396–3406. [Google Scholar] [CrossRef]
  14. Limnios, N.; Swishchuk, A. Discrete-time semi-Markov random evolutions and their applications. Adv. Appl. Probabil. 2013, 45, 214–240. [Google Scholar] [CrossRef]
  15. Sviridenko, M.N. Martingale approach to limit theorems for semi-Markov processes. Theor. Probab. Appl. 1986, 34, 540–545. [Google Scholar] [CrossRef]
  16. Ethier, S.N.; Kurtz, T.G. Markov Processes: Characterization and Convergence; J. Wiley: New York, NY, USA, 1986. [Google Scholar]
  17. Jacod, J.; Shiryaev, A.N. Limit Theorems for Stochastic Processes; Springer: Berlin, Germany, 1987. [Google Scholar]
  18. Skorokhod, A.V. Asymptotic Methods in the Theory of Stochastic Differential Equations; AMS: Providence, RI, USA, 1989; Volume 78. [Google Scholar]
  19. Silvestrov, D.S. The invariance principle for accumulation processes with semi-Markov switchings in a scheme of arrays. Theory Probab. Appl. 1991, 36, 519–535. [Google Scholar] [CrossRef]
  20. Silvestrov, D.S. Limit Theorems for Randomly Stopped Stochastic Processes. Series: Probability and its Applications; Springer: New York, NY, USA, 2004. [Google Scholar]
  21. Stroock, D.W.; Varadhan, S.R.S. Multidimensional Diffusion Processes; Springer: Berlin, Germany, 1979. [Google Scholar]
  22. Hersh, R. Random Evolutions: A Survey of Results and Problems. Rocky Mt. J. Math. 1974, 4, 443–447. [Google Scholar] [CrossRef]
  23. Pinsky, M. Lectures in Random Evolutions; World Scientific: Singapore, 1991. [Google Scholar]
  24. Maxwell, M.; Woodroofe, M. Central limit theorems for additive functionals of Markov chains. Ann. Probab. 2000, 28, 713–724. [Google Scholar]
  25. Meyn, S.P.; Tweedie, R.L. Markov Chains and Stochastic Stability; Springer: New York, NY, USA, 1993. [Google Scholar]
  26. Nummelin, E. General Irreducible Markov Chains and Non-Negative Operators; Cambridge University Press: Cambridge, UK, 1984. [Google Scholar]
  27. Revuz, D. Markov Chains; North-Holland: Amsterdam, The Netherlands, 1975. [Google Scholar]
  28. Shurenkov, V.M. On the theory of Markov renewal. Theory Probab. Appl. 1984, 19, 247–265. [Google Scholar]
  29. Limnios, N.; Oprişan, G. Semi-Markov Processes and Reliability; Birkhäuser: Boston, MA, USA, 2001. [Google Scholar]
  30. Pyke, R. Markov renewal processes: Definitions and preliminary properties. Ann. Math. Statist. 1961, 32, 1231–1242. [Google Scholar] [CrossRef]
  31. Pyke, R. Markov renewal processes with finitely many states. Ann. Math. Statist. 1961, 32, 1243–1259. [Google Scholar] [CrossRef]
  32. Pyke, R.; Schaufele, R. Limit theorems for Markov renewal processes. Ann. Math. Statist. 1964, 35, 1746–1764. [Google Scholar] [CrossRef]
  33. Barbu, V.; Limnios, N. Semi-Markov Chains and Hidden Semi-Markov Models. Toward Applications. Their Use in Reliability and DNA Analysis; Lecture Notes in Statistics; Springer: New York, NY, USA, 2008; Volume 191. [Google Scholar]
  34. Adams, R. Sobolev Spaces; Academic Press: New York, NY, USA, 1979. [Google Scholar]
  35. Ledoux, M.; Talagrand, M. Probability in Banach Spaces; Springer: Berlin, Germany, 1991. [Google Scholar]
  36. Rudin, W. Functional Analysis; McGraw-Hill: New York, NY, USA, 1991. [Google Scholar]
  37. Swishchuk, A.V.; Islam, M.S. The Geometric Markov Renewal Processes with application to Finance. Stoch. Anal. Appl. 2010, 29, 4. [Google Scholar] [CrossRef]
  38. Swishchuk, A.V.; Islam, M.S. Diffusion Approximations of the Geometric Markov Renewal Processes and option price formulas. Int. J. Stoch. Anal. 2010, 2010, 347105. [Google Scholar] [CrossRef] [Green Version]
  39. Swishchuk, A.V.; Islam, M.S. Normal Deviation and Poisson Approximation of GMRP. Commun. Stat. Theory Methods 2010. (accepted). [Google Scholar]
  40. Chiquet, J.; Limnios, N.; Eid, M. Piecewise deterministic Markov processes applied to fatigue crack growth modelling. J. Stat. Plan. Inference 2009, 139, 1657–1667. [Google Scholar] [CrossRef]
  41. Swishchuk, A.V.; Limnios, N. Optimal stopping of GMRP and pricing of European and American options. In Proceedings of the 15th International Congress on Insurance: Mathematics and Economics (IME 2011), Trieste, Italy, 14–17 June 2011. [Google Scholar]
  42. Swishchuk, A.V.; Svishchuk, M.; Limnios, N. Stochastic stability of vector SDE with applications to a stochastic epidemic model. Int. J. Pure Appl. Math. 2016, 106, 801–839. [Google Scholar] [CrossRef] [Green Version]
  43. Sobolev, S. Some Applications of Functional Analysis in Mathematical Physics, 3rd ed.; American Mathematical Society: Providence, RI, USA, 1991; Volume 90. [Google Scholar]
