Article

On a New Characterization of Harris Recurrence for Markov Chains and Processes

Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, USA
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2165; https://doi.org/10.3390/math11092165
Submission received: 31 March 2023 / Revised: 25 April 2023 / Accepted: 2 May 2023 / Published: 5 May 2023

Abstract

This paper shows that Harris recurrent Markov chains and processes can be characterized as the class of Markov chains and processes for which there exists a random time T at which the distribution of the chain or process does not depend on its initial condition. In particular, no independence assumptions concerning the post-T process or T play a role in the characterization. Since Harris chains and processes are known to contain infinite sequences of regeneration times exhibiting various independence properties, it follows that the existence of this single T implies the existence of infinitely many times at which regeneration occurs.

1. Introduction

The exploitation of the regenerative structure has a long and successful history in the development of both theory and algorithms for Markov chains and processes, going back to the pioneering work of Doeblin [1] in which the central limit theorem for Markov chains was studied. In the 1970s, Athreya and Ney [2] and Nummelin [3] independently showed how Harris recurrent Markov chains can be viewed as regenerative stochastic processes. Sigman [4] later extended these regeneration ideas to the setting of Harris recurrent Markov processes in continuous time. Important contributions were also made to this literature on regeneration by Vladimir Kalashnikov, both in book form [5] and in various papers published over the course of his research life [6,7,8,9,10,11,12].
In view of Professor Kalashnikov’s major contributions to this research domain, this note also discusses regeneration. In particular, we provide a new characterization of the class of regenerative Markov processes. Specifically, we recall in Section 2 that such processes can be identified with the class of chains and processes that are recurrent in the sense of Harris. The main contribution of the current note is that this class of Markov processes is exactly the class for which there exists a single random time T (not necessarily a randomized stopping time) at which the chain or process has a distribution that does not depend on its initial state; see Theorems 3 and 5. With only this property assumed, the Markov process must in fact then be wide-sense regenerative in discrete time, and one-dependent regenerative in continuous time. A useful review of some of these regeneration concepts can be found in [13]. In particular, when a single such time T exists, such processes then necessarily possess an infinite sequence of randomized stopping times at which the process is identically distributed, and at which the process also exhibits various forms of “cycle independence” relative to that sequence of times. In other words, this seemingly weak property involving a single T is equivalent to the much stronger property that the process regenerates at an infinite sequence of randomized stopping times.
In this sense, there is some similarity to the results of [14], in which it is shown that for general (possibly non-Markov) stochastic processes, the existence of a single wide-sense (or classical) regeneration time implies the existence of an infinite sequence of such wide-sense (or classical) regeneration times. In contrast to their results, the current paper assumes a Markov structure, but makes no independence assumptions related to either T or the post-T process, and so makes much weaker demands on T.

2. Main Results

We start with a discussion in the setting of discrete-time Markov chains. Let $S$ be a separable metric space, and suppose that $\mathcal{S}$ is its associated Borel $\sigma$-algebra. Put $\Omega = S^{\infty}$, and let $\mathcal{F}$ be the associated $\sigma$-algebra on $\Omega$ induced by the product topology. For $\omega = (x_i : i \ge 0) \in \Omega$, let $X_i(\omega) = x_i$ (for $i \ge 0$) be the $i$'th coordinate projection. Given a one-step transition kernel $P : S \times \mathcal{S} \to [0,1]$ and $x \in S$, let $P_x(\cdot)$ be the probability on $\Omega$ under which
$$P_x(X_i \in A_i,\ 1 \le i \le n) = \int_{A_1} \cdots \int_{A_n} P(x, dx_1)\, P(x_1, dx_2) \cdots P(x_{n-1}, dx_n)$$
for $A_i \in \mathcal{S}$ ($1 \le i \le n$), so that $X = (X_n : n \ge 0)$ is a Markov chain with transition kernel $P$ starting from $x$.
Definition 1.
We say that $P$ induces a Harris recurrent Markov chain on $S$ if there exists a non-trivial non-negative $\sigma$-finite measure $\eta$ on $(S, \mathcal{S})$ for which $\eta(A) > 0$ implies that
$$P_x(X_n \in A \ \text{infinitely often}) = 1$$
for $x \in S$.
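As a simple illustration (not taken from the paper), consider the countable-state special case: if $S$ is countable and the chain is irreducible and recurrent, then every state is visited infinitely often from every initial state, so the point mass at any fixed state serves as the measure $\eta$ in Definition 1.

```latex
% A minimal worked instance of Definition 1 (illustrative, not from the paper):
% S countable, chain irreducible and recurrent, x_0 a fixed reference state.
\[
  \eta(\cdot) = \delta_{x_0}(\cdot), \qquad \eta(A) > 0 \iff x_0 \in A,
\]
% and for any such A, irreducibility plus recurrence give
\[
  P_x(X_n \in A \ \text{infinitely often}) = 1 \qquad \text{for all } x \in S,
\]
% so P induces a Harris recurrent Markov chain in the sense of Definition 1.
```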
The theory developed by [2,3] showed that Harris recurrence is equivalent to wide-sense regeneration. To state this result, let $\tilde{S} = S \times [0,1]$ and let $\tilde{\mathcal{S}}$ be the associated Borel product $\sigma$-algebra. Put $\tilde{\Omega} = \tilde{S}^{\infty}$, and let $\tilde{\mathcal{F}}$ be the associated Borel product $\sigma$-algebra. For $\tilde{\omega} = ((x_i, u_i) : i \ge 0) \in \tilde{\Omega}$, let $\tilde{X}_i(\tilde{\omega}) = x_i$ and $U_i(\tilde{\omega}) = u_i$ for $i \ge 0$. Given a transition kernel $P$ on $(S, \mathcal{S})$, we say that the family $(\tilde{P}_x : x \in S)$ of probabilities on $(\tilde{\Omega}, \tilde{\mathcal{F}})$ is consistent with $P$ if for each $x \in S$ and $n \ge 0$,
(i)
$\tilde{P}_x((\tilde{X}_0, \ldots, \tilde{X}_n) \in \cdot) = P_x((X_0, \ldots, X_n) \in \cdot)$;
(ii)
$(U_i : i \ge 0)$ is a sequence of independent and identically distributed (iid) random variables (rv's) under $\tilde{P}_x$.
The existence of the sequence $(U_i : i \ge 0)$ on the same probability space $(\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{P}_x)$ that supports the Markov chain $(\tilde{X}_n : n \ge 0)$ allows the possibility of constructing random times $(R_n : n \ge 1)$ that exhibit regeneration structure.
Definition 2.
We say that $P$ induces wide-sense regeneration if for some family $(\tilde{P}_x : x \in S)$ of probabilities on $\tilde{\Omega}$ consistent with $P$, there exist strictly increasing random times $(R_n : n \ge 1)$ and a probability $\lambda(\cdot)$ on $S$ such that for each $x \in S$ and $n \ge 1$,
(i)
$R_n$ is independent of $(\tilde{X}_{R_n + k} : k \ge 0)$ under $\tilde{P}_x$;
(ii)
$\tilde{P}_x((\tilde{X}_{R_n + k} : k \ge 0) \in \cdot) = \int_S \lambda(dy)\, P_y((X_k : k \ge 0) \in \cdot)$.
In the presence of wide-sense regeneration, one may analyze $(E_x f(X_n) : n \ge 0)$ via the use of renewal equations, thereby greatly simplifying the theory of such Markov chains. Refs. [2,3] essentially proved the following result. (They proved the “only if” direction. The proof uses the existence of C-sets to construct the wide-sense regeneration. The C-set construction, in turn, uses the fact that $\mathcal{S}$ is countably generated. This is why we assume that $\mathcal{S}$ is the Borel $\sigma$-algebra of a separable metric space. The converse follows from, for example, our Theorem 3).
Theorem 1.
The transition kernel P induces a Harris recurrent Markov chain if, and only if, P induces wide-sense regeneration.
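To make the splitting idea behind Theorem 1 concrete, the following is a minimal simulation sketch (not part of the paper) of the Athreya–Ney–Nummelin construction in the simplest possible setting: a finite-state chain whose one-step kernel admits an explicit minorization $P(x, \cdot) \ge \varepsilon\, \varphi(\cdot)$ for all $x$. The toy transition matrix, the variable names, and the choice of taking the small set to be the whole state space are all illustrative assumptions; the general construction works with an $m$-step minorization on a C-set.

```python
import numpy as np

# A minimal sketch (not from the paper) of the Athreya-Ney-Nummelin splitting
# idea behind Theorem 1, for a finite-state chain where the minorization is
# explicit.  The toy transition matrix and all names are illustrative choices.

rng = np.random.default_rng(0)

# Toy transition matrix P on S = {0, 1, 2}.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# One-step minorization with the "small set" taken to be all of S:
# P(x, .) >= eps * phi(.) for every x.
nu = P.min(axis=0)          # componentwise minimum over initial states x
eps = nu.sum()              # total mass of the minorizing measure
phi = nu / eps              # normalized minorizing probability

def split_chain_step(x, u):
    """One step of the split chain.

    With probability eps (decided by the auxiliary uniform u), the next state
    is drawn from phi, which does not depend on x: a regeneration occurs.
    Otherwise it is drawn from the residual kernel (P(x,.) - eps*phi)/(1-eps).
    """
    if u < eps:
        return rng.choice(3, p=phi), True
    residual = (P[x] - eps * phi) / (1.0 - eps)
    return rng.choice(3, p=residual), False

# Simulate and record the regeneration times (R_n : n >= 1).
x, regen_times = 0, []
for n in range(1, 10_000):
    x, regenerated = split_chain_step(x, rng.uniform())
    if regenerated:
        regen_times.append(n)

print(f"eps = {eps:.3f}, first few regeneration times: {regen_times[:5]}")
```

At each step the auxiliary uniform plays the role of the $U_i$ in the definition of consistency: whenever it falls below $\varepsilon$, the next state is drawn from $\varphi$ independently of the current state, so the recorded times behave like the regeneration times $(R_n : n \ge 1)$ of Definition 2.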
A related, but different, form of regeneration is the following.
Definition 3.
We say that $P$ induces one-dependent regeneration if for some family $(\tilde{P}_x : x \in S)$ on $\tilde{\Omega}$ consistent with $P$, there exist strictly increasing random times $(R_n : n \ge 1)$ with $R_0 = 0$, such that for each $x \in S$,
$$\big( ((\tilde{X}_{R_n + k} : 0 \le k < R_{n+1} - R_n),\ R_{n+1} - R_n) : n \ge 0 \big)$$
is one-dependent (in $n$) under $\tilde{P}_x$ and
$$\tilde{P}_x\big( ((\tilde{X}_{R_n + k} : 0 \le k < R_{n+1} - R_n),\ R_{n+1} - R_n) \in \cdot \big)$$
is independent of $x \in S$ and $n \ge 1$.
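For readers unfamiliar with the term (this gloss is not in the paper), a sequence $(\xi_n : n \ge 0)$ is one-dependent when collections of its terms built from index sets separated by more than one lag are independent:

```latex
% Standard definition of one-dependence (added gloss, not from the paper):
\[
  (\xi_k : k \le n) \ \text{is independent of} \ (\xi_k : k \ge n + 2)
  \qquad \text{for every } n \ge 0,
\]
% so adjacent cycles may be dependent, but cycles separated by at least one
% intervening cycle are independent.
```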
Using the Athreya–Ney–Nummelin regenerative construction, ref. [15] established the “only if” direction of the following result. As for Theorem 1, the other direction follows, for example, from Theorem 3.
Theorem 2.
The transition kernel P induces a Harris recurrent Markov chain if, and only if, P induces one-dependent regeneration.
We now turn to our first new result.
Definition 4.
We say that the random time $T$ on $\tilde{\Omega}$ exhibits distributional invariance with respect to the transition kernel $P$ if there exists a family $(\tilde{P}_x : x \in S)$ of probabilities on $\tilde{\Omega}$ consistent with $P$ such that the distributions $\tilde{P}_x(\tilde{X}_T \in \cdot)$ do not depend on $x \in S$.
It is obvious that if $P$ induces either wide-sense regeneration or one-dependent regeneration, then $R_n$ exhibits distributional invariance for $n \ge 1$.
Theorem 3.
The transition kernel $P$ induces a Harris recurrent Markov chain if, and only if, there exists a random time $T \ge 1$ that exhibits distributional invariance with respect to $P$.
Proof. 
If $P$ induces Harris recurrence, then the random time $R_n$ of Theorem 1 exhibits distributional invariance. On the other hand, if $T$ exhibits distributional invariance with respect to $P$, then
$$\eta(\cdot) \stackrel{\Delta}{=} \tilde{P}_x(\tilde{X}_T \in \cdot)$$
does not depend on $x \in S$. Suppose $\eta(A) > 0$. Then, for each $x \in S$, there exists $n(x)$ such that
$$\tilde{P}_x(\tilde{X}_T \in A,\ T \le n(x)) \ge \eta(A)/2.$$
Furthermore, we may select $n(\cdot)$ to be $\mathcal{S}$-measurable. Hence, if $\tilde{T}(A) = \inf\{n > 0 : \tilde{X}_n \in A\}$, it follows that
$$\tilde{P}_x(1 \le \tilde{T}(A) \le n(x)) \ge \tilde{P}_x(\tilde{X}_T \in A,\ T \le n(x)) \ge \eta(A)/2,$$
for each $x \in S$. Consequently, if $T(A) = \inf\{n > 0 : X_n \in A\}$,
$$P_x(1 \le T(A) \le n(x)) \ge \eta(A)/2.$$
Put $N_1 = n(X_0)$ and $N_{i+1} = N_i + n(X_{N_i})$ for $i \ge 1$. Then, $(N_i : i \ge 1)$ is a strictly increasing sequence for which
$$P_x\big( X_m \in A \ \text{for some}\ m \in \{N_i + 1, \ldots, N_{i+1}\} \,\big|\, X_j : 0 \le j \le N_i \big) \ge \eta(A)/2$$
for each $i \ge 1$. It follows from the conditional Borel–Cantelli lemma (see, for example, [16], Corollary 2, p. 324) that
$$P_x(X_m \in A \ \text{infinitely often}) = 1,$$
proving that P induces Harris recurrence. □
As noted in the Introduction, the seemingly weak assumption of existence of a distributionally invariant random time T already implies existence of an entire sequence of wide-sense or one-dependent regeneration times.
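Continuing the illustrative split-chain sketch given after Theorem 1 (again, not part of the paper), Definition 4 can be checked empirically in that toy setting: taking $T$ to be the first regeneration time, the law of $X_T$ is $\varphi$ no matter where the chain starts.

```python
import numpy as np

# A minimal empirical check (not from the paper) of Definition 4 / Theorem 3:
# for the toy split chain of the previous sketch, take T to be the first
# regeneration time; the law of X_T is phi regardless of the initial state.
# Reuses P, eps, phi, split_chain_step and rng from that sketch.

def state_at_first_regeneration(x0):
    """Run the split chain from x0 until the first regeneration and return X_T."""
    x = x0
    while True:
        x, regenerated = split_chain_step(x, rng.uniform())
        if regenerated:
            return x

for x0 in range(3):
    samples = np.array([state_at_first_regeneration(x0) for _ in range(20_000)])
    freq = np.bincount(samples, minlength=3) / len(samples)
    print(f"start {x0}: empirical law of X_T = {np.round(freq, 3)}, phi = {np.round(phi, 3)}")
```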
We now extend this theory to continuous time. Given a separable metric space $S$, let $\Omega = D[0, \infty)$ be the space of functions $\omega : [0, \infty) \to S$, such that $\omega(\cdot)$ is right continuous everywhere, with left limits at $t \in (0, \infty)$ (RCLL). For $t \ge 0$, put $X(t, \omega) = \omega(t)$, $X = (X(t) : t \ge 0)$, and $(\theta_t X)(\omega) = (\omega(t + s) : s \ge 0)$. Let $(P_x : x \in S)$ be a family of probabilities on $\Omega$ for which $P_x(X(0) = x) = 1$ for $x \in S$. Furthermore, we require that for each $x \in S$ and non-negative (measurable) $f$,
$$E_x\big[ f(\theta_t X) \,\big|\, X(u) : 0 \le u \le t \big] = u(X(t)) \quad P_x\text{-a.s.},$$
where $u(y) = E_y f(X)$ for $y \in S$ and $E_y(\cdot)$ is the expectation associated with $P_y(\cdot)$. It follows that $X$ is a time-homogeneous Markov process under $P_x$. We further assume that $X$ is a Feller process (i.e., for each bounded continuous $f : S \to \mathbb{R}$ and $t \ge 0$, $E_x f(X(t))$ is continuous in $x$).
We now state the definition of Harris recurrence in continuous time; see, for example, [17].
Definition 5.
The process $X$ and its associated probabilities $(P_x : x \in S)$ on $\Omega$ are said to be Harris recurrent in continuous time if there exists a non-trivial non-negative $\sigma$-finite measure $\eta$ for which
$$P_x\left( \int_0^{\infty} I(X(t) \in A)\, dt = \infty \right) = 1$$
for each $x \in S$ whenever $\eta(A) > 0$. (Here, $I(B)$ is the indicator rv corresponding to $B$.)
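As an illustrative special case (not taken from the paper), an irreducible, recurrent, non-explosive continuous-time Markov chain on a countable state space satisfies Definition 5 with $\eta$ equal to a point mass: a fixed state $x_0$ is visited infinitely often from every initial state, and the successive holding times at $x_0$ are iid exponentials whose sum diverges.

```latex
% Illustrative countable-state instance of Definition 5 (not from the paper):
% eta = delta_{x_0} for a fixed state x_0, so eta(A) > 0 means x_0 in A, and then
\[
  \int_0^{\infty} I(X(t) \in A)\, dt
  \;\ge\; \int_0^{\infty} I(X(t) = x_0)\, dt
  \;=\; \sum_{k \ge 1} E_k \;=\; \infty \qquad P_x\text{-a.s. for every } x \in S,
\]
% where (E_k : k >= 1) are the successive holding times of the chain at x_0,
% iid exponential random variables whose sum is almost surely infinite.
```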
It has been known since the 1990s that such Harris recurrence implies the existence of one-dependent regeneration times (see [4]). As in discrete time, the statement of this result requires the use of a probability space upon which auxiliary randomization can be defined. To this end, let $\tilde{\Omega} = \Omega \times [0,1]^{\infty}$. For $\tilde{\omega} = (\omega, (u_i : i \ge 0))$, let $\tilde{X}(t, \tilde{\omega}) = \omega(t)$ and $U_i(\tilde{\omega}) = u_i$ for $t \ge 0$ and $i \ge 0$.
We say that the family of probabilities $(\tilde{P}_x : x \in S)$ on $\tilde{\Omega}$ is consistent with $(P_x : x \in S)$ if for each $x \in S$,
(i)
$\tilde{P}_x((\tilde{X}(t) : t \ge 0) \in \cdot) = P_x((X(t) : t \ge 0) \in \cdot)$;
(ii)
$(U_i : i \ge 0)$ is a sequence of iid rv's on $[0,1]$ under $\tilde{P}_x$.
Definition 6.
We say that $X$ and $(P_x : x \in S)$ induce one-dependent regeneration in continuous time if for some family $(\tilde{P}_x : x \in S)$ of probabilities on $\tilde{\Omega}$ consistent with $(P_x : x \in S)$, there exists a sequence of strictly increasing random times $(R_n : n \ge 0)$ (with $R_0 = 0$), such that for each $x \in S$,
$$\big( ((\tilde{X}(R_n + t) : 0 \le t < R_{n+1} - R_n),\ R_{n+1} - R_n) : n \ge 0 \big)$$
is a one-dependent sequence under $\tilde{P}_x$, and
$$\tilde{P}_x\big( ((\tilde{X}(R_n + t) : 0 \le t < R_{n+1} - R_n),\ R_{n+1} - R_n) \in \cdot \big)$$
is independent of $x \in S$ and $n \ge 1$.
As noted earlier, ref. [4] proved the forward implication of the next theorem. As in discrete time, the separability of $S$ is used to ensure that $\mathcal{S}$ is countably generated (so that the “splitting construction” of Athreya–Ney–Nummelin applies). The reverse implication is a consequence of (for example) Theorem 5.
Theorem 4.
The process $X$ and the family $(P_x : x \in S)$ are Harris recurrent in continuous time if, and only if, $X$ and $(P_x : x \in S)$ induce one-dependent regeneration in continuous time.
In view of our discrete time discussion, an obvious question is whether Harris recurrence in continuous time implies wide-sense regeneration. This issue, first raised in the 1990s, remains an open question (see, for example, [18]).
Our second major result is the extension of Theorem 3 to continuous time.
Definition 7.
We say that the random time $T$ on $\tilde{\Omega}$ exhibits distributional invariance in continuous time with respect to $X$ and $(P_x : x \in S)$ if there exists a family $(\tilde{P}_x : x \in S)$ of probabilities on $\tilde{\Omega}$ consistent with $(P_x : x \in S)$ such that the distributions $\tilde{P}_x(\tilde{X}(T) \in \cdot)$ do not depend on $x \in S$.
Theorem 5.
The process $X$ and its probabilities $(P_x : x \in S)$ are Harris recurrent in continuous time if, and only if, there exists a random time $T$ on $\tilde{\Omega}$ exhibiting distributional invariance in continuous time.
Proof. 
The proof is similar to that of Theorem 3. Put $\eta(\cdot) = \tilde{P}_x(\tilde{X}(T) \in \cdot)$. Given $A$ for which $\eta(A) > 0$, let $(t(x) : x \in S)$ be a measurable function for which
$$\tilde{P}_x(\tilde{X}(T) \in A,\ T \le t(x)) \ge \eta(A)/2. \qquad (1)$$
Set $T_0 = 0$ and $T_{i+1} = T_i + t(X(T_i))$ for $i \ge 0$. As in discrete time, (1) implies that for each $x \in S$,
$$P_x\big( X(\cdot) \ \text{visits}\ A \ \text{in}\ [T_i, T_{i+1}) \,\big|\, X(u) : u \le T_i \big) \ge \eta(A)/2$$
for $i \ge 0$; see [19] for a discussion of the measurability of hitting times of generic Borel sets $A$. The conditional Borel–Cantelli lemma again implies that
$$P_x\big( X(\cdot) \ \text{visits}\ A \ \text{infinitely often} \big) = 1 \qquad (2)$$
for each $x \in S$. Given (2), Theorem 1 of [20] then implies that $X$ and its probabilities $(P_x : x \in S)$ are Harris recurrent in continuous time. Note that Theorem 1 of [20] requires that $X$ is a Borel right process (strong Markov process with right continuous sample paths). Since every Feller process with RCLL sample paths satisfies the strong Markov property (see, for example, [21], Theorem 3.1, p. 102), the requirement is met.
The forward direction is an immediate consequence of [4]. □

3. A Further Generalization

We conclude this paper with a generalization of Theorem 3 that also subsumes the well-known minorization-type characterization of Harris recurrence in [2,3].
Definition 8.
Given a set $C \in \mathcal{S}$ and a random time $T$ on $\tilde{\Omega}$, we say that $(C, T)$ is a minorization pair for the transition kernel $P$ if $P_x(X_n \in C \ \text{for some}\ n) = 1$ for all $x \in S$, and there exist a family $(\tilde{P}_x : x \in S)$ of probabilities on $\tilde{\Omega}$ consistent with $P$ and a non-trivial non-negative measure $\eta$ such that $\tilde{P}_x(\tilde{X}_T \in \cdot) \ge \eta(\cdot)$ for all $x \in C$.
Note that the characterization of Harris recurrence in [2,3] corresponds to a minorization pair with a constant (deterministic) $T$.
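Spelled out in the notation above, a sketch of the standard C-set condition reads as follows (the symbols $m$, $\varepsilon$, and $\varphi$ are introduced here for illustration and are not part of Definition 8):

```latex
% A sketch of the classical C-set (minorization) condition of [2,3], written in
% the notation of Definition 8; m, epsilon and varphi are illustrative symbols.
% One assumes that C is reached with probability one from every initial state
% and that, for some integer m >= 1, some epsilon > 0 and some probability
% measure varphi,
\[
  P^m(x, \cdot) \;\ge\; \varepsilon\, \varphi(\cdot) \qquad \text{for all } x \in C .
\]
% Taking the constant random time T = m and eta = epsilon * varphi then exhibits
% (C, T) as a minorization pair, since P_x(X_T in .) = P^m(x, .) >= eta(.) on C.
```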
Theorem 6.
The transition kernel $P$ induces a Harris recurrent Markov chain if, and only if, there exists a minorization pair $(C, T)$ for $P$.
Its proof is similar to that of Theorem 3 and is, therefore, omitted.

4. Conclusions

We have shown that a Markov chain or process is Harris recurrent if, and only if, there exists a single random time at which the distribution of the chain or process does not depend on its initial condition. This characterization shows that the regeneration property of Markov chains and processes can be extracted from a much weaker property, namely the existence of a single random time that exhibits distributional invariance. We then weaken the assumption further, so that the well-known minorization-type characterization becomes a special case of our new characterization of Harris recurrence. This suggests that one strategy for verifying Harris recurrence, when wide-sense regeneration times are difficult to construct directly, is to exhibit a random time T satisfying Definition 4. Developing practical strategies for identifying such random times is a direction for future work.

Author Contributions

Theory, P.G. and Y.Q.; writing, P.G. and Y.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Doeblin, W. Sur deux problèmes de M. Kolmogoroff concernant les chaînes dénombrables. Bull. Soc. Math. Fr. 1938, 66, 210–220.
  2. Athreya, K.B.; Ney, P. A new approach to the limit theory of recurrent Markov chains. Trans. Am. Math. Soc. 1978, 245, 493–501.
  3. Nummelin, E. A splitting technique for Harris recurrent Markov chains. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1978, 43, 309–318.
  4. Sigman, K. One-dependent regenerative processes and queues in continuous time. Math. Oper. Res. 1990, 15, 175–189.
  5. Kalashnikov, V.V. Topics on Regenerative Processes; CRC Press: Boca Raton, FL, USA, 1994.
  6. Kalashnikov, V.V. Estimations of convergence rate and stability for regenerative and renovative processes. In Proceedings of the Colloquia Mathematica Societatis Janos Bolyai 24, Point Processes and Queueing Problems, Keszthely, Hungary, 1981; pp. 163–180.
  7. Kalashnikov, V.V. Analytical and simulation estimates of reliability for regenerative models. Syst. Anal. Model. Simul. 1990, 6, 833–851.
  8. Kalashnikov, V.V. Regenerative queueing processes and their qualitative and quantitative analysis. Queueing Syst. 1990, 6, 113–136.
  9. Kalashnikov, V.V. Regeneration and general Markov chains. J. Appl. Math. Stoch. Anal. 1994, 7, 357–371.
  10. Kalashnikov, V.V. Crossing and comparison of regenerative processes. Acta Appl. Math. 1994, 34, 151–172.
  11. Foss, S.; Kalashnikov, V.V. Regeneration and renovation in queues. Queueing Syst. 1991, 8, 211–223.
  12. Kalashnikov, V.V.; Vsekhsvyatskii, S. Metric estimates of the first occurrence time in regenerative processes. In Proceedings of the Stability Problems for Stochastic Models, Uzhhorod, Ukraine, 23–29 September 1985; pp. 102–130.
  13. Sigman, K.; Wolff, R.W. A review of regenerative processes. SIAM Rev. 1993, 35, 269–288.
  14. Sigman, K.; Thorisson, H.; Wolff, R.W. A note on the existence of regeneration times. J. Appl. Probab. 1994, 31, 1116–1122.
  15. Glynn, P.W. Regenerative Simulation of Harris Recurrent Markov Chains; Technical Report; Department of Operations Research, Stanford University: Stanford, CA, USA, 1982.
  16. Doob, J.L. Stochastic Processes; John Wiley & Sons: Hoboken, NJ, USA, 1953.
  17. Azema, J.; Kaplan-Duflo, M.; Revuz, D. Mesure invariante sur les classes récurrentes des processus de Markov. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1967, 8, 157–181.
  18. Glynn, P.W. Wide-sense regeneration for Harris recurrent Markov processes: An open problem. Queueing Syst. 2011, 68, 305–311.
  19. Bass, R.F. The measurability of hitting times. Electron. Commun. Probab. 2010, 15, 99–105.
  20. Kaspi, H.; Mandelbaum, A. On Harris recurrence in continuous time. Math. Oper. Res. 1994, 19, 211–222.
  21. Revuz, D.; Yor, M. Continuous Martingales and Brownian Motion; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 293.