Article

Stochastic Dynamics of a Virus Variant Epidemic Model with Double Inoculations

Department of Mathematics, Yunnan Minzu University, 2929, Yuehua Street, Chenggong District, Kunming 650500, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(7), 1712; https://doi.org/10.3390/math11071712
Submission received: 20 February 2023 / Revised: 26 March 2023 / Accepted: 27 March 2023 / Published: 3 April 2023

Abstract

In this paper, we establish a stochastic epidemic model with double vaccination and spontaneous variation of the virus. Firstly, we prove the global existence and uniqueness of positive solutions of the stochastic model. Secondly, we prove that a threshold $R_0^*$ controls the stochastic dynamics of the model: if $R_0^* < 0$, the disease goes extinct with probability 1, whereas if $R_0^* > 0$, the disease almost surely persists and the model admits a unique stationary distribution. Finally, we give some numerical examples to verify the theoretical results. Most existing studies establish the stochastic dynamics of such models by constructing Lyapunov functions. However, constructing a Lyapunov function for higher-order models is extremely complex, so that method does not apply to all models. In this paper, we instead prove the existence of the stationary distribution directly from its definition, an approach that applies to a wider class of models. Moreover, most stochastic infectious disease models studied so far are of second or third order and cannot describe such epidemics accurately. To address this, we adopt a higher-order, five-dimensional model.

1. Introduction

Infectious diseases are among the greatest threats to human health. When an infectious disease appears and spreads in an area, the primary task is to make every effort to prevent its spread. Vaccination is one of the most important preventive measures. Through vaccination, smallpox was eradicated worldwide at the end of the 1970s. This was a great victory for humanity in the fight against infectious diseases, an important milestone in the history of preventive medicine, and a major achievement of vaccination. In mathematical epidemiology, the control and eradication of infectious diseases are urgent problems that have attracted great interest from researchers in many fields. Scholars have proposed and extensively discussed various types of models and their influencing factors, such as vaccination, time delay, impulse, media reports, etc. [1,2,3,4]. However, as a disease progresses, the virus can mutate as it spreads, allowing the disease to spiral out of control. Cai et al. analyzed the stability of epidemic models with virus mutation and vaccination, but only considered the case in which the vaccinated individuals are completely protected against the virus at a given stage [5,6]. Baba and Bilgen et al. considered double-vaccination epidemic models in which each vaccine acts against one of the two viruses, but did not consider the conversion between patients infected with the two viruses [7,8]. Therefore, building on the research on epidemics with virus mutation and taking into account two kinds of vaccination for susceptible individuals, we propose a virus-mutation epidemic model with double vaccination.
Taking into account the important role of vaccination in preventing the occurrence of infectious diseases, we assume that people receiving the first type of vaccine are fully immune to the pre-mutation virus and partially resistant to the post-mutation virus, whereas those receiving the second type are fully immune to the post-mutation virus and partially resistant to the pre-mutation virus. In addition, both types of infected individuals are infectious; the disease is not fatal before the virus mutation, whereas it is fatal after the mutation. Based on the above assumptions, the following model is established:
$$\begin{aligned}
\dot S(t) &= \Lambda - \beta_1 S(t)I_1(t) - \beta_2 S(t)I_2(t) - \lambda S(t),\\
\dot V_1(t) &= \varphi_1 S(t) - k_1 I_2(t)V_1(t) - a V_1(t),\\
\dot V_2(t) &= \varphi_2 S(t) - k_2 I_1(t)V_2(t) - a V_2(t),\\
\dot I_1(t) &= \beta_1 S(t)I_1(t) + k_2 I_1(t)V_2(t) - \alpha_1 I_1(t),\\
\dot I_2(t) &= \beta_2 S(t)I_2(t) + k_1 I_2(t)V_1(t) + \varepsilon I_1(t) - \alpha_2 I_2(t),\\
\dot R(t) &= \gamma_1 I_1(t) + \gamma_2 I_2(t) - a R(t),
\end{aligned}\tag{1}$$
where $S(t)$, $V_1(t)$, $V_2(t)$, $I_1(t)$, $I_2(t)$, and $R(t)$, respectively, denote the numbers at time $t$ of the susceptible, those vaccinated with the first and with the second type of vaccine, the infected before and after the virus mutation, and the recovered. $\Lambda$ is the input rate of the population; $\beta_1$ and $\beta_2$ are the infection coefficients before and after the virus mutation, respectively; $a$ is the natural mortality rate of the population; $\varphi_1$ and $\varphi_2$ are the vaccination rates of the first and the second vaccine; $k_1$ and $k_2$ are the rates at which people vaccinated with the first vaccine are infected by the post-mutation virus and people vaccinated with the second vaccine are infected by the pre-mutation virus, respectively; $\gamma_1$ and $\gamma_2$ are the recovery rates of the infected before and after the virus mutation, respectively; $\varepsilon$ is the rate at which individuals infected before the mutation become infected with the mutated virus; $\delta$ is the mortality rate of the infected after the virus mutation. In addition, $\lambda := a + \varphi_1 + \varphi_2$, $\alpha_1 := a + \gamma_1 + \varepsilon$, and $\alpha_2 := a + \gamma_2 + \delta$.
According to the biological meaning of the model, all parameters are assumed to be positive, and the dynamics of the recovered class $R$ do not affect the other compartments. Thus, the following reduced model is considered:
$$\begin{aligned}
\dot S(t) &= \Lambda - \beta_1 S(t)I_1(t) - \beta_2 S(t)I_2(t) - \lambda S(t),\\
\dot V_1(t) &= \varphi_1 S(t) - k_1 I_2(t)V_1(t) - a V_1(t),\\
\dot V_2(t) &= \varphi_2 S(t) - k_2 I_1(t)V_2(t) - a V_2(t),\\
\dot I_1(t) &= \beta_1 S(t)I_1(t) + k_2 I_1(t)V_2(t) - \alpha_1 I_1(t),\\
\dot I_2(t) &= \beta_2 S(t)I_2(t) + k_1 I_2(t)V_1(t) + \varepsilon I_1(t) - \alpha_2 I_2(t).
\end{aligned}\tag{2}$$
Model (2) has a basic reproduction number $R_0$, where
$$R_0 = \max\{R_1, R_2\},\qquad R_1 = \frac{\beta_1\Lambda}{\alpha_1\lambda} + \frac{k_2\varphi_2\Lambda}{a\alpha_1\lambda},\qquad R_2 = \frac{\beta_2\Lambda}{\alpha_2\lambda} + \frac{k_1\varphi_1\Lambda}{a\alpha_2\lambda};$$
it also has a disease-free equilibrium
$$E_0\bigl(S^0, V_1^0, V_2^0, 0, 0\bigr) = E_0\Bigl(\frac{\Lambda}{\lambda},\ \frac{\varphi_1\Lambda}{a\lambda},\ \frac{\varphi_2\Lambda}{a\lambda},\ 0,\ 0\Bigr).$$
Moreover, when $R_2 > 1$, model (2) has a boundary equilibrium point
$$E_2\bigl(\tilde S, \tilde V_1, \tilde V_2, 0, \tilde I_2\bigr) = E_2\Bigl(\frac{\Lambda}{\beta_2\tilde I_2 + \lambda},\ \frac{\varphi_1\Lambda}{(k_1\tilde I_2 + a)(\beta_2\tilde I_2 + \lambda)},\ \frac{\varphi_2\Lambda}{a(\beta_2\tilde I_2 + \lambda)},\ 0,\ \tilde I_2\Bigr),$$
at which the disease dies out before the virus mutation but persists after the mutation. When $I_1^* > 0$ and $I_2^* > 0$, i.e., the disease persists both before and after the virus mutation, model (2) has an endemic equilibrium point $E_3(S^*, V_1^*, V_2^*, I_1^*, I_2^*)$, where
$$S^* = \frac{\Lambda}{\beta_1 I_1^* + \beta_2 I_2^* + \lambda},\qquad V_1^* = \frac{\varphi_1\Lambda}{(k_1 I_2^* + a)(\beta_1 I_1^* + \beta_2 I_2^* + \lambda)},\qquad V_2^* = \frac{\varphi_2\Lambda}{(k_2 I_1^* + a)(\beta_1 I_1^* + \beta_2 I_2^* + \lambda)}.$$
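For readers who wish to evaluate these deterministic quantities numerically, the short Python sketch below (our illustration, not part of the original paper) computes $R_1$, $R_2$, $R_0$, and the disease-free equilibrium $E_0$ from a parameter set; the values used are those of Example 1 in Section 5, and the printed $R_1$, $R_2$ are purely illustrative since they are not reported in the paper.

```python
# Sketch: evaluate R1, R2, R0 and the disease-free equilibrium E0 of model (2).
# The parameter values are taken from Example 1 in Section 5; outputs are illustrative.

def reproduction_numbers(Lam, a, beta1, beta2, gamma1, gamma2, phi1, phi2, eps, delta, k1, k2):
    lam = a + phi1 + phi2              # lambda = a + phi1 + phi2
    alpha1 = a + gamma1 + eps          # alpha1 = a + gamma1 + eps
    alpha2 = a + gamma2 + delta        # alpha2 = a + gamma2 + delta
    R1 = beta1 * Lam / (alpha1 * lam) + k2 * phi2 * Lam / (a * alpha1 * lam)
    R2 = beta2 * Lam / (alpha2 * lam) + k1 * phi1 * Lam / (a * alpha2 * lam)
    E0 = (Lam / lam, phi1 * Lam / (a * lam), phi2 * Lam / (a * lam), 0.0, 0.0)
    return R1, R2, max(R1, R2), E0

if __name__ == "__main__":
    R1, R2, R0, E0 = reproduction_numbers(
        Lam=15, a=0.2, beta1=0.15, beta2=0.15, gamma1=0.5, gamma2=0.15,
        phi1=0.4, phi2=0.4, eps=0.8, delta=0.01, k1=0.7, k2=0.5)
    print(f"R1 = {R1:.4f}, R2 = {R2:.4f}, R0 = {R0:.4f}")
    print("disease-free equilibrium E0 =", E0)
```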
On the other hand, environmental fluctuations have a key impact on the development of epidemics [9]. Because of the unpredictability of human contact, the growth and spread of epidemics are essentially random, so population numbers are constantly disturbed [10,11]. Therefore, in epidemic dynamics, stochastic differential equation (SDE) models may be a more appropriate approach to modeling epidemics in many situations. Many realistic stochastic epidemic models can be derived from their deterministic formulations [9,12,13,14,15,16,17,18,19,20,21,22,23]. Assuming that the coefficients of model (2) are perturbed by random noise that can be represented by Brownian motion, model (2) becomes:
$$\begin{aligned}
dS(t) &= (\Lambda - \beta_1 SI_1 - \beta_2 SI_2 - \lambda S)\,dt + \sigma_1 S\,dB_1(t),\\
dV_1(t) &= (\varphi_1 S - k_1 I_2 V_1 - a V_1)\,dt + \sigma_2 V_1\,dB_2(t),\\
dV_2(t) &= (\varphi_2 S - k_2 I_1 V_2 - a V_2)\,dt + \sigma_3 V_2\,dB_3(t),\\
dI_1(t) &= (\beta_1 SI_1 + k_2 I_1 V_2 - \alpha_1 I_1)\,dt + \sigma_4 I_1\,dB_4(t),\\
dI_2(t) &= (\beta_2 SI_2 + k_1 I_2 V_1 + \varepsilon I_1 - \alpha_2 I_2)\,dt + \sigma_5 I_2\,dB_5(t),
\end{aligned}\tag{3}$$
where $\sigma_i$ $(i = 1,2,3,4,5)$ represent the intensities of the white noises, and $B_i(t)$ $(i = 1,2,3,4,5)$ are mutually independent standard Brownian motions. However, in reality the groups $S$, $V_1$, $V_2$, $I_1$, and $I_2$ are usually subject to the same random factors, such as temperature and humidity. As a result, it is more reasonable to assume that the five random perturbation noises are correlated; here we take them to be identical. Setting $B_i(t) = B(t)$ $(i = 1,2,3,4,5)$, model (3) becomes:
$$\begin{aligned}
dS(t) &= (\Lambda - \beta_1 SI_1 - \beta_2 SI_2 - \lambda S)\,dt + \sigma_1 S\,dB(t),\\
dV_1(t) &= (\varphi_1 S - k_1 I_2 V_1 - a V_1)\,dt + \sigma_2 V_1\,dB(t),\\
dV_2(t) &= (\varphi_2 S - k_2 I_1 V_2 - a V_2)\,dt + \sigma_3 V_2\,dB(t),\\
dI_1(t) &= (\beta_1 SI_1 + k_2 I_1 V_2 - \alpha_1 I_1)\,dt + \sigma_4 I_1\,dB(t),\\
dI_2(t) &= (\beta_2 SI_2 + k_1 I_2 V_1 + \varepsilon I_1 - \alpha_2 I_2)\,dt + \sigma_5 I_2\,dB(t).
\end{aligned}\tag{4}$$
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, \mathbb{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous, and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets). Throughout this paper we write $a \wedge b := \min\{a,b\}$, $a \vee b := \max\{a,b\}$, and $\mathbb{R}^{5,\circ}_{+} := \{(u,v,w,x,y): u,v,w,x,y > 0\}$.
First, we prove the global existence and uniqueness of the positive solution of model (4). As in the deterministic case, we introduce a threshold value $R_0^*$ that can be computed from the coefficients. We show that if $R_0^* < 0$, then $I(t) := I_1(t) + I_2(t)$ goes extinct with probability 1, and $S(t)$, $V_1(t)$, $V_2(t)$ converge weakly to their unique invariant probability measures $\mu_1^*$, $\mu_2^*$, $\mu_3^*$, respectively. If $R_0^* > 0$, then coexistence occurs, and all positive solutions of model (4) converge to the unique invariant probability measure $\mu^*$ in the total variation norm.
Most existing studies use the method of constructing a Lyapunov function to prove the existence of a stationary distribution of the solution of model (4). However, this method is not applicable to all models. In this paper, the stationary distribution is proved directly from its definition, an approach applicable to a wider class of models [24,25,26,27]. Moreover, most of the stochastic infectious disease models studied so far are of second or third order. Therefore, in order to describe infectious diseases more accurately, we establish a fifth-order model, namely a stochastic epidemic model with double vaccination and spontaneous virus mutation, which incorporates two kinds of vaccination for susceptible individuals on the basis of existing research on epidemics with virus mutation.
The main structure of this paper is as follows. In Section 2 we prove the global existence and uniqueness of the positive solution of model (4). Sections 3 and 4 are devoted to the proofs of extinction and coexistence, respectively. In Section 5, we provide examples to support our findings. In Section 6, the main results are discussed and summarized briefly.

2. Existence and Uniqueness of the Global Solutions

Theorem 1.
For any given initial value $(S(0), V_1(0), V_2(0), I_1(0), I_2(0)) \in \mathbb{R}^{5,\circ}_{+}$, there is a unique solution $(S(t), V_1(t), V_2(t), I_1(t), I_2(t))$ of model (4) for $t \ge 0$, and the solution remains in $\mathbb{R}^{5,\circ}_{+}$ with probability 1, i.e., $(S(t), V_1(t), V_2(t), I_1(t), I_2(t)) \in \mathbb{R}^{5,\circ}_{+}$ for all $t \ge 0$ almost surely.
Proof of Theorem 1.
Since the coefficients of model (4) are locally Lipschitz continuous, it follows from the existence and uniqueness theorem for solutions of stochastic differential equations that for any $(S(0), V_1(0), V_2(0), I_1(0), I_2(0)) \in \mathbb{R}^{5,\circ}_{+}$, model (4) has a unique local solution $(S(t), V_1(t), V_2(t), I_1(t), I_2(t))$ defined up to the explosion time $\tau_e$. To prove that the solution is global, we only need to show that $\tau_e = +\infty$ almost surely.
Let $k_0 > 0$ be a sufficiently large constant such that $S(0), V_1(0), V_2(0), I_1(0), I_2(0)$ all lie in the interval $[\frac{1}{k_0}, k_0]$. For each integer $k > k_0$, define the stopping time $\tau_k$ as follows:
$$\tau_k = \inf\Bigl\{ t \in [0, \tau_e) : S(t) \notin \bigl(\tfrac1k, k\bigr),\ \text{or } V_1(t) \notin \bigl(\tfrac1k, k\bigr),\ \text{or } V_2(t) \notin \bigl(\tfrac1k, k\bigr),\ \text{or } I_1(t) \notin \bigl(\tfrac1k, k\bigr),\ \text{or } I_2(t) \notin \bigl(\tfrac1k, k\bigr)\Bigr\},$$
where $\inf\emptyset = \infty$. Obviously, $\tau_k$ is increasing as $k \to \infty$.
Let $\tau_\infty = \lim_{k\to+\infty}\tau_k$; then $\tau_\infty \le \tau_e$. So we only have to prove $\tau_\infty = \infty$. Suppose, to the contrary, that $\tau_\infty < \infty$. Then there exist constants $T > 0$ and $\varepsilon_1 \in (0,1)$ such that $\mathbb{P}\{\tau_\infty \le T\} > \varepsilon_1$. Hence there is an integer $k_1 \ge k_0$ such that
$$\mathbb{P}\{\tau_k \le T\} \ge \varepsilon_1 \quad \text{for all } k \ge k_1. \tag{5}$$
Define the function $V : \mathbb{R}^{5,\circ}_{+} \to \mathbb{R}_{+}$ by $V(N(t)) = N(t) - 1 - \ln N(t)$, where $N(t) := S(t) + V_1(t) + V_2(t) + I_1(t) + I_2(t)$. Obviously, $V(N(t))$ is non-negative. If $(S(t), V_1(t), V_2(t), I_1(t), I_2(t)) \in \mathbb{R}^{5,\circ}_{+}$, then by Itô's formula there is a positive number $G := \Lambda + a + \gamma_1 + \gamma_2 + \delta + \frac12(\sigma_1^2 + \sigma_2^2 + \sigma_3^2 + \sigma_4^2 + \sigma_5^2)$ such that
$$\begin{aligned}
dV &= LV\,dt + \Bigl(1 - \frac1N\Bigr)(\sigma_1 S + \sigma_2 V_1 + \sigma_3 V_2 + \sigma_4 I_1 + \sigma_5 I_2)\,dB(t),\\
LV &= \Bigl(1 - \frac1N\Bigr)\bigl[\Lambda - aN - \gamma_1 I_1 - (\gamma_2+\delta) I_2\bigr] + \frac{1}{2N^2}\bigl(\sigma_1^2 S^2 + \sigma_2^2 V_1^2 + \sigma_3^2 V_2^2 + \sigma_4^2 I_1^2 + \sigma_5^2 I_2^2\bigr)\\
&= \Lambda - aN - \gamma_1 I_1 - (\gamma_2+\delta)I_2 - \frac{\Lambda}{N} + a + \frac{\gamma_1 I_1}{N} + \frac{(\gamma_2+\delta)I_2}{N} + \frac{1}{2N^2}\bigl(\sigma_1^2 S^2 + \sigma_2^2 V_1^2 + \sigma_3^2 V_2^2 + \sigma_4^2 I_1^2 + \sigma_5^2 I_2^2\bigr)\\
&\le \Lambda + a + \gamma_1 + \gamma_2 + \delta + \frac12\bigl(\sigma_1^2 + \sigma_2^2 + \sigma_3^2 + \sigma_4^2 + \sigma_5^2\bigr) := G,\\
dV &\le G\,dt + \Bigl(1 - \frac1N\Bigr)(\sigma_1 S + \sigma_2 V_1 + \sigma_3 V_2 + \sigma_4 I_1 + \sigma_5 I_2)\,dB(t).
\end{aligned}$$
Integrating both sides of the above inequality from $0$ to $\tau_k \wedge T$, we get
$$\int_0^{\tau_k \wedge T} dV \le \int_0^{\tau_k \wedge T} G\,dt + \int_0^{\tau_k \wedge T} \Bigl(1 - \frac1N\Bigr)(\sigma_1 S + \sigma_2 V_1 + \sigma_3 V_2 + \sigma_4 I_1 + \sigma_5 I_2)\,dB(t);$$
taking expectations, we obtain
$$\mathbb{E}\,V\bigl(N(\tau_k \wedge T)\bigr) \le V(N(0)) + G\,\mathbb{E}(\tau_k \wedge T) \le V(N(0)) + GT. \tag{6}$$
Set $\Omega_k = \{\tau_k \le T\}$ for $k \ge k_1$; by (5), $\mathbb{P}(\Omega_k) \ge \varepsilon_1$. Note that for every $\omega \in \Omega_k$, at least one of $S(\tau_k,\omega)$, $V_1(\tau_k,\omega)$, $V_2(\tau_k,\omega)$, $I_1(\tau_k,\omega)$, $I_2(\tau_k,\omega)$ equals either $k$ or $\frac1k$, and hence
$$V\bigl(N(\tau_k,\omega)\bigr) \ge \min\Bigl\{k - 1 - \ln k,\ \frac1k - 1 + \ln k\Bigr\}.$$
It then follows from (6) that
$$V(N(0)) + GT \ge \mathbb{E}\bigl[\mathbf{1}_{\Omega_k}(\omega)\,V\bigl(N(\tau_k,\omega)\bigr)\bigr] \ge \varepsilon_1 \min\Bigl\{k - 1 - \ln k,\ \frac1k - 1 + \ln k\Bigr\},$$
where $\mathbf{1}_{\Omega_k}$ is the indicator function of $\Omega_k$. Letting $k \to \infty$, we obtain the following contradiction:
$$\infty > V(N(0)) + GT = \infty.$$
So we must have $\tau_\infty = \infty$ a.s. This completes the proof of Theorem 1. □

3. Extinction of Disease

For an infectious disease model, we always care about whether the disease will die out. In this section, we first define a threshold value $R_0^*$ and then prove that the disease in model (4) goes extinct when $R_0^* < 0$.
To obtain further properties of the solution, we consider the boundary equation associated with the first equation of model (4):
$$d\bar S(t) = \bigl[\Lambda - \lambda \bar S(t)\bigr]dt + \sigma_1 \bar S(t)\,dB(t); \tag{7}$$
so we have,
$$\frac1t \int_0^t \bar S(\tau)\,d\tau = \frac{\bar S(0) - \bar S(t)}{\lambda t} + \frac{\Lambda}{\lambda} + \frac{\sigma_1}{\lambda t}\int_0^t \bar S(\tau)\,dB(\tau).$$
For a given initial value $u$, let $\bar S(t)$ be the solution of model (7). By the comparison theorem, $S^{u,v,w,x,y}(t) \le \bar S(t)$ for all $t \ge 0$. By solving the Fokker–Planck equation, the process $\bar S(t)$ has a unique stationary distribution with density $f_1^*(x)$, and by the strong law of large numbers we have
$$\lim_{t\to\infty} \frac1t \int_0^t \bar S(\tau)\,d\tau = \int_0^{\infty} x f_1^*(x)\,dx = \frac{\Lambda}{\lambda}.$$
For other equations of model (4), we use the same method to obtain:
$$d\bar V_1(t) = \bigl[\varphi_1 \bar S(t) - a \bar V_1(t)\bigr]dt + \sigma_2 \bar V_1(t)\,dB(t),$$
we have
$$\lim_{t\to\infty}\frac1t\int_0^t \bar V_1(\tau)\,d\tau = \int_0^{\infty} x f_2^*(x)\,dx = \frac{\varphi_1\Lambda}{a\lambda},$$
then similarly
$$d\bar V_2(t) = \bigl[\varphi_2 \bar S(t) - a \bar V_2(t)\bigr]dt + \sigma_3 \bar V_2(t)\,dB(t),$$
therefore
$$\lim_{t\to\infty}\frac1t\int_0^t \bar V_2(\tau)\,d\tau = \int_0^{\infty} x f_3^*(x)\,dx = \frac{\varphi_2\Lambda}{a\lambda},$$
where $f_2^*(x)$ and $f_3^*(x)$ are defined analogously to $f_1^*(x)$.
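The long-run averages above can also be checked by simulation. The following sketch (ours; a plain Euler–Maruyama scheme rather than the Milstein scheme used later in the paper) integrates the boundary Equation (7) and compares the time average of $\bar S$ with $\Lambda/\lambda$; step size, horizon, and parameter values are illustrative choices.

```python
# Sketch: Euler-Maruyama simulation of the boundary SDE (7),
#   d S_bar = (Lambda - lambda*S_bar) dt + sigma1*S_bar dB,
# to check that (1/t) * integral of S_bar approaches Lambda/lambda.
# Parameter values, step size and horizon are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
Lam, lam, sigma1 = 15.0, 1.0, 0.5       # illustrative values
dt, n_steps = 1e-3, 2_000_000

S = Lam / lam                            # start at the deterministic equilibrium
running_sum = 0.0
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt))
    S += (Lam - lam * S) * dt + sigma1 * S * dB
    S = max(S, 1e-12)                    # guard against numerical negativity
    running_sum += S * dt

time_average = running_sum / (n_steps * dt)
print(f"time average of S_bar ~= {time_average:.3f}, Lambda/lambda = {Lam/lam:.3f}")
```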
To proceed, we define the threshold as follows:
$$R_0^* = \frac{(\beta_1+\beta_2)\Lambda}{\lambda} + \frac{k_1\varphi_1\Lambda}{a\lambda} + \frac{k_2\varphi_2\Lambda}{a\lambda} + \varepsilon - \alpha,$$
where $\alpha = \alpha_1 \wedge \alpha_2$.
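As a quick sanity check (ours, not from the paper), the following sketch evaluates $R_0^*$ from the model coefficients; with the parameter values of Example 1 in Section 5 it reproduces the value $R_0^* = 40.94$ reported there. The function name and organization are assumptions of this illustration.

```python
# Sketch: evaluate the stochastic threshold R0* of model (4).
# With the parameters of Example 1 (Section 5) this gives R0* = 40.94 > 0.

def stochastic_threshold(Lam, a, beta1, beta2, gamma1, gamma2,
                         phi1, phi2, eps, delta, k1, k2):
    lam = a + phi1 + phi2
    alpha1 = a + gamma1 + eps
    alpha2 = a + gamma2 + delta
    alpha = min(alpha1, alpha2)                       # alpha = alpha1 ^ alpha2
    return ((beta1 + beta2) * Lam / lam
            + k1 * phi1 * Lam / (a * lam)
            + k2 * phi2 * Lam / (a * lam)
            + eps - alpha)

R0_star = stochastic_threshold(Lam=15, a=0.2, beta1=0.15, beta2=0.15,
                               gamma1=0.5, gamma2=0.15, phi1=0.4, phi2=0.4,
                               eps=0.8, delta=0.01, k1=0.7, k2=0.5)
print(f"R0* = {R0_star:.2f}")   # 40.94 > 0: persistence regime of Theorem 8
```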
Theorem 2.
If $R_0^* < 0$, then for any initial value $(S(0), V_1(0), V_2(0), I_1(0), I_2(0)) = (u,v,w,x,y) \in \mathbb{R}^{5,\circ}_{+}$, $\limsup_{t\to\infty}\frac{\ln I^{u,v,w,x,y}(t)}{t} \le R_0^*$ a.s., and the distributions of $S^{u,v,w,x,y}(t)$, $V_1^{u,v,w,x,y}(t)$, $V_2^{u,v,w,x,y}(t)$ converge weakly to the unique invariant probability measures $\mu_1^*$, $\mu_2^*$, $\mu_3^*$ with densities $f_1^*$, $f_2^*$, $f_3^*$, respectively.
Proof of Theorem 2.
Consider the function $I(t) := I_1(t) + I_2(t)$. Applying Itô's formula to $\ln I(t)$, we have
$$\begin{aligned}
d\ln I(t) &= \Bigl[\frac{1}{I(t)}\bigl(\beta_1 S(t)I_1(t) + k_2 I_1(t)V_2(t) - \alpha_1 I_1(t) + \beta_2 S(t)I_2(t) + k_1 I_2(t)V_1(t) + \varepsilon I_1(t) - \alpha_2 I_2(t)\bigr) - \frac{\sigma_4^2 I_1^2(t) + \sigma_5^2 I_2^2(t)}{2 I^2(t)}\Bigr]dt + \frac{\sigma_4 I_1(t) + \sigma_5 I_2(t)}{I(t)}\,dB(t)\\
&\le \Bigl[(\beta_1+\beta_2)S(t) + k_2 \frac{I_1(t)}{I(t)}V_2(t) + k_1 \frac{I_2(t)}{I(t)}V_1(t) + \varepsilon \frac{I_1(t)}{I(t)} - \alpha\Bigr]dt + \frac{\sigma_4 I_1(t) + \sigma_5 I_2(t)}{I(t)}\,dB(t)\\
&\le \bigl[(\beta_1+\beta_2)\bar S(t) + k_1 \bar V_1(t) + k_2 \bar V_2(t) + \varepsilon - \alpha\bigr]dt + (\sigma_4 + \sigma_5)\,dB(t),
\end{aligned}$$
where $\alpha = \alpha_1 \wedge \alpha_2$.
Integrating both sides of this inequality from $0$ to $t$ gives
$$\ln I(t) - \ln I(0) \le (\beta_1+\beta_2)\int_0^t \bar S(\tau)\,d\tau + k_1\int_0^t \bar V_1(\tau)\,d\tau + k_2\int_0^t \bar V_2(\tau)\,d\tau + (\varepsilon - \alpha)t + (\sigma_4+\sigma_5)\int_0^t dB(\tau).$$
Dividing both sides by $t$, letting $t \to \infty$, and using the limits above, we finally obtain
$$\limsup_{t\to\infty}\frac1t\ln I(t) \le \frac{(\beta_1+\beta_2)\Lambda}{\lambda} + \frac{k_1\varphi_1\Lambda}{a\lambda} + \frac{k_2\varphi_2\Lambda}{a\lambda} + \varepsilon - \alpha = R_0^* < 0.$$
Hence, I ( t ) converges almost surely to 0 at an exponential rate.
For any $\varepsilon_1 > 0$, it follows from the above that there exists $t_0 > 0$ such that $\mathbb{P}(\Omega_{\varepsilon_1}) > 1 - \varepsilon_1$, where
$$\Omega_{\varepsilon_1} = \bigl\{\ln I(t) \le R_0^* t,\ t \ge t_0\bigr\} = \bigl\{I(t) \le e^{R_0^* t},\ t \ge t_0\bigr\}.$$
Case 1.
S u , v , w , x , y ( t ) converges weakly to the unique invariant probability measure μ 1 * with the density f 1 * .
We can choose $t_0$ such that $\frac{2\beta}{-R_0^*}e^{R_0^* t_0} < \varepsilon_1$. Let $\bar S(t)$ be the solution of (7) with $\bar S(t_0) = S(t_0)$; then $\mathbb{P}\{S^{u,v,w,x,y}(t) \le \bar S(t)\} = 1$ by the comparison theorem. In view of Itô's formula, for almost all $\varpi \in \Omega_{\varepsilon_1}$ we have
$$0 \le \ln \bar S(t) - \ln S(t) = \Lambda\int_{t_0}^t\Bigl(\frac{1}{\bar S(\tau)} - \frac{1}{S(\tau)}\Bigr)d\tau + \int_{t_0}^t\bigl(\beta_1 I_1(\tau) + \beta_2 I_2(\tau)\bigr)d\tau \le \beta\int_{t_0}^t I(\tau)\,d\tau \le \beta\int_{t_0}^t e^{R_0^*\tau}\,d\tau = \frac{\beta}{-R_0^*}\bigl(e^{R_0^* t_0} - e^{R_0^* t}\bigr) < \varepsilon_1,$$
where $\beta = \beta_1 \vee \beta_2$. As a result, for any $t \ge t_0$ we have
$$\mathbb{P}\bigl\{|\ln S(t) - \ln \bar S(t)| \le \varepsilon_1\bigr\} > 1 - \varepsilon_1,\qquad \text{equivalently}\qquad \mathbb{P}\bigl\{|\ln S(t) - \ln \bar S(t)| > \varepsilon_1\bigr\} < \varepsilon_1.$$
We now use the equivalent statement that the distribution of $\ln S(t)$ converges weakly to $\nu_1^*$ if and only if the distribution of $S(t)$ converges weakly to $\mu_1^*$. By the Portmanteau theorem, it suffices to prove that for any $g : \mathbb{R} \to \mathbb{R}$ satisfying $|g(x) - g(y)| \le |x-y|$ and $|g(x)| \le 1$ for all $x, y \in \mathbb{R}$, we have
$$\mathbb{E}\,g\bigl(\ln S^{u,v,w,x,y}(t)\bigr) \to \bar g_1 := \int_{\mathbb{R}} g(x)\,\nu_1^*(dx) = \int_0^{\infty} g(\ln x)\,\mu_1^*(dx).$$
Because the diffusion (7) is non-degenerate, the distribution of $\bar S(t)$ converges weakly to $\mu_1^*$ as $t \to \infty$. Therefore
$$\lim_{t\to\infty}\mathbb{E}\,g\bigl(\ln \bar S(t)\bigr) = \bar g_1,$$
and hence
$$\begin{aligned}
\bigl|\mathbb{E}\,g(\ln S(t)) - \bar g_1\bigr| &= \bigl|\mathbb{E}\,g(\ln S(t)) - \mathbb{E}\,g(\ln \bar S(t)) + \mathbb{E}\,g(\ln \bar S(t)) - \bar g_1\bigr| \le \mathbb{E}\bigl|g(\ln S(t)) - g(\ln \bar S(t))\bigr| + \bigl|\mathbb{E}\,g(\ln \bar S(t)) - \bar g_1\bigr|\\
&\le \varepsilon_1\,\mathbb{P}\bigl\{|\ln S(t) - \ln \bar S(t)| \le \varepsilon_1\bigr\} + 2\,\mathbb{P}\bigl\{|\ln S(t) - \ln \bar S(t)| > \varepsilon_1\bigr\} + \bigl|\mathbb{E}\,g(\ln \bar S(t)) - \bar g_1\bigr|.
\end{aligned}$$
Combining the last three estimates, we obtain
$$\limsup_{t\to\infty}\bigl|\mathbb{E}\,g(\ln S(t)) - \bar g_1\bigr| \le 3\varepsilon_1.$$
Case 2.
V 1 u , v , w , x , y ( t ) converges weakly to the unique invariant probability measure μ 2 * with the density f 2 * .
Similarly to Case 1, we can choose $t_0$ such that $\frac{2k_1}{-R_0^*}e^{R_0^* t_0} < \varepsilon_1$. Then we get
$$\ln \bar V_1(t) - \ln V_1(t) = \varphi_1\int_{t_0}^t\Bigl(\frac{\bar S(\tau)}{\bar V_1(\tau)} - \frac{S(\tau)}{V_1(\tau)}\Bigr)d\tau + k_1\int_{t_0}^t I_2(\tau)\,d\tau \le k_1\int_{t_0}^t I(\tau)\,d\tau \le k_1\int_{t_0}^t e^{R_0^*\tau}\,d\tau = \frac{k_1}{-R_0^*}\bigl(e^{R_0^* t_0} - e^{R_0^* t}\bigr) < \varepsilon_1.$$
As a result, for any t t 0 we have
$$\mathbb{P}\bigl\{|\ln V_1(t) - \ln \bar V_1(t)| \le \varepsilon_1\bigr\} > 1 - \varepsilon_1,\qquad \text{equivalently}\qquad \mathbb{P}\bigl\{|\ln V_1(t) - \ln \bar V_1(t)| > \varepsilon_1\bigr\} < \varepsilon_1;$$
then, as in Case 1, it suffices to prove that
$$\mathbb{E}\,g\bigl(\ln V_1^{u,v,w,x,y}(t)\bigr) \to \bar g_2 := \int_{\mathbb{R}} g(x)\,\nu_2^*(dx) = \int_0^{\infty} g(\ln x)\,\mu_2^*(dx).$$
Thus
$$\lim_{t\to\infty}\mathbb{E}\,g\bigl(\ln \bar V_1^{\,v}(t)\bigr) = \bar g_2,$$
and hence
$$\begin{aligned}
\bigl|\mathbb{E}\,g(\ln V_1(t)) - \bar g_2\bigr| &= \bigl|\mathbb{E}\,g(\ln V_1(t)) - \mathbb{E}\,g(\ln \bar V_1(t)) + \mathbb{E}\,g(\ln \bar V_1(t)) - \bar g_2\bigr|\\
&\le \varepsilon_1\,\mathbb{P}\bigl\{|\ln V_1(t) - \ln \bar V_1(t)| \le \varepsilon_1\bigr\} + 2\,\mathbb{P}\bigl\{|\ln V_1(t) - \ln \bar V_1(t)| > \varepsilon_1\bigr\} + \bigl|\mathbb{E}\,g(\ln \bar V_1(t)) - \bar g_2\bigr|.
\end{aligned}$$
Combining the above estimates, we obtain
$$\limsup_{t\to\infty}\bigl|\mathbb{E}\,g\bigl(\ln V_1^{u,v,w,x,y}(t)\bigr) - \bar g_2\bigr| \le 3\varepsilon_1.$$
Case 3.
V 2 u , v , w , x , y ( t ) converges weakly to the unique invariant probability measure μ 3 * with the density f 3 * .
The proof method is the same as above. Since ε 1 is taken arbitrarily, we obtain the desired conclusion. The proof is completed. □

4. Stationary Distribution

Now we focus on the case $R_0^* > 0$. Let $P(t,(u,v,w,x,y),\cdot)$ be the transition probability of $(S^{u,v,w,x,y}(t), V_1^{u,v,w,x,y}(t), V_2^{u,v,w,x,y}(t), I_1^{u,v,w,x,y}(t), I_2^{u,v,w,x,y}(t))$. Because the diffusion of model (4) is degenerate (recall $B_1(t) = B_2(t) = B_3(t) = B_4(t) = B_5(t) = B(t)$), we rewrite the model in Stratonovich form in order to obtain properties of $P(t,(u,v,w,x,y),\cdot)$:
$$\begin{aligned}
dS(t) &= \bigl(\Lambda - c_1 S(t) - \beta_1 S(t)I_1(t) - \beta_2 S(t)I_2(t)\bigr)dt + \sigma_1 S(t)\circ dB(t),\\
dV_1(t) &= \bigl(-c_2 V_1(t) + \varphi_1 S(t) - k_1 I_2(t)V_1(t)\bigr)dt + \sigma_2 V_1(t)\circ dB(t),\\
dV_2(t) &= \bigl(-c_3 V_2(t) + \varphi_2 S(t) - k_2 I_1(t)V_2(t)\bigr)dt + \sigma_3 V_2(t)\circ dB(t),\\
dI_1(t) &= \bigl(-c_4 I_1(t) + \beta_1 S(t)I_1(t) + k_2 I_1(t)V_2(t)\bigr)dt + \sigma_4 I_1(t)\circ dB(t),\\
dI_2(t) &= \bigl(-c_5 I_2(t) + \beta_2 S(t)I_2(t) + k_1 I_2(t)V_1(t) + \varepsilon I_1(t)\bigr)dt + \sigma_5 I_2(t)\circ dB(t),
\end{aligned}$$
where
$$c_1 = \lambda + \frac{\sigma_1^2}{2};\quad c_2 = a + \frac{\sigma_2^2}{2};\quad c_3 = a + \frac{\sigma_3^2}{2};\quad c_4 = \alpha_1 + \frac{\sigma_4^2}{2};\quad c_5 = \alpha_2 + \frac{\sigma_5^2}{2}.$$
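The correction constants $c_i$ are the standard Itô-to-Stratonovich drift corrections; as a check, the first equation works out as follows (the other four are analogous):
$$dS = (\Lambda - \beta_1 SI_1 - \beta_2 SI_2 - \lambda S)\,dt + \sigma_1 S\,dB(t) \;\Longleftrightarrow\; dS = \Bigl(\Lambda - \beta_1 SI_1 - \beta_2 SI_2 - \lambda S - \tfrac12\sigma_1 S\,\partial_S(\sigma_1 S)\Bigr)dt + \sigma_1 S\circ dB(t),$$
and since $\partial_S(\sigma_1 S) = \sigma_1$, the Stratonovich drift of the first equation is $\Lambda - \bigl(\lambda + \tfrac{\sigma_1^2}{2}\bigr)S - \beta_1 SI_1 - \beta_2 SI_2 = \Lambda - c_1 S - \beta_1 SI_1 - \beta_2 SI_2$.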
Let
$$A(u,v,w,x,y) = \begin{pmatrix} \Lambda - c_1 u - \beta_1 ux - \beta_2 uy\\ -c_2 v + \varphi_1 u - k_1 vy\\ -c_3 w + \varphi_2 u - k_2 wx\\ -c_4 x + \beta_1 ux + k_2 wx\\ -c_5 y + \beta_2 uy + k_1 vy + \varepsilon x \end{pmatrix},\qquad B = \begin{pmatrix} \sigma_1 u\\ \sigma_2 v\\ \sigma_3 w\\ \sigma_4 x\\ \sigma_5 y \end{pmatrix};$$
to proceed, we first recall the notion of the Lie bracket. If $X(a_1,\dots,a_n) = (X_1,\dots,X_n)$ and $Y(a_1,\dots,a_n) = (Y_1,\dots,Y_n)$ are two vector fields on $\mathbb{R}^n$, then the Lie bracket $[X,Y]$ is the vector field given by
$$[X,Y]_i(a_1,\dots,a_n) = \sum_{j=1}^{n}\Bigl(X_j\frac{\partial Y_i}{\partial a_j}(a_1,\dots,a_n) - Y_j\frac{\partial X_i}{\partial a_j}(a_1,\dots,a_n)\Bigr),\qquad i = 1,2,\dots,n.$$
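As an illustration of this definition (added here; not part of the original paper), the following sympy sketch computes Lie brackets of symbolic vector fields; the same helper can in principle be applied to the five-dimensional fields $A$ and $B$ above to reproduce $C = [A,B]$, $D = [B,C]$, and so on. The toy two-dimensional fields below are our own assumptions.

```python
# Sketch: symbolic Lie bracket [X, Y]_i = sum_j (X_j * dY_i/da_j - Y_j * dX_i/da_j),
# illustrated on a two-dimensional toy analogue of the drift/diffusion pair (A, B).
import sympy as sp

def lie_bracket(X, Y, variables):
    """Lie bracket [X, Y] of two vector fields given as lists of sympy expressions."""
    n = len(variables)
    return [sp.simplify(sum(X[j] * sp.diff(Y[i], variables[j])
                            - Y[j] * sp.diff(X[i], variables[j])
                            for j in range(n)))
            for i in range(n)]

u, x = sp.symbols("u x", positive=True)
Lam, beta1, c1, c4, sigma1, sigma4 = sp.symbols("Lambda beta1 c1 c4 sigma1 sigma4", positive=True)

# Toy two-compartment analogue of (A, B): drift and diffusion fields in (u, x).
A = [Lam - c1 * u - beta1 * u * x, -c4 * x + beta1 * u * x]
B = [sigma1 * u, sigma4 * x]

C = lie_bracket(A, B, [u, x])   # C = [A, B]
print(C)
```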
Let $\mathcal{L}(u,v,w,x,y)$ denote the Lie algebra generated by $A(u,v,w,x,y)$ and $B(u,v,w,x,y)$, and let $\mathcal{L}_0(u,v,w,x,y)$ denote the ideal in $\mathcal{L}(u,v,w,x,y)$ generated by $B$. We have the following theorem.
Theorem 3.
The ideal $\mathcal{L}_0(u,v,w,x,y)$ in $\mathcal{L}(u,v,w,x,y)$ generated by $B(u,v,w,x,y)$ satisfies $\dim \mathcal{L}_0(u,v,w,x,y) = 5$ at every $(u,v,w,x,y) \in \mathbb{R}^{5,\circ}_{+}$. In other words, the set of vectors $B$, $[A,B]$, $[B,[A,B]]$, $[B,[B,[A,B]]]$, … spans $\mathbb{R}^5$ at every $(u,v,w,x,y) \in \mathbb{R}^{5,\circ}_{+}$. As a result, the transition probability $P(t,(u,v,w,x,y),\cdot)$ has a smooth density $p(t, u, v, w, x, y, u', v', w', x', y')$.
Proof of Theorem 3.
By direct calculation,
C = A , B = σ 1 Λ + σ 4 β 1 u x + σ 5 β 2 u y σ 1 φ 1 u + σ 2 φ 1 u + σ 5 k 1 v y σ 1 φ 2 u + σ 3 φ 1 u + σ 4 k 2 w y σ 1 β 1 u x σ 3 k 2 w x σ 1 β 2 u y σ 2 k 1 v y σ 4 ε x + σ 5 ε x , D = B , C = σ 1 2 Λ + σ 4 2 β 1 u x + σ 5 2 β 2 u y ( σ 1 σ 2 ) 2 φ 1 u + σ 5 2 k 1 v y ( σ 1 σ 3 ) 2 φ 2 u + σ 4 2 k 2 w x σ 1 2 β 1 u x σ 3 2 k 2 w x σ 1 2 β 2 u y σ 2 2 k 1 v y ( σ 4 σ 5 ) 2 ε x , E = C , D = e 11 e 21 e 31 e 41 e 51 , F = D , E = f 11 f 21 f 31 f 41 f 51 ,
where elements in matrices E and F are shown in Appendix A.
Consequently,
$$\det(B, C, D, E, F) \ne 0,$$
which means that $B$, $C = [A,B]$, $D = [B,C]$, $E = [C,D]$, $F = [D,E]$ are linearly independent. As a result, these five vector fields span $\mathbb{R}^5$ for all $(u,v,w,x,y) \in \mathbb{R}^{5,\circ}_{+}$. Theorem 3 is proved. □
In view of the Hörmander theorem, the transition probability function $P(t, (u_0, v_0, w_0, x_0, y_0), \cdot)$ has a density $k(t, u, v, w, x, y, u_0, v_0, w_0, x_0, y_0)$ with $k \in C^\infty\bigl((0,\infty)\times\mathbb{R}^{5,\circ}_{+}\times\mathbb{R}^{5,\circ}_{+}\bigr)$. We now check that the kernel $k$ is positive. Fix a point $(u_0, v_0, w_0, x_0, y_0) \in \mathbb{R}^{5,\circ}_{+}$ and a control function $\phi$, and consider the following system of integral equations:
$$\begin{aligned}
u_\phi(t) &= u_0 + \int_0^t\bigl[\sigma_1\phi(\tau)\,u_\phi(\tau) + f_1(u_\phi, v_\phi, w_\phi, x_\phi, y_\phi)\bigr]d\tau,\\
v_\phi(t) &= v_0 + \int_0^t\bigl[\sigma_2\phi(\tau)\,v_\phi(\tau) + f_2(u_\phi, v_\phi, w_\phi, x_\phi, y_\phi)\bigr]d\tau,\\
w_\phi(t) &= w_0 + \int_0^t\bigl[\sigma_3\phi(\tau)\,w_\phi(\tau) + f_3(u_\phi, v_\phi, w_\phi, x_\phi, y_\phi)\bigr]d\tau,\\
x_\phi(t) &= x_0 + \int_0^t\bigl[\sigma_4\phi(\tau)\,x_\phi(\tau) + f_4(u_\phi, v_\phi, w_\phi, x_\phi, y_\phi)\bigr]d\tau,\\
y_\phi(t) &= y_0 + \int_0^t\bigl[\sigma_5\phi(\tau)\,y_\phi(\tau) + f_5(u_\phi, v_\phi, w_\phi, x_\phi, y_\phi)\bigr]d\tau,
\end{aligned}\tag{19}$$
where
$$f_1 = \Lambda - c_1 u - \beta_1 ux - \beta_2 uy;\quad f_2 = -c_2 v + \varphi_1 u - k_1 vy;\quad f_3 = -c_3 w + \varphi_2 u - k_2 wx;\quad f_4 = -c_4 x + \beta_1 ux + k_2 wx;\quad f_5 = -c_5 y + \beta_2 uy + k_1 vy + \varepsilon x.$$
Let $D_{u_0,v_0,w_0,x_0,y_0;\phi}$ be the Fréchet derivative of the map $h \mapsto \bigl(u_{\phi+h}(T), v_{\phi+h}(T), w_{\phi+h}(T), x_{\phi+h}(T), y_{\phi+h}(T)\bigr)$. If for some $\phi$ the derivative $D_{u_0,v_0,w_0,x_0,y_0;\phi}$ has rank 5, then $k(T, u, v, w, x, y, u_0, v_0, w_0, x_0, y_0) > 0$ for $u = u_\phi(T)$, $v = v_\phi(T)$, $w = w_\phi(T)$, $x = x_\phi(T)$, and $y = y_\phi(T)$. The derivative $D_{u_0,v_0,w_0,x_0,y_0;\phi}$ can be found by means of the perturbation method for ODEs.
Namely, let
$$\Gamma(t) = f'\bigl(u_\phi(t), v_\phi(t), w_\phi(t), x_\phi(t), y_\phi(t)\bigr),$$
where $f'$ is the Jacobian of $f = (f_1, f_2, f_3, f_4, f_5)$, and let $Q(t, t_0)$, for $T \ge t \ge t_0 \ge 0$, be the matrix function such that
$$Q(t_0, t_0) = I,\qquad \frac{\partial Q(t, t_0)}{\partial t} = \Gamma(t)\,Q(t, t_0),$$
and
$$v = (\sigma_1, \sigma_2, \sigma_3, \sigma_4, \sigma_5)^{T};$$
then $D_{u_0,v_0,w_0,x_0,y_0;\phi}\,h = \int_0^T Q(T,s)\,g(s)\,h(s)\,ds$.
Theorem 4.
For any ( u 0 , v 0 , w 0 , x 0 , y 0 ) R + 5 , and ( u , v , w , x , y ) R + 5 , , there exists T > 0 such that k ( T , u , v , w , x , y , u 0 , v 0 , w 0 , x 0 , y 0 ) > 0 .
Proof of Theorem 4.
First, we check that the rank of $D_{u_0,v_0,w_0,x_0,y_0;\phi}$ is 5. Let $\varepsilon_1 \in (0,T)$ and $h(t) = \mathbf{1}_{[T-\varepsilon_1, T]}(t)$, $t \in (0,T)$. Since
$$Q(T,s) = I + \Gamma(T)(s-T) + \frac12\Gamma^2(T)(s-T)^2 + \frac16\Gamma^3(T)(s-T)^3 + \frac1{24}\Gamma^4(T)(s-T)^4 + o\bigl((s-T)^4\bigr),$$
we obtain
$$D_{u_0,v_0,w_0,x_0,y_0;\phi}\,h = \varepsilon_1 v - \frac12\varepsilon_1^2\,\Gamma(T)v + \frac16\varepsilon_1^3\,\Gamma^2(T)v - \frac1{24}\varepsilon_1^4\,\Gamma^3(T)v + \frac1{120}\varepsilon_1^5\,\Gamma^4(T)v + o(\varepsilon_1^5).$$
Direct calculation gives
Γ ( T ) v = σ 1 a 11 + σ 4 a 14 + σ 5 a 15 σ 1 a 21 + σ 2 a 22 + σ 5 a 25 σ 1 a 31 + σ 3 a 33 + σ 4 a 34 σ 1 a 41 + σ 3 a 43 + σ 4 a 44 σ 1 a 51 + σ 2 a 52 + σ 4 a 54 + σ 5 a 55 ; Γ 2 ( T ) v = σ 1 b 11 + σ 2 b 12 + σ 3 b 13 + σ 4 b 14 + σ 5 b 15 σ 1 b 21 + σ 2 b 22 + σ 4 b 24 + σ 5 b 25 σ 1 b 31 + σ 3 a 33 + σ 4 b 34 + σ 5 b 35 σ 1 b 41 + σ 3 b 43 + σ 4 b 44 + σ 5 b 45 σ 1 b 51 + σ 2 b 52 + σ 3 b 53 + σ 4 b 54 + σ 5 b 55 ; Γ 3 ( T ) v = σ 1 c 11 + σ 2 c 12 + σ 3 c 13 + σ 4 c 14 + σ 5 c 15 σ 1 c 21 + σ 2 c 22 + σ 3 c 23 + σ 4 c 24 + σ 5 c 25 σ 1 c 31 + σ 2 c 32 + σ 3 c 33 + σ 4 c 34 + σ 5 c 35 σ 1 c 41 + σ 2 c 42 + σ 3 c 43 + σ 4 c 44 + σ 5 c 45 σ 1 c 51 + σ 2 c 52 + σ 3 c 53 + σ 4 c 54 + σ 5 c 55 ; Γ 4 ( T ) v = σ 1 d 11 + σ 2 d 12 + σ 3 d 13 + σ 4 d 14 + σ 5 d 15 σ 1 d 21 + σ 2 d 22 + σ 3 d 23 + σ 4 d 24 + σ 5 d 25 σ 1 d 31 + σ 2 d 32 + σ 3 d 33 + σ 4 d 34 + σ 5 d 35 σ 1 d 41 + σ 2 d 42 + σ 3 d 43 + σ 4 d 44 + σ 5 d 45 σ 1 d 51 + σ 2 d 52 + σ 3 d 53 + σ 4 d 54 + σ 5 d 55 ,
where elements in matrices Γ ( T ) , Γ 2 ( T ) , Γ 3 ( T ) , and Γ 4 ( T ) are shown in Appendix B.
Therefore, it follows that $v$, $\Gamma(T)v$, $\Gamma^2(T)v$, $\Gamma^3(T)v$, $\Gamma^4(T)v$ are linearly independent, and the derivative $D_{u_0,v_0,w_0,x_0,y_0;\phi}$ has rank 5.
Putting
$$r_1 = -\frac{\sigma_2}{\sigma_1},\qquad r_2 = -\frac{\sigma_3}{\sigma_1},\qquad r_3 = -\frac{\sigma_4}{\sigma_1},\qquad r_4 = -\frac{\sigma_5}{\sigma_1},$$
and
$$\bar v_\phi = u_\phi^{r_1}(t)\,v_\phi(t),\qquad \bar w_\phi = u_\phi^{r_2}(t)\,w_\phi(t),\qquad \bar x_\phi = u_\phi^{r_3}(t)\,x_\phi(t),\qquad \bar y_\phi = u_\phi^{r_4}(t)\,y_\phi(t),$$
we have an equivalent model of model (19)
u ˙ ϕ ( t ) = σ 1 ϕ ( t ) u ϕ ( t ) + g 1 ( u ϕ ( t ) , v ¯ ϕ ( t ) , w ¯ ϕ ( t ) , x ¯ ϕ ( t ) , y ¯ ϕ ( t ) ) v ¯ ˙ ϕ ( t ) = g 2 ( u ϕ ( t ) , v ¯ ϕ ( t ) , w ¯ ϕ ( t ) , x ¯ ϕ ( t ) , y ¯ ϕ ( t ) ) w ¯ ˙ ϕ ( t ) = g 3 ( u ϕ ( t ) , v ¯ ϕ ( t ) , w ¯ ϕ ( t ) , x ¯ ϕ ( t ) , y ¯ ϕ ( t ) ) x ¯ ˙ ϕ ( t ) = g 4 ( u ϕ ( t ) , v ¯ ϕ ( t ) , w ¯ ϕ ( t ) , x ¯ ϕ ( t ) , y ¯ ϕ ( t ) ) y ¯ ˙ ϕ ( t ) = g 5 ( u ϕ ( t ) , v ¯ ϕ ( t ) , w ¯ ϕ ( t ) , x ¯ ϕ ( t ) , y ¯ ϕ ( t ) )
where
$$\begin{aligned}
g_1(u,\bar v,\bar w,\bar x,\bar y) &= \Lambda - c_1 u - \beta_1 \bar x u^{1-r_3} - \beta_2 \bar y u^{1-r_4};\\
g_2(u,\bar v,\bar w,\bar x,\bar y) &= u^{-r_1}\bar v\Bigl[-(c_1 r_1 + c_2)u^{r_1} + \Lambda r_1 u^{r_1-1} + \varphi_1 u^{2r_1+1}\bar v^{-1} - \beta_1 r_1 \bar x u^{r_1-r_3} - (\beta_2 r_1 + k_1)\bar y u^{r_1-r_4}\Bigr];\\
g_3(u,\bar v,\bar w,\bar x,\bar y) &= u^{-r_2}\bar w\Bigl[-(c_1 r_2 + c_3)u^{r_2} + \Lambda r_2 u^{r_2-1} + \varphi_2 u^{2r_2+1}\bar w^{-1} - \beta_2 r_2 \bar y u^{r_2-r_4} - (\beta_1 r_2 + k_2)\bar x u^{r_2-r_3}\Bigr];\\
g_4(u,\bar v,\bar w,\bar x,\bar y) &= u^{-r_3}\bar x\Bigl[-(c_1 r_3 + c_4)u^{r_3} + \Lambda r_3 u^{r_3-1} + \beta_1 u^{r_3+1} - \beta_1 r_3 \bar x - \beta_2 r_3 \bar y u^{r_3-r_4} + k_2 \bar w u^{r_3-r_2}\Bigr];\\
g_5(u,\bar v,\bar w,\bar x,\bar y) &= u^{-r_4}\bar y\Bigl[-(c_1 r_4 + c_5)u^{r_4} + \Lambda r_4 u^{r_4-1} + \beta_2 u^{r_4+1} - \beta_2 r_4 \bar y + k_1 \bar v u^{r_4-r_1} - \bigl(\beta_1 r_4 - \varepsilon u^{r_4}\bar y^{-1}\bigr)\bar x u^{r_4-r_3}\Bigr].
\end{aligned}$$
For any $u_0, u_1, \bar v_0, \bar w_0, \bar x_0, \bar y_0, \bar v_1, \bar w_1, \bar x_1, \bar y_1 > 0$, suppose that $u_0 < u_1$ and let $\rho_1 = \sup\bigl\{|g_1|, |g_2|, |g_3|, |g_4|, |g_5| : u_0 \le u \le u_1,\ |\bar v - \bar v_0| \le \varepsilon_1,\ |\bar w - \bar w_0| \le \varepsilon_1,\ |\bar x - \bar x_0| \le \varepsilon_1,\ |\bar y - \bar y_0| \le \varepsilon_1\bigr\}$.
We choose $\phi(t) \equiv \rho_2$ with $\rho_2$ large enough that $\bigl(\frac{\sigma_1\rho_2 u_1}{\rho_1} + 1\bigr)\varepsilon_1 \ge u_1 - u_0$. It is easy to check that with this control there is $0 \le T \le \varepsilon_1/\rho_1$ such that
u ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) = u 1 , | v ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) v ¯ 0 | < ε 1 , | w ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) w ¯ 0 | < ε 1 , | x ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) x ¯ 0 | < ε 1 , | y ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) y ¯ 0 | < ε 1 .
If u 0 > u 1 ,we can construct ϕ ( t ) similarly.
By choosing $u_0$ sufficiently large, for any $\bar v_0 \le \bar v \le \bar v_1$, $\bar w_0 \le \bar w \le \bar w_1$, $\bar x_0 \le \bar x \le \bar x_1$, $\bar y_0 \le \bar y \le \bar y_1$, there is a $\rho_3 > 0$ such that $g_1, g_2, g_3, g_4, g_5 > \rho_3$. This property, combined with the system above, implies the existence of a feedback control $\phi$ and $T > 0$ such that for any $0 \le t \le T$ we have
v ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) = v ¯ 1 , w ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) = w ¯ 1 , x ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) = x ¯ 1 , y ¯ ϕ ( T , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) = y ¯ 1 , u ¯ ϕ ( t , u 0 , v ¯ 0 , w ¯ 0 , x ¯ 0 , y ¯ 0 ) = u 0 .
This completes the proof. □
We construct a function V : R + 5 , [ 1 , ) satisfying that
$$\mathbb{E}\,V\bigl(S^{u,v,w,x,y}(t^*), V_1^{u,v,w,x,y}(t^*), V_2^{u,v,w,x,y}(t^*), I_1^{u,v,w,x,y}(t^*), I_2^{u,v,w,x,y}(t^*)\bigr) - V(u,v,w,x,y) \le -\kappa_1 V^{\gamma}(u,v,w,x,y) + \kappa_2\,\mathbf{1}_{\{(u,v,w,x,y)\in K\}}$$
for some petite set $K$, some $\gamma \in (0,1)$, $\kappa_1, \kappa_2 > 0$, and $t^* > 1$. Recall that if there exists a measure $\psi$ with $\psi(\mathbb{R}^{5,\circ}_{+}) > 0$ and a probability distribution $\nu(\cdot)$ concentrated on $\mathbb{N}$ such that for any $(u,v,w,x,y) \in K$ and $Q \in \mathcal{B}(\mathbb{R}^{5,\circ}_{+})$,
$$K\bigl((u,v,w,x,y), Q\bigr) := \sum_{n=1}^{\infty} P\bigl(nt^*, (u,v,w,x,y), Q\bigr)\,\nu(n) \ge \psi(Q),$$
then the set $K$ is said to be petite with respect to the Markov chain $\bigl(S^{u,v,w,x,y}(nt^*), V_1^{u,v,w,x,y}(nt^*), V_2^{u,v,w,x,y}(nt^*), I_1^{u,v,w,x,y}(nt^*), I_2^{u,v,w,x,y}(nt^*)\bigr)$, $n \in \mathbb{N}$. We must also prove that this Markov chain is irreducible and aperiodic. For the definitions and properties of irreducibility, aperiodicity, and petite sets, we refer to [28] or [29]. The estimation of the convergence rate is divided into the following theorems and propositions.
Theorem 5.
Let $U(u,v,w,x,y) = (u+v+w+x+y)^{1+p^*} + u^{-\frac{p^*}{2}}$. There exist positive constants $M_1$, $M_2$ such that
$$e^{M_1 t}\,\mathbb{E}\,U\bigl(S(t), V_1(t), V_2(t), I_1(t), I_2(t)\bigr) \le U(u,v,w,x,y) + \frac{M_2\bigl(e^{M_1 t}-1\bigr)}{M_1}.$$
Proof of Theorem 5.
Consider the Lyapunov function $U(u,v,w,x,y) = (u+v+w+x+y)^{1+p^*} + u^{-p^*/2}$. Directly calculating the differential operator $LU(u,v,w,x,y)$ associated with model (4), we obtain
$$\begin{aligned}
LU ={}& (1+p^*)(u+v+w+x+y)^{p^*}\bigl[\Lambda - a(u+v+w+x+y) - \gamma_1 x - (\gamma_2+\delta)y\bigr] - \frac{p^*}{2}u^{-\frac{p^*}{2}-1}\bigl(\Lambda - \beta_1 ux - \beta_2 uy - \lambda u\bigr)\\
&+ \frac{p^*(1+p^*)}{2}(u+v+w+x+y)^{p^*-1}(\sigma_1 u + \sigma_2 v + \sigma_3 w + \sigma_4 x + \sigma_5 y)^2 + \frac{p^*(2+p^*)}{8}\sigma_1^2 u^{-\frac{p^*}{2}}\\
\le{}& 2\Lambda(1+p^*)(u+v+w+x+y)^{p^*} - (1+p^*)(u+v+w+x+y)^{p^*-1}\Bigl[\Bigl(a-\frac{p^*}{2}\sigma_1^2\Bigr)u^2 + \Bigl(a-\frac{p^*}{2}\sigma_2^2\Bigr)v^2 + \Bigl(a-\frac{p^*}{2}\sigma_3^2\Bigr)w^2\\
&+ \Bigl(a+\gamma_1-\frac{p^*}{2}\sigma_4^2\Bigr)x^2 + \Bigl(a+\gamma_2+\delta-\frac{p^*}{2}\sigma_5^2\Bigr)y^2 + \bigl(2a - p^*\sigma_1\sigma_2\bigr)uv + \bigl(2a - p^*\sigma_1\sigma_3\bigr)uw + \bigl(2a+\gamma_1 - p^*\sigma_1\sigma_4\bigr)ux\\
&+ \bigl(2a+\gamma_2+\delta - p^*\sigma_1\sigma_5\bigr)uy + \bigl(2a - p^*\sigma_2\sigma_3\bigr)vw + \bigl(2a+\gamma_1 - p^*\sigma_2\sigma_4\bigr)vx + \bigl(2a+\gamma_2+\delta - p^*\sigma_2\sigma_5\bigr)vy\\
&+ \bigl(2a+\gamma_1 - p^*\sigma_3\sigma_4\bigr)wx + \bigl(2a+\gamma_2+\delta - p^*\sigma_3\sigma_5\bigr)wy + \bigl(2a+\gamma_1+\gamma_2+\delta - p^*\sigma_4\sigma_5\bigr)xy\Bigr]\\
&- \frac{p^*}{2}\Lambda u^{-\frac{p^*}{2}-1} + \frac{p^*}{2}\beta_1 u^{-\frac{p^*}{2}}x + \frac{p^*}{2}\beta_2 u^{-\frac{p^*}{2}}y + \frac{p^*}{2}\Bigl[\frac{(2+p^*)\sigma_1^2}{4} + a + \varphi_1 + \varphi_2\Bigr]u^{-\frac{p^*}{2}}.
\end{aligned}$$
By Young’s inequality, we have
$$u^{-\frac{p^*}{2}}x \le \frac{3p^*}{4+3p^*}u^{-\frac{4+3p^*}{6}} + \frac{4}{4+3p^*}x^{\frac{4+3p^*}{4}};\qquad u^{-\frac{p^*}{2}}y \le \frac{3p^*}{4+3p^*}u^{-\frac{4+3p^*}{6}} + \frac{4}{4+3p^*}y^{\frac{4+3p^*}{4}}.$$
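Both estimates are instances of Young's inequality $ab \le \frac{a^p}{p} + \frac{b^q}{q}$ for $a, b \ge 0$ and conjugate exponents $\frac1p + \frac1q = 1$. For the first one, take
$$a = u^{-\frac{p^*}{2}},\qquad b = x,\qquad p = \frac{4+3p^*}{3p^*},\qquad q = \frac{4+3p^*}{4},$$
so that $\frac1p + \frac1q = \frac{3p^*}{4+3p^*} + \frac{4}{4+3p^*} = 1$ and $a^p = u^{-\frac{4+3p^*}{6}}$; this gives exactly the stated bound, and the second estimate follows in the same way with $b = y$.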
Choose a number M 1 satisfying
$$0 < M_1 < \min\Bigl\{a - \frac{p^*}{2}\sigma_1^2,\ a - \frac{p^*}{2}\sigma_2^2,\ a - \frac{p^*}{2}\sigma_3^2,\ a + \gamma_1 - \frac{p^*}{2}\sigma_4^2,\ a + \gamma_2 + \delta - \frac{p^*}{2}\sigma_5^2\Bigr\}.$$
From the above estimates, it follows that
$$M_2 := \sup_{(u,v,w,x,y)\in\mathbb{R}^{5,\circ}_{+}}\bigl\{LU(u,v,w,x,y) + M_1 U(u,v,w,x,y)\bigr\} < \infty.$$
As a result,
$$LU(u,v,w,x,y) \le M_2 - M_1 U(u,v,w,x,y).$$
For $n \in \mathbb{N}$, define the stopping time $\eta_n = \inf\{t \ge 0 : U(S, V_1, V_2, I_1, I_2) \ge n\}$; then Itô's formula and the above inequality yield
$$\begin{aligned}
\mathbb{E}\Bigl(e^{M_1(t\wedge\eta_n)}\,U\bigl(S(t\wedge\eta_n), V_1(t\wedge\eta_n), V_2(t\wedge\eta_n), I_1(t\wedge\eta_n), I_2(t\wedge\eta_n)\bigr)\Bigr)
&\le U(u,v,w,x,y) + \mathbb{E}\int_0^{t\wedge\eta_n} e^{M_1 s}\bigl[LU(S,V_1,V_2,I_1,I_2) + M_1 U(S,V_1,V_2,I_1,I_2)\bigr]ds\\
&\le U(u,v,w,x,y) + \frac{M_2\bigl(e^{M_1(t\wedge\eta_n)}-1\bigr)}{M_1}.
\end{aligned}$$
By letting $n \to \infty$, we obtain from Fatou's lemma that
$$e^{M_1 t}\,\mathbb{E}\,U\bigl(S(t), V_1(t), V_2(t), I_1(t), I_2(t)\bigr) \le U(u,v,w,x,y) + \frac{M_2\bigl(e^{M_1 t}-1\bigr)}{M_1}.$$
The Theorem 5 is proved. □
Theorem 6.
For any $t \ge 1$ and $A \in \mathcal{F}$, we have
$$\mathbb{E}\bigl[\ln I_1(t)\bigr]_-^2\,\mathbf{1}_A \le \bigl([\ln x]_-^2 + c_4^2 t^2 + 2c_4 t[\ln x]_-\bigr)\mathbb{P}(A);\qquad \mathbb{E}\bigl[\ln I_2(t)\bigr]_-^2\,\mathbf{1}_A \le \bigl([\ln y]_-^2 + c_5^2 t^2 + 2c_5 t[\ln y]_-\bigr)\mathbb{P}(A),$$
where $[\ln x]_- = 0 \vee (-\ln x)$.
Proof of Theorem 6.
We have
$$\ln I_1(t) = \ln I_1(0) + \int_0^t(\beta_1 S + k_2 V_2)\,d\tau - \Bigl(\alpha_1 + \frac{\sigma_4^2}{2}\Bigr)t + \sigma_4 B(t) \ge \ln x - \Bigl(\alpha_1 + \frac{\sigma_4^2}{2}\Bigr)t = \ln x - c_4 t,$$
where $c_4 = \alpha_1 + \frac{\sigma_4^2}{2}$ and $c_5 = \alpha_2 + \frac{\sigma_5^2}{2}$; thus
$$[\ln I_1(t)]_- \le [\ln x]_- + c_4 t.$$
This implies that
$$[\ln I_1(t)]_-^2\,\mathbf{1}_A \le \bigl([\ln x]_-^2 + c_4^2 t^2 + 2c_4 t[\ln x]_-\bigr)\mathbf{1}_A;$$
taking expectation both sides and using the estimate above, we obtain
$$\mathbb{E}\bigl[\ln I_1(t)\bigr]_-^2\,\mathbf{1}_A \le \bigl([\ln x]_-^2 + c_4^2 t^2 + 2c_4 t[\ln x]_-\bigr)\mathbb{P}(A).$$
Similarly, we have
$$\mathbb{E}\bigl[\ln I_2(t)\bigr]_-^2\,\mathbf{1}_A \le \bigl([\ln y]_-^2 + c_5^2 t^2 + 2c_5 t[\ln y]_-\bigr)\mathbb{P}(A),$$
where c 5 = α 2 + σ 5 2 2 . The Theorem 6 is proved. □
Choose ε 1 ( 0 , 1 ) satisfying
$$-\frac{4R_0^*}{3}(1-\varepsilon_1) + 2c_4 < -R_0^*;\qquad -\frac{4R_0^*}{3}(1-\varepsilon_1) + 2c_5 < -R_0^*,$$
$$-\frac{3R_0^*}{4}(1-\varepsilon_1) + 4c_4\varepsilon_1 < -\frac{R_0^*}{2};\qquad -\frac{3R_0^*}{4}(1-\varepsilon_1) + 4c_5\varepsilon_1 < -\frac{R_0^*}{2}.$$
Choose H so large that
$$(\beta_1+k_2)H - 2c_4 \ge 2 + R_0^*;\qquad (\beta_2+k_1)H - 2c_5 \ge 2 + R_0^*,$$
$$\exp\Bigl\{-\frac{(\beta_1+k_2)H - 2c_4}{2\sigma_4^2}\Bigr\} < \frac{\varepsilon_1}{2};\qquad \exp\Bigl\{-\frac{(\beta_2+k_1)H - 2c_5}{2\sigma_5^2}\Bigr\} < \frac{\varepsilon_1}{2},$$
$$\exp\Bigl\{-\frac{R_0^*\bigl[(\beta_1+k_2)H - c_4\bigr]}{4\sigma_4^2}\Bigr\} < \frac{\varepsilon_1}{2};\qquad \exp\Bigl\{-\frac{R_0^*\bigl[(\beta_2+k_1)H - c_5\bigr]}{4\sigma_5^2}\Bigr\} < \frac{\varepsilon_1}{2}.$$
Theorem 7.
For $\varepsilon_1$ and $H$ chosen as above, there exist $M \in (0,1)$ and $T^* > 1$ such that
$$\mathbb{P}\Bigl\{\ln x + \frac{2R_0^* t}{3} - \ln I_1(t) < 0 \ \text{ and } \ \ln y + \frac{2R_0^* t}{3} - \ln I_2(t) < 0 \ \text{ for all } t\in[T^*, 2T^*]\Bigr\} \ge 1-\varepsilon_1$$
for all $u, v, w \in [0, H]$ and $x, y \in (0, M)$.
Proof of Theorem 7.
Let $\tilde S^u(t)$, $\tilde V_1^v(t)$, $\tilde V_2^w(t)$ be the solutions with initial values $u$, $v$, $w$ of
$$d\tilde S(t) = \bigl[\Lambda - (\beta_3\theta_1 + \lambda)\tilde S\bigr]dt + \sigma_1\tilde S\,dB(t);\quad d\tilde V_1(t) = \bigl[\varphi_1\tilde S - (\beta_4\theta_2 + a)\tilde V_1\bigr]dt + \sigma_2\tilde V_1\,dB(t);\quad d\tilde V_2(t) = \bigl[\varphi_2\tilde S - (\beta_5\theta_3 + a)\tilde V_2\bigr]dt + \sigma_3\tilde V_2\,dB(t).$$
By direct calculation,
$$\mathbb{P}\Bigl\{\lim_{t\to\infty}\frac1t\int_0^t \tilde S^u(\tau)\,d\tau = \frac{\Lambda}{\beta_3\theta_1+\lambda}\Bigr\} = 1,\ u\in[0,\infty);\quad \mathbb{P}\Bigl\{\lim_{t\to\infty}\frac1t\int_0^t \tilde V_1^v(\tau)\,d\tau = \frac{\varphi_1\Lambda}{(\beta_3\theta_1+\lambda)(\beta_4\theta_2+a)}\Bigr\} = 1,\ v\in[0,\infty);\quad \mathbb{P}\Bigl\{\lim_{t\to\infty}\frac1t\int_0^t \tilde V_2^w(\tau)\,d\tau = \frac{\varphi_2\Lambda}{(\beta_3\theta_1+\lambda)(\beta_5\theta_3+a)}\Bigr\} = 1,\ w\in[0,\infty).$$
In view of the strong law of large numbers for martingales, P { lim t B ( t ) t = 0 } = 1 . Hence, there exists T * > 1 , such that
P { σ 1 B ( t ) t R 0 * 12 ; t T * } 1 ε 1 3 ; P { σ 2 B ( t ) t R 0 * 12 ; t T * } 1 ε 1 3 ; P { σ 3 B ( t ) t R 0 * 12 ; t T * } 1 ε 1 3 ,
and
P { 1 t 0 t S ˜ 0 ( τ ) d τ Λ β 3 θ 1 + λ R 0 * 12 β ; t T * } 1 ε 1 3 ; P { 1 t 0 t V 1 ˜ 0 ( τ ) d τ φ 1 Λ ( β 3 θ 1 + λ ) ( β 4 θ 2 + a ) R 0 * 12 k 1 ; t T * } 1 ε 1 3 ; P { 1 t 0 t V 2 ˜ 0 ( τ ) d τ φ 2 Λ ( β 3 θ 1 + λ ) ( β 5 θ 3 + a ) R 0 * 12 k 2 ; t T * } 1 ε 1 3 ,
where β = β 1 β 2 . By the uniqueness of solutions to (26), we obtain
P { S ˜ 0 ( t ) S ˜ u ( t ) ; t 0 } = 1 ; u 0 ; P { V 1 ˜ 0 ( t ) V 1 ˜ v ( t ) ; t 0 } = 1 ; v 0 ; P { V 2 ˜ 0 ( t ) V 2 ˜ w ( t ) ; t 0 } = 1 ; w 0 .
Similar to (8)–(10), it can be shown that there exists M ( 0 , θ ) , θ = max { θ 1 , θ 2 , θ 3 } ,
P { ξ u , v , w , x , y 2 T * } ε 1 3 , x , y M ; u , v , w [ 0 , H ] ,
where ξ u , v , w , x , y = i n f { t 0 : I 1 , I 2 0 } .
Observe also that
P { S S ˜ u ( t ) ; t ξ u , v , w , x , y } = 1 ; P { V 1 V 1 ˜ v ( t ) ; t ξ u , v , w , x , y } = 1 ; P { V 2 V 2 ˜ w ( t ) ; t ξ u , v , w , x , y } = 1 ,
which we have from the comparison theorem. From (27)–(30) we can be show that with probability greater than 1 ε 1 , for all t [ T * , 2 T * ] ,
l n θ l n I 1 ( t ) = l n x + β 1 0 t S ( τ ) d τ + k 2 0 t V 2 ( τ ) d τ c 4 t + σ 4 B ( t ) l n x + β 1 Λ t β 3 θ 1 + λ R 0 * t 12 + φ 2 Λ t ( β 3 θ 1 + λ ) ( β 5 θ 3 + a ) R 0 * t 12 c 4 t R 0 * t 12 l n x + 2 R 0 * t 3 , l n θ l n I 2 ( t ) = l n y + β 2 0 t S ( τ ) d τ + k 1 0 t V 1 ( τ ) d τ + ε 0 t I 1 ( τ ) I 2 ( τ ) d τ c 5 t + σ 5 B ( t ) l n y + β 2 Λ t β 3 θ 1 + λ R 0 * t 12 + φ 1 Λ t ( β 3 θ 1 + λ ) ( β 4 θ 2 + a ) R 0 * t 12 c 5 t R 0 * t 12 l n y + 2 R 0 * t 3 .
The proof is completed. □
Proposition 1.
Assume $R_0^* > 0$ and let $M \in (0,1)$, $H$, and $T^* > 1$ be as above. There exist $M_3, M_4 > 0$, independent of $T^*$, such that
$$\mathbb{E}\bigl[\ln I_1(t)\bigr]_-^2 \le [\ln x]_-^2 - R_0^* t[\ln x]_- + M_3 t^2,\qquad \mathbb{E}\bigl[\ln I_2(t)\bigr]_-^2 \le [\ln y]_-^2 - R_0^* t[\ln y]_- + M_4 t^2,$$
for any $x, y \in (0,\infty)$, $0 \le u, v, w \le H$, and $t \in [T^*, 2T^*]$.
Proof of Proposition 1.
First, considering x , y ( 0 , M ] , 0 u , v , w H , we have
P ( Ω 1 ) 1 ε 1 , P ( Ω 2 ) 1 ε 1 ,
where
Ω 1 = { l n x + 2 R 0 * t 3 l n I 1 ( t ) < 0 ; t [ T * , 2 T * ] } , Ω 2 = { l n y + 2 R 0 * t 3 l n I 2 ( t ) < 0 ; t [ T * , 2 T * ] } .
In Ω 1 , Ω 2 we have
l n x 2 R 0 * t 3 l n I 1 ( t ) > 0 ; l n y 2 R 0 * t 3 l n I 2 ( t ) > 0 ,
thus for any t [ T * , 2 T * ] ,
0 [ l n I 1 ( t ) ] _ [ l n x ] _ 2 R 0 * t 3 ; 0 [ l n I 2 ( t ) ] _ [ l n y ] _ 2 R 0 * t 3 ,
as a result,
[ l n I 1 ( t ) ] _ 2 [ l n x ] _ 2 4 R 0 * t 3 [ l n x ] _ + 4 R 0 * 2 t 2 9 ; [ l n I 2 ( t ) ] _ 2 [ l n y ] _ 2 4 R 0 * t 3 [ l n y ] _ + 4 R 0 * 2 t 2 9 ,
which imply that
E [ 1 Ω 1 [ l n I 1 ( t ) ] _ 2 ] P ( Ω 1 ) [ l n x ] _ 2 4 R 0 * t 3 P ( Ω 1 ) [ l n x ] _ + 4 R 0 * 2 t 2 9 P ( Ω 1 ) ; E [ 1 Ω 2 [ l n I 2 ( t ) ] _ 2 ] P ( Ω 2 ) [ l n y ] _ 2 4 R 0 * t 3 P ( Ω 2 ) [ l n y ] _ + 4 R 0 * 2 t 2 9 P ( Ω 2 ) .
In Ω 1 c = Ω Ω 1 ; Ω 2 c = Ω Ω 2 , we have from Theorem 6 that
E [ 1 Ω 1 c [ l n I 1 ( t ) ] _ 2 ] P ( Ω 1 c ) [ l n x ] _ 2 2 c 4 t P ( Ω 1 c ) [ l n x ] _ + c 4 2 t 2 P ( Ω 1 c ) ; E [ 1 Ω 2 c [ l n I 2 ( t ) ] _ 2 ] P ( Ω 2 c ) [ l n y ] _ 2 2 c 5 t P ( Ω 2 c ) [ l n y ] _ + c 5 2 t 2 P ( Ω 2 c ) ,
adding (31) and (32) side by side, we obtain
E [ l n I 1 ( t ) ] _ 2 [ l n x ] _ 2 + ( 4 R 0 * 3 ( 1 ε 1 ) + 2 c 4 ) t [ l n x ] _ + ( 4 R 0 * 2 9 + c 4 2 ) t 2 ; E [ l n I 2 ( t ) ] _ 2 [ l n y ] _ 2 + ( 4 R 0 * 3 ( 1 ε 1 ) + 2 c 5 ) t [ l n y ] _ + ( 4 R 0 * 2 9 + c 5 2 ) t 2 ,
in view of (24) we deduce
E [ l n I 1 ( t ) ] _ 2 [ l n x ] _ 2 R 0 * t [ l n x ] _ + ( 4 R 0 * 2 9 + c 4 2 ) t 2 ; E [ l n I 2 ( t ) ] _ 2 [ l n y ] _ 2 R 0 * t [ l n y ] _ + ( 4 R 0 * 2 9 + c 5 2 ) t 2 .
Now, for x , y ( [ M , ) and 0 u , v , w H , we have form Theorem 6 that
E [ l n I 1 ( t ) ] _ 2 [ l n x ] _ 2 R 0 * t [ l n x ] _ + M 3 t 2 ; E [ l n I 2 ( t ) ] _ 2 [ l n y ] _ 2 R 0 * t [ l n y ] _ + M 4 t 2 .
Letting M 3 , M 4 sufficiently large, such that M 3 > 4 R 0 * 2 9 + c 4 2 , M 4 > 4 R 0 * 2 9 + c 5 2 , then the proof is completed. □
Proposition 2.
Assume $R_0^* > 0$. There exist $M_7, M_8 > 0$ such that
$$\mathbb{E}\bigl[\ln I_1(2T^*)\bigr]_-^2 \le [\ln x]_-^2 - \frac{R_0^* T^*}{2}[\ln x]_- + M_7 T^{*2},\qquad \mathbb{E}\bigl[\ln I_2(2T^*)\bigr]_-^2 \le [\ln y]_-^2 - \frac{R_0^* T^*}{2}[\ln y]_- + M_8 T^{*2},$$
for any $x, y \in (0,\infty)$ and $u, v, w > H$.
Proof of Proposition 2.
First, considering x , y e x p { R 0 * T * 2 } . Defined the stopping time
ξ u , v , w , x , y = T * i n f { t > 0 : S , V 1 , V 2 H } .
Let
Ω 3 = { σ 4 B ( t ) ( β 1 + k 2 ) H 2 c 4 2 T * 1 } , Ω 4 = { σ 5 B ( t ) ( β 2 + k 1 ) H 2 c 5 2 T * 1 } , Ω 5 = { σ 4 B ( t ) [ ( β 1 + k 2 ) H c 4 ] t R 0 * 8 ; t [ 0 , 2 T * ] } , Ω 6 = { σ 5 B ( t ) [ ( β 2 + k 1 ) H c 5 ] t R 0 * 8 ; t [ 0 , 2 T * ] } .
By the exponential martingale inequality,
P ( Ω 3 ) 1 e x p { ( β 1 + k 2 ) H 2 c 4 2 σ 4 2 } 1 ε 1 2 , P ( Ω 4 ) 1 e x p { ( β 2 + k 1 ) H 2 c 5 2 σ 5 2 } 1 ε 1 2 , P ( Ω 5 ) 1 e x p { R 0 * [ ( β 1 + k 2 ) H c 4 ] 4 σ 4 2 } 1 ε 1 2 , P ( Ω 6 ) 1 e x p { R 0 * [ ( β 2 + k 1 ) H c 5 ] 4 σ 5 2 } 1 ε 1 2 .
Let
Ω 7 = Ω 3 { ξ u , v , w , x , y = T * } ; Ω 8 = Ω 4 { ξ u , v , w , x , y = T * } , Ω 9 = { l n I 1 ( t ) l n x + R 0 * 8 } { ξ u , v , w , x , y < T * } ; Ω 10 = { l n I 2 ( t ) l n y + R 0 * 8 } { ξ u , v , w , x , y < T * } , Ω 11 = Ω ( Ω 7 Ω 9 ) ; Ω 12 = Ω ( Ω 8 Ω 10 ) .
If x 1 Ω 7 , y 1 Ω 8 , we have
l n I 1 ( 2 T * ) = l n x 0 2 T * ( β 1 S + k 2 V 2 c 4 ) d t + σ 4 B ( 2 T * ) l n x 0 T * ( β 1 S + k 2 V 2 c 4 ) d t 0 T * c 4 d t + σ 4 B ( 2 T * ) l n x T * [ ( β 1 + k 2 ) H 2 c 4 ] + σ 4 B ( 2 T * ) l n x T * [ ( β 1 + k 2 ) H 2 c 4 ] 2 + 1 l n x R 0 * T * 2 ,
similarly,
l n I 2 ( 2 T * ) l n y R 0 * T * 2 .
If x < e x p { R 0 * T * 2 } ; y < e x p { R 0 * T * 2 } , therefore
[ l n I 1 ( 2 T * ) ] _ R 0 * T * 2 + [ l n x ] _ , [ l n I 2 ( 2 T * ) ] _ R 0 * T * 2 + [ l n y ] _ .
Squaring and then multiplying by 1 Ω 7 , 1 Ω 8 and then taking expectation both sides, we yield
E [ l n I 1 ( 2 T * ) ] _ 2 1 Ω 7 [ l n x ] 2 P ( Ω 7 ) R 0 * T * [ l n x ] _ P ( Ω 7 ) + R 0 * 2 T * 2 4 , E [ l n I 2 ( 2 T * ) ] _ 2 1 Ω 8 [ l n y ] 2 P ( Ω 7 ) R 0 * T * [ l n y ] _ P ( Ω 8 ) + R 0 * 2 T * 2 4 .
If x 1 Ω 9 , then
l n 1 ( ξ u , v , w , x , y ) = l n x 0 ξ u , v , w , x , y ( β 1 S + k 2 V 2 c 4 ) d t + σ 4 B ( ξ u , v , w , x , y ) l n x [ ( β 1 + k 2 ) H c 4 ] ξ u , v , w , x , y + σ 4 B ( ξ u , v , w , x , y ) l n x + R 0 * 8 ,
similarly, y 1 Ω 10 , we have
l n 2 ( ξ u , v , w , x , y ) l n y + R 0 * 8 ,
as a result,
Ω 4 { ξ u , v , w , x , y < T * } Ω 9 ; Ω 6 { ξ u , v , w , x , y < T * } Ω 10 ,
hence,
P ( Ω 11 ) = P ( Ω 11 { ξ u , v , w , x , y < T * } ) + P ( Ω 11 { ξ u , v , w , x , y = T * } ) P ( Ω 3 c ) + P ( Ω 5 c ) ε 1 , P ( Ω 12 ) = P ( Ω 12 { ξ u , v , w , x , y < T * } ) + P ( Ω 12 { ξ u , v , w , x , y = T * } ) P ( Ω 4 c ) + P ( Ω 6 c ) ε 1 .
Let t < T * ; u , v , w > 0 and l n x l n x + R 0 * 8 0 ; l n y l n y + R 0 * 8 0 . In view of Proposition and the strong Markov property, we can estimate the conditional expectation
E [ l n I 1 ( 2 T * ) ] _ 2 | ξ u , v , w , x , y = t , I 1 = x , S ( ξ ) = u , V 1 ( ξ ) = v , V 2 ( ξ ) = w | [ l n x ] _ 2 R 0 * ( 2 T * t ) [ l n x ] _ + M 3 ( 2 T * t ) 2 [ l n x ] _ 2 R 0 * T * [ l n x ] _ + 4 M 3 T * 2 ( l n x + R 0 * 8 ) 2 R 0 * T * ( l n x ) + 4 M 3 T * 2 ( l n x ) 2 ( R 0 * T * R 0 * 4 ) ( l n x ) + 4 M 3 T * 2 + R 0 * 2 64 [ l n x ] _ 2 3 R 0 * T * 4 [ l n x ] _ + 4 M 3 T * 2 + R 0 * 2 64 , E [ l n I 2 ( 2 T * ) ] _ 2 | ξ u , v , w , x , y = t , I 2 = y , S ( ξ ) = u , V 1 ( ξ ) = v , V 2 ( ξ ) = w | [ l n y ] _ 2 3 R 0 * T * 4 [ l n y ] _ + 4 M 4 T * 2 + R 0 * 2 64 .
As a result,
E [ l n I 1 ( 2 T * ) ] _ 2 1 Ω 9 [ l n x ] _ 2 P ( Ω 9 ) 3 R 0 * T * 4 [ l n x ] _ P ( Ω 9 ) + 4 M 3 T * 2 + R 0 * 2 64 , E [ l n I 2 ( 2 T * ) ] _ 2 1 Ω 10 [ l n y ] _ 2 P ( Ω 10 ) 3 R 0 * T * 4 [ l n y ] _ P ( Ω 10 ) + 4 M 4 T * 2 + R 0 * 2 64 ,
in view of Theorem 6,
E [ l n I 1 ( 2 T * ) ] _ 2 1 Ω 11 [ l n x ] _ 2 P ( Ω 11 ) + 4 c 4 T * [ l n x ] _ P ( Ω 11 ) + 4 c 4 T * 2 , E [ l n I 2 ( 2 T * ) ] _ 2 1 Ω 12 [ l n y ] _ 2 P ( Ω 12 ) + 4 c 5 T * [ l n y ] _ P ( Ω 12 ) + 4 c 5 T * 2 ,
adding side by side (33)–(35), for some M 5 , M 6 > 0 , we have
E [ l n I 1 ( 2 T * ) ] _ 2 [ l n x ] _ 2 T * ( 3 R 0 * 4 ( 1 ε 1 ) + 4 c 4 ε 1 ) + M 5 T * 2 [ l n x ] _ 2 R 0 * T * 2 + M 5 T * ;
E [ l n I 2 ( 2 T * ) ] _ 2 [ l n y ] _ 2 T * ( 3 R 0 * 4 ( 1 ε 1 ) + 4 c 5 ε 1 ) + M 6 T * 2 [ l n y ] _ 2 R 0 * T * 2 + M 6 T * .
We note that, if x , y e x p { R 0 * T * 2 } , then
l n x R 0 * T * 2 ; l n y + R 0 * T * 2 ,
therefore, it follows from Theorem 6 that
E [ l n I 1 ( 2 T * ) ] _ 2 ( R 0 * 4 + c 4 R 0 * + 4 c 4 2 ) T * 2 ; E [ l n I 2 ( 2 T * ) ] _ 2 ( R 0 * 4 + c 5 R 0 * + 4 c 5 2 ) T * 2 .
Let M 7 = M 5 ( R 0 * 4 + c 4 R 0 * + 4 c 4 2 ) ; M 8 = M 6 R 0 * 4 + c 5 R 0 * + 4 c 5 2 , for any u , v , w H ; x , y ( 0 , ) , we have
E [ l n I 1 ( 2 T * ) ] _ 2 [ l n x ] _ 2 R 0 * T * 2 [ l n x ] _ + M 7 T * 2 , E [ l n I 2 ( 2 T * ) ] _ 2 [ l n y ] _ 2 R 0 * T * 2 [ l n y ] _ + M 8 T * 2 .
The proof is completed. □
Theorem 8.
Let $R_0^* > 0$. Then there exists an invariant probability measure $\pi^*$ such that
(a)
$$\lim_{t\to\infty} t^{q^*}\,\bigl\|P\bigl(t,(u,v,w,x,y),\cdot\bigr) - \pi^*(\cdot)\bigr\| = 0,\qquad (u,v,w,x,y)\in\mathbb{R}^{5,\circ}_{+};$$
(b)
$$\mathbb{P}\Bigl\{\lim_{t\to\infty}\frac1t\int_0^t h\bigl(S,V_1,V_2,I_1,I_2\bigr)\,ds = \int_{\mathbb{R}^{5,\circ}_{+}} h(u,v,w,x,y)\,\pi^*(du,dv,dw,dx,dy)\Bigr\} = 1,$$
where $\|\cdot\|$ is the total variation norm, $q^*$ is any positive number, $h$ is any $\pi^*$-integrable function, and $P(t,(u,v,w,x,y),\cdot)$ is the transition probability of $(S(t), V_1(t), V_2(t), I_1(t), I_2(t))$.
Proof of Theorem 8.
By virtue of Theorem 5, there are $h_1 \in (0,1)$ and $H_1 > 0$ satisfying
$$\mathbb{E}\,U\bigl(S(2T^*), V_1(2T^*), V_2(2T^*), I_1(2T^*), I_2(2T^*)\bigr) \le (1-h_1)\,U(u,v,w,x,y) + H_1.$$
Let
$$V = U(u,v,w,x,y) + [\ln x]_-^2 + [\ln y]_-^2;$$
in view of Proposition 1, Proposition 2, and the bound above, there exist a compact set $K \subset \mathbb{R}^{5,\circ}_{+}$ and constants $h_2, H_2 > 0$ satisfying
$$\mathbb{E}\,V\bigl(S(2T^*), V_1(2T^*), V_2(2T^*), I_1(2T^*), I_2(2T^*)\bigr) - V(u,v,w,x,y) \le -h_2 V(u,v,w,x,y) + H_2\,\mathbf{1}_{\{(u,v,w,x,y)\in K\}},\qquad (u,v,w,x,y)\in\mathbb{R}^{5,\circ}_{+}.$$
Applying this drift condition and Theorem 3.6 in [30], we obtain that
$$n^{q^*}\,\bigl\|P\bigl(2nT^*,(u,v,w,x,y),\cdot\bigr) - \pi^*(\cdot)\bigr\| \to 0\quad \text{as } n \to \infty,$$
for some invariant probability measure $\pi^*$ of the Markov chain $\bigl(S(2nT^*), V_1(2nT^*), V_2(2nT^*), I_1(2nT^*), I_2(2nT^*)\bigr)$. Let $\tau_K = \inf\{n \in \mathbb{N} : (S(2nT^*), V_1(2nT^*), V_2(2nT^*), I_1(2nT^*), I_2(2nT^*)) \in K\}$. It is shown in the proof of Theorem 3.6 in [30] that the drift condition above implies $\mathbb{E}\,\tau_K < \infty$. In view of [31], the Markov process $\bigl(S^{u,v,w,x,y}(t), V_1^{u,v,w,x,y}(t), V_2^{u,v,w,x,y}(t), I_1^{u,v,w,x,y}(t), I_2^{u,v,w,x,y}(t)\bigr)$ has an invariant probability measure $\phi^*$. As a result, $\phi^*$ is also an invariant probability measure of the Markov chain $\bigl(S(2nT^*), V_1(2nT^*), V_2(2nT^*), I_1(2nT^*), I_2(2nT^*)\bigr)$. In light of the convergence above, we must have $\phi^* = \pi^*$; hence $\pi^*$ is an invariant measure of the Markov process $(S(t), V_1(t), V_2(t), I_1(t), I_2(t))$.
In the proofs we used the function $[\ln y]_-^2$ for the sake of simplicity. In fact, $[\ln y]_-^{1+q}$ for any small $q \in (0,1)$ can be treated in the same manner. For more details, we refer to [24] or [25]. □

5. Numerical Examples

By using the Milstein method mentioned in Higham [32], model (4) can be rewritten as the following discretization equations:
$$\begin{aligned}
S^{k+1} &= S^k + \bigl(\Lambda - \beta_1 S^k I_1^k - \beta_2 S^k I_2^k - \lambda S^k\bigr)\Delta t + \sigma_1 S^k\sqrt{\Delta t}\,\xi_k + \frac{\sigma_1^2}{2}S^k\bigl(\Delta t\,\xi_k^2 - \Delta t\bigr),\\
V_1^{k+1} &= V_1^k + \bigl(\varphi_1 S^k - k_1 I_2^k V_1^k - a V_1^k\bigr)\Delta t + \sigma_2 V_1^k\sqrt{\Delta t}\,\xi_k + \frac{\sigma_2^2}{2}V_1^k\bigl(\Delta t\,\xi_k^2 - \Delta t\bigr),\\
V_2^{k+1} &= V_2^k + \bigl(\varphi_2 S^k - k_2 I_1^k V_2^k - a V_2^k\bigr)\Delta t + \sigma_3 V_2^k\sqrt{\Delta t}\,\xi_k + \frac{\sigma_3^2}{2}V_2^k\bigl(\Delta t\,\xi_k^2 - \Delta t\bigr),\\
I_1^{k+1} &= I_1^k + \bigl(\beta_1 S^k I_1^k + k_2 I_1^k V_2^k - \alpha_1 I_1^k\bigr)\Delta t + \sigma_4 I_1^k\sqrt{\Delta t}\,\xi_k + \frac{\sigma_4^2}{2}I_1^k\bigl(\Delta t\,\xi_k^2 - \Delta t\bigr),\\
I_2^{k+1} &= I_2^k + \bigl(\beta_2 S^k I_2^k + k_1 I_2^k V_1^k + \varepsilon I_1^k - \alpha_2 I_2^k\bigr)\Delta t + \sigma_5 I_2^k\sqrt{\Delta t}\,\xi_k + \frac{\sigma_5^2}{2}I_2^k\bigl(\Delta t\,\xi_k^2 - \Delta t\bigr),
\end{aligned}$$
where $\xi_k$, $k = 1, 2, \ldots, n$, are independent standard Gaussian random variables. The figures below were drawn using MATLAB based on some numerical examples.
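The following Python sketch is an assumed analogue of the MATLAB code used for the figures (variable names, step size, horizon, and initial state are our own illustrative choices); it implements the Milstein discretization above for one sample path of model (4).

```python
# Sketch: Milstein discretization of model (4), one sample path.
# Assumed Python analogue of the authors' MATLAB code; all numerical
# choices below (dt, n_steps, initial state) are illustrative.
import numpy as np

def milstein_path(params, sigmas, init, dt=1e-3, n_steps=50_000, seed=0):
    Lam, a, b1, b2, g1, g2, p1, p2, eps, delta, k1, k2 = params
    lam, a1, a2 = a + p1 + p2, a + g1 + eps, a + g2 + delta
    s1, s2, s3, s4, s5 = sigmas
    rng = np.random.default_rng(seed)
    path = np.empty((n_steps + 1, 5))
    path[0] = init                          # (S, V1, V2, I1, I2)
    S, V1, V2, I1, I2 = init
    for k in range(n_steps):
        xi = rng.standard_normal()          # one common noise xi_k, as in model (4)
        dB, corr = np.sqrt(dt) * xi, dt * xi**2 - dt
        drift = np.array([Lam - b1*S*I1 - b2*S*I2 - lam*S,
                          p1*S - k1*I2*V1 - a*V1,
                          p2*S - k2*I1*V2 - a*V2,
                          b1*S*I1 + k2*I1*V2 - a1*I1,
                          b2*S*I2 + k1*I2*V1 + eps*I1 - a2*I2])
        state = np.array([S, V1, V2, I1, I2])
        sig = np.array([s1, s2, s3, s4, s5])
        state = state + drift*dt + sig*state*dB + 0.5*sig**2*state*corr
        S, V1, V2, I1, I2 = np.maximum(state, 1e-12)   # keep the path positive numerically
        path[k + 1] = (S, V1, V2, I1, I2)
    return path

# Example 1 parameters (persistence regime, R0* > 0):
path = milstein_path(params=(15, 0.2, 0.15, 0.15, 0.5, 0.15, 0.4, 0.4, 0.8, 0.01, 0.7, 0.5),
                     sigmas=(0.5, 1.0, 0.8, 0.5, 0.5), init=(10, 5, 5, 2, 2))
print(path[-1])
```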
Example 1.
Consider (4) with parameters $\Lambda = 15$; $a = 0.2$; $\beta_1 = 0.15$; $\beta_2 = 0.15$; $\gamma_1 = 0.5$; $\gamma_2 = 0.15$; $\varphi_1 = 0.4$; $\varphi_2 = 0.4$; $\varepsilon = 0.8$; $\delta = 0.01$; $k_1 = 0.7$; $k_2 = 0.5$; $\lambda = a + \varphi_1 + \varphi_2 = 1$; $\alpha_1 = a + \gamma_1 + \varepsilon = 1.5$; $\alpha_2 = a + \gamma_2 + \delta = 0.36$; $\sigma_1 = 0.5$; $\sigma_2 = 1$; $\sigma_3 = 0.8$; $\sigma_4 = 0.5$; $\sigma_5 = 0.5$. Direct calculations show that $R_0^* = 40.94 > 0$, which satisfies the condition of Theorem 8, so the disease is almost surely persistent (see Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5). Furthermore, histograms of the probability density functions of $S(t)$, $V_1(t)$, $V_2(t)$, $I_1(t)$, $I_2(t)$ for model (4) are shown in Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10, and Figure 11 shows the phase diagram of $(V_1(t), I_1(t))$.
Example 2.
Let the parameters be $\Lambda = 1$; $a = 0.5$; $\beta_1 = 0.15$; $\beta_2 = 0.22$; $\gamma_1 = 0.35$; $\gamma_2 = 0.25$; $\varphi_1 = 0.5$; $\varphi_2 = 0.4$; $\varepsilon = 0.54$; $\delta = 0.3$; $k_1 = 0.2$; $k_2 = 0.15$; $\lambda = a + \varphi_1 + \varphi_2 = 1.4$; $\alpha_1 = a + \gamma_1 + \varepsilon = 1.39$; $\alpha_2 = a + \gamma_2 + \delta = 1.05$; $\sigma_1 = 0.8$; $\sigma_2 = 0.6$; $\sigma_3 = 0.6$; $\sigma_4 = 0.5$; $\sigma_5 = 0.5$. Direct calculations show that $R_0^* = -0.03 < 0$, which satisfies the condition of Theorem 2, so the disease almost surely goes extinct (see Figure 12 and Figure 13). In addition, $S(t)$, $V_1(t)$, $V_2(t)$ converge weakly to the unique invariant probability measures $\mu_1^*$, $\mu_2^*$, $\mu_3^*$ (see Figure 14, Figure 15 and Figure 16).

6. Conclusions and Discussion

The main purpose of this paper is to study the global existence and uniqueness of the solution of model (4) and the extinction and stationary distribution of the disease by introducing a threshold $R_0^*$. If $R_0^* < 0$, the number of infected individuals $I(t) = I_1(t) + I_2(t)$ tends to zero at an exponential rate, whereas the distributions of the susceptible population $S(t)$ and the two vaccinated populations $V_1(t)$ and $V_2(t)$ converge weakly to their boundary distributions. On the other hand, if $R_0^* > 0$, we obtain the existence and uniqueness of the invariant probability measure and the convergence of the transition probability to the invariant measure in the total variation norm. In addition, the support of the invariant probability measure is described. In this case the disease almost surely persists and the model has a unique stationary distribution. Finally, numerical simulations are carried out to verify the theoretical results.
In addition, most of the existing literature uses the method of constructing a Lyapunov function to prove the existence of a stationary distribution of the solution of the stochastic model (4). However, this approach does not work for all models. In this paper, the stationary distribution is proved directly from its definition, which applies to a wider class of models. Most of the stochastic epidemic models studied so far are second-order or third-order models. However, as a disease progresses, the virus can mutate as it spreads, allowing the disease to spiral out of control. Therefore, in order to describe such infectious diseases more accurately, and considering two kinds of vaccination for susceptible individuals, a fifth-order model was established: a virus-mutation epidemic model with double vaccination. We sincerely hope that more complete models of infectious diseases can be built in the future to make further progress.

Author Contributions

Formal analysis, H.C. and J.W.; funding acquisition, X.T.; software, W.Q. and W.L. All authors contributed equally and significantly in this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by National Natural Science Foundation of China (Nos. 12261104, 12126363).

Data Availability Statement

Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.

Conflicts of Interest

The authors declare that they have no competing interests.

Appendix A

e 11 = σ 1 σ 4 β 1 ( σ 4 Λ x + σ 1 Λ x σ 4 β 1 u 2 x + σ 1 β 1 u 2 x ) + σ 1 σ 5 β 2 ( σ 5 Λ y + σ 1 Λ y β 2 u 2 y ) σ 2 σ 5 k 1 β 2 u v y σ 3 σ 4 k 2 β 1 ( σ 4 σ 3 ) u w x ( σ 4 σ 5 2 ) β 2 ε u x ; e 21 = σ 1 ( σ 1 σ 2 ) 2 φ 1 Λ + σ 1 σ 4 2 β 1 φ 1 u x + σ 1 σ 5 2 β 2 φ 1 u y σ 1 σ 5 2 k 1 φ 1 u y σ 1 σ 5 2 β 2 k 1 u v y + σ 1 2 σ 2 φ 1 Λ + σ 1 2 σ 5 β 2 k 1 u v y σ 1 3 φ 1 Λ σ 2 σ 4 2 β 1 φ 1 u x σ 2 σ 5 2 β 2 φ 1 u y + σ 2 σ 5 2 k 1 φ 1 u y σ 2 σ 5 2 k 1 2 v 2 y + σ 2 2 σ 5 k 1 2 v 2 y σ 4 ( σ 1 σ 2 ) 2 β 1 φ 1 u x σ 4 σ 5 2 k 1 ε u x σ 5 ( σ 1 σ 2 ) 2 β 2 φ 1 u y + σ 5 ( σ 1 σ 2 ) 2 k 1 φ 1 u y + σ 5 ( σ 4 σ 5 ) 2 k 1 ε v x + σ 5 3 k 1 ε v x ; e 31 = σ 1 ( σ 1 σ 3 ) 2 φ 2 Λ + σ 1 σ 4 2 β 1 φ 2 u x σ 1 σ 4 2 k 2 φ 2 u x σ 1 σ 4 2 β 1 k 2 u w x + σ 1 σ 5 2 β 2 φ 2 u y + σ 1 2 σ 3 φ 2 Λ + σ 1 2 σ 4 β 1 k 2 u w x σ 1 3 φ 2 Λ σ 3 σ 4 2 β 1 φ 2 u x + σ 3 σ 4 2 k 2 φ 2 u x σ 3 σ 4 2 k 2 2 w 2 x σ 3 σ 5 2 β 2 φ 2 u y + σ 3 2 σ 4 k 2 2 w 2 x σ 4 ( σ 1 σ 3 ) 2 β 1 φ 2 u x + σ 4 ( σ 1 σ 3 ) 2 k 2 φ 2 u x σ 5 ( σ 1 σ 3 ) 2 β 2 φ 2 u y ;
e 41 = σ 1 σ 3 2 k 2 φ 2 u x + σ 1 σ 4 2 β 1 2 u x 2 + σ 1 σ 5 2 β 1 β 2 u x y σ 1 2 σ 4 β 1 2 u x 2 σ 1 2 σ 5 β 1 β 2 u x y 2 σ 1 3 β 1 Λ x + σ 3 ( σ 1 σ 3 ) 2 k 2 φ 2 u x σ 3 σ 4 2 k 2 2 w x 2 σ 3 2 σ 4 k 2 2 w x 2 σ 3 3 k 2 φ 2 u x ; e 51 = σ 1 σ 2 2 k 1 φ 1 u y + σ 1 σ 4 2 β 1 β 2 u x y + σ 1 σ 5 2 β 2 2 u y 2 + σ 1 ( σ 4 σ 5 ) 2 β 1 ε u x σ 1 ( σ 4 σ 5 ) 2 β 2 ε u x σ 1 2 σ 4 β 1 β 2 u x y σ 1 2 σ 4 β 1 ε u x + σ 1 2 σ 4 β 2 ε u x σ 1 2 σ 5 β 2 2 u y 2 + σ 1 2 σ 5 β 1 ε u x σ 1 2 σ 5 β 2 ε u x 2 σ 1 3 β 2 Λ y σ 2 ( σ 1 σ 2 ) 2 k 1 φ 1 u y σ 2 ( σ 4 σ 5 ) 2 k 1 ε v x + σ 2 σ 5 2 k 1 2 v y 2 + σ 2 2 σ 4 k 1 ε v x σ 2 2 σ 5 k 1 2 v y 2 σ 2 2 σ 5 k 1 ε v x σ 2 3 k 1 φ 2 u y + σ 3 ( σ 4 σ 5 ) 2 k 2 ε w x σ 3 2 σ 4 k 2 ε w x + σ 3 2 σ 5 k 2 ε w x ;
f 11 = ( σ 1 2 Λ + σ 4 2 β 1 u x + σ 5 2 β 2 u y ) [ 2 σ 1 σ 4 β 1 2 u x ( σ 1 σ 4 ) 2 σ 1 σ 5 β 2 2 u y σ 2 σ 5 k 1 β 2 v y σ 3 σ 4 k 2 β 1 w x ( σ 4 σ 3 ) σ 4 σ 5 β 2 ε x + σ 5 2 β 2 ε x ] σ 2 σ 5 k 1 β 2 u y [ ( σ 1 σ 2 ) 2 φ 1 u + σ 5 2 k 1 v y ] σ 3 σ 4 k 2 β 1 u x ( σ 4 σ 3 ) [ ( σ 1 σ 2 ) 2 φ 2 u + σ 4 2 k 2 w x ] ( σ 1 2 β 1 u x + σ 3 2 k 2 w x ) [ σ 1 σ 4 β 1 ( Λ ( σ 1 + σ 4 ) + β 1 u 2 ( σ 1 σ 4 ) ) σ 3 σ 4 k 2 β 1 u w ( σ 4 σ 3 ) + σ 5 β 2 ε u ( σ 5 σ 4 ) ] [ σ 1 2 β 1 u y + σ 2 2 k 1 v y + ( σ 4 σ 5 ) 2 ε x ] [ σ 1 σ 5 β 2 ( Λ ( σ 1 + σ 5 ) β 2 u 2 ) σ 2 σ 5 k 1 β 2 u v ] e 11 ( σ 4 2 β 1 x + σ 5 2 β 2 y ) e 41 σ 4 2 β 1 u e 51 σ 5 2 β 2 u ; f 21 = [ ( σ 1 σ 2 ) 2 φ 1 u + σ 5 2 k 1 v y ] [ σ 1 σ 4 2 β 1 φ 1 x + σ 1 σ 5 β 2 k 1 v y ( σ 1 σ 5 ) σ 2 σ 4 2 β 1 φ 1 x σ 4 ( σ 1 σ 2 ) 2 β 1 φ 1 x σ 4 σ 5 2 k 1 ε x + ( k 1 β 2 ) ( σ 5 2 φ 1 y ( σ 2 σ 5 σ 1 σ 5 + ( σ 1 σ 2 ) 2 ) ) ] + [ ( σ 1 σ 2 ) 2 φ 1 u + σ 5 2 k 1 v y ] [ σ 1 σ 5 β 2 k 1 u y ( σ 1 σ 5 ) + 2 σ 1 σ 5 k 1 2 v y ( σ 2 σ 5 ) + σ 5 k 1 ε x ( σ 4 2 2 σ 4 σ 5 + 2 σ 5 2 ) ] ( σ 1 2 β 1 u x + σ 3 2 k 2 w x ) [ σ 4 ( σ 1 σ 2 ) β 1 φ 1 u ( σ 4 σ 1 + σ 2 ) σ 4 σ 5 2 k 1 ε u + σ 5 k 1 ε v ( σ 4 2 2 σ 4 σ 5 + 2 σ 5 2 ) ( σ 1 2 β 2 u y + σ 2 2 k 1 v y + ( σ 4 σ 5 ) 2 ε x ) ] [ σ 1 σ 5 β 2 k 1 u v ( σ 1 σ 5 ) + σ 2 σ 5 k 1 2 v 2 ( σ 2 σ 5 ) + φ 1 u ( k 1 β 2 ) ( σ 1 σ 5 2 + σ 2 σ 5 2 + σ 5 ( σ 1 σ 2 ) 2 ) ] + e 11 ( σ 1 σ 2 ) 2 φ 1 σ 5 2 k 1 y ( e 21 + e 51 ) ; f 31 = ( σ 1 2 Λ + σ 4 2 β 1 u x + σ 5 2 β 2 u y ) [ σ 1 σ 4 2 k 1 ε x + σ 4 2 k 2 φ 2 x ( σ 3 σ 1 ) + σ 1 σ 4 β 1 k 2 w x ( σ 1 σ 4 ) + σ 5 β 2 φ 2 y ( σ 1 σ 5 σ 3 σ 5 ( σ 1 σ 3 ) 2 ) σ 4 β 1 φ 2 x ( σ 3 σ 4 + ( σ 1 σ 3 ) 2 ) + σ 4 ( σ 1 σ 3 ) 2 k 2 φ 2 x ] + [ ( σ 1 σ 3 ) 2 φ 2 u + σ 4 2 k 2 w x ] [ σ 1 σ 4 β 1 k 2 u x ( σ 1 σ 4 ) + 2 σ 3 σ 4 k 2 2 w x ( σ 3 σ 4 ) ] ( σ 1 2 β 1 u x + σ 3 2 k 2 w x ) [ σ 4 φ 2 u ( k 2 β 1 ) ( σ 1 σ 4 + σ 3 σ 4 + ( σ 1 σ 3 ) 2 ) + σ 1 σ 4 β 1 k 2 u w ( σ 1 σ 4 ) + σ 3 σ 4 k 2 2 w 2 ( σ 3 σ 4 ) ] [ σ 1 2 β 2 u y + σ 2 2 k 1 v y + ( σ 4 σ 5 ) 2 ε x ] [ σ 5 β 2 φ 2 u ( σ 1 σ 5 σ 3 σ 5 ( σ 1 σ 3 ) 2 ) ] + e 11 ( σ 1 σ 3 ) 2 φ 2 σ 4 2 k 2 ( e 31 x + e 41 w ) ; f 41 = ( σ 1 2 Λ + σ 4 2 β 1 u x + σ 5 2 β 2 u y ) [ σ 3 k 2 φ 2 x ( σ 1 2 σ 1 σ 3 ) + σ 1 σ 4 β 1 2 x 2 ( σ 4 σ 1 ) + σ 1 σ 5 β 1 β 2 x y ( σ 5 σ 1 ) ] + [ ( σ 1 σ 3 ) 2 φ 2 u + σ 4 2 k 2 w x ] [ σ 3 σ 4 k 2 2 x 2 ( σ 3 + σ 4 ) ] ( σ 1 2 β 1 u x + σ 3 2 k 2 w x ) [ σ 3 k 2 φ 2 u ( σ 1 2 σ 1 σ 3 ) + 2 σ 1 σ 4 β 1 2 u x ( σ 4 σ 1 ) + σ 1 σ 5 β 1 β 2 u y ( σ 5 σ 1 ) σ 3 σ 4 k 2 2 w x ( σ 3 + σ 4 ) 2 σ 1 3 β 1 Λ ] [ σ 1 2 β 2 u y + σ 2 2 k 1 v y + ( σ 4 σ 5 ) 2 ε x ] [ σ 1 σ 5 β 1 β 2 u x ( σ 5 σ 1 ) ] + e 11 σ 1 2 β 1 x + e 31 σ 3 2 k 2 x + e 41 ( σ 1 2 β 1 u + σ 3 2 k 2 w ) ;
f 51 = ( σ 1 2 Λ + σ 4 2 β 1 u x + σ 5 2 u y ) [ σ 1 σ 2 2 k 1 φ 1 y + σ 1 σ 4 β 1 β 2 u x ( σ 4 σ 1 ) + 2 σ 1 σ 5 β 2 2 u y ( σ 5 σ 1 ) 2 σ 1 3 β 2 Λ σ 2 k 1 φ 1 u ( σ 1 2 2 σ 1 σ 2 ) + 2 σ 2 σ 5 k 1 2 v y ( σ 5 σ 2 ) ] + [ ( σ 1 σ 2 ) 2 φ 1 u + σ 5 2 k 1 v y ] [ σ 2 k 1 ε x ( ( σ 4 σ 5 ) 2 + σ 2 σ 5 σ 2 σ 5 ) + σ 2 σ 5 k 1 2 y 2 ( σ 5 σ 2 ) ] + [ ( σ 1 σ 3 ) 2 φ 2 u + σ 4 2 k 2 w x ] [ σ 3 k 2 ε x ( ( σ 4 σ 5 ) 2 σ 3 σ 4 + σ 3 σ 5 ) ] ( σ 1 2 β 1 u x + σ 3 2 k 2 w x ) [ σ 1 σ 4 β 1 β 2 u y ( σ 4 σ 1 ) + σ 1 ε u ( ( σ 4 σ 5 ) 2 σ 1 σ 4 + σ 1 σ 5 ) + σ 2 k 1 ε v ( ( σ 4 σ 5 ) 2 + σ 2 σ 4 σ 2 σ 5 ) + σ 3 k 2 ε w ( ( σ 4 0 σ 5 ) 2 σ 3 σ 4 + σ 3 σ 5 ) ] [ σ 1 2 β 2 u y + σ 2 2 k 1 v y + ( σ 4 σ 5 ) 2 ε x ] [ σ 2 k 1 φ 1 u ( 3 σ 1 σ 2 σ 1 2 σ 2 2 ) + σ 1 σ 4 β 1 β 2 u x ( σ 4 σ 1 ) + 2 σ 1 σ 5 β 2 2 u y ( σ 5 σ 1 ) + 2 σ 2 σ 5 k 1 2 v y ( σ 5 σ 2 ) σ 2 3 k 1 φ 2 u ] + e 21 σ 2 2 k 1 y + e 41 ( σ 4 σ 5 ) 2 + ( σ 1 2 β 2 u + σ 2 2 k 1 v ) ( e 11 + e 51 ) .

Appendix B

a 11 = c 1 β 1 x β 2 y + σ 1 ϕ ; a 14 = β 1 u ; a 15 = β 2 y ; a 21 = c 2 k 1 y + σ 2 ϕ ; a 25 = k 1 v ; a 31 = φ 2 ; a 33 = c 3 k 2 x ; a 34 = k 2 w ; a 41 = β 1 x ; a 43 = β 1 x ; a 43 = k 2 x ; a 44 = β 1 u + k 2 w ; a 51 = β 1 y ; a 52 = k 1 y ; a 54 = ε ; a 55 = c 5 + β 2 u + k 1 v , b 11 = a 11 2 + a 14 a 41 + a 15 a 51 ; b 12 = a 15 a 52 ; b 13 = a 14 a 43 ; b 14 = a 11 a 14 + a 14 a 44 + a 15 a 54 ; b 15 = a 11 a 15 + a 15 a 55 ; b 21 = a 11 a 21 + a 21 a 22 + a 25 a 51 ; b 22 = a 22 2 + a 25 a 52 ; b 24 = a 14 a 21 + a 25 a 54 ; b 25 = a 15 a 21 + a 25 a 55 ; b 31 = a 11 a 31 + a 31 a 33 + a 34 a 41 ; b 33 = a 33 2 + a 34 a 43 ; b 34 = a 14 a 31 + a 33 a 34 + a 34 a 44 ; b 35 = a 15 a 31 ; b 41 = a 11 a 41 + a 31 a 43 + a 41 a 44 ; b 43 = a 33 a 43 + a 43 a 44 ; b 44 = a 14 a 41 + a 34 a 43 + a 44 2 ; b 45 = a 15 a 41 ; b 51 = a 11 a 51 + a 21 a 52 + a 41 a 54 + a 51 a 55 ; b 52 = a 22 a 52 + a 52 a 55 ; b 53 = a 43 a 54 ; b 54 = a 14 a 51 + a 44 a 54 + a 54 a 55 ; b 55 = a 15 a 51 + a 25 a 52 + a 55 2 , c 11 = a 11 b 11 + a 21 b 12 + a 31 b 13 + a 41 b 14 + a 51 b 15 ; c 12 = a 52 b 15 ; c 13 = a 33 b 13 + a 43 b 14 ; c 14 = a 14 b 11 + a 34 b 13 + a 44 b 14 + a 54 b 15 ; c 15 = a 15 b 11 + a 25 b 12 + a 55 b 15 ; c 21 = a 11 b 21 + a 22 b 22 + a 41 b 24 + a 51 b 25 ; c 22 = a 52 b 25 ; c 23 = a 43 b 24 ; c 24 = a 14 b 21 + a 44 b 24 + a 54 b 25 ; c 25 = a 15 b 21 + a 25 b 22 + a 55 b 25 ; c 31 = a 11 b 31 + a 31 b 33 + a 41 b 34 + a 51 b 35 ; c 32 = a 52 b 35 ; c 33 = a 33 b 33 + a 43 b 34 ; c 34 = a 14 b 31 + a 34 b 33 + a 44 b 34 + a 54 b 35 ; c 35 = a 15 b 31 + a 55 b 35 ; c 41 = a 11 b 41 + a 31 b 43 + a 41 b 44 + a 51 b 55 ; c 42 = a 52 b 45 ; c 43 = a 33 b 43 + a 43 b 44 ;
c 44 = a 14 b 41 + a 34 b 43 + a 44 b 44 + a 54 b 45 ; c 45 = a 15 b 41 + a 55 b 45 ; c 51 = a 11 b 51 + a 21 b 52 + a 31 b 53 + a 41 b 54 + a 51 b 55 ; c 52 = a 52 b 55 ; c 53 = a 33 b 53 + a 43 b 54 ; c 54 = a 14 b 51 + a 34 b 53 + a 44 b 54 + a 54 b 55 ; c 55 = a 15 b 51 + a 25 b 52 + a 55 b 55 ; d 11 = b 11 2 + b 12 b 21 + b 13 b 31 + b 14 b 41 + b 15 b 51 ; d 12 = b 11 b 12 + b 12 b 22 + b 15 b 52 ; d 13 = b 11 b 13 + b 13 b 33 + b 14 b 43 + b 15 b 53 ; d 14 = b 11 b 14 + b 12 b 24 + b 13 b 34 + b 14 b 44 + b 15 b 54 ; d 15 = b 11 b 15 + b 12 b 25 + b 13 b 35 + b 14 b 45 + b 15 b 55 ; d 21 = b 11 b 21 + b 21 b 22 + b 24 b 41 + b 25 b 51 ; d 22 = b 12 b 21 + b 22 2 + b 25 b 52 ; d 23 = b 13 b 21 + b 24 b 43 + b 25 b 53 ; d 24 = b 14 b 21 + b 22 b 24 + b 24 b 44 + b 25 b 54 ; d 25 = b 15 b 21 + b 22 b 25 + b 24 b 45 + b 25 b 55 ; d 31 = b 11 b 31 + b 31 b 33 + b 34 b 41 + b 35 b 51 ; d 32 = b 12 b 31 + b 35 b 52 ; d 33 = b 13 b 31 + b 33 2 + b 34 b 43 + b 35 b 53 ; d 34 = b 14 b 31 + b 33 b 34 + b 34 b 44 + b 35 b 54 ; d 35 = b 15 b 31 + b 33 b 35 + b 34 b 45 + b 35 b 55 ; d 41 = b 11 b 41 + b 31 b 43 + b 41 b 44 + b 45 b 51 ; d 42 = b 12 b 41 + b 45 b 52 ; d 43 = b 13 b 41 + b 33 b 43 + b 43 b 44 + b 45 b 53 ; d 44 = b 14 b 41 + b 34 b 43 + b 44 2 + b 45 b 54 ; d 45 = b 15 b 41 + b 35 b 43 + b 44 b 45 + b 45 b 44 ; d 51 = b 11 b 51 + b 21 b 52 + b 31 b 53 + b 41 b 54 + b 51 b 55 ; d 52 = b 12 b 51 + b 22 b 52 + b 52 b 55 ; d 53 = b 31 b 51 + b 33 b 53 + b 43 b 54 + b 53 b 55 ; d 54 = b 14 b 51 + b 24 b 52 + b 34 b 53 + b 44 b 54 + b 54 b 55 ; d 55 = b 15 b 51 + b 25 b 52 + b 35 b 53 + b 45 b 54 + b 55 2 .
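The coefficients above follow a simple structural pattern: if A = (a_ij) denotes the 5 × 5 matrix whose named entries are the a_ij appearing above (all other positions zero, with a_22 named implicitly through the b_ij formulas), then the listed b_ij, c_ij and d_ij appear to be exactly the entries of A^2, A^3 and A^4, respectively; for example, b_11 = a_11^2 + a_14 a_41 + a_15 a_51 is the (1,1) entry of A^2. The short sketch below illustrates this pattern numerically; the numerical entries of A are arbitrary placeholders, not values taken from model (4).

```python
import numpy as np

# Illustrative check of the pattern suggested by Appendix B:
# with A the sparse 5x5 matrix of the a_ij (unnamed entries are zero),
# the listed b_ij, c_ij, d_ij are the entries of A^2, A^3 and A^4.
# The numbers below are arbitrary placeholders, not model parameters.
A = np.zeros((5, 5))
A[0, 0], A[0, 3], A[0, 4] = -1.2, -0.4, -0.3                 # a11, a14, a15
A[1, 0], A[1, 1], A[1, 4] = 0.15, -0.9, -0.2                 # a21, a22, a25
A[2, 0], A[2, 2], A[2, 3] = 0.15, -0.8, -0.2                 # a31, a33, a34
A[3, 0], A[3, 2], A[3, 3] = 0.4, 0.2, -0.5                   # a41, a43, a44
A[4, 0], A[4, 1], A[4, 3], A[4, 4] = 0.3, 0.2, 0.05, -0.6    # a51, a52, a54, a55

B = A @ A                            # b_ij
C = np.linalg.matrix_power(A, 3)     # c_ij
D = np.linalg.matrix_power(A, 4)     # d_ij

# Spot-check two of the listed identities.
b11 = A[0, 0] ** 2 + A[0, 3] * A[3, 0] + A[0, 4] * A[4, 0]
c13 = A[2, 2] * B[0, 2] + A[3, 2] * B[0, 3]
print(np.isclose(B[0, 0], b11), np.isclose(C[0, 2], c13))    # True True
```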

References

1. Ruan, S.; Wang, W. Dynamical behavior of an epidemic model with a nonlinear incidence rate. J. Differ. Equ. 2003, 188, 135–163.
2. Anderson, R.M.; May, R.M. Population biology of infectious diseases: Part I. Nature 1979, 280, 361–367.
3. Wang, W. Global behavior of an SEIRS epidemic model with time delays. Appl. Math. Lett. 2002, 15, 423–428.
4. Meng, X.; Chen, L.; Wu, B. A delay SIR epidemic model with pulse vaccination and incubation times. Nonlinear Anal. Real World Appl. 2010, 11, 88–98.
5. Cai, L.; Xiang, J.; Li, X.; Lashari, A.A. A two-strain epidemic model with mutant strain and vaccination. J. Appl. Math. Comput. 2012, 40, 125–142.
6. Martcheva, M.; Iannelli, M.; Li, X.-Z. Subthreshold coexistence of strains: The impact of vaccination and mutation. Math. Biosci. Eng. 2007, 4, 287–317.
7. Baba, I.A.; Kaymakamzade, B.; Hincal, E. Two strain epidemic model with two vaccinations. Chaos Solitons Fractals 2018, 106, 342–347.
8. Kaymakamzade, B.; Hincal, E. Two-strain epidemic model with two vaccinations and two time delays. Qual. Quant. 2018, 52, 695–709.
9. Øksendal, B. Stochastic Differential Equations: An Introduction with Applications; Springer: Berlin, Germany, 2003.
10. Allen, L.J.S. An introduction to stochastic epidemic models. In Mathematical Epidemiology; Springer: Berlin, Germany, 2008; pp. 81–130.
11. Beddington, J.R.; May, R.M. Harvesting natural populations in a randomly fluctuating environment. Science 1977, 197, 463–465.
12. Liu, W. A SIRS epidemic model incorporating media coverage with random perturbation. Abstr. Appl. Anal. 2013, 2013, 764–787.
13. Gard, T.C. Introduction to Stochastic Differential Equations; Marcel Dekker: New York, NY, USA, 1988.
14. Mao, X.R. Stochastic Differential Equations and Their Applications; Horwood: Chichester, UK, 1997.
15. Beretta, E.; Kolmanovskii, V.; Shaikhet, L. Stability of epidemic model with time delays influenced by stochastic perturbations. Math. Comput. Simul. 1998, 45, 269–277.
16. Mao, X.R.; Marion, G.; Renshaw, E. Environmental Brownian noise suppresses explosions in population dynamics. Stoch. Process. Their Appl. 2002, 97, 95–110.
17. Yu, J.; Jiang, D.; Shi, N. Global stability of two-group SIR model with random perturbation. J. Math. Anal. Appl. 2009, 360, 235–244.
18. Britton, T. Stochastic epidemic models: A survey. Math. Biosci. 2010, 225, 24–35.
19. Ball, F.; Sirl, D.; Trapman, P. Analysis of a stochastic SIR epidemic on a random network incorporating household structure. Math. Biosci. 2010, 224, 53–73.
20. Jiang, D.; Ji, C.; Shi, N.; Yu, J. The long time behavior of DI SIR epidemic model with stochastic perturbation. J. Math. Anal. Appl. 2010, 372, 162–180.
21. Jiang, D.; Yu, J.; Ji, C.; Shi, N. Asymptotic behavior of global positive solution to a stochastic SIR model. Math. Comput. Model. 2011, 54, 221–232.
22. Gray, A.; Greenhalgh, D.; Hu, L.; Mao, X.R.; Pan, J. A stochastic differential equation SIS epidemic model. SIAM J. Appl. Math. 2011, 71, 876–902.
23. Cai, Y.; Wang, X.; Wang, W.; Zhao, M. Stochastic dynamics of a SIRS epidemic model with ratio-dependent incidence rate. Abstr. Appl. Anal. 2013, 2013, 415–425.
24. Dieu, N.T.; Nguyen, D.H.; Du, N.H.; Yin, G. Classification of asymptotic behavior in a stochastic SIR model. SIAM J. Appl. Dyn. Syst. 2016, 15, 1062–1084.
25. Du, N.H.; Nhu, N.N. Permanence and extinction for the stochastic SIR epidemic model. J. Differ. Equ. 2020, 269, 9619–9652.
26. Liu, W.B.; Zheng, Q.B. A stochastic SIS epidemic model incorporating media coverage in a two patch setting. Appl. Math. Comput. 2015, 262, 160–168.
27. Tan, Y.P.; Cai, Y.L.; Wang, X.Q.; Peng, Z.H.; Wang, K.; Yao, R.X.; Wang, W.M. Stochastic dynamics of an SIS epidemiological model with media coverage. Math. Comput. Simul. 2023, 204, 1–27.
28. Meyn, S.P.; Tweedie, R.L. Markov Chains and Stochastic Stability; Springer: London, UK, 1993.
29. Nummelin, E. General Irreducible Markov Chains and Non-Negative Operators; Cambridge University Press: Cambridge, UK, 1984.
30. Jarner, S.F.; Roberts, G.O. Polynomial convergence rates of Markov chains. Ann. Appl. Probab. 2002, 12, 224–247.
31. Kliemann, W. Recurrence and invariant measures for degenerate diffusions. Ann. Probab. 1987, 15, 690–707.
32. Higham, D.J. An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 2001, 43, 525–546.
Figure 1. Sample path of S(t).
Figure 2. Sample path of V1(t).
Figure 3. Sample path of V2(t).
Figure 4. Sample path of I1(t).
Figure 5. Sample path of I2(t).
Figure 6. Histogram of the probability density function of S(t).
Figure 7. Histogram of the probability density function of V1(t).
Figure 8. Histogram of the probability density function of V2(t).
Figure 9. Histogram of the probability density function of I1(t).
Figure 10. Histogram of the probability density function of I2(t).
Figure 11. Phase portrait of model (4).
Figure 12. Sample path of I1(t).
Figure 13. Sample path of I2(t).
Figure 14. Sample path of S(t).
Figure 15. Sample path of V1(t).
Figure 16. Sample path of V2(t).
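Sample paths and empirical densities of the kind shown in Figures 1–10 (and Figures 12–16 for the second parameter set) are commonly generated with the Euler–Maruyama scheme for stochastic differential equations described by Higham [32]. The sketch below is a minimal illustration of that scheme for a five-compartment system (S, V1, V2, I1, I2); the drift, noise intensities and initial data are placeholders with the qualitative structure of a double-vaccination, two-strain model, not the actual model (4) or the parameter values used for the figures.

```python
import numpy as np

# Minimal Euler-Maruyama sketch (cf. Higham [32]) for a 5-dimensional SDE
#   dX_t = f(X_t) dt + diag(sigma) X_t dB_t,   X = (S, V1, V2, I1, I2).
# All rates, noise intensities and initial values below are illustrative
# placeholders, not the parameters used in the paper's simulations.

def euler_maruyama(f, sigma, x0, T=200.0, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty((n + 1, len(x0)))
    x[0] = x0
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt), size=len(x0))      # Brownian increments
        x[k + 1] = x[k] + f(x[k]) * dt + sigma * x[k] * dB   # proportional noise
        x[k + 1] = np.maximum(x[k + 1], 0.0)                  # keep compartments non-negative
    return x

# Placeholder drift with the qualitative structure of a double-vaccination,
# two-strain model: recruitment, vaccination of susceptibles, two infection
# routes, and conversion of strain-1 infectives into strain-2 infectives.
Lam, mu, eps = 1.0, 0.1, 0.05
beta1, beta2, k1, k2 = 0.4, 0.3, 0.2, 0.2
phi1, phi2 = 0.15, 0.15

def drift(state):
    S, V1, V2, I1, I2 = state
    return np.array([
        Lam - beta1 * S * I1 - beta2 * S * I2 - (mu + phi1 + phi2) * S,
        phi1 * S - k1 * V1 * I2 - mu * V1,
        phi2 * S - k2 * V2 * I1 - mu * V2,
        beta1 * S * I1 + k2 * V2 * I1 - (mu + eps) * I1,
        beta2 * S * I2 + k1 * V1 * I2 + eps * I1 - mu * I2,
    ])

sigma = np.array([0.05, 0.05, 0.05, 0.05, 0.05])   # illustrative noise intensities
path = euler_maruyama(drift, sigma, x0=np.array([5.0, 1.0, 1.0, 0.5, 0.5]))
# path[:, 0] ... path[:, 4] give sample paths of S, V1, V2, I1, I2.
```

In the same way, histograms of the simulated values after a burn-in period (e.g. np.histogram(path[5000:, 0], bins=50)) give the kind of empirical densities shown in Figures 6–10.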