Article

Some Results of Stochastic Differential Equations

School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2024, 13(6), 405; https://doi.org/10.3390/axioms13060405
Submission received: 19 May 2024 / Revised: 14 June 2024 / Accepted: 14 June 2024 / Published: 16 June 2024
(This article belongs to the Special Issue Difference, Functional, and Related Equations)

Abstract

In this paper, there are two aims: the first is to obtain Schauder and Sobolev estimates for the one-dimensional heat equation; the second is to stabilize differential equations by stochastic feedback control based on discrete-time state observations. A nonhomogeneous Poisson process is used to show how the Schauder and Sobolev estimates for the one-dimensional heat equation yield their multidimensional analogs; the argument relies on the properties of a jump process. For the stabilization problem, the stability of an auxiliary system is established first; then, by comparing the original system with the auxiliary one and using the continuity method, stabilization of the original system is obtained. Both parts highlight the role of probability theory.

1. Introduction

For the classical theory of partial differential equations, Schauder and Sobolev estimates are central issues; see the book [1]. The regularity of partial differential equations has been studied intensively; since the literature is vast, we do not review it here. The regularity of stochastic partial differential equations has also been studied by many authors, e.g., stochastic evolution equations [2,3], stochastic parabolic equations [4,5], stochastic kinetic equations [6], and so on. For more information on the regularity of stochastic processes and random attractors, we refer the reader to [7,8,9,10]. In [11], Krylov and Priola used the Poisson stochastic process to obtain the Schauder and Sobolev estimates for the multidimensional heat equation from the one-dimensional case. More precisely, they first obtained the Schauder and Sobolev estimates for the following equation:
$$\partial_t u(t,x) = D_x^2 u(t,x) + f(t,x), \quad t\in(0,T),\ x\in\mathbb{R}, \qquad u(0,x) = 0, \quad x\in\mathbb{R}, \tag{1}$$
then, they derived the Schauder and Sobolev estimates for the multidimensional case, among other results. Following [11], in this paper a probability method is used to study the regularity of parabolic equations; see [12] for a similar method. For more information on the study of jump processes, see [13,14].
On the other hand, noise exists in the real world and can be advantageous. In the last century, some authors realized this point and conducted further research in this field; see [15]. Recently, Mao [16,17] obtained stabilization by discrete-time observation from the viewpoint of control theory. Many authors have considered similar questions. For example, You et al. [18] obtained the stabilization of hybrid systems by feedback control based on discrete-time state observations and considered several kinds of stability, including $H_\infty$ stability and asymptotic stability; Dong [19] obtained almost sure exponential stabilization by stochastic feedback control based on discrete-time observations; Li and Mao [20] obtained the stabilization of highly nonlinear hybrid stochastic differential delay equations by delay feedback control; Fei et al. [21] considered the stabilization of highly nonlinear hybrid systems by feedback control based on discrete-time state observations; Liu and Wu [22] obtained intermittent stochastic stabilization based on discrete-time observation with time delay; Shen et al. [23,24] obtained stabilization of hybrid stochastic systems by aperiodically intermittent control and stabilization of stochastic differential equations driven by G-Lévy processes with discrete-time feedback control; and Mao et al. [25] obtained stabilization by intermittent control for hybrid stochastic differential delay equations. Guo et al. [26] generalized the results of [16,17] to the polynomial case, similar to [27]. Recently, Lu et al. [28] obtained the stabilization of differently structured hybrid neutral stochastic systems by delay feedback control under a highly nonlinear condition. Global stabilization via output feedback of stochastic nonlinear time-delay systems with time-varying measurement error was established in [29]. Very recently, Zhao and Zhu [30,31] considered the stabilization of highly nonlinear neutral stochastic systems. The stabilization of stochastic systems is a hot topic.
However, there are some cases which have not been considered. For example, in the papers [17,18,19], the authors all assume that every component of the stochastic system is treated uniformly. In other words, for the stochastic system:
$$dX_t = f(X_t)\,dt + \sigma(X_t)\,dB(t), \qquad X_t|_{t=0} = X_0, \tag{2}$$
where $X_t = (X_t^1, \ldots, X_t^d)\in\mathbb{R}^d$ and $f = (f^1, \ldots, f^d): \mathbb{R}^d\to\mathbb{R}^d$. Mao [17] considered the case $\sigma(X_t) = AX_t$, where $A\in\mathbb{R}^{d\times d}$ and $B(t)$ is a one-dimensional Brownian motion. Moreover, Mao assumed that there exist positive constants $\rho_1, \rho_2$ such that:
$$|Ax|^2 \le \rho_1|x|^2, \qquad |x^{T}Ax|^2 \ge \rho_2|x|^4, \qquad |f(x)| \le \rho_1|x|.$$
It follows from these assumptions that every component in (2) must be treated uniformly; that is, for each component $X_t^i$, the drift term $f^i$ and the diffusion term are (almost surely) linear in $X_t^i$. If there is a component $X_t^i$ whose diffusion term vanishes, the results of [17] no longer hold. If the drift term does not satisfy the linear assumption, there are few results concerning this issue. Fei et al. [21] obtained the stabilization of highly nonlinear hybrid systems by feedback control based on discrete-time state observations, but we remark that they did not exploit the advantage of the noise. Moreover, from the point of view of saving cost, is it possible to place the observations on only part of the system?
In Ref. [17], Mao used two important properties of the stochastic system: one is the positivity of the solution, which ensures that the Itô formula can be applied to small-order moments ($0 < p < 1$); the other is the Markov property, which ensures that one only needs to find a point $k$ such that Lemma 3.6 of [17] holds. In this paper, we use a similar trick to deal with a simpler question. In addition, Li et al. [32] solved the problem in which the drift term does not satisfy linear growth, such as $|f(x)| \le \rho_1|x| + d_1$ for some positive constant $d_1$.
In the present paper, the first aim is to use the nonhomogeneous Poisson process to obtain some new results. The main difference between this paper and [11] is that the nonhomogeneous Poisson process is used here, whereas Krylov and Priola used the homogeneous Poisson process. The method of [11] is probabilistic, and the results are interesting.
Motivated by [17,21], the second aim of this paper is to compare the observations of the system. A coupled system with different feedback controls is considered. Furthermore, the two components have different stability properties, which differs from earlier results.
Throughout this paper, $T$ is a fixed positive number, $\mathbb{R}^d$ denotes Euclidean space, and $C^\gamma(\mathbb{R}^d)$, $\gamma\in(0,1)$, is the space of all real-valued functions $f$ on $\mathbb{R}^d$ with the norm:
$$\|f\|_{C^\gamma(\mathbb{R}^d)} = \sup_{x\in\mathbb{R}^d}|f(x)| + [f]_{C^\gamma(\mathbb{R}^d)} < +\infty,$$
where
$$[f]_{C^\gamma(\mathbb{R}^d)} = \sup_{x\ne y}\frac{|f(x) - f(y)|}{|x - y|^\gamma}.$$
As usual, we denote by $C^{2+\gamma}(\mathbb{R}^d)$ the space of real-valued twice continuously differentiable functions $f$ on $\mathbb{R}^d$ with the norm:
$$\|f\|_{C^{2+\gamma}(\mathbb{R}^d)} = \sup_{x\in\mathbb{R}^d}\big(|f(x)| + |Df(x)| + |D^2 f(x)|\big) + [D^2 f]_{C^\gamma(\mathbb{R}^d)},$$
where $Df$ is the gradient of $f$ and $D^2 f$ is its Hessian.
The rest of this paper is arranged as follows. In Section 2, some preliminaries are presented. Section 3 states and proves the main results, and Section 4 contains a discussion and conclusions.

2. Preliminaries

Consider the following Cauchy problem:
$$\partial_t u(t,x) = a(t)D_x^2 u(t,x) + f(t,x), \quad t\in(0,T),\ x\in\mathbb{R}, \qquad u(0,x) = 0, \quad x\in\mathbb{R}, \tag{3}$$
where $a(t)$ is a positive bounded function. Denote by $B_c((0,T), C_0^\infty(\mathbb{R}^d))$ the space of functions $\varphi$ such that $\varphi$ is a bounded Borel function, $\varphi(t,\cdot)\in C_0^\infty(\mathbb{R}^d)$ for any $t\in(0,T)$, for any $n = 0,1,\ldots$ the $C^n(\mathbb{R}^d)$-norms of $\varphi(t,\cdot)$ are bounded on $(0,T)$, and the supports of $\varphi(t,\cdot)$ belong to the same ball.
It follows from [33] that if $f$ belongs to $B_c((0,T), C_0^\infty(\mathbb{R}))$, then (3) has a solution $u(t,x)$ satisfying:
(i) $u$ is a continuous function on $[0,T]\times\mathbb{R}$;
(ii) for any fixed $t\in[0,T]$, $u$ belongs to $C^{2+\alpha}(\mathbb{R})$ and satisfies the estimate:
$$\sup_{t\in[0,T]}\|u(t,\cdot)\|_{C^{2+\alpha}(\mathbb{R})} \le N(T,\alpha)\sup_{t\in[0,T]}\|f(t,\cdot)\|_{C^\alpha(\mathbb{R})}. \tag{4}$$
Moreover, there is only one solution $u$, and it satisfies the following estimates:
$$\sup_{(t,x)\in[0,T]\times\mathbb{R}}|u(t,x)| \le T\sup_{(t,x)\in[0,T]\times\mathbb{R}}|f(t,x)|, \tag{5}$$
$$\sup_{t\in[0,T]}[D_x^2 u(t,\cdot)]_{C^\alpha(\mathbb{R})} \le N(\alpha)\sup_{t\in[0,T]}[f(t,\cdot)]_{C^\alpha(\mathbb{R})}, \tag{6}$$
$$\|D_x^2 u\|_{L_p((0,T)\times\mathbb{R})}^p \le N_p\|f\|_{L_p((0,T)\times\mathbb{R})}^p, \tag{7}$$
where L p -space is defined as usual.
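To make estimate (5) concrete, the following minimal numerical sketch (added here only as an illustration, with the hypothetical choices $a(t)\equiv 1$ and a compactly supported forcing $f$; it is not part of the argument) solves a problem of the form (3) by explicit finite differences and checks that $\sup|u| \le T\sup|f|$.

```python
import numpy as np

# Minimal sketch: explicit finite differences for u_t = a(t) u_xx + f(t, x), u(0, x) = 0,
# with hypothetical data a(t) = 1 and a compactly supported f, to illustrate estimate (5):
# sup |u| <= T * sup |f|.
T, L, nx = 1.0, 10.0, 401               # time horizon, spatial half-width, grid points
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2                        # stability restriction for the explicit scheme
nt = int(T / dt)

def a(t):                               # hypothetical diffusion coefficient
    return 1.0

def f(t, x):                            # hypothetical compactly supported forcing
    return np.where(np.abs(x) < 1.0, np.cos(t) * (1.0 - x**2), 0.0)

u = np.zeros_like(x)
sup_u, sup_f = 0.0, 0.0
for n in range(nt):
    t = n * dt
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (a(t) * lap + f(t, x))
    sup_u = max(sup_u, np.max(np.abs(u)))
    sup_f = max(sup_f, np.max(np.abs(f(t, x))))

print(f"sup|u| = {sup_u:.4f},  T * sup|f| = {T * sup_f:.4f}")   # expect sup|u| <= T*sup|f|
```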
Now, we recall some facts about the Poisson process. A nonhomogeneous Poisson process $\pi(t,\omega)$ ($\pi_t$ for short) is a Poisson process whose rate parameter $\lambda(t)$ is a function of time. The significant difference between the homogeneous and the nonhomogeneous Poisson process is that the latter is not a stationary process; thus, the nonhomogeneous Poisson process cannot be written as the sum of a sequence of i.i.d. (independent and identically distributed) random variables.
As usual, $\pi_t$ is a counting process with the following properties:
(i) $P(\pi_t - \pi_s = k) = \frac{[m(t) - m(s)]^k}{k!}\,e^{-[m(t) - m(s)]}$, where $m(t) = \int_0^t\lambda(s)\,ds$;
(ii) $\pi_t - \pi_s$ is independent of the trajectory $\{\pi_r,\ r\in[0,s]\}$.
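For readers who wish to experiment with the process, the following sketch (an illustration only; the rate $\lambda(t) = 1 + 0.5\sin t$ is a hypothetical choice) simulates a nonhomogeneous Poisson process by the standard thinning method and compares the empirical mean of $\pi_T$ with $m(T) = \int_0^T\lambda(s)\,ds$, in line with property (i).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-dependent rate and its integral m(t) = int_0^t lambda(s) ds.
lam = lambda t: 1.0 + 0.5 * np.sin(t)
lam_max = 1.5                                    # upper bound on lam, used for thinning
m = lambda t: t + 0.5 * (1.0 - np.cos(t))

def sample_path(T):
    """Jump times of a nonhomogeneous Poisson process on [0, T] via thinning."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate jump of a rate-lam_max process
        if t > T:
            return np.array(times)
        if rng.uniform() < lam(t) / lam_max:     # accept with probability lam(t)/lam_max
            times.append(t)

T, n_paths = 5.0, 20000
counts = np.array([sample_path(T).size for _ in range(n_paths)])   # pi_T for each path
print(f"empirical E[pi_T] = {counts.mean():.3f},  m(T) = {m(T):.3f}")
```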
For simplicity, only the two-dimensional heat equation is considered in this paper; the finite-dimensional case can be treated similarly. For $x, y\in\mathbb{R}$, we set $z = (x,y)\in\mathbb{R}^2$. For $l\in\mathbb{R}^2$, denote $D_l^2 = l^i l^j D_{ij}$, $D_i = D_{x^i} = \partial/\partial x^i$ and $D_{ij} = D_i D_j$, where $i, j = 1, 2$ and $x^1 = x$, $x^2 = y$. We obtain the following result.

3. Main Results

In this section, we prove the main results.

3.1. Regularity of Parabolic Equations by Using Probability Method

Similar to [11], we consider the following equation:
$$\partial_t u(t,x,y,\omega) = a(t)D_x^2 u(t,x,y,\omega) + f(t,x,y - h\pi_t(\omega)), \quad t>0,\ x\in\mathbb{R},\ y\in\mathbb{R}, \qquad u(0,x,y) = 0, \quad x\in\mathbb{R},\ y\in\mathbb{R}, \tag{8}$$
where $a(t)>0$ is a bounded Borel measurable function and $h\in\mathbb{R}$ is a parameter. As usual in probability theory, we do not indicate the dependence on $\omega$ in the sequel. From the one-dimensional result, there exists a unique solution $u(t,x,y)$, depending on $y$ and $\omega$ as parameters. Furthermore, estimates (4)–(7) hold for each $\omega\in\Omega$ and $y\in\mathbb{R}$ if we replace $u(t,x)$ and $f(t,x)$ with $u(t,x,y)$ and $f(t,x,y-h\pi_t)$, respectively.
The solution of (8) can be written as:
$$u(t,x,y+h\pi_t) = \int_0^t\big[a(s)D_x^2 u(s,x,y+h\pi_s) + f(s,x,y)\big]\,ds + \int_{(0,t]}g(s,x,y)\,d\pi_s, \tag{9}$$
where
$$g(s,x,y) = u(s,x,y+h+h\pi_{s-}) - u(s,x,y+h\pi_{s-}) \tag{10}$$
is the jump of the process $u(t,x,y+h\pi_t)$, regarded as a function of $t$, at the moment $s$ if $\pi_t$ has a jump at $s$. Here, $\pi_{s-}$ denotes the left limit of $\pi$ at $s$, so that $s\mapsto\pi_{s-}$ is left-continuous.
In order to prove the main result, the function g should be studied.
Lemma 1. 
For $g$ defined as in (10) and $t\le T$, we have:
$$E\int_{(0,t]}g(s,x,y)\,d\pi_s = \int_0^t\lambda(s)\big[v(s,x,y+h) - v(s,x,y)\big]\,ds,$$
where
$$v(t,x,y) := E\,u(t,x,y+h\pi_t).$$
Proof. 
Assume that $t = 1$ for simplicity. Fix $x$ and $y$, and write $g(s) = g(s,x,y)$. Note that $g$ is bounded on $\Omega\times(0,T)$, and thus, if we define:
$$g_n(s) = g(k2^{-n}) = u\big(k2^{-n}, x, y+h+h\pi_{k2^{-n}}\big) - u\big(k2^{-n}, x, y+h\pi_{k2^{-n}}\big)$$
for $s\in(k2^{-n}, (k+1)2^{-n}]$, $k = 0, 1, \ldots, 2^n-1$, then $g_n(s)\to g(s)$ as $n\to\infty$ for any $s\in(0,1]$ and $\omega\in\Omega$, and:
$$\xi_n := \int_{(0,t]}g_n(s)\,d\pi_s \longrightarrow \int_{(0,t]}g(s)\,d\pi_s =: \xi$$
for any $\omega\in\Omega$. The dominated convergence theorem implies that $E\xi_n \to E\xi$.
Notice that:
$$E\xi_n = \sum_{k=0}^{2^n-1}E\,g(k2^{-n})\big(\pi_{(k+1)2^{-n}} - \pi_{k2^{-n}}\big). \tag{11}$$
Since the nonhomogeneous Poisson process has independent increments, the expectations of the products on the right-hand side of (11) are equal to the products of the expectations, and since $E\pi_t = m(t)$, we arrive at:
$$E\xi_n = E\sum_{k=0}^{2^n-1}g(k2^{-n})\big[m((k+1)2^{-n}) - m(k2^{-n})\big] = E\int_0^1 g_n(s)\lambda(s)\,ds \to E\int_0^1 g(s)\lambda(s)\,ds = \int_0^1\lambda(s)\,E\,g(s)\,ds.$$
Noting that for any $s>0$ we have $\pi_{s-} = \pi_s$ almost surely, we obtain:
$$E\,g(s) = v(s,x,y+h) - v(s,x,y).$$
The proof is complete. □
Taking expectations on both sides of (9), we obtain the following result.
Lemma 2. 
Let $f\in B_c((0,T), C_0^\infty(\mathbb{R}^2))$, $h\in\mathbb{R}$ and $\lambda(t) > 0$ for all $t\in[0,T]$. Then, there exists a unique continuous function $v(t,x,y)$, $t\in[0,T]$, $x,y\in\mathbb{R}$, satisfying the equation:
$$\partial_t v(t,x,y) = a(t)D_x^2 v(t,x,y) + \lambda(t)\big[v(t,x,y+h) - v(t,x,y)\big] + f(t,x,y) \tag{12}$$
for $t\in(0,T)$, $x,y\in\mathbb{R}$, with zero initial condition, and such that $v(t,\cdot,y)\in C^{2+\alpha}(\mathbb{R})$ for any $t\in(0,T)$, $y\in\mathbb{R}$, and:
$$\sup_{(t,y)\in[0,T]\times\mathbb{R}}\|v(t,\cdot,y)\|_{C^{2+\alpha}(\mathbb{R})} \le N(T,\alpha)\sup_{(t,y)\in[0,T]\times\mathbb{R}}\|f(t,\cdot,y)\|_{C^\alpha(\mathbb{R})}.$$
Furthermore:
$$\sup_{(t,z)\in[0,T]\times\mathbb{R}^2}|v(t,z)| \le T\sup_{(t,z)\in[0,T]\times\mathbb{R}^2}|f(t,z)|,$$
$$\sup_{(t,y)\in[0,T]\times\mathbb{R}}[D_x^2 v(t,\cdot,y)]_{C^\alpha(\mathbb{R})} \le N(\alpha)\sup_{(t,y)\in(0,T)\times\mathbb{R}}[f(t,\cdot,y)]_{C^\alpha(\mathbb{R})},$$
$$\|D_x^2 v\|_{L_p((0,T)\times\mathbb{R}^2)}^p \le N_p\|f\|_{L_p((0,T)\times\mathbb{R}^2)}^p.$$
The proof of this lemma is similar to [11] (Lemma 2.2) and the details are omitted here.
Next, Equation (12) is studied in a way similar to the treatment of (3). More precisely, we consider $v(t,x,y)$, depending on $\omega$, as the unique solution of:
$$\partial_t v(t,x,y) = a(t)D_x^2 v(t,x,y) + \lambda(t)\big[v(t,x,y+h) - v(t,x,y)\big] + f(t,x,y+h\pi_t)$$
with zero initial condition. Then, it follows from the above computations that the function $w(t,x,y) = E\,v(t,x,y-h\pi_t)$ satisfies:
$$\partial_t w(t,x,y) = a(t)D_x^2 w(t,x,y) + \lambda(t)\big[w(t,x,y+h) - 2w(t,x,y) + w(t,x,y-h)\big] + f(t,x,y). \tag{13}$$
Furthermore, w ( t , x , y ) has the same estimates as in Lemma 2.
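The role of the choice $\lambda(t) = a(t)/h^2$ made in the proof of Theorem 1 below is that the jump term in (13) becomes a scaled centered second difference, which converges to $a(t)\partial_y^2$ as $h\to 0$. The following throwaway check (an illustration only, with a hypothetical test function) shows this convergence numerically.

```python
import math

# For a smooth w, [w(y+h) - 2 w(y) + w(y-h)] / h^2 -> w''(y) as h -> 0, which is why the
# choice lambda(t) = a(t)/h^2 turns the jump term of (13) into a(t) d^2/dy^2 in the limit.
w = math.sin                              # hypothetical test function, w''(y) = -sin(y)
y = 0.7
for h in (0.1, 0.01, 0.001):
    second_diff = (w(y + h) - 2.0 * w(y) + w(y - h)) / h ** 2
    print(f"h = {h:5.3f}: difference quotient = {second_diff:+.6f}, "
          f"error = {abs(second_diff + math.sin(y)):.2e}")       # error is O(h^2)
```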
Theorem 1. 
Let $a(t) > 0$ be a bounded Borel measurable function. Then, for any $f\in B_c((0,T), C_0^\infty(\mathbb{R}^2))$, there exists a unique solution $v(t,z)$ of the equation:
$$\partial_t v(t,z) = a(t)\Delta v(t,z) + f(t,z), \quad t>0,\ z\in\mathbb{R}^2, \qquad v(0,z) = 0, \quad z\in\mathbb{R}^2, \tag{14}$$
which is continuous on $[0,T]\times\mathbb{R}^2$. Moreover, $v(t,\cdot)\in C^{2+\alpha}(\mathbb{R}^2)$ and satisfies:
$$\sup_{(t,z)\in[0,T]\times\mathbb{R}^2}|v(t,z)| \le T\sup_{(t,z)\in[0,T]\times\mathbb{R}^2}|f(t,z)|,$$
$$\sup_{t\in[0,T]}[D_{ij}v(t,\cdot)]_{C^\alpha(\mathbb{R}^2)} \le N_0(\alpha)\sup_{t\in[0,T]}[f(t,\cdot)]_{C^\alpha(\mathbb{R}^2)},$$
$$\sup_{(t,z)\in[0,T]\times\mathbb{R}^2}[D_l^2 v(t,z+l\,\cdot)]_{C^\alpha(\mathbb{R}^2)} \le N_0(\alpha)\sup_{(t,z)\in(0,T)\times\mathbb{R}^2}[f(t,z+l\,\cdot)]_{C^\alpha(\mathbb{R}^2)},$$
$$\|D_l^2 v\|_{L_p((0,T)\times\mathbb{R}^2)}^p \le N_p\|f\|_{L_p((0,T)\times\mathbb{R}^2)}^p,$$
where N 0 ( α ) and N p are positive constants.
Proof of Theorem 1. 
Taking $\lambda(t) = a(t)/h^2$ in (13) and letting $h\to 0$, the solution $w = w_h$ of (13) converges to a function $v(t,x,y)$ which satisfies Equation (14). Furthermore, $v$ is continuous on $[0,T]\times\mathbb{R}^2$, is infinitely differentiable w.r.t. $(x,y)$ for any $t\in(0,T)$, and all the estimates in Lemma 2 hold true. Therefore, the following estimate obviously holds:
$$\sup_{(t,x,y)\in[0,T]\times\mathbb{R}^2}|v(t,x,y)| \le T\sup_{(t,x,y)\in[0,T]\times\mathbb{R}^2}|f(t,x,y)|.$$
Next, the rotation invariance of the Laplacian and the estimates of Lemma 2 will be used to derive the desired results. To this end, define $S$ as an orthogonal transformation of $\mathbb{R}^2$ by $Se_i = l_i$, $i = 1, 2$, where $(e_i)$ is the standard basis of $\mathbb{R}^2$, $l_1$ is a unit vector in $\mathbb{R}^2$ and $l_2$ is a unit vector orthogonal to $l_1$. Set:
$$f(t, xe_1 + ye_2) = f(t,x,y), \qquad v(t, xe_1 + ye_2) = v(t,x,y), \qquad S(x,y) = xl_1 + yl_2,$$
$$g(t,x,y) = f(t, S(x,y)), \qquad u(t,x,y) = v(t, S(x,y)),$$
then u satisfies
$$\partial_t u(t,x,y) = a(t)\Delta u(t,x,y) + g(t,x,y),$$
where the rotation invariance of the Laplacian operator is used.
It follows from Lemma 2 that:
$$\sup_{(t,y)\in[0,T]\times\mathbb{R}}\ \sup_{x_1\ne x_2}\frac{|D_x^2 u(t,x_1,y) - D_x^2 u(t,x_2,y)|}{|x_1 - x_2|^\alpha} \le N(\alpha)\sup_{(t,y)\in(0,T)\times\mathbb{R}}\ \sup_{x_1\ne x_2}\frac{|g(t,x_1,y) - g(t,x_2,y)|}{|x_1 - x_2|^\alpha}.$$
Notice that:
$$D_x^2 u(t,x,y) = D_{l_1}^2 v(t, S(x,y)) = D_{l_1}^2 v(t, xl_1 + yl_2),$$
and using the fact that the solution $v$ of (14) has continuous second-order derivatives w.r.t. $(x,y)$, we have, for any unit vector $l\in\mathbb{R}^2$:
$$\sup_{(t,z)\in[0,T]\times\mathbb{R}^2}\ \sup_{\mu\ne\nu}\frac{|D_l^2 v(t,\mu l + z) - D_l^2 v(t,\nu l + z)|}{|\mu - \nu|^\alpha} \le N(\alpha)\sup_{(t,z)\in(0,T)\times\mathbb{R}^2}\ \sup_{\mu\ne\nu}\frac{|f(t,\mu l + z) - f(t,\nu l + z)|}{|\mu - \nu|^\alpha}.$$
That is to say, we get:
$$\sup_{(t,z)\in[0,T]\times\mathbb{R}^2}[D_l^2 v(t,z+l\,\cdot)]_{C^\alpha(\mathbb{R}^2)} \le N(\alpha)\sup_{(t,z)\in(0,T)\times\mathbb{R}^2}[f(t,z+l\,\cdot)]_{C^\alpha(\mathbb{R}^2)}.$$
In particular, if we choose z = 0 , we get the estimate:
$$\sup_{t\in[0,T]}[D_{ij}v(t,\cdot)]_{C^\alpha(\mathbb{R}^2)} \le N_0(\alpha)\sup_{t\in[0,T]}[f(t,\cdot)]_{C^\alpha(\mathbb{R}^2)}.$$
Since the Jacobian of $S(x,y)$ equals 1, we have, for any unit vector $l\in\mathbb{R}^2$:
$$\int_0^T\!\!\int_{\mathbb{R}^2}|D_l^2 v(t,z)|^p\,dz\,dt \le N_p\int_0^T\!\!\int_{\mathbb{R}^2}|f(t,z)|^p\,dz\,dt.$$
The proof is complete. □
Remark 1. 
The results in this section are slightly different from those in [11]. If $a(t)\equiv 1$, so that $\lambda(t)\equiv\lambda$ is constant, then Theorem 1 is exactly the second part of [11]. The main difference is that we may take $\lambda(t) = a(t)/h^2$, so that the equation keeps the same form as in the one-dimensional case. Of course, in [11] (Section 3), Krylov and Priola used a suitable transformation to treat problem (3). Here, we emphasize that another stochastic process can be used to deal with problem (3).
One can also use a renewal process to study the regularity of parabolic equations. The difference is that, in Lemma 1, $E[\pi_{(k+1)2^{-n}} - \pi_{k2^{-n}}]$ would be different. For the parabolic equation, however, the Poisson process is the best choice.

3.2. Stabilization of Differential Equations Based on Discrete-Time Observation

In this section, a special system, which can be regarded as a one-sided coupling system, is studied. Our motivation is the question of whether it is possible to place the discrete-time observation on only part of the system. More precisely, consider the deterministic differential system:
$$\frac{d}{dt}X(t) = X(t)\big(\beta_2 Y(t) - \beta_1\big), \quad t>0, \qquad \frac{d}{dt}Y(t) = \alpha Y(t), \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 0, \tag{15}$$
where $\alpha, \beta_i > 0$, $i = 1, 2$. It is easy to see that the solution of (15) is:
$$X(t) = x_0\exp\Big\{\frac{\beta_2 y_0}{\alpha}\big(e^{\alpha t} - 1\big) - \beta_1 t\Big\}, \qquad Y(t) = y_0 e^{\alpha t},$$
so that, obviously, $(X(t), Y(t)) \to (\infty, \infty)$ as $t\to\infty$. Now, we want to achieve $(X(t), Y(t)) \to (0, 0)$ as $t\to\infty$ based on discrete-time observation. Firstly, if system (15) is treated as in [17], it is easy to see that the term $X(t)(\beta_2 Y(t) - \beta_1)$ does not satisfy the assumptions of [17], which require that $|f(x) - f(y)| \le \gamma|x - y|$ and $f(0) = 0$ with $\gamma > 0$. Note that $|x_1 y_1 - x_2 y_2| \le |y_1|\,|x_1 - x_2| + |x_2|\,|y_1 - y_2|$, so we cannot find a positive constant $\gamma$ such that $|x_1 y_1 - x_2 y_2| \le \gamma\,|(x_1, y_1) - (x_2, y_2)|$. Therefore, the results of [17] cannot be used directly. However, the first result of [17] can be used for the second equation. To do so, we recall the first result of [17]. Consider the scalar linear stochastic differential equation:
$$dX(t) = \alpha X(t)\,dt + \sigma X\big([t/\tau]\tau\big)\,dB(t) \tag{16}$$
on $t\ge 0$ with initial value $X(0) = x_0\in\mathbb{R}$, where $\tau$ is a positive constant. In fact, Equation (16) can be regarded as a stochastic differential delay equation if one defines $\delta: [0,\infty)\to[0,\tau)$ by $\delta(t) = t - k\tau$ for $t\in[k\tau, (k+1)\tau)$, $k = 0, 1, 2, \ldots$ For more information on geometric Brownian motion, see [34] for details.
Proposition 1 
([17] (Theorem 2.1)). If $\alpha - \frac{\sigma^2}{2} < 0$, then there is a positive number $\tau^*$ such that for any initial value $x_0\in\mathbb{R}$, the solution of (16) satisfies:
$$\limsup_{t\to\infty}\frac{1}{t}\log(|X(t)|) < 0 \quad a.s.$$
provided $\tau\in(0,\tau^*)$. In practice, we can choose a positive number $p\in(0,1)$ for which:
$$\alpha - \frac{(1-p)\sigma^2}{2} < 0$$
and let $\tau^*$ be the smallest positive root of the equation $H_1(\tau) + H_2(\tau) = 0$, where:
$$H_1(\tau) = p\big(e^{\alpha\tau} - 1\big) + \frac{p(p-1)}{4\alpha}\big(e^{2\alpha\tau} - 1\big)$$
and
$$H_2(\tau) = \frac{p(p-1)}{2}\big(e^{\alpha\tau}-1\big)^2 + \frac{p(p-1)(p-2)}{6}\Big[\big(e^{\alpha\tau}-1\big)^3 + 3\big(e^{\alpha\tau}-1\big)\hat{\sigma}^2\Big] + \frac{p(p-2)(2p-7)}{8}\Big[\big(e^{\alpha\tau}-1\big)^4 + 6\big(e^{\alpha\tau}-1\big)^2\hat{\sigma}^2 + 3\hat{\sigma}^4\Big] + \frac{p(p-2)(p-4)}{8}\Big[\big(e^{\alpha\tau}-1\big)^5 + 10\big(e^{\alpha\tau}-1\big)^3\hat{\sigma}^2 + 15\big(e^{\alpha\tau}-1\big)\hat{\sigma}^4\Big] + \frac{p(p-2)(p-4)}{48}\Big[\big(e^{\alpha\tau}-1\big)^6 + 15\big(e^{\alpha\tau}-1\big)^4\hat{\sigma}^2 + 45\big(e^{\alpha\tau}-1\big)^2\hat{\sigma}^4 + 15\hat{\sigma}^6\Big],$$
in which $\hat{\sigma}^2 = \frac{\sigma^2}{2\alpha}\big(e^{2\alpha\tau} - 1\big)$.
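To see Proposition 1 in action, the following Euler–Maruyama sketch (an illustration only; the values $\alpha = 1$, $\sigma = 2$, $\tau = 0.01$ are hypothetical and are not computed from $H_1$ and $H_2$) simulates Equation (16) and prints $t^{-1}\log|X(t)|$, which should eventually be negative since $\alpha - \sigma^2/2 < 0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama for Eq. (16): dX = alpha*X dt + sigma*X([t/tau]tau) dB(t),
# with hypothetical parameters alpha = 1, sigma = 2 (so alpha - sigma^2/2 = -1 < 0)
# and a small observation gap tau; the state entering the noise term is only
# updated at the discrete observation times k*tau.
alpha, sigma, tau = 1.0, 2.0, 0.01
T, dt = 50.0, 1e-3
obs_every = round(tau / dt)              # time steps between two observations

x = x_obs = 1.0
for n in range(int(T / dt)):
    if n % obs_every == 0:
        x_obs = x                        # discrete-time observation X([t/tau]tau)
    dB = np.sqrt(dt) * rng.standard_normal()
    x = x + alpha * x * dt + sigma * x_obs * dB

print(f"(1/t) log|X(t)| at t = {T}: {np.log(abs(x)) / T:.3f}")   # expected to be negative
```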
Now, Proposition 1 is used to deal with problem (15). To this end, consider the following stochastic system:
$$dX(t) = X(t)\big(\beta_2 Y(t) - \beta_1\big)\,dt, \quad t>0, \qquad dY(t) = \alpha Y(t)\,dt + \sigma Y\big([t/\tau]\tau\big)\,dB(t), \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 0, \tag{17}$$
where $\sigma\in\mathbb{R}$ is as in Proposition 1.
Theorem 2. 
If $\alpha - \frac{\sigma^2}{2} < 0$, then there is a positive number $\tau^*$ such that for any initial value $x_0\in\mathbb{R}$, the solution of (17) satisfies:
$$\limsup_{t\to\infty}\frac{1}{t}\log(|X(t)|) \le -\beta_1, \qquad \limsup_{t\to\infty}\frac{1}{t}\log(|Y(t)|) < 0 \quad a.s.$$
provided $\tau\in(0,\tau^*)$.
Proof. 
Firstly, it follows from Proposition 1 that:
$$\limsup_{t\to\infty}\frac{1}{t}\log(|Y(t)|) < 0 \quad a.s.$$
That is to say, there exist positive constants $C$ and $\lambda$ such that $|Y(t)| \le Ce^{-\lambda t}$ for all $t\ge 0$. Substituting this into the first equation of (17), we get:
$$\frac{d}{dt}\ln(X(t)) = \beta_2 Y(t) - \beta_1,$$
which implies that:
$$X(t) \le x_0\exp\Big\{\frac{\beta_2 C}{\lambda}\big(1 - e^{-\lambda t}\big) - \beta_1 t\Big\} \to 0 \quad \text{as } t\to\infty.$$
The proof is complete. □
Remark 2. 
System (17) is often called a nonstrict system because the variable Y does not depend on X. Similarly, one can deal with a nonautonomous differential system by using the result of [26]:
$$\frac{d}{dt}X(t) = X(t)\Big(\frac{2Y(t)}{1+t} - 1\Big), \quad t>0, \qquad \frac{d}{dt}Y(t) = \frac{1}{1+t}Y(t), \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 1. \tag{19}$$
Obviously, the solution of (19) is $(X(t), Y(t)) = \big(x_0 e^{(2y_0-1)t},\ y_0(1+t)\big)$, which goes to $(\infty,\infty)$ as $t\to\infty$. It follows from the results of [26] that the solution of the following equation:
$$dY(t) = \frac{Y(t)}{1+t}\,dt + \sqrt{\frac{4+2p}{p(1-p)}}\cdot\Big(\frac{1}{1+t}\Big)^{1/2}Y\big([t/\tau]\tau\big)\,dB(t)$$
will decay polynomially provided that $\tau$ is sufficiently small. Thus, the solution of (19) will go to zero as time goes to infinity.
Next, consider the following stochastic system:
$$dX(t) = \big[f_1(X(t)) + f_2(Y(t))\big]\,dt + \sigma(X(t))\,dB(t), \quad t>0, \qquad dY(t) = f(X(t), Y(t))\,dt, \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 0, \tag{20}$$
where $f_1$ and $f_2$ are continuous functions. It is hard to obtain the exact form of the solution, but one may assume that the solutions of (20) do not decay. The aim here is to design a feedback control $\big(u(X([t/\tau_1]\tau_1)),\, v(Y([t/\tau_2]\tau_2))\big)$ so that the controlled system:
$$dX(t) = \Big[f_1(X(t)) + f_2(Y(t)) + u\big(X([t/\tau_1]\tau_1)\big)\Big]\,dt + \sigma(X(t))\,dB(t), \quad t>0, \qquad dY(t) = f(X(t), Y(t))\,dt + v\big(Y([t/\tau_2]\tau_2)\big)\,d\hat{B}(t), \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 0,$$
becomes asymptotically stable, where τ 1 , τ 2 > 0 . For simplicity, we will add the linear feedback control:
$$dX(t) = \Big[f_1(X(t)) + f_2(Y(t)) + aX\big([t/\tau_1]\tau_1\big)\Big]\,dt + \sigma(X(t))\,dB(t), \quad t>0, \qquad dY(t) = f(X(t), Y(t))\,dt + bY\big([t/\tau_2]\tau_2\big)\,d\hat{B}(t), \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 0, \tag{21}$$
where $B(t)$ and $\hat{B}(t)$ are independent Brownian motions and $a, b\in\mathbb{R}$. Note that this feedback control is different from those in [17,32]: in earlier results, the authors often assume that the system has a uniform structure, and the methods used in [17,32] are not suitable for system (21). We need the following assumptions.
(A1) Assume that $f$ is globally Lipschitz continuous in its second argument, uniformly with respect to $x\in\mathbb{R}$:
$$|f(x, y_1) - f(x, y_2)| \le \alpha|y_1 - y_2|, \quad \forall x\in\mathbb{R},$$
where $\alpha > 0$. We also assume that $f(x, 0) = 0$ for any $x\in\mathbb{R}$.
For the second equation of (21), the following result holds.
Lemma 3. 
Let Assumption (A1) hold and $\alpha < b^2/2$. Then, there is a positive number $\tau_2^*$ such that for any initial value $y_0\in\mathbb{R}$, the solution of the second equation of (21) satisfies:
$$\limsup_{t\to\infty}\frac{1}{t}\log(|Y(t)|) < 0 \quad a.s.$$
provided $\tau_2\in(0,\tau_2^*)$. In practice, we can choose a pair of constants $p, \varepsilon\in(0,1)$ such that $\tau_2^*$ is the unique root of the following equation:
$$2^p K(\tau_2, p)\Big(e^{(4\alpha + 3b^2)\left[\tau_2 + \log(2^p/\varepsilon)/\gamma\right]} - 1\Big)^{p/2} = 1 - \varepsilon, \tag{22}$$
where
$$\gamma = p\Big[\frac{(1-p)}{2}b^2 - \alpha\Big], \qquad K(\tau_2, p) = \Big[\frac{4\tau_2 b^2(\alpha^2\tau_2 + b^2)}{2\alpha + b^2}\Big]^{p/2}.$$
Proof. 
The proof of this lemma is similar to that of [17] (Theorem 3.3). To this end, consider the auxiliary equation:
$$d\tilde{Y}(t) = f(X(t), \tilde{Y}(t))\,dt + b\tilde{Y}(t)\,d\hat{B}(t). \tag{23}$$
Under Assumption (A1), it follows from [35] (Lemma 5.1) that $P(\tilde{Y}(t; y_0)\ne 0 \text{ for all } t\ge 0) = 1$. Choose $p\in(0,1)$ such that $\gamma := p\big[\frac{(1-p)b^2}{2} - \alpha\big] > 0$. The Itô formula implies that:
$$d|\tilde{Y}(t)|^p = p\Big[|\tilde{Y}(t)|^{p-2}\tilde{Y}(t)f(X(t),\tilde{Y}(t)) + \frac{1}{2}(p-1)b^2|\tilde{Y}(t)|^{p}\Big]\,dt + pb|\tilde{Y}(t)|^{p}\,d\hat{B}(t) \le -\gamma|\tilde{Y}(t)|^p\,dt + pb|\tilde{Y}(t)|^p\,d\hat{B}(t).$$
This implies that:
$$E\big[|\tilde{Y}(t; y_0)|^p\big] \le |y_0|^p e^{-\gamma t}. \tag{24}$$
Let $Y(t; y_0)$ and $\tilde{Y}(t; y_0)$ be the solutions of the second equation of (21) and of (23), respectively. By using Assumption (A1) and the Itô formula, we have:
$$E\big|Y(t; y_0) - \tilde{Y}(t; y_0)\big|^p \le |y_0|^p K(\tau_2, p)\big(e^{(4\alpha + 3b^2)t} - 1\big)^{p/2}. \tag{25}$$
The proof of (25) is exactly as in [17] (Lemma 3.5). Choose a pair of constants $p, \varepsilon$ such that $\gamma > 0$, and let $\tau_2^* > 0$ be the unique root of (22); uniqueness follows from the monotonicity of the left-hand side of (22). We claim that for each $\tau_2\in(0, \tau_2^*)$, there exists a pair of constants $\bar{k}$ and $\lambda$ such that the solution of the second equation of (21) satisfies:
$$E\big|Y(i\bar{k}\tau_2; y_0)\big|^p \le |y_0|^p e^{-\lambda i\bar{k}\tau_2}, \quad i = 1, 2, \ldots$$
In fact, choose a suitable positive integer $\bar{k}$ such that $2^p e^{-\gamma\bar{k}\tau_2} \le \varepsilon$; then (24) implies that $E|\tilde{Y}(\bar{k}\tau_2)|^p \le |y_0|^p e^{-\gamma\bar{k}\tau_2}$. Consequently, (25) gives:
$$E\big|Y(\bar{k}\tau_2)\big|^p \le 2^p E\big|\tilde{Y}(\bar{k}\tau_2)\big|^p + 2^p E\big|Y(\bar{k}\tau_2) - \tilde{Y}(\bar{k}\tau_2)\big|^p \le |y_0|^p\Big[\varepsilon + 2^p K(\tau_2, p)\big(e^{(4\alpha + 3b^2)\bar{k}\tau_2} - 1\big)^{p/2}\Big] \le |y_0|^p e^{-\lambda\bar{k}\tau_2},$$
where the choices of $\bar{k}$ and $\lambda$ are used:
$$\frac{\log(2^p/\varepsilon)}{\gamma\tau_2} \le \bar{k} < 1 + \frac{\log(2^p/\varepsilon)}{\gamma\tau_2}, \qquad \varepsilon + 2^p K(\tau_2, p)\big(e^{(4\alpha + 3b^2)\bar{k}\tau_2} - 1\big)^{p/2} = e^{-\lambda\bar{k}\tau_2}.$$
Then, by using the time-homogeneity of (21), we obtain the claim. The proof is completed by the standard steps (the Borel–Cantelli lemma). □
Now, it follows from Lemma 3 that there exists a constant $C_0 > 0$ such that $|f_2(Y(t))| \le C_0$ for all $t\ge 0$ almost surely. Next, the first equation of (21) will be considered. To this end, the following assumption is given.
Assumption (A2): there is a pair of positive constants $\alpha_1$ and $\alpha_2$ such that:
$$|f_1(x) - f_1(y)| \le \alpha_1|x - y|, \qquad |\sigma(x) - \sigma(y)| \le \alpha_2|x - y|.$$
Moreover, there exists a positive number $\beta$ such that:
$$2(x - y)\big[f_1(x) - f_1(y) + a(x - y)\big] + \big(\sigma(x) - \sigma(y)\big)^2 \le -\beta|x - y|^2$$
for all $(x, y)\in\mathbb{R}^2$.
In order to introduce stability in distribution, some notation is needed. Denote by $C_\tau$ the family of continuous functions $\xi: [0,\tau]\to\mathbb{R}$ with norm $\|\xi\|_\tau = \sup_{s\in[0,\tau]}|\xi(s)|$. Denote by $\mathcal{P}(C_\tau)$ the family of probability measures on $C_\tau$. For $P_1, P_2\in\mathcal{P}(C_\tau)$, define the Wasserstein metric $d_L$ by:
$$d_L(P_1, P_2) = \sup_{\phi\in\mathbb{L}}\Big|\int_{C_\tau}\phi(\xi)\,P_1(d\xi) - \int_{C_\tau}\phi(\xi)\,P_2(d\xi)\Big|,$$
where
$$\mathbb{L} = \Big\{\phi: C_\tau\to\mathbb{R}\ \big|\ |\phi(\xi) - \phi(\zeta)| \le \|\xi - \zeta\|_\tau \ \text{and}\ |\phi(\xi)| \le 1 \ \text{for all}\ \xi, \zeta\in C_\tau\Big\}.$$
The first equation of (21) is said to be asymptotically stable in distribution if there exists a probability measure $\mu_{\tau_1}\in\mathcal{P}(C_{\tau_1})$ such that:
$$\lim_{k\to\infty}d_L\big(\mathcal{L}(X(k\tau_1)), \mu_{\tau_1}\big) = 0,$$
where $\mathcal{L}(X(k\tau_1))$ denotes the law of the segment $\{X(k\tau_1 + s): s\in[0,\tau_1]\}$.
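Since $d_L$ involves a supremum over Lipschitz functionals on path space, it is rarely computed exactly; in practice one monitors a computable surrogate. The sketch below (an illustration only, with a hypothetical ergodic scalar SDE and the one-dimensional Wasserstein-1 distance between empirical samples of $X(k\tau_1)$ as a crude stand-in for $d_L$) shows how convergence of the laws at the observation times can be checked numerically.

```python
import numpy as np

rng = np.random.default_rng(2)

# Crude illustration of asymptotic stability in distribution: simulate many paths of a
# hypothetical ergodic scalar SDE dX = (1 - X) dt + 0.5 dB(t), record X at the observation
# times k*tau1, and compare the empirical laws at successive observation times using the
# one-dimensional Wasserstein-1 distance (a computable surrogate; d_L itself is defined
# on the path space C_tau).
tau1, dt, n_paths, n_obs = 0.5, 1e-3, 5000, 12
steps_per_obs = round(tau1 / dt)

x = np.full(n_paths, 5.0)                         # start all paths far from equilibrium
samples = []
for _ in range(n_obs):
    for _ in range(steps_per_obs):
        x = x + (1.0 - x) * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(n_paths)
    samples.append(np.sort(x))

for k in range(n_obs - 1):
    w1 = np.mean(np.abs(samples[k] - samples[k + 1]))   # W1 between sorted empirical samples
    print(f"k = {k:2d}:  W1(law of X(k*tau1), law of X((k+1)*tau1)) ~ {w1:.4f}")
```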
Define:
$$H_1(r) = 6r\big[r(\alpha_1 + |a|)^2 + \alpha_2^2\big]\,e^{6r(r\alpha_1^2 + \alpha_2^2)}, \qquad H_3(r) = \frac{2H_1(r)}{1 - 2H_1(r)},$$
$$H_2(r) = \big[4r(2r\alpha_1^2 + \alpha_2^2) + 4r^2 a^2\big]\,e^{4r(2r\alpha_1^2 + \alpha_2^2)}, \qquad H_4(r) = \frac{2H_2(r)}{1 - 2H_2(r)}$$
for small $r > 0$ such that $2H_i(r) < 1$, $i = 1, 2$. Let $r_1^*, \ldots, r_4^*$ be the unique positive roots of the following equations:
$$2H_1(r_1^*) = 1, \qquad \beta = 2|a|H_3(r_2^*), \qquad 2H_2(r_3^*) = 1, \qquad \beta = 2|a|H_4(r_4^*).$$
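Numerically, $r_1^*, \ldots, r_4^*$ can be located by a simple scan-and-bisect routine, since each of the functions involved is continuous and increasing in $r$ on the relevant interval. The sketch below (an illustration only; the values of $\alpha_1$, $\alpha_2$, $a$, $\beta$ are hypothetical, and the $H_i$ are taken from the display above) computes $\tau_1^* = r_1^*\wedge r_2^*\wedge r_3^*\wedge r_4^*$.

```python
import math

# Hypothetical parameters for Assumption (A2); H1-H4 are the expressions displayed above.
alpha1, alpha2, a, beta = 1.0, 0.5, -2.0, 1.0

def H1(r):
    return 6.0 * r * (r * (alpha1 + abs(a)) ** 2 + alpha2 ** 2) * math.exp(6.0 * r * (r * alpha1 ** 2 + alpha2 ** 2))

def H2(r):
    return (4.0 * r * (2.0 * r * alpha1 ** 2 + alpha2 ** 2) + 4.0 * r ** 2 * a ** 2) * math.exp(4.0 * r * (2.0 * r * alpha1 ** 2 + alpha2 ** 2))

def H3(r):
    return 2.0 * H1(r) / (1.0 - 2.0 * H1(r))

def H4(r):
    return 2.0 * H2(r) / (1.0 - 2.0 * H2(r))

def root(g, lo, hi, iters=100):
    """Bisection for the unique zero of an increasing function g with g(lo) < 0 < g(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r1 = root(lambda r: 2.0 * H1(r) - 1.0, 0.0, 1.0)                   # 2 H1(r1*) = 1
r3 = root(lambda r: 2.0 * H2(r) - 1.0, 0.0, 1.0)                   # 2 H2(r3*) = 1
r2 = root(lambda r: 2.0 * abs(a) * H3(r) - beta, 0.0, 0.999 * r1)  # beta = 2|a| H3(r2*), r2* < r1*
r4 = root(lambda r: 2.0 * abs(a) * H4(r) - beta, 0.0, 0.999 * r3)  # beta = 2|a| H4(r4*), r4* < r3*
tau1_star = min(r1, r2, r3, r4)
print(f"r1*={r1:.4f}, r2*={r2:.4f}, r3*={r3:.4f}, r4*={r4:.4f}, tau1*={tau1_star:.4f}")
```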
By using the method of [32], we can obtain the following lemma.
Lemma 4. 
Let Assumption (A2) hold and set $\tau_1^* = r_1^*\wedge r_2^*\wedge r_3^*\wedge r_4^*$. Then, for each $\tau_1 < \tau_1^*$, the first equation of (21) is asymptotically stable in distribution.
Combining Lemmas 3 and 4, we get the following result.
Theorem 3. 
Let Assumptions (A1) and (A2) hold and $\alpha < b^2/2$. If $\tau_1 < \tau_1^*$ and $\tau_2 < \tau_2^*$, then the first equation of (21) is asymptotically stable in distribution and the second equation of (21) is almost surely exponentially stable.
Lastly, some examples are given to support our results. Consider:
$$dX(t) = \big[f_1(X(t)) + Y(t)\big]\,dt + \sigma_1 X(t)\,dB(t), \quad t>0, \qquad dY(t) = \alpha Y(t)\,dt, \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 0, \tag{28}$$
where f 1 and f 2 are continuous functions. Obviously, the solution of (28) will increase exponentially. In order to make the solution stable, consider the following system:
$$dX(t) = \Big[f_1(X(t)) + f_2(Y(t)) + aX\big([t/\tau_1]\tau_1\big)\Big]\,dt + \sigma_1 X(t)\,dB(t), \quad t>0, \qquad dY(t) = \alpha Y(t)\,dt + bY\big([t/\tau_2]\tau_2\big)\,dB_1(t), \quad t>0, \qquad X(0) = x_0, \quad Y(0) = y_0 > 0, \tag{29}$$
where $B(t)$ and $B_1(t)$ are two independent Brownian motions and $f_1$ and $f_2$ are continuous bounded functions. Firstly, it follows from [17] that, for $\alpha < b^2/2$ and sufficiently small $\tau_2$, the solution of the second equation of (29) is almost surely exponentially stable. Then, it follows from the results of [32] that the first equation of (29) is asymptotically stable in distribution.
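The following Euler–Maruyama sketch (an illustration only; the bounded drifts $f_1(x) = \sin x$, $f_2(y) = \arctan y$ and the parameter values are hypothetical) simulates the controlled system (29): with $\alpha < b^2/2$ the discretely observed noise drives $Y$ to zero exponentially fast, while $X$ stays moderate under the feedback $aX([t/\tau_1]\tau_1)$ with $a < 0$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama sketch for the controlled system (29), with hypothetical bounded drifts
# f1(x) = sin(x), f2(y) = arctan(y) and hypothetical parameters alpha = 1, b = 2
# (so alpha - b^2/2 < 0), a = -3, sigma1 = 0.5, tau1 = tau2 = 0.01.
alpha, b, a, sigma1 = 1.0, 2.0, -3.0, 0.5
tau1 = tau2 = 0.01
T, dt = 30.0, 1e-3
obs1, obs2 = round(tau1 / dt), round(tau2 / dt)

x, y = 2.0, 2.0
x_obs, y_obs = x, y
for n in range(int(T / dt)):
    if n % obs1 == 0:
        x_obs = x                        # discrete-time observation used in the X-feedback
    if n % obs2 == 0:
        y_obs = y                        # discrete-time observation used in the Y-noise
    dB, dB1 = np.sqrt(dt) * rng.standard_normal(2)
    x = x + (np.sin(x) + np.arctan(y) + a * x_obs) * dt + sigma1 * x * dB
    y = y + alpha * y * dt + b * y_obs * dB1

print(f"(1/T) log|Y(T)| = {np.log(abs(y)) / T:.3f}  (negative: a.s. exponential decay)")
print(f"X(T) = {x:.3f}  (remains moderate; illustrates stability in distribution)")
```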
Another example is obtained by taking $f(x, y) = \alpha y x + x^2$ in (21); then it is easy to check that Assumption (A1) holds, and thus Theorem 3 applies.

4. Discussion and Conclusions

In the theory of partial differential equations (PDEs), the regularity of solutions is an important issue. In the classical theory, we have the $L_p$ theory and the Schauder theory for the regularity of solutions. In this paper, we first give another proof of the Schauder and Sobolev estimates for the one-dimensional parabolic equation. The method used here is probabilistic: we use the nonhomogeneous Poisson process to show how knowing the Schauder and Sobolev estimates for the one-dimensional heat equation allows one to derive their multidimensional analogs. The idea was introduced by Krylov and Priola [11]; in this paper, their idea is generalized to the nonhomogeneous Poisson process.
Meanwhile, it is well known that noise can stabilize ordinary differential equations, and that time delay can also stabilize them. Recently, Mao [17] used discrete-time stochastic feedback control to stabilize ordinary differential equations, with the discrete-time feedback control added to every equation of the system. This paper shows that, from the perspective of saving observation costs, the discrete-time feedback control can be added to only part of the system.
The results of this paper improve the regularity theory of partial differential equations and have potential applications in control theory.

Author Contributions

Conceptualization, S.G. and W.L.; methodology, S.G. and G.L.; formal analysis, S.G. and W.L.; investigation, G.L.; writing—original draft preparation, S.G., W.L. and G.L.; writing—review and editing, G.L.; supervision, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jiangsu Provincial Double-Innovation Doctor Program JSSCBS20210466 and Qing Lan Project and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (No. KYCX21 0932).

Data Availability Statement

No data are used.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
D f is the gradient of f and D 2 f is its Hessian.
E [ X ] denotes the expectation of X.

References

  1. Chen, Y. Second Order Parabolic Partial Differential Equations; Beijing University Press: Beijing, China, 2003. [Google Scholar]
  2. Zou, G.; Lv, G.; Wu, J.-L. On the regularity of weak solutions to space-time fractional stochastic heat equations. Statist. Probab. Lett. 2018, 139, 84–89. [Google Scholar] [CrossRef]
  3. Breit, D.; Hofmanová, M. On time regularity of stochastic evolution equations with monotone coefficients. C. R. Math. Acad. Sci. Paris 2016, 354, 33–37. [Google Scholar] [CrossRef]
  4. Du, K.; Liu, J. On the Cauchy problem for stochastic parabolic equations in Holder spaces. Trans. Am. Math. Soc. 2019, 371, 2643–2664. [Google Scholar] [CrossRef]
  5. Lv, G.; Gao, H.; Wei, J.; Wu, J.-L. BMO and Morrey-Campanato estimates for stochastic convolutions and Schauder estimates for stochastic parabolic equations. J. Differ. Equ. 2019, 266, 2666–2717. [Google Scholar] [CrossRef]
  6. Fedrizzi, E.; Flandoli, F.; Priola, E.; Vovelle, J. Regularity of stochastic kinetic equations. Electron. J. Probab. 2017, 22, 42. [Google Scholar] [CrossRef]
  7. Agresti, A.; Veraar, M. Stability properties of stochastic maximal Lp-regularity. J. Math. Anal. Appl. 2020, 482, 123553. [Google Scholar] [CrossRef]
  8. Cui, H.; Langa, J.; Li, Y. Regularity and structure of pullback attractors for reaction-diffusion type systems without uniqueness. Nonlinear Anal. 2016, 140, 208–235. [Google Scholar] [CrossRef]
  9. Liu, K. On regularity property of retarded Ornstein-Uhlenbeck processes in Hilbert spaces. J. Theoret. Probab. 2012, 25, 565–593. [Google Scholar] [CrossRef]
  10. Liu, Y.; Zhai, J. Time regularity of generalized Ornstein-Uhlenbeck processes with Levy noises in Hilbert spaces. J. Theoret. Probab. 2016, 29, 843–866. [Google Scholar] [CrossRef]
  11. Krylov, N.; Priola, E. Poisson stochastic process and basic Schauder and Sobolev estimates in the theory of parabolic equations. Arch. Ration. Mech. Anal. 2017, 225, 1089–1126. [Google Scholar] [CrossRef]
  12. Yang, S.; Zhang, T. Backward stochastic differential equations and Dirichlet problems of semilinear elliptic operators with singular coefficients. Potential Anal. 2018, 49, 225–245. [Google Scholar] [CrossRef]
  13. Desch, G.; Londen, S.-O. Regularity of stochastic integral equations driven by Poisson random measures. J. Evol. Equ. 2017, 17, 263–274. [Google Scholar] [CrossRef]
  14. Zhou, G. Global well-posedness of a class of stochastic equations with jumps. Adv. Differ. Equ. 2013, 2013, 175. [Google Scholar] [CrossRef]
  15. Arnold, L. Random Dynamical Systems; Springer: Berlin/Heidelberg, Germany, 1998; ISBN 3-540-63758-3. [Google Scholar]
  16. Mao, X. Stabilization of continuous-time hybrid stochastic differential equations by discrete-time feedback control. Automatica 2013, 12, 3677–3681. [Google Scholar] [CrossRef]
  17. Mao, X. Almost sure exponential stabilization by discrete-time stochastic feedback control. IEEE Trans. Autom. Control. 2016, 61, 1619–1624. [Google Scholar] [CrossRef]
  18. You, S.; Liu, W.; Lu, J.; Mao, X.; Qiu, Q. Stabilization of hybrid systems by feedback control based on discrete-time state observations. SIAM J. Control Optim. 2015, 53, 905–925. [Google Scholar] [CrossRef]
  19. Dong, R. Almost sure exponential stabilization by stochastic feedback control based on discrete-time observations. Stoch. Anal. Appl. 2018, 36, 561–583. [Google Scholar] [CrossRef]
  20. Li, X.; Mao, X. Stabilisation of highly nonlinear hybrid stochastic differential delay equations by delay feedback control. Automatica 2020, 112, 108657. [Google Scholar] [CrossRef]
  21. Fei, C.; Fei, W.; Mao, X.; Xia, D.; Yan, L. Stabilization of highly nonlinear hybrid systems by feedback control based on discrete-time state observations. IEEE Trans Autom. Control 2020, 65, 2899–2912. [Google Scholar] [CrossRef]
  22. Liu, L.; Wu, Z. Intermittent stochastic stabilization based on discrete-time observation with time delay. Syst. Control Lett. 2020, 137, 104626. [Google Scholar] [CrossRef]
  23. Shen, G.; Xiao, R.; Yin, X.; Zhang, J. Stabilization for hybrid stochastic systems by aperiodically intermittent control. Nonlinear Anal. Hybrid Syst. 2021, 39, 100990. [Google Scholar] [CrossRef]
  24. Shen, G.; Wu, X.; Yin, X. Stabilization of stochastic differential equations driven by G-Levy process with discrete-time feedback control. Discrete Contin. Dyn. Syst. Ser. B 2021, 26, 755–774. [Google Scholar] [CrossRef]
  25. Mao, W.; Jiang, Y.; Hu, L.; Mao, X. Stabilization by intermittent control for hybrid stochastic differential delay equations. Discret. Contin. Dyn. Syst. Ser. B 2022, 27, 569–581. [Google Scholar] [CrossRef]
  26. Guo, S.; Lv, G.; Zhang, Y. Almost Surely Polynomial Stabilization by Discrete-Time Feedback Control. Submitted.
  27. Liu, W. Polynomial stability of highly non-linear time-changed stochastic differential equations. Appl. Math. Lett. 2021, 119, 107233. [Google Scholar] [CrossRef]
  28. Lu, B.; Zhu, Q.; Li, S. Stabilization of differently structured hybrid neutral stochastic systems by delay feedback control under highly nonlinear condition. J. Franklin Inst. 2023, 360, 2089–2115. [Google Scholar] [CrossRef]
  29. Wang, H.; Zhu, Q. Global stabilization via output feedback of stochastic nonlinear time-delay systems with time-varying measurement error: A Lyapunov-Razumikhin approach. Internat. J. Robust Nonlinear Control 2022, 32, 7554–7574. [Google Scholar] [CrossRef]
  30. Zhao, Y.; Zhu, Q. Stabilization of highly nonlinear neutral stochastic systems with Markovian switching by periodically intermittent feedback control. Internat. J. Robust Nonlinear Control 2022, 32, 10201–10214. [Google Scholar] [CrossRef]
  31. Zhao, Y.; Zhu, Q. Stabilization of stochastic highly nonlinear delay systems with neutral term. IEEE Trans. Automat. Control 2023, 68, 2544–2551. [Google Scholar] [CrossRef]
  32. Li, X.; Liu, W.; Luo, Q.; Mao, X. Stabilisation in distribution of hybrid stochastic differential equations by feedback control based on discrete-time state observations. Automatica 2022, 140, 110210. [Google Scholar] [CrossRef]
  33. Krylov, N. The Calderón–Zygmund theorem and parabolic equations in $L_p(\mathbb{R}; C^{2+\alpha})$. Ann. Scuola Norm. Sup. Pisa Cl. Sci. 2002, 1, 799–820. [Google Scholar]
  34. Cherstvy, A.; Vinod, D.; Aghion, E.; Sokolov, I.; Metzler, R. Scaled geometric Brownian motion features sub- or superexponential ensemble-averaged, but linear time-averaged mean-squared displacements. Phys. Rev. E 2021, 103, 062127. [Google Scholar] [CrossRef] [PubMed]
  35. Mao, X.; Yuan, C. Stochastic Differential Equations with Markovian Switching; Imperial College Press: London, UK, 2006. [Google Scholar]