Article

Simulating Stochastic Differential Equations with Conserved Quantities by Improved Explicit Stochastic Runge–Kutta Methods

Department of Mathematics, Harbin Institute of Technology at Weihai, Weihai 264209, China
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(12), 2195; https://doi.org/10.3390/math8122195
Submission received: 11 November 2020 / Revised: 5 December 2020 / Accepted: 7 December 2020 / Published: 9 December 2020

Abstract
Explicit numerical methods have a great advantage in computational cost, but they usually fail to preserve the conserved quantities of the original stochastic differential equations (SDEs). To overcome this problem, two improved versions of explicit stochastic Runge–Kutta methods are given so that the improved methods can preserve the conserved quantity of the original SDEs in the Stratonovich sense. In addition, to deal with SDEs with multiple conserved quantities, a strategy is presented so that the improved methods can preserve multiple conserved quantities. The mean-square convergence and the conserved quantity-preserving ability of the proposed methods are proved. Numerical experiments are implemented to support the theoretical results.

1. Introduction

Stochastic differential equations (SDEs) model processes in practical applications more realistically by taking stochastic effects into consideration. The theory of SDEs has been widely applied in many research fields, such as stochastic control, electronic engineering, financial economics, stochastic neural networks, and population dynamics [1]. As is well known, exact solutions of SDEs are difficult to obtain because of their nonlinearity and coupling; scholars have therefore turned their attention to constructing efficient numerical methods to simulate the solutions of SDEs. In recent decades, many aspects of the theory of numerical methods for SDEs have been investigated (see, for example, [2,3,4,5,6,7,8,9,10] and references therein). Generally, numerical methods for SDEs can be classified into three categories: (1) explicit methods, which are easy to implement but only suitable for non-stiff problems [11,12]; (2) semi-implicit methods, in which the implicitness is restricted to the drift coefficient [11,13,14]; and (3) implicit methods, in which the implicitness appears in both the drift and diffusion coefficients [11,15,16]. Among these, the stochastic Runge–Kutta method is one of the most classic and most commonly used, because it has high accuracy and stability and is relatively easy to implement on a computer [17,18,19].
When constructing numerical methods for SDEs, it is natural to seek discrete systems that preserve the conserved quantities of the original SDEs. Such methods, which simulate the qualitative behavior of the original SDEs more accurately, are called conserved quantity-preserving methods and show great superiority over general ones in long-time numerical simulation. Examples of conserved quantity-preserving methods are the stochastic linear projection method [19], the stochastic discrete gradient method [20], the stochastic average vector field method [21], stochastic Lie group integrators [22,23], and conserved quantity-preserving methods for Itô SDEs [24,25]. In addition, the composition technique is commonly used to improve the efficiency of numerical methods [20,22,26,27]. Most of the conserved quantity-preserving methods mentioned above are implicit and involve solving large implicit nonlinear systems, which increases the computational cost. Although some implicit methods can theoretically be proved to preserve the conserved quantity of the original SDEs exactly, the errors incurred in solving the implicit nonlinear equations make the computed solutions approximate rather than exact. Thus, even theoretically conserved quantity-preserving implicit methods are only approximately preserving in practical applications. Therefore, when constructing numerical methods, it is not necessary to ensure that the numerical solutions preserve the conserved quantity exactly; we only need to keep it as close to the conserved quantity of the original SDEs as possible. From this point of view, and based on the fact that explicit numerical methods are superior with respect to computational cost, we study improved versions of explicit stochastic Runge–Kutta (ESRK) methods such that the conserved quantities are preserved as accurately as possible.
The remainder of this paper is organized as follows. In Section 2, some preliminaries about stochastic Runge–Kutta methods, convergence, and conserved quantities are presented. In Section 3, we propose two improved versions of the ESRK method. It is shown that the improved methods are more efficient than the original method in preserving a conserved quantity and that they have the same mean-square convergence order as the original method. In addition, a strategy for applying the improved methods to SDEs with multiple conserved quantities is presented. In Section 4, the theoretical results are verified through numerical experiments, and the proposed methods are compared with classical conserved quantity-preserving methods to show their superiority in computational efficiency.

2. Preliminaries

In this paper, without loss of generality, we consider the general autonomous Stratonovich SDE
$$\mathrm{d}x(t) = g_0(x(t))\,\mathrm{d}t + \sum_{j=1}^{r} g_j(x(t)) \circ \mathrm{d}W_j(t), \quad t \in [t_0, t_0+T], \qquad x(t_0) = x_0 \in \mathbb{R}^d, \tag{1}$$
where $W_j(t)$, $j = 1, \ldots, r$, are independent one-dimensional standard Wiener processes defined on a complete filtered probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}_{t \in [t_0, t_0+T]})$ fulfilling the usual conditions. A probability space is an ordered triple $(\Omega, \mathcal{F}, P)$ consisting of an arbitrary non-empty set $\Omega$ called the sample space, a $\sigma$-algebra $\mathcal{F}$ of subsets of $\Omega$ called events, and a probability measure $P$ on $\mathcal{F}$ assigning to each event $A \in \mathcal{F}$ a number $P(A) \in [0,1]$ called its probability [11]. A filtration is an increasing family $\{\mathcal{F}_t\}_{t \geq 0}$ of sub-$\sigma$-algebras of $\mathcal{F}$. A filtration is right continuous if $\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s$ for all $t \geq 0$, and it is said to satisfy the usual conditions if it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets [1]. The functions $g_j : \mathbb{R}^d \to \mathbb{R}^d$, $j = 0, 1, \ldots, r$, satisfy conditions under which (1) has a unique solution [1].
Divide the time interval $[t_0, t_0+T]$ into $N$ equal parts by the grid points $t_n = t_0 + nh$, $n = 0, 1, \ldots, N$, with $h = T/N$. The numerical approximation at time $t_n$ is denoted by $y_n \approx x(t_n)$. The general form of $s$-stage ESRK methods for solving (1) is given by
$$Y_i = x_0 + \sum_{k=0}^{r} \sum_{j=1}^{i-1} Z_{ij}^{(k)} g_k(Y_j), \quad i = 1, \ldots, s, \qquad y_1 = x_0 + \sum_{k=0}^{r} \sum_{j=1}^{s} z_j^{(k)} g_k(Y_j), \tag{2}$$
which can be represented as the Butcher tableau
$$\begin{array}{c|c|c|c} Z^{(0)} & Z^{(1)} & \cdots & Z^{(r)} \\ \hline z^{(0)} & z^{(1)} & \cdots & z^{(r)} \end{array},$$
where $Z_{ij}^{(k)} = 0$, $k = 0, \ldots, r$, if $i \leq j$.
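As a concrete illustration of scheme (2), one step can be sketched in plain Python. This is only a sketch under stated assumptions: the state is a list of floats, and the coefficient arrays `Z[k][i][j]` and `z[k][j]` are assumed to be assembled anew for each step, already containing the step size and the Wiener increments (as in the Platen tableau of Section 4).

```python
def esrk_step(x0, g, Z, z):
    """One step of an s-stage explicit stochastic Runge-Kutta method (2).

    x0 : current state, a list of d floats
    g  : list of vector fields; g[0] is the drift g_0, g[1..r] the diffusions
    Z  : Z[k][i][j], strictly lower triangular in (i, j) -- the method is explicit
    z  : z[k][j], the stage weights
    The coefficients are assumed to already contain h and the Wiener
    increments for the current step.
    """
    s, d = len(z[0]), len(x0)
    Y = []
    for i in range(s):
        Yi = list(x0)
        for k in range(len(g)):
            for j in range(i):                 # Z[k][i][j] = 0 for j >= i
                gk = g[k](Y[j])
                for m in range(d):
                    Yi[m] += Z[k][i][j] * gk[m]
        Y.append(Yi)
    y1 = list(x0)
    for k in range(len(g)):
        for j in range(s):
            gk = g[k](Y[j])
            for m in range(d):
                y1[m] += z[k][j] * gk[m]
    return y1
```

With $r = 0$ and the classical Heun coefficients, this reduces to a deterministic two-stage Runge–Kutta step, which gives a quick sanity check of the stage recursion.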
It is well known that the theory of stochastic B-series is closely related to the investigation of the stochastic Runge–Kutta method, and the stochastic B-series can be represented by the colored, rooted trees and the elementary differentials.
Definition 1
([17]). The set of $(r+1)$-colored, rooted trees
$$T = \{\emptyset\} \cup T_0 \cup \cdots \cup T_r$$
is defined as follows.
  • Denote by $\bullet_l = [\emptyset]_l \in T_l$ the tree with only one vertex of color $l$, $l = 0, \ldots, r$;
  • Denote by $\tau = [\tau_1, \ldots, \tau_\mu]_l \in T_l$ the tree formed by joining the subtrees $\tau_1, \ldots, \tau_\mu$, each by a single branch, to a common root of color $l$.
Thus, T l is the set of trees with root of color l, and T is the union of all these sets.
Definition 2
([17]). For a given colored, rooted tree $\tau = [\tau_1, \ldots, \tau_\mu]_l \in T_l$, the order of $\tau$ is defined as
$$\operatorname{ord}(\tau) = n_d + \tfrac{1}{2} n_s,$$
where $n_d$ is the number of nodes with color 0 and $n_s$ is the number of nodes with colors $l = 1, \ldots, r$.
Definition 3 
([17]). For $\tau \in T$, the elementary differential is a map $F(\tau) : \mathbb{R}^d \to \mathbb{R}^d$ defined by
  • $F(\emptyset)(x_0) = x_0$,
  • $F(\bullet_l)(x_0) = g_l(x_0)$,
  • if $\tau = [\tau_1, \ldots, \tau_\mu]_l \in T_l$, then $F(\tau)(x_0) = g_l^{(\mu)}(x_0)\big(F(\tau_1)(x_0), \ldots, F(\tau_\mu)(x_0)\big)$.
Let $\rho(\tau)$, $\alpha(\tau)$, and $\gamma(\tau)$ denote the number of nodes of $\tau$, the number of possible different monotonic labelings of $\tau$ with the root labeled first, and the density of $\tau$, respectively. If $\tau = [\tau_1, \ldots, \tau_\mu]_l$, then
$$\rho(\tau) = 1 + \sum_{j=1}^{\mu} \rho(\tau_j), \qquad \gamma(\tau) = \rho(\tau) \prod_{j=1}^{\mu} \gamma(\tau_j), \qquad \alpha(\tau) = \binom{\rho(\tau) - 1}{\rho(\tau_1), \ldots, \rho(\tau_\mu)} \prod_{j=1}^{\mu} \alpha(\tau_j)\, \frac{1}{\nu_1!\, \nu_2! \cdots \nu_q!},$$
where $\nu_1, \ldots, \nu_q$ denote the numbers of equal trees among $\tau_1, \ldots, \tau_\mu$.
The exact solution of (1) can be written as
$$x(h) = \sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\, J(\tau)\, \alpha(\tau)\, F(\tau)(x_0),$$
where
$$J(\tau)(h) = \int_0^h \prod_{j=1}^{\mu} J(\tau_j)(s) \circ \mathrm{d}W_l(s), \qquad J(\bullet_l)(h) = W_l(h), \qquad W_0(s) = s.$$
Moreover, for $\tau = [\tau_1, \ldots, \tau_\mu]_l \in T_l$, denote
$$\Phi(\tau) = \prod_{j=1}^{\mu} Z^{(l)} \Phi(\tau_j), \qquad \Phi(\bullet_l) = e, \qquad a(\tau) = z^{(l)} \Phi(\tau);$$
we can write the ESRK method (2) as
$$y_1 = \sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\, a(\tau)\, \alpha(\tau)\, F(\tau)(x_0). \tag{10}$$
Denote
$$l_{n+1} = y_{n+1} - x(t_{n+1}) = \sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!} \big(J(\tau) - a(\tau)\big)\, \alpha(\tau)\, F(\tau)(x_0),$$
and
$$\epsilon_N = x(T) - y_N,$$
as the local and global errors of the ESRK method (2), respectively. We then have the following convergence lemma.
Lemma 1
([17]). Assume that $g_i$, $i = 0, \ldots, r$, satisfy a uniform Lipschitz condition and have bounded derivatives. If
$$|\mathbb{E}(l_{n+1})| = O(h^{p+1})$$
and
$$\big(\mathbb{E}|l_{n+1}|^2\big)^{1/2} = O(h^{p+\frac{1}{2}}),$$
then
$$\big(\mathbb{E}|\epsilon_N|^2\big)^{1/2} = O(h^{p}).$$
Definition 4.
A differentiable function I is called a conserved quantity of SDE (1) if
$$\nabla I(x)^\top g_j(x) = 0, \quad j = 0, 1, \ldots, r, \tag{16}$$
holds for all $x \in \mathbb{R}^d$.
Indeed, if $x(t)$ is the exact solution of (1), then by the Stratonovich chain rule and (16), we have
$$\mathrm{d}I(x(t)) = \nabla I(x(t))^\top g_0(x(t))\,\mathrm{d}t + \sum_{j=1}^{r} \nabla I(x(t))^\top g_j(x(t)) \circ \mathrm{d}W_j(t) = 0,$$
which implies that $I(x(t)) = I(x_0)$ holds along the exact solution of (1).

3. Improved ESRK Methods

As mentioned above, general conserved quantity-preserving methods are implicit, which means their computational cost is higher than that of explicit methods. However, as in the deterministic case [28], the ESRK method cannot preserve a conserved quantity of the original SDEs, even when the conserved quantity is quadratic. Moreover, exactly conserved quantity-preserving implicit methods are unrealistic in practical computer simulations because of the errors incurred in solving large nonlinear systems. In practical applications, efficiency is an important indicator of the quality of a numerical method; one usually needs to balance simulation accuracy against computational cost, instead of merely pursuing higher accuracy. Thus, in this section, we present improved versions of ESRK methods such that the conserved quantities are preserved as accurately as possible.
We apply the ESRK method (2) with mean-square convergence order $p$ to solve (1) with conserved quantity $I(x)$, i.e., $J(\tau) = a(\tau)$ holds for all colored, rooted trees with $\operatorname{ord}(\tau) \leq p$. The improved version of the ESRK method is then given by
$$\bar{y}_{n+1} = \begin{cases} y_{n+1} + h^{p+1}\, \nabla I(y_{n+1}) & \text{if } \Delta I(y_{n+1}) < 0, \\ y_{n+1} & \text{if } \Delta I(y_{n+1}) = 0, \\ y_{n+1} - h^{p+1}\, \nabla I(y_{n+1}) & \text{if } \Delta I(y_{n+1}) > 0, \end{cases} \tag{18}$$
where $\Delta I(y_{n+1}) = I(y_{n+1}) - I(x_0)$ and $y_{n+1}$ is given by the ESRK method of the form (2).
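Correction (18) can be sketched in a few lines of Python; this is an illustrative sketch, with the invariant `I`, its gradient `grad_I`, and the initial value `I0 = I(x_0)` supplied by the user.

```python
def improved_step(y_next, I, grad_I, I0, h, p):
    """Apply correction (18) to one ESRK output y_{n+1}.

    Pushes the numerical solution along +/- grad I(y_{n+1}) by a step of
    size h^(p+1), so that I(y_{n+1}) moves back toward the initial value I0.
    """
    dI = I(y_next) - I0
    if dI == 0.0:
        return list(y_next)
    sign = 1.0 if dI < 0.0 else -1.0   # raise I if it drifted down, lower it if up
    step = sign * h ** (p + 1)
    return [y + step * gi for y, gi in zip(y_next, grad_I(y_next))]
```

The correction is $O(h^{p+1})$, which is exactly the size of the one-step error of an order-$p$ method, so it nudges the invariant without degrading the convergence order.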
Theorem 1.
Suppose that the conserved quantity $I(x)$ of (1) is sufficiently smooth and that the gradient satisfies $|\nabla I(x)| \neq 0$. Then,
$$I(\bar{y}_{n+1}) < I(y_{n+1}) \ \text{ if } \Delta I(y_{n+1}) > 0, \qquad I(\bar{y}_{n+1}) > I(y_{n+1}) \ \text{ if } \Delta I(y_{n+1}) < 0,$$
for sufficiently small h. Besides, the improved method (18) has the same mean-square convergence order as the original ESRK method.
Proof of Theorem 1.
If $\Delta I(y_{n+1}) < 0$, then by (18),
$$\bar{y}_{n+1} = y_{n+1} + h^{p+1}\, \nabla I(y_{n+1}).$$
By Taylor expansion of $I(\bar{y}_{n+1})$ at $y_{n+1}$, we have
$$I(\bar{y}_{n+1}) = I(y_{n+1}) + \nabla I(y_{n+1})^\top \nabla I(y_{n+1})\, h^{p+1} + O(h^{p+2}).$$
Since $|\nabla I(x)| \neq 0$ and $h$ is sufficiently small, $I(\bar{y}_{n+1}) > I(y_{n+1})$. Similarly, for $\Delta I(y_{n+1}) > 0$, we obtain $I(\bar{y}_{n+1}) < I(y_{n+1})$.
Moreover, according to the theory of stochastic B-series, if the ESRK method can be written as a stochastic B-series of the form (10), then $\nabla I(y_{n+1})$ can also be written as a stochastic B-series
$$\nabla I(y_{n+1}) = \sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\, \bar{a}(\tau)\, \alpha(\tau)\, F(\tau)(x_0),$$
where $\bar{a}$ is some function of $\tau$. Then, the local error of the improved method (18) is
$$\bar{l}_{n+1} = \bar{y}_{n+1} - x(t_{n+1}) = \begin{cases} y_{n+1} - x(t_{n+1}) + h^{p+1}\, \nabla I(y_{n+1}) & \text{if } \Delta I(y_{n+1}) < 0, \\ y_{n+1} - x(t_{n+1}) & \text{if } \Delta I(y_{n+1}) = 0, \\ y_{n+1} - x(t_{n+1}) - h^{p+1}\, \nabla I(y_{n+1}) & \text{if } \Delta I(y_{n+1}) > 0, \end{cases}$$
i.e.,
$$\bar{l}_{n+1} = \begin{cases} \displaystyle\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\big(J(\tau)-a(\tau)\big)\alpha(\tau)F(\tau)(x_0) + h^{p+1}\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\bar{a}(\tau)\alpha(\tau)F(\tau)(x_0) & \text{if } \Delta I(y_{n+1}) < 0, \\[2ex] \displaystyle\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\big(J(\tau)-a(\tau)\big)\alpha(\tau)F(\tau)(x_0) & \text{if } \Delta I(y_{n+1}) = 0, \\[2ex] \displaystyle\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\big(J(\tau)-a(\tau)\big)\alpha(\tau)F(\tau)(x_0) - h^{p+1}\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\bar{a}(\tau)\alpha(\tau)F(\tau)(x_0) & \text{if } \Delta I(y_{n+1}) > 0. \end{cases}$$
Under the assumption that $J(\tau) = a(\tau)$ holds for all colored, rooted trees with $\operatorname{ord}(\tau) \leq p$, we conclude that
$$\Bigg(\mathbb{E}\bigg|\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\big(J(\tau)-a(\tau)\big)\alpha(\tau)F(\tau)(x_0)\bigg|^2\Bigg)^{1/2} = O(h^{p+\frac{1}{2}}),$$
and
$$\bigg|\mathbb{E}\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\big(J(\tau)-a(\tau)\big)\alpha(\tau)F(\tau)(x_0)\bigg| = O(h^{p+1}).$$
Then, by triangle inequality and mean-value inequality, we have
$$|\mathbb{E}(\bar{l}_{n+1})| \leq \bigg|\mathbb{E}\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\big(J(\tau)-a(\tau)\big)\alpha(\tau)F(\tau)(x_0)\bigg| + \bigg|\mathbb{E}\, h^{p+1}\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\bar{a}(\tau)\alpha(\tau)F(\tau)(x_0)\bigg| = O(h^{p+1}),$$
and
$$\big(\mathbb{E}|\bar{l}_{n+1}|^2\big)^{1/2} \leq \Bigg(\mathbb{E}\bigg|\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\big(J(\tau)-a(\tau)\big)\alpha(\tau)F(\tau)(x_0)\bigg|^2\Bigg)^{1/2} + \Bigg(\mathbb{E}\bigg|h^{p+1}\sum_{\tau \in T} \frac{\gamma(\tau)}{\rho(\tau)!}\bar{a}(\tau)\alpha(\tau)F(\tau)(x_0)\bigg|^2\Bigg)^{1/2} = O(h^{p+\frac{1}{2}}).$$
According to Lemma 1, the global error $\bar{\epsilon}_N = x(T) - \bar{y}_N$ of the improved method (18) satisfies
$$\big(\mathbb{E}|\bar{\epsilon}_N|^2\big)^{1/2} = O(h^{p}),$$
which implies the improved method has the same mean-square convergence order as the original ESRK method. □
Remark 1.
Theorem 1 shows that the improved method (18) gives a more accurate simulation than the original ESRK method. Moreover, the improved method is explicit and is therefore superior in computational cost. In fact, the improved formula (18) can be applied to any classical explicit numerical method (for example, Stratonovich–Taylor methods) so that its numerical solutions approximately preserve the conserved quantity.
To further improve the accuracy of the simulation, we introduce an improved ESRK method based on an optimization technique. The improved formula is given by
$$\tilde{y}_{n+1} = y_{n+1} + \lambda_{n+1} h^{p+1}\, \nabla I(y_{n+1}), \qquad \min_{\lambda_{n+1} \in \mathbb{R}} \big| I\big(y_{n+1} + \lambda_{n+1} h^{p+1}\, \nabla I(y_{n+1})\big) - I(x_0) \big|, \tag{30}$$
where the parameter $\lambda_{n+1} \in \mathbb{R}$ is determined by golden section search and parabolic interpolation [29] so that the error $|I(\tilde{y}_{n+1}) - I(x_0)|$ is minimized.
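Method (30) can be sketched with a plain golden-section search standing in for the Brent-type routine of [29]. This is a sketch under stated assumptions: the search bracket `(-10, 10)` for λ is illustrative and must contain a single local minimizer of the error for the search to converge to it.

```python
def golden_section_min(f, a, b, tol=1e-10):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):          # minimizer lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                    # minimizer lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def optimized_step(y_next, I, grad_I, I0, h, p, lam_range=(-10.0, 10.0)):
    """Apply correction (30): pick lambda minimizing |I(y + lam h^(p+1) grad I) - I0|."""
    gI = grad_I(y_next)
    hp = h ** (p + 1)
    def err(lam):
        y = [y_ + lam * hp * g_ for y_, g_ in zip(y_next, gI)]
        return abs(I(y) - I0)
    lam = golden_section_min(err, *lam_range)
    return [y_ + lam * hp * g_ for y_, g_ in zip(y_next, gI)]
```

In production one would use a combined golden-section/parabolic-interpolation minimizer as in [29]; the sketch above only shows the structure of the one-dimensional search.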
According to the optimization technique, the improved method (30) can preserve conserved quantity of the original SDEs more accurately than method (18). Moreover, we have the following results.
Theorem 2.
Under the same assumptions as in Theorem 1, the improved method (30) has the same mean-square convergence order as the original ESRK method.
Proof of Theorem 2.
The proof can be completed through a similar analysis as in Theorem 1. □
For a given sufficiently small positive constant α , we call a numerical method α -preserving if
$$|I(y_n) - I(x_0)| < \alpha$$
holds for all n. Based on this concept, the improved method (30) can be implemented following the procedure in Algorithm 1.
Algorithm 1 Improved ESRK method
Step 1. Set the initial value x_0, let n = 0, and choose a constant α > 0.
Step 2.
if  n = N   then
  end the procedure;
else
  compute y n + 1 by the ESRK method;
end if
Step 3.
if  | I ( y_{n+1} ) − I ( x_0 ) | < α   then
  let n = n + 1 and go to Step 2;
else
  go to Step 4;
end if
Step 4. Compute λ_{n+1} such that | I ( ỹ_{n+1} ) − I ( x_0 ) | is minimized. Compute
$$\tilde{y}_{n+1} = y_{n+1} + \lambda_{n+1} h^{p+1}\, \nabla I(y_{n+1}),$$
and let y n + 1 = y ˜ n + 1 , then go to Step 3.
Remark 2.
In particular, consider a quadratic conserved quantity of the form
$$I(x) = x^\top C x,$$
where $C$ is a symmetric matrix and $x^\top C g_j(x) = 0$, $j = 0, \ldots, r$, for all $x$. Method (30) can preserve the quadratic conserved quantity exactly at some points of the numerical solution, because $I(\tilde{y}_{n+1}) - I(x_0) = 0$ is a quadratic equation in $\lambda_{n+1}$, so $\min_{\lambda_{n+1}} |I(\tilde{y}_{n+1}) - I(x_0)|$ can be made equal to 0 via the root formula whenever its discriminant satisfies $\Delta \geq 0$.
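For a quadratic invariant the optimal λ is available in closed form: substituting $\tilde{y} = y + \lambda h^{p+1}\nabla I(y)$ with $\nabla I(x) = 2Cx$ into $I(\tilde{y}) - I(x_0) = 0$ gives a quadratic in λ. A sketch (the function name is illustrative) that picks the root of smaller magnitude:

```python
def exact_lambda_quadratic(C, y, I0, h, p):
    """For quadratic I(x) = x^T C x, solve I(y + lam * h^(p+1) * grad I(y)) = I0
    for lam via the quadratic root formula; returns None when the
    discriminant is negative (no real root)."""
    d = len(y)
    bil = lambda u, v: sum(u[m] * C[m][k] * v[k] for m in range(d) for k in range(d))
    gI = [2.0 * sum(C[m][k] * y[k] for k in range(d)) for m in range(d)]  # grad I = 2 C y
    hp = h ** (p + 1)
    a = hp * hp * bil(gI, gI)          # lam^2 coefficient
    b = 2.0 * hp * bil(y, gI)          # lam coefficient (C symmetric)
    c = bil(y, y) - I0                 # constant term
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None
    r1 = (-b + disc ** 0.5) / (2.0 * a)
    r2 = (-b - disc ** 0.5) / (2.0 * a)
    return r1 if abs(r1) <= abs(r2) else r2   # smaller correction
```

Choosing the root of smaller magnitude keeps the correction $O(h^{p+1})$-small, consistent with the perturbation argument of Theorem 1.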
Moreover, notice that many models in practical applications are built on more than one conservation law, so the corresponding SDEs have more than one conserved quantity. In this case, preserving only some of the conserved quantities may not lead to an accurate approximation of the exact solution. Therefore, we hope that all the conserved quantities can be approximately or exactly preserved after discretization. As mentioned above, ESRK methods generally cannot preserve conserved quantities, and even classical conserved quantity-preserving methods, such as the stochastic discrete gradient method and the stochastic projection method, cannot preserve all the conserved quantities in this situation. In this paper, in order to construct ESRK methods that preserve multiple conserved quantities, the requirement that all the conserved quantities $I_j$, $j = 1, \ldots, m$, satisfy $I_j(y_n) = I_j(x_0)$ is relaxed.
Let
$$\tilde{I}(x) = \sum_{j=1}^{m} \big(I_j(x) - I_j(x_0)\big)^2. \tag{34}$$
It is easy to verify that if $I_1(x), \ldots, I_m(x)$ are conserved quantities of the original SDE (1), then (34) is also conserved along the exact solution and satisfies $\tilde{I}(x) = 0$. Conversely, $I_1(x) = I_1(x_0), \ldots, I_m(x) = I_m(x_0)$ is a particular solution of the equation $\tilde{I}(x) = 0$. Thus, the requirement $I_j(y_n) = I_j(x_0)$ for $j = 1, \ldots, m$ is relaxed to $\tilde{I}(y_n) = 0$, which reduces the problem to an SDE with a single conserved quantity.
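Construction (34) is straightforward to code; a minimal sketch:

```python
def combined_invariant(invariants, x0):
    """Build I_tilde(x) = sum_j (I_j(x) - I_j(x0))^2 of Equation (34)
    from a list of conserved quantities and the initial state x0."""
    I0s = [I(x0) for I in invariants]
    def I_tilde(x):
        return sum((I(x) - I0) ** 2 for I, I0 in zip(invariants, I0s))
    return I_tilde
```

Since `I_tilde` vanishes exactly when every $I_j$ keeps its initial value, the single-invariant corrections (18) and (30) apply to it directly, with target value 0.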
According to the definition of $\alpha$-preserving and the new conserved quantity (34), if the conserved quantity $\tilde{I}(x)$ is $\alpha$-preserved, i.e.,
$$|\tilde{I}(y_n) - 0| < \alpha$$
for some small positive constant $\alpha$, then each term of the sum in (34) is below $\alpha$, so the original conserved quantities $I_j(x)$, $j = 1, \ldots, m$, are $\sqrt{\alpha}$-preserved, which means
$$|I_j(y_n) - I_j(x_0)| < \sqrt{\alpha}$$
for $j = 1, \ldots, m$.
Remark 3.
The improved versions of the ESRK method are explicit. The main difference between the improved method (30) and the standard stochastic projection method is that method (30) is based on an unconstrained optimization problem involving a one-dimensional search, whereas the standard stochastic projection method is based on a constrained optimization problem involving an $m$-dimensional search.

4. Numerical Examples

In this section, we illustrate the effectiveness of the proposed methods by applying them to some classical physics problems. We choose the Platen method [11] as the underlying ESRK method, given by
$$Z^{(0)} = h\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad Z^{(1)} = J_1\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad z^{(0)} = h\,(1, 0), \qquad z^{(1)} = \frac{J_1}{2}(1, 1), \tag{37}$$
where $J_1 = \int_{t_n}^{t_{n+1}} \mathrm{d}W(s)$. The Platen method is a 2-stage ESRK method with mean-square convergence order 1.
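Written out from tableau (37), one Platen step takes the following form; this is a sketch, with `dW` denoting the Wiener increment $J_1$ over the step.

```python
def platen_step(x, h, dW, g0, g1):
    """One step of the 2-stage Platen method from tableau (37):
    Y1 = x,
    Y2 = x + h*g0(Y1) + dW*g1(Y1),
    y  = x + h*g0(Y1) + (dW/2)*(g1(Y1) + g1(Y2)).
    """
    d = len(x)
    f0, f1 = g0(x), g1(x)
    Y2 = [x[m] + h * f0[m] + dW * f1[m] for m in range(d)]
    f1b = g1(Y2)
    return [x[m] + h * f0[m] + 0.5 * dW * (f1[m] + f1b[m]) for m in range(d)]
```

When the noise vanishes (`dW = 0`), the step reduces to explicit Euler in the drift, as the tableau's drift weights $z^{(0)} = h(1, 0)$ dictate.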

4.1. Stochastic Kubo Oscillator

Consider the stochastic Kubo oscillator [28,30]
$$\mathrm{d}p(t) = -c_1 q(t)\,\mathrm{d}t - c_2 q(t) \circ \mathrm{d}W(t), \qquad \mathrm{d}q(t) = c_1 p(t)\,\mathrm{d}t + c_2 p(t) \circ \mathrm{d}W(t), \tag{38}$$
where the parameters and initial values are chosen as $c_1 = 1$, $c_2 = 0.1$, and $p(0) = 1$, $q(0) = 1$. Equation (38) has the conserved quantity
$$I(p, q) = p^2 + q^2.$$
Moreover, the exact solution of (38) is given by
$$p(t) = p(0)\cos(c_1 t + c_2 W(t)) - q(0)\sin(c_1 t + c_2 W(t)), \qquad q(t) = p(0)\sin(c_1 t + c_2 W(t)) + q(0)\cos(c_1 t + c_2 W(t)).$$
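Because the exact solution above is a rotation of the initial data by the random angle $c_1 t + c_2 W(t)$, the conserved quantity $I = p^2 + q^2$ is preserved pathwise, whatever the realization of $W(t)$. A quick numerical sanity check (the sampled path of $W$ is illustrative):

```python
import math
import random

def kubo_exact(t, W, c1, c2, p0, q0):
    """Exact solution of the stochastic Kubo oscillator (38): a rotation
    of (p0, q0) by the random angle c1*t + c2*W(t)."""
    th = c1 * t + c2 * W
    p = p0 * math.cos(th) - q0 * math.sin(th)
    q = p0 * math.sin(th) + q0 * math.cos(th)
    return p, q

# Sample W(1) as a sum of independent Gaussian increments (h = 0.01),
# then evaluate the exact solution with the paper's parameters.
random.seed(1)
W = sum(random.gauss(0.0, math.sqrt(0.01)) for _ in range(100))
p, q = kubo_exact(1.0, W, 1.0, 0.1, 1.0, 1.0)
# p*p + q*q equals p0^2 + q0^2 = 2 up to rounding, for any realization of W
```

This pathwise conservation is what the absolute-error plots in Figure 4 measure for the numerical methods.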
In this example, we verify the effectiveness of the proposed method from the following aspects.
  • To show that the proposed methods have the same mean-square convergence order as the original method, the mean-square convergence rates of methods (18) and (30) are displayed in Figure 1. Here, the mean-square errors are computed over 1000 sample paths with five different step sizes: $h = 2^{-10}, 2^{-9}, 2^{-8}, 2^{-7}$, and $2^{-6}$.
  • To illustrate the advantages of the proposed methods in computational efficiency, we also apply the stochastic projection method and the stochastic discrete gradient method to solve (38) with $h = 0.01$. Figure 2 displays the mean-square sample errors and the corresponding CPU times of the stochastic projection method, the stochastic discrete gradient method, the Platen method, method (18), and method (30).
  • To show the proposed methods' ability to preserve the conserved quantity and to obtain more accurate numerical solutions than the original method, we display the numerical sample paths and the absolute errors in the conserved quantity $I$ of the Platen method, method (18), and method (30) in Figure 3 and Figure 4, respectively. Here, $h = 0.01$.

4.2. Stochastic Mathematical Pendulum

Consider the stochastic mathematical pendulum problem [21,28]
$$\mathrm{d}p(t) = -c_1 \sin(q(t))\,\mathrm{d}t - c_2 \sin(q(t)) \circ \mathrm{d}W(t), \qquad \mathrm{d}q(t) = c_1 p(t)\,\mathrm{d}t + c_2 p(t) \circ \mathrm{d}W(t). \tag{40}$$
Let the parameters be $c_1 = 1$, $c_2 = 0.1$ and the initial values be $p(0) = 0$, $q(0) = \pi/2$. Equation (40) is a two-dimensional stochastic Hamiltonian system with the non-quadratic Hamiltonian function
$$I(p, q) = \frac{1}{2}p^2 - \cos(q).$$
To show the ability of the proposed methods to preserve the conserved quantity and to correctly simulate the solution of this kind of problem, we display the numerical sample paths of the Platen method, method (18), and method (30) in Cartesian coordinates, via the coordinate transformations $q_1 = \sin(q)$, $q_2 = -\cos(q)$, $p_1 = p\cos(q)$, and $p_2 = p\sin(q)$, in Figure 5, and display the absolute errors in the Hamiltonian function $I$ of these methods in Figure 6. Here, $h = 0.01$.

4.3. Stochastic Rigid Body

Consider the 3-dimensional stochastic rigid body problem [23,28]
$$\mathrm{d}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = c_1 \begin{pmatrix} 0 & x_3/I_3 & -x_2/I_2 \\ -x_3/I_3 & 0 & x_1/I_1 \\ x_2/I_2 & -x_1/I_1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \mathrm{d}t + c_2 \begin{pmatrix} 0 & x_3/\tilde{I}_3 & -x_2/\tilde{I}_2 \\ -x_3/\tilde{I}_3 & 0 & x_1/\tilde{I}_1 \\ x_2/\tilde{I}_2 & -x_1/\tilde{I}_1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \circ \mathrm{d}W(t). \tag{41}$$
We choose the parameters and initial values as $c_1 = 1$, $c_2 = 0.1$, $I_1 = 2$, $I_2 = 1$, $I_3 = 2/3$, $\tilde{I}_1 = 1$, $\tilde{I}_2 = 2$, $\tilde{I}_3 = 3$, and $x_1(0) = \sin(1.1)$, $x_2(0) = 0$, $x_3(0) = \cos(1.1)$. Equation (41) has the conserved quantity
$$I(x_1, x_2, x_3) = x_1^2 + x_2^2 + x_3^2,$$
which implies that the exact solution of (41) evolves on the unit sphere in $\mathbb{R}^3$. Figure 7 displays the numerical sample paths of the Platen method, method (18), and method (30), and Figure 8 shows the absolute errors in $I$ of these methods.

4.4. Stochastic Kepler Problem

Consider the stochastic Kepler problem [19,28]
$$\mathrm{d}\begin{pmatrix} p_1 \\ p_2 \\ q_1 \\ q_2 \end{pmatrix} = c_1 \begin{pmatrix} -q_1/\big(\sqrt{q_1^2+q_2^2}\,\big)^3 \\ -q_2/\big(\sqrt{q_1^2+q_2^2}\,\big)^3 \\ p_1 \\ p_2 \end{pmatrix} \mathrm{d}t + c_2 \begin{pmatrix} -q_1/\big(\sqrt{q_1^2+q_2^2}\,\big)^3 \\ -q_2/\big(\sqrt{q_1^2+q_2^2}\,\big)^3 \\ p_1 \\ p_2 \end{pmatrix} \circ \mathrm{d}W(t), \tag{42}$$
which is a 4-dimensional stochastic Hamiltonian system with nonlinear Hamiltonian function
$$I_1(p_1, p_2, q_1, q_2) = \frac{1}{2}(p_1^2 + p_2^2) - \frac{1}{\sqrt{q_1^2 + q_2^2}}.$$
Set the parameters c 1 = 1 , c 2 = 0.1 and initial values
$$p_1(0) = 0, \qquad p_2(0) = \sqrt{\frac{1+e}{1-e}}, \qquad q_1(0) = 1 - e, \qquad q_2(0) = 0,$$
where e = 0.4 .
Besides,
$$I_2(p_1, p_2, q_1, q_2) = q_1 p_2 - q_2 p_1$$
is also a conserved quantity of (42), representing the angular momentum. Based on (34), we obtain a new conserved quantity
$$\tilde{I}(p_1, p_2, q_1, q_2) = \big(I_1(p_1, p_2, q_1, q_2) - I_1(p_1(0), p_2(0), q_1(0), q_2(0))\big)^2 + \big(I_2(p_1, p_2, q_1, q_2) - I_2(p_1(0), p_2(0), q_1(0), q_2(0))\big)^2. \tag{43}$$
To show the effectiveness of the proposed strategy for preserving multiple conserved quantities, we apply the Platen method, method (18), and method (30) based on (43) to solve (42) with $h = 0.01$. Figure 9 shows the numerical sample paths of the Platen method, method (18), and method (30). Figure 10 and Figure 11 display the absolute errors of these methods in the Hamiltonian function $I_1$ and the angular momentum $I_2$, respectively.

5. Conclusions and Discussion

ESRK methods have been improved to preserve conserved quantities as accurately as possible. It has been proven that the improved methods have the same mean-square convergence order as the original ESRK methods. Moreover, the proposed methods are also valid for SDEs with multiple conserved quantities, via the construction of a new single conserved quantity. Numerical examples illustrate the effectiveness of the proposed methods and their superiority with respect to computational efficiency. It is well known that SDEs with a conserved quantity can be rewritten in an equivalent skew-gradient form [20], and by the composition (operator splitting) technique the skew-gradient form can be divided into several conservative subsystems; after applying the proposed numerical method to each subsystem, one can construct a numerical method for the original SDEs by combining the numerical approximations of all subsystems. Developing more accurate and efficient numerical methods for SDEs with conserved quantities based on the composition technique could be a topic of further research.

Author Contributions

Methodology, Q.M. and Z.W.; formal analysis, Z.W.; numerical experiment, Q.M. and Z.W.; writing—original draft preparation, Z.W.; writing—review and editing, Q.M. and X.D.; supervision, X.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant Nos. 11501150 and 11701124) and the Natural Science Foundation of Shandong Province of China (grant No. ZR2017PA006).

Acknowledgments

We would like to thank the reviewers for their valuable comments and pointing out possible further research topics.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Mao, X. Stochastic Differential Equations and Applications; Elsevier Science: Amsterdam, The Netherlands, 2007.
  2. Cao, W.; Zhang, Z.; Karniadakis, G.E. Numerical methods for stochastic delay differential equations via the Wong–Zakai approximation. SIAM J. Sci. Comput. 2015, 37, A295–A318.
  3. Hu, L.; Gan, S. Numerical analysis of the balanced implicit methods for stochastic pantograph equations with jumps. Appl. Math. Comput. 2013, 223, 281–297.
  4. Li, X.; Cao, W. On mean-square stability of two-step Maruyama methods for nonlinear neutral stochastic delay differential equations. Appl. Math. Comput. 2015, 261, 373–381.
  5. Wang, X.; Wu, J.; Dong, B. Mean-square convergence rates of stochastic theta methods for SDEs under a coupled monotonicity condition. BIT Numer. Math. 2020, 60, 1–32.
  6. Wu, Q.; Hu, L.; Zhang, Z. Convergence and stability of balanced methods for stochastic delay integro-differential equations. Appl. Math. Comput. 2014, 237, 446–460.
  7. Li, Q.; Gan, S. Mean-square exponential stability of stochastic theta methods for nonlinear stochastic delay integro-differential equations. J. Appl. Math. Comput. 2012, 39, 69–87.
  8. Wang, X.; Gan, S. B-convergence of split-step one-leg theta methods for stochastic differential equations. J. Appl. Math. Comput. 2012, 38, 489–503.
  9. Li, M.; Huang, C. Projected Euler–Maruyama method for stochastic delay differential equations under a global monotonicity condition. Appl. Math. Comput. 2020, 366, 124733.
  10. Milstein, G.N. Numerical Integration of Stochastic Differential Equations; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1995.
  11. Kloeden, P.E.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 23.
  12. Burrage, K.; Burrage, P.M. High strong order explicit Runge–Kutta methods for stochastic ordinary differential equations. Appl. Numer. Math. 1996, 22, 81–101.
  13. Zeghdane, R.; Abbaoui, L.; Tocino, A. Higher-order semi-implicit Taylor schemes for Itô stochastic differential equations. J. Comput. Appl. Math. 2011, 236, 1009–1023.
  14. Burrage, K.; Tian, T. Predictor–corrector methods of Runge–Kutta type for stochastic differential equations. SIAM J. Numer. Anal. 2002, 40, 1516–1537.
  15. Milstein, G.N.; Platen, E.; Schurz, H. Balanced implicit methods for stiff stochastic systems. SIAM J. Numer. Anal. 1998, 35, 1010–1019.
  16. Omar, M.; Aboul-Hassan, A.; Rabia, S.I. The composite Milstein methods for the numerical solution of Stratonovich stochastic differential equations. Appl. Math. Comput. 2009, 215, 727–745.
  17. Burrage, K.; Burrage, P.M. Order conditions of stochastic Runge–Kutta methods by B-series. SIAM J. Numer. Anal. 2000, 38, 1626–1646.
  18. Rößler, A. Runge–Kutta methods for the strong approximation of solutions of stochastic differential equations. SIAM J. Numer. Anal. 2010, 48, 922–952.
  19. Li, X.; Zhang, C.; Ma, Q.; Ding, X. Discrete gradient methods and linear projection methods for preserving a conserved quantity of stochastic differential equations. Int. J. Comput. Math. 2018, 95, 2511–2524.
  20. Hong, J.; Zhai, S.; Zhang, J. Discrete gradient approach to stochastic differential equations with a conserved quantity. SIAM J. Numer. Anal. 2011, 49, 2017–2038.
  21. Cohen, D.; Dujardin, G. Energy-preserving integrators for stochastic Poisson systems. Commun. Math. Sci. 2014, 12, 1523–1539.
  22. Malham, S.J.A.; Wiese, A. Stochastic Lie group integrators. SIAM J. Sci. Comput. 2008, 30, 597–617.
  23. Wang, Z.; Ma, Q.; Yao, Z.; Ding, X. The Magnus expansion for stochastic differential equations. J. Nonlinear Sci. 2020, 30, 419–447.
  24. Chen, C.; Cohen, D.; D'Ambrosio, R.; Lang, A. Drift-preserving numerical integrators for stochastic Hamiltonian systems. Adv. Comput. Math. 2020, 46, 1–22.
  25. D'Ambrosio, R.; Scalone, C. On the numerical structure preservation of nonlinear damped stochastic oscillators. Numer. Algorithms 2020.
  26. Misawa, T. A Lie algebraic approach to numerical integration of stochastic differential equations. SIAM J. Sci. Comput. 2001, 23, 866–890.
  27. Misawa, T. Symplectic integrators to stochastic Hamiltonian dynamical systems derived from composition methods. Math. Probl. Eng. 2010, 2010, 384937.
  28. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 31.
  29. Brent, R.P. Algorithms for Minimization without Derivatives; Courier Corporation: North Chelmsford, MA, USA, 2013.
  30. Milstein, G.N.; Repin, Y.M.; Tretyakov, M.V. Numerical methods for stochastic systems preserving symplectic structure. SIAM J. Numer. Anal. 2002, 40, 1583–1604.
Figure 1. Mean-square convergence rate of methods (18) and (30).
Figure 2. Mean-square sample errors and CPU time of the stochastic projection method, the stochastic discrete gradient method, the Platen method, method (18), and method (30).
Figure 3. Numerical sample paths of the Platen method, method (18), and method (30).
Figure 4. Conserved quantity errors of the Platen method, method (18), and method (30).
Figure 5. Numerical sample paths of the Platen method, method (18), and method (30).
Figure 6. Conserved quantity errors of the Platen method, method (18), and method (30).
Figure 7. Numerical sample paths of the Platen method, method (18), and method (30).
Figure 8. Conserved quantity errors of the Platen method, method (18), and method (30).
Figure 9. Numerical sample paths of the Platen method, method (18), and method (30).
Figure 10. Conserved quantity errors in I_1 of the Platen method, method (18), and method (30).
Figure 11. Conserved quantity errors in I_2 of the Platen method, method (18), and method (30).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wang, Z.; Ma, Q.; Ding, X. Simulating Stochastic Differential Equations with Conserved Quantities by Improved Explicit Stochastic Runge–Kutta Methods. Mathematics 2020, 8, 2195. https://doi.org/10.3390/math8122195
