Article

A New Explicit Magnus Expansion for Nonlinear Stochastic Differential Equations

1 Department of Mathematics, Tianjin Chengjian University, Tianjin 300384, China
2 Department of Mathematics, Tongji University, Shanghai 200092, China
3 Business School, University of Shanghai for Science and Technology, Shanghai 200092, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(2), 183; https://doi.org/10.3390/math8020183
Submission received: 2 January 2020 / Revised: 21 January 2020 / Accepted: 22 January 2020 / Published: 3 February 2020
(This article belongs to the Section Computational and Applied Mathematics)

Abstract: In this paper, based on an iterative technique, a new explicit Magnus expansion is proposed for the nonlinear stochastic differential equation $dy = A(t,y)\,y\,dt + B(t,y)\,y\circ dW$. One of the most important features of the explicit Magnus method is that it preserves the positivity of the solution of the above stochastic differential equation. We study the explicit Magnus method when the drift term satisfies only a one-sided Lipschitz condition, and discuss the truncated numerical algorithms. Numerical simulation results are also given to support the theoretical predictions.

1. Introduction

Stochastic differential equations (SDEs) have been widely applied to describe and understand random phenomena in different areas, such as biology, chemical reaction engineering, physics and finance. Stochastic differential equations can also be applied in 5G wireless networks to address the problem of joint transmission power [1,2], and novel frameworks were presented in [3,4]. However, in most cases, explicit solutions of stochastic differential equations are not easy to obtain. Therefore, it is necessary and important to develop numerical algorithms for stochastic differential equations. So far, there is a large literature on the numerical approximation of stochastic differential equations [5,6]. One kind of approximation, which aims to preserve the essential properties of the corresponding systems, has been receiving more and more attention; examples include a conservative method for stochastic Hamiltonian systems with additive and multiplicative noise [7] and the stochastic Lie group integrators [8]. In this paper, we focus on stochastic Magnus expansions [9].
As is well known, the deterministic Magnus expansion was first investigated by Magnus [10] in the 1950s. This topic was further pursued in [11,12]. With the development of stochastic differential equations, the stochastic Magnus expansion has attracted more and more attention for approximating solutions of linear and nonlinear stochastic differential equations [13,14].
The linear stochastic differential equation reads
$$dY(t) = A(t)Y\,dt + B(t)Y\circ dW(t), \quad Y(0) = Y_0,$$
where $A(t)$ and $B(t)$ are $n\times n$ matrices and $W(t)$ is the standard Wiener process. The solution $Y(t)$ is expressed as $Y(t) = \exp(\Omega(t))\,Y_0$, where $\Omega(t)$ can be represented by an infinite series $\Omega(t) = \sum_{k=1}^{\infty}\Omega_k(t)$ whose terms are linear combinations of multiple stochastic integrals.
For example, the first two terms are as follows,
$$\Omega_1(t) = \int_0^t A(t_1)\,dt_1 + \int_0^t B(t_1)\circ dW(t_1),$$
$$\Omega_2(t) = \frac{1}{2}\int_0^t dt_1\int_0^{t_1} dt_2\,[A(t_1),A(t_2)] + \frac{1}{2}\int_0^t dt_1\int_0^{t_1}\circ\,dW(t_2)\,[A(t_1),B(t_2)] + \frac{1}{2}\int_0^t \circ\,dW(t_1)\int_0^{t_1} dt_2\,[B(t_1),A(t_2)] + \frac{1}{2}\int_0^t \circ\,dW(t_1)\int_0^{t_1}\circ\,dW(t_2)\,[B(t_1),B(t_2)],$$
where $[X,Y] = XY - YX$ is the matrix commutator of $X$ and $Y$.
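For constant matrices the $[A(t_1),A(t_2)]$ and $[B(t_1),B(t_2)]$ commutators vanish and $\Omega_2$ reduces to a single $[A,B]$ term. The following Python sketch (the function names and the left-point quadrature for the iterated integrals are our own illustrative choices, not taken from the paper) approximates the first two terms on a discrete grid:

```python
import numpy as np

def commutator(X, Y):
    """Matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

def magnus_terms_constant(A, B, t, n, rng):
    """Approximate the first two Magnus terms for dY = A Y dt + B Y o dW
    with constant matrices A, B, using a left-point rule on an n-point grid."""
    h = t / n
    dW = rng.normal(0.0, np.sqrt(h), n)          # Brownian increments
    W = np.concatenate(([0.0], np.cumsum(dW)))   # W at the grid points
    # Omega_1 = int_0^t A dt_1 + int_0^t B dW(t_1)
    Omega1 = A * t + B * W[-1]
    # For constant matrices only the cross terms of Omega_2 survive:
    # Omega_2 = (1/2) [A, B] * ( int_0^t W(t_1) dt_1 - int_0^t t_1 dW(t_1) )
    AB = commutator(A, B)
    int_W_dt = np.sum(W[:-1] * h)                # ~ int_0^t W(t_1) dt_1
    int_t_dW = np.sum(np.arange(n) * h * dW)     # ~ int_0^t t_1 dW(t_1)
    Omega2 = 0.5 * AB * (int_W_dt - int_t_dW)
    return Omega1, Omega2
```

When $A$ and $B$ commute, $[A,B]=0$ and the second term vanishes identically, which gives a quick sanity check on the implementation.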
In this paper, we design explicit stochastic Magnus expansions [9] for the nonlinear equation
$$dy = A(t,y)\,y\,dt + B(t,y)\,y\circ dW, \quad y(t_0) = y_0 \in G,$$
$$A, B : \mathbb{R}_+ \times G \to \mathfrak{g},$$
where $G$ denotes a matrix Lie group and $\mathfrak{g}$ is the corresponding Lie algebra of $G$. Equation (4) has many applications in the computation of highly oscillatory systems of stochastic differential equations, Hamiltonian dynamics and financial engineering. It is necessary to construct efficient numerical integrators for the above equation. The general procedure for devising Magnus expansions for the stochastic Equation (4) is obtained by applying Picard iteration to the derived stochastic differential equation in the Lie algebra, which extends the approach of [11] to the stochastic case. G. Lord, J.A. Malham and A. Wiese [14] proposed stochastic Magnus integrators using different techniques.
With the application of the stochastic Magnus methods, we can successfully recapture some of the main features of the stochastic differential Equation (4). When Equation (4) is used to describe an asset pricing model in finance, the positivity of the solution is a very important property to be maintained in the numerical simulation. This task can be accomplished by the nonlinear Magnus methods. We also notice that many works [15,16] have dealt with this issue using balancing and dominating methods. However, the Magnus methods, built on Lie group techniques, are essentially different from the aforementioned methods. If Equation (4) has an almost surely exponentially stable solution, it is desirable to construct numerical methods that preserve this property, since such methods usually have good long-time behavior. We prove that the explicit Magnus methods succeed in reproducing the almost sure exponential stability of the stochastic differential Equation (4) under a one-sided Lipschitz condition. Numerical simulation results are also given to support the theoretical predictions.
This paper is organized as follows. We introduce and discuss the nonlinear Magnus expansion in Section 2. The truncated numerical algorithms are discussed in Section 3, and several numerical applications and experiments strongly support the theoretical analysis. In Section 4, we present applications to highly oscillatory nonlinear stochastic differential equations. Section 5 presents simulations of processes for dynamic asset pricing models. In Section 6, we show that the explicit Magnus simulation can preserve almost sure stability, and numerical experiments are presented to support the analysis.
Notation
In this paper, $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, P)$ is a complete probability space with the filtration $\{\mathcal{F}_t\}_{t\ge 0}$, and $W(t)$ is a scalar Brownian motion defined on this probability space. $\circ\,dW(t)$ denotes the Stratonovich integral and $dW(t)$ denotes the Itô integral. $[X,Y] = XY - YX$ is the matrix commutator of $X$ and $Y$. $A$ and $B$ are $n\times n$ matrices, smooth in the domain, $W(t)$ is the standard Wiener process and $E|y_0|^2 < \infty$.

2. The Nonlinear Stochastic Magnus Expansion

In this section, we consider the stochastic differential Equation (4).
Here, $A$ and $B$ have uniformly bounded partial derivatives, and $A(t,y)y$ and $B(t,y)y$ satisfy global Lipschitz conditions. Therefore, Equation (4) has a unique strong solution [17].
The solution of Equation (4) is presented as follows
$$Y(t) = \exp(\Omega(t))\,Y_0,$$
and
$$d\Omega(t) = \Omega_1(t)\,dt + \Omega_2(t)\circ dW.$$
Combining (5) with (4), we have
$$dY = A(t,Y)Y\,dt + B(t,Y)Y\circ dW(t) = d\exp(\Omega(t))\,Y_0 = \sum_{k=1}^{\infty}\frac{1}{k!}\sum_{j=1}^{k}\Omega(t)^{j-1}\,d\Omega(t)\,\Omega(t)^{k-j}\,Y_0.$$
Equation (7) holds since the integral is in the Stratonovich sense.
Inserting (6) into (7), we can get
$$\sum_{k=1}^{\infty}\frac{1}{k!}\sum_{j=1}^{k}\Omega(t)^{j-1}\,\Omega_1(t)\,\Omega(t)^{k-j} = A(t, \exp(\Omega(t))Y_0)\exp(\Omega(t)),$$
$$\sum_{k=1}^{\infty}\frac{1}{k!}\sum_{j=1}^{k}\Omega(t)^{j-1}\,\Omega_2(t)\,\Omega(t)^{k-j} = B(t, \exp(\Omega(t))Y_0)\exp(\Omega(t)).$$
According to Equation (8), we obtain
$$A(t, \exp(\Omega(t))Y_0) = \sum_{k=1}^{\infty}\frac{1}{k!}\Big(\sum_{j=1}^{k}\Omega(t)^{j-1}\Omega_1(t)\Omega(t)^{k-j}\Big)\exp(-\Omega(t)) = \sum_{k=1}^{\infty}\frac{1}{k!}\Big(\sum_{j=1}^{k}\Omega(t)^{j-1}\Omega_1(t)\Omega(t)^{k-j}\Big)\Big[\sum_{l=0}^{\infty}\frac{(-1)^l}{l!}\Omega(t)^l\Big] = \sum_{l=1}^{\infty}\frac{1}{l!}\sum_{j=1}^{l}\Big[\sum_{k=j}^{l}(-1)^{l-k}\binom{l}{k}\Big]\Omega(t)^{j-1}\,\Omega_1(t)\,\Omega(t)^{l-j}.$$
It is not difficult to check that
$$\sum_{k=j}^{l}(-1)^{l-k}\binom{l}{k} = (-1)^{l-j}\binom{l-1}{j-1},$$
and
$$\mathrm{ad}\big[\Omega(t)^l, \Omega_1(t)\big] = \sum_{j=0}^{l}(-1)^{l-j}\binom{l}{j}\,\Omega(t)^{j}\,\Omega_1(t)\,\Omega(t)^{l-j}, \quad l \in \mathbb{Z}_+,$$
where $\mathrm{ad}[p^0, q] = q$ and $\mathrm{ad}[p^k, q] = [p, \mathrm{ad}[p^{k-1}, q]]$ for $k \in \mathbb{N}$.
Then it is true that
$$\sum_{l=0}^{\infty}\frac{1}{(l+1)!}\,\mathrm{ad}\big[\Omega(t)^l, \Omega_1(t)\big] = A(t, \exp(\Omega(t))Y_0),$$
$$\sum_{l=0}^{\infty}\frac{1}{(l+1)!}\,\mathrm{ad}\big[\Omega(t)^l, \Omega_2(t)\big] = B(t, \exp(\Omega(t))Y_0).$$
Equations (13) and (14) indicate that
$$\Omega_1(t) = \sum_{m=0}^{\infty} f_m\,\mathrm{ad}\big[\Omega(t)^m, A(t, \exp(\Omega(t))Y_0)\big],$$
$$\Omega_2(t) = \sum_{m=0}^{\infty} f_m\,\mathrm{ad}\big[\Omega(t)^m, B(t, \exp(\Omega(t))Y_0)\big],$$
where
$$d(z) = \frac{\exp(z) - 1}{z}, \qquad \sum_{m=0}^{\infty} f_m z^m = \frac{1}{d(z)} = \frac{z}{e^z - 1},$$
so that $f_m = B_m/m!$, with $B_m$ the Bernoulli numbers.
Finally, we get
$$d\Omega(t) = \sum_{m=0}^{\infty} f_m\,\mathrm{ad}\big[\Omega(t)^m, A(t, \exp(\Omega(t))Y_0)\big]\,dt + \sum_{m=0}^{\infty} f_m\,\mathrm{ad}\big[\Omega(t)^m, B(t, \exp(\Omega(t))Y_0)\big]\circ dW(t).$$
Then, applying Picard iteration to (17), we obtain
$$\Omega^{(m+1)}(t) = \int_0^t \sum_{k=0}^{\infty}\frac{B_k}{k!}\,\mathrm{ad}\big[\Omega^{(m)}(s)^k, A(s, \exp(\Omega^{(m)}(s))Y_0)\big]\,ds + \int_0^t \sum_{k=0}^{\infty}\frac{B_k}{k!}\,\mathrm{ad}\big[\Omega^{(m)}(s)^k, B(s, \exp(\Omega^{(m)}(s))Y_0)\big]\circ dW(s), \quad m \ge 0.$$
In order to obtain explicit numerical methods that can be implemented on a computer, we need to truncate the infinite series in (18). The iterate $\Omega^{(m+1)}(t)$ only reproduces the expansion of the solution $\Omega(t)$ up to a certain order in the mean-square sense.
For example, if $m = 0$, we obtain
$$\Omega^{(1)}(t) = \int_0^t A(s, Y_0)\,ds + \int_0^t B(s, Y_0)\circ dW(s) = \Omega(t) + O(t).$$
From
$$A(s, \exp(\Omega^{(1)}(s))Y_0) = A(0, Y_0) + O(s), \qquad B(s, \exp(\Omega^{(1)}(s))Y_0) = B(0, Y_0) + O(s),$$
we can get
$$\frac{1}{2}\int_0^t \big[\Omega^{(1)}(s), A(s, \exp(\Omega^{(1)}(s))Y_0)\big]\,ds = O(t^{3}),$$
$$\frac{1}{2}\int_0^t \big[\Omega^{(1)}(s), B(s, \exp(\Omega^{(1)}(s))Y_0)\big]\circ dW = O(t^{2.5}).$$
Finally, we devise the following general Magnus expansion:
$$\Omega^{(1)}(t) = \int_0^t A(s, Y_0)\,ds + \int_0^t B(s, Y_0)\circ dW(s),$$
$$\Omega^{(m)}(t) = \sum_{k=0}^{m-2}\frac{B_k}{k!}\int_0^t \mathrm{ad}\big[\Omega^{(m-1)}(s)^k, A(s, \exp(\Omega^{(m-1)}(s))Y_0)\big]\,ds + \sum_{k=0}^{m-2}\frac{B_k}{k!}\int_0^t \mathrm{ad}\big[\Omega^{(m-1)}(s)^k, B(s, \exp(\Omega^{(m-1)}(s))Y_0)\big]\circ dW(s).$$
Obviously, Equation (21) consists of a linear combination of multiple stochastic integrals of nested commutators. It is easy to prove that $\Omega^{(m)}(t)$ succeeds in reproducing the partial sums of the $\Omega(t)$ series of the Magnus expansion. This scheme is called an explicit stochastic Magnus expansion for the nonlinear stochastic differential equation.
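The nested commutators $\mathrm{ad}[\Omega^k, \cdot]$ appearing in (18) and (21) can be evaluated with a simple recursion. A minimal sketch (the helper name is our own):

```python
import numpy as np

def ad_power(P, Q, k):
    """Iterated matrix commutator ad[P^k, Q]:
    ad[P^0, Q] = Q and ad[P^k, Q] = [P, ad[P^(k-1), Q]]."""
    R = Q
    for _ in range(k):
        R = P @ R - R @ P
    return R
```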
Remark 1.
We let $\Omega(t)$ be the exact solution of the problem (17) and $\Omega^{(m)}(t)$ be the approximate solution given by (21).
It is true that
$$\Omega(t) - \Omega^{(m)}(t) = O\big(t^{(m+1)/2}\big).$$
Indeed, we can write the exact solution of Equation (17) as
$$\Omega(t) = \sum_{l=1}^{\infty}\ \sum_{J=(i_1, i_2, \ldots, i_l),\ i_k \in \{0,1\}} \int_{\Delta_J[0,t]} dW_J\,\omega_J,$$
where $\omega_{(0)} = A(Y_0)$, $\omega_{(1)} = B(Y_0)$, $dW_J = dW_{i_1}\,dW_{i_2}\cdots dW_{i_l}$, and
$$dW_{i_j} = \begin{cases} dt, & i_j = 0, \\ \circ\,dW, & i_j = 1. \end{cases}$$
Then
$$\Omega(t) - \Omega^{(m)}(t) = \sum_{l=m+1}^{\infty}\ \sum_{J=(i_1, i_2, \ldots, i_l),\ i_k \in \{0,1\}} \int_{\Delta_J[0,t]} dW_J\,\omega_J.$$
The general Magnus expansion is
$$\Omega^{(m)}(t) = \sum_{k=0}^{m-2}\frac{B_k}{k!}\int_0^t \mathrm{ad}\big[\Omega^{(m-1)}(s)^k, A(s, \exp(\Omega^{(m-1)}(s))Y_0)\big]\,ds + \sum_{k=0}^{m-2}\frac{B_k}{k!}\int_0^t \mathrm{ad}\big[\Omega^{(m-1)}(s)^k, B(s, \exp(\Omega^{(m-1)}(s))Y_0)\big]\circ dW(s).$$
Comparing the Taylor expansion of $\Omega^{(m)}(t)$ with the expansion of $\Omega(t)$ [12], the conclusion is proven.

3. Numerical Schemes

In this section, we present a new way of constructing efficient numerical methods based on the nonlinear stochastic Magnus expansion. It should be mentioned that highly efficient schemes always involve multiple stochastic integrals. In most cases, however, only the half-order approximation $\Omega^{(1)}(t)$ in (21) can be evaluated exactly. In order to obtain higher-order integrators, more complicated multiple stochastic integrals, which are hard to approximate, must be included.
We will investigate schemes of mean-square order $1/2$ and $1$ concretely, and choose quadrature rules with equispaced points along the interval $[t_n, t_n + h]$.

3.1. Methods of Order 1/2

When m = 1 , the expansion (21) turns into
$$\Omega^{(1)}(t) = \int_{t_n}^{t} A(s, Y_n)\,ds + \int_{t_n}^{t} B(s, Y_n)\circ dW(s).$$
Using the Taylor formula, we can get the expansion of the solution
$$y(t_n + h) = y(t_n) + A(t_n, y(t_n))y(t_n)h + B(t_n, y(t_n))y(t_n)\Delta W_n + R_n^1,$$
and the approximate solution can be expanded in the following form
$$\bar{y}(t_n + h) = \exp\big(\Omega^{(1)}(t_n + h)\big)\,y(t_n) = y(t_n) + A(t_n, y(t_n))y(t_n)h + B(t_n, y(t_n))y(t_n)\Delta W_n + R_n^2,$$
where
$$R_n^2 = \int_{t_n}^{t_n+h}\int_{t_n}^{s} A_t\,ds_1\,ds + \int_{t_n}^{t_n+h}\int_{t_n}^{s} B_t\,ds_1\circ dW(s) + \frac{B(t_n, y(t_n))^2\,\Delta W_n^2}{2} + O(h^{3/2}).$$
It is easy to check that
$$E|y(t_n+h) - \bar{y}(t_n+h)| = O(h^{3/2}), \qquad \big(E|y(t_n+h) - \bar{y}(t_n+h)|^2\big)^{1/2} = O(h).$$
According to the Milstein mean-square convergence theorem [18], the strong convergence order is 1 / 2 .
Remark 2.
If Ω 1 ( t ) cannot be evaluated exactly, we need to approximate it with a quadrature rule of order 1 / 2 .
For example, using the Euler method (see [19]), we get
$$\Omega^{(1)}(t) \approx A(t_n, Y_n)\Delta t + B(t_n, Y_n)\Delta W_n.$$
Note that other numerical methods can also be used to approximate (24); the strong convergence order remains $1/2$.
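A scalar sketch of the resulting one-step map $y_{n+1} = \exp(\Omega^{(1)})\,y_n$ with the Euler quadrature of Remark 2 (the function name and test coefficients are our own illustrative choices). Since the update multiplies by an exponential, positivity of the solution is preserved automatically for $y_0 > 0$:

```python
import numpy as np

def magnus_half_order(a, b, y0, T, n, rng):
    """Scalar sketch of the order-1/2 step y_{n+1} = exp(Omega^(1)) y_n with
    the Euler quadrature Omega^(1) ~ a(t_n, y_n) h + b(t_n, y_n) dW_n."""
    h = T / n
    y = np.empty(n + 1)
    y[0] = y0
    t = 0.0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        omega = a(t, y[k]) * h + b(t, y[k]) * dW
        y[k + 1] = np.exp(omega) * y[k]   # exp(.) > 0, so positivity is kept
        t += h
    return y
```

With $b \equiv 0$ and $a \equiv 1$ the step is exact for $\dot y = y$, which provides a deterministic sanity check.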

3.2. Methods of Order 1

When m = 2 , the expansion (21) becomes
$$\Omega^{(1)}(t) = \int_{t_n}^{t} A(s, Y_n)\,ds + \int_{t_n}^{t} B(s, Y_n)\circ dW(s),$$
$$\Omega^{(2)}(t) = \int_{t_n}^{t} A(s, \exp(\Omega^{(1)}(s))Y_n)\,ds + \int_{t_n}^{t} B(s, \exp(\Omega^{(1)}(s))Y_n)\circ dW(s).$$
The proof of the convergence order is the same as for the method of order $1/2$.
Remark 3.
Note that if (28) can be computed exactly, all that is required is to replace (29) with a quadrature of order 1. Using the stochastic Taylor expansion, we approximate $\Omega^{(2)}(t)$ in the following way:
$$\Omega^{(2)}(t) \approx A(t_n, Y_n)\Delta t + B(t_n, Y_n)\Delta W_n + \frac{1}{2}B_y(t_n, Y_n)B(t_n, Y_n)Y_n\,\Delta W_n^2.$$
As a matter of fact, there is no need to compute (28) or (29) exactly. If we approximate (28) or (29) with methods of order 1, e.g.,
$$\Omega^{(1)}(t) \approx A(t_n, Y_n)\Delta t + B(t_n, Y_n)\Delta W_n,$$
the convergence order will also be 1.
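A scalar sketch of one order-1 Magnus step with the quadrature of Remark 3 (names are our own; `b_y` denotes $\partial b/\partial y$):

```python
import numpy as np

def magnus_order_one_step(a, b, b_y, t, y, h, dW):
    """Scalar sketch of one order-1 Magnus step with the Remark 3 quadrature:
    Omega^(2) ~ a h + b dW + (1/2) b_y b y dW^2, then y_next = exp(Omega) y.
    Here b_y is the partial derivative of b with respect to y."""
    omega = a(t, y) * h + b(t, y) * dW + 0.5 * b_y(t, y) * b(t, y) * y * dW**2
    return np.exp(omega) * y
```

When $b$ does not depend on $y$ the correction term vanishes and the step reduces to the half-order map.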

4. Numerical Experiment for the Highly Oscillatory Nonlinear Stochastic Differential Equations

We consider the following stochastic system
$$dy = A(t,y)\,y\,dt + B(t,y)\,y\circ dW, \quad y(0) = y_0,$$
where $A$ and $B$ are matrices and $W$ is the Wiener process. It is supposed that the solution of (31) oscillates rapidly.
If $A$ and $B$ commute, then in order to compute the value of $y(t)$, a new variable $Z(x)$ is considered,
$$y(t_n + x) = \exp\big(xA(t_n, y_n) + W(x)B(t_n, y_n)\big)\,Z(x), \qquad y_n \approx y(t_n), \quad t_{n+1} = t_n + h,$$
and
$$dZ(x) = \overline{A(x, Z(x))}\,Z(x)\,dx + \overline{B(x, Z(x))}\,Z(x)\circ dW(x), \quad Z(0) = y_n,$$
$$\overline{A(x, Z(x))} = F^{-1}(x)\big[A(t_n + x, F(x)Z(x)) - A(t_n, y_n)\big]F(x),$$
$$\overline{B(x, Z(x))} = F^{-1}(x)\big[B(t_n + x, F(x)Z(x)) - B(t_n, y_n)\big]F(x),$$
where $F(x) = \exp\big[xA(t_n, y_n) + W(x)B(t_n, y_n)\big]$ and $\overline{B}(0, Z(0)) = 0$.
We can consider $Z(x)$ as a correction of the solution provided by $\Omega^{[1]}$. For this reason, if Equation (34) is solved by the nonlinear Magnus expansion method, the corresponding errors will be much smaller than those of the previous algorithms [13,14].
When $A$ and $B$ do not commute with each other, we have [20]
$$y(t_n + x) = \exp(xA(t_n, y_n))\exp(W(x)B(t_n, y_n))\,Z_1(x),$$
and
$$dZ_1(x) = \overline{A_1(x, Z_1(x))}\,Z_1(x)\,dx + \overline{B_1(x, Z_1(x))}\,Z_1(x)\circ dW(t_n + x) - \overline{B_2(x, Z_1(x))}\,Z_1(x)\circ dW(x), \quad Z_1(0) = y_n,$$
with
$$\overline{A_1(x, Z_1(x))} = F_1^{-1}(x)\big[A(t_n + x, F_1(x)Z_1(x)) - A(t_n, y_n)\big]F_1(x),$$
$$\overline{B_1(x, Z_1(x))} = F_1^{-1}(x)\big[B(t_n + x, F_1(x)Z_1(x))\big]F_1(x),$$
$$\overline{B_2(x, Z_1(x))} = F_1^{-1}(x)\big[B(t_n, F_1(x)Z_1(x))\big]F_1(x),$$
where $F_1(x) = \exp(xA(t_n, y_n))\exp(W(x)B(t_n, y_n))$.
To illustrate the main feature of the nonlinear modified Magnus expansion, we consider the system
$$\ddot{y} + a(t, y, \dot{y})\,y + b(t, y, \dot{y})\,y\,\dot{W} = 0,$$
where $W$ is the standard Wiener process.
Setting $x = \dot{y}$, Equation (42) becomes
$$dZ = A(t, Z)Z\,dt + B(t, Z)Z\circ dW, \qquad Z = (y, \dot{y})^T,$$
$$A(t, Z) = \begin{pmatrix} 0 & 1 \\ -a(t, Z) & 0 \end{pmatrix}, \qquad B(t, Z) = \begin{pmatrix} 0 & 0 \\ -b(t, Z) & 0 \end{pmatrix}.$$
It is easy to check that $A$ and $B$ do not commute. By the use of the scheme (37), we obtain the following scheme:
$$\exp(xA(t_n, Z_n)) = \begin{pmatrix} \cos(x\theta_n) & \theta_n^{-1}\sin(x\theta_n) \\ -\theta_n\sin(x\theta_n) & \cos(x\theta_n) \end{pmatrix},$$
$$\exp(W(x)B(t_n, Z_n)) = \begin{pmatrix} 1 & 0 \\ -W(x)\,b(t_n, Z_n) & 1 \end{pmatrix},$$
$$F_1(x) = \begin{pmatrix} \cos(x\theta_n) - W(x)\theta_n^{-1}b(t_n, Z_n)\sin(x\theta_n) & \theta_n^{-1}\sin(x\theta_n) \\ -\theta_n\sin(x\theta_n) - W(x)b(t_n, Z_n)\cos(x\theta_n) & \cos(x\theta_n) \end{pmatrix},$$
$$F_1^{-1}(x) = \begin{pmatrix} \cos(x\theta_n) & -\theta_n^{-1}\sin(x\theta_n) \\ \theta_n\sin(x\theta_n) + W(x)b(t_n, Z_n)\cos(x\theta_n) & \cos(x\theta_n) - W(x)\theta_n^{-1}b(t_n, Z_n)\sin(x\theta_n) \end{pmatrix},$$
and
$$dZ_1(x) = \overline{A_1(x, Z_1(x))}\,Z_1(x)\,dx + \overline{B_1(x, Z_1(x))}\,Z_1(x)\circ dW(t_n + x) - \overline{B_2(x, Z_1(x))}\,Z_1(x)\circ dW(x), \quad Z_1(0) = y_n,$$
where $\bar{A}_1$, $\bar{B}_1$, $\bar{B}_2$ are computed by (39)–(41) and $\theta_n = \sqrt{a(t_n, Z_n)}$.
Applying the scheme (27) to Equation (44), we get the algorithm
$$Z_1(x) = \exp\big(B(t_n, y_n)(W(t_n + x) - W(t_n)) + B(t_n, y_n)\Delta W(x)\big)\,y_n.$$
Then, inserting (45) into (37), we can get the one-step approximate algorithm.
In Figure 1, we can see that the two points $(\pm 1, 0)$ initially attract the noisy trajectories, and after some time the trajectories switch to the other point, which coincides with the tunnelling phenomenon described in [21].
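The closed-form matrices above can be checked numerically: $F_1(x)$ has unit determinant, so its inverse is simply the adjugate. A small sketch (assuming $a(t_n, Z_n) > 0$ so that $\theta_n$ is real; the function names are our own):

```python
import numpy as np

def F1(x, W, a_val, b_val):
    """F_1(x) = exp(x A) exp(W B) with A = [[0,1],[-a,0]], B = [[0,0],[-b,0]],
    theta = sqrt(a), using the closed forms from the text (a > 0 assumed)."""
    th = np.sqrt(a_val)
    c, s = np.cos(x * th), np.sin(x * th)
    expA = np.array([[c, s / th], [-th * s, c]])
    expB = np.array([[1.0, 0.0], [-W * b_val, 1.0]])
    return expA @ expB

def F1_inv(x, W, a_val, b_val):
    """Inverse of F_1; since det F_1 = 1, it is just the adjugate matrix."""
    th = np.sqrt(a_val)
    c, s = np.cos(x * th), np.sin(x * th)
    return np.array([[c, -s / th],
                     [th * s + W * b_val * c, c - W * b_val * s / th]])
```

Multiplying the two outputs should recover the identity matrix for any $x$, $W(x)$, $a > 0$ and $b$.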

5. Application to the Stochastic Differential Equations with Boundary Conditions

In this section, we deal with the Itô-type stochastic differential equation
$$dx(t) = f(x(t))\,dt + \sigma(x(t))\,dw(t), \quad x(0) = x_0 \in D.$$
We assume Equation (46) is well defined with a certain boundary condition, that is, $x(t)$ remains in the domain $D$.
There are many works on the efficient approximation of (46) [15,16,22]. Most of them use balancing or dominating techniques to force the numerical solution to stay in the domain. In this paper, the proposed explicit nonlinear Magnus methods, based on the stochastic Lie group approach, are efficient at approximating the solutions of the stochastic differential Equation (46) with unattainable boundary conditions. Our methods are essentially different from the methods proposed in [23].
In the following, we will approximate two types of finance models using the nonlinear stochastic Magnus methods.
Firstly, we study the financial model defined as
$$dX(t) = a(b - X(t))\,dt + \sigma X^r\,dw(t).$$
This model has been widely used in financial engineering. For example, when $2ab > \sigma^2$ and $r = \frac{1}{2}$, the solution is strictly positive. As is well known, the traditional Euler method fails to preserve the positivity of the solution. We also notice that several methods, such as the balanced implicit method (BIM) [22], have been proposed, while the explicit or implicit Milstein method needs stepsize control [24]. In the following, we show that the nonlinear Magnus methods preserve positivity independently of the stepsize.
For Equation (47), the equivalent Stratonovich form is
$$dX(t) = \Big[a(b - X(t)) - \frac{r\sigma^2}{2}X^{2r-1}\Big]dt + \sigma X^r\circ dw(t).$$
To simulate the above equation, we split the system (48) into two subsystems
$$dX_1(t) = -aX_1(t)\,dt + \sigma X_1(t)^r\circ dw(t),$$
and
$$dX_2(t) = \Big[ab - \frac{r\sigma^2}{2}X_2^{2r-1}\Big]dt.$$
The system (49) is simulated with the nonlinear stochastic Magnus method and (50) is approximated with the Euler method.
Combining the nonlinear Magnus method (27) and the Euler method, we obtain the one-step method on the interval $[t_n, t_{n+1}]$
$$\Omega_{n+1}^1 = -a\Delta t_n + \sigma (X_1^n)^{r-1}\Delta w(h), \qquad X_1^{n+1} = \exp(\Omega_{n+1}^1)X_1^n, \qquad X_2^{n+1} = X_1^{n+1} + \Big(ab - \frac{r\sigma^2}{2}(X_1^{n+1})^{2r-1}\Big)\Delta t_n.$$
In the numerical experiments, we use the scheme (51) to simulate the financial models. When $r = 0.5$, this is the famous interest rate model of Cox, Ingersoll and Ross (for more detail, see [16]).
Table 1 illustrates that both the Euler and the Milstein methods produce a certain percentage of negative paths. When the stepsize decreases, the number of negative paths also decreases. However, the nonlinear Magnus method (NLM) is as good as the balanced Milstein method (BMM) at preserving the positivity of the solution, independently of the time interval $[0, T]$ and the stepsize $dT$.
In Figure 2 and Figure 3, we present the numerical simulations for two finance equations. We can see that both the Euler and the Milstein methods fail to preserve the positivity. However, the NLM is as good as the BMM and the BIM at preserving the positivity of the solution.
In Figure 4, we see that the explicit NLM is of order $1/2$ in the mean-square sense, which verifies the analysis in Section 3. In Figure 5, we can see clearly that the simulation is of order 1 in the mean-square sense for the linear model, which is the same order as for the BMM.
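A sketch of scheme (51) for the model (47) (the function name and test parameters are our own). Positivity of the Magnus substep is automatic because the exponential is strictly positive, and for $r = 1/2$ the Euler substep also keeps positivity whenever $ab > \sigma^2/4$:

```python
import numpy as np

def nlm_cir(a, b, sigma, r, x0, T, n, rng):
    """Sketch of scheme (51) for dX = a(b - X) dt + sigma X^r dw:
    a Magnus (exponential) substep for the linear-noise part, then an
    Euler substep for the remaining drift."""
    h = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        omega = -a * h + sigma * x[k] ** (r - 1.0) * dW
        x1 = np.exp(omega) * x[k]        # strictly positive by construction
        x[k + 1] = x1 + (a * b - 0.5 * r * sigma**2 * x1 ** (2.0 * r - 1.0)) * h
    return x
```

With the parameters of the experiments ($a = b = 1$, $\sigma = 1.4$, $r = 0.5$), every path stays strictly positive regardless of the stepsize.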

6. Application to the Nonlinear Itô Scalar Stochastic Differential Equations

In this part, we focus on the nonlinear Itô scalar stochastic differential equation
$$dy = a(y)\,y\,dt + \sigma y\,dW(t),$$
where $\sigma$ is independent of $y$. Equation (52) satisfies the following conditions:
$$|a(y)| \le K_i, \quad \forall y \in \mathbb{R} \ \text{with}\ |y| \le i,$$
and
$$\lambda = \sup_{y \in \mathbb{R},\, y \ne 0}\Big(a(y) - \frac{\sigma^2}{2}\Big) < 0.$$
For convenience, we transform it into its Stratonovich form
$$dy = \big(a(y) - \sigma^2/2\big)\,y\,dt + \sigma y\circ dW(t).$$
Theorem 1.
Suppose that (52) satisfies (53) and (54); then the solution of (52) obeys
$$\limsup_{t \to \infty}\frac{1}{t}\log|y(t)| \le \lambda \quad a.s.$$
Proof. 
According to the Itô formula, we have
$$\log(y(t)^2) = \log(y(0)^2) + \int_0^t 2\sigma\,dW(s) + \int_0^t 2\Big(a(y(s)) - \frac{\sigma^2}{2}\Big)ds.$$
It is easy to prove that
$$\lim_{t \to \infty}\frac{1}{t}\int_0^t 2\sigma\,dW(s) = 0, \quad a.s.$$
By the use of the condition (54), we obtain
$$\frac{\log(y(t)^2)}{2t} \le \frac{\log(y(0)^2)}{2t} + \frac{1}{2t}\int_0^t 2\sigma\,dW(s) + \lambda.$$
Letting $t \to \infty$, (56) is proven. □
This means that the solution of the stochastic differential equation is almost surely exponentially stable.
In the following, we approximate the model using the nonlinear stochastic Magnus methods.
Applying (27) to Equation (55), we get the approximation $Y_k \approx Y(k\Delta t)$ with $Y_0 = y(0)$:
$$Y_{k+1} = \exp\big((a(Y_k) - \sigma^2/2)\Delta t + \sigma\Delta W_k\big)Y_k, \qquad \Delta W_k = W((k+1)\Delta t) - W(k\Delta t).$$
Theorem 2.
Suppose that (52) satisfies (53) and (54). Then, for any $\Delta t$, the scheme (60) satisfies
$$\limsup_{k \to \infty}\frac{1}{k\Delta t}\log|Y_k| \le \lambda, \quad a.s.$$
Proof. 
By the use of the scheme (60), we get
$$Y_{n+1} = \exp\Big(\sum_{i=0}^{n}\big(a(Y_i) - \sigma^2/2\big)\Delta t + \sigma W((n+1)\Delta t)\Big)Y_0,$$
$$\log|Y_{n+1}| = \log|Y_0| + \sum_{i=0}^{n}\big(a(Y_i) - \sigma^2/2\big)\Delta t + \sigma W((n+1)\Delta t) \le \log|Y_0| + (n+1)\lambda\Delta t + \sigma W((n+1)\Delta t).$$
Based on (63), we can get
$$\limsup_{n \to \infty}\frac{1}{(n+1)\Delta t}\log|Y_{n+1}| \le \limsup_{n \to \infty}\Big[\frac{\log|Y_0|}{(n+1)\Delta t} + \lambda + \frac{\sigma W((n+1)\Delta t)}{(n+1)\Delta t}\Big] = \lambda.$$
Then the theorem is proven. □
We have proven that the proposed Magnus method succeeds in preserving the exponential stability, independently of the time-step size.
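A sketch of scheme (60) that accumulates $\log|Y_k|$ directly, so that very long runs cannot underflow in floating point (the function name and the constant drift $a(y) = -1$ used in the test are our own choices, for which $\lambda = -1 - \sigma^2/2$):

```python
import numpy as np

def nlm_stability_rate(a, sigma, y0, dt, nsteps, rng):
    """Run scheme (60), Y_{k+1} = exp((a(Y_k) - sigma^2/2) dt + sigma dW_k) Y_k,
    accumulating log|Y_k| directly; returns the Lyapunov-type quotient
    log|Y_n| / (n dt), which should settle near or below lambda."""
    logy = np.log(abs(y0))
    y = y0
    for _ in range(nsteps):
        dW = rng.normal(0.0, np.sqrt(dt))
        incr = (a(y) - 0.5 * sigma**2) * dt + sigma * dW
        logy += incr
        y = y * np.exp(incr)   # the scheme itself; logy tracks its logarithm
    return logy / (nsteps * dt)
```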

Cubic Stochastic Differential Equations

For the cubic stochastic differential equation of the form
$$dx(t) = \big(x(t) - x(t)^3\big)\,dt + 2x(t)\,dw(t),$$
it holds that [25]
$$\limsup_{t \to \infty}\frac{1}{t}\log|x(t)| \le -1 \quad a.s.$$
Equation (66) implies that the solution is asymptotically stable. It is well known that the EM method fails to preserve this property (see [25]).
Applying (27) to Equation (65), we get the one-step nonlinear Magnus scheme
$$X_{n+1} = \exp\big((-1 - X_n^2)\Delta t + 2\Delta W_n\big)X_n.$$
Theorem 3.
Let $\Delta t$ be any stepsize; then the scheme (67) satisfies
$$\limsup_{n \to \infty}\frac{\log|X_n|}{n\Delta t} \le -1.$$
Proof. 
According to (67), we get
$$X_{n+1} = \exp\Big(\sum_{i=0}^{n}\big(-1 - X_i^2\big)\Delta t + 2W((n+1)\Delta t)\Big)X_0.$$
Equation (69) implies
$$\frac{1}{(n+1)\Delta t}\log|X_{n+1}| = \frac{\log|X_0|}{(n+1)\Delta t} + \frac{\sum_{i=0}^{n}(-1 - X_i^2)}{n+1} + \frac{2W((n+1)\Delta t)}{(n+1)\Delta t} = \frac{\log|X_0|}{(n+1)\Delta t} - 1 - \frac{\sum_{i=0}^{n}X_i^2}{n+1} + \frac{2W((n+1)\Delta t)}{(n+1)\Delta t} \le \frac{\log|X_0|}{(n+1)\Delta t} - 1 + \frac{2W((n+1)\Delta t)}{(n+1)\Delta t}.$$
Since
$$\lim_{n \to \infty}\frac{2W((n+1)\Delta t)}{(n+1)\Delta t} = 0, \quad a.s.,$$
it follows that
$$\limsup_{n \to \infty}\frac{\log|X_n|}{n\Delta t} \le -1.$$
 □
Equation (68) is thus obtained, and the almost sure exponential stability, independent of the time-step size, is proven.
In Figure 6, a single trajectory is simulated using the Euler method and the proposed NLM, respectively. One can see that the Euler method fails to preserve the asymptotic stability. However, the NLM can exactly preserve this property.
In Figure 7 and Figure 8, we simulate the function $\log|x(t)|/t$ using the NLM. As proven in Theorem 3, the property (68) is exactly preserved, independently of the stepsize.
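A sketch of scheme (67), again accumulating $\log|X_n|$ to sidestep floating-point underflow (the function name is our own); it returns the Lyapunov-type quotient $\log|X_n/X_0|/(n\Delta t)$, which should settle near or below $-1$:

```python
import numpy as np

def nlm_cubic_rate(x0, dt, nsteps, rng):
    """Sketch of scheme (67), X_{n+1} = exp((-1 - X_n^2) dt + 2 dW_n) X_n,
    accumulating log|X_n|; returns log|X_n / X_0| / (n dt)."""
    logx = 0.0
    x = x0
    for _ in range(nsteps):
        dW = rng.normal(0.0, np.sqrt(dt))
        incr = (-1.0 - x * x) * dt + 2.0 * dW
        logx += incr
        x = x * np.exp(incr)
    return logx / (nsteps * dt)
```

Averaging the rate over several paths gives a clearly negative value, in line with the almost sure bound.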

7. Conclusions

In this paper, we introduce the nonlinear Magnus expansion and present a new way of constructing numerical Magnus methods. Based on the new Magnus method, we discuss the truncated numerical algorithms and prove that the errors of the corresponding approximations are smaller than those of previous algorithms. Several numerical applications and experiments strongly support the theoretical analysis. As for further applications of the new Magnus method, we believe that it can be applied to semi-discretized stochastic partial differential equations such as the stochastic Korteweg–de Vries equation.

Author Contributions

Formal analysis, X.G.; methodology, X.W.; software, X.G.; review and editing, P.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China [Numbers 11301392, 11501406, 71601119], the “Chenguang” Program supported by the Shanghai Education Development Foundation and the Shanghai Municipal Education Commission (16CG53), and the Tianjin Municipal University Science and Technology Development Fund Project [Number 20140519].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tsiropoulou, E.E.; Vamvakas, P.; Papavassiliou, S. Joint utility-based uplink power and rate allocation in wireless networks: A non-cooperative game theoretic framework. Phys. Commun. 2013, 9, 299–307.
2. Rahman, A.; Mohammad, J.; AbdelRaheem, M.; MacKenzie, A.B. Stochastic resource allocation in opportunistic LTE-A networks with heterogeneous self-interference cancellation capabilities. In Proceedings of the 2015 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Stockholm, Sweden, 29 September–2 October 2015.
3. Tsiropoulou, E.E.; Katsinis, G.K.; Papavassiliou, S. Utility-based power control via convex pricing for the uplink in CDMA wireless networks. In Proceedings of the 2010 European Wireless Conference, Lucca, Italy, 12–15 April 2010.
4. Singhal, C.; De, S.; Trestian, R.; Muntean, G.M. Joint optimization of user-experience and energy-efficiency in wireless multimedia broadcast. IEEE Trans. Mob. Comput. 2014, 13, 1522–1535.
5. Misawa, T. Numerical integration of stochastic differential equations by composition methods. RIMS Kokyuroku 2000, 1180, 166–190.
6. Burkardt, J.; Gunzburger, M.D.; Webster, C. Reduced order modeling of some nonlinear stochastic partial differential equations. Int. J. Numer. Anal. Model. 2007, 4, 368–391.
7. Milstein, G.N.; Repin, M.; Tretyakov, M.V. Numerical methods for stochastic systems preserving symplectic structure. SIAM J. Numer. Anal. 2002, 30, 2066–2088.
8. Malham, J.A.; Wiese, A. Stochastic Lie group integrators. SIAM J. Sci. Comput. 2008, 30, 597–617.
9. Chuluunbaatar, O.; Derbov, V.L.; Galtbayar, A. Explicit Magnus expansions for solving the time-dependent Schrödinger equation. J. Phys. A Math. Theor. 2008, 41, 203–295.
10. Magnus, W. On the exponential solution of differential equations for a linear operator. Commun. Pure Appl. Math. 1954, 7, 649–673.
11. Casas, F.; Iserles, A. Explicit Magnus expansions for nonlinear equations. J. Phys. A Math. Gen. 2006, 39, 5445–5461.
12. Iserles, A.; Nørsett, S.P. On the solution of linear differential equations in Lie groups. Philos. Trans. R. Soc. A 1999, 357, 983–1019.
13. Burrage, K.; Burrage, P.M. High strong order methods for non-commutative stochastic ordinary differential equation systems and the Magnus formula. Phys. D 1999, 33, 34–48.
14. Lord, G.; Malham, J.A.; Wiese, A. Efficient strong integrators for linear stochastic systems. SIAM J. Numer. Anal. 2008, 46, 2892–2919.
15. Kahl, C.; Schurz, H. Balanced Milstein methods for ordinary SDEs. Monte Carlo Methods Appl. 2006, 12, 143–170.
16. Moro, E.; Schurz, H. Boundary preserving semianalytic numerical algorithms for stochastic differential equations. SIAM J. Sci. Comput. 2007, 29, 1525–1540.
17. Donez, N.P.; Yurchenko, I.V.; Yasynskyy, V.K. Mean square behavior of the strong solution of a linear non-autonomous stochastic partial differential equation with Markov parameters. Cybernet. Syst. Anal. 2014, 50, 930–939.
18. Yue, J.; Gan, S. Stability of the drift-implicit and double-implicit Milstein schemes for nonlinear SDEs. Appl. Math. Comput. 2018, 339, 294–301.
19. Kamont, Z.; Nadolski, A. Generalized Euler method for nonlinear first order partial differential functional equations. Demonstr. Math. 2005, 38, 977–996.
20. Połtowicz, J.; Tabor, E.; Pamin, K.; Haber, J. Effect of substituents in the manganese oxo porphyrins catalyzed oxidation of cyclooctane with molecular oxygen. Inorg. Chem. Commun. 2005, 8, 1125–1127.
21. Kloeden, P.E.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer-Verlag: New York, NY, USA, 1998.
22. Milstein, G.N.; Platen, E.; Schurz, H. Balanced implicit methods for stiff stochastic systems. SIAM J. Numer. Anal. 1998, 35, 1010–1019.
23. Misawa, T. A Lie algebraic approach to numerical integration of stochastic differential equations. SIAM J. Sci. Comput. 2001, 23, 866–890.
24. Kahl, C. Positive Numerical Integration of Stochastic Differential Equations. Master's Thesis, Department of Mathematics, University of Wuppertal, Wuppertal, Germany, 2004.
25. Higham, D.J.; Mao, X.R.; Yuan, C.G. Almost sure and moment exponential stability in the numerical simulation of stochastic differential equations. SIAM J. Numer. Anal. 2007, 45, 592–609.
Figure 1. The numerical simulation for the stochastic Duffing–Van der Pol oscillator equation with the nonlinear Magnus method (NLM), $\ddot{x} + \dot{x} - (1 - x^2)x = 2x\xi$, $x(0) = 2$, $\dot{x}(0) = 6$. The stepsize is $h = 2^{-4}$.
Figure 2. Numerical simulation for the equation $dx = (1 - x)dt + 1.4x\,dW$; time interval $[0,4]$; stepsize $dt = 0.5$; $a = 1$, $b = 1$, $\sigma = 1.4$, $a > \sigma^2/2$.
Figure 3. Numerical simulation for the equation dx = (1 − x)dt + 1.4x dW; the time interval: [0, 4]; the stepsize: dt = 0.5; a = 1, b = 1, σ = 1.4.
Figure 4. L²-error for the equation dx = (1 − x)dt + 1.4x dW. The time interval: [0, 1]; the number of simulated paths: 1000.
Figure 5. L²-error for the equation dx = (1 − x)dt + 1.4x dW. The time interval: [0, 1]; the number of simulated paths: 1000.
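The L²-errors of the kind plotted in Figures 4 and 5 are typically estimated by driving a coarse scheme and a fine-grid reference solution with the same Brownian increments and averaging the squared endpoint difference over many paths. The sketch below does this for the explicit Euler-Maruyama scheme only; the stepsizes, refinement ratio, and initial value are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def em_path(x0, h, dW):
    """Euler-Maruyama endpoint for dx = (1 - x) dt + 1.4 x dW,
    driven by the given sequence of Brownian increments dW."""
    x = x0
    for dw in dW:
        x = x + (1.0 - x) * h + 1.4 * x * dw
    return x

def l2_error(h, n_paths=1000, T=1.0, x0=1.0, seed=0):
    """Strong L2 endpoint error of Euler-Maruyama at stepsize h,
    measured against a reference computed on a 16x finer grid from
    the same Brownian path (a common stand-in for an exact solution)."""
    rng = np.random.default_rng(seed)
    ratio = 16
    n_fine = int(round(T / (h / ratio)))
    sq_errs = np.empty(n_paths)
    for p in range(n_paths):
        dW_fine = rng.normal(0.0, np.sqrt(h / ratio), size=n_fine)
        # Summing blocks of fine increments gives the coarse increments.
        dW_coarse = dW_fine.reshape(-1, ratio).sum(axis=1)
        x_ref = em_path(x0, h / ratio, dW_fine)
        x_h = em_path(x0, h, dW_coarse)
        sq_errs[p] = (x_h - x_ref) ** 2
    return np.sqrt(sq_errs.mean())

e1 = l2_error(h=0.1)
e2 = l2_error(h=0.05)
```

Since Euler-Maruyama has strong order 1/2, halving h should shrink the estimated error by roughly a factor of √2, up to Monte Carlo noise.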
Figure 6. Numerical simulation for the equation dx = (x − x³)dt + x dW(t). The time interval: [0, 10]; the stepsize: dt = 0.1; the initial value: X(0) = 1.
Figure 7. Numerical simulation of log|x(t)|/t for the equation dx = (x − x³)dt + x dW(t). The time interval: [0, 500]; the stepsize: dt = 0.1; the initial value: X(0) = 1.
Figure 8. Numerical simulation of log|x(t)|/t for the stochastic differential equation dx = (x − x³)dt + x dW(t). The time interval: [0, 500]; the stepsize: dt = 0.5; the initial value: X(0) = 1.
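The quantity plotted in Figures 7 and 8, log|x(t)|/t, can be approximated along a single simulated path. The sketch below uses plain Euler-Maruyama with a shorter horizon and a smaller stepsize than the figures (the explicit Euler step is only conditionally stable for the cubic drift); the horizon, stepsize, and seed are illustrative assumptions.

```python
import numpy as np

def lyapunov_estimate(T=100.0, h=0.01, x0=1.0, seed=1):
    """Euler-Maruyama approximation of log|x(T)|/T for
        dx = (x - x^3) dt + x dW(t),  x(0) = x0."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(int(T / h)):
        x = x + (x - x**3) * h + x * rng.normal(0.0, np.sqrt(h))
    return np.log(abs(x)) / T

lam = lyapunov_estimate()
```

Because the cubic drift keeps the path near the stable equilibria, log|x(T)|/T stays close to zero for moderate horizons, consistent with the flat curves in the figures.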
Table 1. Numerical simulation for the equation dX(t) = (1 − X(t))dt + 1.4X(t)dW(t); the time interval: [0, T]; the stepsize: dT; weight functions of the balanced Milstein method (BMM): d0(x) = 1 + 0.5 · 1.4², d1(x) = 0; the number of simulated paths: 1500.

| Time   | Stepsize  | Euler  | Milstein | BMM | NLM |
|--------|-----------|--------|----------|-----|-----|
| T = 1  | dT = 1/2  | 27.35% | 22.12%   | 0%  | 0%  |
|        | dT = 1/4  | 26.35% | 8.21%    | 0%  | 0%  |
|        | dT = 1/16 | 17.35% | 0.12%    | 0%  | 0%  |
| T = 4  | dT = 1/2  | 69.35% | 53.45%   | 0%  | 0%  |
|        | dT = 1/4  | 66.24% | 18.48%   | 0%  | 0%  |
|        | dT = 1/16 | 57.89% | 2.45%    | 0%  | 0%  |
| T = 16 | dT = 1/2  | 98.67% | 94.25%   | 0%  | 0%  |
|        | dT = 1/4  | 96.56% | 58.48%   | 0%  | 0%  |
|        | dT = 1/16 | 95.72% | 9.08%    | 0%  | 0%  |
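A minimal version of the experiment behind Table 1 counts, over many simulated paths, how often an explicit scheme drives the (positive) solution negative. The sketch below implements only the Euler and Milstein columns; the initial value X(0) = 1 and the fixed seed are assumptions, so the resulting percentages will not match the table exactly.

```python
import numpy as np

def negative_fraction(method, T=4.0, dT=0.25, n_paths=1500, x0=1.0, seed=0):
    """Fraction of simulated paths of
        dX = (1 - X) dt + 1.4 X dW,  X(0) = x0,
    that become negative at some grid point under the given explicit
    scheme ('euler' or 'milstein')."""
    rng = np.random.default_rng(seed)
    sigma = 1.4
    n_steps = int(round(T / dT))
    hits = 0
    for _ in range(n_paths):
        x, went_negative = x0, False
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dT))
            x_new = x + (1.0 - x) * dT + sigma * x * dW
            if method == "milstein":
                # Milstein correction 0.5 * b * b' * (dW^2 - dT) with b(x) = sigma * x
                x_new += 0.5 * sigma**2 * x * (dW**2 - dT)
            x = x_new
            went_negative = went_negative or x < 0.0
        hits += went_negative
    return hits / n_paths

p_euler = negative_fraction("euler")
p_milstein = negative_fraction("milstein")
```

As in the table, the Milstein correction reduces but does not eliminate negative paths, whereas the BMM and NLM schemes (not sketched here) preserve positivity by construction.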

Share and Cite

MDPI and ACS Style

Wang, X.; Guan, X.; Yin, P. A New Explicit Magnus Expansion for Nonlinear Stochastic Differential Equations. Mathematics 2020, 8, 183. https://doi.org/10.3390/math8020183

