Article

Study of Pricing of High-Dimensional Financial Derivatives Based on Deep Learning

Department of Statistics and Data Science, Jinan University, Guangzhou 510632, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(12), 2658; https://doi.org/10.3390/math11122658
Submission received: 13 April 2023 / Revised: 31 May 2023 / Accepted: 6 June 2023 / Published: 11 June 2023
(This article belongs to the Special Issue Computational Economics and Mathematical Modeling)

Abstract

Many problems in finance and actuarial science can be transformed into the problem of solving backward stochastic differential equations (BSDEs) and partial differential equations (PDEs) with jumps, which are often difficult to solve in high-dimensional cases. To address this, this paper applies a deep learning algorithm to a class of high-dimensional nonlinear PDEs with jump terms and to their corresponding BSDEs with jumps. Using the nonlinear Feynman-Kac formula, the problem of solving this kind of PDE is transformed into the problem of solving the corresponding BSDE with jumps, and the numerical solution problem is turned into a stochastic control problem. The gradient and the jump process of the unknown solution are regarded as policy functions and are approximated, respectively, by two multilayer neural networks acting as function approximators. In this way, the deep learning-based method overcomes the "curse of dimensionality" caused by high-dimensional PDEs with jumps, and numerical solutions are obtained. In addition, this paper replaces the conventional stochastic optimization algorithm for training the neural networks with an optimizer incorporating Nesterov acceleration (NAdam), compares the results with the traditional Adam algorithm, and achieves good results. Finally, the proposed method is applied to three practical high-dimensional problems: the Hamilton-Jacobi-Bellman equation, bond pricing under the jump Vasicek model and option pricing under the jump-diffusion model. The proposed numerical method attains satisfactory accuracy and efficiency and has important application value and practical significance in investment decision-making, option pricing, insurance and other fields.

1. Introduction

High-dimensional (dimension ≥ 3) nonlinear partial differential equations (PDEs) attract a great deal of interest and are widely used in a variety of fields. Many practical problems require a high-dimensional PDE, for example: the Schrödinger equation of the quantum many-body problem, whose dimension is roughly three times the number of electrons or quantum particles in the system; the Black-Scholes equation for pricing financial derivatives, where the dimension of the PDE is the number of underlying financial assets under consideration; and the Hamilton-Jacobi-Bellman equation in dynamic programming. High-dimensional nonlinear PDEs are thus very practical, but because explicit solutions rarely exist or the analytical expressions are too complicated, numerical techniques are usually required. In practical applications, however, high-dimensional nonlinear PDEs are very difficult to solve and remain one of the challenging topics in the academic community. The difficulty is that, owing to the "curse of dimensionality" [1], the time complexity of traditional numerical methods grows exponentially with the dimension, thus requiring extensive computing resources. Nevertheless, approximating the numerical solution of these high-dimensional nonlinear PDEs is urgently needed because such equations can resolve a variety of real-world problems.
In the field of finance, methods for solving high-dimensional PDEs are frequently employed. However, the "heavy tails" of financial markets have been demonstrated by numerous empirical studies. "Abnormal" events such as new inventions, economic policies, wars or other news, as well as various arbitrage and hedging activities, can lead to sudden and intermittent changes in asset prices. These sudden changes in prices and price fluctuations intuitively reflect the jump effects that exist in financial markets. Although jump behavior has a low likelihood of occurring, when it does occur it causes significant harm and often results in investors suffering enormous losses or even going bankrupt. How to build adequate models to depict the jump behavior of asset prices has become a hot topic as financial market research has become more sophisticated and fine-grained. Therefore, it is necessary to incorporate common jump behaviors in financial problems, such as defaults on asset purchases (sales), corporate bankruptcies, operational failures, insurance events, etc., into numerical solutions of high-dimensional PDEs.
Backward stochastic differential equations (BSDEs) are widely used in stochastic control and financial mathematics. One important reason is that a BSDE can be expressed as the stochastic counterpart of a PDE through the Feynman-Kac formula. El Karoui et al. [2] provided a detailed introduction to the properties of BSDEs and their applications to finance. Zhang [3] proposed a numerical scheme for a class of BSDEs with possibly path-dependent terminal values and proved its convergence. Recently, Barigou and Delong [4] examined the pricing of equity-linked life insurance contracts by framing the price as the solution of a system of nonlinear PDEs, reformulating the problem as a BSDE with jumps and solving it numerically using efficient neural networks.
In recent years, deep learning algorithms have rapidly gained popularity and success in various application sectors. As a result, several researchers have successfully used deep learning methods to handle high-dimensional PDE problems. In 2017, Han et al. [5,6] first proposed the deep BSDE method and systematically applied deep learning to general high-dimensional PDEs. Following that, Han and Hu (2020) [7] applied the deep BSDE method to the study of stochastic games and also produced promising outcomes. The method was extended to 2BSDEs and the corresponding fully nonlinear PDEs by Beck et al. (2019) [8]. Raissi et al. (2017) [9] used recent developments in probabilistic machine learning to infer governing equations represented by parametric linear operators, adapting the prior of a Gaussian process to the specific form of the operator and then inferring the parameters of the linear equation from scarce and possibly noisy observations. To approximate the solution of high-dimensional PDEs, Sirignano and Spiliopoulos (2018) [10] suggested a deep neural network technique in which the network is trained to satisfy the boundary conditions, initial conditions and differential operator on batches of randomly sampled time and space points. The achievements listed above show that it is feasible to solve high-dimensional PDEs with deep learning-based methods.
In this paper, we mainly apply the deep learning algorithm to a special class of high-dimensional nonlinear partial differential equations with jumps and obtain a numerical solution of the equation. Specifically, through mathematical derivation and equivalent formula expression of a high-dimensional PDE and backward stochastic differential equation, the problem of solving a partial differential equation is equivalent to the problem of solving a BSDE, and then it is transformed into a stochastic control problem. Then, a deep neural network framework is created to address the issue. Therefore, this method can be used to solve high-dimensional PDEs and corresponding backward stochastic differential equations simultaneously.
The deep learning-based method introduced in this paper for solving this kind of high-dimensional PDE with jumps is mainly applied in the financial field, where it has important applications in financial derivatives pricing, insurance and investment decision-making, financing of small and micro enterprises, risk measurement and other problems.

2. Background Knowledge

2.1. A Class of PDE

We consider a class of semilinear parabolic PDEs with jump term in the following form:
Let $T \in (0, \infty)$, $d \in \mathbb{N}$, and let $f: [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}$ and $g: \mathbb{R}^d \to \mathbb{R}$ be continuous functions. For all $t \in [0,T]$, $x \in \mathbb{R}^d$, the function to be solved, $u = u(t,x) \in C^{1,2}([0,T] \times \mathbb{R}^d)$, has the terminal value condition $u(T,x) = g(x)$ and satisfies
$$\frac{\partial u}{\partial t}(t,x) + \mathcal{L}u(t,x) + f\big(t, x, u(t,x), \sigma^{\mathrm{T}}(t,x)\nabla_x u(t,x), \mathcal{B}u(t,x)\big) = 0 \tag{1}$$
where we introduce the two operators
$$\mathcal{L}u(t,x) = \nabla_x u(t,x)\big(\mu(x) - \lambda\beta(x)\big) + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^{\mathrm{T}}(t,x)\,\operatorname{Hess}_x u(t,x)\big) + \lambda\big(u(t, x+\beta(x)) - u(t,x)\big) \tag{2}$$
$$\mathcal{B}u(t,x) = u(t, x+\beta(x)) - u(t,x) \tag{3}$$
Here $t$ is the time variable, $x$ is the $d$-dimensional space variable, $f$ is the nonlinear part of the equation, $\nabla_x u$ denotes the gradient of $u$ with respect to $x$, and $\operatorname{Hess}_x u$ denotes the Hessian matrix of $u$ with respect to $x$. In particular, this equation can be regarded as a special case of a partial integro-differential equation. What we are interested in is the solution at $t = 0$, $x = \xi \in \mathbb{R}^d$, that is, $u(0, \xi)$ [11,12].

2.2. Backward Stochastic Differential Equations with Jumps

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space, $W: [0,T] \times \Omega \to \mathbb{R}^d$ a $d$-dimensional standard Brownian motion on this space, and $\{\mathcal{F}_t\}_{t \in [0,T]}$ the normal filtration generated by $W$ on $(\Omega, \mathcal{F}, \mathbb{P})$. Let $(X_t)_{0 \le t \le T}$, $(Y_t)_{0 \le t \le T}$, $(Z_t)_{0 \le t \le T}$ and $(U_t)_{0 \le t \le T}$ be integrable $\mathcal{F}$-adapted stochastic processes. The class of forward-backward stochastic differential equations with jump terms considered here has the following form:
$$X_t = X_0 + \int_0^t \mu(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s + \int_0^t \beta(s, X_s)\,d\tilde{N}_s$$
$$Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s, U_s)\,ds - \int_t^T Z_s\,dW_s - \int_t^T U_s\,d\tilde{N}_s \tag{4}$$
where $d\tilde{N}_t$ is the compensated Poisson measure: $\tilde{N}_t$ is a compensated Poisson process with compensator $\lambda t$, i.e., $\tilde{N}_t = N_t - \lambda t$, where $N_t$ is a Poisson process on $(\Omega, \mathcal{F}, \mathbb{P})$ with $\mathbb{E}[N_t] = \lambda t$.
Under the standard Lipschitz assumptions on the coefficients μ , σ , β , f , g , the existence and uniqueness of the solution have been proved [13].

2.3. The Generalized Nonlinear Feynman-Kac Formula

Under appropriate regularity assumptions, if $u(t,x) \in C([0,T] \times \mathbb{R}^d)$ satisfies Equation (1) together with the linear growth conditions $|u(t,x)| \le K(1 + |x|)$ and $|\nabla_x u(t,x)| \le K(1 + |x|)$ for all $(t,x) \in [0,T] \times \mathbb{R}^d$, then the following relationships hold almost everywhere:
$$Y_t = u(t, X_t) \in \mathbb{R} \tag{5}$$
$$Z_t = \nabla_x u(t, X_t)\,\sigma(t, X_t) \in \mathbb{R}^d \tag{6}$$
$$U_t = u\big(t, X_t + \beta(t, X_t)\big) - u(t, X_t) \in \mathbb{R} \tag{7}$$
and $(Y_t, Z_t, U_t)$ is the unique solution of the BSDE (4) [11,12].

2.4. Improvement of Neural Network Parameter Optimization Algorithm

The Adam (Adaptive Moment Estimation) algorithm combines the RMSProp algorithm with classical momentum from physics. It uses first- and second-order moment estimates of the gradient to dynamically adjust the learning rate of each parameter. The Adam optimizer is one of the most popular classical optimizers in deep learning and shows excellent performance in practice [14].
Although Adam combines RMSProp with momentum, adaptive moment estimation with Nesterov acceleration often outperforms classical momentum. We therefore incorporate the Nesterov acceleration effect [15] into the Adam algorithm, that is, we use the NAdam (Nesterov-accelerated Adaptive Moment Estimation) optimization algorithm. Its update formulas are as follows:
$$\hat{g}_t = \frac{g_t}{1 - \prod_{i=1}^{t} \mu_i} \tag{8}$$
$$m_t = \mu\, m_{t-1} + (1 - \mu)\, g_t \tag{9}$$
$$\hat{m}_t = \frac{m_t}{1 - \prod_{i=1}^{t+1} \mu_i} \tag{10}$$
$$n_t = \nu\, n_{t-1} + (1 - \nu)\, g_t^2 \tag{11}$$
$$\hat{n}_t = \frac{n_t}{1 - \nu^t} \tag{12}$$
$$\bar{m}_t = (1 - \mu_t)\,\hat{g}_t + \mu_{t+1}\,\hat{m}_t \tag{13}$$
$$\theta_t = \theta_{t-1} - \eta\,\frac{\bar{m}_t}{\sqrt{\hat{n}_t} + \varepsilon} \tag{14}$$
For the neural network parameters $\theta_t$, $g_t$ is the gradient of the loss with respect to the parameters. Here $m_t$ and $n_t$ are the first-order and second-order moment estimates of the gradient, respectively, which can be regarded as estimates of the expectations $\mathbb{E}[g_t]$ and $\mathbb{E}[g_t^2]$, with $\mu$ and $\nu$ their respective decay rates. Moreover, $\hat{m}_t$ and $\hat{n}_t$ are the bias corrections of $m_t$ and $n_t$, which can be regarded as approximately unbiased estimates of the expectations. It can be seen that NAdam imposes a stronger constraint on the learning rate and influences the gradient update more directly. We therefore apply this optimization algorithm in our method and compare the outcomes with the traditional Adam algorithm.
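To make the update rule above concrete, the following NumPy sketch performs a single NAdam parameter update. It is only an illustration: the function and variable names are our own, and the running product of the momentum schedule $\mu_i$ is simplified to a constant decay $\mu$ (so the product becomes $\mu^t$). In practice, recent releases of PyTorch also expose this optimizer directly as torch.optim.NAdam.

```python
import numpy as np

def nadam_step(theta, grad, state, t, lr=1e-3, mu=0.975, nu=0.999, eps=1e-8):
    """One NAdam update following the formulas above, assuming a constant momentum decay mu."""
    m, n = state["m"], state["n"]
    g_hat = grad / (1.0 - mu ** t)                 # bias-corrected current gradient
    m = mu * m + (1.0 - mu) * grad                 # first-order moment estimate
    m_hat = m / (1.0 - mu ** (t + 1))              # bias correction with the look-ahead factor
    n = nu * n + (1.0 - nu) * grad ** 2            # second-order moment estimate
    n_hat = n / (1.0 - nu ** t)                    # bias correction
    m_bar = (1.0 - mu) * g_hat + mu * m_hat        # Nesterov-style combination of gradient and momentum
    theta = theta - lr * m_bar / (np.sqrt(n_hat) + eps)
    state["m"], state["n"] = m, n
    return theta, state

# illustrative usage: state = {"m": 0.0, "n": 0.0}; theta, state = nadam_step(theta, grad, state, t=1)
```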

3. Main Theorem

3.1. Basic Ideas

In this paper we propose a deep learning-based PDE numerical solution for nonlinear PDEs with jump terms in the form of Equation (1). The basic ideas of the algorithm are as follows:
(1)
By using the generalized nonlinear Feynman-Kac formula, the PDE to be solved can be reformulated equivalently as a BSDE with jumps.
(2)
Taking the gradient and the jump term of the unknown solution as policy functions, the numerical solution of the BSDE with jumps can be viewed as a stochastic control problem, which can further be treated as a reinforcement learning problem.
(3)
Two different deep neural networks are used to approximate this pair of high-dimensional strategy functions, and the neural networks are trained through a deep learning method to obtain the numerical solution of the original equation.

3.2. Transforming the Nonlinear PDE Numerical Solution Problem into a Stochastic Control Problem

It is well known that the generalized nonlinear Feynman-Kac formula connects a PDE and a BSDE under appropriate assumptions. Thus, the solution $Y_s^{t,x}$ of the above BSDE is equivalent to the viscosity solution $u(t,x)$ of the semilinear parabolic partial differential equation with jump term in Equation (1) [16,17]. Since traditional numerical methods for partial differential equations perform poorly in high dimensions, we estimate the solution $u(t,x)$ of Equation (1) by estimating the solution of Equation (4). Since BSDEs originated in the study of stochastic control problems and share common features and close internal connections with them, the nonlinear PDE problem can also be transformed into a stochastic control problem through the nonlinear Feynman-Kac formula.
As for the stochastic control problem related to Equation (4), the following holds almost everywhere for all $(t,x) \in [0,T] \times \mathbb{R}^d$:
$$X_s^{t,x} = x + \int_t^s \mu(r, X_r^{t,x})\,dr + \int_t^s \sigma(r, X_r^{t,x})\,dW_r + \int_t^s \beta(r, X_r^{t,x})\,d\tilde{N}_r$$
$$Y_s^{t,x} = g(X_T^{t,x}) + \int_s^T f(r, X_r^{t,x}, Y_r^{t,x}, Z_r^{t,x}, U_r^{t,x})\,dr - \int_s^T Z_r^{t,x}\,dW_r - \int_s^T U_r^{t,x}\,d\tilde{N}_r \tag{15}$$
where $s \in [t, T]$ [18].
Under appropriate regularity assumptions on the nonlinear function $f$, the triple consisting of $u(0, \xi)$, $\nabla_x u(t, X_t)\,\sigma(t, X_t)$ and $u(t, X_t + \beta(t, X_t)) - u(t, X_t)$ is the unique global minimizer of the following functional over $(y, Z, U) \in \mathbb{R} \times \mathcal{A} \times \mathcal{U}$, where $\mathcal{A}$ and $\mathcal{U}$ denote suitable spaces of admissible processes:
$$(y, Z, U) \mapsto \mathbb{E}\Big[\big|Y_T^{\,y, Z, U} - g(X_T^{t,x})\big|^2\Big] \in [0, \infty) \tag{16}$$
We regard $Z$ and $U$ as the policy functions of a deep learning problem and use deep neural networks (DNNs) to approximate them. In this way, the stochastic process $u(t, X_t)$ corresponds to the solution of the stochastic control problem and can be obtained from the policy functions $Z$ and $U$; thus, the numerical solution of the nonlinear partial differential equation is transformed into a stochastic control problem.

3.3. Forward Discretization of the Backward Stochastic Differential Equations with Jumps

For the following BSDE:
$$X_t = x + \int_0^t \mu(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s + \int_0^t \beta(s, X_s)\,d\tilde{N}_s$$
$$Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s - \int_t^T U_s\,d\tilde{N}_s \tag{17}$$
For all $t_1, t_2 \in [0, T]$ with $t_1 \le t_2$, the following equations hold almost everywhere:
$$X_{t_2} = X_{t_1} + \int_{t_1}^{t_2} \mu(s, X_s)\,ds + \int_{t_1}^{t_2} \sigma(s, X_s)\,dW_s + \int_{t_1}^{t_2} \beta(s, X_s)\,d\tilde{N}_s \tag{18}$$
$$Y_{t_2} = Y_{t_1} - \int_{t_1}^{t_2} f(s, X_s, Y_s, Z_s)\,ds + \int_{t_1}^{t_2} Z_s\,dW_s + \int_{t_1}^{t_2} U_s\,d\tilde{N}_s \tag{19}$$
By substituting (6) and (7) into (19), we obtain:
$$Y_{t_2} = Y_{t_1} - \int_{t_1}^{t_2} f\big(s, X_s, Y_s, (\nabla_x u\,\sigma)(s, X_s)\big)\,ds + \int_{t_1}^{t_2} \big\langle (\nabla_x u\,\sigma)(s, X_s),\, dW_s \big\rangle_{\mathbb{R}^d} + \int_{t_1}^{t_2} \big(u(s, X_s + \beta(s, X_s)) - u(s, X_s)\big)\,d\tilde{N}_s \tag{20}$$
Now we discretize in time. Divide the time interval $[0, T]$ into $N$ subintervals with grid points $\{t_0, t_1, \ldots, t_N\} \subset [0, T]$, $0 = t_0 < t_1 < \cdots < t_N = T$. For $N$ sufficiently large, according to (18) and (19) we have:
$$X_{t_{n+1}} - X_{t_n} \approx \mu(t_n, X_{t_n})\,\Delta t_n + \sigma(t_n, X_{t_n})\,\Delta W_{t_n} + \beta(t_n, X_{t_n})\,\Delta\tilde{N}_{t_n} \tag{21}$$
$$Y_{t_{n+1}} \approx Y_{t_n} - f\big(t_n, X_{t_n}, Y_{t_n}, (\nabla_x u\,\sigma)(t_n, X_{t_n})\big)\,\Delta t_n + \big\langle (\nabla_x u\,\sigma)(t_n, X_{t_n}),\, W_{t_{n+1}} - W_{t_n} \big\rangle + U(t_n, X_{t_n})\,\big(\tilde{N}_{t_{n+1}} - \tilde{N}_{t_n}\big) \tag{22}$$
where $\Delta t_n = t_{n+1} - t_n$, $\Delta W_{t_n} = W_{t_{n+1}} - W_{t_n}$ and $\Delta\tilde{N}_{t_n} = \tilde{N}_{t_{n+1}} - \tilde{N}_{t_n}$ [5,18].
The above is the Euler scheme for discretization. In this way, we can start from the initial value X 0 of (4) and finally obtain the approximation of the Euler scheme of (4) at N partitions.
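As a concrete illustration of the Euler scheme (21), the sketch below simulates M sample paths of the forward jump-diffusion with compensated Poisson increments. It is a minimal sketch under our own naming conventions: mu_fn, sigma_fn and beta_fn are placeholders for the problem-specific coefficients, and, as in the numerical examples of Section 4, the diffusion and jump coefficients are assumed to act componentwise (diagonally).

```python
import numpy as np

def simulate_paths(mu_fn, sigma_fn, beta_fn, x0, T, N, M, lam, rng=None):
    """Euler scheme for dX = mu dt + sigma dW + beta dN~, where dN~ = dN - lam*dt.
    Returns X of shape (M, N+1, d) together with the increments dW and dN~ for reuse
    in the forward propagation of the BSDE."""
    rng = rng or np.random.default_rng()
    d = x0.shape[0]
    dt = T / N
    X = np.empty((M, N + 1, d))
    X[:, 0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=(M, N, d))   # Brownian increments
    dN = rng.poisson(lam * dt, size=(M, N))             # Poisson increments
    dNtilde = dN - lam * dt                             # compensated Poisson increments
    for n in range(N):
        t = n * dt
        X[:, n + 1] = (X[:, n]
                       + mu_fn(t, X[:, n]) * dt
                       + sigma_fn(t, X[:, n]) * dW[:, n]
                       + beta_fn(t, X[:, n]) * dNtilde[:, n, None])
    return X, dW, dNtilde
```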

3.4. General Framework for Neural Networks

Two feedforward neural networks are established at each time step $t = t_n$ in (21) and (22). One approximates the gradient of the unknown solution, i.e., the mapping $X_{t_n} \mapsto Z_{t_n} = \nabla_x u(t_n, X_{t_n})\,\sigma(t_n, X_{t_n})$; this network is denoted $\mathrm{NN}_{\theta_Z^n}(x)$, where $\theta_Z^n$ collects all parameters of this network and the subscript indicates that it approximates $Z_{t_n}$ at time $t_n$. The other approximates the jump component of the unknown solution, i.e., the mapping $X_{t_n} \mapsto U_{t_n} = u(t_n, X_{t_n} + \beta(t_n, X_{t_n})) - u(t_n, X_{t_n})$; this network is denoted $\mathrm{NN}_{\theta_U^n}(x)$, where $\theta_U^n$ collects all parameters of this network and the subscript indicates that it approximates $U_{t_n}$ at time $t_n$. For convenience we write $\theta_Z = \{\theta_Z^1, \ldots, \theta_Z^{N-1}\}$, $\theta_U = \{\theta_U^1, \ldots, \theta_U^{N-1}\}$ and $\theta = \{\theta_Z, \theta_U\}$. As shown in Figure 1, all sub-networks are stacked together to form one complete neural network.
In the initial layer, $X_{t_0} = \xi \in \mathbb{R}^d$ is a random variable. The quantities $u(t_0, X_{t_0})$, $\nabla_x u(t_0, X_{t_0})$ and $u(t_0, X_{t_0} + \beta(t_0, X_{t_0})) - u(t_0, X_{t_0})$ in the initial layer are unknown, and we treat them as parameters of the neural network. They correspond to the initial values of the BSDE with jumps as follows:
$$Y_0 = u(t_0, \xi) \tag{23}$$
$$Z_0 = \nabla_x u(t_0, \xi) \tag{24}$$
and
$$U_0 = u\big(t_0, \xi + \beta(t_0, \xi)\big) - u(t_0, \xi) \tag{25}$$
In this network, $X_{t_n}$ in the current layer depends on $X_{t_{n-1}}$ in the previous layer, and $u(t_n, X_{t_n})$ in the current layer depends on $X_{t_{n-1}}$, $u(t_{n-1}, X_{t_{n-1}})$, $\nabla_x u(t_{n-1}, X_{t_{n-1}})$ and $u(t_{n-1}, X_{t_{n-1}} + \beta(t_{n-1}, X_{t_{n-1}})) - u(t_{n-1}, X_{t_{n-1}})$ in the previous layer. However, the gradient term $\nabla_x u(t_n, X_{t_n})$ and the jump term $u(t_n, X_{t_n} + \beta(t_n, X_{t_n})) - u(t_n, X_{t_n})$ are not available in the current layer. Therefore, as shown in Figure 1, our solution is to build two sub-networks that take the current layer's $X_{t_n}$ as input to represent these two quantities. In addition, the final loss function is constructed from the given terminal value condition $u(T, x)$, that is, $u(t_N, X_{t_N})$ in the network, which corresponds to $g(X_T)$ in the nonlinear Feynman-Kac formula.
Specifically, for $n = 0, 1, \ldots, N-1$, we set $\hat{Y}_0$, $\hat{Z}_0$ and $\hat{U}_0$ as parameters and obtain the Euler approximation $\hat{Y}$ of the process forward in time:
$$\hat{Y}_{t_{n+1}} = \hat{Y}_{t_n} - f\big(t_n, X_{t_n}, \hat{Y}_{t_n}, \hat{Z}_{t_n}\big)\,\Delta t_n + \big\langle \hat{Z}_{t_n},\, \Delta W_{t_n} \big\rangle + \hat{U}_{t_n}\,\Delta\tilde{N}_{t_n} \tag{26}$$
For suitable $\theta_Z^n$ we have $\hat{Z}_{t_n} = \mathrm{NN}_{\theta_Z^n}(X_{t_n}) \approx \nabla_x u\,\sigma(t_n, X_{t_n})$, and for suitable $\theta_U^n$ we have $\hat{U}_{t_n} = \mathrm{NN}_{\theta_U^n}(X_{t_n}) \approx u(t_n, X_{t_n} + \beta(t_n, X_{t_n})) - u(t_n, X_{t_n})$. Then we obtain a suitable approximation of $u(0, \xi)$:
$$\hat{Y}_0 \approx u(0, \xi) \tag{27}$$
The mean squared error between the final output $\hat{Y}_{t_N}$ of the neural network and the true value $g(X_{t_N})$ at the terminal time is selected as the loss function:
$$\theta \mapsto \mathrm{loss}(\theta) = \mathbb{E}\Big[\big|\hat{Y}_{t_N} - g(X_{t_N})\big|^2\Big] \in [0, \infty) \tag{28}$$
Here $\theta$ is the set of all trainable parameters of the above system, $\theta = \{\hat{Y}_0, \hat{Z}_0, \hat{U}_0, \theta_Z^1, \ldots, \theta_Z^{N-1}, \theta_U^1, \ldots, \theta_U^{N-1}\}$. This loss function is used because $Y_T = g(X_T)$. The expectation in (28) is taken over all sample paths, but since there are infinitely many sample paths, the whole training set cannot be traversed. We therefore adopt an optimization method based on stochastic gradient descent: in each iteration, only a batch of samples is used to estimate the loss function, from which the gradient of the loss with respect to all parameters is computed and a one-step parameter update is performed.
In this way, backpropagation through the network lets an optimizer update the parameters layer by layer. Once the DNN is trained, $\hat{Y}_0$ is a fixed parameter; reading out this parameter gives the desired value.
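The forward construction of $\hat{Y}$ just described translates almost line by line into PyTorch. The following sketch is only an illustration under assumed names: X, dW and dNtilde are tensors of simulated path data (as produced, e.g., by an Euler scheme like the one in Section 3.3), Y0, Z0 and U0 are trainable tensors of shapes (1, 1), (1, d) and (1, 1), subnet_Z and subnet_U are lists of N − 1 sub-networks, f_fn is the driver taking (t, x, y, z, u), and t_grid holds the grid times.

```python
import torch

def forward_rollout(X, dW, dNtilde, Y0, Z0, U0, subnet_Z, subnet_U, f_fn, dt, t_grid):
    """Propagate Y-hat forward in time according to Eq. (26):
    Y_{n+1} = Y_n - f(t_n, X_n, Y_n, Z_n) dt + <Z_n, dW_n> + U_n dN~_n."""
    M, N = X.shape[0], dW.shape[1]
    Y = Y0.expand(M, -1)                   # trainable initial value, approximates u(0, xi)
    Z = Z0.expand(M, -1)                   # trainable initial gradient term
    U = U0.expand(M, -1)                   # trainable initial jump term
    for n in range(N):
        drift = f_fn(t_grid[n], X[:, n], Y, Z, U) * dt
        Y = (Y - drift
             + (Z * dW[:, n]).sum(dim=1, keepdim=True)
             + U * dNtilde[:, n].unsqueeze(1))
        if n + 1 < N:                      # at interior times the sub-networks supply Z and U
            Z = subnet_Z[n](X[:, n + 1])
            U = subnet_U[n](X[:, n + 1])
    return Y                               # approximation of Y_{t_N}, compared with g(X_{t_N}) in the loss
```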

3.5. Details of the Algorithms

The detailed steps of our proposed deep learning-based algorithm for numerically solving the BSDE with jumps are presented as follows.
According to our algorithm, which uses a deep learning-based neural network solver, the BSDE with jumps in the form of Equation (4) can be numerically solved in the following ways:
  • Simulate sample paths using standard Monte Carlo methods;
  • Use deep neural networks (DNNs) to approximate $Z_t = \nabla_x u(t, X_t)\,\sigma(t, X_t)$ and $U_t = u(t, X_t + \beta(t, X_t)) - u(t, X_t)$, and then plug them into the BSDE with jumps to perform a forward iteration in time.
For simplicity, we use the one-dimensional case as an example; the high-dimensional case is similar. We divide the time interval $[0, T]$ into $N$ subintervals with $\{t_0, t_1, \ldots, t_N\} \subset [0, T]$ and $0 = t_0 < t_1 < \cdots < t_N = T$, and set $h_i = t_{i+1} - t_i$, $dW_i = W_{i+1} - W_i$ and $d\tilde{N}_i = \tilde{N}_{i+1} - \tilde{N}_i$. The detailed steps are as follows:
(1)
$M$ Monte Carlo paths of the diffusion process $\{X_i^j : i = 0, 1, \ldots, N;\ j = 1, 2, \ldots, M;\ X_0^j \equiv X_0\}$ are sampled by the Euler scheme:
$$X_{i+1}^j = X_i^j + \mu(t_i, X_i^j)\,h_i + \sigma(t_i, X_i^j)\,dW_i^j + \beta(t_i, X_i^j)\,d\tilde{N}_i^j \tag{29}$$
This step is the same as the standard Monte Carlo method. Other discretization schemes can also be used, such as logarithmic Euler discretization or Milstein discretization.
(2)
At $t_0 = 0$, the initial values $Y_0$, $Z_0 = \nabla_x u(0, X_0)\,\sigma(0, X_0)$ and $U_0 = u(0, X_0 + \beta(0, X_0)) - u(0, X_0)$ are treated as part of the neural network parameters; $Y_0$, $Z_0$ and $U_0$ are all initialized as random numbers drawn from an empirical range.
(3)
At each time step $t_i$, given $\{X_i^j : j = 1, 2, \ldots, M\}$, the values $\{Z_i^j : j = 1, 2, \ldots, M\}$ and $\{U_i^j : j = 1, 2, \ldots, M\}$ are approximated using deep neural networks. Note that at each time $t = t_i$, the same two sub-networks are shared by all Monte Carlo paths. In the one-dimensional case, as described in Section 3.4, we introduce two sub-networks $\mathrm{NN}_{\theta_Z^i}: \mathbb{R} \to \mathbb{R}$ such that $Z_i^j = \mathrm{NN}_{\theta_Z^i}(X_i^j)$, $j = 1, 2, \ldots, M$, and $\mathrm{NN}_{\theta_U^i}: \mathbb{R} \to \mathbb{R}$ such that $U_i^j = \mathrm{NN}_{\theta_U^i}(X_i^j)$, $j = 1, 2, \ldots, M$, where $\theta_Z = \{\theta_Z^1, \ldots, \theta_Z^{N-1}\}$, $\theta_U = \{\theta_U^1, \ldots, \theta_U^{N-1}\}$ and $\theta = \{\theta_Z, \theta_U\}$.
(4)
For $t_i \in [0, T)$, we have
$$Y_{i+1}^j = Y_i^j - f\big(t_i, X_i^j, Y_i^j, Z_i^j, U_i^j\big)\,h_i + \big\langle Z_i^j,\, dW_i^j \big\rangle + U_i^j\, d\tilde{N}_i^j \tag{30}$$
According to this formula, we can directly compute $Y_{i+1}^j$ at the next time step; this propagation step itself does not introduce any additional parameters to be optimized. In this way, the BSDE with jumps propagates forward in time from $t_i$ to $t_{i+1}$. Along each Monte Carlo path, as the BSDE with jumps propagates forward in time from 0 to $T$, $Y_N^j$ is obtained as $Y_N^j(Y_0, Z_0, U_0, \theta)$, where $\theta = \{\theta_Z, \theta_U\}$ collects all the parameters of the $N-1$ layers of sub-networks.
(5)
Calculate the following loss function:
$$L_{\mathrm{Forward}} = \frac{1}{M} \sum_{j=1}^{M} \Big| Y_N^j(Y_0, Z_0, U_0, \theta) - g(X_N^j) \Big|^2 \tag{31}$$
where g is the terminal function.
(6)
Use stochastic optimization algorithms to minimize the loss function:
$$\big(\tilde{Y}_0, \tilde{Z}_0, \tilde{U}_0, \tilde{\theta}\big) = \operatorname*{arg\,min}_{\,Y_0,\, Z_0,\, U_0,\, \theta}\; \frac{1}{M} \sum_{j=1}^{M} \Big| Y_N^j(Y_0, Z_0, U_0, \theta) - g(X_N^j) \Big|^2 \tag{32}$$
The estimated value $\tilde{Y}_0$ is the desired initial value at time $t = 0$.
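Steps (5) and (6) then amount to a standard stochastic-optimization loop. The sketch below reuses the assumed helpers from the previous snippets (sample_batch for path generation, forward_rollout for the forward propagation, g_fn for the terminal function, f_fn, dt, t_grid and num_iterations); all names are illustrative, and Y0, Z0, U0 are assumed to be tensors created with requires_grad=True.

```python
import torch

# collect all trainable objects: initial values plus every sub-network's parameters
params = [Y0, Z0, U0]
for net in list(subnet_Z) + list(subnet_U):
    params += list(net.parameters())

optimizer = torch.optim.NAdam(params, lr=1e-3)   # or torch.optim.Adam for comparison

for step in range(num_iterations):
    X, dW, dNtilde = sample_batch()              # fresh Monte Carlo paths for this batch
    Y_T = forward_rollout(X, dW, dNtilde, Y0, Z0, U0, subnet_Z, subnet_U, f_fn, dt, t_grid)
    loss = ((Y_T - g_fn(X[:, -1])) ** 2).mean()  # empirical loss L_Forward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(Y0))   # the learned Y0 is the desired approximation of u(0, xi)
```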

4. Numerical Results

In this section, we implement the theoretical framework of the proposed algorithm with deep neural networks. Many problems in the financial field can be solved numerically via high-dimensional PDEs and the corresponding BSDEs. The jump process can capture sudden external shocks and hence accurately describe a class of uncertain events, a situation that is very common in financial markets. We therefore extend the numerical solution of high-dimensional PDEs to the jump-diffusion case. Three classical high-dimensional PDEs with important applications in finance-related fields are selected for numerical simulation: financial derivative pricing under the jump-diffusion model, bond pricing under the Vasicek model with jumps, and the Hamilton-Jacobi-Bellman equation.
The numerical experiments were run on a 64-bit Windows 10 system using the PyTorch deep learning framework in Python. All examples in this section are computed from 1000 sample paths, each with a time partition of N = 100. Each example was run 10 times independently and the results averaged. All neural network parameters are initialized from a uniform distribution. Each sub-network at each time node contains 4 layers: one d-dimensional input layer, two (d + 10)-dimensional hidden layers and one d-dimensional output layer. The rectified linear unit (ReLU) is used as the activation function, and batch normalization (BN) is applied after each linear transformation and before each activation.
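One way to realize the sub-network configuration just described (a d-dimensional input, two (d + 10)-dimensional hidden layers, batch normalization after each linear transformation, and ReLU activations) is sketched below; the helper name and the choice of output dimension per network are our own.

```python
import torch.nn as nn

def make_subnet(d, out_dim):
    """Sub-network used at one time step: d -> (d+10) -> (d+10) -> out_dim,
    with batch normalization after every linear map and ReLU before the next layer."""
    return nn.Sequential(
        nn.Linear(d, d + 10),
        nn.BatchNorm1d(d + 10),
        nn.ReLU(),
        nn.Linear(d + 10, d + 10),
        nn.BatchNorm1d(d + 10),
        nn.ReLU(),
        nn.Linear(d + 10, out_dim),
        nn.BatchNorm1d(out_dim),
    )

# e.g., one pair of sub-networks per interior time step:
# subnet_Z = [make_subnet(d, d) for _ in range(N - 1)]   # approximates Z at each t_n
# subnet_U = [make_subnet(d, 1) for _ in range(N - 1)]   # approximates the (scalar) jump term U
```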

4.1. Pricing of Financial Derivatives under Jump Diffusion Model

In this section, we will apply the proposed method to the pricing of derivatives related to a 100-dimensional jump diffusion model. In this model, the stock price X t satisfies the following jump diffusion model [19]:
$$dX_t = (r - \lambda k)\,X_t\,dt + \sigma X_t\,dW_t + (V - 1)\,X_t\,d\tilde{N}_t \tag{33}$$
where $r$ is the constant discount rate, $\lambda$ is the average number of jumps of the stock price per unit time and $\sigma$ is the constant volatility. $V$ represents the jump magnitude. Assuming that the jump amplitude is fixed so that $V$ is constant, we let $k = V - 1$ with $k > -1$. The equation can then be written as $dX_t = r X_t\,dt + \sigma X_t\,dW_t + k X_t\,d\tilde{N}_t$.
It is known that the European call option with the stock as the underlying asset satisfies the following partial differential equation:
$$\frac{\partial u}{\partial t}(t,x) + \frac{1}{2}\sigma^2 x^2 \Delta_x u(t,x) + (r - \lambda k)\,x\,\nabla_x u(t,x) + \lambda\big(u(t, Vx) - u(t,x)\big) - r\,u(t,x) = 0 \tag{34}$$
$$u(T, x) = (x - K)^{+}$$
For all $t \in [0,T]$, $x, \omega \in \mathbb{R}^d$, $y \in \mathbb{R}$, $z \in \mathbb{R}^{1 \times d}$, suppose $d = 100$, $T = 1$, $N = 100$, $\lambda = 1$, $X_0 = (1, \ldots, 1)$, $\mu(t,x) = r X_t$, $\sigma(t,x) = \sigma X_t$, $\beta(t,x) = (V-1) X_t = k X_t$, $f(t,x,y,z) = -r y$ and $g(x) = \big(\max_{1 \le i \le 100} x_i - 5\big)^{+}$, where $r_i$, $\sigma_i$, $k_i$ are randomly selected in $(0,1)$ for $i = 1, \ldots, 100$.
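As an illustration only, the coefficient, driver and payoff functions for this example could be wired up as follows (NumPy, with the same calling conventions as the earlier sketches). The random draws and the seed are placeholders, and since the text does not state which scalar rate enters the driver in the 100-dimensional case, the mean of the sampled rates is used there purely for illustration.

```python
import numpy as np

d = 100
rng = np.random.default_rng(0)                     # illustrative seed
r, sigma, k = (rng.uniform(0.0, 1.0, d) for _ in range(3))

mu_fn    = lambda t, x: r * x                      # drift r_i x_i
sigma_fn = lambda t, x: sigma * x                  # diffusion sigma_i x_i
beta_fn  = lambda t, x: k * x                      # jump size k_i x_i = (V_i - 1) x_i
f_fn     = lambda t, x, y, z, u: -r.mean() * y     # discounting driver -r y (scalar rate: illustrative choice)
g_fn     = lambda x: np.maximum(x.max(axis=-1, keepdims=True) - 5.0, 0.0)   # payoff (max_i x_i - 5)^+
```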
After the computations, Table 1 and Table 2 report the main numerical results for the derivative pricing problem under the 100-dimensional jump-diffusion model with the Adam and NAdam optimizers, respectively, including, as the number of iteration steps changes, the mean and standard deviation of $u(0, X_0)$, the mean and standard deviation of the loss function, and the running time. Only the results for iteration steps $n \in \{1000, 2000, 3000, 4000\}$ are shown as typical examples. We take the initial value $u(0, X_0) = 0.3124$ obtained from the classical Monte Carlo method as the reference (exact) solution and compare it with the numerical results.
It can be seen from Table 1 and Table 2 that, for the same number of iteration steps, the computation times required by the two optimizers are similar. As the iterations proceed, the loss function value of the NAdam optimizer is smaller than that of the Adam optimizer at the same iteration step, and its final loss value is also smaller.
Figure 2 and Figure 3 show how the initial value estimate and the loss function change with the number of iteration steps when the Adam and NAdam optimizers are used, respectively. The initial value estimates and loss functions of both methods converge rapidly, but the latter converges noticeably faster than the former for both the initial value estimate and the loss function: the former levels off after about 1500 iterations, while the latter levels off after about 700 iterations. After 4000 iterations, the numerical solution obtained with the Adam algorithm is 0.32143775 and that obtained with the NAdam algorithm is 0.32273737; the two are similar and not significantly different from the reference value simulated by the Monte Carlo method.

4.2. Bond Pricing under the Jumping Vasicek Model

In this section, we use the proposed method to solve the pricing problem for a class of bonds whose interest rate follows the Vasicek model with jumps [20,21,22]. In this model, the short-term interest rate $X_t$ obeys the following stochastic differential equation:
$$dX_t = a\,(b - X_t)\,dt + \sigma\,dW_t + k\,d\tilde{N}_t$$
i.e., each component of $X_t$ follows:
$$dX_t^i = a_i\,(b_i - X_t^i)\,dt + \sigma_i\,dW_t + k_i\,d\tilde{N}_t$$
For all $t \in [0,T]$, $x, \omega \in \mathbb{R}^d$, $y \in \mathbb{R}$, $z \in \mathbb{R}^{1 \times d}$, suppose $d = 100$, $T = 1$, $N = 100$, $\lambda = 1$, $X_0 = (1, \ldots, 1)$, $\mu(t,x) = (\mu_1, \ldots, \mu_d)$ with $\mu_i = a_i(b_i - x_i)$, $\sigma(t,x) = (\sigma_1, \ldots, \sigma_d)$ and $\beta(t,x) = (\beta_1, \ldots, \beta_d)$, where $a_i$, $b_i$, $\sigma_i$, $\beta_i$ are randomly selected in $(0,1)$, and $f(t,x,y,z) = -\big(\max_{1 \le i \le n} x_i\big)\, y$, $g(x) = 1$. Then, for the price $u$ of a zero-coupon bond paying 1 at maturity $T$ under the above jump Vasicek model, $u(T, x) = 1$ for all $x \in \mathbb{R}^d$, and the following partial differential equation holds for all $t \in [0,T]$, $x \in \mathbb{R}^d$:
$$\frac{\partial u}{\partial t}(t,x) + \sum_{i=1}^{n} \big(a_i (b_i - x_i) - \beta_i\big)\,\frac{\partial u}{\partial x_i} + \frac{1}{2} \sum_{1 \le i, j \le n} \sigma_i \sigma_j\, \frac{\partial^2 u}{\partial x_i\, \partial x_j} + u\big(t, x + \beta(x)\big) - u(t,x) - \Big(\max_{1 \le i \le n} x_i\Big)\, u(t,x) = 0 \tag{37}$$
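For comparison with the previous example, a minimal sketch of the coefficient functions for this jump Vasicek example, under the stated parameter choices and the same illustrative naming conventions, is:

```python
import numpy as np

d = 100
rng = np.random.default_rng(0)                     # illustrative seed
a, b, sig, bta = (rng.uniform(0.0, 1.0, d) for _ in range(4))

mu_fn    = lambda t, x: a * (b - x)                # mean-reverting drift a_i (b_i - x_i)
sigma_fn = lambda t, x: sig * np.ones_like(x)      # constant volatility sigma_i
beta_fn  = lambda t, x: bta * np.ones_like(x)      # constant jump size beta_i
f_fn     = lambda t, x, y, z, u: -x.max(axis=-1, keepdims=True) * y   # driver -(max_i x_i) y
g_fn     = lambda x: np.ones(x.shape[:-1] + (1,))  # zero-coupon bond pays 1 at maturity
```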
After the computations, Table 3 and Table 4 report the main numerical results for this bond pricing problem under the jump Vasicek interest rate model with the Adam and NAdam optimizers, respectively, including, as the number of iteration steps changes, the mean and standard deviation of $u(0, X_0)$, the mean and standard deviation of the loss function, and the running time. Only the results for iteration steps $n \in \{1000, 2000, 3000, 4000\}$ are shown as typical examples. We take the initial value $u(0, X_0) = 0.2702$ obtained from the classical Monte Carlo method as the reference (exact) solution and compare it with the numerical results.
It can be seen from Table 3 and Table 4 that, as the iterations proceed, the loss function value of the NAdam optimizer is smaller than that of the Adam optimizer at the same iteration step, and its final loss value is smaller, at the cost of a longer running time.
Figure 4 and Figure 5 show how the initial value estimate and the loss function change with the number of iteration steps when the Adam and NAdam optimizers are used, respectively. The initial value estimates and loss functions of both methods converge rapidly, but the latter converges noticeably faster and more stably than the former for both the initial value estimate and the loss function. After 4000 iterations, the numerical solution obtained with the Adam algorithm is 0.25530547 and that obtained with the NAdam algorithm is 0.27403888; these are similar to and not significantly different from the reference value simulated by the Monte Carlo method.

4.3. Hamilton-Jacobi-Bellman (HJB) Equation

In fields such as finance, investment, and risk management, optimization and control problems are often involved, and these problems are often represented by stochastic optimal control models. One way to solve this kind of control problem is to obtain and solve the Hamilton-Jacobi-Bellman equation (HJB equation for short) of the corresponding control problem according to the principle of dynamic programming. In this section, we use the proposed method to solve a class of 100-dimensional HJB equations [12]:
For all $t \in [0,T]$, $x, \omega \in \mathbb{R}^d$, $y \in \mathbb{R}$, $z \in \mathbb{R}^{1 \times d}$, suppose $d = 100$, $T = 1$, $N = 100$, $\lambda = 1$, $X_0 = (1, \ldots, 1)$, $\mu(t,x) = 0$, $\sigma(t,x) = \sqrt{2}$, $\beta(t,x) = \beta^{\mathrm{T}} x$ with $\beta = (\beta_1, \ldots, \beta_d)$ and $\beta_i$ randomly selected in $(0,1)$, $f(t,x,y,z) = -\|z\|_{\mathbb{R}^{1 \times d}}^2$ and $g(x) = \ln\big((1 + \|x\|^2)/2\big)$. Then $u(T, x) = \ln\big((1 + \|x\|^2)/2\big)$ for all $x \in \mathbb{R}^d$, and the following partial differential equation holds for all $t \in [0,T]$, $x \in \mathbb{R}^d$:
$$\frac{\partial u}{\partial t}(t,x) + \Delta_x u(t,x) + u\big(t, x + \beta(x)\big) - u(t,x) - \nabla_x u(t,x)\,\beta(x) = \big\|\nabla_x u(t,x)\big\|^2 \tag{38}$$
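The driver and terminal condition of this HJB example translate directly into code. The sketch below (same illustrative conventions as before) covers only $\mu$, $\sigma$, $f$ and $g$; the jump coefficient $\beta$ follows the componentwise random draw described above and is omitted here.

```python
import numpy as np

mu_fn    = lambda t, x: np.zeros_like(x)                      # mu(t, x) = 0
sigma_fn = lambda t, x: np.sqrt(2.0) * np.ones_like(x)        # sigma(t, x) = sqrt(2), acting diagonally
f_fn     = lambda t, x, y, z, u: -np.sum(z ** 2, axis=-1, keepdims=True)           # driver -||z||^2
g_fn     = lambda x: np.log((1.0 + np.sum(x ** 2, axis=-1, keepdims=True)) / 2.0)  # terminal ln((1+||x||^2)/2)
```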
After the computations, the main numerical results for this class of HJB equations with the Adam and NAdam optimizers are shown in Table 5 and Table 6, respectively, including, as the number of iteration steps changes, the mean and standard deviation of $u(0, X_0)$, the mean and standard deviation of the loss function, and the running time. Only the results for iteration steps $n \in \{2000, 4000, 6000, 8000\}$ are shown as typical examples. We take the initial value $u(0, X_0) = 2.6119$ obtained from the classical Monte Carlo method as the reference (exact) solution and compare it with the numerical results.
It can be seen from Table 5 and Table 6 that, as the iterations proceed, the loss function value of the NAdam optimizer is smaller than that of the Adam optimizer at the same iteration step, and its final loss value is smaller, although the running time is longer.
Figure 6 and Figure 7 show how the initial value estimate and the loss function change with the number of iteration steps when the Adam and NAdam optimizers are used, respectively. The latter converges noticeably faster than the former for both the initial value estimate and the loss function: the former levels off after about 7000 iterations, while the latter levels off after about 5500 iterations. After 8000 iterations, the numerical solution obtained with the Adam algorithm is 2.73136686 and that obtained with the NAdam algorithm is 2.6456454; these are similar to and not significantly different from the reference value simulated by the Monte Carlo method.

5. Conclusions

In this paper, we propose an algorithm that can be used to solve a class of partial differential equations with jump terms and their corresponding backward stochastic differential equations. Through the nonlinear Feynman-Kac formula, the high-dimensional nonlinear PDE with jumps and its corresponding BSDE can be expressed equivalently, and the numerical solution problem can be regarded as a stochastic control problem. We then treat the gradient and the jump process of the unknown solution as policy functions and use two neural networks at each discretization time step to approximate them, respectively. In this way, deep learning is used to overcome the "curse of dimensionality" caused by high-dimensional PDEs with jumps and to obtain numerical solutions. In addition, we replace the traditional Adam algorithm with the NAdam stochastic optimization algorithm within our method.
Next, focusing on the financial field, we applied our algorithm to three common high-dimensional problems in finance-related areas and compared the results. The algorithm performs well in these numerical simulations. Moreover, after applying the new optimizer to the deep learning algorithm, the convergence of the model is generally faster and its generalization ability is significantly improved, at the possible cost of a longer running time.
The algorithm proposed in this paper can be used for a wide variety of problems in finance, insurance, actuarial modeling and other fields, such as the pricing problem of some special path-dependent financial derivatives, the bond pricing problem under the jump diffusion model of interest rates, the pricing of equity-linked life insurance contracts and so on.

6. Discussion and Limitations

The algorithm proposed in this article targets only a special class of nonlinear parabolic PDEs with jump terms, so its range of application is limited. The algorithm could be improved to cover other special forms of PDEs; for example, a modified deep learning-based algorithm could be extended to reflected PDEs by using the penalty function method to approximate them with PDEs of the general form, and delayed PDEs, constrained PDEs and so on could also be considered. Moreover, this paper only considers the most common Poisson process rather than general jump processes; we hope to apply the algorithm to other jump processes in the future. This paper also does not address changes in the intensity of the jump process; we hope to extend the algorithm to general integro-partial differential equations in future work.
This paper mainly focuses on numerical simulations of examples in finance. In the future, more cases can be considered to test the practical effect of the proposed algorithm in other jump-diffusion instances. In addition, this paper only introduces the specific steps of the algorithm but does not analyze the convergence, error estimation and robustness of stochastic simulation, and thus, we hope to improve it in future research.

Author Contributions

Conceptualization, X.L.; Methodology, X.L.; Software, Y.G.; Validation, Y.G.; Writing—original draft, Y.G.; Writing—review & editing, Y.G.; Supervision, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kang, W.; Wilcox, L.C. Mitigating the curse of dimensionality: Sparse grid characteristics method for optimal feedback control and HJB equations. Comput. Optim. Appl. 2017, 68, 289–315.
  2. El Karoui, N.; Peng, S.; Quenez, M.C. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71.
  3. Zhang, J. A numerical scheme for BSDEs. Ann. Appl. Probab. 2004, 14, 459–488.
  4. Barigou, K.; Delong, Ł. Pricing equity-linked life insurance contracts with multiple risk factors by neural networks. J. Comput. Appl. Math. 2022, 404, 113922.
  5. E, W.; Han, J.; Jentzen, A. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Commun. Math. Stat. 2017, 5, 349–380.
  6. Han, J.; Jentzen, A.; E, W. Solving high-dimensional partial differential equations using deep learning. Proc. Natl. Acad. Sci. USA 2018, 115, 8505–8510.
  7. Han, J.; Hu, R. Deep fictitious play for finding Markovian Nash equilibrium in multi-agent games. In Proceedings of the First Mathematical and Scientific Machine Learning Conference, Princeton, NJ, USA, 20–24 July 2020; Volume 107, pp. 221–245.
  8. Beck, C.; E, W.; Jentzen, A. Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations. J. Nonlinear Sci. 2019, 29, 1563–1619.
  9. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Machine learning of linear differential equations using Gaussian processes. J. Comput. Phys. 2017, 348, 683–693.
  10. Sirignano, J.; Spiliopoulos, K. DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. 2018, 375, 1339–1364.
  11. Rong, S. Theory of Stochastic Differential Equations with Jumps and Applications; Springer: London, UK, 2005; pp. 205–290.
  12. Delong, Ł. Backward Stochastic Differential Equations with Jumps and Their Actuarial and Financial Applications; Springer: London, UK, 2013; pp. 85–88.
  13. Tang, S.; Li, X. Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim. 1994, 32, 1447–1475.
  14. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
  15. Sutskever, I.; Martens, J.; Dahl, G.; Hinton, G. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), Atlanta, GA, USA, 16–21 June 2013; pp. 1139–1147.
  16. Barles, G.; Buckdahn, R.; Pardoux, E. Backward stochastic differential equations and integral-partial differential equations. Stoch. Stoch. Rep. 1997, 60, 57–83.
  17. Buckdahn, R.; Pardoux, E. BSDE's with jumps and associated integro-partial differential equations. Preprint 1994, 79.
  18. Gnoatto, A.; Patacca, M.; Picarelli, A. A deep solver for BSDEs with jumps. arXiv 2022, arXiv:2211.04349.
  19. Merton, R.C. Option Pricing when Underlying Stock Returns are Discontinuous. J. Financ. Econ. 1976, 3, 125–144.
  20. Wu, Y.; Liang, X. Vasicek model with mixed-exponential jumps and its applications in finance and insurance. Adv. Differ. Equ. 2018, 2018, 1–15.
  21. Lukman, P.C.; Handari, B.D.; Tasman, H. Study on European put option pricing with underlying asset zero-coupon bond and interest rate following the Vasicek model with jump. J. Phys. Conf. Ser. 2021, 1725, 012092.
  22. Jiang, Y.; Li, J. Convergence of the Deep BSDE method for FBSDEs with non-Lipschitz coefficients. Probab. Uncertain. Quant. Risk 2021, 6, 391–408.
Figure 1. Deep neural network framework.
Figure 2. Changes of the initial value estimate and the loss function against the number of iteration steps for PDE (34) under the Adam optimizer.
Figure 3. Changes of the initial value estimate and the loss function against the number of iteration steps for PDE (34) under the NAdam optimizer.
Figure 4. Changes of the initial value estimate and the loss function against the number of iteration steps for PDE (37) under the Adam optimizer.
Figure 5. Changes of the initial value estimate and the loss function against the number of iteration steps for PDE (37) under the NAdam optimizer.
Figure 6. Changes of the initial value estimate and the loss function against the number of iteration steps for PDE (38) under the Adam optimizer.
Figure 7. Changes of the initial value estimate and the loss function against the number of iteration steps for PDE (38) under the NAdam optimizer.
Table 1. Numerical results with Adam optimizer.

| Number of Iteration Steps n | Mean of u(0, X_0) | Standard Deviation of u(0, X_0) | Mean of the Loss Function | Standard Deviation of the Loss Function | Runtime (s) |
|---|---|---|---|---|---|
| 1000 | 0.5989 | 0.1719 | 1.2743 | 7.7509 | 474 |
| 2000 | 0.4664 | 0.1235 | 1.2532 | 4.3095 | 891 |
| 3000 | 0.4183 | 0.1623 | 1.0894 | 3.7375 | 1271 |
| 4000 | 0.3940 | 0.1465 | 0.6269 | 2.4771 | 1676 |

Table 2. Numerical results with NAdam optimizer.

| Number of Iteration Steps n | Mean of u(0, X_0) | Standard Deviation of u(0, X_0) | Mean of the Loss Function | Standard Deviation of the Loss Function | Runtime (s) |
|---|---|---|---|---|---|
| 1000 | 0.2821 | 0.0625 | 1.1528 | 2.7450 | 549 |
| 2000 | 0.3019 | 0.0484 | 0.6747 | 2.0031 | 828 |
| 3000 | 0.3084 | 0.0406 | 0.4911 | 1.6566 | 1206 |
| 4000 | 0.3117 | 0.0356 | 0.4134 | 1.4535 | 1651 |

Table 3. Numerical results with Adam optimizer.

| Number of Iteration Steps n | Mean of u(0, X_0) | Standard Deviation of u(0, X_0) | Mean of the Loss Function | Standard Deviation of the Loss Function | Runtime (s) |
|---|---|---|---|---|---|
| 1000 | 0.2096 | 0.0435 | 0.1405 | 0.4825 | 538 |
| 2000 | 0.2300 | 0.0370 | 0.0820 | 0.3464 | 924 |
| 3000 | 0.2396 | 0.0331 | 0.0585 | 0.2849 | 1296 |
| 4000 | 0.2450 | 0.0302 | 0.0463 | 0.2476 | 1779 |

Table 4. Numerical results with NAdam optimizer.

| Number of Iteration Steps n | Mean of u(0, X_0) | Standard Deviation of u(0, X_0) | Mean of the Loss Function | Standard Deviation of the Loss Function | Runtime (s) |
|---|---|---|---|---|---|
| 1000 | 0.2828 | 0.0293 | 0.0323 | 0.1310 | 652 |
| 2000 | 0.2772 | 0.0215 | 0.0224 | 0.1033 | 1326 |
| 3000 | 0.2757 | 0.0177 | 0.0172 | 0.0855 | 1957 |
| 4000 | 0.2750 | 0.0154 | 0.0150 | 0.0753 | 2581 |

Table 5. Numerical results with Adam optimizer.

| Number of Iteration Steps n | Mean of u(0, X_0) | Standard Deviation of u(0, X_0) | Mean of the Loss Function | Standard Deviation of the Loss Function | Runtime (s) |
|---|---|---|---|---|---|
| 2000 | 1.2504 | 0.1521 | 1.3142 | 1.4709 | 857 |
| 4000 | 1.4850 | 0.2820 | 0.9090 | 1.1166 | 1770 |
| 6000 | 1.7704 | 0.4768 | 0.7447 | 0.9412 | 2906 |
| 8000 | 2.0139 | 0.5904 | 0.6314 | 0.8386 | 3864 |

Table 6. Numerical results with NAdam optimizer.

| Number of Iteration Steps n | Mean of u(0, X_0) | Standard Deviation of u(0, X_0) | Mean of the Loss Function | Standard Deviation of the Loss Function | Runtime (s) |
|---|---|---|---|---|---|
| 2000 | 0.8762 | 0.1941 | 1.2294 | 1.5767 | 1649 |
| 4000 | 1.2456 | 0.4419 | 0.8682 | 1.1733 | 3104 |
| 6000 | 1.6858 | 0.7241 | 0.6715 | 0.9980 | 4527 |
| 8000 | 1.9266 | 0.7531 | 0.5606 | 0.8855 | 5948 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
