Article

Numerical Solution of Time Fractional Black–Scholes Model Based on Legendre Wavelet Neural Network with Extreme Learning Machine

School of Business Administration, South China University of Technology, Guangzhou 510641, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Fractal Fract. 2022, 6(7), 401; https://doi.org/10.3390/fractalfract6070401
Submission received: 7 June 2022 / Revised: 9 July 2022 / Accepted: 15 July 2022 / Published: 21 July 2022
(This article belongs to the Section Numerical and Computational Methods)

Abstract: In this paper, the Legendre wavelet neural network with extreme learning machine is proposed for the numerical solution of the time fractional Black–Scholes model. To this end, the operational matrix of the fractional derivative based on the two-dimensional Legendre wavelet is derived and employed to solve the European options pricing problem, converting the problem into the calculation of a set of algebraic equations. The Legendre wavelet neural network is constructed, and the extreme learning machine algorithm is adopted to speed up the learning rate and avoid the over-fitting problem. In order to evaluate the performance of this scheme, a comparative study with the implicit differential method is conducted to validate its feasibility and effectiveness. Experimental results illustrate that this scheme offers a satisfactory numerical solution compared to the benchmark method.

1. Introduction

The options pricing model plays an important role in modern financial theory and has been employed in cost budgeting [1,2,3], asset valuation [4], resource allocation [5,6], and so on. It is therefore of great significance to study options pricing and its modeling. The mathematical study of the financial market dates back to 1900 [7]. In 1973, the classic Black–Scholes model was proposed by Black and Scholes [8,9], which serves as a milestone of fair market options pricing. Over several decades, the B-S model has been widely accepted because it can effectively model the options price and provides a mechanism for extracting implied volatility. Although the options price obtained from the B-S model approximates the observed one well, it still has some deficiencies, such as the failure to capture jump-diffusion behavior over small time frames in the financial market [10].
Being a generalization and extension of classic calculus, fractional calculus dates back to a letter from Leibniz to l'Hôpital in 1695 [11]. However, the theory and foundation of fractional calculus were developed by Liouville in 1832. Several decades later, Grünwald and Letnikov defined the fractional derivative with a difference quotient approach based on the fractional binomial expansion proposed by Newton. In 1967, Caputo defined Caputo's fractional derivative, which requires the function to be differentiable [12]. In 2006, Jumarie made some improvements to the Riemann–Liouville derivative by subtracting a constant from the integrand so that the derivative of a constant is zero [13]. Guariglia and Silvestrov introduced the Ortigueira–Caputo fractional derivative operator to describe a complex function in the distribution sense [14]. Indeed, since the remarkable work appearing in [15], fractional calculus has become a key tool, providing rich information about the level of complexity that a certain system presents. With plenty of research focusing on the field of fractional calculus, it has become imperative to solve fractional differential equations. An implicit method was proposed by Murio to solve the linear time fractional diffusion equation, constructed with Caputo's fractional derivative, on a finite slab [16]. Langlands analyzed the stability and accuracy of the implicit numerical method for solving the fractional diffusion equation [17]. Chen proposed an explicit closed-form solution of the double-barrier option based on the time fractional B-S model [18], where the solution is obtained by the convolution of a specific function (the Mittag–Leffler function) and a finite series expansion with integration. As a consequence, some numerical methods were introduced to simplify the calculation procedure. Song adopted the implicit finite difference method to solve the options pricing problem based on Jumarie's model [19]. Cen [20] transformed the differential equation into an integral form with a weakly singular kernel and discretized it on an adaptive mesh grid, rigorously analyzed the convergence of the proposed scheme, and took account of possible singular behavior. Rezaei [21] presented the time fractional B-S model under the European double barrier option with the constant elasticity of variance and established the solvability, stability, and convergence of the implicit difference method. Zhang deduced an implicit discrete solution to the time fractional B-S model for the European option with double barriers, which has spatially second-order accuracy and temporally $(2-\alpha)$-order accuracy [22]. De Staelen improved the spatial accuracy of the implicit differential method to fourth order and presented the solvability, stability, and convergence of this method based on the Fourier method [23]. Moreover, some analytical methods have been derived, such as homotopy perturbation and the Laplace transform [24], the homotopy analysis method [25,26,27,28,29,30], the fractional variational iteration method [31,32], the generalized differential transform method [33], and the Adomian decomposition method [34,35,36,37]. The essential aspects of these methods are the convolution of some special function or the integration of an infinite series expansion, which are complicated to calculate; what is more, the solvability, stability, and convergence need to be proven. As a consequence, making some improvements to the numerical method becomes practical and necessary.
Compared with standard Brownian motion, fractional Brownian motion can describe the dynamic characteristics of financial markets more accurately; meanwhile, some new fractional B-S formulae have been derived. The fractional B-S model for the European option was initially proposed by Wyss [38]; then, Hu and Øksendal derived a fractional B-S model based on fractional Brownian motion [39]. Li developed a fractional stochastic differential equation to demonstrate the existence of trend memory in options pricing when the Hurst index is between 0.5 and 1 [40]. Jumarie proposed two new families of fractional B-S models based on the fractional Taylor expansion series and the modified Riemann–Liouville derivative [41]. Liu and Chang also proposed a time fractional B-S model with transaction costs based on fractional Brownian motion and provided an approximate solution of the nonlinear Hoggard–Whalley–Wilmott equation [42].
Accompanying the evolution of computer technology, machine learning has made enormous advances. As a branch of artificial intelligence, it can produce reliable decisions and results through iteration and optimization. In the last decade, a single-hidden-layer feed-forward neural network named the extreme learning machine [43,44] was proposed by Huang, which differs from traditional learning methods based on the gradient descent principle; meanwhile, the single hidden layer structure makes its learning procedure exceptionally fast compared with that of a deep neural network [45,46]; however, the ELM algorithm has been demonstrated to be sensitive to the activation function [47,48,49]. Furthermore, the wavelet can be regarded as a mathematical microscope for signal analysis (constant ratio of bandwidth to central frequency); its multi-resolution analysis and time–frequency localization provide a powerful tool for signal analysis and approximation, and different wavelets can be constructed by dilation and translation, such as the Legendre wavelet [50], Chebyshev wavelet [51], Taylor wavelet [52], etc. Within the framework of adaptive multi-scale wavelets, Zheng pointed out that a sampling rate far from 1/2 can lead to low representation efficiency and proposed a signal-Shannon-entropy-based adaptive multi-scale wavelet decomposition for signal analysis [53]. The wavelet neural network, suggested by Zhang in 1992 [54], was initially used for signal approximation [55] and can be applied in image analysis and computer vision, such as video segmentation [56], speech recognition, and data compression [57]. Besides, some improved neural-network-based methods have recently emerged; however, these methods lack a strong theoretical foundation and require massive amounts of training data. Shi et al. proposed the deep scattering network (DSN), a variant of the deep convolutional neural network (DCNN), where the data-driven linear filters are replaced by predefined fixed multi-scale wavelet filters; for non-stationary image textures, the fractional wavelet transform (FRWT) is employed because it can be regarded as a bank of linear translation-variant multi-scale filters [58]. Aiming at the poor resolution in the high-fractional-frequency domain, Shi and colleagues proposed a novel fractional wavelet packet transform (FRWPT) and its recursive algorithm [59].
Moreover, previous literature demonstrated the reliability and stability of the WNN in the learning procedure [60,61]. Due to the decomposition capacity of the WNN in the time and frequency domain, this paper presents a novel Legendre wavelet neural network (LWNN) with extreme learning machine (ELM) to solve the time fractional B-S model governed by the European option. Taking the final and boundary conditions into consideration, this time fractional B-S model can be converted into a group of algebraic equations. In the end, numerical experiments illustrate the superiority of the proposed method compared to the implicit differential method.
The objective of this paper is to solve a time fractional B-S model of the European option, where ELM is used for the output layer weight learning of the LWNN. The advantages of the proposed scheme are given as follows:
  • It is a single-hidden-layer feed-forward network; only the output layer weights need to be trained, while the input layer weights can be randomly selected.
  • ELM requires no iterative optimization scheme, since the output layer weights are obtained analytically; besides, it has a fast learning rate and can avoid over-fitting.
  • It converts the European options pricing problem to a group of algebraic equations, which can substantially facilitate analysis.
The structure of this paper is as follows. Section 2 gives a description of the time fractional B-S model governed by the European option with double barriers. Section 3 introduces the properties of the Legendre wavelet. Section 4 deduces the operational matrix of the fractional derivative for the two-dimensional Legendre wavelet. Section 5 presents the topology of the LWNN and the algorithm of ELM. Section 6 validates the feasibility and effectiveness of the proposed method by numerical experiments. In the end, some conclusions and directions for improvement are presented.

2. Double Barrier Option under Time Fractional B-S Model

2.1. Time Fractional B-S Model

Liang [62] proposed a bi-fractional B-S model in the time and spatial domains, where the underlying asset price follows a fractional Itô process and the options price variation is treated as a fractal transmission system. As a result, the time fractional B-S model can be regarded as a special case of the bi-fractional B-S model. Let $S$, $t$, $r$, $\sigma$, and $q$ be the underlying asset price, time, risk-free interest rate, volatility, and dividend payment, respectively; then, the options price $V(S,t)$ should satisfy
$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}V}{\partial S^{2}} + (r-q)S\frac{\partial V}{\partial S} - rV = 0$$
According to the arguments in [62,63,64], the variation of the options price regarding time can be assumed as a fractal transmission system, which implies that the average options price C ( S , t ) from time t to expiration time T should satisfy
$$\int_{t}^{T} C(S,\tau)\,d\tau = S^{\,d_f-1}\int_{t}^{T} F(\tau-t)\,\bigl[V(S,\tau)-V(S,T)\bigr]\,d\tau$$
where $F(t)$ and $d_f$ represent the transmission function and the Hausdorff dimension of the fractal transmission system, respectively. As pointed out in [62], Equation (2) is a conservation equation containing an explicit reference to the diffusion process of the options price based on a fractal structure. It can be assumed that the diffusion sets are the underlying fractal, where $F(t) = A_{\alpha}\,t^{-\alpha}/\Gamma(1-\alpha)$ is selected as the transmission function, with $A_{\alpha}$ a constant and $\alpha$ the transmission exponent. Taking the partial derivative with respect to $t$ on both sides of Equation (2) yields
$$-C(S,t) = S^{\,d_f-1}\,\frac{\partial}{\partial t}\int_{t}^{T} F(\tau-t)\,\bigl[V(S,\tau)-V(S,T)\bigr]\,d\tau$$
On the other hand, because of the B-S model, we have
$$C(S,t) = \frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}V}{\partial S^{2}} + (r-q)S\frac{\partial V}{\partial S} - rV$$
Substituting Equation (3) into Equation (4) yields
$$A_{\alpha}\,S^{\,d_f-1}\,\frac{\partial^{\alpha}V}{\partial t^{\alpha}} + \frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}V}{\partial S^{2}} + (r-q)S\frac{\partial V}{\partial S} - rV = 0$$
where
$$\frac{\partial^{\alpha}V}{\partial t^{\alpha}} = \frac{1}{\Gamma(n-\alpha)}\,\frac{\partial^{n}}{\partial t^{n}}\int_{t}^{T}\frac{V(S,\tau)-V(S,T)}{(\tau-t)^{\alpha+1-n}}\,d\tau,\qquad n-1\le\alpha<n$$
When $\alpha$ approaches 1, this yields
$$\lim_{\alpha\to1}\frac{\partial^{\alpha}V}{\partial t^{\alpha}} = \frac{1}{\Gamma(2-1)}\,\frac{\partial^{2}}{\partial t^{2}}\int_{t}^{T}\frac{V(S,\tau)-V(S,T)}{(\tau-t)^{1+1-2}}\,d\tau = \frac{\partial V}{\partial t}$$
It is clear that the time fractional B-S model is consistent with the classic B-S model when $A_{\alpha} = d_f = 1$. In this paper, $A_{\alpha} = d_f = 1$ is selected. As a matter of fact, the solution procedure is applicable to other values of $A_{\alpha}$ and $d_f$.
It should be mentioned that the time fractional order model is a simplification compared with the one proposed by Liang [62]. In this paper, we assume that the underlying asset follows standard Brownian motion and only consider the options price variation with time as a fractal transmission system; this is why the fractional derivative only appears in the time domain. Actually, Jumarie [65,66] defined the stock exchange fractional dynamics as fractional exponential growth driven by Gaussian noise; it can be found that the derived equation is similar to Equation (5). However, the derived equation has a time-variable coefficient in front of the second derivative, which is different from the time fractional B-S model mentioned above.

2.2. Double Barrier Option

The barrier option is probably the oldest exotic option, which was sporadically traded on the U.S. market before the establishment of the Chicago Board of Options Exchange. The barrier option is essentially a conditional option, depending on whether the barrier can be triggered or breached within its expiration; meanwhile, it is considerably less expensive than the corresponding vanilla option. Obviously, a double barrier option features two barriers, and the payoff to the holder depends on the breaching behaviors of the underlying asset process with respect to these two barriers. Let V ( S , t ) be the price of a European option with double barriers; obviously, if the price variation regarding time is considered to be a fractal transmission system, then V ( S , t ) should satisfy
$$\begin{cases}
\dfrac{\partial^{\alpha}V}{\partial t^{\alpha}} + \dfrac{1}{2}\sigma^{2}S^{2}\dfrac{\partial^{2}V}{\partial S^{2}} + (r-q)S\dfrac{\partial V}{\partial S} - rV = 0\\[4pt]
V(S,T) = \bar{\Pi}(S) = \max(S-K,\,0)\\
V(A,t) = \bar{P}(t)\\
V(B,t) = \bar{Q}(t)
\end{cases}$$
where $\bar{P}(t)$ and $\bar{Q}(t)$ denote the discount paid when the corresponding barrier is breached, and $K$ and $T$ are the strike price and expiration time, respectively. In general, it is complicated to obtain the solution of this partial differential equation; the following variables and functions are introduced to non-dimensionalize the model: $t = T - 2\tau/\sigma^{2}$, $S = Ke^{x}$, $V(S,t) = KU(x,\tau)$, $\bar{P}(t) = KP(\tau)$, and $\bar{Q}(t) = KQ(\tau)$. As a consequence, the time fractional derivative in Equation (7) can be rewritten as
$$\begin{aligned}
\frac{\partial^{\alpha}V}{\partial t^{\alpha}} &= \frac{1}{\Gamma(1-\alpha)}\,\frac{\partial}{\partial t}\int_{t}^{T}\frac{V(S,\bar t)-V(S,T)}{(\bar t-t)^{\alpha}}\,d\bar t\\
&= -\frac{1}{\Gamma(1-\alpha)}\,\frac{\sigma^{2}}{2}\,\frac{\partial}{\partial\tau}\int_{T-\frac{2\tau}{\sigma^{2}}}^{T}\frac{V(S,\bar t)-V(S,T)}{\bigl(\bar t-T+\frac{2\tau}{\sigma^{2}}\bigr)^{\alpha}}\,d\bar t\\
&= -\frac{1}{\Gamma(1-\alpha)}\,\frac{\sigma^{2}}{2}\,\frac{\partial}{\partial\tau}\int_{\tau}^{0}\frac{V\bigl(S,\,T-\frac{2\bar\tau}{\sigma^{2}}\bigr)-V(S,T)}{\bigl(\frac{2}{\sigma^{2}}(\tau-\bar\tau)\bigr)^{\alpha}}\Bigl(-\frac{2}{\sigma^{2}}\Bigr)d\bar\tau\\
&= -\frac{K}{\Gamma(1-\alpha)}\Bigl(\frac{\sigma^{2}}{2}\Bigr)^{\alpha}\frac{\partial}{\partial\tau}\int_{0}^{\tau}\frac{U(x,\bar\tau)-U(x,0)}{(\tau-\bar\tau)^{\alpha}}\,d\bar\tau
= -K\Bigl(\frac{\sigma^{2}}{2}\Bigr)^{\alpha}\frac{\partial^{\alpha}U}{\partial\tau^{\alpha}}
\end{aligned}$$
where
$$\frac{\partial^{\alpha}U}{\partial t^{\alpha}} = \frac{1}{\Gamma(1-\alpha)}\,\frac{\partial}{\partial t}\int_{0}^{t}\frac{U(x,\tau)-U(x,0)}{(t-\tau)^{\alpha}}\,d\tau,\qquad 0<\alpha<1$$
is consistent with the modified Riemann–Liouville derivative in [13]. Substituting Equation (8) into Equation (7) yields
$$\begin{cases}
\dfrac{\partial^{\alpha}U}{\partial\tau^{\alpha}} = c_{1}\dfrac{\partial^{2}U}{\partial x^{2}} + c_{2}\dfrac{\partial U}{\partial x} + c_{3}U\\[4pt]
U(x,0) = \Pi(x)\\
U(B_d,\tau) = P(\tau)\\
U(B_u,\tau) = Q(\tau)
\end{cases}$$
where $\gamma = r(\sigma^{2}/2)^{-\alpha}$, $d = q(\sigma^{2}/2)^{-\alpha}$, $c_{1} = (\sigma^{2}/2)^{1-\alpha}$, $c_{2} = \gamma - d - c_{1}$, $c_{3} = -\gamma$, $B_d = \ln(A/K)$, and $B_u = \ln(B/K)$. It can be observed that $\partial^{\alpha}U/\partial\tau^{\alpha}$ is consistent with Caputo's derivative in Definition 2.
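As a small illustration of the non-dimensionalization above, the sketch below evaluates the constants $c_1$, $c_2$, $c_3$, $B_d$, and $B_u$ for a representative parameter set; the specific numbers (taken from the experiments in Section 6) and the helper name are our own choices, and the expressions follow the definitions reconstructed above.

```python
import numpy as np

def tfbs_constants(sigma, r, q, K, A, B, alpha):
    """Coefficients of the non-dimensionalized time fractional B-S model:
    gamma = r (sigma^2/2)^(-alpha), d = q (sigma^2/2)^(-alpha),
    c1 = (sigma^2/2)^(1-alpha), c2 = gamma - d - c1, c3 = -gamma."""
    half_var = 0.5 * sigma ** 2
    gamma = r * half_var ** (-alpha)
    d = q * half_var ** (-alpha)
    c1 = half_var ** (1.0 - alpha)
    c2 = gamma - d - c1
    c3 = -gamma
    B_d = np.log(A / K)   # lower barrier in log-moneyness
    B_u = np.log(B / K)   # upper barrier in log-moneyness
    return c1, c2, c3, B_d, B_u

print(tfbs_constants(sigma=0.45, r=0.03, q=0.01, K=10, A=3, B=15, alpha=0.7))
```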

3. Legendre Wavelet and Its Properties

Legendre polynomials are constructed by orthogonalizing the monomials $\{1, x, \ldots, x^{n}, \ldots\}$ with respect to the weighting function $w(x) = 1$. In 1814, Rodrigues presented the simple expression
$$L_{0}(x) = 1,\qquad L_{n}(x) = \frac{1}{2^{n}n!}\,\frac{d^{n}}{dx^{n}}\bigl(x^{2}-1\bigr)^{n},\qquad x\in[-1,1],\ n\in\mathbb{Z}^{+}$$
In [67], the recurrence formula of Legendre polynomials was given as
$$L_{0}(x) = 1,\quad L_{1}(x) = x,\quad L_{n+1}(x) = \frac{2n+1}{n+1}\,x\,L_{n}(x) - \frac{n}{n+1}\,L_{n-1}(x)$$
For the practical implementation of Legendre polynomials on a domain $[l_d, l_u]$, it is feasible to shift the domain by means of the linear transformation $\tilde{x} = [2x - (l_u + l_d)]/(l_u - l_d)$. Hence, the shifted Legendre polynomials $L_{n}^{*}(x)$ can be obtained as $L_{n}^{*}(x) = L_{n}(\tilde{x})$. Specifically, the orthogonality of the shifted Legendre polynomials on the domain $[0,1]$ is
$$\int_{0}^{1} L_{m}^{*}(x)\,L_{n}^{*}(x)\,dx = \begin{cases}\dfrac{1}{2m+1}, & m=n\\[4pt] 0, & \text{otherwise}\end{cases}$$
Moreover, the analytical form of shifted Legendre polynomials L n * ( x ) can be expressed as [67]
$$L_{n}^{*}(x) = \sum_{i=0}^{n} b_{n,i}\,x^{i}$$
where
$$b_{n,i} = (-1)^{n+i}\,\frac{(n+i)!}{(n-i)!\,(i!)^{2}}$$
Furthermore, the wavelet function is generally constructed by the dilation and translation of the mother wavelet. When the dilation and translation parameters vary continuously, we can obtain the continuous wavelet:
$$\psi_{a,b}(x) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{x-b}{a}\right),\qquad a,b\in\mathbb{R},\ a>0$$
where $a$ and $b$ are the dilation and translation parameters, respectively. When they are restricted to the discrete values $a = a_{0}^{-k}$, $b = n a_{0}^{-k} b_{0}$, $a_{0} > 1$, $b_{0} > 1$, this yields
$$\psi_{k,n}(x) = \frac{1}{\sqrt{a_{0}^{-k}}}\,\psi\!\left(\frac{x-n a_{0}^{-k} b_{0}}{a_{0}^{-k}}\right) = a_{0}^{k/2}\,\psi\bigl(a_{0}^{k}x - n b_{0}\bigr),\qquad n,k\in\mathbb{Z}^{+}$$
where $\psi_{k,n}(x)$ forms a basis of $L^{2}(\mathbb{R})$. In particular, $\psi_{k,n}(x)$ constitutes an orthogonal basis when $a_{0} = 2$ and $b_{0} = 1$. The Legendre wavelet $\psi_{n,m}(t) = \psi(k, \hat{n}, m, t)$ has four arguments: $\hat{n} = 2n+1$, $n = 0, 1, 2, \ldots, 2^{k}-1$; $m = 0, 1, \ldots, m_{0}-1$ is the degree of the Legendre polynomial; and $t$ is the normalized time [68]. The Legendre wavelet on the domain $[0,1]$ is defined as
$$\psi_{n,m}(x) = \begin{cases}\sqrt{2m+1}\;2^{k/2}\,L_{m}\!\bigl(2^{k+1}x-\hat{n}\bigr), & \dfrac{n}{2^{k}}\le x\le\dfrac{n+1}{2^{k}}\\[6pt] 0, & \text{otherwise}\end{cases}$$
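To make the definition concrete, the following sketch (our own illustration, not the authors' code) builds $\psi_{n,m}(x)$ on $[0,1]$ exactly as written above, using SciPy's Legendre polynomials, and numerically checks that the resulting family is orthonormal for $k = 2$, $m_0 = 4$.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

def legendre_wavelet(n, m, k):
    """psi_{n,m}(x) = sqrt(2m+1) 2^{k/2} L_m(2^{k+1} x - (2n+1)) on [n/2^k, (n+1)/2^k], else 0."""
    n_hat = 2 * n + 1
    lo, hi = n / 2 ** k, (n + 1) / 2 ** k
    def psi(x):
        if lo <= x <= hi:
            return np.sqrt(2 * m + 1) * 2 ** (k / 2) * eval_legendre(m, 2 ** (k + 1) * x - n_hat)
        return 0.0
    return psi

k, m0 = 2, 4
basis = [legendre_wavelet(n, m, k) for n in range(2 ** k) for m in range(m0)]
breaks = [j / 2 ** k for j in range(1, 2 ** k)]        # subinterval boundaries

# Gram matrix of inner products on [0,1]; orthonormality means G is close to the identity.
G = np.array([[quad(lambda x: pi(x) * pj(x), 0, 1, points=breaks, limit=200)[0]
               for pj in basis] for pi in basis])
print("orthonormal:", np.allclose(G, np.eye(len(basis)), atol=1e-8))
```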
Theorem 1.
([69]) Suppose that $H$ is an inner product space and $Y\subseteq H$ is a complete subspace, and let $\{e_{0}, e_{1}, \ldots, e_{n}\}$ be an orthogonal basis for $Y$. Then, for $f\in H$, the best approximation of $f$ in $Y$ is
$$f_{0} = \sum_{i=0}^{n}\langle f, e_{i}\rangle\, e_{i}$$
such that
$$\forall\, y\in Y,\qquad \|f-f_{0}\|_{2} \le \|f-y\|_{2}$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product, that is, $\|f\|^{2} = \langle f, f\rangle$.
From Theorem 1, a function f ( x ) on domain [ 0 , 1 ] can be expanded by the Legendre wavelet as
$$f(x) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} c_{n,m}\,\psi_{n,m}(x)$$
where $c_{n,m} = \langle f(x), \psi_{n,m}(x)\rangle$. If the expansion in Equation (17) is truncated to a finite series, then $f(x)$ can be written as
$$f(x) \approx \sum_{n=0}^{2^{k}-1}\sum_{m=0}^{m_{0}-1} c_{n,m}\,\psi_{n,m}(x) = C^{T}\Psi(x)$$
where $T$ denotes matrix transposition and $C$ and $\Psi(x)$ are the $2^{k}m_{0}$-dimensional column vectors shown below.
$$\begin{aligned}
C &= \bigl[c_{0,0}, c_{0,1},\ldots,c_{0,m_{0}-1},\ c_{1,0}, c_{1,1},\ldots,c_{1,m_{0}-1},\ \ldots,\ c_{2^{k}-1,0}, c_{2^{k}-1,1},\ldots,c_{2^{k}-1,m_{0}-1}\bigr]^{T}\\
\Psi(x) &= \bigl[\psi_{0,0}(x), \psi_{0,1}(x),\ldots,\psi_{0,m_{0}-1}(x),\ \psi_{1,0}(x), \psi_{1,1}(x),\ldots,\psi_{1,m_{0}-1}(x),\ \ldots,\ \psi_{2^{k}-1,0}(x),\psi_{2^{k}-1,1}(x),\ldots,\psi_{2^{k}-1,m_{0}-1}(x)\bigr]^{T}
\end{aligned}$$
For simplicity, let $c_{i} = c_{n,m}$, $\psi_{i} = \psi_{n,m}$, and $M = 2^{k}m_{0}$, where the index $i$ is determined by $i = m_{0}n + m + 1$; then, Equation (18) can be rewritten as
$$f(x) \approx \sum_{i=1}^{M} c_{i}\,\psi_{i}(x) = C^{T}\Psi(x)$$
where
$$\begin{aligned}
C &= \bigl[c_{1},\ldots,c_{m_{0}}\ \big|\ c_{m_{0}+1},\ldots,c_{2m_{0}}\ \big|\ \cdots\ \big|\ c_{m_{0}(2^{k}-1)+1},\ldots,c_{M}\bigr]^{T}\\
\Psi(x) &= \bigl[\psi_{1}(x),\ldots,\psi_{m_{0}}(x)\ \big|\ \psi_{m_{0}+1}(x),\ldots,\psi_{2m_{0}}(x)\ \big|\ \cdots\ \big|\ \psi_{m_{0}(2^{k}-1)+1}(x),\ldots,\psi_{M}(x)\bigr]^{T}
\end{aligned}$$
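The truncated expansion $f(x)\approx C^{T}\Psi(x)$ can be checked numerically. The sketch below (our own illustration with an arbitrary smooth test function) computes the coefficients $c_i = \langle f, \psi_i\rangle$ by quadrature for $k = 2$, $m_0 = 4$ and reports the maximum pointwise error of the reconstruction.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

k, m0 = 2, 4
M = 2 ** k * m0

def psi(i, x):
    """Single-index Legendre wavelet psi_i, i = m0*n + m + 1, evaluated at a scalar x."""
    n, m = divmod(i - 1, m0)
    lo, hi = n / 2 ** k, (n + 1) / 2 ** k
    if lo <= x <= hi:
        return np.sqrt(2 * m + 1) * 2 ** (k / 2) * eval_legendre(m, 2 ** (k + 1) * x - (2 * n + 1))
    return 0.0

f = lambda x: np.exp(-x) * np.sin(3 * x)      # a smooth test function on [0, 1]

# Coefficients c_i = <f, psi_i>, integrating only over the support of psi_i.
c = []
for i in range(1, M + 1):
    n, _ = divmod(i - 1, m0)
    lo, hi = n / 2 ** k, (n + 1) / 2 ** k
    c.append(quad(lambda x: f(x) * psi(i, x), lo, hi, limit=200)[0])
c = np.array(c)

xs = np.linspace(0.01, 0.99, 200)
approx = np.array([sum(c[i - 1] * psi(i, x) for i in range(1, M + 1)) for x in xs])
print("max abs error:", np.max(np.abs(approx - f(xs))))
```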
Similarly, an arbitrary function with two variables f ( x , y ) L 2 ( R × R ) defined on domain [ 0 , 1 ] × [ 0 , 1 ] can be approximated with the Legendre wavelet:
$$f(x,y) \approx \sum_{i=1}^{M}\sum_{j=1}^{M}\psi_{i}(x)\,f_{ij}\,\psi_{j}(y) = \Psi^{T}(x)\,F\,\Psi(y)$$
where $F = [f_{ij}]_{M\times M}$ and $f_{ij} = \bigl\langle \psi_{i}(x), \langle f(x,y), \psi_{j}(y)\rangle \bigr\rangle$.
Theorem 2.
(Convergence analysis [70]) Suppose that a continuous function $f(x)$ on the domain $[0,1]$ has a bounded second derivative, $|f''(x)|\le K$. Then $f(x)$ can be uniformly approximated by the infinite Legendre wavelet expansion, that is, $f(x) = \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{n,m}\psi_{n,m}(x)$; moreover, the coefficients are bounded by $|c_{n,m}| \le K\sqrt{12}\,(2n)^{-5/2}(2m-3)^{-2}$.
Theorem 3.
(Error analysis [71]) Suppose that a continuous function $f(x,y)$ on the domain $[0,1]\times[0,1]$ has a bounded mixed fourth partial derivative, $\bigl|\partial^{4}f(x,y)/\partial x^{2}\partial y^{2}\bigr|\le D$. Then the Legendre wavelet expansion of $f(x,y)$ converges uniformly and
$$|f_{ij}| < \frac{12D}{(2n)^{5}\,(2m-3)^{4}}$$
Definition 1.
Supposing that $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{p\times q}$ are two arbitrary matrices, the Kronecker product of $A$ and $B$ is defined as follows [72].
$$A\otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B\\ a_{21}B & a_{22}B & \cdots & a_{2n}B\\ \vdots & \vdots & & \vdots\\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix}_{mp\times nq}$$
For a matrix $A$, the operator $\operatorname{Vec}(A)$ produces a column vector made of the columns of $A$ stacked on top of each other from left to right [73].
$$\operatorname{Vec}(A) = \bigl[a_{11},\ldots,a_{m1},\ a_{12},\ldots,a_{m2},\ \ldots,\ a_{1n},\ldots,a_{mn}\bigr]^{T}$$
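The Vec operator and the Kronecker product are combined later (Section 4) through the identity $\operatorname{Vec}(AXB) = (B^{T}\otimes A)\operatorname{Vec}(X)$, which is what turns the matrix equation for the coefficient matrix $U$ into a linear system in $\operatorname{Vec}(U)$. A quick numerical check of this identity (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    """Stack the columns of M on top of each other (column-major vectorization)."""
    return M.reshape(-1, order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))   # True: Vec(AXB) = (B^T kron A) Vec(X)
```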
Definition 2.
The Caputo’s fractional derivative of order α is defined as [74]
$${}_{0}^{C}D_{x}^{\alpha}f(x,y) = \begin{cases}\dfrac{1}{\Gamma(n-\alpha)}\displaystyle\int_{0}^{x}\dfrac{f^{(n)}(\chi,y)}{(x-\chi)^{1+\alpha-n}}\,d\chi, & n-1<\alpha<n\\[10pt] \dfrac{\partial^{\alpha}f(x,y)}{\partial x^{\alpha}}, & \alpha=n\end{cases}$$
where the operator $D_{x}^{\alpha}$ represents Caputo's derivative. It can be observed that $D_{x}^{\alpha}C = 0$ ($C$ a constant); additionally,
$$D_{x}^{\alpha}x^{n} = \begin{cases}0, & n<\lceil\alpha\rceil,\ n\in\mathbb{Z}^{+}\\[4pt] \dfrac{\Gamma(n+1)}{\Gamma(n+1-\alpha)}\,x^{\,n-\alpha}, & n\ge\lceil\alpha\rceil,\ n\in\mathbb{Z}^{+}\end{cases}$$
where $\Gamma(\cdot)$ denotes the Gamma function, and the ceiling function $\lceil\alpha\rceil$ denotes the smallest integer greater than or equal to $\alpha$.
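A small sanity check of the power rule above (our own illustration, not part of the paper): the Caputo derivative of $x^{n}$ is evaluated with the Gamma-function formula and compared against a direct numerical evaluation of the defining integral for a non-integer $\alpha$.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def caputo_power_formula(n, alpha, x):
    """D^alpha x^n via the closed form: 0 if n < ceil(alpha), else Gamma(n+1)/Gamma(n+1-alpha) x^(n-alpha)."""
    if n < np.ceil(alpha):
        return 0.0
    return gamma(n + 1) / gamma(n + 1 - alpha) * x ** (n - alpha)

def caputo_power_integral(n, alpha, x):
    """Direct evaluation of 1/Gamma(m-alpha) * int_0^x f^(m)(chi) (x-chi)^(m-alpha-1) dchi for f = chi^n."""
    m = int(np.ceil(alpha))
    dmf = lambda chi: gamma(n + 1) / gamma(n + 1 - m) * chi ** (n - m)   # m-th derivative of chi^n (n >= m)
    # The algebraic weight (x-chi)^(m-alpha-1) is handled by quad's built-in 'alg' weight.
    val, _ = quad(dmf, 0, x, weight="alg", wvar=(0, m - alpha - 1))
    return val / gamma(m - alpha)

n, alpha, x = 3, 0.7, 0.8
print(caputo_power_formula(n, alpha, x), caputo_power_integral(n, alpha, x))   # the two values agree
```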

4. Operational Matrix of Fractional Derivative

From Equations (13) and (16), for $m \ge \lceil\alpha\rceil$, the analytical form of the shifted Legendre wavelet can be written as
$$\psi_{n,m}(x) = 2^{k/2}\sqrt{2m+1}\,\sum_{i=0}^{m} b_{m,i}\,\bigl(2^{k}x-n\bigr)^{i}\,I_{n,k}(x)$$
where $I_{n,k} = \bigl[n/2^{k},\,(n+1)/2^{k}\bigr]$ ($n = 0, 1, \ldots, 2^{k}-1$; $m = 0, 1, \ldots, m_{0}-1$); in addition, the indicator function is defined as
$$I_{n,k}(x) = \begin{cases}1, & x\in\left[\dfrac{n}{2^{k}},\,\dfrac{n+1}{2^{k}}\right]\\[6pt] 0, & \text{otherwise}\end{cases}$$
It is apparent that $I_{n,k}(x)$ is zero outside the interval $\bigl[n/2^{k},\,(n+1)/2^{k}\bigr]$. Applying $D_{x}^{\alpha}$ to both sides of Equation (21) yields
$$D_{x}^{\alpha}\psi_{n,m}(x) = \sum_{i=0}^{m} a_{m,i}\,D^{\alpha}\!\left[\Bigl(x-\frac{n}{2^{k}}\Bigr)^{i}I_{n,k}(x)\right] = \sum_{i=\lceil\alpha\rceil}^{m}\frac{a_{m,i}\,i!}{\Gamma(i-\alpha+1)}\Bigl(x-\frac{n}{2^{k}}\Bigr)^{i-\alpha}I_{n,k}(x) + \sum_{i=\lceil\alpha\rceil}^{m}\frac{a_{m,i}\,i!}{\Gamma(\lceil\alpha\rceil-\alpha)\,\Gamma(i-\alpha+1)}\,f_{i}(x)$$
where
$$a_{m,i} = 2^{k(i+1/2)}\sqrt{2m+1}\;b_{m,i},\qquad f_{i}(x) = \int_{n/2^{k}}^{(n+1)/2^{k}}\frac{\bigl(\chi-\tfrac{n}{2^{k}}\bigr)^{i-\alpha}}{(x-\chi)^{\alpha-\lceil\alpha\rceil+1}}\,d\chi$$
Here, Legendre wavelet expansions are employed to approximate $(x-n/2^{k})^{i-\alpha}$, that is to say
$$\Bigl(x-\frac{n}{2^{k}}\Bigr)^{i-\alpha} = \sum_{j=0}^{m_{0}-1} e_{i,j}\,\psi_{n,j}(x),\qquad i = \lceil\alpha\rceil,\ \lceil\alpha\rceil+1,\ \ldots,\ m$$
where
$$e_{i,j} = \int_{n/2^{k}}^{(n+1)/2^{k}}\Bigl(x-\frac{n}{2^{k}}\Bigr)^{i-\alpha}\psi_{n,j}(x)\,dx = \sum_{r=0}^{j} c_{j,r}\int_{n/2^{k}}^{(n+1)/2^{k}}\bigl(2^{k}x-n\bigr)^{r+i-\alpha}\,dx = \sum_{r=0}^{j}\frac{c_{j,r}}{2^{k}\,(r+i-\alpha+1)}$$
and
$$c_{j,r} = 2^{k/2}\sqrt{2m+1}\;2^{-k(i-\alpha)}\,b_{j,r}$$
Furthermore, $f_{i}(x)$ in Equation (22) can be approximated by the Legendre wavelet on every interval $I_{l,k}(x)$, $l = n+1, \ldots, 2^{k}-1$, yielding
$$f_{i}(x) = \sum_{j=0}^{m_{0}-1} d_{i,j}\,\psi_{l,j}(x)$$
where
$$d_{i,j} = \int_{l/2^{k}}^{(l+1)/2^{k}} f_{i}(x)\,\psi_{l,j}(x)\,dx = \sum_{r=0}^{j} 2^{k/2}\sqrt{2m+1}\;b_{j,r}\int_{l/2^{k}}^{(l+1)/2^{k}} f_{i}(x)\,\bigl(2^{k}x-l\bigr)^{r}\,dx$$
Substituting Equations (23) and (24) into Equation (22), we can obtain
$$D_{x}^{\alpha}\psi_{n,m}(x) = \sum_{j=0}^{m_{0}-1}\Omega_{\alpha}^{(n)}(m,j)\,\psi_{n,j}(x) + \sum_{l=n+1}^{2^{k}-1}\sum_{j=0}^{m_{0}-1}\hat{\Omega}_{\alpha}^{(n)}(m,j)\,\psi_{l,j}(x)$$
where
$$\Omega_{\alpha}^{(n)}(m,j) = \sum_{i=\lceil\alpha\rceil}^{m}\Theta_{m,j,i},\qquad \Theta_{m,j,i} = \frac{a_{m,i}\;i!}{\Gamma(i-\alpha+1)}\times\sum_{r=0}^{j}\frac{c_{j,r}}{2^{k}\,(r+i-\alpha+1)}$$
and
$$\hat{\Omega}_{\alpha}^{(n)}(m,j) = \sum_{i=\lceil\alpha\rceil}^{m}\hat{\Theta}_{m,j,i},\qquad \hat{\Theta}_{m,j,i} = \frac{a_{m,i}\;i!}{\Gamma(\lceil\alpha\rceil-\alpha)\,\Gamma(i-\alpha+1)}\times\sum_{r=0}^{j} 2^{k/2}\sqrt{2m+1}\;b_{j,r}\int_{l/2^{k}}^{(l+1)/2^{k}} f_{i}(x)\,\bigl(2^{k}x-l\bigr)^{r}\,dx$$
After simplification, $\Theta_{m,j,i}$ and $\hat{\Theta}_{m,j,i}$ can be written as
$$\Theta_{m,j,i} = \frac{(-1)^{m+i+j}\;2^{k\alpha}\,(2m+1)\,(m+i)!\;\Gamma(i-\alpha+2)\;\Gamma(j-i+\alpha)}{(i-\alpha+1)\,(m-i)!\;i!\;\Gamma(\alpha-i)\;\Gamma(i-\alpha+1)\;\Gamma(j+i-\alpha+2)}$$
and
$$\hat{\Theta}_{m,j,i} = \frac{2^{k(i+1)}\,(2m+1)\;b_{m,i}\;i!}{\Gamma(\lceil\alpha\rceil-\alpha)\;\Gamma(i-\alpha+1)}\sum_{r=0}^{j} b_{j,r}\int_{l/2^{k}}^{(l+1)/2^{k}} f_{i}(x)\,\bigl(2^{k}x-l\bigr)^{r}\,dx$$
Consequently, Equation (25) can be written as
$$D_{x}^{\alpha}\psi_{n,m}(x) = \bigl[\Omega_{\alpha}^{(n)}(m,0),\ \Omega_{\alpha}^{(n)}(m,1),\ \ldots,\ \Omega_{\alpha}^{(n)}(m,m_{0}-1)\bigr]\,\Psi_{n}(x) + \sum_{l=n+1}^{2^{k}-1}\bigl[\hat{\Omega}_{\alpha}^{(n)}(m,0),\ \hat{\Omega}_{\alpha}^{(n)}(m,1),\ \ldots,\ \hat{\Omega}_{\alpha}^{(n)}(m,m_{0}-1)\bigr]\,\Psi_{l}(x)$$
where
$$\Psi_{s}(x) = \bigl[\psi_{s,0}(x),\,\psi_{s,1}(x),\,\ldots,\,\psi_{s,m_{0}-1}(x)\bigr]^{T},\qquad s = 0, 1, 2, \ldots, 2^{k}-1$$
Remark 1.
For α = α , from Caputo’s derivative, we can obtain
$$D_{x}^{\alpha}\psi_{n,m}(x) = \begin{cases}0, & 0\le m<\alpha\\[4pt] \displaystyle\sum_{j=0}^{m_{0}-1}\Omega_{\alpha}^{(n)}(m,j)\,\psi_{n,j}(x), & m\ge\alpha\end{cases}$$
Let $\Psi(x)$ be the Legendre wavelet vector in Equation (18) and let $D_{\alpha}$ ($\alpha>0$, $\lceil\alpha\rceil-1<\alpha<\lceil\alpha\rceil$) be the $\alpha$-order differential operational matrix of the Legendre wavelet; then
$$D_{x}^{\alpha}\Psi(x) \approx D_{\alpha}\,\Psi(x)$$
where
$$D_{\alpha} = \begin{bmatrix} B_{\alpha} & F_{\alpha} & F_{\alpha} & \cdots & F_{\alpha}\\ 0 & B_{\alpha} & F_{\alpha} & \cdots & F_{\alpha}\\ 0 & 0 & B_{\alpha} & \cdots & F_{\alpha}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & B_{\alpha} \end{bmatrix}_{M\times M}$$
where matrices B α and F α are given as
$$B_{\alpha} = \begin{bmatrix} 0 & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 0\\ \Omega_{\alpha}^{(n)}(\lceil\alpha\rceil,0) & \Omega_{\alpha}^{(n)}(\lceil\alpha\rceil,1) & \cdots & \Omega_{\alpha}^{(n)}(\lceil\alpha\rceil,m_{0}-1)\\ \Omega_{\alpha}^{(n)}(\lceil\alpha\rceil+1,0) & \Omega_{\alpha}^{(n)}(\lceil\alpha\rceil+1,1) & \cdots & \Omega_{\alpha}^{(n)}(\lceil\alpha\rceil+1,m_{0}-1)\\ \vdots & \vdots & & \vdots\\ \Omega_{\alpha}^{(n)}(m_{0}-1,0) & \Omega_{\alpha}^{(n)}(m_{0}-1,1) & \cdots & \Omega_{\alpha}^{(n)}(m_{0}-1,m_{0}-1) \end{bmatrix}_{m_{0}\times m_{0}}$$
and
$$F_{\alpha} = \begin{bmatrix} 0 & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 0\\ \hat{\Omega}_{\alpha}^{(n)}(\lceil\alpha\rceil,0) & \hat{\Omega}_{\alpha}^{(n)}(\lceil\alpha\rceil,1) & \cdots & \hat{\Omega}_{\alpha}^{(n)}(\lceil\alpha\rceil,m_{0}-1)\\ \hat{\Omega}_{\alpha}^{(n)}(\lceil\alpha\rceil+1,0) & \hat{\Omega}_{\alpha}^{(n)}(\lceil\alpha\rceil+1,1) & \cdots & \hat{\Omega}_{\alpha}^{(n)}(\lceil\alpha\rceil+1,m_{0}-1)\\ \vdots & \vdots & & \vdots\\ \hat{\Omega}_{\alpha}^{(n)}(m_{0}-1,0) & \hat{\Omega}_{\alpha}^{(n)}(m_{0}-1,1) & \cdots & \hat{\Omega}_{\alpha}^{(n)}(m_{0}-1,m_{0}-1) \end{bmatrix}_{m_{0}\times m_{0}}$$
Here, for a better understanding of the calculation procedure, $B_{(1.5)}$ and $F_{(1.5)}$ are provided below, where $m_{0} = 4$, $k = 2$, and $\alpha = 1.5$ are selected.
$$B_{(1.5)} = \frac{1}{\sqrt{\pi}}\begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 640 & 128 & -\dfrac{128}{7} & \dfrac{128}{21}\\ -896 & 640 & \dfrac{896}{3} & -\dfrac{640}{11} \end{bmatrix}$$
$$F_{(1.5)} = \frac{1}{\sqrt{\pi}}\begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1280\,(1-\sqrt{2}) & 2^{8}(3\sqrt{2}-4) & \dfrac{2^{7}(166-118\sqrt{2})}{7} & \dfrac{2^{8}(538\sqrt{2}-824)}{21}\\ 2^{8}(91\sqrt{2}-28) & 2^{8}(29\sqrt{2}-51) & \dfrac{2^{8}(400-281\sqrt{2})}{3} & \dfrac{2^{8}(10245-9557\sqrt{2})}{33} \end{bmatrix}$$
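The entries of $B_{(1.5)}$ can be recomputed directly from the sum form of $\Theta_{m,j,i}$ given above (after substituting the definitions of $a_{m,i}$ and $c_{j,r}$). The sketch below is our own numerical check, not the authors' code; multiplying the result by $\sqrt{\pi}$ recovers the matrix shown above.

```python
import numpy as np
from math import gamma, factorial, ceil, sqrt, pi

def b_coef(n, i):
    """Coefficient b_{n,i} of the shifted Legendre polynomial L_n^*(x) = sum_i b_{n,i} x^i."""
    return (-1) ** (n + i) * factorial(n + i) / (factorial(n - i) * factorial(i) ** 2)

def B_alpha(m0, k, alpha):
    """Block B_alpha of the operational matrix, built from the sum form
    Theta_{m,j,i} = 2^{k alpha} (2m+1) b_{m,i} i! / Gamma(i-alpha+1) * sum_r b_{j,r}/(r+i-alpha+1)."""
    ca = ceil(alpha)
    B = np.zeros((m0, m0))
    for m in range(ca, m0):                # rows m < ceil(alpha) stay zero
        for j in range(m0):
            for i in range(ca, m + 1):
                s = sum(b_coef(j, r) / (r + i - alpha + 1) for r in range(j + 1))
                B[m, j] += (2 ** (k * alpha) * (2 * m + 1) * b_coef(m, i)
                            * factorial(i) / gamma(i - alpha + 1) * s)
    return B

B = B_alpha(m0=4, k=2, alpha=1.5)
print(np.round(B * sqrt(pi), 4))   # rows 3-4: [640, 128, -128/7, 128/21], [-896, 640, 896/3, -640/11]
```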
In order to solve the time fractional B-S model of the European option, U ( x , τ ) in Equation (9) can be approximated with
$$U(x,\tau) \approx \Psi^{T}(x)\,U\,\Psi(\tau)$$
where $U = [u_{i,j}]_{M\times M}$ is an unknown matrix to be determined; from Equation (30), we can obtain
$$\frac{\partial^{\alpha}U(x,\tau)}{\partial\tau^{\alpha}} = \Psi^{T}(x)\,U\,D_{\alpha}\,\Psi(\tau),\qquad
\frac{\partial U(x,\tau)}{\partial x} = \Psi^{T}(x)\,D^{T}U\,\Psi(\tau),\qquad
\frac{\partial^{2}U(x,\tau)}{\partial x^{2}} = \Psi^{T}(x)\,\bigl(D^{T}\bigr)^{2}U\,\Psi(\tau)$$
From Definition 1, Equation (32) can be rewritten as
$$\begin{aligned}
\frac{\partial^{\alpha}U(x,\tau)}{\partial\tau^{\alpha}} &= \operatorname{Vec}\bigl(\Psi(\tau)^{T}\otimes\Psi(x)^{T}\bigr)\,\bigl(D_{\alpha}^{T}\otimes I_{M}\bigr)\,\operatorname{Vec}(U)\\
\frac{\partial U(x,\tau)}{\partial x} &= \operatorname{Vec}\bigl(\Psi(\tau)^{T}\otimes\Psi(x)^{T}\bigr)\,\bigl(I_{M}\otimes D^{T}\bigr)\,\operatorname{Vec}(U)\\
\frac{\partial^{2}U(x,\tau)}{\partial x^{2}} &= \operatorname{Vec}\bigl(\Psi(\tau)^{T}\otimes\Psi(x)^{T}\bigr)\,\bigl(I_{M}\otimes D^{T}\bigr)^{2}\,\operatorname{Vec}(U)
\end{aligned}$$
where
$$\operatorname{Vec}\bigl(\Psi(\tau)^{T}\otimes\Psi(x)^{T}\bigr) = \bigl[\psi_{1}(\tau)\Psi^{T}(x),\ \psi_{2}(\tau)\Psi^{T}(x),\ \ldots,\ \psi_{M}(\tau)\Psi^{T}(x)\bigr]_{1\times M^{2}}$$
and I M is an M-dimensional identity matrix. Then, Equation (9) can be converted to
$$\begin{cases}
\Psi^{T}(x)\,U\,D_{\alpha}\Psi(\tau) = c_{1}\,\Psi^{T}(x)\bigl(D^{T}\bigr)^{2}U\,\Psi(\tau) + c_{2}\,\Psi^{T}(x)\,D^{T}U\,\Psi(\tau) + c_{3}\,\Psi^{T}(x)\,U\,\Psi(\tau)\\
\Psi^{T}(x)\,U\,\Psi(0) = \Pi(x)\\
\Psi^{T}(B_{d})\,U\,\Psi(\tau) = P(\tau)\\
\Psi^{T}(B_{u})\,U\,\Psi(\tau) = Q(\tau)
\end{cases}$$
Define
$$Z = D_{\alpha}^{T}\otimes I_{M} - c_{1}\,I_{M}\otimes\bigl(D^{T}\bigr)^{2} - c_{2}\,I_{M}\otimes D^{T} - c_{3}\,I_{M}\otimes I_{M}$$
and let
$$\Delta x = \frac{B_{u}-B_{d}}{N_{s}},\quad x_{i} = B_{d} + i\,\Delta x,\quad i = 0, 1, 2, \ldots, N_{s};\qquad \Delta\tau = \frac{T}{N_{t}},\quad \tau_{j} = j\,\Delta\tau,\quad j = 0, 1, 2, \ldots, N_{t}$$
be the step sizes of the stock price and time, respectively. Denoting $U_{i}^{j} = U(x_{i},\tau_{j})$ yields
$$\begin{cases}
\operatorname{Vec}\bigl(\Psi(x_{i})^{T}\otimes\Psi(\tau_{j})^{T}\bigr)\,Z\,\operatorname{Vec}(U) = 0\\
\operatorname{Vec}\bigl(\Psi(x_{i})^{T}\otimes\Psi(\tau_{0})^{T}\bigr)\,\operatorname{Vec}(U) = U_{i}^{0}\\
\operatorname{Vec}\bigl(\Psi(x_{0})^{T}\otimes\Psi(\tau_{j})^{T}\bigr)\,\operatorname{Vec}(U) = U_{0}^{j}\\
\operatorname{Vec}\bigl(\Psi(x_{N_{s}})^{T}\otimes\Psi(\tau_{j})^{T}\bigr)\,\operatorname{Vec}(U) = U_{N_{s}}^{j}
\end{cases}$$
which can be expressed as
$$H\,\operatorname{Vec}(U) = Y$$
Let
$$L = \begin{bmatrix} \psi_{1}(x_{0}) & \psi_{2}(x_{0}) & \cdots & \psi_{M}(x_{0})\\ \psi_{1}(x_{1}) & \psi_{2}(x_{1}) & \cdots & \psi_{M}(x_{1})\\ \vdots & \vdots & & \vdots\\ \psi_{1}(x_{N_{s}}) & \psi_{2}(x_{N_{s}}) & \cdots & \psi_{M}(x_{N_{s}}) \end{bmatrix}_{(N_{s}+1)\times M}$$
and
$$K = \begin{bmatrix} \psi_{1}(\tau_{0}) & \psi_{2}(\tau_{0}) & \cdots & \psi_{M}(\tau_{0})\\ \psi_{1}(\tau_{1}) & \psi_{2}(\tau_{1}) & \cdots & \psi_{M}(\tau_{1})\\ \vdots & \vdots & & \vdots\\ \psi_{1}(\tau_{N_{t}}) & \psi_{2}(\tau_{N_{t}}) & \cdots & \psi_{M}(\tau_{N_{t}}) \end{bmatrix}_{(N_{t}+1)\times M}$$
then
$$H = \begin{bmatrix} (L\otimes K)\,Z\\ L\otimes K(1,:)\\ L(1,:)\otimes K\\ L(N_{s}+1,:)\otimes K \end{bmatrix}$$
and
$$Y = \Bigl[\,\underbrace{0,\ldots,0}_{(N_{s}+1)(N_{t}+1)},\ \underbrace{U_{0}^{0},\ldots,U_{N_{s}}^{0}}_{N_{s}+1},\ \underbrace{U_{0}^{0},\ldots,U_{0}^{N_{t}}}_{N_{t}+1},\ \underbrace{U_{N_{s}}^{0},\ldots,U_{N_{s}}^{N_{t}}}_{N_{t}+1}\,\Bigr]^{T}$$
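Putting the pieces together, the collocation system $H\operatorname{Vec}(U) = Y$ can be assembled with Kronecker products. The sketch below is a structural illustration only (our own code, with hypothetical input arguments): it assumes the basis matrices $L$ and $K$ and the operational matrices $D_{\alpha}$ and $D$ have already been built, and it solves for $\operatorname{Vec}(U)$ with a least-squares step as in the ELM formulation of Section 5. Note that the Kronecker ordering here is chosen to be consistent with the column-major vectorization used in the final reshape; whichever ordering is adopted, it must match the vectorization convention throughout.

```python
import numpy as np

def assemble_and_solve(L, K, D_alpha, D, c1, c2, c3, U_init, U_low, U_up):
    """L: (Ns+1) x M Legendre wavelet values at the x_i; K: (Nt+1) x M values at the tau_j.
    D_alpha, D: M x M operational matrices of order alpha and 1.
    U_init, U_low, U_up: payoff and barrier data vectors."""
    M = D.shape[0]
    I = np.eye(M)
    Z = (np.kron(D_alpha.T, I) - c1 * np.kron(I, D.T @ D.T)
         - c2 * np.kron(I, D.T) - c3 * np.kron(I, I))
    H = np.vstack([
        np.kron(K, L) @ Z,        # PDE collocated on the full (x_i, tau_j) grid
        np.kron(K[0:1, :], L),    # payoff condition at tau = 0
        np.kron(K, L[0:1, :]),    # lower barrier x = B_d
        np.kron(K, L[-1:, :]),    # upper barrier x = B_u
    ])
    Y = np.concatenate([np.zeros(L.shape[0] * K.shape[0]), U_init, U_low, U_up])
    vec_U, *_ = np.linalg.lstsq(H, Y, rcond=None)   # least-squares solve, beta = H^+ Y
    return vec_U.reshape(M, M, order="F")           # recover the coefficient matrix U
```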

5. ELM Algorithm for LWNN Training

ELM has a fast convergence rate owing to its single-hidden-layer feed-forward structure. It has many advantages over traditional back-propagation (BP) methods, which easily become stuck in local optima and converge slowly owing to the continuous weight updating. The parameters of ELM only depend on the analytical resolution of the output layer weights and the random selection of the input and hidden layer parameters (weights and biases); as a consequence, many issues, such as the learning rate, local optima, and learning epochs, can be avoided, together with the slow convergence and over-fitting problems of gradient-descent-based neural networks. The ELM algorithm is illustrated in Algorithm 1, and the neural network structure of the LWNN-ELM is shown in Figure 1.
Algorithm 1 Extreme learning machine algorithm.
Input: $A = \{a_{i}\}_{i=1}^{n}$, $B = \{b_{i}\}_{i=1}^{n}$, $X = \{x_{i}\}_{i=1}^{n}$
$a_{i}$ and $b_{i}$ denote the weights and biases of the input layer, respectively.
$N_{i}$, $N_{h}$, $N_{o}$ represent the numbers of input, hidden, and output layer nodes, respectively.
Step 1: Randomly assign the parameters of the hidden layer nodes (weights and biases).
Step 2: Calculate the output matrix $H$ of the hidden layer nodes.
Step 3: Calculate the output weight $\beta = H^{\dagger}Y$,
where $H^{\dagger}$ is the Moore–Penrose generalized inverse of the hidden layer
output matrix $H$ and $Y$ is the training data target.
Theorem 4.
The equation $H\beta = Y$ can be solved in three cases:
1.
If the matrix $H$ is square and invertible, then $\beta = H^{-1}Y$.
2.
If the matrix $H$ is rectangular, then $\beta = H^{\dagger}Y$, where $\beta$ is the minimal least-squares solution; in other words, $\beta = \arg\min_{\beta}\|H\beta - Y\|$.
3.
If the matrix $H$ is singular, then $\beta = H^{\dagger}Y$ with $H^{\dagger} = H^{T}\bigl(\lambda I + HH^{T}\bigr)^{-1}$, where $\lambda$ is a regularization parameter that can be set based on a specific norm.
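A minimal sketch of the three cases in Theorem 4 (our own illustration with NumPy; the function name, the random test data, and the choice of regularization parameter are assumptions, not the authors' implementation):

```python
import numpy as np

def elm_output_weights(H, Y, lam=None):
    """Solve H beta = Y for the output-layer weights as in Theorem 4."""
    n_rows, n_cols = H.shape
    if lam is not None:
        # Regularized pseudoinverse for singular/ill-conditioned H: H^+ = H^T (lam I + H H^T)^(-1)
        return H.T @ np.linalg.solve(lam * np.eye(n_rows) + H @ H.T, Y)
    if n_rows == n_cols and np.linalg.matrix_rank(H) == n_rows:
        return np.linalg.solve(H, Y)        # square and invertible: beta = H^{-1} Y
    return np.linalg.pinv(H) @ Y            # rectangular: minimal least-squares solution

# Tiny usage example with random data (for illustration only)
rng = np.random.default_rng(1)
H = rng.standard_normal((40, 16))
Y = rng.standard_normal(40)
beta = elm_output_weights(H, Y)
print("residual norm:", np.linalg.norm(H @ beta - Y))
```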
The procedure for training the network weights of the LWNN to solve the European options pricing model is given in Algorithm 2.
Algorithm 2 The procedure for solving the European options pricing model based on the LWNN-ELM.
Input: $x_{i} = B_{d} + i\Delta x$, $\Delta x = (B_{u}-B_{d})/N_{s}$, $i = 0, 1, 2, \ldots, N_{s}$;
$\tau_{j} = j\Delta\tau$, $\Delta\tau = T/N_{t}$, $j = 0, 1, 2, \ldots, N_{t}$;
$\psi_{1}(x)$, ⋯, $\psi_{M}(x)$, $\psi_{1}(\tau)$, ⋯, $\psi_{M}(\tau)$.
Step 1: Construct a numerical solution with the Legendre wavelet as
the activation function, that is, $U^{*}(x,\tau) = \sum_{i=1}^{M}\sum_{j=1}^{M}\beta_{i,j}\,\psi_{i}(x)\,\psi_{j}(\tau) = \Psi(x,\tau)\,\beta$.
Step 2: Substitute the numerical solution $U^{*}(x,\tau)$ and its derivatives into
the PDE, the initial condition, and the boundary conditions.
Step 3: Solve $H\beta = Y$ by ELM and obtain the network weights
$\beta = H^{\dagger}Y$, $\beta = \arg\min_{\beta}\|H\beta - Y\|$.
Step 4: Obtain the numerical solution
$U^{*}(x,\tau) = \sum_{i=1}^{M}\sum_{j=1}^{M}\beta_{i,j}\,\psi_{i}(x)\,\psi_{j}(\tau) = \Psi(x,\tau)\,\beta$.
Remark 2.
The detailed proof of this theorem can be found in the theory of the generalized inverse matrix [43].

6. Numerical Experiment and Discussion

Once the numerical solution is obtained for a specific options pricing model, the main concern of market practitioners becomes its implementation, and whether the options price can be computed efficiently is one of the core evaluation criteria. Therefore, in this section, to validate the accuracy of the numerical solution and the convergence order of the proposed method, a double-barrier knock-out call European option is selected as the test case. For other kinds of options pricing models, the solutions can be obtained straightforwardly by a simple modification or by the parity relationship [75].
In order to validate the feasibility of the proposed method, the solution for $\alpha = 1$ is calculated and compared with a benchmark solution based on the implicit difference method (IDM) [22]. All numerical experiments were implemented in MATLAB R2013b on a desktop with an i7-6700K CPU, 8 GB memory, a 1 TB HDD, and the Windows 7 operating system. From Figure 2, it can be seen that the two groups of options prices coincide perfectly, which affirms the correctness of the proposed method. Besides, the calculation times for different expirations are presented in Table 1; it is clear that the time consumption of the LWNN-ELM method decreases significantly compared with the IDM, which saves computing resources.
Here, the influence of different $\alpha$ on the options price is investigated. From Equation (6), $\alpha$ can intuitively be regarded as a metric of the dependence of $U(S,t)$ on the option values at the same underlying level from the current time to the expiration; in the practical case, $T - t \le 1$, and it is not difficult to observe that the larger $\alpha$ is, the weaker this dependence becomes. This may explain the phenomena illustrated in Figure 3, wherein the B-S model tends to overprice a double knock-out call when the underlying is close to the lower barrier. On the other hand, it can be observed that from a certain underlying value onward, the B-S model tends to underprice the option, which possibly results from the larger impact of the nonzero payoff value on the options price for smaller $\alpha$. Table 2 reports the calculation time of the LWNN-ELM for different expirations; it can be seen that for a longer expiration the time consumption shows no apparent increase, which is beneficial for more complex options pricing models.
For the European put option, the final and boundary conditions are $\bar{\Pi}(S) = \max\{K-S, 0\}$, $\bar{Q}(t) = q = 0$, and $\bar{P}(t) = K\exp(-r(T-t))$. The put options prices for different $M$ are presented in Figure 4; it is clear that the numerical solutions of the LWNN-ELM for different $M$ are quite close, while the time consumption is 4.8391 s ($M = 12$) and 6.1726 s ($M = 16$), respectively. In addition, the numerical solutions based on the LWNN-ELM and the IDM are provided in Table 3; the solutions obtained by the LWNN-ELM agree very well with the benchmark method. Although the numerical solution for $M = 12$ is computed faster than that for $M = 16$, the solution for $M = 16$ has a higher precision. Consequently, for a more accurate description of financial dynamics, a fast and efficient scheme to solve the options price is preferred. In the end, the order of convergence is utilized to explore the solution accuracy [76], which is defined as
$$\text{Order} = \log_{2}\frac{e(\Delta\tau)}{e(\Delta\tau/2)}$$
where $e(\Delta\tau) = \|U(\Delta\tau) - U(\Delta\tau/2)\|$ and $U(\Delta\tau)$ is the numerical solution at the final time computed with step size $\Delta\tau$. In the following experiments, $N_{s} = 460$ was selected to saturate the stock price discretization so that the error is mainly determined by the time discretization. The order of convergence of the European option is illustrated in Table 4. It can be observed that the LWNN-ELM has first-order convergence for different $\alpha$.
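The order computation itself is a one-liner; the sketch below (our own illustration with synthetic data, not the paper's experiment) shows how successive refinements of $\Delta\tau$ would be turned into the orders reported in Table 4.

```python
import numpy as np

def convergence_orders(solutions):
    """solutions: list of final-time solution arrays on the same spatial grid,
    computed with time steps dt, dt/2, dt/4, ...; returns log2(e(dt)/e(dt/2))."""
    errors = [np.max(np.abs(a - b)) for a, b in zip(solutions[:-1], solutions[1:])]
    return [np.log2(e1 / e2) for e1, e2 in zip(errors[:-1], errors[1:])]

# Example with synthetic first-order behaviour: U(dt) = U_exact + 0.1*dt
U_exact = np.linspace(0, 1, 50) ** 2
sols = [U_exact + 0.1 * dt for dt in (0.1, 0.05, 0.025, 0.0125)]
print(convergence_orders(sols))   # approximately [1.0, 1.0]
```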

7. Conclusions

Being a generalization of the classic B-S model, the "global" characteristic of the time fractional B-S model makes it more difficult to calculate the analytical or numerical solution than for the integer-order model. In this paper, the options price is governed by a time fractional B-S equation in which the options price variation with time is considered as a fractal transmission system; the fractional derivative operational matrix of the two-dimensional Legendre wavelet is then derived, and the European options pricing model (including the final and boundary conditions) is transformed into a group of algebraic equations. The ELM is employed to train the output layer weights of the LWNN owing to its fast learning rate and its resistance to over-fitting. In the end, numerical experiments are provided to price the European option with double barriers based on the LWNN-ELM and the implicit differential method, and the experimental results illustrate that the proposed method consumes less time than the benchmark method; besides, first-order convergence is demonstrated when $\alpha$ is less than 1. Furthermore, the proposed scheme is simple, easy to implement, and applicable to other similar fractional models for pricing different European options.

Author Contributions

Conceptualization, X.Z. and Y.Z.; methodology, X.Z.; software, Y.Z.; validation, J.Y. and Y.Z.; formal analysis, X.Z.; investigation, X.Z. and Y.Z.; resources, X.Z.; data curation, Y.Z.; writing—original draft preparation, X.Z. and Y.Z.; writing—review and editing, X.Z.; visualization, Y.Z.; supervision, J.Y.; project administration, J.Y.; funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. U1901223 and the Natural Science Foundation of Guangdong Province under Grant No. 2014B080807027.

Institutional Review Board Statement

The study was approved by the Academic Committee of South China University of Technology.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data were used to support this study.

Acknowledgments

The authors thank all of the reviewers for their valuable suggestions and recommendations, which contributed to the improvement of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
B-S: Black–Scholes
ELM: Extreme learning machine
LWNN: Legendre wavelet neural network
PDE: Partial differential equation
WNN: Wavelet neural network
BP: Back-propagation
CPU: Central processing unit
HDD: Hard disk drive

References

  1. Podisuk, M. Regulatory quality, financial integration and equity cost of capital. Rev. Int. Econ. 2019, 27, 916–935. [Google Scholar]
  2. Clère, R. A new way to estimate cost of capital. J. Int. Fin. Manag. Acc. 2019, 4, 223–249. [Google Scholar] [CrossRef]
  3. Basu, P. Capital adjustment cost and inconsistency in inconsistency dynamic panel models with fixed effects. Ger. Econ. Rev. 2019, 20, 1002–1018. [Google Scholar] [CrossRef]
  4. Staveley-O’Carroll, J. Exchange rate targeting in the presence of foreign debt obligations. J. Macroecon. 2018, 56, 113–134. [Google Scholar] [CrossRef] [Green Version]
  5. Dudin, B. Resource Allocation with Automated QoE Assessment in 5G/B5G Wireless Systems. IEEE Netw. 2019, 33, 76–81. [Google Scholar] [CrossRef]
  6. Dudin, B. OKRA: Optimal task and resource allocation for energy minimization in mobile edge computing systems. Wirel. Netw. 2019, 25, 2851–2867. [Google Scholar]
  7. Samuelson, P.A.; Davis, M.; Etheridge, A. Louis Bachelier’s Theory of Speculation: The Origins of Modern Finance; Princeton University Press: Princeton, NJ, USA, 2006. [Google Scholar]
  8. Black, F. The pricing of options and corporate liabilities. J. Polit. Econ. 1973, 81, 637–654. [Google Scholar] [CrossRef] [Green Version]
  9. Merton, R.C. Theory of rational options pricing. J. Econ. Manag. Sci. 1973, 4, 141–183. [Google Scholar] [CrossRef] [Green Version]
  10. Carr, P.; Wu, L. The Finite Moment Log Stable Process and Option Pricing. J. Financ. 2003, 58, 753–777. [Google Scholar]
  11. Fallahgoul, H.A.; Focardi, S.M.; Fabozzi, F.J. Fractional Calculus and Fractional Processes with Applications to Financial Economics; Academic Press: London, UK, 2016. [Google Scholar]
  12. Caputo, M. Linear Models of Dissipation whose Q is almost Frequency Independent—II. Geophys. J. Int. 1967, 13, 529–539. [Google Scholar] [CrossRef]
  13. Jumarie, G. Modified Riemann–Liouville derivative and fractional Taylor series of nondifferentiable functions further results. Comput. Math. Appl. 2006, 51, 1367–1376. [Google Scholar]
  14. Guariglia, E.; Silvestrov, S. Fractional-wavelet analysis of positive definite distributions and wavelets on D′(ℂ). In Engineering Mathematics II; Springer: Berlin/Heidelberg, Germany, 2016; pp. 337–353. [Google Scholar]
  15. Berry, M.V.; Lewis, Z.V. On the Weierstrass-Mandelbrot Fractal Function. Proc. Soc. Lond. 1980, 370, 459–484. [Google Scholar]
  16. Murio, D.A. Implicit finite difference approximation for time fractional diffusion equations. Comput. Math. Appl. 2008, 56, 1138–1145. [Google Scholar]
  17. Langlands, T.A.M.; Henry, B.I. The accuracy and stability of an implicit solution method for the fractional diffusion equation. J. Comput. Phys. 2005, 205, 719–736. [Google Scholar] [CrossRef]
  18. Chen, W.; Xiang, X.; Zhu, S.P. Analytically pricing European-style options under the modified Black–Scholes equation with a spatial-fractional derivative. Q. Appl. Math. 2014, 72, 597–611. [Google Scholar] [CrossRef]
  19. Song, L.; Wei, G. Solution of the Fractional Black–Scholes Option Pricing Model by Finite Difference Method. Abstract Appl. Anal. 2013, 6, 1–16. [Google Scholar] [CrossRef] [Green Version]
  20. Cen, Z.; Huang, J.; Xu, A.; Le, A. Numerical approximation of a time-fractional Black–Scholes equation. Comput. Math. Appl. 2018, 75, 2874–2887. [Google Scholar] [CrossRef]
  21. Rezaei, M.; Yazdanian, A.R.; Ashrafi, A.; Mahmoudi, S.M. Numerical pricing based on fractional Black–Scholes equation with time-dependent parameters under the CEV model: Double barrier options. Comput. Math. Appl. 2021, 90, 104–111. [Google Scholar] [CrossRef]
  22. Zhang, H.; Liu, F.; Turner, I.; Yang, Q. Numerical solution of the time fractional Black–Scholes model governing European options. Comput. Math. Appl. 2016, 71, 1772–1783. [Google Scholar] [CrossRef]
  23. De Staelen, R.H.; Hendy, A.S. Numerically pricing double barrier options in a time-fractional Black–Scholes model. Comput. Math. Appl. 2017, 74, 1166–1175. [Google Scholar] [CrossRef]
  24. Yavuz, M.; Özdemir, N. A different approach to the European options pricing model with new fractional operator. Math. Model. Nat. Phenom. 2018, 13, 1–13. [Google Scholar] [CrossRef] [Green Version]
  25. Yavuz, M.; Özdemir, N. European vanilla options pricing model of fractional order without singular kernel. Fractal Fract. 2018, 2, 3. [Google Scholar] [CrossRef] [Green Version]
  26. Elbeleze, A.A.; Kilicman, A.; Taib, B.M. Homotopy Perturbation Method for Fractional Black–Scholes European Option Pricing Equations Using Sumudu Transform. Math. Probl. Eng. 2013, 2013, 1–7. [Google Scholar] [CrossRef]
  27. Kumar, S.; Kumar, D.; Singh, J. Numerical computation of fractional Black–Scholes equation arising in financial market. Egypt. J. Basic Appl. Sci. 2014, 1, 177–183. [Google Scholar] [CrossRef]
  28. Park, S.H.; Kim, J.H. Homotopy analysis method for options pricing under stochastic volatility. Appl. Math. Lett. 2011, 24, 1740–1744. [Google Scholar] [CrossRef] [Green Version]
  29. Gülkaç, V. The homotopy perturbation method for the Black–Scholes equation. J. Stat. Comput. Simul. 2010, 80, 1349–1354. [Google Scholar] [CrossRef]
  30. Liao, S. Homotopy Analysis Method in Nonlinear Differential Equations; Higher Education Press: Beijing, China, 2012. [Google Scholar]
  31. Ahmad, J.; Shakeel, M.; Hassan, Q.M.U.; Mohyud-Din, S.T. Analytical solution of Black–Scholes model using fractional variational iteration method. Int. J. Mod. Math. Sci. 2013, 5, 133–142. [Google Scholar]
  32. Baleanu, D.; Srivastava, H.M.; Yang, X.J. Local fractional variational iteration algorithms for the parabolic Fokker-Planck equation defined on Cantor sets. Prog. Fract. Differ. Appl. 2015, 1, 1–10. [Google Scholar]
  33. Yavuz, M.; Ozdemir, N.; Okur, Y.Y. Generalized differential transform method for fractional partial differential equation from finance. In Proceedings of the International Conference on Fractional Differentiation and its Applications, Novi Sad, Serbia, 18–20 July 2016. [Google Scholar]
  34. Yavuz, M.; Özdemir, N. A Quantitative Approach to Fractional Option Pricing Problems with Decomposition Series. Appl. Math. Sci. 2018, 6, 102–109. [Google Scholar]
  35. Edeki, S.O.; Motsepa, T.; Khalique, C.M. The Greek parameters of a continuous arithmetic Asian options pricing model via Laplace Adomian decomposition method. Open Phys. 2018, 16, 780–785. [Google Scholar] [CrossRef]
  36. Daftardar-Gejji, V.; Jafari, H. Adomian decomposition: A tool for solving a system of fractional differential equations. J. Math. Anal. Appl. 2005, 301, 508–518. [Google Scholar] [CrossRef] [Green Version]
  37. El-Borai, M.M.; El-Sayed, W.G.; Jawad, A.M. Adomian Decomposition Method For solving Fractional Differential Equations. Int. Res. J. Eng. Technol. 2015, 2, 296–306. [Google Scholar]
  38. Wyss, W. The fractional Black–Scholes equation. Fract. Calc. Appl. Anal. 2000, 3, 51–61. [Google Scholar]
  39. Hu, Y.; Øksendal, B. Fractional White Noise Calculus and Applications to Finance. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2003, 6, 1–32. [Google Scholar] [CrossRef]
  40. Li, Q.; Zhou, Y.; Zhao, X.; Ge, X. Fractional Order Stochastic Differential Equation with Application in European Option Pricing. Discrete Dyn. Nat. Soc. 2014, 2014, 1–12. [Google Scholar] [CrossRef] [Green Version]
  41. Jumarie, G. Derivation and solutions of some fractional Black–Scholes equations in coarse-grained space and time. Application to Merton’s optimal portfolio. Comput. Math. Appl. 2010, 59, 1142–1164. [Google Scholar] [CrossRef] [Green Version]
  42. Liu, H.K.; Chang, J.J. A closed-form approximation for the fractional Black–Scholes model with transaction costs. Comput. Math. Appl. 2013, 56, 1719–1726. [Google Scholar] [CrossRef]
  43. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neural Comput. 2006, 70, 489–501. [Google Scholar] [CrossRef]
  44. Albadra, M.A.A.; Tiuna, S. Extreme Learning Machine: A Review. Int. J. Appl. Eng. Res. 2017, 12, 4610–4623. [Google Scholar]
  45. Li, C.; Xue, D.; Hu, Z.; Chen, H. A Survey for Breast Histopathology Image Analysis Using Classical and Deep Neural Networks. In International Conference on Information Technologies in Biomedicine; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  46. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep Learning for Sensor-based Activity Recognition: A Survey. Pattern Recognit. Lett. 2017, 119, 3–11. [Google Scholar] [CrossRef] [Green Version]
  47. Bahiuddin, I.; Mazlan, S.A.; Shapiai, M.I. Study of extreme learning machine activation functions for magnetorheological fluid modelling in medical devices application. In Proceedings of the International Conference on Robotics, Automation and Sciences, Melaka, Malaysia, 27–29 November 2017. [Google Scholar]
  48. Liu, X.; Lin, S.; Fang, J.; Xu, Z. Is extreme learning machine feasible? A theoretical assessment (part I). IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 7–20. [Google Scholar] [CrossRef] [PubMed]
  49. Lin, S.; Liu, X.; Fang, J.; Xu, Z. Is Extreme Learning Machine Feasible? A Theoretical Assessment (Part II). IEEE Trans. Neural Netw. Learn. Syst. 2017, 26, 21–34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Venkatesh, S.G.; Ayyaswamy, S.K.; Balachandar, S.R. The Legendre wavelet method for solving initial value problems of Bratu-type. Comput. Math. Appl. 2012, 63, 1287–1295. [Google Scholar] [CrossRef] [Green Version]
  51. Heydari, M.H.; Hooshmandasl, M.R.; Ghaini, F. A new approach of the Chebyshev wavelets method partial differential equations with boundary conditions of the telegraph type. Appl. Math. Model. 2014, 38, 1597–1606. [Google Scholar] [CrossRef]
  52. Keshavarz, E.; Ordokhani, Y.; Razzaghi, M. The Taylor wavelets method for solving the initial and boundary value problems of Bratu-type equations. Appl. Numer. Math. 2018, 128, 205–216. [Google Scholar] [CrossRef]
  53. Zheng, X.; Tang, Y.Y.; Zhou, J. A framework of adaptive multiscale wavelet decomposition for signals on undirected graphs. IEEE Trans. Signal Process. 2019, 67, 1696–1711. [Google Scholar] [CrossRef]
  54. Zhang, Q.; Benveniste, A. Wavelet network. IEEE Trans. Neural Netw. 1992, 3, 889–898. [Google Scholar] [CrossRef]
  55. Zhang, J.; Walter, G.G.; Miao, Y. Wavelet neural networks for function learning. IEEE Trans. Signal Process. 1995, 43, 1485–1497. [Google Scholar] [CrossRef]
  56. Daugman, J.G. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans. Audio Speech Lang. Process. 1988, 36, 1169–1179. [Google Scholar] [CrossRef] [Green Version]
  57. Hussain, A.J.; Al-Jumeily, D.; Radi, N.; Lisboa, P. Hybrid Neural Network Predictive-Wavelet Image Compression System. Neurocomputing 2015, 151, 975–984. [Google Scholar] [CrossRef]
  58. Shi, J.; Zhao, Y.; Xiang, W. Deep Scattering Network with Fractional Wavelet Transform. IEEE Trans. Signal Process. 2021, 69, 4740–4757. [Google Scholar] [CrossRef]
  59. Shi, J.; Liu, X.; Xiang, W. Novel fractional wavelet packet transform: Theory, implementation, and applications. IEEE Trans. Signal Process. 2020, 68, 4041–4054. [Google Scholar] [CrossRef]
  60. Jahangiri, F.; Doustmohammadi, A.; Menhaj, M.B. An adaptive wavelet differential neural networks based identifier and its stability analysis. Neurocomputing 2012, 77, 12–19. [Google Scholar] [CrossRef]
  61. Pan, H.; Xia, L.Z. Efficient Object Recognition Using Boundary Representation and Wavelet Neural Network. IEEE Trans. Neural Netw. 2008, 19, 2132–2148. [Google Scholar] [CrossRef]
  62. Liang, J.R.; Wang, J.; Zhang, W.J.; Qiu, W.Y.; Ren, F.Y. The solutions to a bi-fractional black-scholes-merton differential equation. Int. J. Pure Appl. Math 2010, 128, 99–112. [Google Scholar]
  63. Giona, M.; Roman, H.E. Fractional diffusion equation on fractals: One-dimensional case and asymptotic behaviour. J. Phys. A Math. Gen. 1992, 25, 2093. [Google Scholar] [CrossRef]
  64. Mehaute, A.L. Transfer processes in fractal media. J. Phys. A-Math. Gen. 1984, 36, 665–676. [Google Scholar] [CrossRef]
  65. Jumarie, G. Stock exchange fractional dynamics defined as fractional exponential growth driven by (usual) Gaussian white noise. Application to fractional Black–Scholes equations. Insur. Math. Econ. 2008, 42, 271–287. [Google Scholar] [CrossRef]
  66. Jumarie, G. Merton’s model of optimal portfolio in a Black–Scholes Market driven by a fractional Brownian motion with short-range dependence. Insur. Math. Econ. 2005, 37, 585–598. [Google Scholar] [CrossRef]
  67. Hussaini, M.Y.; Zang, T.A. Spectral Methods in Fluid Dynamics; Springer: Berlin, Germany, 1986. [Google Scholar]
  68. Razzaghi, M.; Yousefi, S. Legendre wavelets direct method for variational problems. Math. Comput. Simul. 2000, 63, 185–192. [Google Scholar] [CrossRef]
  69. Kreyszig, E. Introduction Functional Analysis with Applications; Wiley: New York, NY, USA, 1978. [Google Scholar]
  70. Liu, N.; Lin, E.B. Legendre wavelet method for numerical solutions of partial differential equations. Numer. Meth. Part Differ. Equ. 2010, 26, 81–94. [Google Scholar] [CrossRef]
  71. Heydari, M.H.; Hooshmandasl, M.R.; Ghaini, F.M.M. Two-dimensional Legendre wavelets for solving fractional Poisson equation with Dirichlet boundary conditions. Eng. Anal. Bound. Elem. 2013, 37, 1331–1338. [Google Scholar] [CrossRef]
  72. Steeb, W.H.; Shi, T.K. Matrix Calculus and Kronecker Product with Applications and C++ Programs; World Scientific Publishing Company: Singapore, 1997. [Google Scholar]
  73. Hardy, Y.; Steeb, W. Problems and Solutions in Introductory and Advanced Matrix Calculus; World Scientific Publishing Company: Singapore, 2016. [Google Scholar]
  74. Podlubny, I. Fractional Differential Equations. In Mathematics in Science and Engineering; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  75. Faulhaber, O. Analytic Methods for Pricing Double Barrier Options in the Presence of Stochastic Volatility. Ph.D. Thesis, Technische Universitat Kaiserslautern, Kaiserslautern, Germany, 2002. [Google Scholar]
  76. Chen, C.; Wang, Z.; Yang, Y. A new operator splitting method for American options under fractional Black–Scholes models. Comput. Math. Appl. 2019, 77, 2130–2144. [Google Scholar] [CrossRef]
Figure 1. Neural network structure based on the LWNN-ELM, where x and τ denote the stock price and time, respectively.
Figure 2. Comparison of the solution at α = 1 with the traditional B-S model, where the corresponding parameters are σ = 0.45 , r = 0.03 , q = 0.01 , K = 10 , A = 3 , B = 15 , and M = 16 . (a) T = 1 (years); (b) T = 0.5 ; (c) T = 2 .
Figure 3. Options price for different α , where the corresponding parameters are σ = 0.45 , r = 0.03 , q = 0.01 , K = 10 , A = 3 , B = 15 , and M = 16 . (a) T = 1 (years); (b) T = 0.5 ; (c) T = 2 .
Figure 4. European put options price obtained by the LWNN-ELM for different M, where the corresponding parameters are α = 0.7 , K = 50 , T = 1 , r = 0.01 , σ = 0.1 , A = 10 , and B = 100 .
Table 1. Time consumption (seconds) of the LWNN-ELM and IDM for different expirations.

| Expiration (years) | LWNN-ELM | IDM |
|---|---|---|
| 1 | 5.2853 | 75.2167 |
| 0.5 | 4.3862 | 62.4791 |
| 2 | 6.7541 | 83.3754 |
Table 2. Time consumption (seconds) of the LWNN-ELM for different expirations.

| Expiration (years) | LWNN-ELM |
|---|---|
| 1 | 27.6381 |
| 0.5 | 24.9635 |
| 2 | 31.7652 |
Table 3. The numerical solutions of the European put option, where the corresponding parameters are K = 50, T = 1, r = 0.01, σ = 0.1, A = 10, B = 100, and M = 16.

| (N_s, N_t) | Method | S = 30 | S = 40 | S = 50 | S = 60 | S = 70 | S = 80 |
|---|---|---|---|---|---|---|---|
| (230, 500) | LWNN-ELM | 17.6089 | 9.0658 | 3.5568 | 1.1969 | 0.4472 | 0.1727 |
| (230, 500) | IDM | 17.6088 | 9.0656 | 3.5571 | 1.1968 | 0.4473 | 0.1726 |
| (460, 500) | LWNN-ELM | 17.5861 | 9.2083 | 3.5714 | 1.2866 | 0.4707 | 0.1757 |
| (460, 500) | IDM | 17.5862 | 9.2083 | 3.5715 | 1.2867 | 0.4706 | 0.1755 |
Table 4. The convergence order of the European put options price. N_t is the number of discretization points in time; the number of discretization points in the stock price is fixed at N_s = 460.

| N_t \ α | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|---|---|---|---|---|
| 250 | 1.02 | 1.05 | 1.02 | 1.04 | 1.02 | 1.04 | 1.02 | 1.02 | 0.98 |
| 500 | 1.01 | 1.01 | 1.03 | 1.03 | 1.01 | 1.00 | 1.04 | 0.99 | 1.01 |
| 1000 | 1.01 | 1.01 | 1.00 | 1.02 | 1.00 | 1.03 | 1.02 | 1.02 | 0.99 |
| 2000 | 1.00 | 1.00 | 1.02 | 1.01 | 1.01 | 1.00 | 0.99 | 1.01 | 0.99 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
