Quantitative Analysis of the Fractional Fokker–Planck–Levy Equation via a Modified Physics-Informed Neural Network Architecture
Abstract
1. Introduction
1.1. Related Work
1.2. Paper Highlights
- An innovative approach to solving the fractional Fokker–Planck–Lévy (FFPL) equation is presented in this paper. The equation contains Lévy noise and a fractional Laplacian, which make it computationally complex.
- A hybrid technique, the FDM-APINN, is designed by combining the finite difference method (FDM), the Adam optimization technique, and a physics-informed neural network (PINN) architecture to solve the FFPL equation numerically.
- Related work on solving partial differential equations (PDEs) and fractional partial differential equations (FPDEs) is discussed.
- The FFPL equation is solved numerically via the proposed technique.
- The manuscript is organized into two main scenarios by varying the value of the fractional order parameter α, and the equation is solved for two values of α.
- The loss values for each case are reported in the tables. The losses are very small, ranging between roughly 10⁻⁶ and 10⁻⁵, which indicates the precision of the proposed technique.
- All solutions of the proposed technique are compared with those of the score-fPINN technique, a well-known method for solving fractional differential equations.
- The residual error graphs and tables report the discrepancies between the two techniques, which are on the order of 10⁻⁴ to 10⁻². These small errors indicate the validity of the proposed technique.
- Loss and error graphs are included in the manuscript to explore the proposed technique further, and the histogram demonstrates its consistency.
- All the results presented in the tables and graphs indicate that the proposed technique is robust and state of the art.
2. Defining the Problem
The fractional Fokker–Planck–Lévy (FFPL) equation, which governs the evolution of the probability density function p(x, t) of the underlying Lévy-driven stochastic process, is defined in terms of the following quantities:
- G(x, t) represents the diffusion term; in this problem, it is taken as the identity matrix.
- α is the order of the fractional Laplacian, where 0 < α ≤ 2.
- (−Δ)^(α/2) represents the fractional Laplacian operator of order α.
- The drift term describes the deterministic component of the underlying dynamics.
- The diffusion term sets the strength of the stochastic forcing.
The value of α determines the distribution of the driving Lévy noise (see the characteristic-function note below):
- If α = 2, it is a Gaussian distribution.
- If α = 1, it is a Cauchy distribution.
- If α = 3/2, it is a Holtsmark distribution, whose PDF is expressible in terms of hypergeometric functions.
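These special cases follow from the characteristic function of a symmetric α-stable law, a standard fact included here for context (not taken from the paper):

$$
\varphi(\xi) = \mathbb{E}\!\left[e^{\mathrm{i}\xi X}\right] = e^{-\sigma^{\alpha}|\xi|^{\alpha}},
$$

so α = 2 gives the Gaussian law, α = 1 the Cauchy law, and α = 3/2 the Holtsmark law. Correspondingly, the fractional Laplacian acts in Fourier space as $\widehat{(-\Delta)^{\alpha/2} u}(\xi) = |\xi|^{\alpha}\,\hat{u}(\xi)$, which is why it appears as the generator of α-stable Lévy dynamics.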
Fractional Laplacian
In the definition of the fractional Laplacian:
- α represents the fractional order of the Laplacian.
- d is the dimension.
A one-dimensional finite-difference discretization of this operator is sketched below.
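As a minimal illustration of how the fractional Laplacian can be discretized by finite differences, the sketch below implements the shifted Grünwald–Letnikov approximation of the one-dimensional Riesz fractional Laplacian for 1 < α ≤ 2, assuming a uniform grid and zero values outside the domain. This is a generic Meerschaert–Tadjeran-type scheme, not necessarily the exact discretization used in the paper, and the function names are illustrative.

```python
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """Grünwald-Letnikov weights g_k = (-1)^k * binom(alpha, k),
    computed via the stable recursion g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def riesz_laplacian_matrix(n: int, h: float, alpha: float) -> np.ndarray:
    """Dense matrix A such that (A @ u) approximates the Riesz fractional
    derivative d^alpha u / d|x|^alpha = -(-Delta)^{alpha/2} u at n interior
    grid points with spacing h, using the shifted Grünwald-Letnikov scheme
    (valid for 1 < alpha <= 2, zero boundary values assumed)."""
    g = gl_weights(alpha, n + 2)
    # Normalization of the Riesz derivative; cos(pi*alpha/2) < 0 for 1 < alpha < 2.
    c = -1.0 / (2.0 * np.cos(np.pi * alpha / 2.0) * h**alpha)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k_left = i - j + 1   # index into the left-sided shifted GL sum
            k_right = j - i + 1  # index into the right-sided shifted GL sum
            if k_left >= 0:
                A[i, j] += c * g[k_left]
            if k_right >= 0:
                A[i, j] += c * g[k_right]
    return A

# Sanity check: as alpha -> 2 the scheme reduces to the classical
# three-point Laplacian stencil [1, -2, 1] / h^2.
if __name__ == "__main__":
    A = riesz_laplacian_matrix(5, 0.1, 2.0)
    print(np.round(A[2] * 0.1**2, 6))  # expect [0, 1, -2, 1, 0]
```

The recursion for the weights avoids evaluating binomial coefficients of a non-integer argument directly, and the α → 2 limit gives a quick correctness check against the standard second-difference stencil.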
3. Proposed Methodology
3.1. Loss Function
3.1.1. Data Loss
3.1.2. Errors
3.1.3. Physical Loss
3.1.4. Total Loss
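The data and physical losses defined in the preceding subsections combine additively into the total loss. The following is a generic sketch of the standard PINN loss composition; the paper's exact weighting and residual form are not reproduced here, $\hat{p}$ denotes the network approximation, and $r$ is the FFPL residual with the fractional Laplacian evaluated by the FDM:

$$
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{data}} + \mathcal{L}_{\text{phys}},\qquad
\mathcal{L}_{\text{data}} = \frac{1}{N_d} \sum_{i=1}^{N_d} \bigl(\hat{p}(x_i, t_i) - p_i\bigr)^2,\qquad
\mathcal{L}_{\text{phys}} = \frac{1}{N_r} \sum_{j=1}^{N_r} r(x_j, t_j)^2.
$$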
3.2. Optimizer
Operation of the Adam Optimizer
- t is the time step.
- θ_t represents the parameters at time step t.
- g_t is the gradient of the objective function with respect to θ at time step t.
- m_t is the first-moment vector (the mean of the gradients).
- v_t is the second-moment vector (the uncentered variance of the gradients).
- η is the learning rate.
- β_1 and β_2 are the exponential decay rates for the moment estimates.
- ε is a small constant for numerical stability.
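In this notation, the quantities above combine through the standard bias-corrected Adam update, a sketch of the conventional form that the paper's description matches:

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2, \\
\hat{m}_t &= \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \\
\theta_t &= \theta_{t-1} - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.
\end{aligned}
$$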
Algorithm 1: Pseudocode representing the working procedure of FDM-APINN

Start of FDM-APINN
1. Define the neural network
2. Set the number of input layers to 2
3. Set the number of hidden neurons to 50
4. Set the parameters
5. Select the number of iterations and the learning rate
6. Set the values of the physical parameters
7. Generate the training data
8. Create the grid points for x and t
9. Set the initial condition
10. Training loop:
11. Select a mini-batch sample
12. Prepare the input data and convert it to a deep-learning array
13. Compute the gradients and loss functions
14. Update the network with the Adam optimizer
15. Display the loss
16. Evaluate the network:
17. Predict and plot using the test data
18. Calculate the target functions
19. Compute the required derivatives
20. Compute the fractional Laplacian using the FDM
21. Compute the data loss function
22. Compute the physics loss function
23. Compute the gradients and the total loss
End of FDM-APINN
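Below is a minimal PyTorch sketch of the training loop outlined in Algorithm 1. It is an illustrative reconstruction rather than the authors' code: the network width (2 inputs, 50 hidden neurons) follows Algorithm 1, `riesz_laplacian_matrix` is the finite-difference sketch from Section 2, the initial condition is a placeholder, and mini-batching (Algorithm 1, line 11) and the drift term are omitted for brevity, so the residual uses the simplified form ∂p/∂t = −(−Δ)^(α/2) p.

```python
import torch
import torch.nn as nn

# Grid and physical parameters (illustrative values).
alpha, n_x, n_t = 1.75, 100, 50
x = torch.linspace(-1.0, 1.0, n_x)
t = torch.linspace(0.0, 1.0, n_t)
h = float(x[1] - x[0])

# FDM matrix approximating -(-Delta)^{alpha/2}, from the earlier NumPy sketch.
A = torch.tensor(riesz_laplacian_matrix(n_x, h, alpha), dtype=torch.float32)

# Network with 2 inputs (x, t) and 50 hidden neurons, as in Algorithm 1.
model = nn.Sequential(
    nn.Linear(2, 50), nn.Tanh(),
    nn.Linear(50, 50), nn.Tanh(),
    nn.Linear(50, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X, T = torch.meshgrid(x, t, indexing="ij")   # (n_x, n_t) space-time grid
p0 = torch.exp(-X[:, 0] ** 2)                # placeholder initial condition

for it in range(1000):
    xt = torch.stack([X.reshape(-1), T.reshape(-1)], dim=1).requires_grad_(True)
    p = model(xt)                            # (n_x * n_t, 1)
    # dp/dt by automatic differentiation (column 1 of xt is t).
    grads = torch.autograd.grad(p, xt, torch.ones_like(p), create_graph=True)[0]
    p_t = grads[:, 1].reshape(n_x, n_t)
    P = p.reshape(n_x, n_t)
    # Physics loss: residual of p_t = -(-Delta)^{alpha/2} p (drift omitted here).
    loss_phys = ((p_t - A @ P) ** 2).mean()
    # Data loss: match the initial condition at t = 0.
    loss_data = ((P[:, 0] - p0) ** 2).mean()
    loss = loss_data + loss_phys
    opt.zero_grad()
    loss.backward()
    opt.step()
    if it % 200 == 0:
        print(f"iter {it}: loss = {loss.item():.3e}")
```

The key design point mirrored from the paper is the split of labor: automatic differentiation handles the integer-order time derivative, while the precomputed FDM matrix handles the nonlocal fractional Laplacian, which autograd cannot produce directly.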
4. Results and Discussion
4.1. Scenario 1
4.2. Scenario 2
5. Conclusions
- The fractional Fokker–Planck–Lévy (FFPL) equation is solved in this manuscript. The equation contains Lévy noise and a fractional Laplacian, which make it computationally complex.
- An analytical solution is not available. The proposed technique for solving the FFPL equation is a hybrid that combines the finite difference method (FDM), the Adam optimization technique, and physics-informed neural networks (PINNs).
- The FDM computes the fractional Laplacian term appearing in the equation.
- The PINN then approximates the solution by minimizing the overall loss function with the Adam optimizer.
- The manuscript is organized into two main scenarios by varying the value of the fractional order parameter α, and the equation is solved for two values of α.
- For each value of the fractional order parameter, three cases are constructed by discretizing the domain of x into 100, 200, and 500 points.
- The equation is solved via the proposed technique for each case, with 1000, 2000, and 5000 iterations performed individually for each case.
- The loss values for each case are reported in the tables. The losses are very small, ranging between roughly 10⁻⁶ and 10⁻⁵, which indicates the precision of the proposed technique.
- All solutions of the proposed technique are compared with those of the score-fPINN technique, a well-known method for solving fractional differential equations.
- The residual error graphs and tables report the discrepancies between the two techniques, which are on the order of 10⁻⁴ to 10⁻². These small errors indicate the validity of the proposed technique.
- Loss and error graphs are included in the manuscript to explore the proposed technique further, and the histogram demonstrates the consistency of the proposed technique.
- All the results presented in the tables and graphs indicate that the proposed technique is robust and state of the art.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models; World Scientific: Singapore, 2022.
- Cen, Z.; Huang, J.; Xu, A.; Le, A. Numerical approximation of a time-fractional Black–Scholes equation. Comput. Math. Appl. 2018, 75, 2874–2887.
- Nuugulu, S.M.; Gideon, F.; Patidar, K.C. A robust numerical scheme for a time-fractional Black–Scholes partial differential equation describing stock exchange dynamics. Chaos Solitons Fractals 2021, 145, 110753.
- Chen, Q.; Sabir, Z.; Raja, M.A.Z.; Gao, W.; Baskonus, H.M. A fractional study based on the economic and environmental mathematical model. Alex. Eng. J. 2023, 65, 761–770.
- Nikan, O.; Avazzadeh, Z.; Tenreiro Machado, J.A. Localized kernel-based meshless method for pricing financial options underlying fractal transmission system. Math. Methods Appl. Sci. 2024, 47, 3247–3260.
- Abdeljawad, T.; Thabet, S.T.; Kedim, I.; Vivas-Cortez, M. On a new structure of multiterm Hilfer fractional impulsive neutral Levin–Nohel integrodifferential system with variable time delay. AIMS Math. 2024, 9, 7372–7395.
- Rafeeq, A.S.; Thabet, S.T.; Mohammed, M.O.; Kedim, I.; Vivas-Cortez, M. On Caputo–Hadamard fractional pantograph problem of two different orders with Dirichlet boundary conditions. Alex. Eng. J. 2024, 86, 386–398.
- Boutiara, A.; Etemad, S.; Thabet, S.T.; Ntouyas, S.K.; Rezapour, S.; Tariboon, J. A mathematical theoretical study of a coupled fully hybrid (k, Φ)-fractional order system of BVPs in generalized Banach spaces. Symmetry 2023, 15, 1041.
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
- Barron, A.R. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inf. Theory 1993, 39, 930–945.
- Kawaguchi, K. On the theory of implicit deep learning: Global convergence with implicit layers. arXiv 2021, arXiv:2102.07346.
- Huang, W.; Jiang, T.; Zhang, X.; Khan, N.A.; Sulaiman, M. Analysis of beam-column designs by varying axial load with internal forces and bending rigidity using a new soft computing technique. Complexity 2021, 19, 6639032.
- Kharazmi, E.; Cai, M.; Zheng, X.; Zhang, Z.; Lin, G.; Karniadakis, G.E. Identifiability and predictability of integer- and fractional-order epidemiological models using physics-informed neural networks. Nat. Comput. Sci. 2021, 1, 744–753.
- Pang, G.; Lu, L.; Karniadakis, G.E. fPINNs: Fractional physics-informed neural networks. SIAM J. Sci. Comput. 2019, 41, A2603–A2626.
- Avcı, İ.; Lort, H.; Tatlıcıoğlu, B.E. Numerical investigation and deep learning approach for fractal–fractional order dynamics of Hopfield neural network model. Chaos Solitons Fractals 2023, 177, 114302.
- Avcı, İ.; Hussain, A.; Kanwal, T. Investigating the impact of memory effects on computer virus population dynamics: A fractal–fractional approach with numerical analysis. Chaos Solitons Fractals 2023, 174, 113845.
- Ul Rahman, J.; Makhdoom, F.; Ali, A.; Danish, S. Mathematical modelling and simulation of biophysics systems using neural network. Int. J. Mod. Phys. B 2024, 38, 2450066.
- Lou, Q.; Meng, X.; Karniadakis, G.E. Physics-informed neural networks for solving forward and inverse flow problems via the Boltzmann-BGK formulation. J. Comput. Phys. 2021, 447, 110676.
- Hu, Z.; Shi, Z.; Karniadakis, G.E.; Kawaguchi, K. Hutchinson trace estimation for high-dimensional and high-order physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 2024, 424, 116883.
- Hu, Z.; Shukla, K.; Karniadakis, G.E.; Kawaguchi, K. Tackling the curse of dimensionality with physics-informed neural networks. Neural Netw. 2024, 176, 106369.
- Lim, K.L.; Dutta, R.; Rotaru, M. Physics informed neural network using finite difference method. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czechia, 9–12 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1828–1833.
- Lim, K.L. Electrostatic field analysis using physics informed neural net and partial differential equation solver analysis. In Proceedings of the 2024 IEEE 21st Biennial Conference on Electromagnetic Field Computation (CEFC), Jeju, Korea, 2–5 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–2.
- Deng, W. Finite element method for the space and time fractional Fokker–Planck equation. SIAM J. Numer. Anal. 2009, 47, 204–226.
- Sepehrian, B.; Radpoor, M.K. Numerical solution of nonlinear Fokker–Planck equation using finite differences method and the cubic spline functions. Appl. Math. Comput. 2015, 262, 187–190.
- Chen, X.; Yang, L.; Duan, J.; Karniadakis, G.E. Solving inverse stochastic problems from discrete particle observations using the Fokker–Planck equation and physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, B811–B830.
- Zhang, H.; Xu, Y.; Liu, Q.; Wang, X.; Li, Y. Solving Fokker–Planck equations using deep KD-tree with a small amount of data. Nonlinear Dyn. 2022, 108, 4029–4043.
- Zhai, J.; Dobson, M.; Li, Y. A deep learning method for solving Fokker–Planck equations. In Mathematical and Scientific Machine Learning; PMLR: London, UK, 2022; pp. 568–597.
- Wang, T.; Hu, Z.; Kawaguchi, K.; Zhang, Z.; Karniadakis, G.E. Tensor neural networks for high-dimensional Fokker–Planck equations. arXiv 2024, arXiv:2404.05615.
- Lu, Y.; Maulik, R.; Gao, T.; Dietrich, F.; Kevrekidis, I.G.; Duan, J. Learning the temporal evolution of multivariate densities by normalizing flows. Chaos Interdiscip. J. Nonlinear Sci. 2022, 32, 033121.
- Feng, X.; Zeng, L.; Zhou, T. Solving time dependent Fokker–Planck equations via temporal normalizing flow. arXiv 2021, arXiv:2112.14012.
- Guo, L.; Wu, H.; Zhou, T. Normalizing field flows: Solving forward and inverse stochastic differential equations using physics-informed flow models. J. Comput. Phys. 2022, 461, 111202.
- Tang, K.; Wan, X.; Liao, Q. Adaptive deep density approximation for Fokker–Planck equations. J. Comput. Phys. 2022, 457, 111080.
- Hu, Z.; Zhang, Z.; Karniadakis, G.E.; Kawaguchi, K. Score-based physics-informed neural networks for high-dimensional Fokker–Planck equations. arXiv 2024, arXiv:2402.07465.
- Evans, L.C. An Introduction to Stochastic Differential Equations; American Mathematical Society: Providence, RI, USA, 2012; Volume 82.
- Gardiner, C. Stochastic Methods, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2009.
- Oksendal, B. Stochastic Differential Equations, 6th ed.; Springer: Berlin/Heidelberg, Germany, 2003.
- Boffi, N.M.; Vanden-Eijnden, E. Probability flow solution of the Fokker–Planck equation. Mach. Learn. Sci. Technol. 2023, 4, 035012.
- Zeng, S.; Zhang, Z.; Zou, Q. Adaptive deep neural networks methods for high-dimensional partial differential equations. J. Comput. Phys. 2022, 463, 111232.
- Hanna, J.M.; Aguado, J.V.; Comas-Cardona, S.; Askri, R.; Borzacchiello, D. Residual-based adaptivity for two-phase flow simulation in porous media using physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 2022, 396, 115100.
- Wu, C.; Zhu, M.; Tan, Q.; Kartha, Y.; Lu, L. A comprehensive study of nonadaptive and residual-based adaptive sampling for physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 2023, 403, 115671.
- Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228.
- Nabian, M.A.; Gladstone, R.J.; Meidani, H. Efficient training of physics-informed neural networks via importance sampling. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 962–977.
- Jagtap, A.D.; Shin, Y.; Kawaguchi, K.; Karniadakis, G.E. Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions. Neurocomputing 2022, 468, 165–180.
- Jagtap, A.D.; Karniadakis, G.E. How important are activation functions in regression and classification? A survey, performance comparison, and future directions. J. Mach. Learn. Model. Comput. 2023, 4, 21–75.
- Jagtap, A.D.; Kawaguchi, K.; Em Karniadakis, G. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks. Proc. R. Soc. A 2020, 476, 20200334.
- Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081.
- Wang, S.; Yu, X.; Perdikaris, P. When and why PINNs fail to train: A neural tangent kernel perspective. J. Comput. Phys. 2022, 449, 110768.
- Xiang, Z.; Peng, W.; Liu, X.; Yao, W. Self-adaptive loss balanced physics-informed neural networks. Neurocomputing 2022, 496, 11–34.
- Haghighat, E.; Amini, D.; Juanes, R. Physics-informed neural network simulation of multiphase poroelasticity using stress-split sequential training. Comput. Methods Appl. Mech. Eng. 2022, 397, 115141.
- Amini, D.; Haghighat, E.; Juanes, R. Inverse modelling of nonisothermal multiphase poromechanics using physics-informed neural networks. J. Comput. Phys. 2023, 490, 112323.
- Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028.
- Meng, X.; Li, Z.; Zhang, D.; Karniadakis, G.E. PPINN: Parareal physics-informed neural network for time-dependent PDEs. Comput. Methods Appl. Mech. Eng. 2020, 370, 113250.
- Yang, L.; Meng, X.; Karniadakis, G.E. B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. J. Comput. Phys. 2021, 425, 109913.
- Hu, Z.; Jagtap, A.D.; Karniadakis, G.E.; Kawaguchi, K. When do extended physics-informed neural networks (XPINNs) improve generalization? arXiv 2021, arXiv:2109.09444.
- Jagtap, A.D.; Karniadakis, G.E. Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. Commun. Comput. Phys. 2020, 28, 2002–2041.
- Yang, L.; Zhang, D.; Karniadakis, G.E. Physics-informed generative adversarial networks for stochastic differential equations. SIAM J. Sci. Comput. 2020, 42, A292–A317.
- Yu, J.; Lu, L.; Meng, X.; Karniadakis, G.E. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. Comput. Methods Appl. Mech. Eng. 2022, 393, 114823.
- Eshkofti, K.; Hosseini, S.M. A gradient-enhanced physics-informed neural network (gPINN) scheme for the coupled nonfickian/non-Fourierian diffusion-thermoelasticity analysis: A novel gPINN structure. Eng. Appl. Artif. Intell. 2023, 126, 106908.
- Eshkofti, K.; Hosseini, S.M. The novel PINN/gPINN-based deep learning schemes for non-Fickian coupled diffusion-elastic wave propagation analysis. Waves Random Complex Media 2023, 33, 1–24.
| Scenarios | Cases | Iterations |
|---|---|---|
| Scenario 1 | 100 points | 1000 |
| | | 2000 |
| | | 5000 |
| | 200 points | 1000 |
| | | 2000 |
| | | 5000 |
| | 500 points | 1000 |
| | | 2000 |
| | | 5000 |
| Scenario 2 | 100 points | 1000 |
| | | 2000 |
| | | 5000 |
| | 200 points | 1000 |
| | | 2000 |
| | | 5000 |
| | 500 points | 1000 |
| | | 2000 |
| | | 5000 |
Scenario 1 (α = 1.75): average loss values.

| Points | Iterations | Average Loss |
|---|---|---|
| 100 | 1000 | 1.79 × 10⁻⁵ |
| | 2000 | 4.69 × 10⁻⁶ |
| | 5000 | 1.05 × 10⁻⁶ |
| 200 | 1000 | 3.17 × 10⁻⁶ |
| | 2000 | 1.27 × 10⁻⁶ |
| | 5000 | 2.06 × 10⁻⁶ |
| 500 | 1000 | 5.56 × 10⁻⁶ |
| | 2000 | 2.21 × 10⁻⁶ |
| | 5000 | 1.12 × 10⁻⁶ |
Scenario 1: residual errors between FDM-APINN and score-fPINN (100 points).

| x | t | Residual Error (1000 Iterations) | Residual Error (2000 Iterations) | Residual Error (5000 Iterations) |
|---|---|---|---|---|
| −1.00 | 0.00 | −2.29 × 10⁻² | 2.49 × 10⁻² | −1.60 × 10⁻³ |
| −0.80 | 0.10 | 1.90 × 10⁻³ | −5.20 × 10⁻³ | −7.00 × 10⁻⁴ |
| −0.60 | 0.20 | 5.90 × 10⁻³ | −4.00 × 10⁻³ | 2.20 × 10⁻³ |
| −0.40 | 0.30 | 2.80 × 10⁻³ | −8.80 × 10⁻³ | −8.00 × 10⁻⁴ |
| −0.20 | 0.40 | −9.50 × 10⁻³ | 8.60 × 10⁻³ | 1.40 × 10⁻³ |
| 0.00 | 0.50 | −1.00 × 10⁻³ | 4.00 × 10⁻³ | 2.10 × 10⁻³ |
| 0.20 | 0.60 | −1.00 × 10⁻⁴ | 3.20 × 10⁻³ | −1.70 × 10⁻³ |
| 0.40 | 0.70 | −5.20 × 10⁻³ | 6.00 × 10⁻³ | −2.20 × 10⁻³ |
| 0.60 | 0.80 | 7.70 × 10⁻³ | 1.90 × 10⁻³ | −1.20 × 10⁻³ |
| 0.80 | 0.90 | 3.60 × 10⁻³ | −4.00 × 10⁻³ | 6.00 × 10⁻⁴ |
| 1.00 | 1.00 | 9.90 × 10⁻³ | 5.00 × 10⁻⁴ | 1.60 × 10⁻³ |
Scenario 1: residual errors between FDM-APINN and score-fPINN (200 points).

| x | t | Residual Error (1000 Iterations) | Residual Error (2000 Iterations) | Residual Error (5000 Iterations) |
|---|---|---|---|---|
| −1.00 | 0.00 | 1.51 × 10⁻² | 1.70 × 10⁻³ | −3.40 × 10⁻³ |
| −0.80 | 0.10 | 5.70 × 10⁻³ | 3.50 × 10⁻³ | −1.10 × 10⁻³ |
| −0.60 | 0.20 | 1.00 × 10⁻³ | −8.00 × 10⁻⁴ | 1.30 × 10⁻³ |
| −0.40 | 0.30 | −4.50 × 10⁻³ | 4.00 × 10⁻⁴ | −4.00 × 10⁻⁴ |
| −0.20 | 0.40 | 3.30 × 10⁻³ | 2.40 × 10⁻³ | 2.60 × 10⁻³ |
| 0.00 | 0.50 | 3.10 × 10⁻³ | −9.00 × 10⁻⁴ | −1.90 × 10⁻³ |
| 0.20 | 0.60 | −1.60 × 10⁻³ | −1.10 × 10⁻³ | 9.00 × 10⁻⁴ |
| 0.40 | 0.70 | −7.00 × 10⁻³ | −1.50 × 10⁻³ | 9.00 × 10⁻⁴ |
| 0.60 | 0.80 | 3.70 × 10⁻³ | −2.10 × 10⁻³ | −3.00 × 10⁻³ |
| 0.80 | 0.90 | −8.30 × 10⁻³ | 0.00 | −1.80 × 10⁻³ |
| 1.00 | 1.00 | 1.00 × 10⁻³ | 4.60 × 10⁻³ | −2.30 × 10⁻³ |
Scenario 1: residual errors between FDM-APINN and score-fPINN (500 points).

| x | t | Residual Error (1000 Iterations) | Residual Error (2000 Iterations) | Residual Error (5000 Iterations) |
|---|---|---|---|---|
| −1.00 | 0.00 | 1.30 × 10⁻³ | 2.80 × 10⁻³ | −7.00 × 10⁻⁴ |
| −0.80 | 0.10 | 7.10 × 10⁻³ | 2.40 × 10⁻³ | −2.20 × 10⁻³ |
| −0.60 | 0.20 | 7.20 × 10⁻³ | 3.70 × 10⁻³ | −3.70 × 10⁻³ |
| −0.40 | 0.30 | −6.00 × 10⁻⁴ | 1.20 × 10⁻³ | 2.10 × 10⁻³ |
| −0.20 | 0.40 | −6.40 × 10⁻³ | 1.80 × 10⁻³ | −1.70 × 10⁻³ |
| 0.00 | 0.50 | 2.30 × 10⁻³ | −5.10 × 10⁻³ | 3.20 × 10⁻³ |
| 0.20 | 0.60 | −5.10 × 10⁻³ | 4.10 × 10⁻³ | 6.20 × 10⁻³ |
| 0.40 | 0.70 | −5.20 × 10⁻³ | 1.46 × 10⁻² | −2.00 × 10⁻⁴ |
| 0.60 | 0.80 | −1.33 × 10⁻² | 6.00 × 10⁻³ | 5.80 × 10⁻³ |
| 0.80 | 0.90 | −9.40 × 10⁻³ | 6.50 × 10⁻³ | 2.30 × 10⁻³ |
| 1.00 | 1.00 | 1.00 × 10⁻⁴ | 6.00 × 10⁻³ | 2.00 × 10⁻⁴ |
Scenario 2 (α = 1.75): average loss values.

| Points | Iterations | Average Loss |
|---|---|---|
| 100 | 1000 | 3.38 × 10⁻⁵ |
| | 2000 | 2.63 × 10⁻⁶ |
| | 5000 | 1.90 × 10⁻⁶ |
| 200 | 1000 | 9.25 × 10⁻⁶ |
| | 2000 | 5.61 × 10⁻⁶ |
| | 5000 | 3.30 × 10⁻⁶ |
| 500 | 1000 | 2.36 × 10⁻⁵ |
| | 2000 | 4.27 × 10⁻⁶ |
| | 5000 | 4.33 × 10⁻⁶ |
Scenario 2: residual errors between FDM-APINN and score-fPINN (100 points).

| x | t | Residual Error (1000 Iterations) | Residual Error (2000 Iterations) | Residual Error (5000 Iterations) |
|---|---|---|---|---|
| −1.00 | 0.00 | −5.00 × 10⁻⁴ | 3.10 × 10⁻³ | −1.36 × 10⁻² |
| −0.80 | 0.10 | 2.80 × 10⁻³ | 1.60 × 10⁻³ | −6.00 × 10⁻³ |
| −0.60 | 0.20 | −1.80 × 10⁻³ | −2.70 × 10⁻³ | −2.40 × 10⁻³ |
| −0.40 | 0.30 | 6.60 × 10⁻³ | −1.18 × 10⁻² | −3.20 × 10⁻³ |
| −0.20 | 0.40 | 5.80 × 10⁻³ | 9.00 × 10⁻⁴ | 3.20 × 10⁻³ |
| 0.00 | 0.50 | 9.80 × 10⁻³ | −1.80 × 10⁻³ | 6.00 × 10⁻⁴ |
| 0.20 | 0.60 | 1.40 × 10⁻³ | −2.30 × 10⁻³ | −1.90 × 10⁻³ |
| 0.40 | 0.70 | 3.50 × 10⁻³ | 1.30 × 10⁻³ | 3.10 × 10⁻³ |
| 0.60 | 0.80 | −6.00 × 10⁻³ | 1.40 × 10⁻³ | 7.00 × 10⁻³ |
| 0.80 | 0.90 | −7.60 × 10⁻³ | −6.00 × 10⁻⁴ | 6.40 × 10⁻³ |
| 1.00 | 1.00 | 5.00 × 10⁻⁴ | −3.60 × 10⁻³ | 8.10 × 10⁻³ |
Scenario 2: residual errors between FDM-APINN and score-fPINN (200 points).

| x | t | Residual Error (1000 Iterations) | Residual Error (2000 Iterations) | Residual Error (5000 Iterations) |
|---|---|---|---|---|
| −1.00 | 0.00 | 2.17 × 10⁻² | 3.10 × 10⁻³ | 1.90 × 10⁻³ |
| −0.80 | 0.10 | 2.30 × 10⁻³ | −6.00 × 10⁻⁴ | 4.10 × 10⁻³ |
| −0.60 | 0.20 | −3.10 × 10⁻³ | 3.00 × 10⁻⁴ | 3.30 × 10⁻³ |
| −0.40 | 0.30 | −4.00 × 10⁻³ | 2.00 × 10⁻⁴ | 2.40 × 10⁻³ |
| −0.20 | 0.40 | 3.30 × 10⁻³ | 9.00 × 10⁻⁴ | −4.10 × 10⁻³ |
| 0.00 | 0.50 | 4.60 × 10⁻³ | 1.90 × 10⁻³ | −2.50 × 10⁻³ |
| 0.20 | 0.60 | 6.00 × 10⁻³ | 9.60 × 10⁻³ | −3.20 × 10⁻³ |
| 0.40 | 0.70 | 7.30 × 10⁻³ | 1.60 × 10⁻³ | −2.30 × 10⁻³ |
| 0.60 | 0.80 | 1.06 × 10⁻² | −7.00 × 10⁻⁴ | −4.20 × 10⁻³ |
| 0.80 | 0.90 | −4.00 × 10⁻⁴ | 3.30 × 10⁻³ | −6.50 × 10⁻³ |
| 1.00 | 1.00 | 8.80 × 10⁻³ | 7.90 × 10⁻³ | −1.21 × 10⁻² |
Scenario 2: residual errors between FDM-APINN and score-fPINN (500 points).

| x | t | Residual Error (1000 Iterations) | Residual Error (2000 Iterations) | Residual Error (5000 Iterations) |
|---|---|---|---|---|
| −1.00 | 0.00 | 9.00 × 10⁻³ | 8.50 × 10⁻³ | 7.00 × 10⁻³ |
| −0.80 | 0.10 | −2.10 × 10⁻³ | 1.29 × 10⁻² | 5.30 × 10⁻³ |
| −0.60 | 0.20 | 3.40 × 10⁻³ | 9.50 × 10⁻³ | 3.30 × 10⁻³ |
| −0.40 | 0.30 | −2.60 × 10⁻³ | 2.00 × 10⁻³ | 4.80 × 10⁻³ |
| −0.20 | 0.40 | 5.30 × 10⁻³ | 5.00 × 10⁻³ | 5.00 × 10⁻⁴ |
| 0.00 | 0.50 | 5.10 × 10⁻³ | −6.00 × 10⁻⁴ | 1.80 × 10⁻³ |
| 0.20 | 0.60 | 1.20 × 10⁻³ | 4.20 × 10⁻³ | 2.50 × 10⁻³ |
| 0.40 | 0.70 | −3.60 × 10⁻³ | 1.10 × 10⁻³ | −4.20 × 10⁻³ |
| 0.60 | 0.80 | −5.70 × 10⁻³ | −7.70 × 10⁻³ | −2.50 × 10⁻³ |
| 0.80 | 0.90 | −2.80 × 10⁻³ | −4.50 × 10⁻³ | −5.80 × 10⁻³ |
| 1.00 | 1.00 | −1.95 × 10⁻² | −7.90 × 10⁻³ | −5.30 × 10⁻³ |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).