VaR Estimation with Quantum Computing Noise Correction Using Neural Networks
Abstract
1. Introduction
- We present a method to summarize market behavior with a single Gaussian distribution, addressing the problem of handling many assets in a portfolio.
- A comparison with current parametric and Monte Carlo estimations is discussed. As a result, the risk of a portfolio with an arbitrary number of assets can be handled.
- We show that, using the above-mentioned Gaussian and a Taylor series expansion, the VaR can be calculated using quantum computing.
- We present a method to mitigate the noise effect in actual NISQ quantum computers by using neural networks.
2. Monte Carlo VaR Estimation
2.1. Monte Carlo Method
- Calculate the covariance matrix COV from the assets’ series increments.
- Calculate the Cholesky matrix CHO from the COV matrix.
- Repeat:
- (a) Generate n samples (Gaussian, zero mean and unit standard deviation), one for each asset, to obtain the sample vector x.
- (b) Calculate the scenario y = CHO·x.
- (c) Using the vector of market values v, calculate the log-normal result sample: rᵢ = vᵢ(e^{yᵢ} − 1).
- (d) Calculate the total win/loss value by adding all n components: s = Σᵢ rᵢ.
- Estimate the VaR by calculating the desired percentile of s (log-normal distribution).
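The Monte Carlo procedure above can be sketched in NumPy. The asset increment series, market values, percentile, and sample count below are illustrative assumptions, not the paper’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy price-increment series for n assets (rows: time, cols: assets); hypothetical data
n_assets, n_steps = 3, 500
increments = rng.normal(0.0, 0.01, size=(n_steps, n_assets))

cov = np.cov(increments, rowvar=False)   # covariance matrix (COV)
cho = np.linalg.cholesky(cov)            # Cholesky factor (CHO), lower triangular

v = np.array([1000.0, 2000.0, 1500.0])   # market values of the assets (hypothetical)

n_samples = 100_000
x = rng.standard_normal((n_samples, n_assets))  # independent N(0, 1) samples per asset
y = x @ cho.T                                   # correlated scenarios y = CHO @ x
r = v * (np.exp(y) - 1.0)                       # log-normal win/loss sample per asset
s = r.sum(axis=1)                               # total portfolio win/loss

var_99 = -np.percentile(s, 1)                   # VaR at the 99% confidence level
```

The Cholesky factor injects the empirical correlations into the independent Gaussian draws, which is the core of the scenario-generation step.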
2.2. Parametric Estimation
3. Linear Approach to Monte Carlo Estimation
3.1. Taylor Series Approximation
- Calculate the covariance matrix COV from the assets’ series.
- Calculate the Cholesky matrix CHO from the COV matrix.
- Calculate w = vᵀ·CHO using the market values vector v.
- Calculate the standard deviation for the loss variable s: σ = ‖w‖.
- Repeat: generate one sample of the loss random variable s, drawn from a Gaussian with zero mean and standard deviation σ.
- Estimate the VaR by calculating the desired percentile of s.
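Under the first-order Taylor approximation e^y − 1 ≈ y, the portfolio loss collapses to a single one-dimensional Gaussian, so only its standard deviation is needed. A minimal sketch follows; the covariance matrix and market values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical covariance matrix of the assets' log-increments
cov = np.array([[1.0e-4, 2.0e-5, 0.0],
                [2.0e-5, 1.0e-4, 1.0e-5],
                [0.0,    1.0e-5, 1.0e-4]])
cho = np.linalg.cholesky(cov)
v = np.array([1000.0, 2000.0, 1500.0])   # market values (hypothetical)

# First-order Taylor: s ≈ v·y = (v^T CHO) x, a single Gaussian
w = v @ cho
sigma = np.linalg.norm(w)                # standard deviation of the loss variable s

s = rng.normal(0.0, sigma, size=100_000) # sample the 1-D Gaussian directly
var_99 = -np.percentile(s, 1)
```

Note that σ² = ‖vᵀCHO‖² = vᵀ·COV·v, so the expensive per-scenario matrix work of the full Monte Carlo method reduces to one vector norm.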
3.2. Cholesky Matrix Calculation
4. Quantum Computing
4.1. Distribution Window Approach
4.2. Statevector Simulation
5. Neural Networks
5.1. Cost Function
5.2. Layer Activation
5.3. Architecture
- Qubits: The ability to change the number of qubits in the quantum circuit is an important parameter. Changes in the number of qubits may be driven by the specific requirements of the assets or the preferences of the designer. While increasing the number of qubits can improve the precision of the system, it also increases the computing cost. A balance between cost and precision is therefore needed, which can be found through testing and result analysis. For practical reasons (mainly, quantum computer availability and calculation time), we limited the number of qubits to 5, but the method presented in this paper is applicable to any number.
- Backend: The flexibility of choosing a real quantum backend or a simulator without needing to change the design of the program makes development much easier. Usually, simulators are used for development, whereas the tests are typically performed in real quantum computers. Consequently, the ability to seamlessly change the backend makes the process much faster.
- Number of assets: When testing the developed design, we need to change the number of assets to check whether the model works correctly in different scenarios. If these changes are made manually, human errors could happen, which can be avoided if the number of assets is changed automatically depending on the provided data. For training purposes, our dataset had five assets with a VaR of USD 1380, which was calculated with parametric estimation for a given percentile.
- Grouped assets: The number of grouped assets can easily be changed according to the preferences of the developer. While any arbitrary number can be used for this parameter, the entire design may fail if the chosen value is not appropriate. For example, if we have 100 values for each asset, it does not make sense to group them into 50 groups of 2 values, because there is almost nothing to learn from each group. On the other hand, if groups contain many values, the number of groups is reduced, which is also not convenient for the training process. Therefore, it is important that this parameter can be freely changed if we want to find the optimal number of grouped assets for each dataset.
- Number of layers: The number of layers composing the model can be changed without any difficulty since the entire model is designed as an independent function that can be replaced at any moment. In this paper, we conducted different tests with two and three dense layers to conclude that the configuration with three layers provides better results.
- Number of neurons: Similar to the number of layers, the number of neurons in each layer can also be easily changed according to the characteristics and volume of the asset data, and each layer can be changed independently. In our case, we used 500 neurons in each layer, a number high enough for learning but not so high as to cause overfitting.
- Expressiveness and non-linearity: A neural network with three dense hidden layers provides the flexibility to model complex, non-linear relationships between the input data and the desired output (qubit rotations). This is crucial in capturing the intricate quantum mechanics involved in qubit rotations.
- Hierarchy of features: Three hidden layers allow for the hierarchical extraction of features from the input data. Each layer can learn increasingly abstract and higher-level representations of the qubit states, aiding in better understanding and approximating the necessary rotations.
- Generalization and prediction: A well-designed neural network with three dense hidden layers can generalize well to unseen data, enabling accurate predictions of the rotations needed for various output qubit configurations. This is crucial for the adaptability and performance of the quantum computer.
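As a rough sketch of the architecture described above (three dense hidden layers of 500 ReLU neurons producing qubit rotation angles), here is a plain-NumPy forward pass. The input width, the output width (one rotation angle per qubit), and the weight initialization are assumptions for illustration, not the paper’s exact configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

def dense(x, w, b, relu=False):
    """One fully connected layer: x @ w + b, with optional ReLU activation."""
    z = x @ w + b
    return np.maximum(z, 0.0) if relu else z

n_in, n_hidden, n_out = 20, 500, 5   # assumed: grouped-asset inputs, 500 neurons, 5 rotation angles
shapes = [(n_in, n_hidden), (n_hidden, n_hidden), (n_hidden, n_hidden), (n_hidden, n_out)]
params = [(rng.normal(0.0, np.sqrt(2.0 / fi), size=(fi, fo)), np.zeros(fo))  # He-style init
          for fi, fo in shapes]

x = rng.normal(size=(8, n_in))       # a batch of 8 input vectors
h = x
for w, b in params[:-1]:
    h = dense(h, w, b, relu=True)    # three hidden layers of 500 ReLU neurons
angles = dense(h, *params[-1])       # linear output: predicted qubit rotations
```

In practice this would be built with a standard framework such as Keras; the sketch only shows the tensor shapes flowing through the three hidden layers of Table 1.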
5.4. Execution in Real Quantum Computers
- As we used 5 qubits, the measurement outcomes can take up to 2⁵ = 32 values. However, in Figure 10, we only see 20. This is because Qiskit removes from the plot the values with the lowest occurrence counts.
- The estimated VaR is USD 1376, very close to the actual value of USD 1380 used as the target for the neural network, so the overall system seems to work well.
- The effect of the noise and the counter-effect of the rotations found by the neural network can be seen in Figure 10. In the ideal, noise-free case, the distribution should be similar to the distribution tail shown in Figure 6. Although it resembles that shape, the neural network clearly modifies it. The important point is that this modified shape compensates for the noise effect.
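To illustrate how an estimate can be read back from the measured distribution, the sketch below takes a hypothetical 5-qubit counts dictionary (the format returned by Qiskit's `result.get_counts()`), normalizes it over the 2⁵ = 32 bins, and locates the tail percentile. The counts, the loss window, and the linear bin-to-value mapping are all assumptions for illustration, not the paper’s actual configuration:

```python
import numpy as np

# Hypothetical 5-qubit measurement counts in Qiskit's get_counts() format (illustrative only)
counts = {"00000": 120, "00001": 340, "00010": 510, "00011": 30, "00100": 24}

total = sum(counts.values())
p = np.zeros(32)                     # one probability per 5-qubit outcome
for bits, c in counts.items():
    p[int(bits, 2)] = c / total

lo, hi = -2000.0, 0.0                # assumed loss window mapped linearly onto the 32 bins
edges = np.linspace(lo, hi, 33)
centers = (edges[:-1] + edges[1:]) / 2

cdf = np.cumsum(p)
var_bin = int(np.searchsorted(cdf, 0.01))  # first bin where the CDF reaches the 1% tail
var_estimate = -centers[var_bin]           # VaR reported as a positive loss
```

The same computation works whether the counts come from the statevector simulator or from a real backend; only the noise in `counts` changes.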
5.5. Discussion
6. Related Work
7. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
CDF | Cumulative Distribution Function |
CNOT | Controlled-NOT |
COV | Covariance matrix |
CHO | Cholesky matrix |
IBM | International Business Machines |
LDL | Lower triangular matrix, Diagonal matrix, Lower triangular matrix transposed |
NISQ | Noisy Intermediate-Scale Quantum |
PDF | Probability Density Function |
QC | Quantum Computing |
QE | Quantum Experience |
ReLU | Rectified Linear Unit |
USD | United States Dollar |
VaR | Value at Risk |
XAI | eXplainable Artificial Intelligence |
Layer Type | Input | Dense | Dense | Dense | Output
---|---|---|---|---|---
Neurons | | 500 | 500 | 500 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
de Pedro, L.; París Murillo, R.; López de Vergara, J.E.; López-Buedo, S.; Gómez-Arribas, F.J. VaR Estimation with Quantum Computing Noise Correction Using Neural Networks. Mathematics 2023, 11, 4355. https://doi.org/10.3390/math11204355