The Forecasting of a Leading Country’s Government Expenditure Using a Recurrent Neural Network with a Gated Recurrent Unit
Abstract
1. Introduction
- Our GRU model accurately predicts government spending on the basis of financial GDP indicators.
- Our GRU model outperformed ARIMA, exponential smoothing (ETS), extreme gradient boosting (XGBoost), SVR, CNN, and LSTM models in evaluation experiments.
- The strengths and weaknesses of neural network models in predicting government expenditure were explored.
- Such neural network models can capture complex nonlinear relationships among economic factors, enabling accurate predictions.
2. Methods
2.1. Autoregressive Integrated Moving Average
2.2. Exponential Smoothing
2.3. Support Vector Regression
2.4. Extreme Gradient Boosting
2.5. Convolutional Neural Network
2.6. Long Short-Term Memory
2.7. Gated Recurrent Unit
2.8. GDP Indicators Forecasting Framework
Algorithm 1: GRU for Government Expenditure Forecasting
Input: Let X be the dataset. Desired output: prediction of the best model.
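The full pseudocode of Algorithm 1 is not reproduced in this outline. As a minimal sketch only, the following Keras code builds a two-layer GRU forecaster using the Adam configuration reported in the tuning table below (two GRU layers of 200 units, batch size 256, learning rate 0.004, 1000 epochs); the windowing helper, the look-back length `n_steps`, and the synthetic series are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, GRU, Input

def make_windows(series, n_steps):
    """Slice a 1-D series into (samples, n_steps, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    return np.array(X)[..., np.newaxis], np.array(y)

# Placeholder series; in the study each country's expenditure series would be used.
series = np.random.rand(31).astype("float32")
n_steps = 4  # assumed look-back window, not taken from the paper
X, y = make_windows(series, n_steps)

# Two stacked GRU layers of 200 units, mirroring the Adam row of the tuning table.
model = Sequential([
    Input(shape=(n_steps, 1)),
    GRU(200, return_sequences=True),
    GRU(200),
    Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.004), loss="mse")
model.fit(X, y, epochs=1000, batch_size=256, verbose=0)

next_value = model.predict(X[-1:], verbose=0)  # one-step-ahead forecast
```

In practice the series would be split into training and test periods before fitting, and the model-selection step of Algorithm 1 would compare the resulting errors across the candidate models.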
2.9. Evaluation Criteria
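The evaluation criteria themselves are not reproduced in this outline, but the results tables below report the mean absolute error (MAE), root-mean-square error (RMSE), and mean absolute percentage error (MAPE). Their standard definitions, with $y_t$ the observed value, $\hat{y}_t$ the forecast, and $n$ the number of evaluation points, are:

```latex
\mathrm{MAE}=\frac{1}{n}\sum_{t=1}^{n}\lvert y_t-\hat{y}_t\rvert,\qquad
\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(y_t-\hat{y}_t\right)^{2}},\qquad
\mathrm{MAPE}=\frac{100\%}{n}\sum_{t=1}^{n}\left\lvert\frac{y_t-\hat{y}_t}{y_t}\right\rvert .
```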
3. Results and Discussion
3.1. Data Source
3.2. GRU Architecture Results and Sensitivity Analysis
3.3. Experimental System
3.4. Comparison and Discussion
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Descriptive statistics of the government expenditure data for each country (31 observations per country).

Country | Count | Min | Max | Mean | Median | SD | Q1 | Q3 | IQR |
---|---|---|---|---|---|---|---|---|---|
Australia | 31.0 | −3.610 | 6.986 | 2.829 | 3.091 | 1.849 | 1.780 | 3.714 | 1.934 |
Brazil | 31.0 | 0.253 | 5.034 | 2.660 | 3.042 | 1.398 | 1.744 | 3.711 | 1.968 |
Canada | 31.0 | 0.141 | 9.171 | 2.785 | 2.261 | 1.992 | 1.531 | 3.736 | 2.205 |
China | 31.0 | 0.966 | 6.187 | 3.311 | 3.487 | 1.388 | 2.376 | 4.418 | 2.043 |
France | 31.0 | 0.203 | 3.875 | 1.897 | 1.569 | 0.960 | 1.277 | 2.549 | 1.271 |
Germany | 31.0 | −0.725 | 12.732 | 2.018 | 1.843 | 2.350 | 0.563 | 2.608 | 2.045 |
India | 31.0 | 0.027 | 3.621 | 1.254 | 1.056 | 0.863 | 0.612 | 1.862 | 1.251 |
Italy | 31.0 | −1.167 | 2.981 | 0.854 | 0.724 | 0.878 | 0.284 | 1.312 | 1.028 |
Japan | 31.0 | −0.052 | 1.221 | 0.238 | 0.129 | 0.288 | 0.046 | 0.350 | 0.304 |
Korea | 31.0 | 0.212 | 2.156 | 0.855 | 0.780 | 0.495 | 0.496 | 1.033 | 0.537 |
Mexico | 31.0 | 0.877 | 3.988 | 2.491 | 2.564 | 0.751 | 2.181 | 2.892 | 0.710 |
Russia | 31.0 | 0.175 | 4.503 | 1.649 | 1.201 | 1.243 | 0.583 | 2.577 | 1.994 |
Spain | 31.0 | 0.640 | 6.770 | 2.789 | 2.405 | 1.343 | 1.869 | 3.442 | 1.572 |
UK | 31.0 | −0.864 | 11.929 | 3.812 | 2.280 | 3.238 | 1.735 | 5.837 | 4.102 |
USA | 31.0 | 0.465 | 3.406 | 1.593 | 1.473 | 0.762 | 1.034 | 2.058 | 1.024 |
Hyperparameter settings and resulting MAPE (%) for the GRU, LSTM, and CNN models under the Adam, RMSprop, and Adagrad optimizers. For the CNN, "Layer 1" is the Conv1D layer size and "Layer 2" the dense layer size.

Optimizer | Method | Epochs | Layer 1 | Layer 2 | Batch size | Learning rate | MAPE (%) |
---|---|---|---|---|---|---|---|
Adam | GRU | 1000 | 200 | 200 | 256 | 0.004 | 2.774 |
Adam | LSTM | 1000 | 200 | 200 | 256 | 0.004 | 4.208 |
Adam | CNN | 1000 | 200 | 50 | 256 | 0.004 | 3.389 |
RMSprop | GRU | 100 | 400 | 400 | 256 | 0.004 | 2.809 |
RMSprop | LSTM | 100 | 400 | 400 | 256 | 0.004 | 4.209 |
RMSprop | CNN | 100 | 400 | 100 | 256 | 0.004 | 4.033 |
Adagrad | GRU | 32 | 100 | 100 | 256 | 0.004 | 2.852 |
Adagrad | LSTM | 32 | 100 | 100 | 256 | 0.004 | 4.207 |
Adagrad | CNN | 32 | 100 | 10 | 256 | 0.004 | 86.798 |
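As an illustration of how the optimizer comparison in the table above could be organized, the following is a sketch under simplifying assumptions (the data arrays `X`, `y`, the window shape, and the use of the training history for evaluation are placeholders, not the authors' exact protocol; only the optimizer, epoch, and layer-size settings are taken from the table).

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, GRU, Input

# Optimizer settings taken from the table above; everything else is assumed.
CONFIGS = {
    "Adam":    dict(opt=lambda: tf.keras.optimizers.Adam(0.004),    epochs=1000, units=200),
    "RMSprop": dict(opt=lambda: tf.keras.optimizers.RMSprop(0.004), epochs=100,  units=400),
    "Adagrad": dict(opt=lambda: tf.keras.optimizers.Adagrad(0.004), epochs=32,   units=100),
}

def build_gru(units, n_steps, n_features=1):
    """Two stacked GRU layers followed by a single-output dense layer."""
    return Sequential([
        Input(shape=(n_steps, n_features)),
        GRU(units, return_sequences=True),
        GRU(units),
        Dense(1),
    ])

def sweep(X, y, n_steps):
    """Fit one GRU per optimizer configuration and report its final training MAPE."""
    results = {}
    for name, cfg in CONFIGS.items():
        model = build_gru(cfg["units"], n_steps)
        model.compile(optimizer=cfg["opt"](), loss="mse",
                      metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
        history = model.fit(X, y, epochs=cfg["epochs"], batch_size=256, verbose=0)
        results[name] = history.history["mean_absolute_percentage_error"][-1]
    return results
```

In the reported experiments the errors are computed on held-out data rather than the training history, so a faithful sweep would evaluate each fitted model on a separate test split.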
Sensitivity of the GRU model to the layer size (L, units per GRU layer) and the number of training epochs (ep).

Metric | L250, ep100 | L150, ep100 | L250, ep32 | L150, ep32 | L200, ep1000 |
---|---|---|---|---|---|
MAE | 0.518 | 0.518 | 0.510 | 0.521 | 0.528 |
RMSE | 0.741 | 0.738 | 0.720 | 0.743 | 0.747 |
MAPE (%) | 2.794 | 2.784 | 2.775 | 2.804 | 2.774 |
Forecasting error comparison across models for each country.

Country | Metric | ARIMA | ETS | SVR | XGB | CNN | LSTM | BiRNN | ASRNN | GRU |
---|---|---|---|---|---|---|---|---|---|---|
Australia | MAE | 24.854 | 0.513 | 1.267 | 0.747 | 0.788 | 1.345 | 1.341 | 0.977 | 0.618 |
Australia | RMSE | 27.672 | 0.782 | 1.492 | 0.970 | 1.082 | 1.548 | 1.544 | 0.959 | 0.852 |
Australia | MAPE (%) | 135.732 | 2.585 | 6.414 | 3.794 | 3.976 | 6.818 | 5.633 | 2.967 | 3.089 |
Brazil | MAE | 4.637 | 0.186 | 1.136 | 0.750 | 0.5134 | 0.977 | 0.975 | 0.982 | 0.322 |
Brazil | RMSE | 4.954 | 0.286 | 1.164 | 0.821 | 0.6463 | 1.008 | 1.007 | 0.884 | 0.379 |
Brazil | MAPE (%) | 24.177 | 1.196 | 5.628 | 3.727 | 2.538 | 4.837 | 3.551 | 2.781 | 1.594 |
Canada | MAE | 30.841 | 0.543 | 0.585 | 0.585 | 0.587 | 0.867 | 0.865 | 1.053 | 0.526 |
Canada | RMSE | 35.102 | 0.773 | 0.926 | 0.836 | 0.920 | 1.108 | 1.106 | 1.155 | 0.825 |
Canada | MAPE (%) | 150.088 | 1.844 | 2.665 | 2.684 | 2.679 | 4.007 | 3.098 | 2.998 | 2.402 |
China | MAE | 1.332 | 0.973 | 0.786 | 0.721 | 0.310 | 0.669 | 0.670 | 0.628 | 0.166 |
China | RMSE | 1.592 | 0.209 | 0.850 | 0.772 | 0.395 | 0.699 | 0.700 | 0.328 | 0.197 |
China | MAPE (%) | 9.100 | 1.128 | 4.741 | 4.363 | 1.874 | 4.043 | 3.668 | 2.885 | 1.005 |
France | MAE | 35.299 | 0.712 | 0.578 | 0.569 | 0.570 | 0.769 | 0.767 | 1.007 | 0.586 |
France | RMSE | 39.687 | 0.798 | 0.728 | 0.993 | 0.775 | 1.010 | 1.008 | 0.864 | 0.796 |
France | MAPE (%) | 150.914 | 2.333 | 2.403 | 2.319 | 2.363 | 3.167 | 2.553 | 3.225 | 2.422 |
Germany | MAE | 36.711 | 0.575 | 0.788 | 0.854 | 0.625 | 0.753 | 0.752 | 1.235 | 0.591 |
Germany | RMSE | 41.099 | 0.990 | 1.262 | 1.150 | 1.089 | 1.206 | 1.205 | 1.255 | 0.790 |
Germany | MAPE (%) | 188.378 | 2.666 | 3.675 | 4.049 | 2.894 | 3.509 | 3.029 | 2.884 | 2.810 |
India | MAE | 3.752 | 0.543 | 0.570 | 0.479 | 0.636 | 0.694 | 0.694 | 1.439 | 0.419 |
India | RMSE | 3.902 | 0.735 | 0.841 | 0.521 | 0.909 | 1.007 | 1.006 | 1.160 | 0.617 |
India | MAPE (%) | 34.850 | 4.746 | 4.861 | 4.286 | 5.468 | 5.922 | 4.771 | 3.274 | 3.584 |
Italy | MAE | 37.742 | 0.460 | 0.692 | 0.662 | 0.530 | 0.588 | 0.587 | 0.776 | 0.598 |
Italy | RMSE | 42.589 | 0.852 | 0.817 | 0.959 | 0.849 | 0.957 | 0.957 | 0.923 | 0.936 |
Italy | MAPE (%) | 196.691 | 2.260 | 3.527 | 3.318 | 2.641 | 2.916 | 1.072 | 2.884 | 2.979 |
Japan | MAE | 18.049 | 0.335 | 1.133 | 0.401 | 0.398 | 0.451 | 0.451 | 0.617 | 0.366 |
Japan | RMSE | 19.050 | 0.588 | 1.556 | 0.581 | 0.588 | 0.719 | 0.718 | 0.626 | 0.532 |
Japan | MAPE (%) | 100.373 | 1.627 | 5.577 | 1.964 | 1.952 | 2.195 | 0.225 | 3.164 | 1.796 |
Korea | MAE | 13.022 | 0.973 | 1.892 | 0.520 | 1.029 | 1.074 | 1.070 | 0.884 | 0.540 |
Korea | RMSE | 13.252 | 1.086 | 2.381 | 0.646 | 1.313 | 1.526 | 1.520 | 2.861 | 0.644 |
Korea | MAPE (%) | 99.823 | 5.972 | 11.182 | 3.090 | 6.224 | 6.245 | 6.400 | 2.776 | 3.235 |
Mexico | MAE | 1.774 | 0.436 | 0.441 | 0.432 | 0.477 | 0.555 | 0.554 | 2.045 | 0.441 |
Mexico | RMSE | 2.182 | 0.617 | 0.762 | 0.631 | 0.589 | 0.746 | 0.745 | 1.055 | 0.616 |
Mexico | MAPE (%) | 18.7325 | 3.513 | 3.519 | 3.506 | 3.953 | 4.478 | 3.527 | 2.818 | 3.596 |
Russia | MAE | 40.023 | 0.661 | 0.644 | 0.652 | 0.724 | 0.828 | 0.828 | 0.791 | 0.832 |
Russia | RMSE | 45.101 | 1.079 | 1.060 | 0.980 | 1.165 | 1.306 | 1.305 | 1.163 | 1.005 |
Russia | MAPE (%) | 222.661 | 4.420 | 3.277 | 3.337 | 3.684 | 4.204 | 3.057 | 3.116 | 3.362 |
Spain | MAE | 52.074 | 0.712 | 0.765 | 0.731 | 0.798 | 0.802 | 0.800 | 1.487 | 0.675 |
Spain | RMSE | 58.233 | 1.258 | 1.437 | 1.039 | 1.253 | 1.384 | 1.383 | 1.483 | 1.242 |
Spain | MAPE (%) | 279.389 | 3.399 | 3.637 | 3.604 | 3.877 | 3.831 | 2.982 | 1.975 | 3.214 |
UK | MAE | 60.055 | 0.850 | 0.928 | 0.916 | 0.963 | 1.047 | 1.046 | 0.055 | 1.021 |
UK | RMSE | 67.578 | 1.499 | 1.506 | 1.515 | 1.495 | 1.715 | 1.715 | 1.781 | 1.456 |
UK | MAPE (%) | 311.637 | 4.010 | 4.450 | 4.362 | 4.651 | 4.973 | 2.478 | 2.578 | 4.957 |
USA | MAE | 12.714 | 0.239 | 0.737 | 0.208 | 0.2950 | 0.286 | 0.286 | 0.013 | 0.227 |
USA | RMSE | 14.765 | 0.352 | 0.783 | 0.243 | 0.380 | 0.401 | 0.402 | 0.343 | 0.318 |
USA | MAPE (%) | 85.847 | 1.660 | 5.243 | 1.459 | 2.068 | 1.979 | 1.372 | 1.875 | 1.570 |
Total Average | MAE | 24.858 | 0.581 | 0.863 | 0.615 | 0.616 | 0.780 | 0.779 | 0.933 | 0.528 |
Total Average | RMSE | 27.784 | 0.793 | 1.171 | 0.844 | 0.897 | 1.089 | 1.088 | 1.123 | 0.747 |
Total Average | MAPE (%) | 133.893 | 2.891 | 4.720 | 3.324 | 3.389 | 4.208 | 3.161 | 2.813 | 2.774 |
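For reference, a minimal NumPy sketch of how the per-country errors in the table above can be computed from observed and forecast values; the example numbers are illustrative only, not taken from the study.

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """Return (MAE, RMSE, MAPE %) for one country's forecasts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    return mae, rmse, mape

# Hypothetical observed vs. forecast expenditure values.
print(forecast_errors([2.8, 3.1, 2.9], [2.7, 3.3, 2.8]))
```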
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Yang, C.-H.; Molefyane, T.; Lin, Y.-D. The Forecasting of a Leading Country’s Government Expenditure Using a Recurrent Neural Network with a Gated Recurrent Unit. Mathematics 2023, 11, 3085. https://doi.org/10.3390/math11143085