State Causality and Adaptive Covariance Decomposition Based Time Series Forecasting
Abstract
1. Introduction
- Challenge 1: Large-scale time series are markedly non-stationary, while nonlinear models such as neural networks rely heavily on data periodicity. It is therefore challenging to capture the typical generation rule of large-scale series and improve a model's generalization;
- Challenge 2: Hidden states influence the observed value at each moment, and large-scale time series span longer periods. A key issue is therefore extending the hidden-state estimation of a time series from a single moment to a period of time;
- Challenge 3: Time series depend on previously observed values. A further challenge for the long-term forecasting of large-scale time series is how to retain temporal dependence so as to address the cumulative-error problem.
- For Challenges 1 and 2, this paper first extracts sub-sequences with a sliding window, thereby extending single time points to larger-scale sequences; it then designs an adaptive estimation method to encode the latent variables corresponding to the sub-sequences and finally uses causal convolutional networks to predict future latent variables (see the sketch after this list).
- For Challenge 3, SCACD employs the Cholesky decomposition [14,15] of the covariance matrix to maintain temporal dependence while generating the latent variables, as also sketched below. Based on the latent variables, SCACD infers the prior distribution of the observed variables and generates the observation sequence in the same way.
- SCACD’s effectiveness was validated and analyzed on six publicly available standard large-scale datasets.
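To make the pipeline above concrete, the following is a minimal NumPy sketch of its two core ingredients: sliding-window sub-sequence extraction and Cholesky-based sampling that preserves temporal dependence. The function names and the AR(1)-style toy covariance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sliding_windows(series, window, stride=1):
    """Extract overlapping sub-sequences, extending single time
    points to larger-scale sequences (Challenges 1 and 2)."""
    return np.stack([series[i:i + window]
                     for i in range(0, len(series) - window + 1, stride)])

def cholesky_sample(mean, cov, rng=None):
    """Sample z ~ N(mean, cov) through the Cholesky factor L, where
    cov = L @ L.T.  Mapping i.i.d. Gaussian noise through the
    lower-triangular L retains the cross-time correlations encoded
    in cov, so the generated sequence keeps its temporal dependence
    instead of being drawn independently at each step (Challenge 3)."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(cov)
    return mean + L @ rng.standard_normal(mean.shape)

# Toy run: a 5-step latent sequence under an AR(1)-style covariance.
T = 5
idx = np.arange(T)
cov = 0.9 ** np.abs(np.subtract.outer(idx, idx))  # rho = 0.9
z = cholesky_sample(np.zeros(T), cov)
windows = sliding_windows(np.sin(np.linspace(0, 10, 100)), window=24)
print(z.shape, windows.shape)  # (5,) (77, 24)
```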
2. Related Work
2.1. Autoregressive Model
2.2. Transformer-Based Model
2.3. Causal Convolution-Based Model
3. Materials and Methods
3.1. Adaptive Estimation for Latent Variables
3.2. State-Causal Convolution
3.3. Sequence Generation
3.3.1. Decomposition and Sampling
3.3.2. Sequence Prediction
4. Results and Discussion
4.1. Hyperparameters for SCACD
4.2. Introduction to the Models
- FEDformer [5] is a Transformer-based model that computes attention coefficients in the frequency domain to represent point-wise interactions. Setting processing efficiency aside, FEDformer is currently the strongest model for long-term prediction.
- ARIMA [35] is a linear model that involves few endogenous variables and is mainly used for the short-term forecasting of stationary time series.
- LSTM [36] is a deep model for sequence prediction that captures nonlinear relationships and mitigates the vanishing-gradient problem of RNNs through its memory cell.
- TCN [28] utilizes causal and dilated convolution to capture the multi-scale temporal features of time series, which makes deep networks feasible (see the sketch after this list). The method is effective for multi-step time series prediction.
- DeepAR [21] does not directly output point predictions but instead estimates the probability distribution of future values to predict the observation sequence.
- GM11 [37] is obtained by constructing and solving the differential equation of the cumulative series and has the advantage of placing fewer prior constraints on the series.
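Since several baselines (and SCACD itself) build on causal convolution, a minimal PyTorch sketch of a single causal dilated layer is given below. The class name and hyperparameters are illustrative, not the exact architecture used in the paper or in TCN [28].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    """A 1-D convolution that only sees past time steps.

    Left-padding by (kernel_size - 1) * dilation keeps the output
    aligned with the input while preventing information from leaking
    backwards from the future; stacking such layers with growing
    dilation (1, 2, 4, ...) yields an exponentially large receptive
    field for multi-scale temporal features.
    """
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)

    def forward(self, x):                 # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))       # pad the past side only
        return self.conv(x)

layer = CausalDilatedConv1d(channels=8, dilation=2)
y = layer(torch.randn(1, 8, 96))
print(y.shape)  # torch.Size([1, 8, 96]) -- length is preserved
```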
4.3. Introduction to Datasets
4.4. Comparison of Prediction Performance
4.5. Covariance Matrix Analysis
4.6. Analysis for the Dimension of z
4.7. Analysis of Efficiency
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix A.1. Prediction Curves
Appendix A.2. Heatmaps for ETT
References
- Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015.
- Engle, R.F. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econom. J. Econom. Soc. 1982, 50, 987–1007.
- Petneházi, G. Recurrent neural networks for time series forecasting. arXiv 2019, arXiv:1901.00069.
- Gardner, E.S., Jr. Exponential smoothing: The state of the art—Part II. Int. J. Forecast. 2006, 22, 637–666.
- Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. arXiv 2022, arXiv:2201.12740.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. Available online: https://papers.nips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html (accessed on 27 November 2022).
- Cheng, L.; Zang, H.; Xu, Y.; Wei, Z.; Sun, G. Augmented convolutional network for wind power prediction: A new recurrent architecture design with spatial-temporal image inputs. IEEE Trans. Ind. Inform. 2021, 17, 6981–6993.
- Deng, Q.; Söffker, D. A review of the current HMM-based approaches of driving behaviors recognition and prediction. IEEE Trans. Intell. Veh. 2021.
- Li, L.; Zhang, J.; Yan, J.; Jin, Y.; Zhang, Y.; Duan, Y.; Tian, G. Synergetic learning of heterogeneous temporal sequences for multi-horizon probabilistic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021.
- Wu, Y.; Ni, J.; Cheng, W.; Zong, B.; Song, D.; Chen, Z.; Liu, Y.; Zhang, X.; Chen, H.; Davidson, S.B. Dynamic Gaussian mixture based deep generative model for robust forecasting on sparse multivariate time series. Proc. AAAI Conf. Artif. Intell. 2021, 35, 651–659.
- Kingma, D.P.; Welling, M. Auto-encoding variational Bayes. arXiv 2013, arXiv:1312.6114.
- Chung, J.; Kastner, K.; Dinh, L.; Goel, K.; Courville, A.C.; Bengio, Y. A recurrent latent variable model for sequential data. Adv. Neural Inf. Process. Syst. 2015, 28, 2980–2988.
- Lyhagen, J. The exact covariance matrix of dynamic models with latent variables. Stat. Probab. Lett. 2005, 75, 133–139.
- Pourahmadi, M. Cholesky decompositions and estimation of a covariance matrix: Orthogonality of variance–correlation parameters. Biometrika 2007, 94, 1006–1013.
- Lv, J.; Guo, C.; Yang, H.; Li, Y. A moving average Cholesky factor model in covariance modeling for composite quantile regression with longitudinal data. Comput. Stat. Data Anal. 2017, 112, 129–144.
- Liu, J.; Hu, J.; Li, Z.; Sun, Q.; Ma, Z.; Zhu, J.; Wen, Y. Dynamic estimation of multi-dimensional deformation time series from InSAR based on Kalman filter and strain model. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
- de Bézenac, E.; Rangapuram, S.S.; Benidis, K.; Bohlke-Schneider, M.; Kurle, R.; Stella, L.; Hasson, H.; Gallinari, P.; Januschowski, T. Normalizing Kalman filters for multivariate time series analysis. Adv. Neural Inf. Process. Syst. 2020, 33, 2995–3007.
- Abe, K.; Miyatake, H.; Oguri, K. A study on switching AR-HMM driving behavior model depending on driver's states. In Proceedings of the 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 30 September–3 October 2007; pp. 806–811.
- Barber, C.; Bockhorst, J.; Roebber, P. Auto-regressive HMM inference with incomplete data for short-horizon wind forecasting. Adv. Neural Inf. Process. Syst. 2010, 23, 136–144.
- Zaremba, W.; Sutskever, I.; Vinyals, O. Recurrent neural network regularization. arXiv 2014, arXiv:1409.2329.
- Salinas, D.; Flunkert, V.; Gasthaus, J.; Januschowski, T. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. Int. J. Forecast. 2020, 36, 1181–1191.
- Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430.
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. Proc. AAAI Conf. Artif. Intell. 2021, 35, 11106–11115.
- Kitaev, N.; Kaiser, Ł.; Levskaya, A. Reformer: The efficient transformer. arXiv 2020, arXiv:2001.04451.
- Lai, G.; Chang, W.-C.; Yang, Y.; Liu, H. Modeling long- and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104.
- Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.-X.; Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Adv. Neural Inf. Process. Syst. 2019, 32, 5243–5253.
- Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? arXiv 2022, arXiv:2205.13504.
- Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271.
- Oord, A.V.D.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499.
- Yu, F.; Koltun, V.; Funkhouser, T. Dilated residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 472–480.
- Wang, P.; Chen, P.; Yuan, Y.; Liu, D.; Huang, Z.; Hou, X.; Cottrell, G. Understanding convolution for semantic segmentation. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1451–1460.
- Lara-Benítez, P.; Carranza-García, M.; Luna-Romera, J.M.; Riquelme, J.C. Temporal convolutional networks applied to energy-related time series forecasting. Appl. Sci. 2020, 10, 2322.
- Wilinski, A. Time series modeling and forecasting based on a Markov chain with changing transition matrices. Expert Syst. Appl. 2019, 133, 163–172.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Gilbert, K. An ARIMA supply chain model. Manag. Sci. 2005, 51, 305–310.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Kayacan, E.; Ulutas, B.; Kaynak, O. Grey system theory-based models in time series prediction. Expert Syst. Appl. 2010, 37, 1784–1789.
- Zhou, J.; Lu, X.; Xiao, Y.; Su, J.; Lyu, J.; Ma, Y.; Dou, D. SDWPF: A dataset for spatial dynamic wind power forecasting challenge at KDD Cup 2022. arXiv 2022, arXiv:2208.04360.
| Strongly Periodic Datasets | |
|---|---|
| ETT | ETT is a key time-series indicator for long-term power deployment. It contains electricity data recorded every 15 min for two different counties in China from July 2016 to July 2018. |
| Electricity | This dataset contains the electricity consumption of 321 customers from 2012 to 2014, recorded in kilowatt-hours; the raw readings are taken every 15 min and aggregated to hourly values. |
| Weather | To verify the effect of the algorithm, we selected weather data from 2020 containing 21 meteorological indicators (such as air temperature and humidity), sampled every 10 min. |

| Weakly Periodic Datasets | |
|---|---|
| Exchange | Exchange is a classical time series that records the daily exchange rates of eight different countries, covering 1990 to 2016. |
| Traffic | Traffic is a collection of hourly data covering the highway system of all major urban areas of California, recording road occupancy rates measured by different sensors at different times of the day. |
| ILI | To verify the robustness of the algorithm across multiple datasets, we selected this weekly Influenza-Like Illness (ILI) dataset, recorded from 2002 to 2021, which describes the ratio of ILI patients to the total number of patients. |
| Models | | SCACD | | SCACD-nc | | FEDformer | | ARIMA | | LSTM | | TCN | | DeepAR | | GM11 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dataset | Len | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETT | 96 | 0.066 | 0.186 | 0.063 | 0.184 | 0.035 | 0.146 | 0.242 | 0.367 | 4.865 | 2.129 | 3.041 | 1.330 | 1.485 | 1.052 | 0.041 | 0.150 |
| | 192 | 0.075 | 0.215 | 0.082 | 0.225 | 0.066 | 0.202 | 0.147 | 0.284 | 4.094 | 1.952 | 3.072 | 1.339 | 3.871 | 1.772 | 0.886 | 0.243 |
| | 336 | 0.100 | 0.237 | 0.131 | 0.288 | 0.088 | 0.234 | 0.200 | 0.312 | 3.344 | 1.713 | 3.105 | 1.348 | 1.045 | 0.783 | 0.370 | 0.286 |
| | 720 | 0.073 | 0.211 | 0.139 | 0.280 | 0.106 | 0.252 | 0.138 | 0.249 | 3.413 | 1.676 | 3.135 | 1.354 | 0.998 | 0.825 | 0.096 | 0.236 |
| Electricity | 96 | 0.213 | 0.340 | 0.252 | 0.359 | 0.251 | 0.367 | 0.922 | 0.766 | 1.034 | 0.794 | 0.985 | 0.813 | 0.573 | 0.615 | 0.594 | 0.624 |
| | 192 | 0.378 | 0.463 | 0.380 | 0.463 | 0.056 | 0.184 | 1.047 | 0.801 | 0.723 | 0.668 | 0.996 | 0.821 | 0.502 | 0.576 | 3.375 | 0.806 |
| | 336 | 0.160 | 0.289 | 0.224 | 0.341 | 0.082 | 0.225 | 0.991 | 0.749 | 1.428 | 0.890 | 1.000 | 0.824 | 0.582 | 0.619 | 0.967 | 0.369 |
| | 720 | 0.208 | 0.302 | 0.566 | 0.545 | 0.403 | 0.473 | 3.298 | 0.990 | 0.559 | 0.622 | 1.438 | 0.784 | 0.671 | 0.675 | 1.166 | 0.636 |
| Exchange | 96 | 0.072 | 0.202 | 0.073 | 0.203 | 0.170 | 0.319 | 0.059 | 0.180 | 0.247 | 0.432 | 3.004 | 1.432 | 1.368 | 1.095 | 0.143 | 0.129 |
| | 192 | 0.054 | 0.191 | 0.054 | 0.182 | 0.311 | 0.431 | 0.259 | 0.370 | 0.412 | 0.514 | 3.048 | 1.444 | 0.275 | 0.431 | 0.087 | 0.166 |
| | 336 | 0.071 | 0.213 | 0.066 | 0.204 | 0.599 | 0.592 | 0.668 | 0.502 | 0.445 | 0.524 | 3.113 | 1.459 | 0.713 | 0.732 | 0.264 | 0.304 |
| | 720 | 0.187 | 0.329 | 1.331 | 0.980 | 1.432 | 0.924 | 1.536 | 0.934 | 0.945 | 0.826 | 3.150 | 1.458 | 0.282 | 0.424 | 0.839 | 0.550 |
| Weather | 96 | 0.055 | 0.166 | 0.081 | 0.204 | 0.211 | 0.323 | 0.500 | 0.579 | 0.960 | 0.855 | 1.438 | 0.784 | 0.973 | 0.837 | 1.029 | 0.880 |
| | 192 | 0.118 | 0.232 | 0.131 | 0.247 | 0.223 | 0.328 | 0.747 | 0.727 | 0.936 | 0.824 | 1.463 | 0.794 | 1.109 | 0.869 | 0.943 | 0.848 |
| | 336 | 0.057 | 0.171 | 0.059 | 0.127 | 0.220 | 0.326 | 0.734 | 0.712 | 0.964 | 0.839 | 1.479 | 0.799 | 0.834 | 0.807 | 0.925 | 0.839 |
| | 720 | 0.102 | 0.230 | 0.148 | 0.290 | 0.233 | 0.333 | 0.859 | 0.801 | 0.908 | 0.777 | 1.499 | 0.804 | 0.823 | 0.791 | 0.924 | 0.844 |
| Traffic | 96 | 0.002 | 0.030 | 0.005 | 0.057 | 0.005 | 0.056 | 0.005 | 0.052 | 0.040 | 0.153 | 0.615 | 0.589 | 0.125 | 0.282 | 0.414 | 0.062 |
| | 192 | 0.001 | 0.022 | 0.008 | 0.068 | 0.005 | 0.057 | 0.006 | 0.050 | 0.083 | 0.248 | 0.629 | 0.600 | 0.096 | 0.247 | 0.791 | 0.089 |
| | 336 | 0.002 | 0.029 | 0.001 | 0.026 | 0.005 | 0.054 | 1.666 | 0.095 | 0.179 | 0.399 | 0.639 | 0.608 | 0.124 | 0.280 | 0.798 | 0.094 |
| | 720 | 0.002 | 0.035 | 0.005 | 0.052 | 0.004 | 0.046 | 1.744 | 0.059 | 0.077 | 0.258 | 0.639 | 0.610 | 0.321 | 0.451 | 0.560 | 0.072 |
| ILI | 24 | 0.111 | 0.284 | 0.158 | 0.334 | 0.701 | 0.629 | 0.193 | 0.341 | 1.005 | 0.842 | 6.624 | 1.830 | 0.893 | 0.750 | 0.765 | 0.481 |
| | 36 | 0.181 | 0.356 | 0.187 | 0.343 | 0.581 | 0.613 | 0.258 | 0.399 | 1.098 | 0.852 | 6.858 | 1.879 | 0.948 | 0.763 | 0.279 | 0.371 |
| | 48 | 0.062 | 0.196 | 0.054 | 0.168 | 0.713 | 0.699 | 0.343 | 0.427 | 0.929 | 0.729 | 6.968 | 1.892 | 1.045 | 0.803 | 0.296 | 0.361 |
| | 60 | 0.065 | 0.193 | 0.197 | 0.327 | 0.840 | 0.768 | 0.229 | 0.379 | 0.789 | 0.665 | 7.127 | 1.918 | 1.094 | 0.818 | 0.851 | 0.443 |
Legend (cell shading in the original table): 10–100%, 100–500%, ≥500%.
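The MSE and MAE columns above follow the standard definitions, computed over the prediction horizon; a minimal sketch (the helper names are ours):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error over the forecast horizon."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error over the forecast horizon."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))
```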
| The Dim of Latent | | SCACD-8 | | SCACD-16 | | SCACD-24 | | SCACD-32 | |
|---|---|---|---|---|---|---|---|---|---|
| Dataset | Len | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETT | 96 | 0.064 | 0.185 | 0.066 | 0.186 | 0.088 | 0.216 | 0.063 | 0.180 |
| | 192 | 0.140 | 0.296 | 0.075 | 0.215 | 0.126 | 0.280 | 0.116 | 0.262 |
| | 336 | 0.099 | 0.250 | 0.100 | 0.237 | 0.148 | 0.290 | 0.138 | 0.288 |
| | 720 | 0.189 | 0.344 | 0.073 | 0.211 | 0.225 | 0.368 | 0.149 | 0.310 |
| Electricity | 96 | 0.291 | 0.403 | 0.213 | 0.340 | 0.317 | 0.410 | 0.252 | 0.366 |
| | 192 | 0.458 | 0.523 | 0.378 | 0.463 | 0.387 | 0.473 | 0.379 | 0.460 |
| | 336 | 0.212 | 0.333 | 0.160 | 0.289 | 0.261 | 0.365 | 0.255 | 0.380 |
| | 720 | 0.375 | 0.466 | 0.208 | 0.302 | 0.504 | 0.499 | 0.483 | 0.527 |
| Exchange | 96 | 0.078 | 0.206 | 0.072 | 0.202 | 0.044 | 0.165 | 0.057 | 0.197 |
| | 192 | 0.138 | 0.297 | 0.054 | 0.191 | 0.261 | 0.369 | 0.063 | 0.195 |
| | 336 | 0.405 | 0.501 | 0.071 | 0.213 | 0.128 | 0.280 | 0.564 | 0.581 |
| | 720 | 0.757 | 0.624 | 0.908 | 0.780 | 0.796 | 0.844 | 0.509 | 0.597 |
| Traffic | 96 | 0.095 | 0.203 | 0.055 | 0.166 | 0.087 | 0.197 | 0.087 | 0.199 |
| | 192 | 0.100 | 0.217 | 0.118 | 0.232 | 0.077 | 0.201 | 0.165 | 0.275 |
| | 336 | 0.035 | 0.117 | 0.057 | 0.171 | 0.044 | 0.130 | 0.047 | 0.133 |
| | 720 | 0.132 | 0.262 | 0.102 | 0.230 | 0.168 | 0.281 | 0.144 | 0.267 |
| Weather | 96 | 0.001 | 0.023 | 0.002 | 0.030 | 0.006 | 0.056 | 0.018 | 0.075 |
| | 192 | 0.004 | 0.050 | 0.001 | 0.022 | 0.001 | 0.023 | 0.004 | 0.050 |
| | 336 | 0.005 | 0.071 | 0.002 | 0.029 | 0.003 | 0.045 | 0.003 | 0.044 |
| | 720 | 0.005 | 0.039 | 0.002 | 0.035 | 0.215 | 0.070 | 0.036 | 0.144 |
| ILI | 24 | 0.094 | 0.260 | 0.111 | 0.284 | 0.129 | 0.294 | 0.094 | 0.242 |
| | 36 | 0.170 | 0.336 | 0.181 | 0.356 | 0.187 | 0.358 | 0.186 | 0.346 |
| | 48 | 0.135 | 0.304 | 0.062 | 0.196 | 0.167 | 0.344 | 0.117 | 0.276 |
| | 60 | 0.088 | 0.231 | 0.065 | 0.193 | 0.117 | 0.261 | 0.029 | 0.132 |
| | 96 | | | 192 | | | 336 | | | 720 | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Model | P | B | F | P | B | F | P | B | F | P | B | F |
| SCACD | | | | | | | | | | | | |
| Autoformer | | | | | | | | | | | | |
| FEDformer | | | | | | | | | | | | |