LGTCN: A Spatial–Temporal Traffic Flow Prediction Model Based on Local–Global Feature Fusion Temporal Convolutional Network
Abstract
1. Introduction
- We propose an innovative spatio-temporal traffic flow prediction scheme that addresses the key research difficulties in both the spatial and temporal dimensions.
- In the spatial local dimension, we propose a bidirectional graph convolution that extracts local traffic characteristics through bidirectional propagation. Compared with previous graph neural network studies, this better reflects how traffic actually flows along a road network.
- In the spatial global dimension, we represent the spatial nodes as a spatial sequence and employ a probabilistic sparse self-attention mechanism to compute node interactions. This reduces computational complexity while preserving the effectiveness of feature extraction.
- In the temporal dimension, we propose a multichannel temporal convolutional network that extracts temporal heterogeneity features without losing their commonality. We set the number of channels equal to the number of nodes to enable fine-grained extraction of multinode temporal features. Moreover, we employ dilated causal convolution, which expands the receptive field to effectively capture long-term temporal dependencies while avoiding information leakage (see the illustrative sketch after this list).
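The following is a minimal PyTorch sketch of the three ideas listed above, under our own assumptions about tensor shapes (batch B, nodes N, time steps T, features C). Class and function names, dilation settings, and the demo at the bottom are illustrative and are not taken from the authors' implementation.

```python
import math

import torch
import torch.nn as nn


class BidirectionalGraphConv(nn.Module):
    """Local spatial features from propagation along A (downstream) and A^T (upstream)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta_fwd = nn.Linear(in_dim, out_dim)  # weights for the forward direction
        self.theta_bwd = nn.Linear(in_dim, out_dim)  # weights for the backward direction

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C); adj: (N, N) row-normalized adjacency matrix
        fwd = self.theta_fwd(torch.einsum("ij,bjf->bif", adj, x))      # downstream pass
        bwd = self.theta_bwd(torch.einsum("ij,bjf->bif", adj.t(), x))  # upstream pass
        return torch.relu(fwd + bwd)


def prob_sparse_query_selection(q: torch.Tensor, k: torch.Tensor, top_u: int):
    """Informer-style sparsity measurement M(q_i) = max_j(s_ij) - mean_j(s_ij).

    For clarity the full scaled score matrix is computed here; the probabilistic
    sparse mechanism samples keys so that only the top-u "active" queries attend.
    """
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))    # (B, N, N) scaled dot products
    sparsity = scores.max(dim=-1).values - scores.mean(dim=-1)  # (B, N) sparsity measurement
    active = sparsity.topk(top_u, dim=-1).indices               # queries kept for full attention
    return scores, active


class MultiChannelDilatedTCN(nn.Module):
    """One convolution channel per node; dilated causal 1-D convolutions widen the receptive field."""

    def __init__(self, num_nodes: int, kernel_size: int = 2, dilations=(1, 2, 4)):
        super().__init__()
        # groups=num_nodes gives each node its own filter (the multichannel mechanism)
        self.convs = nn.ModuleList(
            nn.Conv1d(num_nodes, num_nodes, kernel_size, dilation=d, groups=num_nodes)
            for d in dilations
        )
        self.kernel_size = kernel_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, T); each node's series is treated as an independent channel
        for conv in self.convs:
            pad = (conv.dilation[0] * (self.kernel_size - 1), 0)  # left-pad only => causal
            x = torch.relu(conv(nn.functional.pad(x, pad)))
        return x


if __name__ == "__main__":
    B, N, T, C = 8, 170, 12, 1                     # e.g. a PEMS08-sized node count, 12 input steps
    adj = torch.softmax(torch.rand(N, N), dim=1)   # placeholder normalized adjacency
    spatial = BidirectionalGraphConv(C, 16)(torch.rand(B, N, C), adj)
    _, active = prob_sparse_query_selection(torch.rand(B, N, 16), torch.rand(B, N, 16), top_u=25)
    temporal = MultiChannelDilatedTCN(N)(torch.rand(B, N, T))
    print(spatial.shape, active.shape, temporal.shape)  # (8, 170, 16) (8, 25) (8, 170, 12)
```

The grouped convolution is what makes the temporal module "multichannel": each node keeps its own filter, so node-specific (heterogeneous) temporal patterns are learned without being averaged across nodes, while the left-only padding keeps the convolution causal.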
2. Related Work
2.1. Deep Learning for Traffic Flow Prediction
2.2. Attention Mechanism
3. Preliminaries
4. Methodology
4.1. Bidirectional Graph Convolutional Network
4.2. Probabilistic Sparse Self-Attention
4.2.1. Canonical Self-Attention
4.2.2. Multihead Self-Attention
4.2.3. Probabilistic Sparse Measurement
4.3. Multichannel Temporal Convolutional Network
4.3.1. Temporal Convolutional Network
4.3.2. Multichannel Mechanism
4.4. Prediction Module
5. Experiments
5.1. Datasets Description
5.2. Baseline Methods
- FC-LSTM: A Long Short-Term Memory network with fully connected layers, suited to capturing long-term dependencies in time series [60].
- DCRNN: The Diffusion Convolutional Recurrent Neural Network uses bidirectional random walks on the graph to capture spatial correlations and an encoder–decoder architecture with scheduled sampling to capture temporal correlations [39].
- STGCN: The Spatial–Temporal Graph Convolutional Network employs a temporal gated convolutional module to capture temporal correlations and a graph neural network to capture spatial correlations [38].
- ASTGCN(r): The ASTGCN constructs recent, daily, and weekly modules. A spatial–temporal attention module is used to capture spatial–temporal dynamic features, a graph convolution module is used to capture spatial features, and a standard convolution is used to capture temporal features. Only the recent module is adopted in the comparison, denoted ASTGCN(r) [32].
- STSGCN: The Spatial–Temporal Synchronous Graph Convolutional Networks construct a spatial–temporal synchronization graph to capture spatial–temporal relationships simultaneously and obtain prediction results by stacking multiple modules to aggregate long-range spatial–temporal relationships and heterogeneity [57].
- STFGNN: The Spatial–Temporal Fusion Graph Neural Network constructs a temporal graph based on time-series similarity to learn long-range dependencies and employs a gated dilated convolution module whose large dilation rate captures long-range temporal dependencies [59].
5.3. Experiment Settings
5.4. Experiment Results
5.4.1. Model Prediction Effects Comparison
5.4.2. Horizon Analysis
5.4.3. Ablation Experiment
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lv, Y.; Chen, Y.; Zhang, X.; Duan, Y.; Li, N.L. Social media based transportation research: The state of the work and the networking. IEEE/CAA J. Autom. Sin. 2017, 4, 19–26.
- Li, Z.; Xiong, G.; Chen, Y.; Lv, Y.; Hu, B.; Zhu, F.; Wang, F.Y. A hybrid deep learning approach with GCN and LSTM for traffic flow prediction. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 1929–1933.
- Liu, Y.; Feng, T.; Rasouli, S.; Wong, M. ST-DAGCN: A spatiotemporal dual adaptive graph convolutional network model for traffic prediction. Neurocomputing 2024, 601, 128175.
- Li, Z.; Wei, S.; Wang, H.; Wang, C. ADDGCN: A Novel Approach with Down-Sampling Dynamic Graph Convolution and Multi-Head Attention for Traffic Flow Forecasting. Appl. Sci. 2024, 14, 4130.
- Xia, Z.; Zhang, Y.; Yang, J.; Xie, L. Dynamic spatial–temporal graph convolutional recurrent networks for traffic flow forecasting. Expert Syst. Appl. 2024, 240, 122381.
- Stephanedes, Y.J.; Michalopoulos, P.G.; Plum, R.A. Improved estimation of traffic flow for real-time control. Transp. Res. Rec. 1980, 95, 28–39.
- Ahmed, M.S.; Cook, A.R. Analysis of Freeway Traffic Time-Series Data by Using Box-Jenkins Techniques; Transportation Research Board: Washington, DC, USA, 1979; Number 722.
- Williams, B.M.; Hoel, L.A. Modeling and forecasting vehicular traffic flow as a seasonal ARIMA process: Theoretical basis and empirical results. J. Transp. Eng. 2003, 129, 664–672.
- Okutani, I.; Stephanedes, Y.J. Dynamic prediction of traffic volume through Kalman filtering theory. Transp. Res. Part B Methodol. 1984, 18, 1–11.
- Su, H.; Zhang, L.; Yu, S. Short-term traffic flow prediction based on incremental support vector regression. In Proceedings of the Third International Conference on Natural Computation (ICNC 2007), Washington, DC, USA, 24–27 August 2007; Volume 1, pp. 640–645.
- Yang, S.; Wu, J.; Du, Y.; He, Y.; Chen, X. Ensemble learning for short-term traffic prediction based on gradient boosting machine. J. Sens. 2017, 2017, 7074143.
- Tian, Y.; Zhang, K.; Li, J.; Lin, X.; Yang, B. LSTM-based traffic flow prediction with missing data. Neurocomputing 2018, 318, 297–305.
- Fu, R.; Zhang, Z.; Li, L. Using LSTM and GRU neural network methods for traffic flow prediction. In Proceedings of the 2016 31st Youth Academic Annual Conference of Chinese Association of Automation (YAC), Wuhan, China, 11–13 November 2016; pp. 324–328.
- Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271.
- Ni, Q.; Peng, W.; Zhu, Y.; Ye, R. Graph dropout self-learning hierarchical graph convolution network for traffic prediction. Eng. Appl. Artif. Intell. 2023, 123, 106460.
- Chen, J.; Xu, M.; Xu, W.; Li, D.; Peng, W.; Xu, H. A flow feedback traffic prediction based on visual quantified features. IEEE Trans. Intell. Transp. Syst. 2023, 24, 10067–10075.
- Zheng, G.; Chai, W.K.; Duanmu, J.L.; Katos, V. Hybrid deep learning models for traffic prediction in large-scale road networks. Inf. Fusion 2023, 92, 93–114.
- Song, C.; Lee, H.; Kang, C.; Lee, W.; Kim, Y.B.; Cha, S.W. Traffic speed prediction under weekday using convolutional neural networks concepts. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1293–1298.
- Ma, X.; Dai, Z.; He, Z.; Ma, J.; Wang, Y.; Wang, Y. Learning traffic as images: A deep convolutional neural network for large-scale transportation network speed prediction. Sensors 2017, 17, 818.
- Zhang, D.; Yan, J.; Polat, K.; Alhudhaif, A.; Li, J. Multimodal joint prediction of traffic spatial-temporal data with graph sparse attention mechanism and bidirectional temporal convolutional network. Adv. Eng. Inform. 2024, 62, 102533.
- Liu, Y.; Zheng, H.; Feng, X.; Chen, Z. Short-term traffic flow prediction with Conv-LSTM. In Proceedings of the 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 11–13 October 2017; pp. 1–6.
- Khajeh Hosseini, M.; Talebpour, A. Traffic prediction using time-space diagram: A convolutional neural network approach. Transp. Res. Rec. 2019, 2673, 425–435.
- Zheng, H.; Lin, F.; Feng, X.; Chen, Y. A hybrid deep learning model with attention-based conv-LSTM networks for short-term traffic flow prediction. IEEE Trans. Intell. Transp. Syst. 2020, 22, 6910–6920.
- Yu, B.; Lee, Y.; Sohn, K. Forecasting road traffic speeds by considering area-wide spatio-temporal dependencies based on a graph convolutional neural network (GCN). Transp. Res. Part C Emerg. Technol. 2020, 114, 189–204.
- Chen, Z.; Zhao, B.; Wang, Y.; Duan, Z.; Zhao, X. Multitask learning and GCN-based taxi demand prediction for a traffic road network. Sensors 2020, 20, 3776.
- Kuang, H.; Qu, H.; Deng, K.; Li, J. A physics-informed graph learning approach for citywide electric vehicle charging demand prediction and pricing. Appl. Energy 2024, 363, 123059.
- Zhang, X.; Huang, C.; Xu, Y.; Xia, L.; Dai, P.; Bo, L.; Zhang, J.; Zheng, Y. Traffic Flow Forecasting with Spatial-Temporal Graph Diffusion Network. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Conference, 2–9 February 2021; Volume 35, pp. 15008–15015.
- Luo, Q.; He, S.; Han, X.; Wang, Y.; Li, H. LSTTN: A Long-Short Term Transformer-based spatiotemporal neural network for traffic flow forecasting. Knowl. Based Syst. 2024, 293, 111637.
- Wu, F.; Zheng, C.; Zhang, C.; Ma, J.; Sun, K. Multi-View Multi-Attention Graph Neural Network for Traffic Flow Forecasting. Appl. Sci. 2023, 13, 711.
- Lian, Q.; Sun, W.; Dong, W. Hierarchical Spatial-Temporal Neural Network with Attention Mechanism for Traffic Flow Forecasting. Appl. Sci. 2023, 13, 9729.
- Shi, X.; Qi, H.; Shen, Y.; Wu, G.; Yin, B. A Spatial–Temporal Attention Approach for Traffic Prediction. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4909–4918.
- Guo, S.; Lin, Y.; Feng, N.; Song, C.; Wan, H. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 922–929.
- Han, S.Y.; Zhao, Q.; Sun, Q.W.; Zhou, J.; Chen, Y.H. Engs-dgr: Traffic flow forecasting with indefinite forecasting interval by ensemble gcn, seq2seq, and dynamic graph reconfiguration. Appl. Sci. 2022, 12, 2890.
- Lv, Y.; Duan, Y.; Kang, W.; Li, Z.; Wang, F.Y. Traffic flow prediction with big data: A deep learning approach. IEEE Trans. Intell. Transp. Syst. 2014, 16, 865–873.
- Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; Li, H. T-gcn: A temporal graph convolutional network for traffic prediction. IEEE Trans. Intell. Transp. Syst. 2019, 21, 3848–3858.
- Zhang, J.; Zheng, Y.; Qi, D. Deep spatio-temporal residual networks for citywide crowd flows prediction. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
- Ke, J.; Zheng, H.; Yang, H.; Chen, X.M. Short-term forecasting of passenger demand under on-demand ride services: A spatio-temporal deep learning approach. Transp. Res. Part C Emerg. Technol. 2017, 85, 591–608.
- Yu, B.; Yin, H.; Zhu, Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 3634–3640.
- Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Wang, A.; Ye, Y.; Song, X.; Zhang, S.; James, J. Traffic prediction with missing data: A multi-task learning approach. IEEE Trans. Intell. Transp. Syst. 2023, 24, 4189–4202.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Conference, 2–9 February 2021.
- Lin, Z.; Li, M.; Zheng, Z.; Cheng, Y.; Yuan, C. Self-attention convlstm for spatiotemporal prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11531–11538.
- Zheng, C.; Fan, X.; Wang, C.; Qi, J. Gman: A graph multi-attention network for traffic prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1234–1241.
- Wang, X.; Ma, Y.; Wang, Y.; Jin, W.; Wang, X.; Tang, J.; Jia, C.; Yu, J. Traffic flow prediction via spatial temporal graph neural network. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 1082–1092.
- Xie, Y.; Xiong, Y.; Zhu, Y. SAST-GNN: A self-attention based spatio-temporal graph neural network for traffic prediction. In Proceedings of the International Conference on Database Systems for Advanced Applications, Jeju, Republic of Korea, 24–27 September 2020; Springer: Cham, Switzerland, 2020; pp. 707–714.
- Beltagy, I.; Peters, M.E.; Cohan, A. Longformer: The long-document transformer. arXiv 2020, arXiv:2004.05150.
- Child, R.; Gray, S.; Radford, A.; Sutskever, I. Generating long sequences with sparse transformers. arXiv 2019, arXiv:1904.10509.
- Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.X.; Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Adv. Neural Inf. Process. Syst. 2019, 32.
- Kitaev, N.; Kaiser, L.; Levskaya, A. Reformer: The Efficient Transformer. In Proceedings of the International Conference on Learning Representations, Virtual Conference, 26–30 April 2020.
- Liu, Z.; Miao, Z.; Zhan, X.; Wang, J.; Gong, B.; Yu, S.X. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2537–2546.
- Li, Y.; Shen, T.; Long, G.; Jiang, J.; Zhou, T.; Zhang, C. Improving Long-Tail Relation Extraction with Collaborating Relation-Augmented Attention. In Proceedings of the 28th International Conference on Computational Linguistics, Virtual Conference, 8–13 December 2020; pp. 1653–1664.
- Zhao, X.; Qi, R. Improving Long-tail Relation Extraction with Knowledge-aware Hierarchical Attention. In Proceedings of the 2021 IEEE 12th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 20–22 August 2021; pp. 166–169.
- Bruna, J.; Zaremba, W.; Szlam, A.; LeCun, Y. Spectral networks and locally connected networks on graphs. arXiv 2013, arXiv:1312.6203.
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph WaveNet for Deep Spatial-Temporal Graph Modeling. In Proceedings of the IJCAI, Macao, China, 10–16 August 2019.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Song, C.; Lin, Y.; Guo, S.; Wan, H. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 914–921.
- Ji, C.; Xu, Y.; Lu, Y.; Huang, X.; Zhu, Y. Contrastive Learning-Based Adaptive Graph Fusion Convolution Network With Residual-Enhanced Decomposition Strategy for Traffic Flow Forecasting. IEEE Internet Things J. 2024, 11, 20246–20259.
- Li, M.; Zhu, Z. Spatial-temporal fusion graph neural networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Conference, 2–9 February 2021; Volume 35, pp. 4189–4196.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
Datasets | Nodes | Days | Time Steps | Missing Rate |
---|---|---|---|---|
PEMS03 | 358 | 91 | 26,208 | 0.672% |
PEMS04 | 307 | 59 | 16,992 | 3.182% |
PEMS07 | 883 | 98 | 28,224 | 0.452% |
PEMS08 | 170 | 62 | 17,856 | 0.696% |
Model | Spatial Global | Spatial Local | Temporal |
---|---|---|---|
FC-LSTM | - | - | RNN |
DCRNN | - | Bidirectional random walks | RNN |
STGCN | - | GCN | TCN |
ASTGCN(R) | - | GCN | TCN |
STSGCN | - | Synchronization graph | Synchronization graph
STFGNN | Temporal graph | - | Gated dilated convolution |
LGTCN | Probabilistic Sparse Self-Attention | Bidirectional GCN | Multi-Channel TCN |
Datasets | Metrics | FC-LSTM [60] | DCRNN [39] | STGCN [38] | ASTGCN(r) [32] | STSGCN [57] | STFGNN [59] | LGTCN
---|---|---|---|---|---|---|---|---
PEMS03 | MAE | 21.33 ± 0.24 | 18.18 ± 0.15 | 17.49 ± 0.46 | 17.69 ± 1.43 | 17.48 ± 0.15 | 16.77 ± 0.09 | 15.21 ± 0.04
PEMS03 | MAPE(%) | 23.33 ± 0.23 | 18.91 ± 0.82 | 17.15 ± 0.45 | 19.40 ± 2.24 | 16.78 ± 0.20 | 16.30 ± 0.09 | 14.79 ± 0.20
PEMS03 | RMSE | 35.11 ± 0.50 | 30.31 ± 0.25 | 30.12 ± 0.70 | 29.66 ± 1.68 | 29.21 ± 0.56 | 28.34 ± 0.46 | 24.11 ± 0.07
PEMS04 | MAE | 27.14 ± 0.20 | 24.70 ± 0.22 | 22.70 ± 0.64 | 22.93 ± 1.29 | 21.19 ± 0.10 | 19.83 ± 0.06 | 19.58 ± 0.09
PEMS04 | MAPE(%) | 18.20 ± 0.40 | 17.12 ± 0.37 | 14.59 ± 0.21 | 16.56 ± 1.36 | 13.90 ± 0.05 | 13.02 ± 0.05 | 15.92 ± 0.31
PEMS04 | RMSE | 41.59 ± 0.21 | 38.12 ± 0.26 | 35.55 ± 0.75 | 35.22 ± 1.90 | 33.65 ± 0.20 | 31.88 ± 0.14 | 31.37 ± 0.14
PEMS07 | MAE | 29.98 ± 0.42 | 25.30 ± 0.52 | 25.38 ± 0.49 | 28.08 ± 2.34 | 24.26 ± 0.14 | 22.07 ± 0.11 | 22.03 ± 0.13
PEMS07 | MAPE(%) | 13.20 ± 0.53 | 11.66 ± 0.33 | 11.08 ± 0.18 | 13.92 ± 1.65 | 10.21 ± 1.05 | 9.21 ± 0.07 | 9.96 ± 0.14
PEMS07 | RMSE | 45.94 ± 0.57 | 38.58 ± 0.70 | 38.78 ± 0.58 | 42.57 ± 3.31 | 39.03 ± 0.27 | 35.80 ± 0.18 | 34.83 ± 0.16
PEMS08 | MAE | 22.20 ± 0.18 | 17.86 ± 0.03 | 18.02 ± 0.14 | 18.60 ± 0.40 | 17.13 ± 0.09 | 16.64 ± 0.09 | 16.20 ± 0.08
PEMS08 | MAPE(%) | 14.20 ± 0.59 | 11.45 ± 0.03 | 11.40 ± 0.10 | 13.08 ± 1.00 | 10.96 ± 0.07 | 10.60 ± 0.06 | 10.56 ± 0.20
PEMS08 | RMSE | 34.06 ± 0.32 | 27.83 ± 0.05 | 27.83 ± 0.20 | 28.16 ± 0.48 | 26.80 ± 0.18 | 26.22 ± 0.15 | 25.22 ± 0.12
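For reference, the MAE, MAPE, and RMSE values reported in this table and the following ones follow their standard definitions. The sketch below is a minimal NumPy version under our own assumptions (in particular, the masking of zero ground-truth values in MAPE) and is not the authors' evaluation code.

```python
import numpy as np


def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute error
    return float(np.mean(np.abs(y_true - y_pred)))


def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root mean squared error
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))


def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute percentage error; zero flows are masked here (an assumption)
    mask = y_true != 0
    return float(np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])) * 100)
```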
Datasets | Metrics | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
---|---|---|---|---|---|---|---|---|---|---|---|---|---
PEMS03 | MAE | 13.31 | 13.92 | 14.44 | 14.86 | 15.23 | 15.53 | 15.79 | 16.02 | 16.30 | 16.65 | 17.06 | 17.63
PEMS03 | MAPE(%) | 13.38 | 13.78 | 14.12 | 14.41 | 14.71 | 15.00 | 15.28 | 15.50 | 15.77 | 16.12 | 16.54 | 17.10
PEMS03 | RMSE | 20.32 | 21.53 | 22.53 | 23.32 | 23.98 | 24.49 | 24.91 | 25.30 | 25.74 | 26.30 | 26.93 | 27.79
PEMS04 | MAE | 17.99 | 18.37 | 18.74 | 19.04 | 19.32 | 19.62 | 19.94 | 20.22 | 20.45 | 20.71 | 21.11 | 21.71
PEMS04 | MAPE(%) | 13.65 | 14.04 | 14.36 | 14.65 | 14.93 | 15.21 | 15.49 | 15.77 | 16.02 | 16.31 | 16.69 | 17.15
PEMS04 | RMSE | 28.66 | 29.45 | 30.13 | 30.67 | 31.16 | 31.62 | 32.10 | 32.54 | 32.92 | 33.30 | 33.85 | 34.60
PEMS07 | MAE | 18.47 | 19.81 | 20.77 | 21.49 | 22.07 | 22.64 | 23.23 | 23.83 | 24.46 | 25.10 | 25.84 | 26.83
PEMS07 | MAPE(%) | 8.39 | 9.00 | 9.44 | 9.81 | 10.13 | 10.43 | 10.74 | 11.05 | 11.33 | 11.66 | 12.07 | 12.61
PEMS07 | RMSE | 28.58 | 30.94 | 32.47 | 33.60 | 34.52 | 35.36 | 36.20 | 37.01 | 37.83 | 38.67 | 39.63 | 40.84
PEMS08 | MAE | 14.42 | 14.79 | 15.14 | 15.47 | 15.78 | 16.14 | 16.45 | 16.74 | 17.02 | 17.30 | 17.70 | 18.27
PEMS08 | MAPE(%) | 8.91 | 9.16 | 9.39 | 9.65 | 9.93 | 10.19 | 10.44 | 10.68 | 10.91 | 11.13 | 11.45 | 11.87
PEMS08 | RMSE | 21.97 | 22.77 | 23.49 | 24.16 | 24.76 | 25.35 | 25.88 | 26.37 | 26.80 | 27.21 | 27.73 | 28.49
Method | PEMS03 MAE | PEMS03 MAPE(%) | PEMS03 RMSE | PEMS04 MAE | PEMS04 MAPE(%) | PEMS04 RMSE | PEMS07 MAE | PEMS07 MAPE(%) | PEMS07 RMSE | PEMS08 MAE | PEMS08 MAPE(%) | PEMS08 RMSE
---|---|---|---|---|---|---|---|---|---|---|---|---
LGTCN | 15.14 | 14.73 | 24.00 | 19.77 | 15.36 | 31.75 | 22.88 | 10.56 | 35.47 | 16.27 | 10.31 | 25.41
NO-PSA | 17.14 | 15.67 | 26.90 | 20.47 | 15.72 | 32.44 | 25.26 | 11.97 | 38.52 | 16.90 | 11.20 | 26.25
NO-MCTCN | 17.31 | 17.05 | 27.54 | 23.61 | 17.17 | 36.66 | 25.98 | 11.89 | 39.77 | 18.89 | 11.58 | 29.08
NO-PM | 18.15 | 17.50 | 30.69 | 24.28 | 17.87 | 38.18 | 29.00 | 12.84 | 45.00 | 19.66 | 12.38 | 30.63
NO-RES | 18.45 | 18.73 | 31.53 | 21.60 | 19.51 | 34.80 | 28.36 | 12.80 | 47.35 | 20.20 | 12.73 | 31.77