Spatiotemporal Dynamic Multi-Hop Network for Traffic Flow Forecasting
Abstract
1. Introduction
- We design an encoder–decoder architecture tailored to multi-step prediction. The encoder extracts salient spatiotemporal features from historical data through stacked ST-Blocks, and the decoder uses DMGCGRUs to decode these features into multi-step prediction results.
- We present a dynamic graph learning algorithm to better capture the complex and evolving topology of traffic networks. This algorithm utilizes an iterative updating mechanism to dynamically construct and adjust the topological graph of road networks.
- ST-DMN combines the multi-hop operation of dynamic graphs with the diffusion convolution technique to effectively capture the inherent long-distance spatial dependence in traffic data. Furthermore, the transformer layer enhances the model’s comprehension and perception of the overall temporal structure within the spatiotemporal embeddings generated by the ST-Blocks.
- Experimental results on publicly available traffic speed datasets, including METR-LA and PEMS-BAY, indicate that ST-DMN achieves competitive or even superior performance compared to various baseline models.
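As a rough illustration of the second and third contributions above (a sketch under assumptions, not the authors' implementation), the dynamic graph learning and multi-hop diffusion ideas can be written in a few lines of NumPy: an adjacency matrix is derived from learnable node embeddings, and the convolution aggregates features over several successive hops. All variable names here are illustrative.

```python
import numpy as np

def row_softmax(x):
    """Softmax over the last axis, so each adjacency row sums to 1."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
num_nodes, embed_dim, f_in, f_out, hops = 6, 4, 3, 5, 3

# Source/target node embeddings; in training they would be updated
# iteratively, so the derived adjacency evolves with the data.
E1 = rng.standard_normal((num_nodes, embed_dim))
E2 = rng.standard_normal((num_nodes, embed_dim))
A = row_softmax(np.maximum(E1 @ E2.T, 0.0))  # row-normalized dynamic adjacency

X = rng.standard_normal((num_nodes, f_in))        # node features
W = rng.standard_normal((hops + 1, f_in, f_out))  # one weight matrix per hop

# Multi-hop diffusion convolution: out = sum_k (A^k X) W_k, so each node
# receives information from neighbors up to `hops` steps away.
H, out = X, np.zeros((num_nodes, f_out))
for k in range(hops + 1):
    out += H @ W[k]
    H = A @ H  # diffuse one more hop
```

The key point of the multi-hop operation is that a single layer already mixes information from distant nodes, which is how the model captures long-distance spatial dependence without stacking many graph convolution layers.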
2. Related Works
2.1. Traffic Flow Forecasting
2.2. Graph Structure Learning
3. Materials and Methods
3.1. Problem Formulation
3.2. Model Architecture
3.3. Encoder Architecture
3.3.1. ST-Blocks
3.3.2. Dynamic Graph Learning
3.3.3. Temporal Convolution Layer
3.3.4. Spatial Convolution Layer
3.3.5. Transformer Layer
3.4. Decoder Architecture
3.5. Loss Function
4. Experiments
4.1. Datasets
4.2. Baselines
- HA: Historical Average; forecasts subsequent traffic volumes as the mean of past observations.
- ARIMA [57]: combines autoregression, differencing, and moving average to forecast non-stationary time series.
- FC-LSTM [58]: combines fully connected layers and LSTM for enhanced time series prediction.
- DCRNN [22]: integrates a graph convolutional network with a recurrent neural network and enhances the modeling of spatial correlation via a diffusion convolution operation.
- Graph WaveNet [31]: uses graph convolutional networks and dilated convolutions to capture spatiotemporal traffic patterns.
- GMAN [59]: adopts an encoder–decoder architecture with spatiotemporal attention blocks for dynamic traffic prediction.
- CCRNN [60]: integrates spatial and temporal features using coupled graph convolutions and gated recurrent units.
- GTS [33]: learns probabilistic graph structures for multiple time series prediction.
- PM-MemNet [61]: uses a memory network with pattern matching to predict traffic in complex road networks.
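Of the baselines above, HA is simple enough to state in full; the following minimal sketch (function name is illustrative, not from any of the cited papers) forecasts every future step as the mean of the observed history:

```python
import numpy as np

def historical_average(history, horizon):
    """Forecast every one of `horizon` future steps as the mean of `history`."""
    mean = float(np.mean(history))
    return np.full(horizon, mean)

# 3-step forecast from three past speed readings
print(historical_average([50.0, 60.0, 70.0], horizon=3))  # [60. 60. 60.]
```

Because the forecast is constant, HA's errors in the result tables are identical across the 15, 30, and 60 min horizons.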
4.3. Experiment Settings
4.4. Evaluation Metrics
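The three metrics reported in the result tables (MAE, RMSE, MAPE) can be computed as below. This is the standard formulation, assumed rather than taken from the paper's code; MAPE additionally assumes no zero ground-truth values.

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - y_hat)))

def rmse(y, y_hat):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mape(y, y_hat):
    """Mean absolute percentage error, reported in percent."""
    return float(np.mean(np.abs((y - y_hat) / y)) * 100)

y, y_hat = np.array([2.0, 4.0]), np.array([1.0, 6.0])
print(mae(y, y_hat), rmse(y, y_hat), mape(y, y_hat))  # 1.5, ~1.58, 50.0
```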
5. Results
5.1. Experiment Results and Analysis
- (1) Deep learning methods demonstrate superior performance over traditional time series methods and machine learning models. Traditional approaches depend on stationarity assumptions that traffic data rarely satisfy, whereas deep neural networks excel at modeling its nonlinear dynamics.
- (2) Among deep learning models, graph-based architectures such as DCRNN, GW-Net, and GMAN consistently outperform FC-LSTM. This underscores the value of integrating road network structure into traffic flow forecasting models, indicating that spatial connectivity plays a critical role in accurate prediction.
- (3) The CCRNN model initializes its learnable graph with a 0–1 adjacency matrix of the road network, while GTS recasts the problem as learning a probabilistic graphical model by optimizing performance averaged over the graph distribution. Both leverage dynamic graph structures to improve on earlier methods.
- (4) The PM-MemNet model uses a key-value memory structure to associate input data with representative patterns and to select the best-matching pattern for predicting future traffic conditions from the given spatiotemporal features.
5.2. Model Configuration Analysis
5.3. Model Efficiency
5.4. Ablation Study
- “w/o Transformer Layer” excludes the transformer layer.
- “w/o DGL” excludes the dynamic graph learning.
- “w/o Multi-Hop” excludes the multi-hop operation.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhou, T.; Zhao, Y.; Lin, Z.; Zhou, J.; Li, H.; Wang, F. Moral and Formal Model-based Control Strategy for Autonomous Vehicles at Traffic-light-free intersections. Smart Constr. Sustain. Cities 2024, 2, 11. [Google Scholar] [CrossRef]
- Kaffash, S.; Nguyen, A.T.; Zhu, J. Big data algorithms and applications in intelligent transportation system: A review and bibliometric analysis. Int. J. Prod. Econ. 2021, 231, 107868. [Google Scholar] [CrossRef]
- Wang, F.; Liang, Y.; Lin, Z.; Zhou, J.; Zhou, T. SSA-ELM: A Hybrid Learning Model for Short-Term Traffic Flow Forecasting. Mathematics 2024, 12, 1895. [Google Scholar] [CrossRef]
- Voukelatou, V.; Gabrielli, L.; Miliou, I.; Cresci, S.; Sharma, R.; Tesconi, M.; Pappalardo, L. Measuring objective and subjective well-being: Dimensions and data sources. Int. J. Data Sci. Anal. 2021, 11, 279–309. [Google Scholar] [CrossRef]
- Park, C.L.; Kubzansky, L.D.; Chafouleas, S.M.; Davidson, R.J.; Keltner, D.; Parsafar, P.; Conwell, Y.; Martin, M.Y.; Hanmer, J.; Wang, K.H. Emotional well-being: What it is and why it matters. Affect. Sci. 2023, 4, 10–20. [Google Scholar] [CrossRef]
- Chui, K.T. Driver stress recognition for smart transportation: Applying multiobjective genetic algorithm for improving fuzzy c-means clustering with reduced time and model complexity. Sustain. Comput. Inform. Syst. 2022, 35, 100668. [Google Scholar] [CrossRef]
- Xu, W.; Liu, J.; Yan, J.; Yang, J.; Liu, H.; Zhou, T. Dynamic Spatiotemporal Graph Wavelet Network for Traffic Flow Prediction. IEEE Internet Things J. 2024, 19, 8019–8029. [Google Scholar] [CrossRef]
- Ghafouri-Azar, M.; Diamond, S.; Bowes, J.; Gholamalizadeh, E. The sustainable transport planning index: A tool for the sustainable implementation of public transportation. Sustain. Dev. 2023, 31, 2656–2677. [Google Scholar] [CrossRef]
- Li, Z.; Zhou, J.; Lin, Z.; Zhou, T. Dynamic spatial aware graph transformer for spatiotemporal traffic flow forecasting. Knowl.-Based Syst. 2024, 297, 111946. [Google Scholar] [CrossRef]
- Ishak, S.; Al-Deek, H. Performance evaluation of short-term time-series traffic prediction model. J. Transp. Eng. 2002, 128, 490–498. [Google Scholar] [CrossRef]
- Isufi, E.; Loukas, A.; Simonetto, A.; Leus, G. Autoregressive moving average graph filtering. IEEE Trans. Signal Process. 2016, 65, 274–288. [Google Scholar] [CrossRef]
- Wang, X.; Ma, Y.; Wang, Y.; Jin, W.; Wang, X.; Tang, J.; Jia, C.; Yu, J. Traffic flow prediction via spatial temporal graph neural network. In Proceedings of the WWW ’20: The Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 1082–1092. [Google Scholar] [CrossRef]
- Shi, X.; Qi, H.; Shen, Y.; Wu, G.; Yin, B. A spatial-temporal attention approach for traffic prediction. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4909–4918. [Google Scholar] [CrossRef]
- Liu, Q.; Wu, S.; Wang, L.; Tan, T. Predicting the next location: A recurrent model with spatial and temporal contexts. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar] [CrossRef]
- Yu, R.; Li, Y.; Shahabi, C.; Demiryurek, U.; Liu, Y. Deep learning: A generic approach for extreme condition traffic forecasting. In Proceedings of the 2017 SIAM International Conference on Data Mining (SIAM 2017), Houston, TX, USA, 27–29 April 2017; pp. 777–785. [Google Scholar] [CrossRef]
- Zhang, J.; Zheng, Y.; Qi, D. Deep spatio-temporal residual networks for citywide crowd flows prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar] [CrossRef]
- Yao, H.; Wu, F.; Ke, J.; Tang, X.; Jia, Y.; Lu, S.; Gong, P.; Ye, J.; Li, Z. Deep multi-view spatial-temporal network for taxi demand prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar] [CrossRef]
- Zhang, W.; Liu, H.; Liu, Y.; Zhou, J.; Xiong, H. Semi-supervised hierarchical recurrent graph neural network for city-wide parking availability prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1186–1193. [Google Scholar] [CrossRef]
- Wang, Y.; Wu, H.; Zhang, J.; Gao, Z.; Wang, J.; Philip, S.Y.; Long, M. Predrnn: A recurrent neural network for spatiotemporal predictive learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2208–2225. [Google Scholar] [CrossRef]
- He, S.; Luo, Q.; Du, R.; Zhao, L.; He, G.; Fu, H.; Li, H. STGC-GNNs: A GNN-based traffic prediction framework with a spatial–temporal Granger causality graph. Phys. A Stat. Mech. Appl. 2023, 623, 128913. [Google Scholar] [CrossRef]
- Wang, Q.; Liu, W.; Wang, X.; Chen, X.; Chen, G.; Wu, Q. GMHANN: A Novel Traffic Flow Prediction Method for Transportation Management Based on Spatial-Temporal Graph Modeling. IEEE Trans. Intell. Transp. Syst. 2023, 25, 386–401. [Google Scholar] [CrossRef]
- Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv 2017, arXiv:1707.01926. [Google Scholar] [CrossRef]
- Yu, B.; Yin, H.; Zhu, Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. arXiv 2017, arXiv:1709.04875. [Google Scholar] [CrossRef]
- Park, C.; Lee, C.; Bahng, H.; Tae, Y.; Jin, S.; Kim, K.; Ko, S.; Choo, J. ST-GRAT: A novel spatio-temporal graph attention networks for accurately forecasting dynamically changing road speed. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual, 19–23 October 2020; pp. 1215–1224. [Google Scholar] [CrossRef]
- Wang, P.; Zhang, Y.; Hu, T.; Zhang, T. Urban traffic flow prediction: A dynamic temporal graph network considering missing values. Int. J. Geogr. Inf. Sci. 2023, 37, 885–912. [Google Scholar] [CrossRef]
- Geng, X.; Li, Y.; Wang, L.; Zhang, L.; Yang, Q.; Ye, J.; Liu, Y. Spatiotemporal multi-graph convolution network for ride-hailing demand forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 3656–3663. [Google Scholar] [CrossRef]
- Lin, Z.; Feng, J.; Lu, Z.; Li, Y.; Jin, D. Deepstn+: Context-aware spatial-temporal neural network for crowd flow prediction in metropolis. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 1020–1027. [Google Scholar] [CrossRef]
- Pan, Z.; Liang, Y.; Wang, W.; Yu, Y.; Zheng, Y.; Zhang, J. Urban traffic prediction from spatio-temporal data using deep meta learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 1720–1730. [Google Scholar] [CrossRef]
- Zhong, Z.; Li, C.T.; Pang, J. Hierarchical message-passing graph neural networks. Data Min. Knowl. Discov. 2023, 37, 381–408. [Google Scholar] [CrossRef]
- Zhang, K.; Zhou, F.; Wu, L.; Xie, N.; He, Z. Semantic understanding and prompt engineering for large-scale traffic data imputation. Inf. Fusion 2024, 102, 102038. [Google Scholar] [CrossRef]
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph wavenet for deep spatial-temporal graph modeling. arXiv 2019, arXiv:1906.00121. [Google Scholar] [CrossRef]
- Guo, S.; Lin, Y.; Feng, N.; Song, C.; Wan, H. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 922–929. [Google Scholar] [CrossRef]
- Shang, C.; Chen, J.; Bi, J. Discrete graph structure learning for forecasting multiple time series. arXiv 2021, arXiv:2101.06861. [Google Scholar] [CrossRef]
- Gehring, J.; Auli, M.; Grangier, D.; Yarats, D.; Dauphin, Y.N. Convolutional sequence to sequence learning. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; PMLR: London, UK, 2017; pp. 1243–1252. [Google Scholar] [CrossRef]
- Chen, W.; Chen, L.; Xie, Y.; Cao, W.; Gao, Y.; Feng, X. Multi-range attentive bicomponent graph convolutional network for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 3529–3536. [Google Scholar] [CrossRef]
- Nguyen, H.A.T.; Nguyen, H.D.; Do, T.H. An Application of Vector Autoregressive Model for Analyzing the Impact of Weather And Nearby Traffic Flow On The Traffic Volume. In Proceedings of the 2022 RIVF International Conference on Computing and Communication Technologies (RIVF), Ho Chi Minh City, Vietnam, 20–22 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 328–333. [Google Scholar] [CrossRef]
- Liu, Y.; Zheng, H.; Feng, X.; Chen, Z. Short-term traffic flow prediction with Conv-LSTM. In Proceedings of the 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 11–13 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Bai, L.; Yao, L.; Li, C.; Wang, X.; Wang, C. Adaptive graph convolutional recurrent network for traffic forecasting. Adv. Neural Inf. Process. Syst. 2020, 33, 17804–17815. [Google Scholar] [CrossRef]
- Zhong, W.; Meidani, H.; Macfarlane, J. Attention-based Spatial-Temporal Graph Neural ODE for Traffic Prediction. arXiv 2023, arXiv:2305.00985. [Google Scholar] [CrossRef]
- Liu, H.; Dong, Z.; Jiang, R.; Deng, J.; Deng, J.; Chen, Q.; Song, X. Spatio-temporal adaptive embedding makes vanilla transformer sota for traffic forecasting. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–25 October 2023; pp. 4125–4129. [Google Scholar] [CrossRef]
- Dai, B.A.; Ye, B.L. A novel hybrid time-varying graph neural network for traffic flow forecasting. arXiv 2024, arXiv:2401.10155. [Google Scholar] [CrossRef]
- Zhu, Y.; Xu, W.; Zhang, J.; Du, Y.; Zhang, J.; Liu, Q.; Yang, C.; Wu, S. A survey on graph structure learning: Progress and opportunities. arXiv 2021, arXiv:2103.03036. [Google Scholar] [CrossRef]
- Xue, G.; Zhong, M.; Li, J.; Chen, J.; Zhai, C.; Kong, R. Dynamic network embedding survey. Neurocomputing 2022, 472, 212–223. [Google Scholar] [CrossRef]
- Cao, D.; Wang, Y.; Duan, J.; Zhang, C.; Zhu, X.; Huang, C.; Tong, Y.; Xu, B.; Bai, J.; Tong, J.; et al. Spectral temporal graph neural network for multivariate time-series forecasting. Adv. Neural Inf. Process. Syst. 2020, 33, 17766–17778. [Google Scholar] [CrossRef]
- Zhou, Z.; Zhou, S.; Mao, B.; Zhou, X.; Chen, J.; Tan, Q.; Zha, D.; Wang, C.; Feng, Y.; Chen, C. Opengsl: A comprehensive benchmark for graph structure learning. arXiv 2023, arXiv:2306.10280. [Google Scholar] [CrossRef]
- Li, R.; Wang, S.; Zhu, F.; Huang, J. Adaptive graph convolutional neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar] [CrossRef]
- Franceschi, L.; Niepert, M.; Pontil, M.; He, X. Learning discrete structures for graph neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; PMLR: London, UK, 2019; pp. 1972–1982. [Google Scholar] [CrossRef]
- Zheng, C.; Zong, B.; Cheng, W.; Song, D.; Ni, J.; Yu, W.; Chen, H.; Wang, W. Robust graph representation learning via neural sparsification. In Proceedings of the International Conference on Machine Learning, Virtual, 12–18 July 2020; PMLR: London, UK, 2020; pp. 11458–11468. [Google Scholar]
- Yang, L.; Kang, Z.; Cao, X.; Jin, D.; Yang, B.; Guo, Y. Topology Optimization based Graph Convolutional Network. In Proceedings of the International Joint Conference on Artificial Intelligence, Macau, China, 10–16 August 2019; pp. 4054–4061. [Google Scholar] [CrossRef]
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Chang, X.; Zhang, C. Connecting the dots: Multivariate time series forecasting with graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual, 6–10 July 2020; pp. 753–763. [Google Scholar] [CrossRef]
- Zügner, D.; Aubet, F.X.; Satorras, V.G.; Januschowski, T.; Günnemann, S.; Gasthaus, J. A study of joint graph inference and forecasting. arXiv 2021, arXiv:2109.04979. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Bengio, S.; Vinyals, O.; Jaitly, N.; Shazeer, N. Scheduled sampling for sequence prediction with recurrent neural networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef]
- Dauphin, Y.N.; Fan, A.; Auli, M.; Grangier, D. Language modeling with gated convolutional networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; PMLR: London, UK, 2017; pp. 933–941. [Google Scholar] [CrossRef]
- Cai, C.; Wang, Y. A note on over-smoothing for graph neural networks. arXiv 2020, arXiv:2006.13318. [Google Scholar] [CrossRef]
- Yu, H.; Li, T.; Yu, W.; Li, J.; Huang, Y.; Wang, L.; Liu, A. Regularized graph structure learning with semantic knowledge for multi-variates time-series forecasting. arXiv 2022, arXiv:2210.06126. [Google Scholar] [CrossRef]
- Williams, B.M.; Hoel, L.A. Modeling and forecasting vehicular traffic flow as a seasonal ARIMA process: Theoretical basis and empirical results. J. Transp. Eng. 2003, 129, 664–672. [Google Scholar] [CrossRef]
- Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 2014, 27. [Google Scholar] [CrossRef]
- Zheng, C.; Fan, X.; Wang, C.; Qi, J. Gman: A graph multi-attention network for traffic prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1234–1241. [Google Scholar] [CrossRef]
- Ye, J.; Sun, L.; Du, B.; Fu, Y.; Xiong, H. Coupled layer-wise graph convolution for transportation demand prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 4617–4625. [Google Scholar] [CrossRef]
- Lee, H.; Jin, S.; Chu, H.; Lim, H.; Ko, S. Learning to remember patterns: Pattern matching memory networks for traffic forecasting. arXiv 2021, arXiv:2110.10380. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
- Wang, G.; Ying, R.; Huang, J.; Leskovec, J. Multi-hop attention graph neural network. arXiv 2020, arXiv:2009.14332. [Google Scholar] [CrossRef]
| Dataset | Nodes | Edges | Time Steps | Data Points |
|---|---|---|---|---|
| METR-LA | 207 | 1515 | 34,272 | 6,519,002 |
| PEMS-BAY | 325 | 2369 | 52,116 | 16,937,179 |
| Dataset | Model | 15 min MAE | 15 min RMSE | 15 min MAPE (%) | 30 min MAE | 30 min RMSE | 30 min MAPE (%) | 60 min MAE | 60 min RMSE | 60 min MAPE (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| METR-LA | HA | 4.16 | 7.80 | 13.00 | 4.16 | 7.80 | 13.00 | 4.16 | 7.80 | 13.00 |
| METR-LA | ARIMA [57] | 3.99 | 8.12 | 9.60 | 5.15 | 10.45 | 12.70 | 6.90 | 13.23 | 17.40 |
| METR-LA | FC-LSTM [58] | 3.44 | 6.30 | 9.60 | 3.77 | 7.23 | 10.90 | 4.37 | 8.69 | 13.20 |
| METR-LA | DCRNN [22] | 2.77 | 5.38 | 7.30 | 3.15 | 6.45 | 8.80 | 3.60 | 7.60 | 10.50 |
| METR-LA | GW-Net [31] | 2.69 | 5.15 | 6.90 | 3.07 | 6.22 | 8.37 | 3.53 | 7.37 | 10.01 |
| METR-LA | GMAN [59] | 2.80 | 5.55 | 7.41 | 3.12 | 6.49 | 8.73 | 3.44 | 7.35 | 10.07 |
| METR-LA | CCRNN [60] | 2.85 | 5.54 | 7.50 | 3.24 | 6.54 | 8.90 | 3.73 | 7.65 | 10.59 |
| METR-LA | GTS [33] | 2.65 | 5.22 | 6.83 | 3.09 | 6.34 | 8.45 | 3.59 | 7.29 | 9.83 |
| METR-LA | PM-MemNet [61] | 2.65 | 5.29 | 7.01 | 3.03 | 6.29 | 8.42 | 3.46 | 7.56 | 10.26 |
| METR-LA | ST-DMN | 2.63 | 5.09 | 6.65 | 3.01 | 6.15 | 8.08 | 3.45 | 7.32 | 9.91 |
| PEMS-BAY | HA | 2.88 | 5.59 | 6.80 | 2.88 | 5.59 | 6.80 | 2.88 | 5.59 | 6.80 |
| PEMS-BAY | ARIMA [57] | 1.62 | 3.30 | 3.50 | 2.33 | 4.76 | 5.40 | 3.38 | 6.50 | 8.30 |
| PEMS-BAY | FC-LSTM [58] | 2.05 | 4.19 | 4.80 | 2.20 | 4.55 | 5.20 | 2.37 | 4.96 | 5.70 |
| PEMS-BAY | DCRNN [22] | 1.38 | 2.95 | 2.90 | 1.74 | 3.97 | 3.90 | 2.07 | 4.74 | 4.90 |
| PEMS-BAY | GW-Net [31] | 1.30 | 2.74 | 2.70 | 1.63 | 3.70 | 3.70 | 1.95 | 4.52 | 4.60 |
| PEMS-BAY | GMAN [59] | 1.35 | 2.90 | 2.87 | 1.65 | 3.82 | 3.74 | 1.92 | 4.49 | 4.52 |
| PEMS-BAY | CCRNN [60] | 1.38 | 2.90 | 2.90 | 1.74 | 3.87 | 3.90 | 2.07 | 4.65 | 4.87 |
| PEMS-BAY | GTS [33] | 1.39 | 2.95 | 2.88 | 1.78 | 4.06 | 3.98 | 2.24 | 5.17 | 5.35 |
| PEMS-BAY | PM-MemNet [61] | 1.34 | 2.82 | 2.81 | 1.65 | 3.76 | 3.71 | 1.95 | 4.49 | 4.54 |
| PEMS-BAY | ST-DMN | 1.30 | 2.74 | 2.73 | 1.62 | 3.73 | 3.67 | 1.89 | 4.46 | 4.51 |
| Configurations | k (hops) | – | Embedding | MAE | RMSE | MAPE (%) |
|---|---|---|---|---|---|---|
| k (hops) | 1 | 0.15 | 16 | 3.01 | 6.12 | 8.02 |
|  | 2 | 0.15 | 16 | 3.00 | 6.11 | 8.08 |
|  | 3 | 0.15 | 16 | 2.97 | 6.02 | 7.98 |
|  | 4 | 0.15 | 16 | 2.98 | 6.10 | 8.03 |
| – | 3 | 0.1 | 16 | 2.98 | 6.04 | 7.94 |
|  | 3 | 0.15 | 16 | 2.97 | 6.02 | 7.98 |
|  | 3 | 0.3 | 16 | 2.99 | 6.05 | 7.98 |
|  | 3 | 0.4 | 16 | 2.99 | 6.07 | 7.99 |
|  | 3 | 0.5 | 16 | 3.00 | 6.05 | 8.01 |
| Embedding | 3 | 0.15 | 8 | 3.00 | 6.12 | 7.99 |
|  | 3 | 0.15 | 16 | 2.97 | 6.02 | 7.98 |
|  | 3 | 0.15 | 32 | 3.01 | 6.10 | 7.99 |
|  | 3 | 0.15 | 64 | 3.00 | 6.12 | 8.19 |
| Models | ST-DMN | PM-MemNet | GTS | DCRNN | GW-Net |
|---|---|---|---|---|---|
| Time (s/epoch) | 82.44 | 131.38 | 727.60 | 249.31 | 53.68 |
| Dataset | Variant | 15 min MAE | 15 min RMSE | 15 min MAPE (%) | 30 min MAE | 30 min RMSE | 30 min MAPE (%) | 60 min MAE | 60 min RMSE | 60 min MAPE (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| METR-LA | w/o Transformer Layer | 2.64 | 5.12 | 6.75 | 3.02 | 6.20 | 8.20 | 3.48 | 7.39 | 9.94 |
| METR-LA | w/o DGL | 2.66 | 5.14 | 6.71 | 3.04 | 6.24 | 8.14 | 3.50 | 7.44 | 9.87 |
| METR-LA | w/o Multi-Hop | 2.66 | 5.17 | 6.72 | 3.03 | 6.23 | 8.17 | 3.47 | 7.32 | 9.89 |
| METR-LA | ST-DMN | 2.63 | 5.09 | 6.65 | 3.01 | 6.15 | 8.08 | 3.45 | 7.32 | 9.91 |
| PEMS-BAY | w/o Transformer Layer | 1.31 | 2.75 | 2.77 | 1.64 | 3.77 | 3.76 | 1.92 | 4.52 | 4.59 |
| PEMS-BAY | w/o DGL | 1.31 | 2.77 | 2.77 | 1.63 | 3.77 | 3.73 | 1.92 | 4.54 | 4.60 |
| PEMS-BAY | w/o Multi-Hop | 1.32 | 2.77 | 2.77 | 1.65 | 3.78 | 3.73 | 1.94 | 4.53 | 4.57 |
| PEMS-BAY | ST-DMN | 1.30 | 2.74 | 2.73 | 1.62 | 3.73 | 3.67 | 1.89 | 4.46 | 4.51 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chai, W.; Luo, Q.; Lin, Z.; Yan, J.; Zhou, J.; Zhou, T. Spatiotemporal Dynamic Multi-Hop Network for Traffic Flow Forecasting. Sustainability 2024, 16, 5860. https://doi.org/10.3390/su16145860