The Capacity of the Road Network: Data Collection and Statistical Analysis of Traffic Characteristics
Abstract
1. Introduction
2. Literature Review
3. Methods
3.1. Objective and Scope
3.2. Research Methodology
3.3. Statistical Methods
3.4. Data Post-Processing
3.4.1. Multiple Linear Regression
3.4.2. Multidimensional Scaling
4. Fuzzy Method
5. Results and Discussion
6. Conclusions
- The duration of the permissive (green) traffic-light signal has the greatest effect on the dependent variable.
- The sampled maximum possible number of vehicles passing in the absence of pedestrians.
- L2—the curvature of the carriageway when turning right.
- t4—the time for the 1st vehicle to clear the pedestrian crossing, taking into account the 1 m distance to the crossing and its clearance.
- The number of vehicles passing through the 2nd window.
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Variable | Units |
---|---|
Duration of the permissive (green) traffic-light signal | seconds
Number of pedestrians in the direction of the vehicle (right) | count
Number of pedestrians in the direction of the vehicle (left) | count
Duration of the 1st free window in the pedestrian stream available for passing | seconds
Number of vehicles passing through the 1st window | count
Duration of the 2nd free window in the pedestrian stream available for passing | seconds
Number of vehicles passing through the 2nd window | count
Duration of the 3rd free window in the pedestrian stream available for passing | seconds
Number of vehicles passing through the 3rd window | count
Passing time through a free window in the pedestrian stream, taking into account the 1 m distance to the pedestrian crossing and its clearance | seconds
Number of vehicles queuing while waiting for pedestrians to pass | count
t1—travel time of the 1st vehicle from the stop line to the beginning of the curve | seconds
t2—travel time of the 1st vehicle along the arc (until exiting the turn) | seconds
t3—travel time of the 1st vehicle on the approach segment to the pedestrian crossing after exiting the turn | seconds
t4—time for the 1st vehicle to clear the pedestrian crossing, taking into account the 1 m distance to the crossing and its clearance | seconds
Number of vehicles completing their passage on the red traffic-light signal | count
L1—distance from the stop line to the border of the intersection with the conflicting direction | meters
L2—curvature of the carriageway when turning right | meters
L3—distance from the end point of the carriageway curvature (intersection border) to the pedestrian crossing when turning right | meters
Sampled maximum possible number of vehicles passing in the absence of pedestrians | count
Actual number of vehicles passing | count
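For readers who want to reproduce the post-processing step, the sketch below shows one way the observed variables could be assembled and passed to a multiple linear regression (Section 3.4.1). It is a minimal illustration, not the authors' original pipeline: the file name `traffic_observations.csv` and the column names (`green_signal_s`, `actual_vehicles`, etc.) are hypothetical placeholders for the variables in the table above, and only a subset of the 20 predictors is listed for brevity.

```python
# Minimal sketch (assumed file and column names) of fitting a multiple linear
# regression of the actual number of passing vehicles on observed traffic
# characteristics.
import pandas as pd
import statsmodels.api as sm

# Hypothetical CSV with one row per observed signal cycle.
df = pd.read_csv("traffic_observations.csv")

predictors = [
    "green_signal_s",       # duration of the permissive (green) signal
    "pedestrians_right",    # pedestrians in the direction of the vehicle (right)
    "pedestrians_left",     # pedestrians in the direction of the vehicle (left)
    "window1_duration_s",   # duration of the 1st free window
    "vehicles_window1",     # vehicles passing through the 1st window
    "t4_clear_crossing_s",  # t4: time for the 1st vehicle to clear the crossing
    "L2_curvature_m",       # L2: curvature of the carriageway when turning right
    "max_vehicles_no_ped",  # sampled maximum number of vehicles without pedestrians
]

X = sm.add_constant(df[predictors])   # add the intercept term
y = df["actual_vehicles"]             # actual number of passing vehicles

model = sm.OLS(y, X).fit()
print(model.summary())                # R^2, adjusted R^2, Durbin-Watson, coefficients
```

The summary output of such a fit reports the same diagnostics shown in the model summary table below (R, R Square, Adjusted R Square, Durbin-Watson) and the unstandardized coefficients listed in the coefficients table.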
Model | R | R Square | Adjusted R Square | Std. Error of the Estimate | Durbin-Watson |
---|---|---|---|---|---|
1 | 0.950 | 0.902 | 0.409 | 1.881 | 1.533 |
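The statistics in this table follow their standard definitions; the hedged sketch below shows how they could be recomputed from a fitted model's residuals. The inputs `y`, `y_hat` and the predictor count `p` are illustrative, not values taken from the study.

```python
# Sketch of the standard formulas behind the model-summary statistics.
import numpy as np

def model_summary(y, y_hat, p):
    """Return R^2, adjusted R^2 and Durbin-Watson for observations y,
    fitted values y_hat and p predictors (intercept excluded)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    resid = y - y_hat
    n = y.size

    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

    # Durbin-Watson: values near 2 indicate little autocorrelation in residuals.
    dw = np.sum(np.diff(resid) ** 2) / ss_res
    return r2, adj_r2, dw
```

The gap between R Square (0.902) and Adjusted R Square (0.409) reflects the penalty term (n − 1)/(n − p − 1), which grows when the number of predictors is large relative to the sample size.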
Coefficients

Variable | Unstandardized Coefficients (B) | Standardized Coefficients (Beta)
---|---|---
(Constant) | 0.614 |
Duration of the permissive (green) traffic-light signal | 0.303 | 1.275
Sampled maximum possible number of vehicles passing in the absence of pedestrians | −0.360 | −0.752
L2—curvature of the carriageway when turning right | 0.189 | 0.573
t4—time for the 1st vehicle to clear the pedestrian crossing, taking into account the 1 m distance to the crossing and its clearance | 0.358 | 0.522
Number of vehicles passing through the 2nd window | 0.461 | 0.426
Number of pedestrians in the direction of the vehicle (left) | −0.280 | −0.417
L1—distance from the stop line to the border of the intersection with the conflicting direction | −0.184 | −0.396
Duration of the 1st free window in the pedestrian stream available for passing | −0.099 | −0.268
t2—travel time of the 1st vehicle along the arc (until exiting the turn) | 0.242 | 0.250
Number of vehicles passing through the 1st window | 0.272 | 0.232
Duration of the 3rd free window in the pedestrian stream available for passing | −0.392 | −0.210
Duration of the 2nd free window in the pedestrian stream available for passing | 0.102 | 0.166
Number of pedestrians in the direction of the vehicle (right) | −0.117 | −0.159
Number of vehicles passing through the 3rd window | 0.531 | 0.106
Passing time through a free window in the pedestrian stream, taking into account the 1 m distance to the pedestrian crossing and its clearance | 0.056 | 0.098
Number of vehicles queuing while waiting for pedestrians to pass | 0.099 | 0.098
L3—distance from the end point of the carriageway curvature (intersection border) to the pedestrian crossing when turning right | 0.049 | 0.062
t1—travel time of the 1st vehicle from the stop line to the beginning of the curve | −0.068 | −0.046
t3—travel time of the 1st vehicle on the approach segment to the pedestrian crossing after exiting the turn | 0.110 | 0.041
Number of vehicles completing their passage on the red traffic-light signal | 0.013 | 0.005
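The rows of this table, and the key-findings list near the top of the section, are ordered by the magnitude of the standardized (Beta) coefficients, which make predictors measured in different units comparable. As a hedged illustration (the DataFrame `df`, the target column and the coefficient mapping are assumptions, not the study's data), standardized coefficients can be derived from the unstandardized ones as Beta_j = B_j · s_xj / s_y:

```python
# Sketch: convert unstandardized coefficients B to standardized Beta values
# and rank predictors by their absolute influence on the dependent variable.
import numpy as np
import pandas as pd

def standardized_betas(df: pd.DataFrame, target: str, b: dict) -> pd.Series:
    """Beta_j = B_j * std(x_j) / std(y); b maps predictor column -> B."""
    s_y = df[target].std(ddof=1)
    betas = {col: coef * df[col].std(ddof=1) / s_y for col, coef in b.items()}
    return pd.Series(betas).sort_values(key=np.abs, ascending=False)

# Example with hypothetical column names:
# ranked = standardized_betas(df, "actual_vehicles",
#                             {"green_signal_s": 0.303, "max_vehicles_no_ped": -0.360})
# print(ranked)   # largest |Beta| first, matching the ordering of the table above
```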
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Shepelev, V.; Aliukov, S.; Nikolskaya, K.; Shabiev, S. The Capacity of the Road Network: Data Collection and Statistical Analysis of Traffic Characteristics. Energies 2020, 13, 1765. https://doi.org/10.3390/en13071765