Fusion of Deep Sort and Yolov5 for Effective Vehicle Detection and Tracking Scheme in Real-Time Traffic Management Sustainable System
Abstract
1. Introduction
1.1. Contribution
- We propose an effective vehicle detection and tracking scheme that fuses Deep SORT with YOLOv5 in a real-time traffic management system.
- Vehicles are detected with YOLOv5, which localizes each vehicle's current position with a bounding box and assigns it a class label.
- The Deep SORT algorithm tracks and counts the vehicles from the bounding boxes produced by the YOLOv5 model. It also strengthens detection and tracking by reducing erroneous and missed detections caused by extraneous factors.
- Unique IDs are assigned to each vehicle so that traffic flow can be monitored and vehicles can be tracked and counted as they pass through hot zones and virtual lines.
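The virtual-line counting idea in the last bullet can be sketched as follows. This is a minimal illustration, not the paper's implementation: the horizontal line geometry, the track-history format, and the downward-crossing rule are all assumptions made for the example.

```python
# Hypothetical sketch: count vehicles whose tracked centroids cross a
# horizontal virtual line. Each track ID is counted at most once.
def count_line_crossings(track_history, line_y):
    """track_history: dict mapping track ID -> list of (x, y) centroids,
    one per frame. A vehicle is counted when its centroid moves from
    above line_y to at or below it (top-to-bottom crossing)."""
    counted = set()
    for track_id, centroids in track_history.items():
        for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
            if y0 < line_y <= y1:  # crossed the virtual line downward
                counted.add(track_id)
                break
    return len(counted)
```

A hot zone would be handled analogously, testing centroid membership in a polygon instead of a line crossing.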
1.2. Organization
2. Related Work
2.1. Research Study of Vehicle Detection
2.2. Research Study of Vehicle Tracking
2.3. Previous Works
2.4. Key Consideration
- Accuracy of vehicle detection: Accuracy is crucial in vehicle detection. Ensure that the detection model can accurately detect vehicles in varied scenarios, including different vehicle types, lighting conditions, weather conditions, and occlusions. Evaluate detection accuracy with appropriate metrics such as precision, recall, and F1 score.
- Accurate and consistent vehicle tracking: Vehicle tracking should provide accurate and consistent results over time. The tracking algorithm should be able to associate detected vehicles across frames and maintain their identities, even in situations with occlusions or appearance changes. Consider the use of techniques like Deep SORT or Kalman filtering to improve the tracking performance.
- Multi-object tracking: Consider the ability of the system to track multiple vehicles simultaneously. This is especially important in scenarios with heavy traffic or crowded scenes. The tracking algorithm should handle multiple objects and maintain their identities correctly, without confusing or swapping identities.
- Adaptability to different environments: Consider the adaptability of the system to different environments or domains. A robust vehicle detection and tracking system should be able to generalize well and perform effectively in various scenarios, such as urban environments, highways, or off-road situations.
- Limitations and future directions: Discuss the limitations of the proposed system for vehicle detection and tracking and outline potential areas for improvement. Highlight any challenges faced during the development of the system and propose future research directions, such as exploring advanced algorithms, incorporating contextual information, or addressing specific use cases.
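The detection-accuracy metrics named in the first consideration follow directly from per-frame match counts. A minimal sketch, assuming the true-positive/false-positive/false-negative counts have already been obtained elsewhere (e.g., by IoU matching of detections against ground truth):

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall, and F1 score from true-positive,
    false-positive, and false-negative detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, 80 correct detections with 20 spurious boxes and 20 missed vehicles yield precision, recall, and F1 of 0.8 each.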
3. Proposed Vehicle Detection and Tracking Scheme
3.1. Overview of Proposed Work
- Object detection with YOLOv5:
- YOLOv5 is employed for real-time object detection, including vehicles, in each frame of a video or a sequence of images.
- The output of YOLOv5 includes bounding boxes around detected objects along with class labels and confidence scores.
- Feature extraction and data association with Deep SORT:
- Deep Simple Online and Realtime Tracking (SORT) is utilized for object tracking and maintaining unique track identities across frames.
- Features, such as appearance and motion information, are extracted from the bounding boxes generated by YOLOv5.
- The Kalman filter is employed to predict the next location of each track based on its historical motion.
- The Hungarian algorithm is often used for data association, associating the predicted tracks with the newly detected bounding boxes.
- Updating Track Information:
- The association step helps link the detected bounding boxes with existing tracks, updating the track information with the latest detection.
- Tracks that are not associated with any new detection for a certain period may be considered as finished tracks, while new detections that are not associated with any existing track may result in the creation of new tracks.
- Handling occlusions and ambiguities:
- Deep SORT is designed to handle challenges such as occlusions, where a vehicle may be temporarily hidden from view by another object.
- The combination of YOLOv5’s real-time detection and Deep SORT’s tracking helps maintain track identities even when vehicles are temporarily obscured.
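The four stages above can be sketched as a single per-frame loop. In this simplified stand-in, detections per frame are taken as given (the YOLOv5 output), and association is nearest-centroid with a distance gate rather than Deep SORT's appearance-plus-motion matching; `max_dist` and `max_age` are illustrative thresholds, not values from the paper.

```python
import math

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def track_frames(frames_of_boxes, max_dist=50.0, max_age=2):
    """frames_of_boxes: per-frame lists of (x1, y1, x2, y2) detections.
    Returns, for each frame, a dict of track ID -> last matched box."""
    tracks = {}      # id -> {"box": last box, "age": frames since a match}
    next_id = 0
    history = []
    for boxes in frames_of_boxes:
        unmatched = list(boxes)
        # Associate each live track with its closest new detection.
        for tid, t in list(tracks.items()):
            best, best_d = None, max_dist
            for b in unmatched:
                d = math.dist(centroid(t["box"]), centroid(b))
                if d < best_d:
                    best, best_d = b, d
            if best is not None:
                t["box"], t["age"] = best, 0
                unmatched.remove(best)
            else:
                t["age"] += 1
                if t["age"] > max_age:   # unmatched too long: finished track
                    del tracks[tid]
        for b in unmatched:              # leftover detections spawn new tracks
            tracks[next_id] = {"box": b, "age": 0}
            next_id += 1
        history.append({tid: t["box"] for tid, t in tracks.items()})
    return history
```

Keeping an aged track alive for a few frames is what lets an occluded vehicle reclaim its old ID when it reappears nearby.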
3.2. YOLOv5 Model Overview
3.3. Deep SORT Algorithm for Vehicle Tracking
Algorithm 1: Kalman filter algorithm for vehicle detection–tracking.
Input: A bounding box from YOLO's detection matrix, together with the bounding box's aspect ratio and height.
Output: Current-frame prediction based on the target's previous position.
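The predict/update cycle of Algorithm 1 can be illustrated with a constant-velocity Kalman filter over a single coordinate. Deep SORT runs the same recursion over an 8-dimensional state (center position, aspect ratio, height, and their velocities); the noise values `q` and `r` below are illustrative, not tuned.

```python
class KalmanCV:
    """Constant-velocity Kalman filter for one coordinate
    (e.g., a bounding box's center x)."""
    def __init__(self, pos, q=0.01, r=1.0):
        self.pos, self.vel = pos, 0.0
        # 2x2 state covariance, stored element-wise
        self.p00, self.p01, self.p10, self.p11 = 1.0, 0.0, 0.0, 1.0
        self.q, self.r = q, r            # process / measurement noise

    def predict(self):
        """Project the state one frame ahead with the motion model."""
        self.pos += self.vel
        self.p00 += self.p01 + self.p10 + self.p11 + self.q
        self.p01 += self.p11
        self.p10 += self.p11
        self.p11 += self.q
        return self.pos

    def update(self, z):
        """Correct the prediction with a measured position z."""
        y = z - self.pos                        # innovation
        s = self.p00 + self.r                   # innovation covariance
        k0, k1 = self.p00 / s, self.p10 / s     # Kalman gain
        self.pos += k0 * y
        self.vel += k1 * y
        p00, p01 = self.p00, self.p01
        self.p00, self.p01 = (1 - k0) * p00, (1 - k0) * p01
        self.p10, self.p11 = self.p10 - k1 * p00, self.p11 - k1 * p01
```

Fed a target moving at constant speed, the filter's one-step-ahead prediction converges toward the target's next position, which is exactly the prior the tracker uses for data association.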
Algorithm 2: Hungarian algorithm for unmatched detection and tracking.
Input: An n × n square matrix of detections (Detection A, Detection B, Detection C, …) with IDs (0, 1, 2, …).
Output: Unmatched detections and unmatched tracks.
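The assignment step of Algorithm 2 can be sketched as follows. For clarity this uses an exhaustive search over permutations, which yields the same minimum-cost one-to-one assignment the Hungarian algorithm computes in O(n³) (in practice one would call `scipy.optimize.linear_sum_assignment`); the `max_cost` gate is an illustrative threshold.

```python
from itertools import permutations

def associate(cost, max_cost=0.7):
    """cost[i][j]: cost of assigning track i to detection j (n x n matrix,
    e.g., 1 - IoU or an appearance distance).
    Returns (matches, unmatched_tracks, unmatched_detections)."""
    n = len(cost)
    # Minimum-cost one-to-one assignment (what the Hungarian method finds).
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    # Gate out assignments whose cost is too high to be a plausible match.
    matches = [(i, best[i]) for i in range(n) if cost[i][best[i]] <= max_cost]
    matched_t = {i for i, _ in matches}
    matched_d = {j for _, j in matches}
    return (matches,
            [i for i in range(n) if i not in matched_t],
            [j for j in range(n) if j not in matched_d])
```

Tracks and detections left unmatched by the gate are exactly the "unmatched tracking" and "unmatched detections" the algorithm box names.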
Algorithm 3: Cascade matching and IOU tracking.
Input: Trackers (detections, …).
Output: Vehicle tracking and assignment of IDs.
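The overlap measure on which Algorithm 3's IOU tracking stage relies is the standard intersection-over-union of axis-aligned boxes:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if none)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A pair with IoU 1 is a perfect overlap, disjoint boxes score 0, and a track/detection pair is typically accepted only above a fixed IoU threshold.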
3.4. Methodological Flow of Proposed Work
Algorithm 4: Vehicle detection and tracking with YOLOv5 and Deep SORT.
Input: A set of images or videos.
Output: Target position of the bounding box () predicted by YOLOv5, and vehicle tracking and counting by Deep SORT.
3.5. Vehicle Identification and Vehicle Tracking
3.6. Vehicle Tracking with Virtual Lines and Hot Zones
3.7. Enhancing the Detection and Tracking of Small Objects Using YOLOv5 and Deep SORT
4. Evaluation and Performance Results
4.1. Results Discussion
4.2. Vehicle Detection Result Analysis
4.3. Vehicle Counting and Tracking Result Analysis
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Xu, P.; Tan, Q.; Zhang, Y.; Zha, X.; Yang, S.; Yang, R. Research on maize seed classification and recognition based on machine vision and deep learning. Agriculture 2022, 12, 232. [Google Scholar] [CrossRef]
- Cao, J.; Song, C.; Song, S.; Peng, S.; Wang, D.; Shao, Y.; Xiao, F. Front vehicle detection algorithm for smart car based on improved SSD model. Sensors 2020, 20, 4646. [Google Scholar] [CrossRef] [PubMed]
- Ali, S.M.; Appolloni, A.; Cavallaro, F.; D’Adamo, I.; Di Vaio, A.; Ferella, F.; Gastaldi, M.; Ikram, M.; Kumar, N.M.; Martin, M.A.; et al. Development Goals towards Sustainability. Sustainability 2023, 15, 9443. [Google Scholar] [CrossRef]
- Le, N.; Rathour, V.S.; Yamazaki, K.; Luu, K.; Savvides, M. Deep reinforcement learning in computer vision: A comprehensive survey. Artif. Intell. Rev. 2022, 55, 2733–2819. [Google Scholar] [CrossRef]
- Kuswantori, A.; Suesut, T.; Tangsrirat, W.; Schleining, G.; Nunak, N. Fish Detection and Classification for Automatic Sorting System with an Optimized YOLO Algorithm. Appl. Sci. 2023, 13, 3812. [Google Scholar] [CrossRef]
- Qiu, Z.; Bai, H.; Chen, T. Special Vehicle Detection from UAV Perspective via YOLO-GNS Based Deep Learning Network. Drones 2023, 7, 117. [Google Scholar] [CrossRef]
- Wu, Z.; Sang, J.; Zhang, Q.; Xiang, H.; Cai, B.; Xia, X. Multi-scale vehicle detection for foreground-background class imbalance with improved YOLOv2. Sensors 2019, 19, 3336. [Google Scholar] [CrossRef]
- Li, X.-Q.; Song, L.-K.; Choy, Y.-S.; Bai, G.-C. Multivariate ensembles-based hierarchical linkage strategy for system reliability evaluation of aeroengine cooling blades. Aerosp. Sci. Technol. 2023, 138, 108325. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef]
- Kumar, S.; Jailia, M.; Varshney, S.; Pathak, N.; Urooj, S.; Elmunim, N.A. Robust vehicle detection based on improved you look only once. Comput. Mater. Contin. 2023, 74, 3561–3577. [Google Scholar] [CrossRef]
- Okafor, E.; Udekwe, D.; Ibrahim, Y.; Mu’Azu, M.B.; Okafor, E.G. Heuristic and deep reinforcement learning-based PID control of trajectory tracking in a ball-and-plate system. J. Inf. Telecommun. 2021, 5, 179–196. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. arXiv 2017, arXiv:1703.06870. [Google Scholar]
- Kumar, S.; Jailia, M.; Varshney, S. An efficient approach for highway lane detection based on the Hough transform and Kalman filter. Innov. Infrastruct. Solut. 2022, 7, 290. [Google Scholar] [CrossRef]
- Song, S.; Li, Y.; Huang, Q.; Li, G. A new real-time detection and tracking method in videos for small target traffic signs. Appl. Sci. 2021, 11, 3061. [Google Scholar] [CrossRef]
- Malta, A.; Mendes, M.; Farinha, T. Augmented reality maintenance assistant using YOLOv5. Appl. Sci. 2021, 11, 4758. [Google Scholar] [CrossRef]
- Parico, A.I.B.; Ahamed, T. Real time pear fruit detection and counting using YOLOv4 models and Deep SORT. Sensors 2021, 21, 4803. [Google Scholar] [CrossRef]
- Kumar, S.; Jailia, M.; Varshney, S. Improved YOLOv4 approach: A real time occluded vehicle detection. Int. J. Comput. Digit. Syst. 2022, 12, 489–497. [Google Scholar] [CrossRef] [PubMed]
- Xue, Z.; Xu, R.; Bai, D.; Lin, H. YOLO-Tea: A tea disease detection model improved by YOLOv5. Forests 2023, 14, 415. [Google Scholar] [CrossRef]
- Kim, J.-H.; Kim, N.; Park, Y.W.; Won, C.S. Object detection and classification based on YOLO-V5 with improved maritime dataset. J. Mar. Sci. Eng. 2022, 10, 377. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
- Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
- Singh, S.K.; Yang, L.T.; Park, J.H. FusionFedBlock: Fusion of blockchain and federated learning to preserve privacy in industry 5.0. Inf. Fusion 2023, 90, 233–240. [Google Scholar] [CrossRef]
- Pan, Q.; Zhang, H. Key Algorithms of video target detection and recognition in intelligent transportation systems. Int. J. Pattern Recognit. Artif. Intell. 2020, 34, 2055016. [Google Scholar] [CrossRef]
- Li, X.-Q.; Song, L.-K.; Bai, G.-C. Deep learning regression-based stratified probabilistic combined cycle fatigue damage evaluation for turbine bladed disks. Int. J. Fatigue 2022, 159, 106812. [Google Scholar] [CrossRef]
- Ge, W.; Yang, S.; Yu, Y. Multi-evidence filtering and fusion for multi-label classification, object detection and semantic segmentation based on weakly supervised learning. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 1277–1286. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Liao, H.-Y.M.; Wu, Y.-H.; Chen, P.-Y.; Hsieh, J.-W.; Yeh, I.-H. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Li, Y.; Zhang, X.; Shen, Z. YOLO-Submarine Cable: An improved YOLO-V3 network for object detection on submarine cable images. J. Mar. Sci. Eng. 2022, 10, 1143. [Google Scholar] [CrossRef]
- Yue, X.; Li, H.; Shimizu, M.; Kawamura, S.; Meng, L. YOLO-GD: A deep learning-based object detection algorithm for empty-dish recycling robots. Machines 2022, 10, 294. [Google Scholar] [CrossRef]
- Huang, Z.; Wang, J.; Fu, X.; Yu, T.; Guo, Y.; Wang, R. DC-SPP-YOLO: Dense connection and spatial pyramid pooling based YOLO for object detection. Inf. Sci. 2020, 522, 241–258. [Google Scholar] [CrossRef]
- Liu, Y.; Lu, B.; Peng, J.; Zhang, Z. Research on the use of YOLOv5 object detection algorithm in mask wearing recognition. World Sci. Res. J. 2020, 6, 276–284. [Google Scholar]
- Yan, B.; Fan, P.; Lei, X.; Liu, Z.; Yang, F. A real-time apple targets detection method for picking robot based on improved YOLOv5. Remote Sens. 2021, 13, 1619. [Google Scholar] [CrossRef]
- Reid, D.B. An algorithm for tracking multiple targets. IEEE Trans. Automat. Contr. 1979, 24, 843–854. [Google Scholar] [CrossRef]
- Fortmann, T.; Bar-Shalom, Y.; Scheffe, M. Sonar tracking of multiple targets using joint probabilistic data association. IEEE J. Ocean. Eng. 1983, 8, 173–184. [Google Scholar] [CrossRef]
- Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649. [Google Scholar]
- Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
- Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef]
- Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple Online and Realtime Tracking. arXiv 2016, arXiv:1602.00763. [Google Scholar]
- Teoh, S.S.; Bräunl, T. Symmetry-based monocular vehicle detection system. Mach. Vis. Appl. 2012, 23, 831–842. [Google Scholar] [CrossRef]
- Xiaoyong, W.; Bo, W.; Lu, S. Real-time on-road vehicle detection algorithm based on monocular vision. In Proceedings of the 2012 2nd International Conference on Computer Science and Network Technology, Changchun, China, 29–31 December 2012. [Google Scholar]
- Yunzhou, Z.; Pengfei, S.; Jifan, L.; Lei, M. Real-time vehicle detection in highway based on improved Adaboost and image segmentation. In Proceedings of the 2015 IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Shenyang, China, 8–12 June 2015; pp. 2006–2011. [Google Scholar]
- Kim, J.; Baek, J.; Kim, E. A Novel On-Road Vehicle Detection Method Using pi HOG. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3414–3429. [Google Scholar] [CrossRef]
- Latif, G.; Bouchard, K.; Maitre, J.; Back, A.; Bédard, L.P. Deep-learning-based automatic mineral grain segmentation and recognition. Minerals 2022, 12, 455. [Google Scholar] [CrossRef]
- Qu, T.; Zhang, Q.; Sun, S. Vehicle detection from high-resolution aerial images using spatial pyramid pooling-based deep convolutional neural networks. Multimed. Tools Appl. 2017, 76, 21651–21663. [Google Scholar] [CrossRef]
- Liu, W.; Liao, S.; Hu, W. Towards accurate tiny vehicle detection in complex scenes. Neurocomputing 2019, 347, 24–33. [Google Scholar] [CrossRef]
- Wu, W.; Gao, Y.; Bienenstock, E.; Donoghue, J.P.; Black, M.J. Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Comput. 2006, 18, 80–118. [Google Scholar] [CrossRef]
- Punn, N.S.; Sonbhadra, S.K.; Agarwal, S.; Rai, G. Monitoring COVID-19 social distancing with person detection and tracking via fine-tuned YOLO v3 and Deepsort techniques. arXiv 2020, arXiv:2005.01385. [Google Scholar]
- Qiu, Z.; Zhao, N.; Zhou, L.; Wang, M.; Yang, L.; Fang, H.; He, Y.; Liu, Y. Vision-based moving obstacle detection and tracking in paddy field using improved Yolov3 and deep SORT. Sensors 2020, 20, 4082. [Google Scholar] [CrossRef] [PubMed]
- Li, D.; Ahmed, F.; Wu, N.; Sethi, A.I. YOLO-JD: A deep learning network for jute diseases and pests detection from images. Plants 2022, 11, 937. [Google Scholar] [CrossRef]
- Kang, H.; Chen, C. Fast implementation of real-time fruit detection in apple orchards using deep learning. Comput. Electron. Agric. 2020, 168, 105108. [Google Scholar] [CrossRef]
- Simon, M.; Amende, K.; Kraus, A.; Honer, J.; Samann, T.; Kaulbersch, H.; Milz, S.; Michael Gross, H. Complexer-yolo: Real-time 3d object detection and tracking on semantic point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Biffi, L.J.; Mitishita, E.; Liesenberg, V.; dos Santos, A.A.; Gonçalves, D.N.; Estrabis, N.V.; Silva, J.d.A.; Osco, L.P.; Ramos, A.P.M.; Centeno, J.A.S.; et al. ATSS Deep Learning-based approach to detect apple fruits. Remote Sens. 2020, 13, 54. [Google Scholar] [CrossRef]
- Singh, S.K.; Park, J.H.; Sharma, P.K.; Pan, Y. BIIoVT: Blockchain-based secure storage architecture for intelligent internet of vehicular things. IEEE Consum. Electron. Mag. 2022, 11, 75–82. [Google Scholar] [CrossRef]
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Lian, J.; Yin, Y.; Li, L.; Wang, Z.; Zhou, Y. Small object detection in traffic scenes based on attention feature fusion. Sensors 2021, 21, 3031. [Google Scholar] [CrossRef] [PubMed]
Authors | Year | Technique Used | Description | Advantage | Limitation | Accuracy |
---|---|---|---|---|---|---|
Ren et al. [12] | 2015 | Faster R-CNN | Region-based convolutional neural networks | Highly accurate detection | Computationally intensive and slower inference times | High |
Redmon et al. [29] | 2016 | YOLO | Real-time object detection | Real-time detection and tracking | May sacrifice accuracy for speed | Moderate to high |
Li, Y et al. [32] | 2022 | SiamRPN | Siamese network-based visual tracking | Accurate and robust tracking across various scenarios | May require significant computational resources | High |
Wojke et al. [40] | 2017 | DeepSORT | Deep learning-based tracking with ID association | Accurate tracking with ID association | Requires high computational resources | High |
Bewley, A. et al. [43] | 2016 | SORT | Deep learning-based object tracking | Accurate and robust tracking across various scenarios | May require significant computational resources | High |
Zhang et al. [46] | 2015 | HOG with SVM | Pedestrian detection using histogram of oriented gradients | Effective for pedestrian detection | Can be sensitive to lighting and scale variations | Moderate to high |
Latif et al. [48] | 2022 | Haar cascades | Real-time face detection using integral images | Fast and efficient | May struggle with complex backgrounds and occlusions | Moderate |
Liu et al. [50] | 2016 | SSD | Single-shot multibox detector | Good balance between accuracy and speed | Can struggle with detecting small objects or occlusions | Moderate to high |
Wu, W. et al. [51] | 2006 | Kalman filter | Bayesian filtering and prediction | Effective in handling motion prediction | Prone to errors in occluded or non-linear scenarios | Moderate |
Qiu, Z. et al. [53] | 2020 | Particle filter | Sequential Monte Carlo method | Robust in handling non-linear motion and occlusion | Requires careful tuning for optimal performance | Moderate to high |
Proposed work | 2023 | YOLOv5, Deep SORT | Vehicle detection and tracking | Works in a real-time traffic management system | Occlusions, low illumination | 92.18%
Performance Metrics
Methods | Precision | Recall | mAP | FPS |
---|---|---|---|---|
YOLOv5s [18] | 32.5 | 57.7 | 50.6 | 45 |
AFFB_YOLOv5s [17] | 33.0 | 58.3 | 51.5 | 52
YOLOv5 + Deep SORT | 34.7 | 59.3 | 51.7 | 58 |
Methods | Precision | Recall | mAP | FPS |
---|---|---|---|---|
YOLOv5s [18] | 60.3 | 82.3 | 79.4 | 48 |
AFFB_YOLOv5s [17] | 63.4 | 82.9 | 80.8 | 50
YOLOv5 + Deep SORT | 65.7 | 83.4 | 81.2 | 57 |
Methods | Precision | Recall | mAP@0.5
---|---|---|---|
YOLOv4-3SPP [6] | 88.6% | 82.4% | 86.5% |
YOLOv5s [18] | 80.3% | 89% | 90.5% |
YOLOv5 [36] | 83.83% | 91.48% | 86.75% |
YOLOv3 + Deep SORT [59] | 91% | 90% | 84.76% |
YOLOV5 + Deep SORT | 91.25% | 93.52% | 92.18% |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kumar, S.; Singh, S.K.; Varshney, S.; Singh, S.; Kumar, P.; Kim, B.-G.; Ra, I.-H. Fusion of Deep Sort and Yolov5 for Effective Vehicle Detection and Tracking Scheme in Real-Time Traffic Management Sustainable System. Sustainability 2023, 15, 16869. https://doi.org/10.3390/su152416869