YOLO-ABD: A Multi-Scale Detection Model for Pedestrian Anomaly Behavior Detection
Abstract
1. Introduction
- We introduce an end-to-end pedestrian anomaly detection method that uses the SimAM attention mechanism [24] to suppress background interference and incorporates a custom-designed small-object detection head to identify pedestrian anomalies at multiple scales.
- We integrate the GSConv (Group Shuffle Convolution) module, whose symmetrical structure improves the model's accuracy while its channel-shuffle strategy reduces computational complexity, yielding a lighter model.
- We train and validate the proposed method on a public anomaly behavior detection dataset; generalization tests on a traffic scene detection dataset demonstrate significant performance improvements over existing methods.
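As a rough illustration of the parameter-free SimAM gating named in the first contribution, the sketch below follows the published energy formulation in NumPy. This is not the authors' exact implementation; the function name `simam` and the `e_lambda` default are our own assumptions.

```python
import numpy as np

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM attention over a feature map x of shape (N, C, H, W)."""
    n, c, h, w = x.shape
    m = h * w - 1  # number of "other" neurons in each channel
    # Squared deviation of each activation from its channel mean
    mu = x.mean(axis=(2, 3), keepdims=True)
    d = (x - mu) ** 2
    # Channel-wise variance estimate over the remaining neurons
    v = d.sum(axis=(2, 3), keepdims=True) / m
    # Inverse of the minimal neuron energy: distinctive neurons get larger values
    e_inv = d / (4.0 * (v + e_lambda)) + 0.5
    # Sigmoid gating, applied element-wise, rescales the input features
    return x * (1.0 / (1.0 + np.exp(-e_inv)))
```

Because the gate is a sigmoid of a non-negative energy term, every activation is rescaled by a factor in (0.5, 1), so the module adds no learnable parameters while still emphasizing distinctive neurons.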
2. Related Works
3. Methodology
3.1. The General Structure of YOLO-ABD
3.2. Baseline Model
3.3. Small Object Detection Head
3.4. GSConv Module
3.5. SimAM Attention Module
4. Experiments
4.1. Dataset
4.2. Training Setting
4.2.1. Evaluating Indicators
4.2.2. Result Analysis
4.2.3. Generalization Study
4.2.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Pang, G.; Shen, C.; Cao, L.; Hengel, A.V.D. Deep learning for anomaly detection: A review. ACM Comput. Surv. 2021, 54, 1–38. [Google Scholar] [CrossRef]
- Nassif, A.B.; Talib, M.A.; Nasir, Q.; Dakalbab, F.M. Machine learning for anomaly detection: A systematic review. IEEE Access 2021, 9, 78658–78700. [Google Scholar] [CrossRef]
- Ristea, N.C.; Madan, N.; Ionescu, R.T.; Nasrollahi, K.; Khan, F.S.; Moeslund, T.B. Self-supervised predictive convolutional attentive block for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 13576–13586. [Google Scholar]
- Kwon, H. Adversarial image perturbations with distortions weighted by color on deep neural networks. Multimed. Tools Appl. 2023, 82, 13779–13795. [Google Scholar] [CrossRef]
- Chen, B.; Wang, X.; Bao, Q.; Jia, B.; Li, X.; Wang, Y. An unsafe behavior detection method based on improved YOLO framework. Electronics 2022, 11, 1912. [Google Scholar] [CrossRef]
- Liu, B.; Yu, C.; Chen, B.; Zhao, Y. YOLO-GP: A Multi-Scale Dangerous Behavior Detection Model Based on YOLOv8. Symmetry 2024, 16, 730. [Google Scholar] [CrossRef]
- Ravanbakhsh, M.; Nabi, M.; Sangineto, E.; Marcenaro, L.; Regazzoni, C.; Sebe, N. Abnormal event detection in videos using generative adversarial nets. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1577–1581. [Google Scholar]
- Lv, H.; Chen, C.; Cui, Z.; Xu, C.; Li, Y.; Yang, J. Learning normal dynamics in videos with meta prototype network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15425–15434. [Google Scholar]
- Yajing, L.; Zhongjian, D. Abnormal behavior detection in crowd scene using YOLO and Conv-AE. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; pp. 1720–1725. [Google Scholar]
- Dong, F.; Zhang, Y.; Nie, X. Dual discriminator generative adversarial network for video anomaly detection. IEEE Access 2020, 8, 88170–88176. [Google Scholar] [CrossRef]
- Lee, S.; Kim, H.G.; Ro, Y.M. BMAN: Bidirectional multi-scale aggregation networks for abnormal event detection. IEEE Trans. Image Process. 2019, 29, 2395–2408. [Google Scholar] [CrossRef]
- Ullah, W.; Hussain, T.; Ullah, F.U.M.; Lee, M.Y.; Baik, S.W. TransCNN: Hybrid CNN and transformer mechanism for surveillance anomaly detection. Eng. Appl. Artif. Intell. 2023, 123, 106173. [Google Scholar] [CrossRef]
- Pang, G.; Yan, C.; Shen, C.; Hengel, A.V.D.; Bai, X. Self-trained deep ordinal regression for end-to-end video anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12173–12182. [Google Scholar]
- Hao, Y.; Tang, Z.; Alzahrani, B.; Alotaibi, R.; Alharthi, R.; Zhao, M.; Mahmood, A. An end-to-end human abnormal behavior recognition framework for crowds with mentally disordered individuals. IEEE J. Biomed. Health Inform. 2021, 26, 3618–3625. [Google Scholar] [CrossRef]
- Chen, S.; Guo, W. Auto-encoders in deep learning—A review with new perspectives. Mathematics 2023, 11, 1777. [Google Scholar] [CrossRef]
- Gong, D.; Liu, L.; Le, V.; Saha, B.; Mansour, M.R.; Venkatesh, S.; Hengel, A.V.D. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1705–1714. [Google Scholar]
- Luo, W.; Liu, W.; Lian, D.; Gao, S. Future frame prediction network for video anomaly detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 7505–7520. [Google Scholar] [CrossRef]
- Li, S.; Fang, J.; Xu, H.; Xue, J. Video frame prediction by deep multi-branch mask network. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 1283–1295. [Google Scholar] [CrossRef]
- Wang, X.; Che, Z.; Jiang, B.; Xiao, N.; Yang, K.; Tang, J.; Qi, Q. Robust unsupervised video anomaly detection by multipath frame prediction. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 2301–2312. [Google Scholar] [CrossRef] [PubMed]
- Li, C.; Li, H.; Zhang, G. Future frame prediction based on generative assistant discriminative network for anomaly detection. Appl. Intell. 2023, 53, 542–559. [Google Scholar] [CrossRef]
- Straka, Z.; Svoboda, T.; Hoffmann, M. PreCNet: Next-frame video prediction based on predictive coding. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–15. [Google Scholar] [CrossRef] [PubMed]
- Hussain, M. YOLOv1 to v8: Unveiling Each Variant—A Comprehensive Review of YOLO. IEEE Access 2024, 12, 42816–42833. [Google Scholar] [CrossRef]
- Li, H.; Li, J.; Wei, H.; Liu, Z.; Zhan, Z.; Ren, Q. Slim-neck by GSConv: A better design paradigm of detector architectures for autonomous vehicles. arXiv 2022, arXiv:2206.02424. [Google Scholar]
- Yang, L.; Zhang, R.Y.; Li, L.; Xie, X. SimAM: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
- Cheoi, K.J. Temporal saliency-based suspicious behavior pattern detection. Appl. Sci. 2020, 10, 1020. [Google Scholar] [CrossRef]
- Smoliński, A.; Forczmański, P.; Nowosielski, A. Processing and Integration of Multimodal Image Data Supporting the Detection of Behaviors Related to Reduced Concentration Level of Motor Vehicle Users. Electronics 2024, 13, 2457. [Google Scholar] [CrossRef]
- Xie, B.; Guo, H.; Zheng, G. Mining Abnormal Patterns in Moving Target Trajectories Based on Multi-Attribute Classification. Mathematics 2024, 12, 1924. [Google Scholar] [CrossRef]
- Lei, J.; Sun, W.; Fang, Y.; Ye, N.; Yang, S.; Wu, J. A Model for Detecting Abnormal Elevator Passenger Behavior Based on Video Classification. Electronics 2024, 13, 2472. [Google Scholar] [CrossRef]
- Xie, Y.; Zhang, S.; Liu, Y. Abnormal Behavior Recognition in Classroom Pose Estimation of College Students Based on Spatiotemporal Representation Learning. Trait. Signal 2021, 38, 89–95. [Google Scholar] [CrossRef]
- Banerjee, S.; Ashwin, T.S.; Guddeti, R.M.R. Multimodal behavior analysis in computer-enabled laboratories using nonverbal cues. Signal Image Video Process. 2020, 14, 1617–1624. [Google Scholar] [CrossRef]
- Guan, Y.; Hu, W.; Hu, X. Abnormal behavior recognition using 3D-CNN combined with LSTM. Multimed. Tools Appl. 2021, 80, 18787–18801. [Google Scholar] [CrossRef]
- Rashmi, M.; Ashwin, T.S.; Guddeti, R.M.R. Surveillance video analysis for student action recognition and localization inside computer laboratories of a smart campus. Multimed. Tools Appl. 2021, 80, 2907–2929. [Google Scholar] [CrossRef]
- Lentzas, A.; Vrakas, D. Non-intrusive human activity recognition and abnormal behavior detection on elderly people: A review. Artif. Intell. Rev. 2020, 53, 1975–2021. [Google Scholar] [CrossRef]
- Lina, W.; Ding, J. Behavior detection method of OpenPose combined with Yolo network. In Proceedings of the 2020 International Conference on Communications, Kuala Lumpur, Malaysia, 3–5 July 2020; pp. 326–330. [Google Scholar]
- Ganagavalli, K.; Santhi, V. YOLO-based anomaly activity detection system for human behavior analysis and crime mitigation. Signal Image Video Process. 2024, 18, 417–427. [Google Scholar] [CrossRef]
- Zhou, T.; Zheng, L.; Peng, Y.; Jiang, R. A survey of research on crowd abnormal behavior detection algorithm based on YOLO network. In Proceedings of the 2022 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 14–16 January 2022; pp. 783–786. [Google Scholar]
- Maity, M.; Banerjee, S.; Chaudhuri, S.S. Faster R-CNN and YOLO based vehicle detection: A survey. In Proceedings of the 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 8–10 April 2021; pp. 1442–1447. [Google Scholar]
- Mansour, R.F.; Escorcia-Gutierrez, J.; Gamarra, M.; Villanueva, J.A.; Leal, N. Intelligent video anomaly detection and classification using faster RCNN with deep reinforcement learning model. Image Vis. Comput. 2021, 112, 104229. [Google Scholar] [CrossRef]
- Su, H.; Ying, H.; Zhu, G.; Zhang, C. Behavior Identification based on Improved Two-Stream Convolutional Networks and Faster RCNN. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; pp. 1771–1776. [Google Scholar]
- Chen, N.; Man, Y.; Sun, Y. Abnormal cockpit pilot driving behavior detection using YOLOv4 fused attention mechanism. Electronics 2022, 11, 2538. [Google Scholar] [CrossRef]
- Chen, H.; Zhou, G.; Jiang, H. Student behavior detection in the classroom based on improved YOLOv8. Sensors 2023, 23, 8385. [Google Scholar] [CrossRef]
- Chang, J.; Zhang, G.; Chen, W.; Yuan, D.; Wang, Y. Gas station unsafe behavior detection based on YOLO-V3 algorithm. China Saf. Sci. J. 2023, 33, 31–37. [Google Scholar]
- Benjumea, A.; Teeti, I.; Cuzzolin, F.; Bradley, A. YOLO-Z: Improving small object detection in YOLOv5 for autonomous vehicles. arXiv 2021, arXiv:2112.11798. [Google Scholar]
- Xiao, Y.; Wang, Y.; Li, W.; Sun, M.; Shen, X.; Luo, Z. Monitoring the Abnormal Human Behaviors in Substations based on Probabilistic Behaviours Prediction and YOLO-V5. In Proceedings of the 2022 7th Asia Conference on Power and Electrical Engineering (ACPEE), Hangzhou, China, 15–17 April 2022; pp. 943–948. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
- Wang, H.; Jin, Y.; Ke, H.; Zhang, X. DDH-YOLOv5: Improved YOLOv5 based on Double IoU-aware Decoupled Head for object detection. J. Real-Time Image Process. 2022, 19, 1023–1033. [Google Scholar] [CrossRef]
- Rodrigues, R.; Bhargava, N.; Velmurugan, R.; Chaudhuri, S. Multi-timescale trajectory prediction for abnormal human activity detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 2626–2634. [Google Scholar]
- Gennari, M.; Fawcett, R.; Prisacariu, V.A. DSConv: Efficient Convolution Operator. arXiv 2019, arXiv:1901.01928. [Google Scholar]
- Guo, J.; Teodorescu, R.; Agrawal, G. Fused DSConv: Optimizing sparse CNN inference for execution on edge devices. In Proceedings of the 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing (CCGrid), Melbourne, Australia, 10–13 May 2021; pp. 545–554. [Google Scholar]
- Alalwan, N.; Abozeid, A.; ElHabshy, A.A.; Alzahrani, A. Efficient 3D deep learning model for medical image semantic segmentation. Alex. Eng. J. 2021, 60, 1231–1239. [Google Scholar] [CrossRef]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar]
- Liu, W.; Quijano, K.; Crawford, M.M. YOLOv5-Tassel: Detecting tassels in RGB UAV imagery with improved YOLOv5 based on transfer learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8085–8094. [Google Scholar] [CrossRef]
- Zhao, H.; Zhang, H.; Zhao, Y. YOLOv7-Sea: Object detection of maritime UAV images based on improved YOLOv7. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 233–238. [Google Scholar]
- Jin, X.; Xie, Y.; Wei, X.S.; Zhao, B.R.; Chen, Z.M.; Tan, X. Delving deep into spatial pooling for squeeze-and-excitation networks. Pattern Recognit. 2022, 121, 108159. [Google Scholar] [CrossRef]
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. Adv. Neural Inf. Process. Syst. 2015, 28, 2017–2025. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Liu, Y.; Shao, Z.; Hoffmann, N. Global attention mechanism: Retain information to enhance channel-spatial interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar]
- FSMVU. Street View Dataset. 2023. Available online: https://universe.roboflow.com/fsmvu/street-view-gdogo (accessed on 5 September 2023).
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
| Dataset | Abnormal Behavior | Number of Boxes |
| --- | --- | --- |
| IITB-Corridor | Bag Exchange | 209 |
| | Cycling | 577 |
| | Suspicious Object | 2255 |
| | Running | 2301 |
| | Fighting | 2072 |
| | Hiding | 396 |
| | Playing With Ball | 2058 |
| | Protest | 5575 |
Methods | Bag | Cyc | Sus | Run | Fig | Hid | Pla | Pro | mAP50 | GFLOPs | Parameter | FPS |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Faster-RCNN | 76.5% | 87.3% | 96.0% | 70.2% | 90.4% | 85.2% | 66.9% | 87.5% | 82.5% | - | - | - |
YOLOv3 | 81.0% | 78.6% | 97.7% | 71.3% | 94.6% | 98.1% | 81.7% | 90.5% | 86.7% | 18.9 | 12.1M | 467 |
YOLOv5s | 70.4% | 82.0% | 94.8% | 55.4% | 89.0% | 95.7% | 66.8% | 89.2% | 80.4% | 23.8 | 8.6M | 381 |
YOLOv5n | 48.6% | 76.0% | 94.1% | 55.8% | 88.8% | 92.4% | 66.8% | 88.5% | 76.4% | 7.1 | 2.3M | 503 |
YOLOv6s | 73.7% | 85.3% | 96.9% | 67.2% | 93.6% | 95.0% | 72.8% | 91.4% | 84.5% | 44.0 | 15.5M | 352 |
YOLOv6n | 68.8% | 81.6% | 96.4% | 59.7% | 91.7% | 94.2% | 67.1% | 89.6% | 81.2% | 11.8 | 4.2M | 509 |
YOLOv8s | 76.7% | 88.1% | 96.6% | 62.0% | 91.1% | 96.5% | 72.4% | 90.9% | 84.3% | 28.5 | 11.1M | 380 |
YOLOv8n | 75.2% | 84.4% | 96.9% | 79.6% | 94.8% | 94.8% | 73.7% | 91.5% | 86.4% | 10.7 | 2.49M | 347 |
YOLOv9t | 64.9% | 75.8% | 94.5% | 70.1% | 93.3% | 93.7% | 65.7% | 89.6% | 81.0% | 8.7 | 2.86M | 504 |
YOLOv10n | 69.3% | 77.0% | 95.1% | 77.8% | 89.1% | 92.9% | 69.2% | 90.5% | 82.6% | 8.2 | 2.57M | 689 |
Ours | 76.7% | 92.5% | 96.7% | 84.6% | 95.6% | 95.7% | 80.5% | 92.2% | 89.3% | 11.4 | 2.56M | 436 |
Methods | Bicycle | Bus | Car | Motor | Person | mAP50 | GFLOPs | Parameter | FPS |
---|---|---|---|---|---|---|---|---|---|
Faster-RCNN | 86.7% | 89.8% | 94.7% | 80.9% | 79.1% | 86.3% | - | - | - |
YOLOv3 | 92.1% | 94.2% | 94.3% | 80.9% | 68.2% | 85.9% | 18.9 | 12.1M | 95 |
YOLOv5s | 93.4% | 95.7% | 96.6% | 89.8% | 86.6% | 92.3% | 23.8 | 8.6M | 806 |
YOLOv5n | 91.9% | 96.1% | 96.6% | 89.3% | 84.9% | 91.8% | 7.1 | 2.3M | 111 |
YOLOv6s | 93.4% | 95.7% | 96.3% | 90.3% | 85.2% | 92.2% | 44.0 | 15.5M | 63 |
YOLOv6n | 88.2% | 91.3% | 95.7% | 86.8% | 76.6% | 87.7% | 11.8 | 4.2M | 105 |
YOLOv8s | 94.7% | 96.8% | 96.7% | 91.1% | 87.3% | 93.3% | 28.5 | 11.1M | 82 |
YOLOv8n | 92.5% | 95.7% | 96.5% | 89.1% | 85.5% | 91.9% | 8.1 | 2.49M | 65 |
YOLOv9t | 89.3% | 94.2% | 95.9% | 87.3% | 76.1% | 88.5% | 8.7 | 2.86M | 92 |
YOLOv10n | 91.2% | 94.9% | 96.3% | 87.5% | 83.1% | 90.6% | 8.2 | 2.57M | 139 |
Ours | 93.5% | 95.4% | 97.0% | 89.9% | 88.1% | 92.8% | 11.4 | 2.56M | 122 |
Basic Convolutional Method | mAP50 | mAP50-95 | GFLOPs | Parameter | FPS |
---|---|---|---|---|---|
SC | 86.4% | 57.4% | 8.1 | 2.86M | 504 |
DSC | 85.6% | 55.8% | 7.3 | 2.5M | 505 |
GSConv | 87.5% | 58.1% | 7.7 | 2.68M | 552 |
SimAM | GSConv | Small | Precision | Recall | mAP50 | mAP50-95 | GFLOPs | Parameter | FPS |
---|---|---|---|---|---|---|---|---|---|
- | - | - | 83.8% | 79.3% | 86.4% | 57.4% | 8.1 | 2.86M | 504 |
✓ | - | - | 83.8% | 79.5% | 86.7% | 57.4% | 8.1 | 2.86M | 507 |
- | ✓ | - | 84.4% | 78.1% | 87.5% | 58.1% | 7.7 | 2.68M | 552 |
✓ | ✓ | - | 85.0% | 79.0% | 88.0% | 58.2% | 7.7 | 2.68M | 504 |
- | - | ✓ | 85.1% | 79.4% | 86.8% | 57.7% | 11.8 | 2.75M | 420 |
✓ | - | ✓ | 85.0% | 81.1% | 87.4% | 58.2% | 11.8 | 2.75M | 405 |
- | ✓ | ✓ | 84.1% | 83.0% | 88.9% | 59.2% | 11.4 | 2.56M | 422 |
✓ | ✓ | ✓ | 83.9% | 81.5% | 89.3% | 60.6% | 11.4 | 2.56M | 436 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hua, C.; Luo, K.; Wu, Y.; Shi, R. YOLO-ABD: A Multi-Scale Detection Model for Pedestrian Anomaly Behavior Detection. Symmetry 2024, 16, 1003. https://doi.org/10.3390/sym16081003