Ground Pedestrian and Vehicle Detections Using Imaging Environment Perception Mechanisms and Deep Learning Networks
Abstract
1. Introduction
- (1) A ground pedestrian and vehicle detection method is developed that combines luminance evaluation-based environment perception with an improved one-stage lightweight deep learning network.
- (2) Mean subtracted contrast normalized (MSCN) values, information entropy indices, and SVM classification are combined to implement imaging luminance estimation (a minimal sketch follows this list).
- (3) DCBlock, OPBlock, and Denseblock are designed or adopted in Yolov3-tiny to realize robust pedestrian and vehicle detection under complex imaging luminance.
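Highlight (2) can be made concrete with a short Python sketch. The MSCN computation below follows the BRISQUE formulation of Mittal et al. [37], which this paper cites; the 7 × 7 Gaussian window, sigma = 7/6, and the stabilizing constant C = 1 are the BRISQUE defaults and are assumed here, not taken from the paper itself.

```python
import cv2
import numpy as np

def mscn_coefficients(gray, ksize=7, sigma=7 / 6, c=1.0):
    """MSCN coefficients: I_hat = (I - mu) / (sigma_local + C),
    with mu and sigma_local taken over a local Gaussian window [37]."""
    gray = gray.astype(np.float64)
    mu = cv2.GaussianBlur(gray, (ksize, ksize), sigma)             # local mean
    var = cv2.GaussianBlur(gray * gray, (ksize, ksize), sigma) - mu * mu
    sigma_local = np.sqrt(np.abs(var))                             # local std
    return (gray - mu) / (sigma_local + c)
```

Statistics of these coefficients (for example their variance) then serve as luminance features alongside the entropy indices fed to the SVM.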
2. Proposed Algorithm
2.1. The Color Space Transform
2.2. The Environment Perception Computation
2.3. The Ground Pedestrian and Vehicle Detection
3. Experiments and Results
3.1. Experiment Data Collection and Organization
3.2. Evaluation of Environment Perception Method
3.3. Evaluation of Ground Pedestrian and Vehicle Detection Methods
3.4. Discussion
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Zhang, F.; Xu, Z.; Chen, W.; Zhang, Z.; Zhong, H.; Luan, J.; Li, C. An image compression method for video surveillance system in underground mines based on residual networks and discrete wavelet transform. Electronics 2019, 8, 1559.
2. Nawaratne, R.; Kahawala, S.; Nguyen, S.; De Silva, D. A generative latent space approach for real-time surveillance in smart cities. IEEE Trans. Ind. Inform. 2021, 17, 4872–4881.
3. Li, X.; Yu, Q.; Alzahrani, B.; Barnawi, A.; Alhindi, A.; Alghazzawi, D.; Miao, Y. Data fusion for intelligent crowd monitoring and management systems: A survey. IEEE Access 2021, 9, 47069–47083.
4. Kim, H.; Kim, S.; Yu, K. Automatic extraction of indoor spatial information from floor plan image: A patch-based deep learning methodology application on large-scale complex buildings. ISPRS Int. J. Geo Inf. 2021, 10, 828.
5. Liu, H.; Yan, B.; Wang, W.; Li, X.; Guo, Z. Manhole cover detection from natural scene based imaging environment perception. KSII Trans. Internet Inf. Syst. 2019, 13, 5059–5111.
6. Honkavaara, E.; Eskelinen, M.A.; Polonen, I.; Saari, H.; Ojanen, H.; Mannila, R.; Holmlund, C.; Hakala, T.; Litkey, P.; Rosnell, T.; et al. Remote sensing of 3-D geometry and surface moisture of a peat production area using hyperspectral frame cameras in visible to short-wave infrared spectral ranges onboard a small unmanned airborne vehicle (UAV). IEEE Trans. Geosci. Remote Sens. 2016, 54, 5440–5454.
7. Gao, T.; Li, K.; Chen, T.; Liu, M.; Mei, S.; Xing, K.; Li, Y. A novel UAV sensing image defogging method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2610–2625.
8. Khelifi, L.; Mignotte, M. Deep learning for change detection in remote sensing images: Comprehensive review and meta-analysis. IEEE Access 2020, 8, 126385–126400.
9. Ezeme, O.M.; Mahmoud, Q.H.; Azim, A.A. Design and development of AD-CGAN: Conditional generative adversarial networks for anomaly detection. IEEE Access 2020, 8, 177667–177681.
10. Azar, A.T.; Koubaa, A.; Mohamed, N.A.; Ibrahim, H.A.; Ibrahim, Z.F.; Kazim, M.; Ammar, A.; Benjdira, B.; Khamis, A.M.; Hameed, I.A.; et al. Drone deep reinforcement learning: A review. Electronics 2021, 10, 999.
11. Liu, C.; Wu, Y.; Liu, J.; Sun, Z. Improved YOLOv3 network for insulator detection in aerial images with diverse background interference. Electronics 2021, 10, 771.
12. Ammar, A.; Koubaa, A.; Ahmed, M.; Saad, A.; Benjdira, B. Vehicle detection from aerial images using deep learning: A comparative study. Electronics 2021, 10, 820.
13. Vasic, M.K.; Papic, V. Multimodel deep learning for person detection in aerial images. Electronics 2020, 9, 1459.
14. Shen, Z.; Liu, Z.; Li, J.; Jiang, Y.-G.; Chen, Y.; Xue, X. Object detection from scratch with deep supervision. IEEE Trans. Pattern Anal. 2020, 42, 398–412.
15. Haroun, F.M.E.; Deros, S.N.M.; Din, N.M. Detection and monitoring of power line corridor from satellite imagery using RetinaNet and K-mean clustering. IEEE Access 2021, 9, 116720–116730.
16. Rao, Y.; Yu, G.; Xue, J.; Pu, J.; Gou, J.; Wang, Q.; Wang, Q. Light-net: Lightweight object detector. IEEE Access 2020, 8, 201700–201712.
17. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV'19), Seoul, Korea, 27 October–2 November 2019; pp. 6568–6577.
18. Mekhalfi, M.L.; Nicolo, C.; Bazi, Y.; Rahhal, M.M.A.; Alsharif, N.A.; Maghayreh, E.A. Contrasting YOLOv5, transformer, and EfficientDet detectors for crop circle detection in desert. IEEE Geosci. Remote Sens. Lett. 2022, 19, 205.
19. Gao, Y.; Hou, R.; Gao, Q.; Hou, Y. A fast and accurate few-shot detector for objects with fewer pixels in drone image. Electronics 2021, 10, 783.
20. Li, L.; Yang, Z.; Jiao, L.; Liu, F.; Liu, X. High-resolution SAR change detection based on ROI and SPP net. IEEE Access 2019, 7, 177009–177022.
21. Li, J.; Liang, X.; Shen, S.; Xu, T.; Feng, J.; Yan, S. Scale-aware fast R-CNN for pedestrian detection. IEEE Trans. Multimed. 2018, 20, 985–996.
22. Dike, H.U.; Zhou, Y. A robust quadruplet and faster region-based CNN for UAV video-based multiple object tracking in crowded environment. Electronics 2021, 10, 795.
23. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. 2020, 42, 386–397.
24. Cai, Z.; Vasconcelos, N. Cascade R-CNN: High quality object detection and instance segmentation. IEEE Trans. Pattern Anal. 2021, 43, 1483–1498.
25. Li, Y.; Chen, Y.; Wang, N.; Zhang, Z. Scale-aware trident networks for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV'19), Seoul, Korea, 27 October–2 November 2019; pp. 6053–6062.
26. Yu, W.; Lv, P. An end-to-end intelligent fault diagnosis application for rolling bearing based on MobileNet. IEEE Access 2021, 9, 41925–41933.
27. Qiang, B.; Zhai, Y.; Zhou, M.; Yang, X.; Peng, B.; Wang, Y.; Pang, Y. SqueezeNet and fusion network-based accurate fast fully convolutional network for hand detection and gesture recognition. IEEE Access 2021, 9, 77661–77674.
28. Gomes, R.; Rozario, P.; Adhikari, N. Deep learning optimization in remote sensing image segmentation using dilated convolutions and ShuffleNet. In Proceedings of the IEEE International Conference on Electro Information Technology (EIT'21), Mt. Pleasant, MI, USA, 14–15 May 2021; pp. 244–249.
29. Thoonen, G.; Mahmood, Z.; Peeters, S.; Scheunders, P. Multisource classification of color and hyperspectral images using color attribute profiles and composite decision fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 510–521.
30. Zheng, L.; Shen, L.; Chen, J.; An, P.; Luo, J. No-reference quality assessment for screen content images based on hybrid region features fusion. IEEE Trans. Multimed. 2019, 21, 2057–2070.
31. Xu, L.; Chen, Q.; Wang, Q. Application of color entropy to image quality assessment. J. Imag. Grap. 2015, 20, 1583–1592.
32. Lin, S.-L. Application of machine learning to a medium Gaussian support vector machine in the diagnosis of motor bearing faults. Electronics 2021, 10, 2266.
33. Kumar, S.; Yadav, D.; Gupta, H.; Verma, O.P.; Ansari, I.A.; Ahn, C.W. A novel YOLOv3 algorithm-based deep learning approach for waste segregation: Towards smart waste management. Electronics 2021, 10, 14.
34. Xiao, D.; Shan, F.; Li, Z.; Le, Z.L.; Liu, X.; Li, X. A target detection model based on improved tiny-Yolov3 under the environment of mining truck. IEEE Access 2019, 7, 123757–123764.
35. Gong, J.; Zhao, J.; Li, F.; Zhang, H. Vehicle detection in thermal images with an improved Yolov3-tiny. In Proceedings of the IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS'20), Shenyang, China, 28–30 July 2020; pp. 253–256.
36. Agar, A.U.; Allebach, J.P. Model-based color halftoning using direct binary search. IEEE Trans. Image Process. 2005, 14, 1945–1959.
37. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
38. Xie, J.; Sun, H.; Jiao, Y.; Lu, B. Inertial vertical speed warning model in an approaching phase. In Proceedings of the IEEE International Conference on Civil Aviation Safety and Information Technology (ICCASIT'20), Weihai, China, 14–16 October 2020; pp. 940–943.
39. Min, X.; Ma, K.; Gu, K.; Zhai, G.; Wang, Z.; Lin, W. Unified blind quality assessment of compressed natural, graphic, and screen content images. IEEE Trans. Image Process. 2017, 26, 5462–5474.
40. Bakshi, S.; Rajan, S. Fall event detection system using inception-Densenet inspired sparse Siamese network. IEEE Sens. Lett. 2021, 5, 7002804.
41. Zhang, M.; Chu, R.; Dong, C.; Wei, J.; Lu, W.; Xiong, N. Residual learning diagnosis detection: An advanced residual learning diagnosis detection system for COVID-19 in industrial internet of things. IEEE Trans. Ind. Inform. 2021, 17, 6510–6518.
42. Li, Q.; Garg, S.; Nie, J.; Li, X. A highly efficient vehicle taillight detection approach based on deep learning. IEEE Trans. Intell. Transp. 2021, 22, 4716–4726.
43. Xu, Z.; Jia, R.; Liu, Y.; Zhao, C.; Sun, H. Fast method of detecting tomatoes in a complex scene for picking robots. IEEE Access 2020, 8, 55289–55299.
44. Cheng, R.; He, X.; Zheng, Z.; Wang, Z. Multi-scale safety helmet detection based on SAS-YOLOv3-tiny. Appl. Sci. 2021, 11, 3652.
45. Adiono, T.; Putra, A.; Sutisna, N.; Syafalni, I.; Mulyawan, R. Low latency Yolov3-tiny accelerator for low-cost FPGA using general matrix multiplication principle. IEEE Access 2021, 9, 141890–141913.
46. Yao, K.; Ma, Z.; Lei, J.; Shen, S.; Zhu, Y. Unsupervised representation learning method for UAV's scene perception. In Proceedings of the IEEE 9th International Conference on Software Engineering and Service Science (ICSESS'18), Beijing, China, 23–25 November 2018; pp. 323–327.
47. Back, S.; Lee, S.; Shin, S.; Yu, Y.; Yuk, T.; Jong, S.; Ryu, S.; Lee, K. Robust skin disease classification by distilling deep neural network ensemble for the mobile diagnosis of herpes zoster. IEEE Access 2021, 9, 20156–20169.
48. Mulim, W.; Revikasha, M.F.; Rivandi; Hanafiah, N. Waste classification using EfficientNet-B0. In Proceedings of the 1st International Conference on Computer Science and Artificial Intelligence (ICCSAI'21), Jakarta, Indonesia, 28 October 2021; pp. 253–257.
49. Gao, X.; Han, S.; Luo, C. A detection and verification model based on SSD and encoder-decoder network for scene text detection. IEEE Access 2019, 7, 71299–71310.
50. Giyenko, A.; Cho, Y.I. Intelligent UAV in smart cities using IoT. In Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS'16), Gyeongju, Korea, 16–19 October 2016; pp. 207–210.
51. Liu, H.; Lu, H.; Zhang, Y. Image enhancement for outdoor long-range surveillance using IQ-learning multiscale Retinex. IET Image Process. 2017, 11, 786–795.
52. Liu, H.; Lv, M.; Gao, Y.; Li, J.; Lan, J.; Gao, W. Information processing system design for multi-rotor UAV-based earthquake rescue. In Proceedings of the International Conference on Man-Machine-Environment System Engineering (ICMMESE'20), Zhengzhou, China, 19–21 December 2020; pp. 320–321.
53. Liu, H.; Wang, W.; He, Z.; Tong, Q.; Wang, X.; Yu, W. Blind image quality evaluation metrics design for UAV photographic application. In Proceedings of the 5th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems (CYBER'15), Shenyang, China, 8–12 June 2015; pp. 293–297.
54. Passalis, N.; Tzelepi, M.; Tefas, A. Probabilistic knowledge transfer for lightweight deep representation learning. IEEE Trans. Neural Netw. Learn. 2021, 32, 2030–2039.
55. Shao, L.; Zhu, F.; Li, X. Transfer learning for visual categorization: A survey. IEEE Trans. Neural Netw. Learn. 2015, 26, 1019–1034.
56. Haneche, H.; Ouahabi, A.; Boudraa, B. New mobile communication system design for Rayleigh environments based on compressed sensing-source coding. IET Commun. 2019, 13, 2375–2385.
57. Mahdaoui, A.E.; Ouahabi, A.; Moulay, M.S. Image denoising using a compressive sensing approach based on regularization constraints. Sensors 2022, 22, 2199.
58. Mimouna, A.; Alouani, I.; Khalifa, A.B.; Hillali, Y.E.; Taleb-Ahmed, A.; Menhaj, A.; Ouahabi, A.; Amara, N.E.B. OLIMP: A heterogeneous multimodal dataset for advanced environment perception. Electronics 2020, 9, 560.
59. Galvao, L.G.; Abbod, M.; Kalganova, T.; Palade, V.; Huda, M.N. Pedestrian and vehicle detection in autonomous vehicle perception systems—A review. Sensors 2021, 21, 7267.
60. Roman, J.C.M.; Noguera, J.L.V.; Legal-Ayala, H.; Pinto-Roa, D.P.; Gomez-Guerrero, S.; Torres, M.G. Entropy and contrast enhancement of infrared thermal images using the multiscale top-hat transform. Entropy 2019, 21, 244.
Network Type | Representative Network
---|---
One-stage method | You only look once (Yolo) series [11,12], single shot multibox detector (SSD) series [13,14], RetinaNet [15], CornerNet [16], CenterNet [17], and EfficientDet [18].
Two-stage approach | Region-based convolutional neural network (R-CNN) [19], spatial pyramid pooling network (SPP-Net) [20], Fast R-CNN [21], Faster R-CNN [22], Mask R-CNN [23], Cascade R-CNN [24], and TridentNet [25].
Imaging Luminance Degree | Excellent | Ordinary | Severe
---|---|---|---
Environment light intensity | ~≥1500 lx | ~≥300 lx and ~<1500 lx | ~≥50 lx and ~<300 lx
Information entropy | HL > 4.8, 2.8 > Ha > 1.5, and 2.5 > Hb ≥ 1.0 | 5.0 > HL ≥ 3.7, 2.9 > Ha ≥ 1.7, and 2.2 > Hb ≥ 1.3 | 4.8 > HL ≥ 2.8, 3.6 > Ha ≥ 1.0, and 2.0 > Hb ≥ 1.2
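HL, Ha, and Hb in the table above are the information entropy indices of the luminance and chrominance channels; given the color space transform of Section 2.1, a plausible reading is the Shannon entropy of each CIELAB channel. A minimal sketch under that assumption follows (the paper's exact bit depth and histogram binning are not stated here):

```python
import cv2
import numpy as np

def channel_entropy(channel, bins=256):
    """Shannon entropy (in bits) of one 8-bit image channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins to avoid log2(0)
    return float(-np.sum(p * np.log2(p)))

def lab_entropies(bgr):
    """(HL, Ha, Hb) of an 8-bit BGR frame after conversion to CIELAB."""
    L, a, b = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB))
    return channel_entropy(L), channel_entropy(a), channel_entropy(b)
```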
Kernel Function | Polynomial Function | Linear Function | Sigmoid Function | Radial Basis Function
---|---|---|---|---
Classification Accuracy | 89.88% | 89.91% | 86.83% | 94.18%
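The radial basis function (RBF) kernel scores highest in this comparison. The scikit-learn sketch below shows how such a kernel comparison could be run; the features and labels are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for rows of [HL, Ha, Hb] features and luminance labels;
# real inputs would come from lab_entropies() plus MSCN statistics.
rng = np.random.default_rng(0)
X = rng.uniform([2.8, 1.0, 1.0], [5.1, 3.7, 2.5], size=(300, 3))
y = rng.choice(["Excellent", "Ordinary", "Severe"], size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("poly", "linear", "sigmoid", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print(f"{kernel}: accuracy = {clf.score(X_te, y_te):.4f}")
```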
Image Name | HL | Ha | Hb | SVM Output | Subjective Imaging Luminance Evaluation Result |
---|---|---|---|---|---|
Figure 5(a1) | 5.0249 | 2.3291 | 1.6198 | Excellent | Excellent |
Figure 5(a2) | 4.8127 | 1.9634 | 1.9739 | Excellent | Excellent |
Figure 5(a3) | 4.9405 | 2.7008 | 2.4635 | Excellent | Excellent |
Figure 5(a4) | 4.9211 | 2.4632 | 1.9028 | Excellent | Excellent |
Figure 5(a5) | 4.8957 | 2.0603 | 1.7359 | Excellent | Excellent |
Figure 5(b1) | 3.8071 | 1.9116 | 1.5406 | Ordinary | Ordinary |
Figure 5(b2) | 4.1549 | 2.3701 | 1.6831 | Ordinary | Ordinary |
Figure 5(b3) | 3.7945 | 1.7859 | 1.3800 | Ordinary | Severe
Figure 5(b4) | 4.2191 | 3.3829 | 2.1078 | Ordinary | Ordinary |
Figure 5(b5) | 4.4361 | 2.3164 | 1.6128 | Ordinary | Ordinary |
Figure 5(c1) | 4.6073 | 3.6122 | 1.6468 | Severe | Severe |
Figure 5(c2) | 2.8561 | 1.6391 | 1.3944 | Severe | Severe |
Figure 5(c3) | 3.0664 | 1.1879 | 1.4347 | Severe | Severe |
Figure 5(c4) | 4.0096 | 1.7767 | 2.0469 | Severe | Severe |
Figure 5(c5) | 3.6726 | 1.4922 | 1.5965 | Severe | Severe |
MAP(%)/Recall(%) | Yolov3-tiny-PDB1 | Yolov3-tiny-PDB2 | Yolov3-tiny-PDB3 | Yolov3-tiny-VDB1 | Yolov3-tiny-VDB2 | Yolov3-tiny-VDB3 | Yolov3-tiny | Yolov4+Mobilenetv3 | Efficientnet-B0 | RetinaNet | CenterNet | SSD
---|---|---|---|---|---|---|---|---|---|---|---|---
Mixture dataset1 a | 81.10/68.95 | 82.13/71.39 | 77.64/55.73 | - | - | - | 77.94/79.27 | 76.56/74.97 | 68.52/57.71 | 42.03/20.04 | 75.57/49.32 | 47.36/11.49
Mixture dataset2 | - | - | - | 87.76/74.33 | 91.89/84.73 | 90.33/76.58 | 91.54/88.24 | 89.88/77.90 | 86.64/72.85 | 86.14/71.07 | 86.51/46.57 | 88.98/60.37
Typical dataset1 | 85.36/74.69 | 82.50/73.86 | 78.17/73.31 | - | - | - | 73.21/71.26 | 78.67/76.06 | 76.01/51.84 | 39.68/18.50 | 81.80/36.28 | 49.38/8.95
Typical dataset2 | 80.09/72.58 | 84.75/82.21 | 77.04/54.97 | - | - | - | 70.43/70.08 | 75.52/76.90 | 66.02/43.15 | 38.96/18.14 | 82.65/44.74 | 46.07/13.24
Typical dataset3 | 77.76/66.39 | 76.59/71.18 | 79.15/67.32 | - | - | - | 68.31/67.49 | 77.09/71.83 | 64.78/40.57 | 36.99/15.84 | 78.12/56.91 | 33.07/5.08
Typical dataset4 | - | - | - | 94.29/76.83 | 90.21/90.03 | 90.77/79.62 | 92.47/91.41 | 93.34/91.98 | 83.39/75.40 | 84.07/68.37 | 91.96/79.24 | 91.16/61.51
Typical dataset5 | - | - | - | 90.46/78.14 | 94.26/91.25 | 92.93/86.96 | 92.14/92.81 | 92.61/90.05 | 82.61/72.68 | 83.99/69.23 | 89.67/75.86 | 90.53/62.07
Typical dataset6 | - | - | - | 87.76/74.33 | 92.57/92.07 | 94.48/90.79 | 91.66/92.17 | 92.33/89.46 | 80.77/67.71 | 82.87/67.24 | 89.59/66.78 | 90.33/61.41
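For orientation, MAP and recall figures such as those above are, in the usual protocol, derived by greedily matching confidence-ranked detections to ground-truth boxes by IoU. The single-class sketch below illustrates that computation at a 0.5 IoU threshold; it is a simplified stand-in, since the paper's exact evaluation settings are not reproduced here, and averaging per-class AP values gives MAP.

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-12)

def ap_and_recall(dets, gts, iou_thr=0.5):
    """dets: list of (score, box) for one class; gts: list of boxes.
    Returns (average precision, final recall)."""
    if not dets:
        return 0.0, 0.0
    dets = sorted(dets, key=lambda d: -d[0])      # highest confidence first
    taken = [False] * len(gts)
    tp = np.zeros(len(dets))
    for i, (_, box) in enumerate(dets):
        ious = [iou(box, g) for g in gts]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= iou_thr and not taken[j]:
            tp[i], taken[j] = 1.0, True           # true positive, GT consumed
    tp_cum = np.cumsum(tp)
    recall = tp_cum / max(len(gts), 1)
    precision = tp_cum / np.arange(1, len(dets) + 1)
    ap = float(np.trapz(precision, recall))       # area under the PR curve
    return ap, float(recall[-1])
```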
 | Yolov3-tiny-PDB1 | Yolov3-tiny-PDB2 | Yolov3-tiny-PDB3 | Yolov3-tiny-VDB1 | Yolov3-tiny-VDB2 | Yolov3-tiny-VDB3 | Yolov3-tiny | Yolov4+Mobilenetv3 | Efficientnet-B0 | RetinaNet | CenterNet | SSD
---|---|---|---|---|---|---|---|---|---|---|---|---
Processing time (ms/frame) | 4.154 | 5.380 | 6.798 | 5.251 | 5.416 | 7.448 | 3.874 | 14.259 | 100.274 | 65.239 | 22.359 | 52.632
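Per-frame latencies such as these are typically obtained by averaging wall-clock inference time over many frames after a warm-up phase. A small hypothetical harness follows; `model` and `frames` are placeholders, not artifacts of the paper:

```python
import time

def mean_frame_time_ms(model, frames, warmup=10):
    """Average per-frame inference latency in milliseconds.
    `model` is any callable detector; `frames` is a list of images."""
    for f in frames[:warmup]:          # warm-up runs are excluded
        model(f)
    t0 = time.perf_counter()
    for f in frames[warmup:]:
        model(f)
    n = max(len(frames) - warmup, 1)
    return 1000.0 * (time.perf_counter() - t0) / n
```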
Network Description | Dataset Description | MAP(%)/Recall(%)
---|---|---
Yolov3-tiny-VDB1 | Data of vehicle detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are 100%, 0%, and 0%. | 94.29/76.83
Yolov3-tiny-VDB1 without module 1. | | 87.86/72.36
Yolov3-tiny-VDB1 without modules 1 and 2. | | 86.68/69.83
Yolov3-tiny-VDB1 without modules 1, 2, and 3. | | 83.86/64.85
Yolov3-tiny-VDB1 | Data of vehicle detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are equal. | 87.76/74.33
Yolov3-tiny-VDB1 without module 1. | | 87.54/71.53
Yolov3-tiny-VDB1 without modules 1 and 2. | | 86.03/66.16
Yolov3-tiny-VDB1 without modules 1, 2, and 3. | | 84.19/70.55
Yolov3-tiny-PDB1 | Data of pedestrian detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are 100%, 0%, and 0%. | 85.36/74.69
Yolov3-tiny-PDB1 without module 1. | | 71.79/66.19
Yolov3-tiny-PDB1 without modules 1 and 2. | | 69.28/59.32
Yolov3-tiny-PDB1 | Data of pedestrian detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are equal. | 81.10/68.95
Yolov3-tiny-PDB1 without module 1. | | 69.48/59.67
Yolov3-tiny-PDB1 without modules 1 and 2. | | 66.13/54.20
Yolov3-tiny-VDB2 | Data of vehicle detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are 0%, 100%, and 0%. | 94.26/91.25
Yolov3-tiny-VDB2 without module 1. | | 91.91/83.72
Yolov3-tiny-VDB2 without modules 1 and 2. | | 91.27/83.02
Yolov3-tiny-VDB2 | Data of vehicle detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are equal. | 91.89/84.73
Yolov3-tiny-VDB2 without module 1. | | 90.69/86.96
Yolov3-tiny-VDB2 without modules 1 and 2. | | 87.20/72.58
Yolov3-tiny-PDB2 | Data of pedestrian detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are 0%, 100%, and 0%. | 84.75/82.21
Yolov3-tiny-PDB2 without module 1. | | 67.94/56.55
Yolov3-tiny-PDB2 without modules 1 and 2. | | 70.49/59.67
Yolov3-tiny-PDB2 | Data of pedestrian detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are equal. | 82.13/71.39
Yolov3-tiny-PDB2 without module 1. | | 65.31/55.89
Yolov3-tiny-PDB2 without modules 1 and 2. | | 66.81/54.01
Yolov3-tiny-VDB3 | Data of vehicle detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are 0%, 0%, and 100%. | 94.48/90.79
Yolov3-tiny-VDB3 without module 1. | | 89.78/75.57
Yolov3-tiny-VDB3 | Data of vehicle detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are equal. | 90.33/76.58
Yolov3-tiny-VDB3 without module 1. | | 82.24/61.49
Yolov3-tiny-PDB3 | Data of pedestrian detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are 0%, 0%, and 100%. | 79.15/67.32
Yolov3-tiny-PDB3 without module 1. | | 48.01/34.89
Yolov3-tiny-PDB3 without modules 1 and 2. | | 43.32/22.95
Yolov3-tiny-PDB3 without modules 1, 2, and 3. | | 37.91/26.69
Yolov3-tiny-PDB3 | Data of pedestrian detection experiment; the data proportions of excellent, ordinary, and severe luminance degrees are equal. | 77.64/55.73
Yolov3-tiny-PDB3 without module 1. | | 37.70/20.31
Yolov3-tiny-PDB3 without modules 1 and 2. | | 32.75/11.97
Yolov3-tiny-PDB3 without modules 1, 2, and 3. | | 30.01/12.61