Evaluation of Robust Spatial Pyramid Pooling Based on Convolutional Neural Network for Traffic Sign Recognition System
Abstract
1. Introduction
2. Materials and Methods
2.1. CNN for Object Detection
2.2. Spatial Pyramid Pooling (SPP)
2.3. Object Detection Architecture
2.3.1. Yolo V3 and Tiny Yolo V3
2.3.2. Densenet
2.3.3. Resnet 50
2.4. Methods
Algorithm 1 Yolo V3 SPP Recognition Process
1. Split the input image into (S × S) grids.
2. Create K bounding boxes for every grid, based on the estimated anchor boxes.
3. Extract the object features from the image using the convolutional neural network.
4. Predict the confidence score and the class probabilities for each bounding box.
5. Compare the highest confidence among the K bounding boxes of a grid with the threshold.
6. If the confidence exceeds the threshold, the bounding box contains an object; otherwise, it does not.
7. Select the class with the greatest predicted probability as the category of the detected object.
8. Apply non-maximum suppression (NMS), a local maximum search, to remove redundant boxes.
9. Present the object detection results.
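Steps 5–8 (confidence thresholding, class selection, and NMS) can be illustrated with a short, framework-independent sketch. The NumPy code below is a minimal illustration, not the authors' Darknet implementation; the helper names (`iou`, `postprocess`), the array shapes, and the default thresholds (0.5 confidence, 0.45 NMS IoU) are assumptions made for the example.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) form."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    areas_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + areas_b - inter + 1e-9)

def postprocess(boxes, objectness, class_probs, conf_thresh=0.5, nms_thresh=0.45):
    """Steps 5-8: threshold box confidences, pick the best class, and apply NMS."""
    scores = objectness * class_probs.max(axis=1)   # confidence of the best class per box
    classes = class_probs.argmax(axis=1)            # Step 7: most probable class
    keep = scores > conf_thresh                     # Steps 5-6: discard low-confidence boxes
    boxes, scores, classes = boxes[keep], scores[keep], classes[keep]

    selected = []
    order = scores.argsort()[::-1]                  # greedy NMS: visit boxes by descending score
    while order.size > 0:
        best = order[0]
        selected.append(best)
        rest = order[1:]
        overlap = iou(boxes[best], boxes[rest])
        order = rest[overlap < nms_thresh]          # Step 8: drop boxes that overlap the winner
    return boxes[selected], scores[selected], classes[selected]

# Toy usage with random network outputs for 100 candidate boxes and 4 classes (P1-P4).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 300, size=(100, 2))
boxes = np.hstack([xy, xy + rng.uniform(20, 80, size=(100, 2))])   # x1, y1, x2, y2
objectness = rng.uniform(size=100)
class_probs = rng.dirichlet(np.ones(4), size=100)
b, s, c = postprocess(boxes, objectness, class_probs)
print(len(b), "boxes kept after thresholding and NMS")
```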
3. Results
3.1. Dataset
3.2. Training Result
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
ID | Class | Name |
---|---|---|
0 | P1 | No entry |
1 | P2 | No stopping |
2 | P3 | No parking |
3 | P4 | Speed Limit |
Model | Loss Value | Name | AP (%) | TP | FP | Precision | Recall | F1-Score | IoU (%) | mAP@0.50 (%) |
---|---|---|---|---|---|---|---|---|---|---|
Yolo V3 | 0.0141 | P1 | 97.5 | 77 | 0 | 0.99 | 0.99 | 0.99 | 88.20 | 98.73 |
 | | P2 | 98.8 | 83 | 0 | | | | | |
 | | P3 | 99.9 | 62 | 1 | | | | | |
 | | P4 | 98.7 | 76 | 2 | | | | | |
Yolo V3 SPP | 0.0125 | P1 | 97.5 | 78 | 0 | 0.99 | 0.99 | 0.99 | 90.09 | 98.88 |
 | | P2 | 98.8 | 83 | 0 | | | | | |
 | | P3 | 99.9 | 62 | 1 | | | | | |
 | | P4 | 98.9 | 79 | 3 | | | | | |
Densenet | 0.0031 | P1 | 97.4 | 78 | 2 | 0.98 | 0.99 | 0.99 | 88.19 | 98.33 |
 | | P2 | 100 | 83 | 0 | | | | | |
 | | P3 | 100 | 62 | 2 | | | | | |
 | | P4 | 99.9 | 76 | 2 | | | | | |
Densenet SPP | 0.0078 | P1 | 98.8 | 78 | 1 | 0.98 | 0.99 | 0.99 | 88.55 | 98.53 |
 | | P2 | 100 | 82 | 1 | | | | | |
 | | P3 | 100 | 62 | 1 | | | | | |
 | | P4 | 99.4 | 75 | 2 | | | | | |
Resnet 50 | 0.004 | P1 | 96.2 | 75 | 4 | 0.93 | 0.96 | 0.94 | 73.11 | 97.09 |
 | | P2 | 98.2 | 80 | 3 | | | | | |
 | | P3 | 97.2 | 59 | 7 | | | | | |
 | | P4 | 96 | 74 | 9 | | | | | |
Resnet 50 SPP | 0.0045 | P1 | 97.4 | 78 | 1 | 0.97 | 0.98 | 0.98 | 79.33 | 97.7 |
 | | P2 | 100 | 83 | 0 | | | | | |
 | | P3 | 96.6 | 61 | 2 | | | | | |
 | | P4 | 96.8 | 73 | 5 | | | | | |
Tiny Yolo V3 | 0.0144 | P1 | 79.6 | 49 | 1 | 0.93 | 0.7 | 0.8 | 75.29 | 82.69 |
 | | P2 | 86.0 | 59 | 2 | | | | | |
 | | P3 | 88.7 | 52 | 11 | | | | | |
 | | P4 | 76.4 | 51 | 2 | | | | | |
Tiny Yolo V3 SPP | 0.0185 | P1 | 84.2 | 59 | 2 | 0.98 | 0.79 | 0.88 | 79.23 | 84.79 |
 | | P2 | 90.0 | 71 | 0 | | | | | |
 | | P3 | 95.4 | 56 | 2 | | | | | |
 | | P4 | 69.6 | 52 | 1 | | | | | |
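For reference, the Precision, Recall, and F1-Score columns are assumed to follow the standard detection definitions:

$$
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
$$

A quick consistency check for the Yolo V3 row: summing the four classes gives TP = 77 + 83 + 62 + 76 = 298 and FP = 0 + 0 + 1 + 2 = 3, so Precision = 298 / 301 ≈ 0.99, which matches the reported value. Recall and F1 additionally require the false-negative counts, which the table does not list.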
Image | Yolo V3 acc | Yolo V3 ms | Yolo V3 SPP acc | Yolo V3 SPP ms | Densenet acc | Densenet ms | Densenet SPP acc | Densenet SPP ms | Resnet 50 acc | Resnet 50 ms | Resnet 50 SPP acc | Resnet 50 SPP ms | Tiny Yolo V3 acc | Tiny Yolo V3 ms | Tiny Yolo V3 SPP acc | Tiny Yolo V3 SPP ms
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 0.97 | 15 | 0.97 | 14.4 | 0.73 | 48 | 0.81 | 24 | 0.84 | 16.9 | 0.49 | 20.7 | 0.35 | 6.4 | 0.44 | 12.9 |
2 | 0.9 | 20 | 0.98 | 21.3 | 0.92 | 27.5 | 0.89 | 51.9 | 0.68 | 15.6 | 0.6 | 22.5 | - | 4.9 | - | 3.2 |
3 | 0.83 | 19 | 0.99 | 15.2 | 0.94 | 41.3 | 0.92 | 21.1 | 0.79 | 15.3 | 0.86 | 24.3 | - | 2.8 | - | 5.3 |
4 | 0.95 | 21 | 0.99 | 26.1 | 0.8 | 49 | 0.76 | 48.8 | - | 16.7 | 0.61 | 24.1 | 1 | 4.7 | 0.99 | 3.5 |
5 | 0.89 | 18 | 0.99 | 14.8 | 0.85 | 45.9 | 0.83 | 21 | 0.91 | 9.2 | 0.91 | 11.8 | - | 2.7 | 0.37 | 3.9 |
6 | 0.98 | 18 | 0.99 | 14.4 | 0.87 | 30.9 | 0.86 | 41.1 | - | 15.7 | 0.51 | 10.6 | - | 3.2 | - | 6.3 |
7 | 0.96 | 20 | 1 | 23.2 | 0.85 | 46.4 | 0.88 | 47.9 | - | 16.6 | 0.81 | 23 | 1 | 2.8 | 0.79 | 21.3 |
8 | 1 | 16 | 1 | 16.8 | 0.9 | 30.2 | 0.96 | 45.9 | 0.6 | 15.8 | 0.9 | 17.6 | 0.99 | 3.1 | 0.9 | 13.9 |
9 | 0.88 | 21 | 0.99 | 15.1 | 0.76 | 40.6 | 0.74 | 29.2 | - | 15.5 | 0.64 | 20.3 | - | 14.3 | - | 3.3 |
10 | 0.9 | 15 | 0.98 | 32.6 | 0.63 | 28.5 | 0.82 | 53.2 | - | 19.8 | 0.51 | 24.1 | - | 2.7 | 0.94 | 3.3 |
11 | 0.96 | 22 | 0.99 | 14.5 | 0.82 | 28.6 | 0.96 | 34.9 | 0.39 | 16.1 | 0.43 | 16.8 | 0.54 | 10.3 | - | 8.5 |
12 | 0.79 | 15 | 0.99 | 15 | 0.74 | 27.2 | 0.95 | 42.9 | 0.55 | 9.4 | 0.68 | 10.2 | 0.98 | 4.4 | 0.87 | 3.5 |
13 | 1 | 14 | 1 | 16.6 | 0.92 | 45.8 | 0.95 | 49.1 | 0.93 | 13.9 | 0.94 | 10 | - | 3.5 | - | 3.3 |
14 | 1 | 15 | 1 | 17.2 | 0.92 | 37.3 | 0.95 | 36.5 | 0.85 | 21.9 | 0.92 | 25.6 | - | 3.8 | - | 7.1 |
15 | 0.92 | 15 | 0.97 | 14.7 | 0.8 | 45.6 | 0.95 | 27.4 | 0.76 | 15.5 | 0.61 | 11.8 | 0.99 | 19.8 | 0.87 | 19.2 |
16 | 0.87 | 15 | 0.99 | 14.9 | 0.79 | 47 | 0.5 | 46.8 | 0.29 | 10.9 | 0.31 | 20 | - | 3.9 | - | 8.4 |
17 | 0.98 | 13 | 1 | 14.6 | 0.74 | 42.2 | 0.91 | 59.1 | 0.92 | 9 | 0.69 | 18.8 | - | 2.8 | - | 19.9 |
18 | 0.95 | 15 | 1 | 14.8 | 0.89 | 42.1 | 0.9 | 23.9 | 0.9 | 21.1 | 0.87 | 22.5 | 0.99 | 2.7 | 1 | 3.9 |
19 | 0.87 | 15 | 0.99 | 21.2 | 0.72 | 24.8 | 0.82 | 43.1 | 0.28 | 9.9 | 0.78 | 25.8 | 0.94 | 3.4 | 1 | 5.8 |
20 | 0.88 | 15 | 0.99 | 15.2 | 0.86 | 38 | 0.94 | 52.2 | 0.68 | 20.6 | 0.9 | 24.7 | 0.98 | 5.6 | 0.98 | 3.4 |
Average | 0.92 | 17 | 0.99 | 17.6 | 0.82 | 38.3 | 0.87 | 40 | 0.52 | 15.3 | 0.7 | 19.3 | 0.4 | 5.4 | 0.5 | 8 |
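The Average row can be reproduced from the per-image accuracies above. The reported means are consistent with counting a missed detection (shown as "-") as an accuracy of 0; note that this averaging rule is inferred from the numbers rather than stated explicitly. A minimal sketch for two of the columns:

```python
# Minimal sketch: reproduce the "Average" accuracy for two model columns.
# Assumption (inferred, not stated): a missed detection ("-") counts as accuracy 0.
yolo_v3_acc = [0.97, 0.9, 0.83, 0.95, 0.89, 0.98, 0.96, 1, 0.88, 0.9,
               0.96, 0.79, 1, 1, 0.92, 0.87, 0.98, 0.95, 0.87, 0.88]
resnet50_acc = [0.84, 0.68, 0.79, None, 0.91, None, None, 0.6, None, None,
                0.39, 0.55, 0.93, 0.85, 0.76, 0.29, 0.92, 0.9, 0.28, 0.68]

def mean_acc(values):
    """Average accuracy over all 20 test images, counting misses (None) as 0."""
    return sum(v if v is not None else 0.0 for v in values) / len(values)

print(round(mean_acc(yolo_v3_acc), 2))   # 0.92, as reported
print(round(mean_acc(resnet50_acc), 2))  # 0.52, as reported
```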
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).