Automated Detection and Classification of Returnable Packaging Based on YOLOV4 Algorithm
Abstract
1. Introduction
- Can the YOLOV4 algorithm achieve high classification accuracy?
- Can classification accuracy be improved by tuning specific YOLOV4 hyperparameters (max batch size and activation function)?
- Does the train/test dataset ratio have any influence on YOLOV4 classification accuracy?
2. Materials and Methods
- gathering original images of returnable packaging,
- combining the original images with those obtained from Kaggle,
- augmenting the dataset to enlarge the number of images and obtain as many training, testing, and validation combinations as possible,
- labeling bounding boxes that mark the objects inside each selected image in a YOLOV4-compatible format, and
- testing real-time video camera detection to verify the operation of the designed detection algorithm.
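The augmentation and labeling steps above interact: geometric augmentation must also transform the YOLO-format labels. A minimal sketch of one such transform (a horizontal flip; illustrative only, not the paper's exact pipeline), assuming NumPy image arrays and normalized YOLO labels:

```python
import numpy as np

def hflip_with_labels(image, labels):
    """Horizontally flip an image and update its YOLO-format labels.

    labels: list of (class_id, x_center, y_center, width, height),
    all coordinates normalized to [0, 1] as in the YOLOV4 format.
    """
    flipped = image[:, ::-1]  # mirror the width axis
    # A mirrored box keeps y, w, h; only x_center reflects about 0.5.
    new_labels = [(c, 1.0 - x, y, w, h) for c, x, y, w, h in labels]
    return flipped, new_labels

# Illustrative 416x416 input with a white stripe on the left edge:
img = np.zeros((416, 416, 3), dtype=np.uint8)
img[:, :10] = 255
aug, lbls = hflip_with_labels(img, [(1, 0.2855, 0.8367, 0.2593, 0.2282)])
print(round(lbls[0][1], 4))  # 0.7145 (x_center reflected)
```

The same pattern extends to rotations and crops, each with its own label transform.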
2.1. Dataset Description and Preparation
2.2. Description of Methods
2.3. YOLOV4 Limitations and Accuracy Measures
- TP—True Positive predictions,
- FP—False Positive predictions, and
- FN—False Negative predictions.
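From these three counts, the accuracy measures used in the paper (precision, recall, F1) follow directly; a sketch with illustrative counts (not taken from the paper's results):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = detection_metrics(tp=90, fp=10, fn=10)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.9 0.9
```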
Configuration, Setup, and Hyper-Parameters Used for YOLOV4 Project
- $x$ and $y$ are the coordinates of a particular grid-cell offset, in the range from 0 to 1,
- $\lambda_{coord}$ and $\lambda_{noobj}$ are parameters that increase the loss from bounding-box coordinate predictions and decrease the loss from confidence predictions for bounding boxes that do not appear to contain any objects,
- $\mathbb{1}_i^{obj}$ indicates whether an object appears in cell $i$, and
- $\mathbb{1}_{ij}^{obj}$ indicates that the $j$-th bounding-box predictor in cell $i$ is “responsible” for that prediction.
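These symbols are the parameters of the original YOLO loss (Redmon et al.); a reconstruction of that loss, with $S^2$ grid cells and $B$ box predictors per cell, reads:

```latex
\begin{aligned}
\mathcal{L} ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}
   \left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}
   \left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\,\in\,classes}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
```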
3. Results and Discussion
3.1. Results
3.2. Discussion
4. Conclusions
- The YOLOV4 algorithm can achieve high classification accuracy in the detection and classification of returnable packaging; however, its hyperparameters must be tuned.
- The classification accuracy was improved by tuning just two hyperparameters: the max batch size and the type of activation function. The investigation showed that the max batch size has a greater influence than the type of activation function.
- The train/test dataset ratio did not have a notable influence on classification accuracy, because the investigated ratios were the same as those commonly used in other research papers. A smaller training dataset would produce a weakly trained YOLOV4 model, while a larger one could cause overfitting.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Appendix A.1
Appendix A.2
Appendix A.3
Appendix B
Appendix B.1
Appendix B.2
Appendix B.3
Appendix C
Appendix C.1
Appendix C.2
Appendix D
Appendix D.1
Appendix D.2
Class | Type of Packaging | Number of Original Images | Number of Augmented Images | Number of Total Images |
---|---|---|---|---|
1 | Plastic | 1003 | 869 | 1872 |
2 | Glass | 1224 | 600 | 1824 |
3 | Aluminum | 611 | 694 | 1205 |
Class | X Center | Y Center | Width | Height |
---|---|---|---|---|
1 | 0.2855 | 0.8367 | 0.2593 | 0.2282 |
1 | 0.5844 | 0.6422 | 0.2765 | 0.2217 |
1 | 0.6008 | 0.3644 | 0.1949 | 0.2619 |
1 | 0.2977 | 0.3014 | 0.2495 | 0.2511 |
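The normalized YOLO labels above map back to pixel coordinates given the input resolution; a sketch of the conversion (the 416 × 416 input size is an assumption for illustration):

```python
def yolo_to_pixel(x_c, y_c, w, h, img_w, img_h):
    """Convert a normalized YOLO label (x_center, y_center, width,
    height) to pixel corner coordinates (x_min, y_min, x_max, y_max)."""
    bw, bh = w * img_w, h * img_h          # box size in pixels
    x_min = x_c * img_w - bw / 2           # center minus half-extent
    y_min = y_c * img_h - bh / 2
    return x_min, y_min, x_min + bw, y_min + bh

# First labeled box from the table above, on a 416 x 416 input:
box = yolo_to_pixel(0.2855, 0.8367, 0.2593, 0.2282, 416, 416)
print([round(v, 1) for v in box])
```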
No. | Max Batches | Activation Function | Train/Test Ratio | No. | Max Batches | Activation Function | Train/Test Ratio |
---|---|---|---|---|---|---|---|
1 | 6000 | Linear | 70/30 | 19 | 10,000 | ReLU | 70/30 |
2 | 6000 | ReLU | 70/30 | 20 | 10,000 | Mish | 70/30 |
3 | 6000 | Mish | 70/30 | 21 | 10,000 | ReLU | 75/25 |
4 | 6000 | Linear | 75/25 | 22 | 10,000 | Mish | 75/25 |
5 | 6000 | ReLU | 75/25 | 23 | 10,000 | ReLU | 80/20 |
6 | 6000 | Mish | 75/25 | 24 | 10,000 | Mish | 80/20 |
7 | 6000 | Linear | 80/20 | 25 | 20,000 | ReLU | 70/30 |
8 | 6000 | ReLU | 80/20 | 26 | 20,000 | Mish | 70/30 |
9 | 6000 | Mish | 80/20 | 27 | 20,000 | ReLU | 75/25 |
10 | 8000 | Linear | 70/30 | 28 | 20,000 | Mish | 75/25 |
11 | 8000 | ReLU | 70/30 | 29 | 20,000 | ReLU | 80/20 |
12 | 8000 | Mish | 70/30 | 30 | 20,000 | Mish | 80/20 |
13 | 8000 | Linear | 75/25 | | | | |
14 | 8000 | ReLU | 75/25 | | | | |
15 | 8000 | Mish | 75/25 | | | | |
16 | 8000 | Linear | 80/20 | | | | |
17 | 8000 | ReLU | 80/20 | | | | |
18 | 8000 | Mish | 80/20 | | | | |
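The 30 configurations in the plan above can be enumerated programmatically; a sketch (note that, per the table, the Linear activation is only included at 6000 and 8000 max batches):

```python
from itertools import product

max_batches = [6000, 8000, 10_000, 20_000]
ratios = ["70/30", "75/25", "80/20"]

configs = []
for mb in max_batches:
    # The plan table lists Linear only for 6000 and 8000 max batches.
    acts = ["Linear", "ReLU", "Mish"] if mb <= 8000 else ["ReLU", "Mish"]
    for ratio, act in product(ratios, acts):
        configs.append((mb, act, ratio))

print(len(configs))  # 30 training runs, numbered 1-30 in the table
```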
Max Batches | 6000 | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|
Activation Function | Linear | | | ReLU | | | Mish | | |
Train/Test Ratio | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 |
mAP | 60.18% | 59.85% | 59.11% | 93.12% | 94.17% | 94.16% | 94.25% | 93.25% | 92.26% |
F1 | 0.52 | 0.48 | 0.47 | 0.87 | 0.85 | 0.81 | 0.86 | 0.84 | 0.83 |
Average IoU | 29.98% | 28.82% | 28.14% | 62.18% | 60.49% | 58.21% | 61.52% | 60.77% | 59.98% |
Precision | 0.44 | 0.41 | 0.40 | 0.84 | 0.79 | 0.75 | 0.81 | 0.78 | 0.76 |
Recall | 0.64 | 0.59 | 0.58 | 0.92 | 0.92 | 0.90 | 0.93 | 0.92 | 0.92 |
Average Loss | 2.6178 | 2.4873 | 2.4285 | 1.1888 | 1.1897 | 1.2689 | 1.1770 | 1.2203 | 1.2406 |
Max Batches | 8000 | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|
Activation Function | Linear | | | ReLU | | | Mish | | |
Train/Test Ratio | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 |
mAP | 94.51% | 93.17% | 91.27% | 99.77% | 96.17% | 95.93% | 99.71% | 99.70% | 99.65% |
F1 | 0.59 | 0.70 | 0.76 | 0.98 | 0.97 | 0.97 | 0.98 | 0.97 | 0.98 |
Average IoU | 60.11% | 57.15% | 52.27% | 80.75% | 78.12% | 77.63% | 79.80% | 81.24% | 81.25% |
Precision | 0.53 | 0.64 | 0.72 | 0.97 | 0.97 | 0.96 | 0.98 | 0.97 | 0.97 |
Recall | 0.69 | 0.79 | 0.80 | 0.99 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 |
Average Loss | 1.8411 | 1.7828 | 2.0389 | 0.9096 | 0.8660 | 0.9215 | 0.9443 | 0.9091 | 0.9145 |
Max Batches | 10,000 | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|
Activation Function | Linear | | | ReLU | | | Mish | | |
Train/Test Ratio | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 |
mAP | / | / | / | 99.90% | 99.90% | 99.77% | 99.84% | 99.83% | 99.81% |
F1 | / | / | / | 0.99 | 0.99 | 0.98 | 0.99 | 0.99 | 0.99 |
Average IoU | / | / | / | 84.14% | 84.59% | 83.41% | 83.44% | 83.59% | 83.33% |
Precision | / | / | / | 0.98 | 0.99 | 0.98 | 0.99 | 0.99 | 0.98 |
Recall | / | / | / | 1.00 | 1.00 | 1.00 | 1.00 | 0.99 | 1.00 |
Average Loss | / | / | / | 0.6320 | 0.6951 | 0.6862 | 0.6945 | 0.6809 | 0.6500 |
Max Batches | 20,000 | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|
Activation Function | Linear | | | ReLU | | | Mish | | |
Train/Test Ratio | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 | 70/30 | 75/25 | 80/20 |
mAP | / | / | / | 88.33% | 88.60% | 89.08% | 89.92% | 99.96% | 99.94% |
F1 | / | / | / | 0.86 | 0.86 | 0.87 | 0.87 | 1.00 | 1.00 |
Average IoU | / | / | / | 84.41% | 86.68% | 83.90% | 82.73% | 91.13% | 91.06% |
Precision | / | / | / | 0.95 | 0.96 | 0.94 | 0.93 | 1.00 | 1.00 |
Recall | / | / | / | 0.75 | 0.79 | 0.80 | 0.81 | 1.00 | 1.00 |
Average Loss | / | / | / | 0.9219 | 0.8496 | 0.7914 | 0.8598 | 0.3643 | 0.3400 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Glučina, M.; Baressi Šegota, S.; Anđelić, N.; Car, Z. Automated Detection and Classification of Returnable Packaging Based on YOLOV4 Algorithm. Appl. Sci. 2022, 12, 11131. https://doi.org/10.3390/app122111131