Design and Implementation of Defect Detection System Based on YOLOv5-CBAM for Lead Tabs in Secondary Battery Manufacturing
Abstract
1. Introduction
- Secondary battery lead tab quality inspection automation: Automating lead tab quality inspection with AI can sharpen a company's competitive edge and raise productivity by reducing worker fatigue and increasing inspection speed.
- YOLOv5_CBAM: CBAM, an attention-based module, is applied to the Bottleneck part of YOLOv5 to reduce the amount of computation while improving accuracy.
- Accuracy: Rather than simply adding layers to improve accuracy, we improve it with an attention mechanism that retains important information and suppresses unnecessary information.
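As a rough illustration of the CBAM idea, the sketch below applies channel attention (a shared MLP over average- and max-pooled channel descriptors) followed by spatial attention to a feature map. This is not the authors' implementation: the MLP weights are random placeholders, and the learned 7×7 convolution of CBAM's spatial branch is replaced here by a fixed average of the two pooled maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention: shared two-layer MLP over avg- and max-pooled
    channel descriptors. x: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    avg = x.mean(axis=(1, 2))                       # (C,) average pooling
    mx = x.max(axis=(1, 2))                         # (C,) max pooling
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))             # per-channel weights in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x):
    """Spatial attention over channel-wise avg and max maps. CBAM learns a
    7x7 conv over the stacked maps; a fixed average stands in for it here."""
    avg = x.mean(axis=0)                            # (H, W)
    mx = x.max(axis=0)                              # (H, W)
    scale = sigmoid((avg + mx) / 2.0)               # per-position weights in (0, 1)
    return x * scale[None, :, :]

def cbam(x, w1, w2):
    # CBAM order: channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2))

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                             # r = channel reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = cbam(x, w1, w2)
print(y.shape)  # same shape as the input: (8, 4, 4)
```

Because both attention maps lie in (0, 1), the module reweights features without changing the tensor shape, which is what lets CBAM drop into an existing Bottleneck block.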
2. Related Work
2.1. YOLOv5
2.2. Attention Mechanism
2.2.1. Cross-Modal Attention
2.2.2. Self-Attention
2.2.3. Adaptive Modules
2.3. CBAM (Convolutional Block Attention Module)
2.3.1. Channel Attention Module
2.3.2. Spatial Attention Module
3. YOLOv5_CBAM-Based Inspection
3.1. System Architecture
3.2. C3_CBAM
3.3. Data Postprocessing
4. Performance Analysis
4.1. Experimental Environments
4.2. Experimental Datasets
4.3. Evaluation Index
4.4. Results
4.5. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- QYResearch KOREA. Lithium-Ion Battery Lead Tabs Market Report 2023. Revised. Available online: https://qyresearch.co.kr/post-one/%eb%a6%ac%ed%8a%ac%ec%9d%b4%ec%98%a8%eb%b0%b0%ed%84%b0%eb%a6%ac-%eb%a6%ac%eb%93%9c%ed%83%ad-lead-tabs-%ec%8b%9c%ec%9e%a5%eb%b3%b4%ea%b3%a0%ec%84%9c-2023%eb%85%84-%ea%b0%9c%ec%a0%95%ed%8c%90 (accessed on 16 March 2023).
- U.S. Department of the Treasury. Treasury Releases Proposed Guidance on New Clean Vehicle Credit to Lower Costs for Consumers, Build U.S. Industrial Base, Strengthen Supply Chains. Available online: https://home.treasury.gov/news/press-releases/jy1379 (accessed on 31 March 2023).
- Council of the EU. First ‘Fit for 55’ Proposal Agreed: The EU Strengthens Targets for CO2 Emissions for New Cars and Vans. Available online: https://www.consilium.europa.eu/en/press/press-releases/2022/10/27/first-fit-for-55-proposal-agreed-the-eu-strengthens-targets-for-co2-emissions-for-new-cars-and-vans/ (accessed on 27 October 2022).
- LMC Automotive. The Batteries Fuelling Global Light Vehicle Electrification. 5. Available online: https://www.thebatteryshow.com/content/dam/Informa/amg/novi/2022/docs/10_15%20-%20Riddell.pdf (accessed on 21 August 2023).
- Autoview. By 2022, 1 in 10 New Cars Worldwide Will Be Electric Vehicles…Ranked 2nd in Exports to China. Available online: http://www.autoview.co.kr/content/article.asp?num_code=78987&news_section=world_news&pageshow=1&page=1&newchk=news (accessed on 17 January 2023).
- The Guru. ‘Milestone’ of 10% Global Share of EVs in 2022…7.8 Million Units Sold. Available online: https://www.theguru.co.kr/news/article_print.html?no=48371 (accessed on 18 January 2023).
- Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A survey of modern deep learning based object detection models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Diwan, T.; Anirudh, G.; Tembhurne, J.V. Object detection using YOLO: Challenges, architectural successors, datasets and applications. Multimed. Tools Appl. 2022, 82, 9243–9275. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Wang, Z.; Jin, L.; Wang, S.; Xu, H. Apple stem/calyx real-time recognition using YOLO-v5 algorithm for fruit automatic loading system. Postharvest Biol. Technol. 2022, 185, 111808. [Google Scholar] [CrossRef]
- Zhang, Y.; Guo, Z.; Wu, J.; Tian, Y.; Tang, H.; Guo, X. Real-Time Vehicle Detection Based on Improved YOLO v5. Sustainability 2022, 14, 12274. [Google Scholar] [CrossRef]
- Li, Z.; Xie, W.; Zhang, L.; Lu, S.; Xie, L.; Su, H.; Du, W.; Hou, W. Toward Efficient Safety Helmet Detection Based on YoloV5 with Hierarchical Positive Sample Selection and Box Density Filtering. IEEE Trans. Instrum. Meas. 2022, 71, 1–14. [Google Scholar] [CrossRef]
- Wang, L.; Liu, X.; Ma, J.; Su, W.; Li, H. Real-Time Steel Surface Defect Detection with Improved Multi-Scale YOLO-v5. Processes 2023, 11, 1357. Available online: https://www.mdpi.com/2227-9717/11/5/1357 (accessed on 25 April 2023).
- Liu, W.; Xiao, Y.; Zheng, A.; Zheng, Z.; Liu, X.; Zhang, Z.; Li, C. Research on Fault Diagnosis of Steel Surface Based on Improved YOLOV5. Processes 2022, 10, 2274. Available online: https://www.mdpi.com/2227-9717/10/11/2274 (accessed on 31 October 2022).
- Cao, Z.; Fang, L.; Li, Z.; Li, J. Lightweight Target Detection for Coal and Gangue Based on Improved Yolov5s. Processes 2023, 11, 1268. Available online: https://www.mdpi.com/2227-9717/11/4/1268 (accessed on 18 April 2023).
- Corbetta, M.; Shulman, G.L. Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 2002, 3, 201–215. [Google Scholar] [CrossRef] [PubMed]
- Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 1998, 20, 1254–1259. [Google Scholar] [CrossRef]
- Rensink, R.A. The dynamic representation of scenes. Vis. Cogn. 2000, 7, 17–42. [Google Scholar] [CrossRef]
- Larochelle, H.; Hinton, G.E. Learning to combine foveal glimpses with a third-order Boltzmann machine. Adv. Neural Inf. Process. Syst. 2010. Available online: https://papers.nips.cc/paper_files/paper/2010/hash/677e09724f0e2df9b6c000b75b5da10d-Abstract.html (accessed on 18 April 2023).
- Hirsch, J.; Curcio, C.A. The spatial resolution capacity of human foveal retina. Vis. Res. 1989, 29, 1095–1101. [Google Scholar] [CrossRef] [PubMed]
- Yang, Z.; He, X.; Gao, J.; Deng, L.; Smola, A. Stacked attention networks for image question answering. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Nam, H.; Ha, J.-W.; Kim, J. Dual attention networks for multimodal reasoning and matching. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2156–2164. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. arXiv 2017, arXiv:1709.01507. [Google Scholar]
- Wang, F.; Jiang, M.; Qian, C.; Yang, S.; Li, C.; Zhang, H.; Wang, X.; Tang, X. Residual attention network for image classification. arXiv 2017, arXiv:1704.06904. [Google Scholar]
- Jia, X.; De Brabandere, B.; Tuytelaars, T.; Gool, L.V. Dynamic filter networks. Adv. Neural Inf. Process. Syst. 2016. [Google Scholar] [CrossRef]
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. Adv. Neural Inf. Process. Syst. 2015. [Google Scholar] [CrossRef]
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable convolutional networks. CoRR 2017, 1, 3. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012. [Google Scholar]
- Zagoruyko, S.; Komodakis, N. Wide residual networks. arXiv 2016, arXiv:1605.07146. [Google Scholar]
- Han, D.; Kim, J.; Kim, J. Deep pyramidal residual networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6307–6315. [Google Scholar]
- Park, J.; Woo, S.; Lee, J.-Y.; Kweon, I.S. BAM: Bottleneck Attention Module. arXiv 2018, arXiv:1807.06514. Available online: https://arxiv.org/abs/1807.06514v2 (accessed on 17 July 2018).
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. arXiv 2018, arXiv:1807.06521. Available online: https://arxiv.org/abs/1807.06521 (accessed on 17 July 2018).
| Image Size | Material Type | Training Images | Defect Type | Training Objects |
|---|---|---|---|---|
| 1280 × 1280 | Al | 1050 | Metal pollution | 513 |
| | | | Surface bubble | 270 |
| | | | Ripped off | 538 |
| | | | Film alien substance | 377 |
| | | | Metal alien substance | 192 |
| | | | Teflon | 150 |
| | | | Jinjeop | 479 |
| | Ni | 1050 | Metal pollution | 629 |
| | | | Surface bubble | 661 |
| | | | Ripped off | 873 |
| | | | Film alien substance | 495 |
| | | | Metal alien substance | 762 |
| | | | Teflon | 150 |
| | | | Jinjeop | 445 |
| Defect Type | Al | Ni |
|---|---|---|
| Faultless | 700 | 700 |
| Metal pollution | 100 | 100 |
| Surface bubble | 100 | 100 |
| Ripped off | 100 | 100 |
| Film alien substance | 100 | 100 |
| Metal alien substance | 100 | 100 |
| Teflon | 100 | 100 |
| Jinjeop | 100 | 100 |
| Total | 1400 | 1400 |

Total test images (Al + Ni): 2800.
| Model | Precision | Recall | F1-Score |
|---|---|---|---|
| YOLOv5 | 1.0 | 0.93 | 0.96 |
| YOLOv5_CBAM_Backbone | 1.0 | 0.78 | 0.87 |
| YOLOv5_CBAM_Neck | 1.0 | 0.94 | 0.97 |
| YOLOv5_CBAM_All | 1.0 | 0.97 | 0.98 |
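The F1-scores in the table follow directly from the precision and recall columns as the harmonic mean, which the short check below reproduces to within rounding (about ±0.01):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall) pairs taken from the results table above
results = {
    "YOLOv5": (1.0, 0.93),
    "YOLOv5_CBAM_Backbone": (1.0, 0.78),
    "YOLOv5_CBAM_Neck": (1.0, 0.94),
    "YOLOv5_CBAM_All": (1.0, 0.97),
}
for name, (p, r) in results.items():
    print(f"{name}: F1 = {f1(p, r):.2f}")
```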
| Model | Parameters | GFLOPs | F1-Score |
|---|---|---|---|
| YOLOv5 | 86.6 M | 205.8 | 0.96 |
| YOLOv5_CBAM_Backbone | 55.9 M | 114.4 | 0.87 |
| YOLOv5_CBAM_Neck | 61.3 M | 153.8 | 0.97 |
| YOLOv5_CBAM_All | 30.6 M | 62.3 | 0.98 |
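The table's headline trade-off can be quantified directly from its rows: relative to plain YOLOv5, YOLOv5_CBAM_All cuts parameters and computation by well over half while raising F1. The arithmetic:

```python
# Figures taken from the complexity table above (parameters in millions)
baseline = {"params_M": 86.6, "gflops": 205.8, "f1": 0.96}   # YOLOv5
cbam_all = {"params_M": 30.6, "gflops": 62.3, "f1": 0.98}    # YOLOv5_CBAM_All

param_cut = 1 - cbam_all["params_M"] / baseline["params_M"]  # fractional reduction
flop_cut = 1 - cbam_all["gflops"] / baseline["gflops"]

print(f"parameter reduction: {param_cut:.1%}")  # -> 64.7%
print(f"GFLOPs reduction:    {flop_cut:.1%}")   # -> 69.7%
```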
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mun, J.; Kim, J.; Do, Y.; Kim, H.; Lee, C.; Jeong, J. Design and Implementation of Defect Detection System Based on YOLOv5-CBAM for Lead Tabs in Secondary Battery Manufacturing. Processes 2023, 11, 2751. https://doi.org/10.3390/pr11092751