A Study of an Online Tracking System for Spark Images of Abrasive Belt-Polishing Workpieces
Abstract
1. Introduction
2. Experimental Setup
2.1. Belt Grinding Mechanism
2.2. The Mechanism of Spark Generation
3. Image Preprocessing and Annotation
3.1. Image Preprocessing
3.2. Annotation of Images
4. Methodologies
4.1. Overall Block Diagram
4.2. YOLO5 Model
4.2.1. Focus Module
4.2.2. BottleneckCSP Module
4.2.3. SPP Module
4.2.4. Output
Algorithm 1: Generalised Intersection over Union [24]
Input: Two arbitrary convex shapes A, B ⊆ S ∈ R^n
Output: GIoU
1. For A and B, find the smallest enclosing convex object C, where C ⊆ S ∈ R^n.
2. IoU = |A ∩ B| / |A ∪ B|   (1)
3. GIoU = IoU − |C \ (A ∪ B)| / |C|   (2)
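For the axis-aligned bounding boxes used in YOLO-style detectors, the enclosing convex object C in Algorithm 1 is simply the smallest box covering A and B, and GIoU can be computed in closed form. The sketch below is an illustration of the metric for boxes given as (x1, y1, x2, y2) tuples; the function name and box format are our own, not from the paper.

```python
def giou(a, b):
    """Generalised IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection area |A ∩ B|
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area |A ∪ B|
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C and the GIoU penalty term |C \ (A ∪ B)| / |C|
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (area_c - union) / area_c
```

Unlike plain IoU, GIoU remains informative for non-overlapping boxes: it goes negative as the boxes move apart, which is why it is usable as a bounding-box regression loss.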
5. Experiments and Discussion
5.1. Datasets
5.2. YOLO5 Experimental Settings
- (a) Training environment setup
- (b) YOLO5 training parameter settings
- (c) YOLO5 testing parameter settings
5.3. Training and Analysis of Results
5.4. Forecasting and Analysis of Results
5.5. YOLO4 Training and Prediction
5.6. YOLO4 Training and Analysis of Results
5.7. Discussion
6. Conclusions
- (1) YOLO5 was used for spark recognition in this study. The optimal model obtained after training can track the spark image area quickly and accurately, and with a higher-performance computer hardware configuration, even faster spark image recognition and detection could be achieved.
- (2) Compared to YOLO4, the YOLO5 model offers higher detection accuracy and stronger interference immunity. It achieves good recognition under natural conditions, such as backlighting or a dim machine-tool processing environment, and can accurately identify and locate the spark image target.
- (3) The small size of the YOLO5 model gives it better portability than YOLO4, and with a higher-performance computer hardware configuration, target detection speed can reach the millisecond level, which is sufficient for real-time tracking of spark images. This work lays the foundation for future research on the automatic segmentation of spark images and the relationship between the material removal rate and spark images.
- (1) Segmentation of a complete spark image from the spark image area detected by YOLO5;
- (2) Investigation of the relationship between the material removal rate and spark images;
- (3) Establishment of a prediction model accounting for the relationship between the spark image and the material removal rate to realize automatic control of the grinding process.
Author Contributions
Funding
Conflicts of Interest
References
- Qi, J.D.; Chen, B.; Zhang, D.H. Multi-information fusion-based belt condition monitoring in grinding process using the improved-Mahalanobis distance and convolutional neural networks. J. Manuf. Process. 2020, 59, 302–315. [Google Scholar] [CrossRef]
- Pandiyan, V.; Shevchik, S.; Wasmer, K.; Castagnec, S.; Tjahjowidodod, T. Modelling and monitoring of abrasive finishing processes using artificial intelligence techniques: A review. J. Manuf. Process. 2020, 57, 114–135. [Google Scholar] [CrossRef]
- Pandiyan, V.; Caesarendra, W.; Tjahjowidodo, T.; Tan, H.H. In-process tool condition monitoring in compliant abrasive belt grinding process using support vector machine and genetic algorithm. J. Manuf. Process. 2018, 31, 199–213. [Google Scholar] [CrossRef]
- Gao, K.; Chen, H.; Zhang, X.; Ren, X.; Chen, J.; Chen, X. A novel material removal prediction method based on acoustic sensing and ensemble XGBoost learning algorithm for robotic belt grinding of Inconel 718. Int. J. Adv. Manuf. Technol. 2019, 105, 217–232. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934v1. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision-ECCV 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation: Tech report (v5). arXiv 2014, arXiv:1311.2524v5. [Google Scholar]
- Girshick, R. Fast R-CNN. arXiv 2015, arXiv:1504.08083v2. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2016, arXiv:1506.01497v3. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. arXiv 2018, arXiv:1703.06870v3. [Google Scholar]
- Fu, L.; Gu, W.-b.; Ai, Y.-b.; Li, W.; Wang, D. Adaptive spatial pixel-level feature fusion network for multispectral pedestrian detection. Infrared Phys. Technol. 2021, 116, 103770. [Google Scholar] [CrossRef]
- Lian, J.; Yin, Y.; Li, L.; Wang, Z.; Zhou, Y. Small Object Detection in Traffic Scenes Based on Attention Feature Fusion. Sensors 2021, 21, 3031. [Google Scholar] [CrossRef]
- Wenkel, S.; Alhazmi, K.; Liiv, T.; Alrshoud, S.; Simon, M. Confidence Score: The Forgotten Dimension of Object Detection Performance Evaluation. Sensors 2021, 21, 4350. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Wang, N.; Li, L.; Ren, Z. Real-time behavior detection and judgment of egg breeders based on YOLO v3. Neural Comput. Appl. 2020, 32, 5471–5481. [Google Scholar] [CrossRef]
- Roy, A.M.; Bose, R.; Bhaduri, J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Comput. Appl. 2022, 34, 3895–3921. [Google Scholar]
- Ren, L.J.; Zhang, G.P.; Wang, Y.; Zhang, Q.; Huang, Y.M. A new in-process material removal rate monitoring approach in abrasive belt grinding. Int. J. Adv. Manuf. Technol. 2019, 104, 2715–2726. [Google Scholar] [CrossRef]
- Wang, N.; Zhang, G.; Pang, W.; Ren, L.; Wang, Y. Novel monitoring method for material removal rate considering quantitative wear of abrasive belts based on LightGBM learning algorithm. Int. J. Adv. Manuf. Technol. 2021, 114, 3241–3253. [Google Scholar] [CrossRef]
- Wang, N.; Zhang, G.; Pang, W.; Wang, Y. Vision and sound fusion-based material removal rate monitoring for abrasive belt grinding using improved LightGBM algorithm. J. Manuf. Process. 2021, 66, 281–292. [Google Scholar] [CrossRef]
- Huaibo, S.; Yanan, W.; Yunfei, W.; Shuaichao, L.; Mei, J. Camellia Fruit Detection in Natural Scene Based on YOLO v5s. Trans. Chin. Soc. Agric. Mach. 2022, 53, 234–242. [Google Scholar]
- Wenliang, W.; Yanxiang, L.; Yifan, Z.; Peng, H.; Shihao, L. MPANet-YOLOv5: Multi-Path Aggregation Network for Complex Sea Object Detection. J. Hunan Univ. Nat. Sci. 2021. [Google Scholar] [CrossRef]
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression. arXiv 2019, arXiv:1902.09630v2. [Google Scholar]
- Tian, Y.; Zhao, D.; Wang, T. An improved YOLO Nano model for dorsal hand vein detection system. Med. Biol. Eng. Comput. 2022, 60, 1225–1237. [Google Scholar] [CrossRef] [PubMed]
- Tajar, A.T.; Ramazani, A.; Mansoorizadeh, M. A lightweight Tiny-YOLOv3 vehicle detection approach. J. Real-Time Image Process. 2021, 18, 2389–2401. [Google Scholar] [CrossRef]
- Zhang, X.; Chen, H.; Xu, J.; Song, X.; Wang, J.; Chen, X. A novel sound-based belt condition monitoring method for robotic grinding using optimally pruned extreme learning machine. J. Mater. Process. Tech. 2018, 260, 9–19. [Google Scholar] [CrossRef]
- Gai, R.; Chen, N.; Yuan, H. A detection algorithm for cherry fruits based on the improved YOLO-v4 mode. Neural Comput. Appl. 2021. [Google Scholar] [CrossRef]
- Ting, Z.F. Research on Target Detection System of Basketball Robot Based on Improved YOLOv5 Algorithm; Chong Qing University: Chongqing, China, 2021. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. arXiv 2014, arXiv:1406.4729. [Google Scholar]
- Wang, C.Y.; Liao, H.Y.M.; Wu, Y.H.; Chen, P.Y.; Yeh, I.H. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2018, arXiv:1608.06993v5. [Google Scholar]
Item | Performance |
---|---|
MCU | INTEL E3827 |
Memory | 4 GB DDR3 |
Interface | 2 × RJ45 10/100/1000 Mbit/s, 1 × DVI-I, 4 × USB2.0, 1× |
System | Windows Embedded Compact 7 |
Power | 20 W
Parameter | Performance |
---|---|
Industrial camera model | HT-GE200GC |
Sensor type | Colour CMOS industrial camera
Pixels | 2 million |
Resolution | 1200 × 1600 |
Lens sight distance | 6–12 mm |
Angle | 28–53°
Size | 32 mm × 41 mm
Element | C | Cr | Mn | Si | P | S | Mo | Fe |
---|---|---|---|---|---|---|---|---|
Content | 0.95–1.05 | 1.40–1.65 | 0.25–0.45 | 0.25–0.35 | ≤0.025 | ≤0.025 | ≤0.1 | Other |
Project | Content |
---|---|
CPU | 6-core 11th Gen Intel(R) Core(TM) i5-11400F
RAM | 32 GB
GPU | NVIDIA RTX 3060 12 GB
Operating system | Microsoft Windows 10
CUDA | CUDA 11.3
Data processing | Python 3.8.8
Deep learning framework | PyTorch 1.12
Project | Content |
---|---|
CPU | 6-core 11th Gen Intel(R) Core(TM) i5-11400F
RAM | 32 GB
GPU | NVIDIA RTX 3060 12 GB
Operating system | Windows 10
CUDA | CUDA 11.6 with cuDNN 8.4
Data processing | Python 3.8
Deep learning framework | Darknet |
Performance Indicator | YOLO4 | YOLO5 |
---|---|---|
Epochs | 4000 | 300 |
Training time | 6 h | 0.68 h |
Testing time | 5 s | 2 s |
mAP (%) | 95.14 | 99.5 |
Recall (%) | 88 | 82 |
FPS | 4.2 | 7 |
Optimal model size (MB) | 227 | 14
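The recall and FPS rows in the comparison table reduce to simple definitions, sketched below for reference. The counts in the test values are illustrative only, not the paper's measurements.

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of ground-truth spark regions the detector found."""
    return true_positives / (true_positives + false_negatives)

def fps(frames: int, elapsed_s: float) -> float:
    """Detection throughput in frames per second."""
    return frames / elapsed_s
```

For example, a detector that finds 88 of 100 annotated spark regions has a recall of 88%, matching the units used in the table.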
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Huang, J.; Zhang, G. A Study of an Online Tracking System for Spark Images of Abrasive Belt-Polishing Workpieces. Sensors 2023, 23, 2025. https://doi.org/10.3390/s23042025