Segmentation of Drilled Holes in Texture Wooden Furniture Panels Using Deep Neural Network
Abstract
1. Introduction
2. Related Work
3. Methods
3.1. Conventional Image Processing
3.2. Baseline U-Net
3.3. Residual Connections
3.4. Squeeze-and-Excitation
3.5. Atrous Spatial Pyramid Pooling
3.6. CoordConv
4. Data
4.1. Image Capture Setup
4.2. Wooden Furniture Panels Image Data
4.3. Data Preparation
5. Experiments and Evaluation
- UNet;
- UNet with a squeeze and excitation (UNet + SE);
- UNet with CoordConv (UNet + CoordConv);
- UNet with a squeeze and excitation and CoordConv (UNet + SE + CoordConv);
- UNet with residual connections and atrous spatial pyramid pooling (UNet + Res + ASPP);
- UNet with residual connections, atrous spatial pyramid pooling, and squeeze and excitation (UNet + Res + ASPP + SE);
- UNet with residual connections, atrous spatial pyramid pooling, and CoordConv (UNet + Res + ASPP + CoordConv);
- UNet with residual connections, atrous spatial pyramid pooling, squeeze and excitation, and CoordConv (UNet + Res + ASPP + SE + CoordConv).
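The CoordConv variants in the list above prepend fixed coordinate maps to the input tensor so that subsequent convolutions become position-aware. A minimal NumPy sketch of this coordinate-channel augmentation (the function name and the [-1, 1] scaling are illustrative assumptions, following the construction in Liu et al.; the paper's actual layers are built in Keras):

```python
import numpy as np

def add_coord_channels(batch):
    """Append normalized y- and x-coordinate maps to an image batch.

    batch: float array of shape (N, H, W, C).
    Returns an array of shape (N, H, W, C + 2), where the two extra
    channels hold row and column coordinates scaled to [-1, 1].
    """
    n, h, w, _ = batch.shape
    # One coordinate ramp per spatial axis, broadcast over the batch.
    ys = np.linspace(-1.0, 1.0, h).reshape(1, h, 1, 1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, 1, w, 1)
    y_map = np.broadcast_to(ys, (n, h, w, 1))
    x_map = np.broadcast_to(xs, (n, h, w, 1))
    return np.concatenate([batch, y_map, x_map], axis=-1)
```

In a Keras model, the same maps would simply be concatenated with the feature tensor before a standard Conv2D layer; the network itself is unchanged otherwise.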
6. Results
6.1. Conventional Image Processing Methods
6.2. Convolutional Neural Network Results
7. Discussion
8. Integration and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Czimmermann, T.; Ciuti, G.; Milazzo, M.; Chiurazzi, M.; Roccella, S.; Oddo, C.M.; Dario, P. Visual-Based Defect Detection and Classification Approaches for Industrial Applications—A Survey. Sensors 2020, 20, 1459. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Liu, R.; Lehman, J.; Molino, P.; Such, F.P.; Frank, E.; Sergeev, A.; Yosinski, J. An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution. July 2018. Available online: https://proceedings.neurips.cc/paper/2018/file/60106888f8977b71e1f15db7bc9a88d1-Paper.pdf (accessed on 4 April 2021).
- Augustauskas, R. Models Implementation Code. 2021. Available online: https://github.com/rytisss/PanelsDrillSegmentation (accessed on 5 April 2021).
- Hernandez, A.; Maghami, A.; Khoshdarregi, M. A Machine Vision Framework for Autonomous Inspection of Drilled Holes in CFRP Panels. In Proceedings of the 2020 6th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 20–23 April 2020; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2020; pp. 669–675. [Google Scholar]
- Caggiano, A.; Angelone, R.; Teti, R. Image Analysis for CFRP Drilled Hole Quality Assessment. Procedia CIRP 2017, 62, 440–445. [Google Scholar] [CrossRef]
- Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
- Yu, L.; Bi, Q.; Ji, Y.; Fan, Y.; Huang, N.; Wang, Y. Vision based in-process inspection for countersink in automated drilling and riveting. Precis. Eng. 2019, 58, 35–46. [Google Scholar] [CrossRef]
- Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
- Li, G.; Yang, S.; Cao, S.; Zhu, W.; Ke, Y. A semi-supervised deep learning approach for circular hole detection on composite parts. Vis. Comput. 2021, 37, 433–445. [Google Scholar] [CrossRef]
- He, D.-C.; Wang, L. Texture Unit, Texture Spectrum, and Texture Analysis. IEEE Trans. Geosci. Remote Sens. 1990, 28, 509–512. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.F. Imagenet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Lin, T.; Maire, M.; Belongie, S.J.; Bourdev, L.D.; Girshick, R.B.; Hays, J.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Computer Vision—ECCV 2014. ECCV 2014. Lecture Notes in Computer Science; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8693. [Google Scholar] [CrossRef] [Green Version]
- Kuznetsova, A.; Rom, H.; Alldrin, N.; Uijlings, J.; Krasin, I.; Pont-Tuset, J.; Kamali, S.; Popov, S.; Malloci, M.; Kolesnikov, A.; et al. The Open Images Dataset V4. Int. J. Comput. Vis. 2020, 128, 1956–1981. [Google Scholar] [CrossRef] [Green Version]
- Touvron, H.; Vedaldi, A.; Douze, M.; Jegou, H. Fixing the train-test resolution discrepancy. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 32, Available online: https://proceedings.neurips.cc/paper/2019/file/d03a857a23b5285736c4d55e0bb067c8-Paper.pdf (accessed on 4 April 2021).
- Du, X.; Lin, T.-Y.; Jin, P.; Ghiasi, G.; Tan, M.; Cui, Y.; Le, Q.V.; Song, X. SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2020; pp. 11589–11598. [Google Scholar]
- Kaggle Competition. Open Images 2019. Deep Neural Network ResNeXt152 Solution. Kaggle Competition. 2019. Available online: https://www.kaggle.com/c/open-images-2019-object-detection/discussion/110953 (accessed on 21 February 2021).
- Qian, K. Automated Detection of Steel Defects via Machine Learning based on Real-Time Semantic Segmentation. ACM Int. Conf. Proceeding Ser. 2019, 42–46. [Google Scholar] [CrossRef]
- Xue, B.; Chang, B.; Du, D. Multi-Output Monitoring of High-Speed Laser Welding State Based on Deep Learning. Sensors 2021, 21, 1626. [Google Scholar] [CrossRef]
- Huang, X.; Liu, Z.; Zhang, X.; Kang, J.; Zhang, M.; Guo, Y. Surface damage detection for steel wire ropes using deep learning and computer vision techniques. Measurement 2020, 161, 107843. [Google Scholar] [CrossRef]
- Gao, M.; Chen, J.; Mu, H.; Qi, D. A Transfer Residual Neural Network Based on ResNet-34 for Detection of Wood Knot Defects. Forests 2021, 12, 212. [Google Scholar] [CrossRef]
- Yang, Y.; Zhou, X.; Liu, Y.; Hu, Z.; Ding, F. Wood Defect Detection Based on Depth Extreme Learning Machine. Appl. Sci. 2020, 10, 7488. [Google Scholar] [CrossRef]
- Urbonas, A.; Raudonis, V.; Maskeliūnas, R.; Damaševičius, R. Automated Identification of Wood Veneer Surface Defects Using Faster Region-Based Convolutional Neural Network with Data Augmentation and Transfer Learning. Appl. Sci. 2019, 9, 4898. [Google Scholar] [CrossRef] [Green Version]
- Liu, S.; Jiang, W.; Wu, L.; Wen, H.; Liu, M.; Wang, Y. Real-Time Classification of Rubber Wood Boards Using an SSR-Based CNN. IEEE Trans. Instrum. Meas. 2020, 69, 8725–8734. [Google Scholar] [CrossRef]
- Sheu, R.-K.; Teng, Y.-H.; Tseng, C.-H.; Chen, L.-C. Apparatus and Method of Defect Detection for Resin Films. Appl. Sci. 2020, 10, 1206. [Google Scholar] [CrossRef] [Green Version]
- Muresan, M.P.; Cireap, D.G.; Giosan, I. Automatic Vision Inspection Solution for the Manufacturing Process of Automotive Components Through Plastic Injection Molding. In Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 3–5 September 2020; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2020; pp. 423–430. [Google Scholar]
- Lenty, B. Machine vision system for quality control of molded plastic packaging. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2019, Wilga, Poland, 6 November 2019; p. 77. [Google Scholar] [CrossRef]
- Yin, X.; Chen, Y.; Bouferguene, A.; Zaman, H.; Al-Hussein, M.; Kurach, L. A deep learning-based framework for an automated defect detection system for sewer pipes. Autom. Constr. 2020, 109, 102967. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
- Adibhatla, V.A.; Chih, H.-C.; Hsu, C.-C.; Cheng, J.; Abbod, M.F.; Shieh, J.-S. Defect Detection in Printed Circuit Boards Using You-Only-Look-Once Convolutional Neural Networks. Electronics 2020, 9, 1547. [Google Scholar] [CrossRef]
- Su, B.; Chen, H.Y.; Chen, P.; Bian, G.-B.; Liu, K.; Liu, W. Deep Learning-Based Solar-Cell Manufacturing Defect Detection with Complementary Attention Network. IEEE Trans. Ind. Inform. 2021, 17, 4084–4095. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Roberts, G.; Haile, S.Y.; Sainju, R.; Edwards, D.J.; Hutchinson, B.; Zhu, Y. Deep Learning for Semantic Segmentation of Defects in Advanced STEM Images of Steels. Sci. Rep. 2019, 9, 1–12. [Google Scholar] [CrossRef] [Green Version]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. Mobilenetv2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Lin, Z.; Ye, H.; Zhan, B.; Huang, X. An Efficient Network for Surface Defect Detection. Appl. Sci. 2020, 10, 6085. [Google Scholar] [CrossRef]
- DAGM. Weakly Supervised Learning for Industrial Optical Inspection. DAGM Dataset. 2007. Available online: https://hci.iwr.uni-heidelberg.de/node/3616 (accessed on 4 April 2021).
- Huang, Y.; Qiu, C.; Wang, X.; Wang, S.; Yuan, K. A Compact Convolutional Neural Network for Surface Defect Inspection. Sensors 2020, 20, 1974. [Google Scholar] [CrossRef] [Green Version]
- Niskanen, M.; Kauppinen, H. Wood inspection with non-supervised clustering. Mach. Vis. Appl. 2003, 13, 275–285. [Google Scholar] [CrossRef] [Green Version]
- Kechen, S.; Yunhui, Y. Northeastern University (NEU) Surface Defect Database. Available online: http://faculty.neu.edu.cn/yunhyan/NEU_surface_defect_database.html (accessed on 4 April 2021).
- Danielsson, P.-E.; Seger, O. Generalized and Separable Sobel Operators. In Machine Vision for Three-Dimensional Scenes; Elsevier BV: Amsterdam, The Netherlands, 1990; pp. 347–379. [Google Scholar]
- van Vliet, L.J.; Young, I.T.; Beckers, G.L. A nonlinear laplace operator as edge detector in noisy images. Comput. Vis. Graph. Image Process. 1989, 45, 167–195. [Google Scholar] [CrossRef] [Green Version]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
- Gholami, A.; Kwon, K.; Wu, B.; Tai, Z.; Yue, X.; Jin, P.; Zhao, S.; Keutzer, K. SqueezeNext: Hardware-Aware Neural Network Design. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1719–171909. [Google Scholar]
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
- Chen, G.; Li, C.; Wei, W.; Jing, W.; Woźniak, M.; Blažauskas, T.; Damaševičius, R. Fully Convolutional Neural Network with Augmented Atrous Spatial Pyramid Pool and Fully Connected Fusion Path for High Resolution Remote Sensing Image Segmentation. Appl. Sci. 2019, 9, 1816. [Google Scholar] [CrossRef] [Green Version]
- Liu, W.; Rabinovich, A.; Berg, A.C. ParseNet: Looking Wider to See Better. June 2015. Available online: http://arxiv.org/abs/1506.04579 (accessed on 4 April 2021).
- el Jurdi, R.; Petitjean, C.; Honeine, P.; Abdallah, F. CoordConv-Unet: Investigating CoordConv for Organ Segmentation. IRBM 2021. [Google Scholar] [CrossRef]
- Zhao, H.; Jia, J.; Koltun, V. Exploring Self-Attention for Image Recognition. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10073–10082. [Google Scholar]
- Uselis, A.; Lukoševičius, M.; Stasytis, L. Localized Convolutional Neural Networks for Geospatial Wind Forecasting. Energies 2020, 13, 3440. [Google Scholar] [CrossRef]
- raL6144-16gm—Basler Racer Camera Website. Available online: https://www.baslerweb.com/en/products/cameras/line-scan-cameras/racer/ral6144-16gm/ (accessed on 4 April 2021).
- AF Nikkor 24 mm f/2.8D Optics Website. Available online: https://www.nikon.lt/en_LT/product/nikkor-lenses/auto-focus-lenses/fx/single-focal-length/af-nikkor-24mm-f-2-8d (accessed on 4 April 2021).
- Autonics E40S6-1500-3-T-24 Encoder Website. Available online: https://www.autonicsonline.com/product/product&product_id=14505 (accessed on 4 April 2021).
- EBAR-1125-WHI-7 TPL-Vision LED Lamp Website. Available online: https://www.tpl-vision.fr/en/bar/ebar-plus/ (accessed on 4 April 2021).
- Keras: The Python Deep Learning API. Available online: https://keras.io/ (accessed on 25 March 2021).
- TensorFlow. An End-to-End Open Source Machine Learning Platform. Available online: https://www.tensorflow.org/ (accessed on 27 August 2020).
- Dice, L.R. Measures of the Amount of Ecologic Association Between Species. Ecology 1945, 26, 297–302. [Google Scholar] [CrossRef]
- Shindjalova, R.; Prodanova, K.; Svechtarov, V. Modeling data for tilted implants in grafted with bio-oss maxillary sinuses using logistic regression. AIP Conf. Proc. 2014, 1631, 58–62. [Google Scholar] [CrossRef]
- Arthur, D.; Vassilvitskii, S. K-Means++: The Advantages of Careful Seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–9 January 2007; pp. 1027–1035. [Google Scholar]
- Sklansky, J. Finding the convex hull of a simple polygon. Pattern Recognit. Lett. 1982, 1, 79–83. [Google Scholar] [CrossRef]
- Dai, P.; Ji, S.; Zhang, Y. Gated Convolutional Networks for Cloud Removal from Bi-Temporal Remote Sensing Images. Remote Sens. 2020, 12, 3427. [Google Scholar] [CrossRef]
- Zhang, M.; Jing, W.; Lin, J.; Fang, N.; Wei, W.; Woźniak, M.; Damaševičius, R. NAS-HRIS: Automatic Design and Architecture Search of Neural Network for Semantic Segmentation in Remote Sensing Images. Sensors 2020, 20, 5292. [Google Scholar] [CrossRef]
- Raudonis, V.; Paulauskaite-Taraseviciene, A.; Sutiene, K. Fast Multi-Focus Fusion Based on Deep Learning for Early-Stage Embryo Image Enhancement. Sensors 2021, 21, 863. [Google Scholar] [CrossRef]
- Khan, M.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin Lesion Segmentation and Multiclass Classification Using Deep Learning Features and Improved Moth Flame Optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef]
- Glowacz, A. Fault diagnosis of electric impact drills using thermal imaging. Measurement 2021, 171, 108815. [Google Scholar] [CrossRef]
- Piekarski, M.; Jaworek-Korjakowska, J.; Wawrzyniak, A.I.; Gorgon, M. Convolutional neural network architecture for beam instabilities identification in Synchrotron Radiation Systems as an anomaly detection problem. Measurement 2020, 165, 108116. [Google Scholar] [CrossRef]
| Component | Model |
|---|---|
| Linear camera | raL6144-16gm Basler racer [55] |
| Camera optics | Nikon AF Nikkor 24 mm f/2.8D [56] |
| Encoder (on the motor) | Autonics E40S6-1500-3-T-24 [57] |
| Industrial LED lamp | EBAR-1125-WHI-7 TPL-Vision [58] |
| Computer | CPU | RAM | GPU | OS |
|---|---|---|---|---|
| Desktop | AMD Ryzen 5 3600 | 16 GB | Nvidia RTX 2070 Super | Windows 10 |
| Laptop | Intel Core i5-8300H | 16 GB | Nvidia GTX 1050 Ti | Windows 10 |
| Method | Accuracy | Recall | Precision | IoU | Dice |
|---|---|---|---|---|---|
| Sobel filter | 0.996943 | 0.919077 | 0.637585 | 0.580435 | 0.590472 |
| Laplace filter | 0.959769 | 0.931860 | 0.651680 | 0.607032 | 0.614552 |
| Canny edge detector | 0.934371 | 0.978507 | 0.693433 | 0.677103 | 0.685342 |
| CNN Architecture | Accuracy | Recall | Precision | IoU | Dice |
|---|---|---|---|---|---|
| UNet | 0.999485 | 0.959081 | 0.958613 | 0.955272 | 0.944966 |
| UNet + SE | 0.998978 | 0.936343 | 0.978481 | 0.979132 | 0.953470 |
| UNet + CoordConv2D | 0.999390 | 0.961770 | 0.975089 | 0.973536 | 0.965975 |
| UNet + SE + CoordConv2D | 0.999102 | 0.949620 | 0.983831 | 0.980100 | 0.962330 |
| UNet + Res + ASPP | 0.999475 | 0.959433 | 0.973082 | 0.970765 | 0.961194 |
| UNet + Res + ASPP + SE | 0.999681 | 0.982027 | 0.977736 | 0.975958 | 0.978871 |
| UNet + Res + ASPP + CoordConv2D | 0.999548 | 0.967609 | 0.977881 | 0.974820 | 0.969476 |
| UNet + Res + ASPP + SE + CoordConv2D | 0.999414 | 0.962808 | 0.977196 | 0.974946 | 0.968346 |
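The IoU and Dice columns in both result tables follow the standard overlap definitions for binary segmentation masks (Dice [61 in the original numbering; Dice 1945] in the reference list). A minimal NumPy sketch of the two metrics (the smoothing constant `eps` is an assumption added to avoid division by zero on empty masks):

```python
import numpy as np

def iou_dice(pred, target, eps=1e-7):
    """Intersection-over-union and Dice coefficient of two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = (inter + eps) / (union + eps)
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    return iou, dice
```

Accuracy, recall, and precision in the tables are the usual pixel-wise classification metrics computed over the same prediction/ground-truth mask pairs.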
Share and Cite
Augustauskas, R.; Lipnickas, A.; Surgailis, T. Segmentation of Drilled Holes in Texture Wooden Furniture Panels Using Deep Neural Network. Sensors 2021, 21, 3633. https://doi.org/10.3390/s21113633