Detection and Classification of Defective Hard Candies Based on Image Processing and Convolutional Neural Networks
Abstract
1. Introduction
2. Classification System and Data Collection
2.1. Classification System for Hard Candies
2.2. Establishment of the Hard Candy Dataset
3. Methods
3.1. Identification of Defective Candies
3.2. Segmentation of Adhesive Hard Candies
3.2.1. Adhesion Determination
3.2.2. Concave Point Detection
3.2.3. Contour Segment Grouping
- If the average distance deviation (ADD) of the ellipse fitted to a combined group of contour segments is smaller than the ADD of the ellipse fitted to each contour segment before the combination, then these contour segments can be assigned to the same group (a code sketch of this criterion follows the list).
- If the gravity center of the ellipse fitted to the combined group of contour segments is close to that of the ellipse fitted separately to each contour segment, then the segments can be assigned to one group.
- If the gravity centers of the two ellipses fitted from any two contour segments are sufficiently close, the two segments can be assigned to one group.
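A minimal sketch of the first grouping criterion, assuming OpenCV's cv2.fitEllipse for ellipse fitting; the point-to-ellipse distance is approximated by densely sampling the fitted ellipse, and requiring the merged ADD to beat both individual ADDs is one plausible reading of the rule rather than the authors' exact implementation:

```python
import cv2
import numpy as np

def average_distance_deviation(points, ellipse, n_samples=360):
    """Approximate the average distance deviation (ADD) of contour points from a
    fitted ellipse ((cx, cy), (width, height), angle_deg) by sampling the ellipse
    densely and measuring each point's distance to the nearest sample."""
    (cx, cy), (w, h), angle = ellipse
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    rad = np.deg2rad(angle)
    # Parametric ellipse rotated by `angle` and centred at (cx, cy).
    xs = cx + 0.5 * w * np.cos(t) * np.cos(rad) - 0.5 * h * np.sin(t) * np.sin(rad)
    ys = cy + 0.5 * w * np.cos(t) * np.sin(rad) + 0.5 * h * np.sin(t) * np.cos(rad)
    samples = np.stack([xs, ys], axis=1)                      # (n_samples, 2)
    dists = np.linalg.norm(points[:, None, :] - samples[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

def should_merge(seg_a, seg_b):
    """Criterion 1: merge two contour segments (each an (N, 2) float32 array with
    N >= 5, as cv2.fitEllipse requires) when the ellipse fitted to their union
    fits better, on average, than the separately fitted ellipses."""
    merged = np.vstack([seg_a, seg_b]).astype(np.float32)
    add_merged = average_distance_deviation(merged, cv2.fitEllipse(merged))
    add_a = average_distance_deviation(seg_a, cv2.fitEllipse(seg_a))
    add_b = average_distance_deviation(seg_b, cv2.fitEllipse(seg_b))
    return add_merged < min(add_a, add_b)
```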
3.2.4. Ellipse Fitting
3.3. Classification of Defective Hard Candies
- A convolutional layer, a set of convolutional filters that activate features of the input image;
- A rectified linear unit (ReLU) layer, an activation function;
- A subsampling or pooling layer, a form of downsampling;
- A fully connected layer, which integrates the features extracted by the previous layers into a one-dimensional output;
- A softmax layer, which outputs the probability of each category established in the dataset when classification is performed; a minimal sketch of this layer stack is given below.
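To make the layer stack above concrete, here is a minimal PyTorch sketch; it is an illustration only, not one of the networks evaluated in this paper, and the 224 × 224 input size and the four candy categories are assumed for the example:

```python
import torch
import torch.nn as nn

class TinyCandyCNN(nn.Module):
    """Minimal illustration of the listed layer types:
    convolution -> ReLU -> pooling -> fully connected -> softmax."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(inplace=True),                        # activation
            nn.MaxPool2d(2),                              # pooling / downsampling
        )
        self.classifier = nn.Linear(16 * 112 * 112, num_classes)  # fully connected

    def forward(self, x):                     # x: (N, 3, 224, 224)
        x = self.features(x)
        x = torch.flatten(x, 1)               # integrate features into one dimension
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)   # per-class probabilities

# Example: classify one dummy image; the output row sums to 1.
probs = TinyCandyCNN()(torch.randn(1, 3, 224, 224))
```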
- Alexnet [14], one of the first deep networks, is made up of five convolutional layers and three fully connected layers.
- Googlenet [31], compared with Alexnet, is much deeper yet has fewer network parameters: about 7 million parameters, with nine inception modules, four convolutional layers, three average pooling layers, five fully connected layers, and three softmax layers.
- VGG (VGG16) [32], developed by the Visual Geometry Group (VGG) of the University of Oxford, enhances Alexnet by replacing large kernel-sized filters with multiple 3×3 kernel-sized filters stacked one after another.
- Resnet (Resnet-18, Resnet-34 and Resnet-50) [33] is a series of deep learning models similar to VGG but deeper and with shortcut connections. Resnet-N means that the total number of convolutional and fully connected layers in the model is N (a transfer-learning sketch based on Resnet-18 is given after this list).
- MobileNetV2 [34] is a mobile architecture used for object detection in the SSDLite framework. It is a lightweight neural network with few parameters and strong performance.
- MnasNet0_5 [35] comes from an automated mobile neural architecture search approach and is faster than MobileNetV2 on object detection.
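As a rough illustration of how such pretrained backbones can be adapted to the four candy categories (good, holey, broken, small), the following sketch assumes a standard torchvision transfer-learning recipe; the optimizer, learning rate, batch size, and image size are illustrative choices, not the settings reported in this paper:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained Resnet-18 and replace its final fully
# connected layer so it predicts the four candy categories.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)   # batch of candy images
labels = torch.randint(0, 4, (8,))     # class indices 0..3
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```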
4. Results
4.1. Hard Candy Classification Test Result
4.1.1. Classification Performance of CNN Models
4.1.2. Classification Performance of Different Models
4.2. Prototype Design Principle and Workflow
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Conflicts of Interest
References
- Cárdenas-Pérez, S.; Chanona-Pérez, J.; Méndez-Méndez, J.V.; Calderón-Domínguez, G.; López-Santiago, R.; Perea-Flores, M.J.; Arzate-Vázquez, I. Evaluation of the ripening stages of apple (Golden Delicious) by means of computer vision system. Biosyst. Eng. 2017, 159, 46–58. [Google Scholar] [CrossRef]
- Chao, M.; Kai, C.; Zhiwei, Z. Research on tobacco foreign body detection device based on machine vision. Trans. Inst. Meas. Control 2020, 42, 2857–2871. [Google Scholar] [CrossRef]
- De Carvalho, L.C.; Pereira, F.M.V.; de Morais, C.D.L.M.; de Lima, K.M.G.; de Almeida Teixeira, G.H. Assessment of macadamia kernel quality defects by means of near infrared spectroscopy (NIRS) and nuclear magnetic resonance (NMR). Food Control 2019, 106, 106695. [Google Scholar] [CrossRef]
- Lu, Y.; Lu, R. Detection of surface and subsurface defects of apples using structured-illumination reflectance imaging with machine learning algorithms. Trans. ASABE 2018, 61, 1831–1842. [Google Scholar] [CrossRef]
- Ireri, D.; Belal, E.; Okinda, C.; Makange, N.; Ji, C. A computer vision system for defect discrimination and grading in tomatoes using machine learning and image processing. Artif. Intell. Agric. 2019, 2, 28–37. [Google Scholar] [CrossRef]
- Dhakshina Kumar, S.; Esakkirajan, S.; Bama, S.; Keerthiveena, B. A microcontroller based machine vision approach for tomato grading and sorting using SVM classifier. Microprocess. Microsyst. 2020, 76, 103090. [Google Scholar] [CrossRef]
- Chen, S.; Xiong, J.; Guo, W.; Bu, R.; Zheng, Z.; Chen, Y.; Yang, Z.; Lin, R. Colored rice quality inspection system using machine vision. J. Cereal Sci. 2019, 88, 87–95. [Google Scholar] [CrossRef]
- Khojastehnazhand, M.; Ramezani, H. Machine vision system for classification of bulk raisins using texture features. J. Food Eng. 2020, 271, 109864. [Google Scholar] [CrossRef]
- Lin, P.; Xiaoli, L.; Li, D.; Jiang, S.; Zou, Z.; Lu, Q.; Chen, Y. Rapidly and exactly determining postharvest dry soybean seed quality based on machine vision technology. Sci. Rep. 2019, 9, 1–11. [Google Scholar] [CrossRef]
- Ji, Y.; Zhao, Q.; Bi, S.; Shen, T. Apple Grading Method Based on Features of Color and Defect. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 5364–5368. [Google Scholar] [CrossRef]
- Zhang, W.; Zhu, Q.; Huang, M.; Guo, Y.; Qin, J. Detection and Classification of Potato Defects Using Multispectral Imaging System Based on Single Shot Method. Food Anal. Methods 2019, 12, 2920–2929. [Google Scholar] [CrossRef]
- Deng, L.; Du, H.; Han, Z. A carrot sorting system using machine vision technique. Appl. Eng. Agric. 2017, 33, 149–156. [Google Scholar] [CrossRef]
- Iraji, M.S. Comparison between soft computing methods for tomato quality grading using machine vision. J. Food Meas. Charact. 2019, 13, 1–15. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2012, 60, 84–90. [Google Scholar] [CrossRef]
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
- Da Costa, A.Z.; Figueroa, H.E.H.; Fracarolli, J.A. Computer vision based detection of external defects on tomatoes using deep learning. Biosyst. Eng. 2020, 190, 131–144. [Google Scholar] [CrossRef]
- Xu, X.; Zheng, H.; You, C.; Guo, Z.; Wu, X. Far-net: Feature-wise attention-based relation network for multilabel jujube defect classification. Sensors 2021, 21, 392. [Google Scholar] [CrossRef]
- Jahanbakhshi, A.; Momeny, M.; Mahmoudi, M.; Zhang, Y.D. Classification of sour lemons based on apparent defects using stochastic pooling mechanism in deep convolutional neural networks. Sci. Hortic. 2020, 263, 109133. [Google Scholar] [CrossRef]
- Zhang, J.; Cosma, G.; Watkins, J. Image Enhanced Mask R-CNN: A Deep Learning Pipeline with New Evaluation Measures for Wind Turbine Blade Defect Detection and Classification. J. Imaging 2021, 7, 46. [Google Scholar] [CrossRef]
- Duong, B.P.; Kim, J.Y.; Jeong, I.; Im, K.; Kim, C.H.; Kim, J.M. A Deep-Learning-Based Bearing Fault Diagnosis Using Defect Signature Wavelet Image Visualization. Appl. Sci. 2020, 10, 8800. [Google Scholar] [CrossRef]
- Zhuang, Z.; Liu, Y.; Ding, F.; Wang, Z. Online Color Classification System of Solid Wood Flooring Based on Characteristic Features. Sensors 2021, 21, 336. [Google Scholar] [CrossRef] [PubMed]
- Wan, X.; Zhang, X.; Liu, L. An improved VGG19 transfer learning strip steel surface defect recognition deep neural network based on few samples and imbalanced datasets. Appl. Sci. 2021, 11, 2606. [Google Scholar] [CrossRef]
- Wang, S.; Xia, X.; Ye, L.; Yang, B. Automatic detection and classification of steel surface defect using deep convolutional neural networks. Metals 2021, 11, 388. [Google Scholar] [CrossRef]
- Zhou, H.; Zhuang, Z.; Liu, Y.; Liu, Y.; Zhang, X. Defect Classification of Green Plums Based on Deep Learning. Sensors 2020, 20, 6993. [Google Scholar] [CrossRef] [PubMed]
- Li, C.H.; Lee, C.K. Minimum cross entropy thresholding. Pattern Recognit. 1993, 26, 617–625. [Google Scholar] [CrossRef]
- Li, C.H.; Tam, P. An iterative algorithm for minimum cross entropy thresholding. Pattern Recognit. Lett. 1998, 19, 771–776. [Google Scholar] [CrossRef]
- Zafari, S.; Eerola, T.; Sampo, J.; Kälviäinen, H.; Haario, H. Segmentation of partially overlapping nanoparticles using concave points. Lect. Notes Comput. Sci. 2015, 9474, 187–197. [Google Scholar] [CrossRef]
- He, X.C.; Yung, N.H.C. Curvature scale space corner detector with adaptive threshold and dynamic region of support. Proc. Int. Conf. Pattern Recognit. 2004, 2, 791–794. [Google Scholar] [CrossRef] [Green Version]
- Fitzgibbon, A.; Pilu, M.; Fisher, R.B. Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 476–480. [Google Scholar] [CrossRef] [Green Version]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; Volume 9, pp. 248–255. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; Volume 45, pp. 770–778. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar] [CrossRef] [Green Version]
- Tan, M.; Chen, B.; Pang, R.; Vasudevan, V.; Sandler, M.; Howard, A.; Le, Q.V. MnasNet: Platform-Aware Neural Architecture Search for Mobile. In Proceedings of the 2019 Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; Available online: https://arxiv.org/abs/1807.11626 (accessed on 29 May 2019).
- Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust Biometric Recognition from Palm Depth Images for Gloved Hands. IEEE Trans. Hum. Mach. Syst. 2017, 45, 799–804. [Google Scholar] [CrossRef]
- Wajeed, M.A.; Adilakshmi, T. Semi-supervised text classification using enhanced KNN algorithm. In Proceedings of the 2011 World Congress on Information and Communication Technologies, Mumbai, India, 11–14 December 2011; pp. 138–142. [Google Scholar] [CrossRef]
- Breiman, L. Random forest. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
- Nguyen, B.P.; Nguyen, Q.H.; Doan-Ngoc, G.N.; Nguyen-Vo, T.H.; Rahardja, S. iProDNA-CapsNet: Identifying Protein-DNA binding residues using capsule neural networks. BMC Bioinform. 2019, 20 (Suppl. S23), 1–12. [Google Scholar] [CrossRef]
Subject | Good (label 00) | Defective: Holey (01) | Defective: Broken (10) | Defective: Small (11) | Total
---|---|---|---|---|---
Training set | 2528 | 2536 | 940 | 1128 | 7132
Validation set | 135 | 136 | 50 | 60 | 381
Testing set | 137 | 137 | 52 | 62 | 388
Total | 2800 | 2809 | 1042 | 1250 | 7901
Network Model | Accuracy | Speed (fps)
---|---|---
Alexnet-based model | 97.68% | ~7.75 |
Googlenet-based model | 98.46% | ~1.79 |
VGG16-based model | 97.94% | ~0.45 |
Resnet-18-based model | 98.20% | ~2.54 |
Resnet-34-based model | 98.45% | ~1.52 |
Resnet-50-based model | 98.71% | ~0.75 |
MobileNetV2-based model | 98.20% | ~1.56 |
MnasNet0_5-based model | 84.28% | ~4.22 |
Model | Class | Good | Holey | Broken | Small
---|---|---|---|---|---
Alexnet-based model | Good | 98.54% | 1.46% | 0 | 0
 | Holey | 0.73% | 97.08% | 2.19% | 0
 | Broken | 0 | 1.92% | 96.16% | 1.92%
 | Small | 0 | 0 | 1.61% | 98.39%
Googlenet-based model | Good | 100% | 0 | 0 | 0
 | Holey | 0.73% | 97.81% | 1.46% | 0
 | Broken | 0 | 1.92% | 94.23% | 3.85%
 | Small | 0 | 0 | 0 | 100%
VGG16-based model | Good | 99.27% | 0.73% | 0 | 0
 | Holey | 0.73% | 96.35% | 2.92% | 0
 | Broken | 0 | 1.92% | 96.15% | 1.92%
 | Small | 0 | 0 | 0 | 100%
Resnet-18-based model | Good | 99.27% | 0.73% | 0 | 0
 | Holey | 0.73% | 97.08% | 2.19% | 0
 | Broken | 0 | 1.92% | 96.15% | 1.92%
 | Small | 0 | 0 | 0 | 100%
Resnet-34-based model | Good | 100% | 0 | 0 | 0
 | Holey | 0.73% | 97.08% | 2.19% | 0
 | Broken | 0 | 1.92% | 96.15% | 1.92%
 | Small | 0 | 0 | 0 | 100%
Resnet-50-based model | Good | 100% | 0 | 0 | 0
 | Holey | 0.73% | 97.08% | 2.19% | 0
 | Broken | 0 | 0 | 98.08% | 1.92%
 | Small | 0 | 0 | 0 | 100%
MobileNetV2-based model | Good | 99.27% | 0.73% | 0 | 0
 | Holey | 0.73% | 97.08% | 2.19% | 0
 | Broken | 0 | 1.92% | 96.15% | 1.92%
 | Small | 0 | 0 | 0 | 100%
MnasNet0_5-based model | Good | 93.43% | 1.46% | 0.73% | 4.38%
 | Holey | 5.11% | 83.94% | 8.76% | 2.19%
 | Broken | 5.77% | 13.46% | 42.31% | 38.46%
 | Small | 0 | 0 | 0 | 100%