Comparing Performances of CNN, BP, and SVM Algorithms for Differentiating Sweet Pepper Parts for Harvest Automation
Abstract
1. Introduction
- To recognize the shape of sweet peppers in a farm environment and develop an image processing algorithm that classifies each plant part for the Plenty (red sweet pepper), President (yellow sweet pepper), and Derby (orange sweet pepper) varieties.
- To classify sweet pepper parts by segmenting the target area with the NDVI (Normalized Difference Vegetation Index), k-means clustering, and morphological skeletonization, then extracting local features with SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features).
- To evaluate the performance of the developed BP (Backpropagation) and SVM (Support Vector Machine) algorithms in classifying each part (leaf, node, stem, and fruit), and to compare them against a deep neural network.
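The NDVI-then-cluster pipeline in the second objective can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the synthetic band values, the 0.3 vegetation threshold, and the plain Lloyd's k-means below are illustrative assumptions standing in for the paper's calibrated camera bands and segmentation settings.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """NDVI = (NIR - Red) / (NIR + Red); high for vegetation, low for soil/background."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means on an (N, D) array; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old centroid if a cluster empties
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Tiny synthetic 2-band image: top row is vegetation (bright NIR), bottom row is soil.
nir = np.array([[200.0, 210.0], [60.0, 55.0]])
red = np.array([[40.0, 35.0], [70.0, 80.0]])
v = ndvi(nir, red)
mask = v > 0.3                                # illustrative vegetation threshold

# Cluster the NDVI values instead of hand-thresholding.
pix = v.reshape(-1, 1)
labels, cents = kmeans(pix, 2)
veg_cluster = int(cents.ravel().argmax())     # cluster with the higher mean NDVI
print(mask, labels, veg_cluster)
```

Clustering the NDVI values rather than thresholding them avoids picking a cut-off by hand, which is one motivation for combining the index with k-means in the paper's pre-processing stage.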
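The BP classifier in the third objective is a feed-forward network trained by backpropagation. A minimal full-batch training loop can be sketched as follows; the two Gaussian clusters standing in for SIFT/SURF descriptors, the 8-unit hidden layer, and the learning rate are all illustrative assumptions, not the network or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for local feature descriptors: one 2-D Gaussian cluster per class.
x0 = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(50, 2))
x1 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(50), np.ones(50)])

# One hidden layer; weights updated by plain backpropagation (full-batch descent).
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    h = np.tanh(X @ W1 + b1)                 # forward: hidden activations
    p = sigmoid(h @ W2 + b2).ravel()         # forward: class-1 probability
    g = (p - y)[:, None] / len(y)            # dLoss/dlogit for mean cross-entropy
    gW2 = h.T @ g                            # backprop into output weights
    gb2 = g.sum(axis=0)
    gh = g @ W2.T                            # backprop into hidden layer
    gz = gh * (1.0 - h ** 2)                 # through the tanh nonlinearity
    gW1 = X.T @ gz
    gb1 = gz.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

acc = float(((p > 0.5) == y).mean())
print("training accuracy:", acc)
```

On such well-separated clusters the loop converges quickly; the paper's comparison question is how this kind of shallow BP network fares against an SVM and a deep CNN on real, overlapping part descriptors.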
2. Materials and Methods
2.1. Hardware Composition
2.2. Sweet Pepper Part Classification Process
2.2.1. Pre-Processing for Differentiation of Stems and Leaves from Backgrounds Using the NDVI
2.2.2. Image Segmentation by K-Means Clustering
2.2.3. Extracting Local Features
2.2.4. Local Features Classification Using the SVM Algorithm and BP Algorithm
2.2.5. Partial Classification Performance Experiment Using Deep Neural Network
3. Results and Discussion
3.1. Part Classification Results
3.1.1. Results of Pre-Process for Differentiation of Stems, Leaves, and Backgrounds Using NDVI
3.1.2. Image Segmentation Results by K-Means Clustering
3.1.3. Results of Local Feature Extraction
3.1.4. Classification Results of Local Features Using SVM and BP Algorithms
3.1.5. Results of Local Feature Performance Experiment Using the Deep Neural Network
3.2. Results of Comparing Classification Performances between BP and CNN for Sweet Pepper Parts
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Model | Imaging Sensor | Resolution | Operating Temperature | Operating Humidity |
---|---|---|---|---|
Blackfly BFLY-U3-13S2C-CS | Sony ICX445, 1/3”, 3.75 µm | 1288 × 964 (30 FPS) | 0–45 °C | 20–80% |
Model | Useful Range | FWHM (Full Width at Half Maximum) | Tolerance | Peak Transmission |
---|---|---|---|---|
BP470 | 435−495 nm | 85 nm | +/−10 nm | >90% |
BP500 | 440−555 nm | 248 nm | +/−10 nm | >85% |
BP505 | 485−550 nm | 90 nm | +/−10 nm | >90% |
BP525 | 500−555 nm | 80 nm | +/−10 nm | >90% |
BP635 | 610−650 nm | 65 nm | +/−10 nm | >90% |
BP660 | 640−680 nm | 65 nm | +/−10 nm | >90% |
BP695 | 680−720 nm | 65 nm | +/−10 nm | >90% |
BP735 | 715−780 nm | 90 nm | +/−10 nm | >90% |
BP800 | 745−950 nm | 315 nm | +/−10 nm | >90% |
Wavelength | 695 nm Mean | 695 nm SD | 735 nm Mean | 735 nm SD | 800 nm Mean | 800 nm SD |
---|---|---|---|---|---|---|
470 nm | 8.33309 | 0.0229 | 5.46441 | 0.0949 | 8.62191 | 0.0339 |
500 nm | 8.20943 | 0.0294 | 6.51014 | 0.0837 | 7.81063 | 0.0306 |
505 nm | 8.30828 | 0.0118 | 7.75226 | 0.0999 | 8.68881 | 0.0673 |
525 nm | 8.2368 | 0.0971 | 7.57739 | 0.0283 | 7.53663 | 0.0667 |
635 nm | 7.98845 | 0.0248 | 7.62452 | 0.0393 | 7.66518 | 0.0538 |
660 nm | 7.84794 | 0.0298 | 8.71625 | 0.0315 | 7.70154 | 0.0178 |
Parts | Actual Number | Segmented Number |
---|---|---|
Fruit | 682 | 676 |
Sucker | 345 | 308 |
Leaf | 8169 | 5981 |
Node | 11,856 | 10,435 |
Predicted \ True Condition | Fruit | Node | Leaf | Sucker |
---|---|---|---|---|
Fruit | 75 | 0 | 25 | 0 |
Node | 0 | 68 | 11 | 21 |
Leaf | 27 | 3 | 60 | 10 |
Sucker | 0 | 40 | 8 | 52 |
Part | Training Features | Test Features |
---|---|---|
Fruit | 694 | 6250 |
Node | 10,720 | 96,478 |
Leaf | 6144 | 55,298 |
Sucker | 317 | 2849 |
Total | 17,875 | 160,875 |
Predicted \ True Condition | Fruit | Node | Leaf | Sucker |
---|---|---|---|---|
Fruit | 6616 | 1 | 325 | 2 |
Node | 102 | 103,196 | 3758 | 142 |
Leaf | 154 | 2416 | 58,718 | 154 |
Sucker | 7 | 115 | 53 | 2991 |
Predicted \ True Condition | Fruit | Node | Leaf | Sucker |
---|---|---|---|---|
Fruit | 398 | 0 | 2 | 0 |
Node | 0 | 351 | 0 | 49 |
Leaf | 9 | 0 | 362 | 0 |
Sucker | 0 | 51 | 0 | 349 |
Classifier | Fruit (%) | Node (%) | Leaf (%) | Sucker (%) |
---|---|---|---|---|
BP algorithm | 94.44 | 84.73 | 69.97 | 84.34 |
CNN | 99.50 | 87.75 | 90.50 | 87.25 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lee, B.; Kam, D.; Cho, Y.; Kim, D.-C.; Lee, D.-H. Comparing Performances of CNN, BP, and SVM Algorithms for Differentiating Sweet Pepper Parts for Harvest Automation. Appl. Sci. 2021, 11, 9583. https://doi.org/10.3390/app11209583