Research on Red Jujubes Recognition Based on a Convolutional Neural Network
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Dataset
2.2. Study Process
2.3. Training and Testing Data
2.4. Detection Algorithms Used
- (1) Input layer and convolution layer: The input layer receives the original image, while the convolution layer extracts the image's feature information by traversing the image according to the size of the selected convolution kernel and summarizing the results. The convolution layer formula is shown in Equation (1), where $x_j^l$ represents the $j$th feature map of the $l$th convolution layer; $k_{ij}^l$ represents the convolution kernel matrix of the $l$th layer; $M^{l-1}$ represents the set of feature maps of the $(l-1)$th layer; $b_j^l$ represents the network bias parameter; and $f$ represents the activation function. The convolution kernel used for feature extraction is one of the main parameters of a CNN model and directly affects the model's feature-extraction performance. The activation function defines a nonlinear mapping of the data so that the CNN can better overcome insufficient feature-expression ability. Commonly used activation functions include sigmoid, tanh, and ReLU.
- (2) Pooling layer: The pooling layer is mainly used for downsampling, i.e., reducing the amount of data reasonably according to the detection characteristics, which lowers the computational cost and controls overfitting to a certain extent. Its calculation takes the same form as that of the convolution layer.
- (3) Fully connected layer and output layer: The fully connected layer transforms the two-dimensional feature maps output by the convolution layers into a one-dimensional vector, realizing the end-to-end learning process. Each node of the fully connected layer is connected to all nodes of the previous layer, hence the name. Its single-layer computation is shown in Equation (2), where $M$ represents the output size of the previous layer and $F$ represents the size of the current layer's convolution kernel. Its calculation formula is presented in Equation (3), where $w^l$ represents the weight of fully connected layer $l$; $b^l$ is the bias parameter of fully connected layer $l$; and $x^{l-1}$ represents the output feature map of the previous layer. After the convolutional layers complete multi-layer feature extraction, the output layer acts as a classifier to predict the category of each input sample.
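Equations (1) and (3) are not reproduced in this excerpt; based on the variable definitions given above, they presumably take the standard forms:

```latex
% Equation (1): convolution layer -- jth feature map of layer l,
% summing convolutions over the selected input maps M^{l-1}
x_j^l = f\Big( \sum_{i \in M^{l-1}} x_i^{l-1} * k_{ij}^l + b_j^l \Big)

% Equation (3): fully connected layer l
x^l = f\big( w^l x^{l-1} + b^l \big)
```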
2.4.1. Faster R-CNN
2.4.2. YOLOV5
2.4.3. AlexNet
2.4.4. HOG + SVM
2.5. Evaluation Indicators
2.6. Efficiency of Detection Methods
3. Results
3.1. Model Training
3.2. Model Results
3.3. The Prediction Results of the Model
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Lai, J.; Li, Y.; Chen, J.; Niu, G.Y.; Lin, P.; Li, Q.; Wang, L.; Han, J.; Luo, Z.; Sun, Y. Massive crop expansion threatens agriculture and water sustainability in northwestern China. Environ. Res. Lett. 2022, 17, 3. [Google Scholar] [CrossRef]
- Meng, X.; Yuan, Y.; Teng, G.; Liu, T. Deep learning for fine-grained classification of jujube fruit in the natural environment. Food Meas. 2021, 15, 4150–4165. [Google Scholar] [CrossRef]
- Liu, M.; Li, C.; Cao, C.; Wang, L.; Li, X.; Che, J.; Yang, H.; Zhang, X.; Zhao, H.; He, G.; et al. Walnut Fruit Processing Equipment: Academic Insights and Perspectives. Food Eng. Rev. 2021, 13, 822–857. [Google Scholar] [CrossRef]
- Yao, S. Past, Present, and Future of Jujubes—Chinese Dates in the United States. HortScience Horts. 2013, 48, 672–680. [Google Scholar] [CrossRef]
- Wang, X.; Shen, L.; Liu, T.; Wei, W.; Zhang, S.; Li, L.; Zhang, W. Microclimate, yield, and income of a jujube–cotton agroforestry system in Xinjiang, China. Ind. Crops Prod. 2022, 182, 114941. [Google Scholar] [CrossRef]
- Shahrajabian, M.H.; Sun, W.; Cheng, Q. Chinese jujube (Ziziphus jujuba Mill.): A promising fruit from Traditional Chinese Medicine. Ann. Univ. Paedagog. Crac. Stud. Nat. 2020, 5, 194–219. [Google Scholar]
- Wang, S.; Sun, J.; Fu, L.; Xu, M.; Tang, N.; Cao, Y.; Yao, K.; Jing, J. Identification of red jujube varieties based on hyperspectral imaging technology combined with CARS-IRIV and SSA-SVM. J. Food Process Eng. 2022, 45, e14137. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, L.; Tuerxun, N.; Luo, L.; Han, C.; Zheng, J. Extraction of Jujube Planting Areas in Sentinel-2 Image Based on NDVI Threshold—A case study of Ruoqiang County. In Proceedings of the 29th International Conference on Geoinformatics, Beijing, China, 15–18 August 2022; pp. 1–6. [Google Scholar]
- Alharbi, A.G.; Arif, M. Detection and Classification of Apple Diseases using Convolutional Neural Networks. In Proceedings of the 2020 2nd International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 13–15 October 2020; pp. 1–6. [Google Scholar]
- Bhatt, P.; Maclean, A.L. Comparison of high-resolution NAIP and unmanned aerial vehicle (UAV) imagery for natural vegetation communities classification using machine learning approaches. GIScience Remote Sens. 2023, 60, 2177448. [Google Scholar] [CrossRef]
- Xu, B.; Chai, L.; Zhang, C. Research and application on corn crop identification and positioning method based on Machine vision. Inf. Process. Agric. 2023, 10, 106–113. [Google Scholar] [CrossRef]
- Chandel, N.S.; Rajwade, Y.A.; Dubey, K.; Chandel, A.K.; Subeesh, A.; Tiwari, M.K. Water Stress Identification of Winter Wheat Crop with State-of-the-Art AI Techniques and High-Resolution Thermal-RGB Imagery. Plants 2022, 11, 3344. [Google Scholar] [CrossRef]
- Khan, H.R.; Gillani, Z.; Jamal, M.H.; Athar, A.; Chaudhry, M.T.; Chao, H.; He, Y.; Chen, M. Early Identification of Crop Type for Smallholder Farming Systems Using Deep Learning on Time-Series Sentinel-2 Imagery. Sensors 2023, 23, 1779. [Google Scholar] [CrossRef] [PubMed]
- Mirbod, O.; Choi, D.; Heinemann, P.H.; Marini, R.P.; He, L. On-tree apple fruit size estimation using stereo vision with deep learning-based occlusion handling. Biosyst. Eng. 2023, 226, 27–42. [Google Scholar] [CrossRef]
- Wang, Q.; Qi, F. Tomato Diseases Recognition Based on Faster RCNN. In Proceedings of the 2019 10th International Conference on Information Technology in Medicine and Education (ITME), Qingdao, China, 23–25 August 2019; pp. 772–776. [Google Scholar]
- Velumani, K.; Lopez-Lozano, R.; Madec, S.; Guo, W.; Gillet, J.; Comar, A.; Baret, F. Estimates of Maize Plant Density from UAV RGB Images Using Faster-RCNN Detection Model: Impact of the Spatial Resolution. Plant Phenomics 2021, 2021, 9824843. [Google Scholar] [CrossRef] [PubMed]
- Alruwaili, M.; Siddiqi, M.H.; Khan, A.; Azad, M.; Khan, A.; Alanazi, S. RTF-RCNN: An Architecture for Real-Time Tomato Plant Leaf Diseases Detection in Video Streaming Using Faster-RCNN. Bioengineering 2022, 9, 565. [Google Scholar] [CrossRef] [PubMed]
- Lutfi, M.; Rizal, H.S.; Hasyim, M.; Amrulloh, M.F.; Saadah, Z.N. Feature Extraction and Naïve Bayes Algorithm for Defect Classification of Manalagi Apples. J. Phys. Conf. Ser. 2022, 2394, 012014. [Google Scholar] [CrossRef]
- Yang, Q.; Duan, S.; Wang, L. Efficient Identification of Apple Leaf Diseases in the Wild Using Convolutional Neural Networks. Agronomy 2022, 12, 2784. [Google Scholar] [CrossRef]
- Hao, Q.; Guo, X.; Yang, F. Fast Recognition Method for Multiple Apple Targets in Complex Occlusion Environment Based on Improved YOLOv5. J. Sens. 2023, 2023, 3609541. [Google Scholar] [CrossRef]
- Liu, M.; Wang, J.; Wang, L.; Liu, P.; Zhao, J.; Zhao, Z.; Yao, S.; Stănică, F.; Liu, Z.; Wang, L.; et al. The historical and current research progress on jujube–a superfruit for the future. Hortic. Res. 2020, 7, 119. [Google Scholar] [CrossRef]
- Liu, Y.; Lei, X.; Deng, B.; Chen, O.; Deng, L.; Zeng, K. Methionine enhances disease resistance of jujube fruit against postharvest black spot rot by activating lignin biosynthesis. Postharvest Biol. Technol. 2022, 190, 111935. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef] [PubMed]
- Liao, X.; Zeng, X. Review of Target Detection Algorithm Based on Deep Learning. In Proceedings of the 2020 International Conference on Artificial Intelligence and Communication Technology (AICT 2020), Chongqing, China, 28–29 March 2020; p. 5. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Li, H.; Ji, Y.; Gong, Z.; Qu, S. Two-stage stochastic minimum cost consensus models with asymmetric adjustment costs. Inf. Fusion 2021, 71, 77–96. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 6. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005. [Google Scholar]
- Li, Q.; Qu, G.; Li, Z. Matching between SAR images and optical images based on HOG descriptor. IET Int. Radar Conf. 2013, 2013, 1–4. [Google Scholar]
- Bedo, J.; Macintyre, G.; Haviv, I.; Kowalczyk, A. Simple SVM based whole-genome segmentation. Nat. Prec. 2009. [Google Scholar] [CrossRef]
- Lin, T.-Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common Objects in Context. Eur. Conf. Comput. Vis. 2014, 8693, 740–755. [Google Scholar]
- Cemil, Z. A Review of COVID-19 Diagnostic Approaches in Computer Vision. Curr. Med. Imaging 2023, 19, 695–712. [Google Scholar]
- Xu, M.; Yoon, S.; Fuentes, A.; Park, D.S. A Comprehensive Survey of Image Augmentation Techniques for Deep Learning. Pattern Recognit. 2023, 137, 109347. [Google Scholar] [CrossRef]
- Lu, Y.; Chen, D.; Olaniyi, E.; Huang, Y. Generative adversarial networks (GANs) for image augmentation in agriculture: A systematic review. Comput. Electron. Agric. 2022, 200, 107208. [Google Scholar] [CrossRef]
- Sengupta, S.; Lee, W.S. Identification and determination of the number of immature green citrus fruit in a canopy under different ambient light conditions. Biosyst. Eng. 2014, 117, 51–61. [Google Scholar] [CrossRef]
- Wang, R.Q.; Zhu, F.; Zhang, X.Y.; Liu, C.L. Training with scaled logits to alleviate class-level over-fitting in few-shot learning. Neurocomputing 2023, 522, 142–151. [Google Scholar] [CrossRef]
- Aversano, L.; Bernardi, M.L.; Cimitile, M.; Pecori, R. Deep neural networks ensemble to detect COVID-19 from CT scans. Pattern Recognit. 2021, 120, 108135. [Google Scholar] [CrossRef] [PubMed]
- He, R.; Xiao, Y.; Lu, X.; Zhang, S.; Liu, Y. ST-3DGMR: Spatio-temporal 3D grouped multiscale ResNet network for region-based urban traffic flow prediction. Inf. Sci. 2023, 624, 68–93. [Google Scholar] [CrossRef]
- Song, H.M.; Woo, J.; Kim, H.K. In-vehicle network intrusion detection using deep convolutional neural network. Veh. Commun. 2020, 21, 100198. [Google Scholar] [CrossRef]
| Software, Hardware/Systems | Configuration |
|---|---|
| System | Windows |
| CPU | Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz |
| GPU | GTX 1650 Ti |
| Development Language | Python 3.8 |
| Deep Learning Framework | torch 1.12.0 + tensorflow 2.3.1 |
| Accelerated Environment | CUDA 11.6 |
| Method | Average Training Time (s) | Total Training Time (s) | Fastest Testing Time (s) | Total Test Time (s) | Precision (%) |
|---|---|---|---|---|---|
| Faster R-CNN | 8.37 | 41,846 | 1.7 | 3051 | 100 |
| YOLOV5 | 189 | 9450 | 0.2 | 339 | 100 |
| HOG + SVM | 822 | 822 | 0.09 | 102 | 93.55 |
| AlexNet | 162 | 16,200 | 2.8 | 4294 | 86 |
| Method | Precision | Recall | F1 Score |
|---|---|---|---|
| Faster R-CNN | 100% | 99.65% | 99.82% |
| YOLOV5 | 100% | 97.17% | 98.56% |
| HOG + SVM | 93.55% | 82.79% | 87.84% |
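The F1 scores in the table are consistent with the harmonic mean of precision and recall, F1 = 2PR/(P + R). A quick check, using the precision and recall values from the table above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall pairs from the comparison table, as fractions.
results = {
    "Faster R-CNN": (1.0000, 0.9965),
    "YOLOV5": (1.0000, 0.9717),
    "HOG + SVM": (0.9355, 0.8279),
}

for method, (p, r) in results.items():
    # Reproduces the F1 column: 99.82%, 98.56%, 87.84%
    print(f"{method}: F1 = {f1_score(p, r) * 100:.2f}%")
```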
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, J.; Wu, C.; Guo, H.; Bai, T.; He, Y.; Li, X. Research on Red Jujubes Recognition Based on a Convolutional Neural Network. Appl. Sci. 2023, 13, 6381. https://doi.org/10.3390/app13116381