Fruit Detection and Recognition Based on Deep Learning for Automatic Harvesting: An Overview and Review
Abstract
1. Introduction
2. Fruit Detection and Recognition Based on DL
2.1. Single-Stage Fruit Detection and Recognition Methods Based on Regression
2.1.1. Fruit Detection and Recognition Methods Based on YOLO
2.1.2. Fruit Detection and Recognition Methods Based on SSD
2.2. Two-Stage Fruit Detection and Recognition Methods Based on Candidate Regions
2.2.1. Fruit Detection and Recognition Methods Based on AlexNet, VGGNet, and ResNet
2.2.2. Fruit Detection and Recognition Methods Based on R-CNN, Fast R-CNN, and Faster R-CNN
2.2.3. Fruit Detection and Recognition Methods Based on FCN, SegNet, and Mask R-CNN
3. Discussion
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Brown, J.; Sukkarieh, S. Design and Evaluation of a Modular Robotic Plum Harvesting System Utilizing Soft Components. J. Field Robot. 2021, 38, 289–306.
- Yan, B.; Fan, P.; Lei, X.; Liu, Z.; Yang, F. A Real-Time Apple Targets Detection Method for Picking Robot Based on Improved YOLOv5. Remote Sens. 2021, 13, 1619.
- He, L.; Fu, H.; Karkee, M.; Zhang, Q. Effect of Fruit Location on Apple Detachment with Mechanical Shaking. Biosyst. Eng. 2017, 157, 63–71.
- Ji, W.; Zhao, D.; Cheng, F.; Xu, B.; Zhang, Y.; Wang, J. Automatic Recognition Vision System Guided for Apple Harvesting Robot. Comput. Electr. Eng. 2012, 38, 1186–1195.
- Zhao, D.; Lv, J.; Ji, W.; Zhang, Y.; Chen, Y. Design and Control of an Apple Harvesting Robot. Biosyst. Eng. 2011, 110, 112–122.
- Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a Sweet Pepper Harvesting Robot. J. Field Robot. 2020, 37, 1027–1039.
- Lehnert, C.; English, A.; McCool, C.; Tow, A.W.; Perez, T. Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robot. Autom. Lett. 2017, 2, 872–879.
- Bac, C.W.; Hemming, J.; Van Henten, E.J. Stem Localization of Sweet-Pepper Plants Using the Support Wire as a Visual Cue. Comput. Electron. Agric. 2014, 105, 111–120.
- Xiong, Y.; Ge, Y.; Grimstad, L.; From, P.J. An Autonomous Strawberry-Harvesting Robot: Design, Development, Integration, and Field Evaluation. J. Field Robot. 2020, 37, 202–224.
- Xiong, Y.; Peng, C.; Grimstad, L.; From, P.J.; Isler, V. Development and Field Evaluation of a Strawberry Harvesting Robot with a Cable-Driven Gripper. Comput. Electron. Agric. 2019, 157, 392–402.
- Hayashi, S.; Shigematsu, K.; Yamamoto, S.; Kobayashi, K.; Kohno, Y.; Kamata, J.; Kurita, M. Evaluation of a Strawberry-Harvesting Robot in a Field Test. Biosyst. Eng. 2010, 105, 160–171.
- Xiong, J.; He, Z.; Lin, R.; Liu, Z.; Bu, R.; Yang, Z.; Peng, H.; Zou, X. Visual Positioning Technology of Picking Robots for Dynamic Litchi Clusters with Disturbance. Comput. Electron. Agric. 2018, 151, 226–237.
- Feng, Q.; Zou, W.; Fan, P.; Zhang, C.; Wang, X. Design and Test of Robotic Harvesting System for Cherry Tomato. Int. J. Agric. Biol. Eng. 2018, 11, 96–100.
- Kondo, N.; Yata, K.; Iida, M.; Shiigi, T.; Monta, M.; Kurita, M.; Omori, H. Development of an End-Effector for a Tomato Cluster Harvesting Robot. Eng. Agric. Environ. Food 2010, 3, 20–24.
- Williams, H.A.M.; Jones, M.H.; Nejati, M.; Seabright, M.J.; Bell, J.; Penhall, N.D.; Barnett, J.J.; Duke, M.D.; Scarfe, A.J.; Ahn, H.S.; et al. Robotic Kiwifruit Harvesting Using Machine Vision, Convolutional Neural Networks, and Robotic Arms. Biosyst. Eng. 2019, 181, 140–156.
- Xiao, F.; Wang, H.; Li, Y.; Cao, Y.; Lv, X.; Xu, G. Object Detection and Recognition Techniques Based on Digital Image Processing and Traditional Machine Learning for Fruit and Vegetable Harvesting Robots: An Overview and Review. Agronomy 2023, 13, 639.
- Fu, L.; Gao, F.; Wu, J.; Li, R.; Karkee, M.; Zhang, Q. Application of Consumer RGB-D Cameras for Fruit Detection and Localization in Field: A Critical Review. Comput. Electron. Agric. 2020, 177, 105687.
- Okamoto, H.; Lee, W.S. Green Citrus Detection Using Hyperspectral Imaging. Comput. Electron. Agric. 2009, 66, 201–208.
- Wachs, J.P.; Stern, H.I.; Burks, T.; Alchanatis, V. Low and High-Level Visual Feature-Based Apple Detection from Multi-Modal Images. Precis. Agric. 2010, 11, 717–735.
- Rehman, T.U.; Mahmud, M.S.; Chang, Y.K.; Jin, J.; Shin, J. Current and Future Applications of Statistical Machine Learning Algorithms for Agricultural Machine Vision Systems. Comput. Electron. Agric. 2019, 156, 585–605.
- Patrício, D.I.; Rieder, R. Computer Vision and Artificial Intelligence in Precision Agriculture for Grain Crops: A Systematic Review. Comput. Electron. Agric. 2018, 153, 69–81.
- Yandun Narvaez, F.; Reina, G.; Torres-Torriti, M.; Kantor, G.; Cheein, F.A. A Survey of Ranging and Imaging Techniques for Precision Agriculture Phenotyping. IEEE/ASME Trans. Mechatron. 2017, 22, 2428–2439.
- Jha, K.; Doshi, A.; Patel, P.; Shah, M. A Comprehensive Review on Automation in Agriculture Using Artificial Intelligence. Artif. Intell. Agric. 2019, 2, 1–12.
- Wolfert, S.; Ge, L.; Verdouw, C.; Bogaardt, M.J. Big Data in Smart Farming—A Review. Agric. Syst. 2017, 153, 69–80.
- Saleem, M.H.; Potgieter, J.; Arif, K.M. Plant Disease Detection and Classification by Deep Learning. Plants 2019, 8, 468.
- Wang, D.; Vinson, R.; Holmes, M.; Seibel, G.; Bechar, A.; Nof, S.; Tao, Y. Early Detection of Tomato Spotted Wilt Virus by Hyperspectral Imaging and Outlier Removal Auxiliary Classifier Generative Adversarial Nets (OR-AC-GAN). Sci. Rep. 2019, 9, 4377.
- Wang, A.; Zhang, W.; Wei, X. A Review on Weed Detection Using Ground-Based Machine Vision and Image Processing Techniques. Comput. Electron. Agric. 2019, 158, 226–240.
- Lv, J.; Xu, H.; Xu, L.; Zou, L.; Rong, H.; Yang, B.; Niu, L.; Ma, Z. Recognition of Fruits and Vegetables with Similar-Color Background in Natural Environment: A Survey. J. Field Robot. 2022, 39, 888–904.
- Li, Y.; Feng, Q.; Li, T.; Xie, F.; Liu, C.; Xiong, Z. Advance of Target Visual Information Acquisition Technology for Fresh Fruit Robotic Harvesting: A Review. Agronomy 2022, 12, 1336.
- Aslam, F.; Khan, Z.; Tahir, A.; Parveen, K.; Albasheer, F.O.; Ul Abrar, S.; Khan, D.M. A Survey of Deep Learning Methods for Fruit and Vegetable Detection and Yield Estimation. In Big Data Analytics and Computational Intelligence for Cybersecurity, 2nd ed.; Ouaissa, M., Boulouard, Z., Ouaissa, M., Khan, I.U., Kaosar, M., Eds.; Springer: Cham, Switzerland, 2022; Volume 111, pp. 299–323.
- Li, Z.; Yuan, X.; Wang, C. A Review on Structural Development and Recognition-Localization Methods for End-Effector of Fruit-Vegetable Picking Robots. Int. J. Adv. Robot. Syst. 2022, 19, 17298806221104906.
- Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646.
- Maheswari, P.; Raja, P.; Apolo-Apolo, O.E.; Pérez-Ruiz, M. Intelligent Fruit Yield Estimation for Orchards Using Deep Learning Based Semantic Segmentation Techniques: A Review. Front. Plant Sci. 2021, 12, 684328.
- Bhargava, A.; Bansal, A. Fruits and Vegetables Quality Evaluation Using Computer Vision: A Review. J. King Saud Univ. Comput. Inf. Sci. 2021, 33, 243–257.
- Saleem, M.H.; Potgieter, J.; Arif, K.M. Automation in Agriculture by Machine and Deep Learning Techniques: A Review of Recent Developments. Precis. Agric. 2021, 22, 2053–2091.
- Tang, Y.; Chen, M.; Wang, C.; Luo, L.; Li, J.; Lian, G.; Zou, X. Recognition and Localization Methods for Vision-Based Fruit Picking Robots: A Review. Front. Plant Sci. 2020, 11, 510.
- Jia, W.; Zhang, Y.; Lian, J.; Zheng, Y.; Zhao, D.; Li, C. Apple Harvesting Robot under Information Technology: A Review. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420925310.
- Tripathi, M.K.; Maktedar, D.D. A Role of Computer Vision in Fruits and Vegetables among Various Horticulture Products of Agriculture Fields: A Survey. Inf. Process. Agric. 2020, 7, 183–203.
- Naranjo-Torres, J.; Mora, M.; Hernández-García, R.; Barrientos, R.J.; Fredes, C.; Valenzuela, A. A Review of Convolutional Neural Network Applied to Fruit Image Processing. Appl. Sci. 2020, 10, 3443.
- Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep Learning-Method Overview and Review of Use for Fruit Detection and Yield Estimation. Comput. Electron. Agric. 2019, 162, 219–234.
- Zhu, N.; Liu, X.; Liu, Z.; Hu, K.; Wang, Y.; Tan, J.; Huang, M.; Zhu, Q.; Ji, X.; Jiang, Y.; et al. Deep Learning for Smart Agriculture: Concepts, Tools, Applications, and Opportunities. Int. J. Agric. Biol. Eng. 2018, 11, 32–44.
- Martín-Martín, A.; Orduna-Malea, E.; Thelwall, M.; Delgado López-Cózar, E. Google Scholar, Web of Science, and Scopus: A Systematic Comparison of Citations in 252 Subject Categories. J. Informetr. 2018, 12, 1160–1177.
- Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507.
- LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324.
- Jahanbakhshi, A.; Momeny, M.; Mahmoudi, M.; Zhang, Y.D. Classification of Sour Lemons Based on Apparent Defects Using Stochastic Pooling Mechanism in Deep Convolutional Neural Networks. Sci. Hortic. 2020, 263, 109133.
- Sakib, S.; Ashrafi, Z.; Sidique, A.B. Implementation of Fruits Recognition Classifier Using Convolutional Neural Network Algorithm for Observation of Accuracies for Various Hidden Layers. arXiv 2019, arXiv:1904.00783.
- Chen, J.; Liu, H.; Zhang, Y.; Zhang, D.; Ouyang, H.; Chen, X. A Multiscale Lightweight and Efficient Model Based on YOLOv7: Applied to Citrus Orchard. Plants 2022, 11, 3260.
- Khosravi, H.; Saedi, S.I.; Rezaei, M. Real-Time Recognition of on-Branch Olive Ripening Stages by a Deep Convolutional Neural Network. Sci. Hortic. 2021, 287, 110252.
- Quiroz, I.A.; Alférez, G.H. Image Recognition of Legacy Blueberries in a Chilean Smart Farm through Deep Learning. Comput. Electron. Agric. 2020, 168, 105044.
- Barbedo, J.G.A. Impact of Dataset Size and Variety on the Effectiveness of Deep Learning and Transfer Learning for Plant Disease Classification. Comput. Electron. Agric. 2018, 153, 46–53.
- Ni, J.; Gao, J.; Li, J.; Yang, H.; Hao, Z.; Han, Z. E-AlexNet: Quality Evaluation of Strawberry Based on Machine Learning. Food Meas. 2021, 15, 4530–4541.
- Marani, R.; Milella, A.; Petitti, A.; Reina, G. Deep Neural Networks for Grape Bunch Segmentation in Natural Images from a Consumer-Grade Camera. Precis. Agric. 2021, 22, 387–413.
- Altaheri, H.; Alsulaiman, M.; Muhammad, G. Date Fruit Classification for Robotic Harvesting in a Natural Environment Using Deep Learning. IEEE Access 2019, 7, 117115–117133.
- Wang, D.; Li, C.; Song, H.; Xiong, H.; Liu, C.; He, D. Deep Learning Approach for Apple Edge Detection to Remotely Monitor Apple Growth in Orchards. IEEE Access 2020, 8, 26911–26925.
- Tu, S.; Pang, J.; Liu, H.; Zhuang, N.; Chen, Y.; Zheng, C.; Wan, H.; Xue, Y. Passion Fruit Detection and Counting Based on Multiple Scale Faster R-CNN Using RGB-D Images. Precis. Agric. 2020, 21, 1072–1091.
- Wang, P.; Niu, T.; He, D. Tomato Young Fruits Detection Method under Near Color Background Based on Improved Faster R-CNN with Attention Mechanism. Agriculture 2021, 11, 1059.
- Li, C.; Lin, J.; Li, B.; Zhang, S.; Li, J. Partition Harvesting of a Column-Comb Litchi Harvester Based on 3D Clustering. Comput. Electron. Agric. 2022, 197, 106975.
- Miao, Z.; Yu, X.; Li, N.; Zhang, Z.; He, C.; Li, Z.; Deng, C.; Sun, T. Efficient Tomato Harvesting Robot Based on Image Processing and Deep Learning. Precis. Agric. 2022, 24, 254–287.
- Vasconez, J.P.; Delpiano, J.; Vougioukas, S.; Auat Cheein, F. Comparison of Convolutional Neural Networks in Fruit Detection and Counting: A Comprehensive Evaluation. Comput. Electron. Agric. 2020, 173, 105348.
- Lin, G.; Tang, Y.; Zou, X.; Xiong, J.; Li, J. Guava Detection and Pose Estimation Using a Low-Cost RGB-D Sensor in the Field. Sensors 2019, 19, 428.
- Li, J.; Tang, Y.; Zou, X.; Lin, G.; Wang, H. Detection of Fruit-Bearing Branches and Localization of Litchi Clusters for Vision-Based Harvesting Robots. IEEE Access 2020, 8, 117746–117758.
- Majeed, Y.; Zhang, J.; Zhang, X.; Fu, L.; Karkee, M.; Zhang, Q.; Whiting, M.D. Deep Learning Based Segmentation for Automated Training of Apple Trees on Trellis Wires. Comput. Electron. Agric. 2020, 170, 105277.
- Xu, P.; Fang, N.; Liu, N.; Lin, F.; Yang, S.; Ning, J. Visual Recognition of Cherry Tomatoes in Plant Factory Based on Improved Deep Instance Segmentation. Comput. Electron. Agric. 2022, 197, 106991.
- Yu, Y.; Zhang, K.; Yang, L.; Zhang, D. Fruit Detection for Strawberry Harvesting Robot in Non-Structural Environment Based on Mask-RCNN. Comput. Electron. Agric. 2019, 163, 104846.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- YOLOv5. Available online: https://github.com/ultralytics/yolov5 (accessed on 7 February 2023).
- Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. You Only Learn One Representation: Unified Network for Multiple Tasks. arXiv 2021, arXiv:2105.04206.
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430.
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. arXiv 2022, arXiv:2207.02696.
- YOLOv8. Available online: https://github.com/ultralytics/ultralytics (accessed on 7 February 2023).
- Xiong, J.; Liu, Z.; Chen, S.; Liu, B.; Zheng, Z.; Zhong, Z.; Yang, Z.; Peng, H. Visual Detection of Green Mangoes by an Unmanned Aerial Vehicle in Orchards Based on a Deep Learning Method. Biosyst. Eng. 2020, 194, 261–272.
- Birrell, S.; Hughes, J.; Cai, J.Y.; Iida, F. A Field-Tested Robotic Harvesting System for Iceberg Lettuce. J. Field Robot. 2020, 37, 225–245.
- Lou, H.; Duan, X.; Guo, J.; Liu, H.; Gu, J.; Bi, L.; Chen, H. DC-YOLOv8: Small-Size Object Detection Algorithm Based on Camera Sensor. Electronics 2023, 12, 2323.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016.
- Liang, Q.; Zhu, W.; Long, J.; Wang, Y.; Sun, W.; Wu, W. A Real-Time Detection Framework for On-Tree Mango Based on SSD Network. In Proceedings of the International Conference on Intelligent Robotics and Applications, Newcastle, NSW, Australia, 9–11 August 2018.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
- Zhu, L.; Li, Z.; Li, C.; Wu, J.; Yue, J. High Performance Vegetable Classification from Images Based on AlexNet Deep Learning Model. Int. J. Agric. Biol. Eng. 2018, 11, 217–223.
- Rangarajan, A.K.; Purushothaman, R.; Ramesh, A. Tomato Crop Disease Classification Using Pre-Trained Deep Learning Algorithm. Procedia Comput. Sci. 2018, 133, 1040–1047.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
- Mahmood, A.; Singh, S.K.; Tiwari, A.K. Pre-Trained Deep Learning-Based Classification of Jujube Fruits According to Their Maturity Level. Neural Comput. Appl. 2022, 34, 13925–13935.
- Begum, N.; Hazarika, M.K. Maturity Detection of Tomatoes Using Transfer Learning. Meas. Food 2022, 7, 100038.
- Pérez-Pérez, B.D.; García Vázquez, J.P.; Salomón-Torres, R. Evaluation of Convolutional Neural Networks’ Hyperparameters with Transfer Learning to Determine Sorting of Ripe Medjool Dates. Agriculture 2021, 11, 115.
- Li, Z.; Li, F.; Zhu, L.; Yue, J. Vegetable Recognition and Classification Based on Improved VGG Deep Learning Network Model. Int. J. Comput. Intell. Syst. 2020, 13, 559–564.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016.
- Helwan, A.; Sallam Ma’aitah, M.K.; Abiyev, R.H.; Uzelaltinbulat, S.; Sonyel, B. Deep Learning Based on Residual Networks for Automatic Sorting of Bananas. J. Food Qual. 2021, 2021, 5516368.
- Rahnemoonfar, M.; Sheppard, C. Deep Count: Fruit Counting Based on Deep Simulated Learning. Sensors 2017, 17, 905.
- Kang, H.; Chen, C. Fruit Detection, Segmentation and 3D Visualisation of Environments in Apple Orchards. Comput. Electron. Agric. 2020, 171, 105302.
- Kang, H.; Chen, C. Fruit Detection and Segmentation for Apple Harvesting Using Visual Sensor in Orchards. Sensors 2019, 19, 4599.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014.
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- Parvathi, S.; Tamil Selvi, S. Detection of Maturity Stages of Coconuts in Complex Background Using Faster R-CNN Model. Biosyst. Eng. 2021, 202, 119–132.
- Wan, S.; Goudos, S. Faster R-CNN for Multi-Class Fruit Detection Using a Robotic Vision System. Comput. Netw. 2020, 168, 107036.
- Fu, L.; Feng, Y.; Majeed, Y.; Zhang, X.; Zhang, J.; Karkee, M.; Zhang, Q. Kiwifruit Detection in Field Images Using Faster R-CNN with ZFNet. IFAC Pap. 2018, 51, 45–50.
- Zhang, J.; Karkee, M.; Zhang, Q.; Zhang, X.; Yaqoob, M.; Fu, L.; Wang, S. Multi-Class Object Detection Using Faster R-CNN and Estimation of Shaking Locations for Automated Shake-and-Catch Apple Harvesting. Comput. Electron. Agric. 2020, 173, 105384.
- Cao, C.; Wang, B.; Zhang, W.; Zeng, X.; Yan, X.; Feng, Z.; Liu, Y.; Wu, Z. An Improved Faster R-CNN for Small Object Detection. IEEE Access 2019, 7, 106838–106846.
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
- Zabawa, L.; Kicherer, A.; Klingbeil, L.; Milioto, A.; Topfer, R.; Kuhlmann, H.; Roscher, R. Detection of Single Grapevine Berries in Images Using Fully Convolutional Neural Networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019.
- Li, Y.; Cao, Z.; Xiao, Y.; Cremers, A.B. DeepCotton: In-Field Cotton Segmentation Using Deep Fully Convolutional Network. J. Electron. Imaging 2017, 26, 16.
- Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Taylor, C.J.; Kumar, V. Counting Apples and Oranges with Deep Learning: A Data-Driven Approach. IEEE Robot. Autom. Lett. 2017, 2, 781–788.
- Liu, X.; Chen, S.W.; Aditya, S.; Sivakumar, N.; Dcunha, S.; Qu, C.; Taylor, C.J.; Das, J.; Kumar, V. Robust Fruit Counting Combining Deep Learning, Tracking, and Structure from Motion. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Peng, H.; Xue, C.; Shao, Y.; Chen, K.; Xiong, J.; Xie, Z.; Zhang, L. Semantic Segmentation of Litchi Branches Using DeepLabV3+ Model. IEEE Access 2020, 8, 164546–164555.
- Majeed, Y.; Zhang, J.; Zhang, X.; Fu, L.; Karkee, M.; Zhang, Q.; Whiting, M.D. Apple Tree Trunk and Branch Segmentation for Automatic Trellis Training Using Convolutional Neural Network Based Semantic Segmentation. IFAC Pap. 2018, 51, 75–80.
- Barth, R.; Hemming, J.; Van Henten, E.J. Angle Estimation between Plant Parts for Grasp Optimisation in Harvest Robots. Biosyst. Eng. 2019, 183, 26–46.
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
- Jia, W.; Tian, Y.; Luo, R.; Zhang, Z.; Lian, J.; Zheng, Y. Detection and Segmentation of Overlapped Fruits Based on Optimized Mask R-CNN Application in Apple Harvesting Robot. Comput. Electron. Agric. 2020, 172, 105380.
- Zu, L.; Zhao, Y.; Liu, J.; Su, F.; Zhang, Y.; Liu, P. Detection and Segmentation of Mature Green Tomatoes Based on Mask R-CNN with Automatic Image Acquisition Approach. Sensors 2021, 21, 7842.
- Ni, X.; Li, C.; Jiang, H.; Takeda, F. Deep Learning Image Segmentation and Extraction of Blueberry Fruit Traits Associated with Harvestability and Yield. Hortic. Res. 2020, 7, 110.
| Fruit Imaging Sensors | Types | Information | Advantages | Limitations |
|---|---|---|---|---|
| RGB-D camera and LSS (Lift, Splat, Shoot) | Active | RGB and depth images | Complete fruit scene characteristics | Lack of feature descriptors |
| Black-and-white camera | Passive | Shape and texture features | Little affected by changes in lighting conditions | Lack of color information |
| RGB camera | Passive | Color, shape, and texture features | Exploits all the basic features of target fruits | Highly sensitive to changing lighting conditions |
| Spectral camera | Passive | Color features and spectral information | Provides more information about reflectance | Computationally expensive for full-spectrum analysis |
| Thermal camera | Passive | Thermal signatures | Color-invariant | Depends on minute thermal differences |
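Several of the RGB-D studies cited above localize detected fruit in 3D by back-projecting the depth channel with the standard pinhole camera model. As a minimal sketch (the intrinsics `fx`, `fy`, `cx`, `cy` below are hypothetical placeholders, not values from any cited sensor):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to camera-frame 3D points
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (H, W, 3)

# Hypothetical intrinsics; a tiny flat "scene" 0.5 m from the camera
depth = np.full((4, 4), 0.5)
pts = depth_to_points(depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
```

A detected fruit's bounding-box center can then be indexed into `pts` to obtain a grasp target in camera coordinates.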
| Types | Accuracy | Applied Crops | Advantages | Disadvantages |
|---|---|---|---|---|
| YOLO | 84–98% | cabbage, citrus, lychee, mango, tomato | High fruit detection speed; meets the real-time requirements of automatic harvesting well | Detection accuracy is low under severe occlusion, low resolution, and changing lighting conditions |
| SSD | 75–92% | apple, mango, pear, sour lemon | High detection accuracy and speed; good robustness and generalization | Fruit images need to be preprocessed; detection accuracy for small targets is low |
| AlexNet | 86–96% | apple, strawberry, sugar beet, tomato | Uses dropout to avoid overfitting; good generalization ability | Network convergence is relatively slow |
| VGGNet | 92–99% | jujube, potato, sugar beet, tomato | Simple structure of fruit vision detection models | Network convergence is relatively slow; requires more network parameters |
| ResNet | 90–95% | apple, banana | Uses residual blocks to deepen network layers and reduce network parameters | Excessively deep networks may suffer from vanishing gradients, poor training effectiveness, and low detection accuracy |
| Faster R-CNN | 90–99% | apple, mango, orange | High detection accuracy | Detection speed is slow and cannot meet real-time requirements well |
| FCN | 89–98% | cotton, grape, guava, kiwifruit | Accepts fruit image inputs of arbitrary size; high efficiency and low computational effort | Insensitive to fine fruit details; pixel-wise classification ignores inter-pixel relationships |
| SegNet | 83–95% | apple, tomato | Obtains edge contours and preserves high-frequency details in segmentation | Neighboring information may be lost when low-resolution fruit feature maps are unpooled |
| Mask R-CNN | 80–94% | apple, strawberry, tomato | Combines semantic segmentation with fruit detection by outputting mask images | Detection speed is slow and cannot meet real-time requirements well |
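All of the detectors compared above score many overlapping candidate boxes per fruit and rely on intersection over union (IoU) and non-maximum suppression (NMS) to keep a single box per target. A minimal NumPy sketch of greedy NMS (the boxes, scores, and 0.5 threshold are illustrative only):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]  # drop overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # the two overlapping boxes collapse to one
```

The IoU threshold trades off duplicate suppression against recall on tightly clustered fruit, which is one reason occluded clusters remain hard for the single-stage methods in the table.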
Crops, Description, and Merit | En* | Datasets | Pixels | Sensors | Condition | Improvements | Value (%) |
---|---|---|---|---|---|---|---|
Olive (CNN)/Indian researchers Khosravi, H. et al. (2021) [49] propose a real-time detection method for two olive cultivars in four ripening stages. Adagrad, SGD, SGDM, RMSProp, Adam, and Nadam are evaluated. Nadam shows the best efficiency | √ | Train: 14,017; test: 878 | 256 × 256 | Galaxy J6 smartphone camera | Natural lighting | Lighting conditions and fruit image-capturing settings are not considered | Overall accuracy: 91.91; CPU: 12.64 ms; GPU: 4.10 ms |
Blueberry (CNN)/Chilean researcher Quiroz, I.A. and Mexican researcher Alférez, G.H. (2020) [50] present a DL solution for image recognition of legacy blueberries in rooting stages | × | Total: 258; train: 168; val: 54; pre: 36 | 1920 × 1080 | Microsoft Lifecam Studio digital camera | Good lighting conditions, not blurred, and without distracting objects in the background | It could use GANs to generate synthetic images that closely resemble real ones, minimizing the need for accessing real data | Accuracy: 86; precision: 86; recall: 88; F1 score: 86 |
Sour lemon (CNN)/Jahanbakhshi, A. et al. (including Iranian researchers and a researcher based in the UK) (2020) [46] detect apparent defects in sour lemons. Data augmentation and stochastic pooling mechanisms are used to improve detection performance | √ | Total: 5456; healthy: 2960; damaged: 2496; train: 70%; val: 30% | 16 × 16; 32 × 32; 64 × 64 | Camera (Canon, Japan) | A lighting box including two LED lamps | Future work may include accommodating more varied fruit detection conditions | Accuracy: 100 |
56 diseases infecting 12 plant species (CNN)/Brazilian researcher Barbedo, J.G.A. (2018) [51] studies the effectiveness of DL and TL for plant disease classification | × | Total: 1383; train: 80%; val: 20% | 224 × 224 × 3 | A variety of digital cameras and mobile devices | Under controlled conditions: 15%; under real conditions: 85% | The number of samples is too small for the CNN to thoroughly capture the characteristics and variations associated with each class | It is a challenge to build fruit databases comprehensive enough for the creation of robust fruit detection models |
Strawberry (AlexNet)/Chinese researchers Ni, J. et al. (2021) [52] propose an enhanced AlexNet for strawberry quality evaluation. The size of the convolution kernel is modified. The single convolutional layer is divided into three convolutional layers with different convolution kernels. The BN layer and L2 regularization are used | × | Total: 3006; unripe: 778; medium: 382; fully: 787; bad: 847; malformed: 212; train: 80%; val: 10%; test: 10% | 227 × 227 | HUAWEI mobile phone | Two different scenes of a field and a laboratory | It is not certain which augmentation method will help improve fruit detection performance | Average accuracy: 90.70; after augmentation: 95.75 |
Grape bunch (AlexNet)/Italian researchers Marani, R. et al. (2021) [53] investigate the use of DL for grape bunch segmentation in natural fruit images captured using a consumer-grade camera. It is based on the optimal threshold selection of bunch probability maps as an alternative to the conventional minimization of cross-entropy loss for mutually exclusive classes | × | Total: 84; train: 60; val: 24 | 640 × 480 | Intel RealSense R200 RGB-D camera | Fruit images under direct (opposite) sunlight are not considered since they become overexposed, and their colors saturate to white | Depth data could be used to guide the selection of the size N of the moving window for the proposed processing | Mean segmentation accuracy on the bunch class: 80.58; IoU: 45.64 |
Date fruit (VGGNet)/Saudi Arabian researchers Altaheri, H. et al. (2019) [54] propose an efficient MV framework for date fruit-harvesting robots | × | Total: 8072; 5 date types in different pre-maturity and maturity stages; more than 350 date bunches from 29 date palms | -- | RGB video camera | The dataset reflects the challenges, including variations in angles, scales, and illumination conditions | It may lead to confusion in the detection of date fruit maturity, including labeling rules and interference between maturity stages | Type, maturity, and harvesting decision classification accuracies: 99.01, 97.25, 98.59; classification times: 20.6, 20.7, 35.9 ms |
Apple (ResNet)/Chinese researchers Wang, D. et al. (2020) [55] develop a remote apple horizontal diameter detection system to achieve automatic measurement of apple growth throughout the entire growth period. The fused convolutional feature network developed can effectively remove complex backgrounds and accurately detect apple edges with near real-time performance | √ | Total: 903; train: 743; val: 160; test: 170; 5944 images are eventually obtained through data augmentation; mature red, immature green, semimature | 403 × 303 | iPhone 7 plus | To prevent distinct edges from forming on the surfaces of apples due to intense natural light, the images are captured on cloudy days or at dusk when the light is not as intense | Future improvements are needed to track the monitored apple in order to achieve the goal of adjusting the camera’s shooting angle and selecting seed points automatically | F1 score: 53.1; average run time: 75 ms; mean average absolute error of the apples’ horizontal diameters detected: 0.90 mm |
Passion fruit (Faster R-CNN)/Chinese researchers Tu, S. et al. (2020) [56] propose a multiple-scale Faster R-CNN approach based on RGB-D images for small passion fruit detection and counting. It detects lower-level features by incorporating feature maps from shallower convolutional feature maps for RoI pooling | √ | Total RGB images: 8651; train: 6055; test: 2596; total depth images: 3352; train: 2346; test: 1006 | 1920 × 1080; 512 × 424 | Kinect V2 | The Kinect V2 sensor is used to avoid strong sunlight and work in shady areas because the ToF technique is unsuitable in strong sunlight conditions | The detection performance of passion fruit in different growth stages could be evaluated and analyzed | Recall: 96.2; precision: 93.1; F1-score: 94.6 |
Young tomato fruit (Faster R-CNN)/Chinese researchers Wang, P. et al. (2021) [57] propose a method for detecting young tomatoes on near-color backgrounds based on an improved Faster R-CNN with attention mechanisms. Soft non-maximum suppression is used to reduce the missed detection rate of overlapping fruits | × | Total: 2235; train: 80%; val: 10%; test: 10% | 3000 × 3000 | MI 9 smartphone | Different weather conditions (sunny and cloudy) and different time periods (morning, noon, and evening) | Future work could include accommodating various cultivars of tomatoes and more unstructured environments | mAP: 98.46; average detection time: 84 ms |
Lychee (YOLO)/To improve the efficiency of lychee harvesting, Chinese researchers Li, C. et al. (2022) [58] propose a column-comb lychee harvesting method based on K-means 3D clustering partitioning | × | Total: 1049; train: 840; test: 209 | 1280 × 800; 1280 × 720 | Intel RealSense depth camera | Orchard environments (strong light and backlight, sunny and cloudy days, and far and near distances) | Current detection performance is obtained by testing on well-defined fruit images with a limited sample size | Recall: 78.99; precision: 87.43; F1 score: 0.83 |
Tomato (YOLO)/Chinese researchers Miao, Z. et al. (2022) [59] integrate classic image processing methods with YOLOv5 to increase fruit detection accuracy and robustness | × | Total: 1000; train: 800; val: 200 | 1920 × 1080; 1280 × 720 | Intel RealSense depth camera | Artificial experimental environments | Extended tests and improvements in a real orchard and greenhouse will be the main focus | Average deviation: 2 mm; average operating time: 9 s/cluster |
Hass avocado, lemon, apples (SSD)/Vasconez, J.P. et al. (including Chilean researchers and a researcher based in the United States) (2020) [60] test two of the most common architectures: Faster R-CNN with Inception V2 and SSD with MobileNet. To address the problem of video-based fruit counting, they use multi-object tracking based on Gaussian estimation | √ | Avocado train: 1021; val: 211; test: 211; apple train: 694; val: 191; test: 191; lemon train: 539; val: 202; test: 202 | 360 × 640 | Commercial RGB camera; acquiring at 30 FPS | Hass avocado, lemon, and apple datasets acquired under illumination levels ranging from 1890 to 43,600, 4800 to 52,000, and 3500 to 38,000 lux, respectively | The CNN architectures are highly dependent on the quality of the training set. The results might not be conclusive for other groves with different fruits | SSD with MobileNet, the minimum relative error: 7 (avocados); 13 (apples); 20 (lemons); computing time: 220 ms |
Guava (FCN)/Chinese researchers Lin, G. et al. (2019) [61] use a low-cost RGB-D sensor to achieve guava detection and pose estimation. It uses Euclidean clustering to detect all the 3D fruits from the fruit binary maps output by FCN. It also establishes a 3D line segment detection method to reconstruct the branches from the branch binary maps | × | Total: 437; train: 80%; val: 20% | 424 × 512 | Kinect V2 | All kinds of illuminations | Branches remain somewhat difficult to segment | Precision: 98.3; recall: 94.8; 3D pose error: 23.43° ± 14.18°; execution time: 56.5 ms |
Lychee clusters (SegNet)/Chinese researchers Li, J. et al. (2020) [62] develop a reliable algorithm based on RGB-D cameras to accurately detect and locate the fruit-bearing branches of multiple lychee clusters. It introduces a revised branch extraction based on density clustering and a parameter analysis based on optimal clustering | √ | Total: 452; train: 80%; val: 20% | 1920 × 1080; 512 × 424 | Kinect V2 | All kinds of illuminations; no artificial shade or lighting interference | Future studies could focus on improving the success rate of picking tasks | Detection accuracy: 83.33; positioning accuracy: 17.29° ± 24.57°; execution time: 464 ms |
Apple (SegNet)/Majeed, Y. et al. (including American researchers and researchers based in China) (2020) [63] develop a DL-based semantic segmentation method. Both simple and foreground RGB images are used for training SegNet to segment trunks and branches | √ | Total: 509; train: 70%; test: 30% | 960 × 540 | Kinect V2 | Different lighting conditions (sunny, cloudy, and night) | Optimal branches will be selected for training by estimating the essential parameters desired for canopy architecture | Mean accuracy: 89; IoU: 52; boundary-F1-score: 81 |
Cherry tomato (Mask R-CNN)/Chinese researchers Xu, P. et al. (2022) [64] propose an improved Mask R-CNN for the visual recognition of cherry tomatoes by using depth information and considering the prior adjacent constraint between fruits and stems | √ | Total: 3444; train: 80%; val: 20% | 640 × 480 | Intel RealSense depth camera | Natural conditions | Future work may include reducing the processing time and accommodating more varied conditions | Detection accuracy of fruits: 93.76; accuracy and recall of stems: 89.34 and 94.47; computing time: 40 ms |
Strawberry (Mask R-CNN)/Chinese researchers Yu, Y. et al. (2019) [65] develop a visual localization method for strawberry picking points after generating mask images of ripe fruits using Mask R-CNN. ResNet-50 is adopted as the backbone network, combined with the FPN for fruit feature extraction. The RPN is trained end-to-end to create region proposals for each feature map | × | Total: 1900; train: 1520; val: 380; test: 100 | 640 × 480 | Hand-held digital camera | Different periods (morning and afternoon); under varying light intensity (sunny and cloudy conditions); different levels of interference (overlap, occlusion, and oscillation) | Although the model averages 8 processed frames per second, an embedded mobile harvesting robot runs slower than this, so the real-time performance of the model needs further improvement | Average detection precision: 95.78; recall: 95.41; IoU of instance segmentation: 89.85; average error of picking points: ±1.2 mm |
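Several of the detectors summarized above handle overlapping fruits by replacing hard non-maximum suppression with soft NMS (e.g. the improved Faster R-CNN of Wang, P. et al. [57]). A minimal NumPy sketch of Gaussian soft NMS is given below; the function names, the σ value, and the score threshold are illustrative choices, not parameters taken from the cited papers:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft NMS: decay the scores of overlapping boxes
    instead of discarding them outright, so heavily occluded fruits
    with decent scores can still be detected."""
    scores = scores.astype(float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        top = idxs[np.argmax(scores[idxs])]   # highest-scoring remaining box
        keep.append(int(top))
        idxs = idxs[idxs != top]
        if len(idxs) == 0:
            break
        overlaps = iou(boxes[top], boxes[idxs])
        scores[idxs] *= np.exp(-(overlaps ** 2) / sigma)  # Gaussian penalty
        idxs = idxs[scores[idxs] > score_thresh]          # prune near-zero scores
    return keep
```

With hard NMS, a second fruit overlapping the first at IoU ≈ 0.68 would typically be suppressed; here its score is merely decayed, which is why soft NMS reduces missed detections of clustered fruits.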
Types | Methods | Advantages | Shortcomings |
---|---|---|---|
Real fruit detection environment | Hand-held camera; UGV; UAV | High image quality of fruits; close to real scenes | The process of shooting is time-consuming and laborious; fruit image quality is unstable; fruit image quantization and contrast are difficult |
Internet channel | -- | No need for a camera; easy and fast collection | There are situations such as blurred images and incorrect labels; data cleaning and inspection are required |
Datasets | Total Samples | Training Set | Testing Set | Species | Web-Link | Year |
---|---|---|---|---|---|---|
Fruit images of MS COCO | - | - | - | - | https://cocodataset.org/#download (accessed on 6 March 2023) | 2017 |
Fruit images of ImageNet | - | - | - | - | https://image-net.org/challenges/LSVRC/index.php (accessed on 6 March 2023) | 2012 |
Fruits-360 | 90,380 | 67,692 | 22,688 | 131 (100 × 100 pixels) | www.kaggle.com/datasets/moltean/fruits (accessed on 16 February 2023) | 2020 |
Fruit-A | 22,495 | 16,854 | 5641 | 33 (100 × 100 pixels) | www.kaggle.com/datasets/sshikamaru/fruit-recognition (accessed on 16 February 2023) | 2022 |
Fruit-B | 21,000 | 15,000 | val: 3000; test: 3000 | 15 (224 × 224 pixels) | www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset (accessed on 16 February 2023) | 2021 |
Fruit quality classification | 19,526 | - | - | 18 (256 × 256/192 pixels) | www.kaggle.com/datasets/ryandpark/fruit-quality-classification (accessed on 16 February 2023) | 2022 |
Fresh and rotten fruits | 13,599 | 10,901 | 2698 | 6 | www.kaggle.com/datasets/sriramr/fruits-fresh-and-rotten-for-classification (accessed on 16 February 2023) | 2019 |
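Both the reviewed studies and the public datasets above rely on fixed-ratio random splits (80/20, 80/10/10, or Fruits-360's roughly 75/25 train/test division). A minimal, reproducible split helper is sketched below; the function name and seed are illustrative:

```python
import random

def split_dataset(paths, train_frac=0.8, seed=42):
    """Shuffle a list of image paths and split it into train/validation
    lists at the given ratio, with a fixed seed for reproducibility."""
    paths = list(paths)
    rng = random.Random(seed)  # independent RNG so global state is untouched
    rng.shuffle(paths)
    n_train = int(len(paths) * train_frac)
    return paths[:n_train], paths[n_train:]
```

Shuffling before splitting matters in practice: fruit images are often captured in bursts per tree or per day, so an unshuffled split would put correlated images on one side and bias the validation score.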
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Xiao, F.; Wang, H.; Xu, Y.; Zhang, R. Fruit Detection and Recognition Based on Deep Learning for Automatic Harvesting: An Overview and Review. Agronomy 2023, 13, 1625. https://doi.org/10.3390/agronomy13061625