Artificial Intelligence Vision Methods for Robotic Harvesting of Edible Flowers
Abstract
1. Introduction
- Develop a novel combination of AI modules for detection, pose estimation, and plucking point estimation for multi-species picking;
- Focus on machine learning methods that require very limited or no new training data (so-called few-shot and zero-shot approaches), ensuring high-quality results without extensive data collection and annotation;
- Introduce a novel plucking point estimation method based on indirect inference, leveraging a dataset of collected flower measurements.
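The three contributions above chain into a single vision pipeline: detect each flower, segment it, lift the mask to a 3D point cloud, estimate its pose, and infer the plucking point along the flower axis. A minimal sketch of the pose and plucking-point steps is given below; the PCA-based pose estimate follows the method named in the paper, while the fixed 2 cm stem offset and the synthetic point cloud are illustrative assumptions, not the authors' measured values.

```python
import numpy as np

def pose_from_points(points):
    """Principal axis of a flower's 3D point cloud via PCA.

    Returns (centroid, unit axis): the eigenvector of the covariance
    matrix with the largest eigenvalue approximates the flower's
    corolla-to-stem direction.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    return centroid, axis / np.linalg.norm(axis)

def plucking_point(centroid, axis, stem_offset_m=0.02):
    """Indirect plucking-point estimate: step along the flower axis
    by a species-specific offset (2 cm here is a placeholder)."""
    # Orient the axis away from the camera (+z) so the offset
    # moves toward the stem.
    if axis[2] < 0:
        axis = -axis
    return centroid + stem_offset_m * axis

# Synthetic elongated point cloud standing in for a segmented flower.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3)) * np.array([0.005, 0.005, 0.03])
c, a = pose_from_points(pts)
p = plucking_point(c, a)
print("centroid:", c, "axis:", a, "plucking point:", p)
```

In the full pipeline, `pts` would come from back-projecting a segmentation mask through the stereo camera's depth map rather than being synthesized.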
2. Materials and Methods
2.1. Data Acquisition and Annotations
2.2. Vision Module Pipeline
2.2.1. 2D Detection and Segmentation
2.2.2. Pose Estimation
2.2.3. Plucking Point Estimation
3. Results and Discussion
3.1. 2D Detection
3.2. 3D Localization and Pose Estimation
3.3. Plucking Point Estimation
3.4. Speed Performance Evaluation
3.5. Advantages, Limitations, and Future Perspectives
3.5.1. Advantages
3.5.2. Limitations
3.5.3. Future Perspectives
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Month | Items | Pansy | Marigold | Snapdragon |
|---|---|---|---|---|
| July | Images | - | 25 | 13 |
| July | Flowers | - | 854 | 762 |
| November | Images | 64 | 21 | 11 |
| November | Flowers | 375 | 401 | 51 |
| Common Name | ImageNet Synset | Number of Images | Number of Flowers |
|---|---|---|---|
| Sunflower | n11978233 | 245 | 294 |
| Calla lily | n11793779 | 179 | 233 |
| Cornflower | n11947802 | 126 | 146 |
| Dahlia | n11960245 | 172 | 191 |
| Strawflower | n11980318 | 172 | 242 |
| Coneflower | n11962272 | 139 | 159 |
| Pansy | - | 232 | 905 |
| Model Name | Training Set | Starting Weights | Best Epochs | mAP@0.5 | Precision | Recall | Det. Val. Error |
|---|---|---|---|---|---|---|---|
| D0-FLOLO | D0 | YOLOv5-large | 284 | 0.97 | 0.96 | 0.93 | 0.0039 |
| FLOLO | FloraDet | D0-FLOLO | 229 | 0.68 | 0.67 | 0.68 | 0.045 |
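The precision and recall columns above follow the standard object-detection definitions: a prediction counts as a true positive when it matches an unmatched ground-truth box at IoU ≥ 0.5 (the threshold behind mAP@0.5). A minimal sketch with illustrative boxes:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth
    at a fixed IoU threshold (0.5, as in mAP@0.5)."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= thr and best_j is not None:
            matched.add(best_j)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp / (tp + fp), tp / (tp + fn)

# Two of the three predictions overlap a ground-truth box.
preds = [[0, 0, 10, 10], [20, 20, 30, 30], [50, 50, 60, 60]]
gts = [[1, 1, 10, 10], [20, 20, 30, 30]]
p, r = precision_recall(preds, gts)
print("precision:", p, "recall:", r)
```

Full mAP additionally sweeps the confidence threshold and averages precision over recall levels; the sketch shows only a single operating point.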
[Table: plucking point estimation results (linear regression and upper-boundary methods) for pansy, snapdragon, and marigold; cell values were not recovered in extraction.]
(YOLO: detection; SAM, Clean Masks, Isolate 3D Points: localization; Pose (PCA), Plucking Points: estimation.)

| Species | YOLO | SAM | Clean Masks | Isolate 3D Points | Pose (PCA) | Plucking Points | Total Time | Time per Flower |
|---|---|---|---|---|---|---|---|---|
| Marigold | 0.285 | 4.37 | 0.08 | 42.36 | 1.44 | 1.48 | 48.35 | 1.064 |
| Snapdragon | 0.293 | 4.36 | 0.165 | 89.90 | 5.76 | 5.87 | 95.99 | 0.98 |
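A quick consistency check on the timing table: dividing total time by per-flower time yields the implied number of flowers processed per run (assuming both columns share the same time unit):

```python
# Implied flower counts from the timing table (total / per-flower).
timings = {"Marigold": (48.35, 1.064), "Snapdragon": (95.99, 0.98)}
implied = {name: round(total / per_flower)
           for name, (total, per_flower) in timings.items()}
print(implied)
```

The counts are only approximate, since the reported per-flower times are rounded.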
Share and Cite
Taddei Dalla Torre, F.; Melgani, F.; Pertot, I.; Furlanello, C. Artificial Intelligence Vision Methods for Robotic Harvesting of Edible Flowers. Plants 2024, 13, 3197. https://doi.org/10.3390/plants13223197