Cotton Yield Prediction via UAV-Based Cotton Boll Image Segmentation Using YOLO Model and Segment Anything Model (SAM)
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area
2.2. UAV Image Acquisition and Processing
2.3. Image Annotation
2.4. The Segment Anything and YOLO Models
2.5. Model Evaluation Metrics
3. Results and Discussions
3.1. The SAM and YOLO Model Performance
3.2. Evaluation of Cotton Yield at Row Level
4. Conclusions
5. Research Reproducibility
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
AGL | Above Ground Level |
AVHRR | Advanced Very High Resolution Radiometer |
CMOS | Complementary Metal-Oxide-Semiconductor |
CNN | Convolutional Neural Network |
CLIP | Contrastive Language-Image Pretraining |
DEM | Digital Elevation Model |
ELAN | Efficient Layer Aggregation Network |
FN | False Negative |
FP | False Positive |
GCPs | Ground Control Points |
GNDVI | Green Normalized Difference Vegetation Index |
HSV | Hue, Saturation, and Value |
IoU | Intersection over Union |
LSTM | Long Short-Term Memory |
MAE | Mean Absolute Error |
mAP | Mean Average Precision |
NDVI | Normalized Difference Vegetation Index |
RGB | Red, Green and Blue |
SAM | Segment Anything Model |
SVR | Support Vector Regression |
TP | True Positive |
UAV | Unmanned Aerial Vehicle |
US | United States |
VCI | Vegetation Condition Index |
ViT | Vision Transformer |
YOLO | You Only Look Once |
References
- Muruganantham, P.; Wibowo, S.; Grandhi, S.; Samrat, N.H.; Islam, N. A systematic literature review on crop yield prediction with deep learning and remote sensing. Remote Sens. 2022, 14, 1990.
- Zhang, M.; Feng, A.; Zhou, Z.; Lü, X. Cotton yield prediction using remote visual and spectral images captured by UAV system. Trans. Chin. Soc. Agric. Eng. 2019, 35, 91–98.
- Khaki, S.; Pham, H.; Wang, L. Simultaneous corn and soybean yield prediction from remote sensing data using deep transfer learning. Sci. Rep. 2021, 11, 11132.
- Quarmby, N.; Milnes, M.; Hindle, T.; Silleos, N. The use of multi-temporal NDVI measurements from AVHRR data for crop yield estimation and prediction. Int. J. Remote Sens. 1993, 14, 199–210.
- Anastasiou, E.; Balafoutis, A.; Darra, N.; Psiroukis, V.; Biniari, A.; Xanthopoulos, G.; Fountas, S. Satellite and proximal sensing to estimate the yield and quality of table grapes. Agriculture 2018, 8, 94.
- Kogan, F.; Gitelson, A.; Zakarin, E.; Spivak, L.; Lebed, L. AVHRR-based spectral vegetation index for quantitative assessment of vegetation state and productivity. Photogramm. Eng. Remote Sens. 2003, 69, 899–906.
- Ali, A.M.; Abouelghar, M.; Belal, A.; Saleh, N.; Yones, M.; Selim, A.I.; Amin, M.E.; Elwesemy, A.; Kucher, D.E.; Maginan, S.; et al. Crop yield prediction using multi sensors remote sensing. Egypt. J. Remote Sens. Space Sci. 2022, 25, 711–716.
- Niu, H.; Peddagudreddygari, J.R.; Bhandari, M.; Landivar, J.A.; Bednarz, C.W.; Duffield, N. In-season cotton yield prediction with scale-aware convolutional neural network models and unmanned aerial vehicle RGB imagery. Sensors 2024, 24, 2432.
- Niu, H.; Chen, Y. Smart Big Data in Digital Agriculture Applications; Springer: Berlin/Heidelberg, Germany, 2024.
- Veenadhari, S.; Mishra, B.; Singh, C. Soybean productivity modelling using decision tree algorithms. Int. J. Comput. Appl. 2011, 27, 11–15.
- Ramesh, D.; Vardhan, B.V. Analysis of crop yield prediction using data mining techniques. Int. J. Res. Eng. Technol. 2015, 4, 47–473.
- Khaki, S.; Wang, L. Crop yield prediction using deep neural networks. Front. Plant Sci. 2019, 10, 621.
- Aggarwal, A.K.; Jaidka, P. Segmentation of crop images for crop yield prediction. Int. J. Biol. Biomed. 2022, 7, 40–44.
- You, J.; Li, X.; Low, M.; Lobell, D.; Ermon, S. Deep Gaussian process for crop yield prediction based on remote sensing data. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31.
- Wang, Q.; Nuske, S.; Bergerman, M.; Singh, S. Design of crop yield estimation system for apple orchards using computer vision. In Proceedings of the 2012 Dallas, Dallas, TX, USA, 29 July–1 August 2012; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2012; p. 1.
- Sarkate, R.S.; Kalyankar, N.; Khanale, P. Application of computer vision and color image segmentation for yield prediction precision. In Proceedings of the 2013 International Conference on Information Systems and Computer Networks, Mathura, India, 9–10 March 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 9–13.
- Maji, A.K.; Marwaha, S.; Kumar, S.; Arora, A.; Chinnusamy, V.; Islam, S. SlypNet: Spikelet-based yield prediction of wheat using advanced plant phenotyping and computer vision techniques. Front. Plant Sci. 2022, 13, 889853.
- Peng, H.; Xue, C.; Shao, Y.; Chen, K.; Xiong, J.; Xie, Z.; Zhang, L. Semantic segmentation of litchi branches using DeepLabV3+ model. IEEE Access 2020, 8, 164546–164555.
- Palacios, F.; Diago, M.P.; Melo-Pinto, P.; Tardaguila, J. Early yield prediction in different grapevine varieties using computer vision and machine learning. Precis. Agric. 2023, 24, 407–435.
- Yu, C.; Lin, D.; He, C. ASE-UNet: An orange fruit segmentation model in an agricultural environment based on deep learning. Opt. Mem. Neural Netw. 2023, 32, 247–257.
- Corò, F.; D’Angelo, G.; Velaj, Y. Recommending links to maximize the influence in social networks. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019), Macao, China, 10–16 August 2019; AAAI Press: Washington, DC, USA, 2019; Volume 4, pp. 2195–2201.
- Vaswani, A. Attention is all you need. arXiv 2017, arXiv:1706.03762.
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
- Silva, L.; Drews, P.; de Bem, R. Soybean weeds segmentation using VT-Net: A convolutional-transformer model. In Proceedings of the 2023 36th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Rio Grande, Brazil, 6–9 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 127–132.
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 4015–4026.
- Zhang, L.; Liu, Z.; Zhang, L.; Wu, Z.; Yu, X.; Holmes, J.; Feng, H.; Dai, H.; Li, X.; Li, Q.; et al. Segment anything model (SAM) for radiation oncology. arXiv 2023, arXiv:2306.11730.
- Zhang, K.; Liu, D. Customized segment anything model for medical image segmentation. arXiv 2023, arXiv:2304.13785.
- Li, Y.; Wang, D.; Yuan, C.; Li, H.; Hu, J. Enhancing agricultural image segmentation with an agricultural segment anything model adapter. Sensors 2023, 23, 7884.
- Ridley, W.; Devadoss, S. Competition and trade policy in the world cotton market: Implications for US cotton exports. Am. J. Agric. Econ. 2023, 105, 1365–1387.
- Adhikari, P.; Ale, S.; Bordovsky, J.P.; Thorp, K.R.; Modala, N.R.; Rajan, N.; Barnes, E.M. Simulating future climate change impacts on seed cotton yield in the Texas High Plains using the CSM-CROPGRO-Cotton model. Agric. Water Manag. 2016, 164, 317–330.
- Ravi, N.; Gabeur, V.; Hu, Y.T.; Hu, R.; Ryali, C.; Ma, T.; Khedr, H.; Rädle, R.; Rolland, C.; Gustafson, L.; et al. SAM 2: Segment anything in images and videos. arXiv 2024, arXiv:2408.00714.
- Zhang, C.; Han, D.; Qiao, Y.; Kim, J.U.; Bae, S.H.; Lee, S.; Hong, C.S. Faster segment anything: Towards lightweight SAM for mobile applications. arXiv 2023, arXiv:2306.14289.
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 8748–8763.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475.
- Varghese, R.; Sambath, M. YOLOv8: A novel object detection algorithm with enhanced performance and robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6.
Models | Precision | Recall | F1-Score | mAP@0.5 | IoU |
---|---|---|---|---|---|
YOLO v7 + SAM | 0.821 | 0.836 | 0.828 | 0.857 | 0.685 |
YOLO v8 + SAM | 0.814 | 0.791 | 0.802 | 0.833 | 0.683 |
Models | IoU Score | Inference Time (s) |
---|---|---|
U-Net Attention | 0.697 | 0.0087 |
U-Net ResNet 50 | 0.696 | 0.033 |
U-Net VGG 16 | 0.685 | 0.0128 |
U-Net CBAM | 0.680 | 0.013 |
YOLO v7 + SAM | 0.685 | 0.526 |
YOLO v8 + SAM | 0.683 | 0.554 |
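The tables above report standard detection and segmentation metrics. As an illustrative reference only (not the authors' code), a minimal sketch of how these quantities are conventionally computed from true positives (TP), false positives (FP), false negatives (FN), and mask overlaps; the final line checks consistency with the reported YOLO v7 + SAM row:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted cotton bolls that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth cotton bolls that were detected."""
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def iou(intersection_px: int, union_px: int) -> float:
    """Intersection over Union between a predicted and a ground-truth mask."""
    return intersection_px / union_px

# Consistency check against the reported results:
# YOLO v7 + SAM has P = 0.821 and R = 0.836, which yields F1 ~ 0.828.
print(round(f1_score(0.821, 0.836), 3))  # 0.828
```

The same check reproduces the YOLO v8 + SAM row (P = 0.814, R = 0.791 gives F1 = 0.802), confirming that the tabulated F1 scores are the harmonic means of the listed precision and recall values.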
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Reddy, J.; Niu, H.; Scott, J.L.L.; Bhandari, M.; Landivar, J.A.; Bednarz, C.W.; Duffield, N. Cotton Yield Prediction via UAV-Based Cotton Boll Image Segmentation Using YOLO Model and Segment Anything Model (SAM). Remote Sens. 2024, 16, 4346. https://doi.org/10.3390/rs16234346