High-Precision Automated Soybean Phenotypic Feature Extraction Based on Deep Learning and Computer Vision
Abstract
1. Introduction
2. Results
2.1. Model Training
2.2. Overall Performance of Modified YOLOv8
2.3. Comparison Results of Target Detection Models
3. Discussion
| References | Crop | Model | Extracted Phenotypes | Evaluation Parameter | Value |
|---|---|---|---|---|---|
| Zermas et al. [28] | Corn | A 3D model and point cloud processing algorithms | Individual plant segmentation and counting | Accuracy | 88.1% |
| | | | Leaf area index (LAI) | Accuracy | 92.5% |
| | | | Individual plant height computation | Accuracy | 89.2% |
| | | | Leaf length extraction | Accuracy | 74.8% |
| Zhang et al. [22] | Corn | PointNet++ [29] | Branch length | R² | 0.99 |
| | | | Branch angle | R² | 0.94 |
| | | | Branch count | R² | 0.96 |
| Zhu et al. [30] | Tomato | Mask R-CNN | Fruit color, horizontal and vertical diameters, top and navel angles, locule number, and pericarp thickness | Accuracy | 95.0% |
| Songtao et al. [23] | Potato | OCRNet [11] | Leaf number | R² | 0.93 |
| | | | Plant height | R² | 0.95 |
| | | | Maximum width | R² | 0.91 |
| Zhou et al. [7] | Soybean | SPP extractor (soybean plant phenotype extractor) | Pod number per plant, plant height, effective branch number, branch length | R² | 0.93–0.99 |
| He et al. [26] | Soybean | Improved YOLOv5 | Single pod weight | R² | 0.95 |
| Lu et al. [27] | Soybean | Improved YOLOv3 | Pods | Precision | 90.3% |
| Xiong et al. [31] | Rice | Panicle-SEG-CNN | Rice panicle | Precision | 82.1% |
| Teramoto and Uga [32] | Rice | U-Net [33] | Vertical centroid of root distribution | R² | 0.99 |
| | | | Horizontal centroid of root distribution | R² | 0.96 |
| Yu et al. [34] | Lettuce | 2DCNN, FCNN, Deep2D, and DeepFC | Soluble solid content (SSC) | R² | 0.90 |
| | | | pH | R² | 0.85 |
| Zhang et al. [35] | Strawberry | HR-YOLOv8 | Strawberry ripeness | Precision | 92.3% |
| Yu et al. [20] | Strawberry | CES-YOLOv8 | Strawberry ripeness | Precision | 88.2% |
| Orchi et al. [36] | 13 different plant species | YOLOv8 | Crop leaf disease | Precision | 63.3% |
| Sapkota et al. [24] | Apple | YOLOv8 | Azure Kinect | R² | 0.90 |
| | | | RealSense D435i | R² | 0.77 |
| Guan et al. [25] | Corn | DBi-YOLOv8 | Leaves | R² | 0.93 |
| | | | Tassels in the canopy | R² | 0.92 |
| Proposed method | Soybean | YOLOv8-Repvit | Pods | R² | 0.96 |
| | | | Beans | R² | 0.96 |
| | | MCA | Stem and branch length | R² | 0.96 |
4. Materials and Methods
4.1. Sample Preparation
4.2. Data Acquisition
4.3. Main Methods
4.3.1. Pod Identification and Counting
4.3.2. Separation of Main Stem and Lateral Stems
4.3.3. Extraction of Main Stem and Branch Contours
4.3.4. Stem Length Acquisition Algorithm
4.4. Equipment
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Correction Statement
References
- Medic, J.; Atkinson, C.; Hurburgh, C.R. Current knowledge in soybean composition. J. Am. Oil Chem. Soc. 2014, 91, 363–384. [Google Scholar] [CrossRef]
- Khojely, D.M.; Ibrahim, S.E.; Sapey, E.; Han, T. History, current status, and prospects of soybean production and research in sub-Saharan Africa. Crop J. 2018, 6, 226–235. [Google Scholar] [CrossRef]
- Sinclair, T.R.; Marrou, H.; Soltani, A.; Vadez, V.; Chandolu, K.C. Soybean production potential in Africa. Glob. Food Secur. 2014, 3, 31–40. [Google Scholar] [CrossRef]
- Orf, J.H. Breeding, genetics, and production of soybeans. In Soybeans; Elsevier: Amsterdam, The Netherlands, 2008; pp. 33–65. [Google Scholar]
- Liu, X.; Jin, J.; Wang, G.; Herbert, S. Soybean yield physiology and development of high-yielding practices in Northeast China. Field Crops Res. 2008, 105, 157–171. [Google Scholar] [CrossRef]
- Araus, J.L.; Cairns, J.E. Field high-throughput phenotyping: The new crop breeding frontier. Trends Plant Sci. 2014, 19, 52–61. [Google Scholar] [CrossRef]
- Zhou, W.; Chen, Y.; Li, W.; Zhang, C.; Xiong, Y.; Zhan, W.; Huang, L.; Wang, J.; Qiu, L. SPP-extractor: Automatic phenotype extraction for densely grown soybean plants. Crop J. 2023, 11, 1569–1578. [Google Scholar] [CrossRef]
- Zhu, R.; Sun, K.; Yan, Z.; Yan, X.; Yu, J.; Shi, J.; Hu, Z.; Jiang, H.; Xin, D.; Zhang, Z. Analysing the phenotype development of soybean plants using low-cost 3D reconstruction. Sci. Rep. 2020, 10, 7055. [Google Scholar] [CrossRef]
- Falk, K.G.; Jubery, T.Z.; Mirnezami, S.V.; Parmley, K.A.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B.; Singh, A.K. Computer vision and machine learning enabled soybean root phenotyping pipeline. Plant Methods 2020, 16, 5. [Google Scholar] [CrossRef]
- Haque, S.; Lobaton, E.; Nelson, N.; Yencho, G.C.; Pecota, K.V.; Mierop, R.; Kudenov, M.W.; Boyette, M.; Williams, C.M. Computer vision approach to characterize size and shape phenotypes of horticultural crops using high-throughput imagery. Comput. Electron. Agric. 2021, 182, 106011. [Google Scholar] [CrossRef]
- Yuan, Y.; Chen, X.; Wang, J. Object-contextual representations for semantic segmentation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part VI; Springer: Berlin/Heidelberg, Germany, 2020; pp. 173–190. [Google Scholar]
- Mochida, K.; Koda, S.; Inoue, K.; Hirayama, T.; Tanaka, S.; Nishii, R.; Melgani, F. Computer vision-based phenotyping for improvement of plant productivity: A machine learning perspective. GigaScience 2019, 8, giy153. [Google Scholar] [CrossRef]
- Wang, Y.-H.; Su, W.-H. Convolutional neural networks in computer vision for grain crop phenotyping: A review. Agronomy 2022, 12, 2659. [Google Scholar] [CrossRef]
- Weyler, J.; Magistri, F.; Seitz, P.; Behley, J.; Stachniss, C. In-field phenotyping based on crop leaf and plant instance segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 2725–2734. [Google Scholar]
- Uryasheva, A.; Kalashnikova, A.; Shadrin, D.; Evteeva, K.; Moskovtsev, E.; Rodichenko, N. Computer vision-based platform for apple leaves segmentation in field conditions to support digital phenotyping. Comput. Electron. Agric. 2022, 201, 107269. [Google Scholar] [CrossRef]
- Ariza-Sentís, M.; Baja, H.; Vélez, S.; Valente, J. Object detection and tracking on UAV RGB videos for early extraction of grape phenotypic traits. Comput. Electron. Agric. 2023, 211, 108051. [Google Scholar] [CrossRef]
- Liu, B.-Y.; Fan, K.-J.; Su, W.-H.; Peng, Y. Two-stage convolutional neural networks for diagnosing the severity of alternaria leaf blotch disease of the apple tree. Remote Sens. 2022, 14, 2519. [Google Scholar] [CrossRef]
- Lv, M.; Su, W.-H. YOLOV5-CBAM-C3TR: An optimized model based on transformer module and attention mechanism for apple leaf disease detection. Front. Plant Sci. 2024, 14, 1323301. [Google Scholar] [CrossRef]
- Yang, S.; Zheng, L.; He, P.; Wu, T.; Sun, S.; Wang, M. High-throughput soybean seeds phenotyping with convolutional neural networks and transfer learning. Plant Methods 2021, 17, 50. [Google Scholar] [CrossRef]
- Yu, X.; Yin, D.; Xu, H.; Pinto Espinosa, F.; Schmidhalter, U.; Nie, C.; Bai, Y.; Sankaran, S.; Ming, B.; Cui, N. Maize tassel number and tasseling stage monitoring based on near-ground and UAV RGB images by improved YoloV8. Precis. Agric. 2024, 25, 1800–1838. [Google Scholar] [CrossRef]
- Wang, B.; Gao, Y.; Yuan, X.; Xiong, S.; Feng, X. From species to cultivar: Soybean cultivar recognition using joint leaf image patterns by multiscale sliding chord matching. Biosyst. Eng. 2020, 194, 99–111. [Google Scholar] [CrossRef]
- Zhang, W.; Wu, S.; Wen, W.; Lu, X.; Wang, C.; Gou, W.; Li, Y.; Guo, X.; Zhao, C. Three-dimensional branch segmentation and phenotype extraction of maize tassel based on deep learning. Plant Methods 2023, 19, 76. [Google Scholar] [CrossRef]
- Songtao, H.; Ruifang, Z.; Yinghua, W.; Zhi, L.; Jianzhong, Z.; He, R.; Wanneng, Y.; Peng, S. Extraction of potato plant phenotypic parameters based on multi-source data. Smart Agric. 2023, 5, 132. [Google Scholar]
- Sapkota, R.; Ahmed, D.; Churuvija, M.; Karkee, M. Immature green apple detection and sizing in commercial orchards using YOLOv8 and shape fitting techniques. IEEE Access 2024, 12, 43436–43452. [Google Scholar] [CrossRef]
- Guan, H.; Deng, H.; Ma, X.; Zhang, T.; Zhang, Y.; Zhu, T.; Zhou, H.; Gu, Z.; Lu, Y. A corn canopy organs detection method based on improved DBi-YOLOv8 network. Eur. J. Agron. 2024, 154, 127076. [Google Scholar] [CrossRef]
- He, H.; Ma, X.; Guan, H.; Wang, F.; Shen, P. Recognition of soybean pods and yield prediction based on improved deep learning model. Front. Plant Sci. 2023, 13, 1096619. [Google Scholar] [CrossRef] [PubMed]
- Lu, W.; Du, R.; Niu, P.; Xing, G.; Luo, H.; Deng, Y.; Shu, L. Soybean yield preharvest prediction based on bean pods and leaves image recognition using deep learning neural network combined with GRNN. Front. Plant Sci. 2022, 12, 791256. [Google Scholar] [CrossRef] [PubMed]
- Zermas, D.; Morellas, V.; Mulla, D.; Papanikolopoulos, N. 3D model processing for high throughput phenotype extraction–the case of corn. Comput. Electron. Agric. 2020, 172, 105047. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Zhu, Y.; Gu, Q.; Zhao, Y.; Wan, H.; Wang, R.; Zhang, X.; Cheng, Y. Quantitative extraction and evaluation of tomato fruit phenotypes based on image recognition. Front. Plant Sci. 2022, 13, 859290. [Google Scholar] [CrossRef] [PubMed]
- Xiong, X.; Duan, L.; Liu, L.; Tu, H.; Yang, P.; Wu, D.; Chen, G.; Xiong, L.; Yang, W.; Liu, Q. Panicle-SEG: A robust image segmentation method for rice panicles in the field based on deep learning and superpixel optimization. Plant Methods 2017, 13, 104. [Google Scholar] [CrossRef] [PubMed]
- Teramoto, S.; Uga, Y. A deep learning-based phenotypic analysis of rice root distribution from field images. Plant Phenomics 2020, 2020, 3194308. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- Yu, S.; Fan, J.; Lu, X.; Wen, W.; Shao, S.; Guo, X.; Zhao, C. Hyperspectral technique combined with deep learning algorithm for prediction of phenotyping traits in lettuce. Front. Plant Sci. 2022, 13, 927832. [Google Scholar] [CrossRef]
- Zhang, J.; Yang, W.; Lu, Z.; Chen, D. HR-YOLOv8: A Crop Growth Status Object Detection Method Based on YOLOv8. Electronics 2024, 13, 1620. [Google Scholar] [CrossRef]
- Orchi, H.; Sadik, M.; Khaldoun, M.; Sabir, E. Real-time detection of crop leaf diseases using enhanced YOLOv8 algorithm. In Proceedings of the 2023 International Wireless Communications and Mobile Computing (IWCMC), Marrakesh, Morocco, 19–23 June 2023; IEEE: New York, NY, USA, 2023; pp. 1690–1696. [Google Scholar]
- Singh, A.K.; Singh, A.; Sarkar, S.; Ganapathysubramanian, B.; Schapaugh, W.; Miguez, F.E.; Carley, C.N.; Carroll, M.E.; Chiozza, M.V.; Chiteri, K.O. High-throughput phenotyping in soybean. In High-Throughput Crop Phenotyping; Springer: Cham, Switzerland, 2021; pp. 129–163. [Google Scholar]
- Momin, M.A.; Yamamoto, K.; Miyamoto, M.; Kondo, N.; Grift, T. Machine vision based soybean quality evaluation. Comput. Electron. Agric. 2017, 140, 452–460. [Google Scholar] [CrossRef]
- Wang, F.; Ma, X.; Liu, M.; Wei, B. Three-dimensional reconstruction of soybean canopy based on multivision technology for calculation of phenotypic traits. Agronomy 2022, 12, 692. [Google Scholar] [CrossRef]
- Moeinizade, S.; Pham, H.; Han, Y.; Dobbels, A.; Hu, G. An applied deep learning approach for estimating soybean relative maturity from UAV imagery to aid plant breeding decisions. Mach. Learn. Appl. 2022, 7, 100233. [Google Scholar] [CrossRef]
- Bhat, J.A.; Yu, D. High-throughput NGS-based genotyping and phenotyping: Role in genomics-assisted breeding for soybean improvement. Legume Sci. 2021, 3, e81. [Google Scholar] [CrossRef]
- Rahman, S.U.; McCoy, E.; Raza, G.; Ali, Z.; Mansoor, S.; Amin, I. Improvement of soybean; A way forward transition from genetic engineering to new plant breeding technologies. Mol. Biotechnol. 2023, 65, 162–180. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Wu, T.; Dong, Y. YOLO-SE: Improved YOLOv8 for remote sensing object detection and recognition. Appl. Sci. 2023, 13, 12977. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
- Lou, H.; Duan, X.; Guo, J.; Liu, H.; Gu, J.; Bi, L.; Chen, H. DC-YOLOv8: Small-size object detection algorithm based on camera sensor. Electronics 2023, 12, 2323. [Google Scholar] [CrossRef]
- Wang, A.; Chen, H.; Lin, Z.; Pu, H.; Ding, G. Repvit: Revisiting mobile cnn from vit perspective. arXiv 2023, arXiv:2307.09283. [Google Scholar]
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. Ghostnet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar]
- Chen, J.; Mai, H.; Luo, L.; Chen, X.; Wu, K. Effective feature fusion network in BIFPN for small object detection. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; IEEE: New York, NY, USA, 2021; pp. 699–703. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Liu, X.; Peng, H.; Zheng, N.; Yang, Y.; Hu, H.; Yuan, Y. Efficientvit: Memory efficient vision transformer with cascaded group attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14420–14430. [Google Scholar]
- Xiong, Z.; Wu, J. Multi-Level Attention Split Network: A Novel Malaria Cell Detection Algorithm. Information 2024, 15, 166. [Google Scholar] [CrossRef]
- Yu, W.; Luo, M.; Zhou, P.; Si, C.; Zhou, Y.; Wang, X.; Feng, J.; Yan, S. Metaformer is actually what you need for vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10819–10829. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
- Hart, P.E.; Nilsson, N.J.; Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L. Pytorch: An imperative style, high-performance deep learning library. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
| Prediction Objects | Model | R² | RMSE |
|---|---|---|---|
| Number of pods per plant | YOLOv8 | 0.87 | 5.33 |
| | YOLOv8-Repvit | 0.96 | 2.89 |
| | YOLOv8-Ghost | 0.87 | 5.64 |
| | YOLOv8-Bifpn | 0.61 | 8.11 |
| Number of beans per plant | YOLOv8 | 0.90 | 11.80 |
| | YOLOv8-Repvit | 0.96 | 6.90 |
| | YOLOv8-Ghost | 0.90 | 12.50 |
| | YOLOv8-Bifpn | 0.65 | 17.33 |
| Stem and branch length | MCA | 0.96 | 3.42 |
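The R² and RMSE values above quantify agreement between model predictions and manual ground-truth counts. As a minimal sketch of how such scores are computed (not the authors' exact evaluation script):

```python
def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R^2) and root-mean-square error
    between ground-truth and predicted values."""
    n = len(y_true)
    mean_true = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)          # total sum of squares
    r2 = 1 - ss_res / ss_tot
    rmse = (ss_res / n) ** 0.5
    return r2, rmse
```

A perfect predictor yields R² = 1 and RMSE = 0; a constant bias leaves R² below 1 while inflating RMSE in the units of the measured trait (pods, beans, or centimeters of stem).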
| Model | Box (P) | Box (R) | Box (mAP50) | Mask (P) | Mask (R) | Mask (mAP50) | Prediction Time |
|---|---|---|---|---|---|---|---|
| YOLOv8 | 0.785 | 0.802 | 0.848 | 0.786 | 0.797 | 0.846 | 7.6 ms |
| YOLOv8-Repvit | 0.786 | 0.805 | 0.857 | 0.783 | 0.810 | 0.856 | 7.7 ms |
| YOLOv8-Ghost | 0.796 | 0.789 | 0.854 | 0.795 | 0.785 | 0.849 | 7.8 ms |
| YOLOv8-Bifpn | 0.781 | 0.814 | 0.855 | 0.780 | 0.813 | 0.853 | 7.9 ms |
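The mAP50 columns score a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU computation for axis-aligned boxes (corner `(x1, y1, x2, y2)` format assumed) looks like:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Mask mAP50 applies the same threshold to pixel-mask IoU rather than box IoU.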
| Dataset Type | Data Volume | Description |
|---|---|---|
| Original dataset | 442 images | Images of 442 soybean plants. |
| Extended dataset | 2210 images | Expanded to 2210 images through data augmentation (brightness adjustment, horizontal flipping, noise addition, and image panning) to enhance model generalization. |
| Training dataset | 1545 images | 70% of the extended dataset, used to train the model. |
| Validation dataset | 665 images | 30% of the extended dataset, used to validate the model. |
| Test dataset | 200 images | An additional 200 soybean plant images collected to test the four YOLOv8-based models and the proposed stem and branch length extraction methods. |
| Total number of labeled pods | 12,110 | Total number of labeled pods. Pods were categorized by bean count as 1, 2, or 3; pods with four or more beans were grouped into the 3-bean class because they accounted for less than 0.5% of the total. After data augmentation, a total of 60,550 labeled pods were obtained. |
| Total number of labeled plants | 442 plants | Total number of labeled plants used for training and validation. |
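The 442 → 2210 expansion corresponds to keeping each original image plus four augmented copies (442 × 5 = 2210). A minimal NumPy sketch of the four listed augmentations — the specific parameter values (brightness offset, noise level, pan distance) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return the original uint8 image plus four augmented variants:
    brightness shift, horizontal flip, Gaussian noise, and panning."""
    brightened = np.clip(img.astype(np.int16) + 30, 0, 255).astype(np.uint8)
    flipped = img[:, ::-1]  # horizontal (left-right) flip
    noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255).astype(np.uint8)
    # np.roll wraps pixels around the edge; a real pipeline would pad instead
    panned = np.roll(img, shift=20, axis=1)
    return [img, brightened, flipped, noisy, panned]
```

Applying `augment` to each of the 442 originals yields the 2210-image extended dataset before the 70/30 train/validation split.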
| Hyperparameter | Value |
|---|---|
| Input image size | 640 × 640 |
| Batch size | 16 |
| Epochs | 100 |
| Maximum learning rate | 0.001 |
| Optimizer | SGD |
| Weight decay | 0.0005 |
| Thread count | 32 |
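These settings map onto the training arguments of the Ultralytics YOLOv8 API. A hedged sketch of the invocation — the dataset YAML name is hypothetical, and the argument names assume the ultralytics 8.x interface rather than the authors' exact script:

```python
# Hyperparameters from the table above, expressed as Ultralytics train() kwargs.
HYPERPARAMS = dict(
    imgsz=640,            # input image size
    batch=16,             # batch size
    epochs=100,
    lr0=0.001,            # maximum (initial) learning rate
    optimizer="SGD",
    weight_decay=0.0005,
    workers=32,           # data-loading thread count
)

# Hypothetical usage (dataset config "soybean.yaml" is an assumption):
# from ultralytics import YOLO
# model = YOLO("yolov8n-seg.pt")
# model.train(data="soybean.yaml", **HYPERPARAMS)
```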
| Name | Information |
|---|---|
| CPU | Intel® Core™ i9-14900K @ 6.00 GHz |
| GPU | NVIDIA GeForce RTX 4080, 16 GB |
| Operating system | Windows 11 |
| Deep learning framework | PyTorch 2.1.2 [57] |
| Programming language | Python 3.9 |
| Integrated development environment | VS Code |
| Package management tool | Anaconda |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Q.-Y.; Fan, K.-J.; Tian, Z.; Guo, K.; Su, W.-H. High-Precision Automated Soybean Phenotypic Feature Extraction Based on Deep Learning and Computer Vision. Plants 2024, 13, 2613. https://doi.org/10.3390/plants13182613