Efficient and Lightweight Automatic Wheat Counting Method with Observation-Centric SORT for Real-Time Unmanned Aerial Vehicle Surveillance
Abstract
1. Introduction
- (1) To enhance model robustness, various data augmentation methods were applied to the acquired dataset so that the model performs well under diverse conditions, such as differing contrast, lighting, and environments.
- (2) To improve computational efficiency, FasterNet [22] was utilized as the primary backbone for feature extraction. A specific objective was to minimize the number of parameters, making the model easy to deploy on mobile devices.
- (3) To enhance the backbone network, dynamic sparse attention and deformable convolution modules were integrated into the model. A specific objective was to mitigate intricate environmental factors, such as overlapping and adhering wheat ears, while improving the model’s capability to extract wheat ear features efficiently.
- (4) To comprehensively capture fine details and contextual characteristics, a feature pyramid network (FPN) [23] and a lightweight upsampling operator were integrated into the PAN [24]. A specific objective was to detect wheat ears of various sizes through optimal multi-scale feature extraction while minimizing information loss during upsampling.
- (5) To build upon the wheat ear detection algorithm, a Kalman filter-based tracking algorithm was incorporated into the model. A specific objective was to overcome the limitations of traditional image-based counting methods through accurate motion prediction, thereby avoiding repeated counts across consecutive video frames. Another objective was to significantly reduce the manual work of wheat ear counting in the field.
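Contribution (1) concerns contrast and lighting augmentation. A minimal NumPy sketch of such transforms is shown below; the specific operations and parameter ranges are illustrative assumptions, not the paper's exact pipeline (the references cite the imgaug library as a tool for this step).

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Randomly flip, shift brightness, and scale contrast of one
    HxWx3 uint8 image (illustrative parameter ranges)."""
    img = image.astype(np.float32)
    if rng.random() < 0.5:                  # random horizontal flip
        img = img[:, ::-1, :]
    img += rng.uniform(-30, 30)             # brightness shift
    mean = img.mean()
    img = (img - mean) * rng.uniform(0.8, 1.2) + mean  # contrast scaling
    return np.clip(img, 0, 255).astype(np.uint8)

sample = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
aug = augment(sample)
```

Applying several such randomized transforms per training image simulates the varied field conditions (lighting, contrast, viewpoint) the model must handle.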
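Contribution (5) relies on Kalman-filter motion prediction to re-associate each wheat ear across frames instead of counting it again. The sketch below shows the basic constant-velocity predict/update cycle for one tracked ear; it is a simplified illustration with assumed noise parameters, whereas the OC-SORT tracker used in this work adds observation-centric re-updates on top of this baseline.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for one tracked object's
    center point (assumed noise settings; illustrative only)."""

    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])  # state: [cx, cy, vx, vy]
        self.P = np.eye(4) * 10.0              # state covariance
        self.F = np.eye(4)                     # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                  # we observe position only
        self.Q = np.eye(4) * 0.01              # process noise
        self.R = np.eye(2)                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                      # predicted center next frame

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = ConstantVelocityKF(0.0, 0.0)
for t in range(1, 6):            # an ear drifting 2 px/frame in x
    kf.predict()
    kf.update((2.0 * t, 0.0))
pred = kf.predict()              # predicted position in the next frame
```

After a few frames the velocity estimate converges, so the predicted position lands near the ear's true next location; matching detections to such predictions is what prevents double counting in the video sequence.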
2. Materials and Methods
2.1. Data Acquisition and Processing
2.1.1. Source of Image Dataset
2.1.2. Image Data Partitioning and Augmentation
2.1.3. Video Data Collection
2.1.4. Annotation of Video Data
2.2. Model Design Method
2.2.1. FasterNet
2.2.2. Loss Function
2.2.3. BiFormer
2.2.4. DCNv2
2.2.5. Improving the PAN Architecture
2.2.6. OC-SORT
2.2.7. Wheat-FasterYOLO Model Structure
2.2.8. Practical Application Process of the Model
2.3. Evaluation Indicators
3. Results and Discussion
3.1. The Impact of Data Augmentation
3.2. Comparative Experiments with Different Attention Integrations
3.3. Ablation Experiment
3.4. Comparative Experimental Analysis of Different Detection Models
3.5. Comparative Experiments Incorporating Different Tracking Algorithms
3.6. Analysis of Counting Accuracy in the Wheat-FasterYOLO Model
3.7. Advantages and Limitations
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhao, J.; Yan, J.; Xue, T.; Wang, S.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W.; Zhang, X. A deep learning method for oriented and small wheat spike detection (OSWSDet) in UAV images. Comput. Electron. Agric. 2022, 198, 107087. [Google Scholar] [CrossRef]
- Zhou, H.; Riche, A.B.; Hawkesford, M.J.; Whalley, W.R.; Atkinson, B.S.; Sturrock, C.J.; Mooney, S.J. Determination of wheat spike and spikelet architecture and grain traits using X-ray Computed Tomography imaging. Plant Methods 2021, 17, 26. [Google Scholar] [CrossRef] [PubMed]
- Nerson, H. Effects of population density and number of ears on wheat yield and its components. Field Crops Res. 1980, 3, 225–234. [Google Scholar] [CrossRef]
- Madec, S.; Jin, X.; Lu, H.; De Solan, B.; Liu, S.; Duyme, F.; Heritier, E.; Baret, F. Ear density estimation from high resolution RGB imagery using deep learning technique. Agric. For. Meteorol. 2019, 264, 225–234. [Google Scholar] [CrossRef]
- Sadeghi-Tehran, P.; Virlet, N.; Ampe, E.M.; Reyns, P.; Hawkesford, M.J. DeepCount: In-field automatic quantification of wheat spikes using simple linear iterative clustering and deep convolutional neural networks. Front. Plant Sci. 2019, 10, 1176. [Google Scholar] [CrossRef]
- Sun, J.; Yang, K.; Chen, C.; Shen, J.; Yang, Y.; Wu, X.; Norton, T. Wheat head counting in the wild by an augmented feature pyramid networks-based convolutional neural network. Comput. Electron. Agric. 2022, 193, 106705. [Google Scholar] [CrossRef]
- Zhang, L.; Chen, Y.; Li, Y.; Ma, J.; Du, K. Detection and Counting System for winter wheat ears based on convolutional neural network. Trans. Chin. Soc. Agric. Mach. 2019, 50, 144–150. [Google Scholar]
- Ma, J.; Li, Y.; Liu, H.; Wu, Y.; Zhang, L. Towards improved accuracy of UAV-based wheat ears counting: A transfer learning method of the ground-based fully convolutional network. Expert Syst. Appl. 2022, 191, 116226. [Google Scholar] [CrossRef]
- Zhou, X.; Zheng, H.; Xu, X.; He, J.; Ge, X.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.; Tian, Y. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 246–255. [Google Scholar] [CrossRef]
- Fernandez-Gallego, J.A.; Lootens, P.; Borra-Serrano, I.; Derycke, V.; Haesaert, G.; Roldán-Ruiz, I.; Araus, J.L.; Kefauver, S.C. Automatic wheat ear counting using machine learning based on RGB UAV imagery. Plant J. 2020, 103, 1603–1613. [Google Scholar] [CrossRef]
- Tan, C.; Zhang, P.; Zhang, Y.; Zhou, X.; Wang, Z.; Du, Y.; Mao, W.; Li, W.; Wang, D.; Guo, W. Rapid recognition of field-grown wheat spikes based on a superpixel segmentation algorithm using digital images. Front. Plant Sci. 2020, 11, 259. [Google Scholar] [CrossRef]
- Bao, W.; Lin, Z.; Hu, G.; Liang, D.; Huang, L.; Zhang, X. Method for wheat ear counting based on frequency domain decomposition of MSVF-ISCT. Inf. Process. Agric. 2023, 10, 240–255. [Google Scholar] [CrossRef]
- Fang, Y.; Qiu, X.; Guo, T.; Wang, Y.; Cheng, T.; Zhu, Y.; Chen, Q.; Cao, W.; Yao, X.; Niu, Q.; et al. An automatic method for counting wheat tiller number in the field with terrestrial LiDAR. Plant Methods 2020, 16, 132. [Google Scholar] [CrossRef] [PubMed]
- Pérez-Porras, F.J.; Torres-Sánchez, J.; López-Granados, F.; Mesas-Carrascosa, F.J. Early and on-ground image-based detection of poppy (Papaver rhoeas) in wheat using YOLO architectures. Weed Sci. 2023, 71, 50–58. [Google Scholar] [CrossRef]
- Yang, B.; Pan, M.; Gao, Z.; Zhi, H.; Zhang, X. Cross-Platform Wheat Ear Counting Model Using Deep Learning for UAV and Ground Systems. Agronomy 2023, 13, 1792. [Google Scholar] [CrossRef]
- Zaji, A.; Liu, Z.; Xiao, G.; Bhowmik, P.; Sangha, J.S.; Ruan, Y. AutoOLA: Automatic object level augmentation for wheat spikes counting. Comput. Electron. Agric. 2023, 205, 107623. [Google Scholar] [CrossRef]
- Alkhudaydi, T.; De la Iglesia, B. Counting spikelets from infield wheat crop images using fully convolutional networks. Neural Comput. Appl. 2022, 34, 17539–17560. [Google Scholar] [CrossRef]
- Qiu, R.; He, Y.; Zhang, M. Automatic Detection and Counting of Wheat Spikelet Using Semi-Automatic Labeling and Deep Learning. Front. Plant Sci. 2022, 13, 872555. [Google Scholar] [CrossRef]
- Dimitrov, D.D. Internet and Computers for Agriculture. Agriculture 2023, 13, 155. [Google Scholar] [CrossRef]
- Zaji, A.; Liu, Z.; Xiao, G.; Sangha, J.S.; Ruan, Y. A survey on deep learning applications in wheat phenotyping. Appl. Soft Comput. 2022, 131, 109761. [Google Scholar] [CrossRef]
- Wu, T.; Zhong, S.; Chen, H.; Geng, X. Research on the Method of Counting Wheat Ears via Video Based on Improved YOLOv7 and DeepSort. Sensors 2023, 23, 4880. [Google Scholar] [CrossRef]
- Chen, J.; Kao, S.H.; He, H.; Zhuo, W.; Wen, S.; Lee, C.H.; Chan, S.H.G. Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 12021–12031. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
- David, E.; Madec, S.; Sadeghi-Tehran, P.; Aasen, H.; Zheng, B.; Liu, S.; Kirchgessner, N.; Ishikawa, G.; Nagasawa, K.; Badhon, M.A.; et al. Global Wheat Head Detection (GWHD) dataset: A large and diverse dataset of high-resolution RGB-labelled images to develop and benchmark wheat head detection methods. Plant Phenomics 2020, 2020, 3521852. [Google Scholar] [CrossRef]
- Jung, A.B.; Wada, K.; Crall, J.; Tanaka, S.; Graving, J.; Reinders, C.; Yadav, S.; Banerjee, J.; Vecsei, G.; Kraft, A.; et al. Imgaug. Available online: https://github.com/aleju/imgaug (accessed on 5 June 2023).
- DarkLabel. Available online: https://github.com/darkpgmr/DarkLabel (accessed on 1 June 2023).
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. Ghostnet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, South Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
- Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131. [Google Scholar]
- Yu, J.; Jiang, Y.; Wang, Z.; Cao, Z.; Huang, T. Unitbox: An advanced object detection network. In Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 516–520. [Google Scholar]
- Gevorgyan, Z. SIoU loss: More powerful learning for bounding box regression. arXiv 2022, arXiv:2205.12740. [Google Scholar]
- Zhu, L.; Wang, X.; Ke, Z.; Zhang, W.; Lau, R.W. BiFormer: Vision Transformer with Bi-Level Routing Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 10323–10333. [Google Scholar]
- Zhu, X.; Hu, H.; Lin, S.; Dai, J. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9308–9316. [Google Scholar]
- Wang, J.; Chen, K.; Xu, R.; Liu, Z.; Loy, C.C.; Lin, D. Carafe: Content-aware reassembly of features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, South Korea, 27 October–2 November 2019; pp. 3007–3016. [Google Scholar]
- Cao, J.; Pang, J.; Weng, X.; Khirodkar, R.; Kitani, K. Observation-centric sort: Rethinking sort for robust multi-object tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 9686–9696. [Google Scholar]
- Luiten, J.; Osep, A.; Dendorfer, P.; Torr, P.; Geiger, A.; Leal-Taixé, L.; Leibe, B. Hota: A higher order metric for evaluating multi-object tracking. Int. J. Comput. Vis. 2021, 129, 548–578. [Google Scholar] [CrossRef]
- Qin, X.; Li, N.; Weng, C.; Su, D.; Li, M. Simple attention module based speaker verification with iterative noisy label detection. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; pp. 6722–6726. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Liu, Y.; Shao, Z.; Hoffmann, N. Global attention mechanism: Retain information to enhance channel-spatial interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016, Proceedings, Part I 14; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. Yolox: Exceeding yolo series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475. [Google Scholar]
- Luiten, J.; Hoffhues, A. TrackEval. Available online: https://github.com/JonathonLuiten/TrackEval (accessed on 21 June 2023).
- Zhang, Y.; Sun, P.; Jiang, Y.; Yu, D.; Weng, F.; Yuan, Z.; Luo, P.; Liu, W.; Wang, X. Bytetrack: Multi-object tracking by associating every detection box. In Proceedings of the European Conference on Computer Vision; Springer Nature Switzerland: Cham, Switzerland, 2022; pp. 1–21. [Google Scholar]
- Du, Y.; Zhao, Z.; Song, Y.; Zhao, Y.; Su, F.; Gong, T.; Meng, H. Strongsort: Make deepsort great again. IEEE Trans. Multimed. 2023. Early Access. [Google Scholar] [CrossRef]
Video Name | Wheat Variety | Collection Location | Location Coordinates | Video Length/Frames |
---|---|---|---|---|
Yangmai 17.mp4 | Yangmai 17 | Taodeng Town, Laibin City | Longitude 109 E, Latitude 23 N | 3267 |
Huanuo No.1.mp4 | Huanuo No.1 | Changfu Village, Laibin City | Longitude 109 E, Latitude 23 N | 2327 |
Xumai 45.mp4 | Xumai 45 | Shuangqiao Village, Guilin City | Longitude 111 E, Latitude 26 N | 3264 |
Parameter | Value
---|---
Batch size | 16
Learning rate | 0.01
Epochs | 230
Image size | 640 × 640
Optimizer | SGD
Momentum | 0.937
IoU threshold | 0.55
Data State | P/% | R/% | mAP/% | F1/% |
---|---|---|---|---|
Non-augmented | 86.28 | 76.62 | 84.91 | 81.17 |
Augmented | 86.52 | 77.88 | 85.66 | 81.97 |
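The F1 values reported in the tables are the harmonic mean of precision (P) and recall (R); the augmented row above can be reproduced to two decimal places:

```python
def f1_score(p, r):
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * p * r / (p + r)

# Augmented row: P = 86.52, R = 77.88
print(round(f1_score(86.52, 77.88), 2))  # 81.97, matching the table
```

Values for other rows agree with the tables up to rounding of the reported P and R.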
Attention | P/% | R/% | mAP/% | F1/% |
---|---|---|---|---|
None | 86.52 | 77.88 | 85.66 | 81.97 |
SimAM | 86.75 | 78.25 | 86.11 | 82.28 |
CBAM | 86.78 | 78.85 | 85.81 | 82.63 |
GAM | 89.99 | 84.02 | 90.49 | 86.9 |
SE | 86.19 | 77.31 | 85.18 | 81.51 |
BiFormer | 90.2 | 85.35 | 91.21 | 87.71 |
BiFormer | CARAFE-PAN | DCNv2 | P/% | R/% | mAP/% | F1/% |
---|---|---|---|---|---|---|
 | | | 86.52 | 77.88 | 85.66 | 81.97
✓ | | | 90.2 | 85.35 | 91.21 | 87.71
✓ | ✓ | | 92.32 | 88.47 | 93.58 | 90.36
✓ | ✓ | ✓ | 92.63 | 89.04 | 94.01 | 90.8
Model | P/% | R/% | mAP/% | F1/% | Parameters | GFLOPs | FPS |
---|---|---|---|---|---|---|---|
SSD-VGG | 90.94 | 63.96 | 82.59 | 75.1 | 2.36 × | 136.6 | 66 |
SSD-MobileNet | 93.44 | 71.21 | 88.45 | 80.82 | 3.54 × | 3.0 | 87 |
Faster R-CNN | 68.52 | 85.41 | 81.13 | 76.04 | 2.83 × | 474.1 | 30 |
EfficientDet | 92.43 | 79.01 | 89.69 | 85.19 | 6.56 × | 5.7 | 21 |
YOLOX | 93.04 | 89.6 | 93.69 | 91.29 | 8.04 × | 21.6 | 117 |
YOLOv7-Tiny | 92.89 | 88.86 | 93.0 | 90.83 | 6.01 × | 13.0 | 125 |
Wheat-FasterYOLO | 92.89 | 89.04 | 94.01 | 90.8 | 1.34 × 10 | 3.9 | 185 |
Tracker | Wheat Variety | DetA/% | AssA/% | DetRe/% | AssRe/% | HOTA/% | FPS |
---|---|---|---|---|---|---|---|
StrongSORT | Yangmai 17 | 60.58 | 38.69 | 74.57 | 75.04 | 48.06 | 23
 | Huanuo No.1 | 58.52 | 42.72 | 74.83 | 72.54 | 49.85 | 17
 | Xumai 45 | 48.58 | 30.71 | 57.0 | 62.12 | 38.49 | 20
 | Average | 55.89 | 37.37 | 68.8 | 69.33 | 47.47 | 20
ByteTrack | Yangmai 17 | 58.45 | 65.92 | 65.43 | 73.12 | 61.75 | 115
 | Huanuo No.1 | 60.16 | 62.28 | 67.27 | 70.43 | 61.09 | 101
 | Xumai 45 | 25.82 | 54.92 | 26.61 | 59.31 | 37.61 | 127
 | Average | 48.14 | 61.04 | 53.10 | 67.62 | 53.48 | 114
OC-SORT | Yangmai 17 | 63.0 | 69.25 | 74.41 | 78.75 | 65.64 | 104
 | Huanuo No.1 | 62.79 | 65.14 | 73.88 | 75.95 | 63.82 | 83
 | Xumai 45 | 46.72 | 58.34 | 51.63 | 65.1 | 52.11 | 90
 | Average | 57.5 | 64.24 | 66.64 | 73.27 | 60.52 | 92
Wheat Variety | IDs | GT_IDs | Counting Accuracy/% |
---|---|---|---|
Yangmai 17 | 374 | 343 | 91.71 |
Huanuo No.1 | 518 | 480 | 92.66 |
Xumai 45 | 680 | 745 | 91.28 |
Average | 524 | 523 | 91.88 |
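The per-variety accuracies above are consistent with taking the ratio of the smaller to the larger of the tracker's ID count and the ground-truth ID count (an inferred formula, shown here only to make the table reproducible); the average row averages the three per-variety accuracies:

```python
def counting_accuracy(pred_ids, gt_ids):
    """Ratio of the smaller to the larger count, in percent; this
    reproduces the per-variety values in the table (inferred formula)."""
    return 100.0 * min(pred_ids, gt_ids) / max(pred_ids, gt_ids)

rows = [(374, 343), (518, 480), (680, 745)]   # (IDs, GT_IDs) per variety
accs = [counting_accuracy(p, g) for p, g in rows]
print([round(a, 2) for a in accs])            # [91.71, 92.66, 91.28]
print(round(sum(accs) / len(accs), 2))        # 91.88
```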
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Chen, J.; Hu, X.; Lu, J.; Chen, Y.; Huang, X. Efficient and Lightweight Automatic Wheat Counting Method with Observation-Centric SORT for Real-Time Unmanned Aerial Vehicle Surveillance. Agriculture 2023, 13, 2110. https://doi.org/10.3390/agriculture13112110