A Method of Modern Standardized Apple Orchard Flowering Monitoring Based on S-YOLO
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Design
2.2. Image Data Acquisition
- The dataset used for the experiment covered a range of weather conditions, apple tree growth postures, and the complete flowering process.
- The original images were high-resolution, 3000 × 3000 pixels, in which the vast majority of flowers were smaller than 50 × 50 pixels.
- Flowers at each growth stage were manually labeled in sufficient number and at sufficient fineness; all factors affecting apple flower detection, such as biometric features, postures, shadows, and illumination, were considered during annotation.
2.3. Slicing Using SAHI
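Section 2.3 applies SAHI-style slicing to the 3000 × 3000 px originals. As a rough illustration of the idea (a minimal re-implementation, not the SAHI library's API; the 640 px tile size and 20% overlap are assumptions), the overlapping tiling can be sketched as:

```python
def axis_starts(length, tile, step):
    """Window start offsets along one axis; the final window sits flush with the edge."""
    if length <= tile:
        return [0]
    starts = list(range(0, length - tile, step))
    starts.append(length - tile)
    return starts

def slice_boxes(width, height, tile=640, overlap=0.2):
    """Overlapping (x0, y0, x1, y1) crop windows covering the full image."""
    step = max(int(tile * (1 - overlap)), 1)
    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in axis_starts(height, tile, step)
            for x in axis_starts(width, tile, step)]
```

Detections from each tile are then mapped back to full-image coordinates and merged (e.g., with NMS), which is the "hyper inference" half of the SAHI scheme.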
2.4. S-YOLO Detection Model
2.4.1. Model Construction
2.4.2. Model Training and Validation Environment
2.4.3. Evaluation Indicators
- AP: Average precision for a single category (IoU threshold from 0.5 to 0.95 in steps of 0.05), for the bud, half-open, fully open, and end-open apple flower classes;
- mAPALL: Mean average precision over the four flowering stages (all object sizes);
- mAPS: mAP for small objects whose area is smaller than 32² pixels;
- mAPM: mAP for medium objects whose area is between 32² and 96² pixels;
- mAPL: mAP for large objects whose area is bigger than 96² pixels.
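The 32² and 96² px area thresholds follow the COCO convention for small/medium/large objects. A minimal sketch of the bucketing (the function name is illustrative):

```python
def size_bucket(width, height):
    """COCO-style size bucket for a bounding box, by pixel area."""
    area = width * height
    if area < 32 ** 2:    # counted under mAP-S
        return "small"
    if area < 96 ** 2:    # counted under mAP-M
        return "medium"
    return "large"        # counted under mAP-L
```

By these thresholds a typical bud box (about 17 × 17 px, Section 2.2) counts as a small object, and even an average fully open flower (about 49 × 49 px) only as medium.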
2.5. Apple Flowering Monitoring
3. Results and Discussion
3.1. Image Slice Results
3.2. Flower Detection with S-YOLO
3.2.1. Comparison with YOLOX-s
3.2.2. The Results of Different Versions of Models
3.2.3. Comparing the Effects of Other Measures
3.3. Apple Flowering Monitoring Results
4. Conclusions
- Combining YOLOX with a Swin Transformer backbone and adding the SAHI algorithm produced the S-YOLO model. S-YOLO-s improved precision over the original YOLOX-s by 7.94%, 8.05%, 3.49%, and 6.96% for the four flowering states and by 10.00%, 9.10%, 13.10%, and 7.20% for mAPALL, mAPS, mAPM, and mAPL, respectively. S-YOLO-l achieved 88.18%, 88.95%, 89.50%, and 91.95% precision for the four flowering states and 39.00%, 32.10%, 50.60%, and 64.30% for the four mAP metrics, respectively. Excluding the boost from the SAHI algorithm, the non-pure-convolutional S-YOLO-l slightly outperformed the YOLOX-l model of similar parameter count and FLOPs on the original dataset, with precision improvements of 3.30%, 1.98%, 0.26%, and 1.88%. Using a bigger Swin Transformer backbone, tuning the proportion of structural parameters, and collecting more training data may further improve these results.
- The SAHI algorithm changed the aspect ratios and average areas of the detected objects by only about 3.00% and 5.00% at most, respectively, while increasing the object-to-image area ratio by more than 1250%. It increased the number of flower annotations in the four growth stages by 170.20%, 170.11%, 182.41%, and 176.16%, respectively, raising the total amount of annotated data by 150% to 109,813 and providing more quality data for model training. The experimental results show that SAHI improved the precision by 0.70%, 0.04%, 1.04%, and 3.38% for the four flowering stages and the mAPALL, mAPS, mAPM, and mAPL by 6.70%, 5.50%, 7.20%, and 7.70%, respectively; the larger the detected object, the greater the improvement.
- Using the results of S-YOLO, the quantity and percentage of apple flowers at each stage and the flowering intensity of the orchard were estimated daily during the flowering period, and the peak time of each stage was identified. The average flower densities at the peaks of the four stages were 55.687, 47.565, 118.183, and 17.522 flowers/tree on 3, 6, 9, and 15 April, respectively, corresponding to 75.7%, 46.7%, 82.26%, and 49.58% of all flowers. Across the observation dates, the flowering intensities of the orchard were 1.13%, 3.94%, 19.54%, 37.34%, 57.57%, 72.18%, 82.26%, 81.59%, 75.74%, 77.04%, 76.59%, 47.83%, and 12.18%. The orchard entered its first flowering stage on 3 April, its middle flowering stage on 5 April, its full flowering stage on 7 April, and its last flowering stage on 15 April.
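The per-stage densities and percentages above reduce to simple ratios over the daily detection totals. A minimal sketch of that tallying step (names and the example counts are illustrative, and this is not the authors' exact definition of flowering intensity):

```python
def stage_stats(counts, n_trees):
    """Per-stage flower density (flowers/tree) and share (% of all detections)
    from one day's S-YOLO detection totals, e.g. counts = {"bud": 30, ...}."""
    total = sum(counts.values())
    density = {k: v / n_trees for k, v in counts.items()}
    share = {k: 100.0 * v / total for k, v in counts.items()}
    return density, share
```

Running this per acquisition date yields the daily density and percentage curves from which the peak of each stage can be read off.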
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhou, G.; Xia, X. Present Situation and Development Prospect of the Digital Orchard Technology. In China’s e-Science Blue Book 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 443–458.
- Link, H. Significance of Flower and Fruit Thinning on Fruit Quality. Plant Growth Regul. 2000, 31, 17–26.
- Iwanami, H.; Moriya-Tanaka, Y.; Honda, C.; Hanada, T.; Wada, M. A Model for Representing the Relationships among Crop Load, Timing of Thinning, Flower Bud Formation, and Fruit Weight in Apples. Sci. Hortic. 2018, 242, 181–187.
- Bound, S.A. Precision Crop Load Management of Apple (Malus × domestica Borkh.) without Chemicals. Horticulturae 2018, 5, 3.
- Peck, G.M.; Combs, L.D.; DeLong, C.; Yoder, K.S. Precision Apple Flower Thinning Using Organically Approved Chemicals. In Proceedings of the International Symposium on Innovation in Integrated and Organic Horticulture (INNOHORT), Avignon, France, 8–12 June 2015; pp. 47–52.
- Farjon, G.; Krikeb, O.; Hillel, A.B.; Alchanatis, V. Detection and Counting of Flowers on Apple Trees for Better Chemical Thinning Decisions. Precis. Agric. 2020, 21, 503–521.
- Nautiyal, P.; Sharma, U.; Singh, V.; Goswami, S.; Agrawal, K.; Krishali, V.; Bisht, R.; Mittal, H.; Mehta, R.; Pokhriyal, A. Fruit Thinning: Purpose, Methods & Role of Plant Growth Regulators. Pharma Innov. J. 2022, 11, 1500–1504.
- Solomakhin, A.A.; Blanke, M.M. Mechanical Flower Thinning Improves the Fruit Quality of Apples. J. Sci. Food Agric. 2010, 90, 735–741.
- Koike, H.; Tamai, H.; Ono, T.; Shigehara, I. Influence of Time of Thinning on Yield, Fruit Quality and Return Flowering of ‘Fuji’ Apple. J. Am. Pomol. Soc. 2003, 57, 169.
- Wu, D.; Lv, S.; Jiang, M.; Song, H. Using Channel Pruning-Based YOLO v4 Deep Learning Algorithm for the Real-Time and Accurate Detection of Apple Flowers in Natural Environments. Comput. Electron. Agric. 2020, 178, 105742.
- Sun, K.; Wang, X.; Liu, S.; Liu, C. Apple, Peach, and Pear Flower Detection Using Semantic Segmentation Network and Shape Constraint Level Set. Comput. Electron. Agric. 2021, 185, 106150.
- Bhattarai, U.; Bhusal, S.; Majeed, Y.; Karkee, M. Automatic Blossom Detection in Apple Trees Using Deep Learning. IFAC-PapersOnLine 2020, 53, 15810–15815.
- Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Instance Segmentation of Apple Flowers Using the Improved Mask R-CNN Model. Biosyst. Eng. 2020, 193, 264–278.
- Wang, X.A.; Tang, J.; Whitty, M. DeepPhenology: Estimation of Apple Flower Phenology Distributions Based on Deep Learning. Comput. Electron. Agric. 2021, 185, 106123.
- Yuan, W.; Choi, D. UAV-Based Heating Requirement Determination for Frost Management in Apple Orchard. Remote Sens. 2021, 13, 273.
- Wang, X.A.; Tang, J.; Whitty, M. Side-View Apple Flower Mapping Using Edge-Based Fully Convolutional Networks for Variable Rate Chemical Thinning. Comput. Electron. Agric. 2020, 178, 105673.
- Piani, M.; Bortolotti, G.; Manfrini, L. Apple Orchard Flower Clusters Density Mapping by Unmanned Aerial Vehicle RGB Acquisitions. In Proceedings of the 2021 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Trento-Bolzano, Italy, 3–5 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 92–96.
- Zhang, C.; Mouton, C.; Valente, J.; Kooistra, L.; van Ooteghem, R.; de Hoog, D.; van Dalfsen, P.; Frans de Jong, P. Automatic Flower Cluster Estimation in Apple Orchards Using Aerial and Ground Based Point Clouds. Biosyst. Eng. 2022, 221, 164–180.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Adv. Neural Inf. Process. Syst. 2015, 39, 1137–1149.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Las Condes, Chile, 7–13 December 2015; pp. 1440–1448.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- Guofang, C.; Zhaoying, C.; Yuliang, W.; Jinxing, W.; Guoqiang, F.; Hanqing, L. Research on Detection Method of Apple Flower Based on Data-Enhanced Deep Learning. J. Chin. Agric. Mech. 2022, 43, 148.
- Yuan, W.; Choi, D.; Bolkas, D.; Heinemann, P.H.; He, L. Sensitivity Examination of YOLOv4 Regarding Test Image Distortion and Training Dataset Attribute for Apple Flower Bud Classification. Int. J. Remote Sens. 2022, 43, 3106–3130.
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430.
- Zhaosheng, Y.; Tao, L.; Tianle, Y.; Chengxin, J.; Chengming, S. Rapid Detection of Wheat Ears in Orthophotos from Unmanned Aerial Vehicles in Fields Based on YOLOX. Front. Plant Sci. 2022, 13, 851245.
- Zhang, Y.; Zhang, W.; Yu, J.; He, L.; Chen, J.; He, Y. Complete and Accurate Holly Fruits Counting Using YOLOX Object Detection. Comput. Electron. Agric. 2022, 198, 107062.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022.
- Kisantal, M.; Wojna, Z.; Murawski, J.; Naruniec, J.; Cho, K. Augmentation for Small Object Detection. arXiv 2019, arXiv:1902.07296.
- Chen, Z.; Wu, K.; Li, Y.; Wang, M.; Li, W. SSD-MSN: An Improved Multi-Scale Object Detection Network Based on SSD. IEEE Access 2019, 7, 80622–80632.
- Akyon, F.C.; Altinuc, S.O.; Temizel, A. Slicing Aided Hyper Inference and Fine-Tuning for Small Object Detection. arXiv 2022, arXiv:2202.06934.
- Keles, M.C.; Salmanoglu, B.; Guzel, M.S.; Gursoy, B.; Bostanci, G.E. Evaluation of YOLO Models with Sliced Inference for Small Object Detection. arXiv 2022, arXiv:2203.04799.
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768.
- Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. Mixup: Beyond Empirical Risk Minimization. arXiv 2017, arXiv:1710.09412.
- Long, X.; Deng, K.; Wang, G.; Zhang, Y.; Dang, Q.; Gao, Y.; Shen, H.; Ren, J.; Han, S.; Ding, E. PP-YOLO: An Effective and Efficient Implementation of Object Detector. arXiv 2020, arXiv:2007.12099.
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
Date | Weather | Light Intensity | Temperature (°C) | Windy |
---|---|---|---|---|
3 April | Overcast | Weak | 8–16 | No
4 April | Sunny | Strong | 8–18 | Yes
5 April | Sunny | Strong | 9–20 | Yes
6 April | Sunny | Strong | 9–20 | Yes
7 April | Sunny | Strong | 9–20 | Yes
8 April | Sunny | Hazy | 11–25 | No
9 April | Sunny | Normal | 8–24 | Yes
10 April | Light rain | Weak | 5–10 | Yes
11 April | Light rain | Weak | 5–15 | Yes
12 April | Sunny | Strong | 5–20 | Yes
13 April | Sunny | Strong | 9–23 | No
15 April | Sunny | Normal | 12–25 | No
Class | Bud | Half-Open | Fully Open | End-Open |
---|---|---|---|---|
Number | 11,865 | 10,885 | 12,288 | 4942 |
Aspect ratio (pixels) | 17.08:17.23 | 22.32:22.30 | 49.36:48.61 | 27.68:27.13 |
Average area (pixels²) | 318.86 | 541.71 | 2586.45 | 798.06
Area ratio (%) | 0.0035 | 0.0060 | 0.0287 | 0.0088 |
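The area ratios in the last row are consistent with dividing each class's average box area by the 3000 × 3000 px image area, which can be verified directly (pure arithmetic; values agree with the table to rounding):

```python
IMAGE_AREA = 3000 * 3000  # original image resolution (Section 2.2)
avg_area = {"bud": 318.86, "half-open": 541.71,
            "fully open": 2586.45, "end-open": 798.06}
# percentage of the image occupied by an average box of each class
area_ratio = {k: 100.0 * v / IMAGE_AREA for k, v in avg_area.items()}
```

This tiny fraction of image area per flower is exactly the small-object regime that motivates the SAHI slicing step.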
Hyperparameter | Value |
---|---|
Initial learning rate | 0.01 |
Minimum learning rate | 0.0001 |
Optimizer | SGD
Momentum | 0.937
Weight decay | 0.0005
Learning rate decay type | Cosine
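With the cosine decay type above, the learning rate falls from the initial 0.01 to the minimum 0.0001 over training. A sketch of the standard schedule (warmup and per-iteration stepping omitted; the epoch count is an assumption):

```python
import math

def cosine_lr(epoch, total_epochs, lr_max=0.01, lr_min=0.0001):
    """Cosine-annealed learning rate between the table's initial and minimum values."""
    t = epoch / max(total_epochs - 1, 1)  # training progress in [0, 1]
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```

The schedule starts at `lr_max`, decays slowly at first, fastest mid-training, and flattens out near `lr_min`.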
Class | Bud | Half-Open | Fully Open | End-Open |
---|---|---|---|---|
Number | 32,060 | 29,402 | 34,703 | 13,648 |
Aspect ratio (pixels) | 17.55:17.70 | 22.64:22.68 | 48.51:47.85 | 27.97:27.46 |
Average area (pixels²) | 334.50 | 557.75 | 2485.52 | 813.22
Area ratio (%) | 0.0542 | 0.0894 | 0.3934 | 0.1312 |
Class | Bud (%) | Half-Open (%) | Fully Open (%) | End-Open (%) |
---|---|---|---|---|
Number | +170.21 | +170.11 | +182.41 | +176.16 |
Aspect ratio | +2.76: +2.73 | +1.43: +1.70 | −1.72: −1.56 | +1.05: +1.27 |
Average area | +4.90 | +2.96 | −3.90 | +1.90 |
Area ratio | +1428.91 | +1385.91 | +1268.89 | +1379.58 |
Model | P-Bud (%) | P-Half-Open (%) | P-Fully Open (%) | P-End-Open (%) | mAPALL (%) | mAPS (%) | mAPM (%) | mAPL (%) |
---|---|---|---|---|---|---|---|---|
YOLOX-s | 79.19 | 81.27 | 85.21 | 83.90 | 27.40 | 21.40 | 36.00 | 58.90
S-YOLO-s | 87.13 | 89.32 | 88.70 | 90.86 | 37.40 | 30.50 | 49.10 | 66.10
Model | P-Bud (%) | P-Half-Open (%) | P-Fully Open (%) | P-End-Open (%) | mAPALL (%) | mAPS (%) | mAPM (%) | mAPL (%) | Parameters (M) | FLOPs (G) |
---|---|---|---|---|---|---|---|---|---|---|
YOLOX-s | 79.19 | 81.27 | 85.21 | 83.90 | 27.40 | 21.40 | 36.00 | 58.90 | 8.94 | 26.64 |
YOLOX-s (+SAHI) | 79.89 | 81.31 | 86.25 | 87.28 | 34.10 | 26.90 | 43.20 | 66.60 | 8.94 | 26.64 |
S-YOLO-s | 87.13 | 89.32 | 88.70 | 90.86 | 37.40 | 30.50 | 49.10 | 66.10 | 35.79 | 95.57 |
S-YOLO-t | 79.38 | 80.50 | 83.02 | 84.02 | 32.40 | 25.60 | 41.50 | 57.10 | 30.80 | 80.58 |
S-YOLO-m | 81.95 | 83.48 | 86.96 | 86.86 | 35.10 | 28.20 | 45.40 | 65.70 | 45.89 | 135.35 |
S-YOLO-l | 88.18 | 88.59 | 89.50 | 91.95 | 39.00 | 32.10 | 50.60 | 64.30 | 51.37 | 157.68 |
Swin-S | 82.35 | 84.02 | 85.60 | 87.37 | 34.50 | 27.60 | 44.10 | 65.70 | 57.07 | 165.84 |
Model | Datasets | P-Bud (%) | P-Half-Open (%) | P-Fully Open (%) | P-End-Open (%) | mAPALL (%) | mAPS (%) | mAPM (%) | mAPL (%) |
---|---|---|---|---|---|---|---|---|---|
S-YOLO-s | Mixed | 87.13 | 89.32 | 88.70 | 90.86 | 37.40 | 30.50 | 49.10 | 66.10 |
YOLOX-l | Mixed | 85.27 | 88.66 | 92.02 | 92.92 | 41.70 | 35.60 | 55.40 | 74.60 |
S-YOLO-l | Mixed | 88.18 | 88.59 | 89.50 | 91.95 | 39.00 | 32.10 | 50.60 | 64.30 |
YOLOX-l | Raw20% | 81.00 | 85.80 | 90.30 | 90.00 | 34.30 | 28.30 | 48.20 | 71.80
S-YOLO-l | Raw20% | 84.30 | 87.78 | 90.56 | 91.88 | 34.40 | 28.40 | 48.20 | 70.60 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhou, X.; Sun, G.; Xu, N.; Zhang, X.; Cai, J.; Yuan, Y.; Huang, Y. A Method of Modern Standardized Apple Orchard Flowering Monitoring Based on S-YOLO. Agriculture 2023, 13, 380. https://doi.org/10.3390/agriculture13020380