A Multi-Scale Feature Focus and Dynamic Sampling-Based Model for Hemerocallis fulva Leaf Disease Detection
Abstract
1. Introduction
- To address the scarcity of Hemerocallis fulva leaf disease data, the Hemerocallis fulva Leaf Disease Dataset (HFLD-Dataset) was created. It covers four disease categories collected from the central Yangtze Plain in China (April–August 2024), providing comprehensive data for model validation.
- An improved object detection model, the Hemerocallis fulva Multi-Scale and Enhanced Network (HF-MSENet), was developed to enhance disease detection accuracy under varying lighting conditions, angles, and growth stages. Experimental results confirm its superiority over traditional methods.
- A Channel–Spatial Multi-Scale Module (CSMSM) is introduced to enhance the model’s ability to focus on and extract features from target regions. By employing a channel–spatial dual attention mechanism and multi-scale feature extraction, it significantly improves the capture of fine-grained information and target region detection.
- Traditional multi-scale feature fusion methods are limited by poor information interaction, making them ineffective for variations in target size and shape. Upsampling stages often introduce interpolation errors, reducing edge detail and detection accuracy. To address these issues, the C3_EMSCP module enhances multi-scale feature fusion through joint multi-scale and group convolutions. Paired with the DySample module, which adjusts sampling positions using dynamic offsets, this approach improves detail reconstruction, reduces interpolation errors, and enhances edge clarity through pixel reordering and grid sampling.
2. Materials and Methods
2.1. Dataset Construction and Pre-Processing
2.1.1. Description of Study Area and Data Collection
2.1.2. Dataset Annotation
2.1.3. Image Enhancement
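Section 2.1.3 covers image enhancement. As a hedged illustration only, a minimal offline augmentation pass of the kind commonly applied to expand leaf-disease image sets might look like the sketch below; the specific operations (flips, brightness jitter, Gaussian noise) and their parameters are assumptions, not the paper's exact pipeline.

```python
# A minimal sketch of offline image augmentation (assumed operations,
# not the paper's exact enhancement pipeline).
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> list[np.ndarray]:
    """Return several augmented copies of an HxWx3 uint8 image."""
    out = []
    out.append(image[:, ::-1, :].copy())          # horizontal flip
    out.append(image[::-1, :, :].copy())          # vertical flip
    factor = rng.uniform(0.7, 1.3)                # brightness jitter
    out.append(np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8))
    noise = rng.normal(0, 8, image.shape)         # additive Gaussian noise
    out.append(np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8))
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
augmented = augment(img, rng)
print(len(augmented))  # 4
```

For a detection dataset such as HFLD-Dataset, geometric transforms (the flips) would also have to be applied to the bounding-box annotations, which this sketch omits.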
2.2. HF-MSENet Model
- To enhance the model’s focus on target regions and improve multi-scale feature extraction, the CSMSM module is introduced at the backend of the backbone network in ①. As a critical component of the detection pipeline, it prioritizes features from the regions where disease symptoms are most prominent: a channel–spatial dual attention mechanism strengthens attention on key regions, while multi-scale feature extraction strategies comprehensively improve the extraction of disease-region features.
- To address the low information-interaction efficiency and insufficient cross-scale collaboration in multi-scale feature fusion, the C3_EMSCP module is introduced and optimized at three critical junctions between the backbone and the subsequent network layers in ②. This placement lets the module bridge feature layers of different resolutions, improving the model’s ability to fuse information across scales. Its multi-scale convolutional kernels adapt and refine features at different resolutions, while group convolution structures further improve computational efficiency and feature-fusion depth.
- To reduce the detail loss and blurred object boundaries often caused by traditional upsampling methods, the DySample module is introduced during the upsampling phase in ③. DySample targets the fine-grained features and boundary precision that disease detection requires: a dynamic offset-learning mechanism adaptively adjusts sampling positions, avoiding detail loss due to interpolation errors, while pixel reordering and grid sampling optimize detail retention and improve edge clarity.
2.2.1. CSMSM Module
- Channel–Spatial Dual Attention
- Multi-Scale Feature Fusion Strategy
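As a rough sketch of what a channel–spatial dual attention stage of this kind generally looks like (following the CBAM pattern cited in the references; the layer sizes, reduction ratio, and 7 × 7 spatial kernel are assumptions, not the paper's exact CSMSM design):

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style sketch: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors.
        avg = x.mean(dim=(2, 3))
        mx = x.amax(dim=(2, 3))
        ca = torch.sigmoid(self.mlp(avg) + self.mlp(mx)).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention from channel-wise mean/max maps.
        sa_in = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sa_in))

x = torch.randn(2, 64, 32, 32)
y = ChannelSpatialAttention(64)(x)
print(y.shape)  # torch.Size([2, 64, 32, 32])
```

The multi-scale feature fusion strategy of CSMSM would then operate on attention-reweighted maps such as `y`; that stage is not reproduced here.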
2.2.2. C3_EMSCP Module
- The shallow branch uses a 1 × 1 convolution to quickly compress the input features and extract key information, preserving the integrity of the detail features and laying the foundation for subsequent feature fusion.
- The deep branch, utilizing the stacked Bottleneck_EMSCP structure, enhances feature fusion depth and multi-scale information interaction. In Bottleneck_EMSCP, 1 × 1 convolutions reduce channels and integrate information. The EMSConvP layer applies multi-scale convolution kernels (1 × 1, 3 × 3, 5 × 5, and 7 × 7) and group convolutions, facilitating stepwise convolutions of the grouped input feature map. This enables deep interaction between cross-scale features while minimizing computational redundancy. The Bottleneck_EMSCP structure is illustrated in Figure 11.
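A minimal sketch of the grouped multi-scale convolution idea behind EMSConvP, assuming equal channel groups and the kernel sizes listed above (1 × 1, 3 × 3, 5 × 5, 7 × 7); the exact grouping, ordering, and fusion in the paper's Bottleneck_EMSCP may differ.

```python
import torch
import torch.nn as nn

class EMSConvP(nn.Module):
    """Sketch: split channels into groups, convolve each group with a
    different kernel size, then fuse the groups with a pointwise conv."""
    def __init__(self, channels: int):
        super().__init__()
        assert channels % 4 == 0, "assumes four equal channel groups"
        g = channels // 4
        self.branches = nn.ModuleList(
            nn.Conv2d(g, g, k, padding=k // 2) for k in (1, 3, 5, 7))
        self.fuse = nn.Conv2d(channels, channels, 1)  # cross-scale interaction

    def forward(self, x):
        chunks = torch.chunk(x, 4, dim=1)             # one group per kernel size
        out = torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)
        return self.fuse(out)

x = torch.randn(1, 64, 40, 40)
y = EMSConvP(64)(x)
print(y.shape)  # torch.Size([1, 64, 40, 40])
```

Because each kernel only sees a quarter of the channels, the parameter and FLOP cost stays close to a single mid-sized convolution, which is the computational-redundancy argument made above.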
2.2.3. DySample Module
- Dynamic Scope Factor
- Static Scope Factor
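The offset-based resampling described in Section 2.2 can be sketched as follows, loosely following the DySample paper cited in the references. This is a simplified 2× version with a static scope factor (0.25 is an assumed value); a dynamic scope factor would instead be predicted per pixel by a second convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DySampleSketch(nn.Module):
    """Sketch of offset-based dynamic upsampling: a conv predicts per-pixel
    sampling offsets, pixel_shuffle expands them to the 2x output grid, and
    grid_sample reads the input at the shifted positions."""
    def __init__(self, channels: int, scale: int = 2, scope: float = 0.25):
        super().__init__()
        self.scale, self.scope = scale, scope
        # 2 offset coordinates for each of the scale^2 output positions.
        self.offset = nn.Conv2d(channels, 2 * scale * scale, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        offsets = self.offset(x) * self.scope            # static scope factor
        offsets = F.pixel_shuffle(offsets, self.scale)   # (b, 2, 2h, 2w)
        hh, ww = h * self.scale, w * self.scale
        ys = torch.linspace(-1, 1, hh, device=x.device)
        xs = torch.linspace(-1, 1, ww, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack((gx, gy), dim=-1).expand(b, hh, ww, 2)
        grid = base + offsets.permute(0, 2, 3, 1)        # shifted sample points
        return F.grid_sample(x, grid, align_corners=True)

x = torch.randn(1, 16, 20, 20)
y = DySampleSketch(16)(x)
print(y.shape)  # torch.Size([1, 16, 40, 40])
```

With all offsets at zero this reduces to plain bilinear upsampling; the learned offsets are what let the module pull samples toward lesion boundaries instead of blurring across them.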
2.3. Training Environment and Parameter Settings
2.4. Performance Evaluation Metrics
- P measures prediction accuracy, emphasizing a reduction in false positives.
- R quantifies the proportion of actual targets that are detected, emphasizing a minimization of false negatives (missed detections).
- F1 is the harmonic mean of P and R, balancing detection accuracy and sensitivity, especially in imbalanced classes.
- AP evaluates the model’s detection performance for a single category, reflecting its average precision for that specific class.
- mAP@50 measures mean detection precision at an IoU threshold of 0.5, offering an overall model assessment.
- mAP@50–95 calculates mean precision at IoU thresholds of 0.5 to 0.95, evaluating model robustness under varying overlap conditions.
- The formulas for these metrics are provided in Equations (13)–(18).
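As a worked example of the P, R, and F1 definitions above (the TP/FP/FN counts here are made up for illustration, not taken from the experiments):

```python
# P = TP / (TP + FP), R = TP / (TP + FN), F1 = harmonic mean of P and R.
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

p, r, f1 = precision_recall_f1(tp=92, fp=8, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.92 0.902 0.911
```

AP is the area under the P–R curve for one class at a fixed IoU threshold; mAP@50 averages AP over classes at IoU 0.5, and mAP@50–95 additionally averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05.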
3. Results
3.1. Experimental Results and Discussion
3.1.1. Ablation Experiment
3.1.2. Discussion of Ablation Experiment
- The baseline model (Figure 14a) shows vague disease localization and misses subtle features due to generic extraction techniques, resulting in high background noise and missed detections.
- The CSMSM module (Figure 14b) enhances focus on key disease regions through a channel–spatial dual attention mechanism that prioritizes disease-related features. However, limited feature extraction can lead to missed fine details and slight shifts in detection boundaries as the model emphasizes larger areas.
- The C3_EMSCP module (Figure 14c) enhances multi-scale feature fusion, allowing the model to capture features at various resolutions. However, due to incomplete fine-grained feature extraction, small disease lesions may be inaccurately located, causing minor shifts in detection positions.
- The DySample module (Figure 14d) enhances detail recovery and reduces boundary blurring during upsampling. By dynamically adjusting sampling positions, it refines the detection of fine-grained features. Nevertheless, false detections may still occur in complex backgrounds, causing slight shifts in detection locations, particularly in regions with overlapping or subtle disease symptoms.
- The combination of CSMSM and C3_EMSCP (Figure 14e) enhances feature attention and multi-scale fusion, improving the coverage of disease regions and focus on critical areas. However, smaller targets may still be missed as the model prioritizes larger features, causing variations in detection locations for small lesions.
- Combining CSMSM and DySample (Figure 14f) improves regional focus and detail recovery, enhancing detection accuracy across target sizes. Although background noise is reduced and boundary clarity is enhanced, minor shifts in detection locations may still occur due to small misdetections in complex backgrounds.
- The integration of C3_EMSCP and DySample (Figure 14g) demonstrates strong multi-scale fusion and boundary optimization. However, background noise complexity slightly reduces detection accuracy, leading to minor shifts in detection locations, especially for small or irregularly shaped disease lesions.
- Finally, the integration of CSMSM, C3_EMSCP, and DySample in HF-MSENet (Figure 14h) achieves optimal disease region localization. This combination improves detection accuracy across various scales, significantly reducing false and missed detections; the synergistic effect of the three modules enhances detection precision and ensures consistent localization.
3.1.3. Comparative Experiment
3.1.4. Discussion of Comparative Experiment
3.2. Discussion on Limitations and Future Directions
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Hirota, S.K.; Yasumoto, A.A.; Nitta, K.; Tagane, M.; Miki, N.; Suyama, Y.; Yahara, T. Evolutionary history of Hemerocallis in Japan inferred from chloroplast and nuclear phylogenies and levels of interspecific gene flow. Mol. Phylogenetics Evol. 2021, 164, 107264. [Google Scholar] [CrossRef] [PubMed]
- Li, S.; Ji, F.; Hou, F.; Shi, Q.; Xing, G.; Chen, H.; Weng, Y.; Kang, X. Morphological, palynological and molecular assessment of Hemerocallis core collection. Sci. Hortic. 2021, 285, 110181. [Google Scholar] [CrossRef]
- Bortolini, L.; Zanin, G. Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban For. Urban Green. 2018, 34, 121–133. [Google Scholar] [CrossRef]
- Szewczyk, K.; Kalemba, D.; Miazga-Karska, M.; Krzemińska, B.; Dąbrowska, A.; Nowak, R. The essential oil composition of selected Hemerocallis cultivars and their biological activity. Open Chem. 2019, 17, 1412–1422. [Google Scholar] [CrossRef]
- Liang, Y.; Huang, R.; Chen, Y.; Zhong, J.; Deng, J.; Wang, Z.; Wu, Z.; Li, M.; Wang, H.; Sun, Y. Study on the sleep-improvement effects of Hemerocallis citrina Baroni in Drosophila melanogaster and targeted screening to identify its active components and mechanism. Foods 2021, 10, 883. [Google Scholar] [CrossRef]
- Li, X.; Jiang, S.; Cui, J.; Qin, X.; Zhang, G. Progress of genus Hemerocallis in traditional uses, phytochemistry, and pharmacology. J. Hortic. Sci. Biotechnol. 2022, 97, 298–314. [Google Scholar] [CrossRef]
- Sandri, E.; Werner, L.U.; Bernalte Martí, V. Lifestyle Habits and Nutritional Profile of the Spanish Population: A Comparison Between the Period During and After the COVID-19 Pandemic. Foods 2024, 13, 3962. [Google Scholar] [CrossRef] [PubMed]
- Li, L.; Qu, Y.-t.; Han, H.; Tang, H.-w.; Chen, F.; Xiong, Y. Effects of Plant Growth Regulators on Adventitious Bud Induction and Proliferation of Hemerocallis fulva; Northeast Forestry University: Harbin, China, 2021. [Google Scholar]
- Yu, Y.; Hu, J.; Wa, J.; Zhang, Z. The control effect of combination of fertilizer and medicine on daylily leaf streak of Hemerocallis fulva. J. Technol. 2023, 23, 177–181. [Google Scholar]
- Zhao, T.-R.; Xu, Z.-H.; Zhang, C.-H.; Wang, J.-J.; Guo, F.-Q.; Ye, Q.-M. Evaluation on Waterlogging Tolerance of Hemerocallis Fulva in Field; Jiangxi Academy of Agricultural Sciences: Nanchang, China, 2021. [Google Scholar]
- Ye, J. Pedro de la Piñuela’s Bencao Bu and the Cultural Exchanges between China and the West. Religions 2024, 15, 343. [Google Scholar] [CrossRef]
- Dhingra, G.; Kumar, V.; Joshi, H. Study of digital image processing techniques for leaf disease detection and classification. Multimed. Tools Appl. 2018, 77, 19951–20000. [Google Scholar] [CrossRef]
- Keivani, M.; Mazloum, J.; Sedaghatfar, E.; Tavakoli, M. Automated analysis of leaf shape, texture, and color features for plant classification. Trait. Du Signal 2020, 37, 17–28. [Google Scholar] [CrossRef]
- Mahmud, M.S.; Chang, Y.K.; Zaman, Q.U.; Esau, T.J. Detection of strawberry powdery mildew disease in leaf using image texture and supervised classifiers. In Proceedings of the CSBE/SCGAB 2018 Annual Conference, Guelph, ON, Canada, 22–25 July 2018; pp. 22–25. [Google Scholar]
- Xie, C.; He, Y. Spectrum and image texture features analysis for early blight disease detection on eggplant leaves. Sensors 2016, 16, 676. [Google Scholar] [CrossRef]
- Nashrullah, F.H.; Suryani, E.; Salamah, U.; Prakisya, N.P.; Setyawan, S. Texture-Based Feature Extraction Using Gabor Filters to Detect Diseases of Tomato Leaves. Rev. D’intelligence Artif. 2021, 35, 331. [Google Scholar]
- Ahmad, N.; Asif, H.M.S.; Saleem, G.; Younus, M.U.; Anwar, S.; Anjum, M.R. Leaf image-based plant disease identification using color and texture features. Wirel. Pers. Commun. 2021, 121, 1139–1168. [Google Scholar] [CrossRef]
- Wang, M.; Guo, S.; Niu, X. surveys. Detect. Wheat Leaf Disease. Appl. Int. J. Res. 2015, 6, 1669–1675. [Google Scholar]
- Gangshan, W.; Yinlong, F.; Qiyou, J.; Ming, C.; Na, L.; Yunmeng, O.; Zhihua, D.; Baohua, Z. Early identification of strawberry leaves disease utilizing hyperspectral imaging combing with spectral features, multiple vegetation indices and textural features. Comput. Electron. Agric. 2023, 204, 107553. [Google Scholar]
- Aditi, S.; Harjeet, K. Potato Plant Leaves Disease Detection and Classification using Machine Learning Methodologies. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1022, 012121. [Google Scholar]
- Sen, Z.; Yun, F.; Jiang-nan, C.; Ye, L.; Xu-dong, D.; Yong-liang, L. Application of hyperspectral imaging in the diagnosis of acanthopanax senticosus black spot disease. Spectrosc. Spectr. Anal. 2021, 41, 1898–1904. [Google Scholar]
- Devi, K.S.; Srinivasan, P.; Bandhopadhyay, S. H2K–A robust and optimum approach for detection and classification of groundnut leaf diseases. Comput. Electron. Agric. 2020, 178, 105749. [Google Scholar] [CrossRef]
- Zhao, J.; Fang, Y.; Chu, G.; Yan, H.; Hu, L.; Huang, L. Identification of leaf-scale wheat powdery mildew (Blumeria graminis f. sp. Tritici) combining hyperspectral imaging and an SVM classifier. Plants 2020, 9, 936. [Google Scholar] [CrossRef] [PubMed]
- Maheswaran, S.; Sathesh, S.; Rithika, P.; Shafiq, I.M.; Nandita, S.; Gomathi, R. Detection and classification of paddy leaf diseases using deep learning (cnn). In Proceedings of the International Conference on Computer, Communication, and Signal Processing, Chennai, India, 24–25 February 2022; pp. 60–74. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14. pp. 21–37. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
- Wang, C.-Y.; Yeh, I.-H.; Mark Liao, H.-Y. Yolov9: Learning what you want to learn using programmable gradient information. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; pp. 1–21. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. Yolov10: Real-time end-to-end object detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
- Khanam, R.; Hussain, M.J. Yolov11: An overview of the key architectural enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar]
- Tian, L.; Zhang, H.; Liu, B.; Zhang, J.; Duan, N.; Yuan, A.; Huo, Y. VMF-SSD: A novel V-space based multi-scale feature fusion SSD for apple leaf disease detection. IEEE/ACM Trans. Comput. Biol. Bioinform. 2022, 20, 2016–2028. [Google Scholar]
- Deari, S.; Ulukaya, S. A hybrid multistage model based on YOLO and modified inception network for rice leaf disease analysis. Arab. J. Sci. Eng. 2024, 49, 6715–6723. [Google Scholar] [CrossRef]
- Wang, J.; Qin, C.; Hou, B.; Yuan, Y.; Zhang, Y.; Feng, W. LCGSC-YOLO: A lightweight apple leaf diseases detection method based on LCNet and GSConv module under YOLO framework. Front. Plant Sci. 2024, 15, 1398277. [Google Scholar] [CrossRef] [PubMed]
- Kumar, V.S.; Jaganathan, M.; Viswanathan, A.; Umamaheswari, M.; Vignesh, J.J. Rice leaf disease detection based on bidirectional feature attention pyramid network with YOLO v5 model. Environ. Res. Commun. 2023, 5, 065014. [Google Scholar] [CrossRef]
- He, Y.; Peng, Y.; Wei, C.; Zheng, Y.; Yang, C.; Zou, T. Automatic Disease Detection from Strawberry Leaf Based on Improved YOLOv8. Plants 2024, 13, 2556. [Google Scholar] [CrossRef]
- Xie, Z.; Li, C.; Yang, Z.; Zhang, Z.; Jiang, J.; Guo, H. YOLOv5s-BiPCNeXt, a Lightweight Model for Detecting Disease in Eggplant Leaves. Plants 2024, 13, 2303. [Google Scholar] [CrossRef]
- Zhu, S.; Ma, W.; Wang, J.; Yang, M.; Wang, Y.; Wang, C. EADD-YOLO: An efficient and accurate disease detector for apple leaf using improved lightweight YOLOv5. Front. Plant Sci. 2023, 14, 1120724. [Google Scholar] [CrossRef] [PubMed]
- Yan, C.; Yang, K. FSM-YOLO: Apple leaf disease detection network based on adaptive feature capture and spatial context awareness. Digit. Signal Process. 2024, 155, 104770. [Google Scholar] [CrossRef]
- Abdullah, A.; Amran, G.A.; Tahmid, S.A.; Alabrah, A.; AL-Bakhrani, A.A.; Ali, A. A deep-learning-based model for the detection of diseased tomato leaves. Agronomy 2024, 14, 1593. [Google Scholar] [CrossRef]
- Xu, W.; Wang, R. ALAD-YOLO: An lightweight and accurate detector for apple leaf diseases. Front. Plant Sci. 2023, 14, 1204569. [Google Scholar]
- Bandi, R.; Swamy, S.; Arvind, C.S. Leaf disease severity classification with explainable artificial intelligence using transformer networks. Int. J. Adv. Technol. Eng. Explor. 2023, 10, 278. [Google Scholar]
- Brownlee, J. Deep Learning for Computer Vision: Image Classification, Object Detection, and Face Recognition in Python; Machine Learning Mastery: Melbourne, Australia, 2019. [Google Scholar]
- Wang, R.; Wang, Z.; Xu, Z.; Wang, C.; Li, Q.; Zhang, Y.; Li, H. A Real-Time Object Detector for Autonomous Vehicles Based on YOLOv4. Comput. Intell. Neurosci. 2021, 2021, 9218137. [Google Scholar] [CrossRef] [PubMed]
- Sohan, M.; Sai Ram, T.; Reddy, R.; Venkata, C. A review on yolov8 and its advancements. In Proceedings of the International Conference on Data Intelligence and Cognitive Informatics, Tirunelveli, India, 18–20 November 2024; pp. 529–545. [Google Scholar]
- Wang, S.-H.; Fernandes, S.L.; Zhu, Z.; Zhang, Y.-D. Attention-based VGG-style network for COVID-19 diagnosis by CBAM. IEEE Sens. J. 2021, 22, 17431–17438. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Shao, Z.; Hoffmann, N. Global attention mechanism: Retain information to enhance channel-spatial interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar]
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 18–22 June 2023; pp. 7464–7475. [Google Scholar]
- Wang, C.-Y.; Liao, H.-Y.M.; Wu, Y.-H.; Chen, P.-Y.; Yeh, I.-H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 390–391. [Google Scholar]
- Li, X.; Song, D.; Dong, Y. Hierarchical feature fusion network for salient object detection. IEEE Trans. Image Process. 2020, 29, 9165–9175. [Google Scholar] [CrossRef]
- Yan, C.; Xu, E. ECM-YOLO: A real-time detection method of steel surface defects based on multiscale convolution. J. Opt. Soc. Am. A 2024, 41, 1905–1914. [Google Scholar] [CrossRef]
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
- Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to upsample by learning to sample. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 4–6 October 2023; pp. 6027–6037. [Google Scholar]
Collection Equipment | iPhone 13 Pro | iPhone 14 Pro | realme GT Neo5 | Redmi K60 | Canon 600D | Canon 750D | Canon 200D | Nikon D5300 |
---|---|---|---|---|---|---|---|---|
Pixel Resolution (MP) | 12 | 48 | 50 | 64 | 18 | 24.2 | 24.2 | 24.16 |
Images Collected | 199 | 295 | 116 | 260 | 336 | 483 | 210 | 456 |
Overall Category | Number of Images | Total Number of Labels |
---|---|---|
Training set | 9891 | 11,365 |
Validation set | 1413 | 1638 |
Test set | 2826 | 3227 |
System | CPU | GPU | CUDA | cuDNN | PyTorch | Python |
---|---|---|---|---|---|---|
Windows 11 | Intel(R) Core(TM) i5-13400 CPU @ 2.50 GHz | NVIDIA GeForce RTX 3060 (12 GB) | 11.8 | 8.7.0 | 2.0.0 | 3.11.0 |
Input Image | Batch Size | Epoch | Lr0 | Momentum | Weight Decay |
---|---|---|---|---|---|
640 × 640 | 16 | 300 | 0.01 | 0.937 | 0.0005 |
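The settings in the table above map directly onto a standard training launch. As a hedged sketch only, assuming the Ultralytics YOLOv8 API (the baseline model in Section 3.1.3) and a hypothetical dataset config file `hfld.yaml`, this is not the paper's actual training script:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # baseline; HF-MSENet modifies this architecture
model.train(
    data="hfld.yaml",               # hypothetical HFLD-Dataset config
    imgsz=640,                      # input image size 640 x 640
    batch=16,
    epochs=300,
    lr0=0.01,                       # initial learning rate
    momentum=0.937,
    weight_decay=0.0005,
)
```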
No. | CSMSM | C3_EMSCP | DySample | P | R | F1 | mAP@50 | mAP@50–95 |
---|---|---|---|---|---|---|---|---|
1 | - | - | - | 0.920 | 0.901 | 0.910 | 0.931 | 0.738 |
2 | √ | - | - | 0.936 | 0.911 | 0.923 | 0.940 | 0.793 |
3 | - | √ | - | 0.925 | 0.889 | 0.906 | 0.924 | 0.728 |
4 | - | - | √ | 0.917 | 0.910 | 0.914 | 0.937 | 0.747 |
5 | √ | √ | - | 0.937 | 0.916 | 0.926 | 0.943 | 0.789 |
6 | √ | - | √ | 0.936 | 0.916 | 0.926 | 0.944 | 0.804 |
7 | - | √ | √ | 0.920 | 0.897 | 0.908 | 0.930 | 0.720 |
8 | √ | √ | √ | 0.936 | 0.926 | 0.931 | 0.949 | 0.803 |
Model | Leaf_Blight mAP@50 | Anthracnose mAP@50 | Leaf_Spot mAP@50 | Rust mAP@50 | mAP@50 | mAP@50–95 |
---|---|---|---|---|---|---|
Baseline | 0.964 | 0.912 | 0.918 | 0.931 | 0.931 | 0.738 |
HF-MSENet | 0.972 | 0.934 | 0.929 | 0.962 | 0.949 | 0.803 |
Model | P | R | F1 | mAP@50 | mAP@50–95 |
---|---|---|---|---|---|
Faster R-CNN | 0.314 | 0.706 | 0.433 | 0.552 | 0.291 |
SSD | 0.862 | 0.708 | 0.775 | 0.782 | 0.473 |
YOLOv5n | 0.906 | 0.857 | 0.881 | 0.906 | 0.678 |
YOLOv6n | 0.919 | 0.861 | 0.889 | 0.910 | 0.712 |
YOLOv7n | 0.864 | 0.828 | 0.850 | 0.853 | 0.642 |
YOLOv8n | 0.920 | 0.901 | 0.910 | 0.931 | 0.738 |
YOLOv9t | 0.889 | 0.861 | 0.875 | 0.904 | 0.684 |
YOLOv10n | 0.884 | 0.849 | 0.866 | 0.900 | 0.687 |
YOLOv11n | 0.915 | 0.901 | 0.908 | 0.927 | 0.739 |
HF-MSENet | 0.936 | 0.926 | 0.931 | 0.949 | 0.803 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, T.; Xia, H.; Xie, J.; Li, J.; Liu, J. A Multi-Scale Feature Focus and Dynamic Sampling-Based Model for Hemerocallis fulva Leaf Disease Detection. Agriculture 2025, 15, 262. https://doi.org/10.3390/agriculture15030262