Article

CVW-Etr: A High-Precision Method for Estimating the Severity Level of Cotton Verticillium Wilt Disease

by Pan Pan 1,2,3, Qiong Yao 1,2,3,4, Jiawei Shen 1,2,3, Lin Hu 1,2,3,*, Sijian Zhao 1, Longyu Huang 3,5, Guoping Yu 3,6, Guomin Zhou 1,2,3,7 and Jianhua Zhang 1,2,3,*
1 Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100081, China
2 National Agriculture Science Data Center, Beijing 100081, China
3 National Nanfan Research Institute (Sanya), Chinese Academy of Agricultural Sciences, Sanya 572024, China
4 Agricultural College, Henan University, Kaifeng 475004, China
5 Institute of Cotton Research, Chinese Academy of Agricultural Sciences, Anyang 455000, China
6 China National Rice Research Institute, Hangzhou 311401, China
7 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
* Authors to whom correspondence should be addressed.
Plants 2024, 13(21), 2960; https://doi.org/10.3390/plants13212960
Submission received: 22 August 2024 / Revised: 16 October 2024 / Accepted: 21 October 2024 / Published: 23 October 2024
(This article belongs to the Special Issue Integrated Pest Management and Plants Health)

Abstract: Cotton verticillium wilt significantly impacts both cotton quality and yield. Selecting disease-resistant varieties and using their resistance genes in breeding is an effective and economical control measure. Accurate severity estimation of this disease is crucial for breeding resistant cotton varieties. However, current methods fall short, slowing the breeding process. To address these challenges, this paper introduces CVW-Etr, a high-precision method for estimating the severity of cotton verticillium wilt. CVW-Etr classifies severity into six levels (L0 to L5) based on the proportion of segmented lesion area to diseased-leaf area. By integrating YOLOv8-Seg with MobileSAM, CVW-Etr achieves excellent performance and efficiency with limited samples in complex field conditions. It incorporates the RFCBAMConv, C2f-RFCBAMConv, AWDownSample-Lite, and GSegment modules to handle blurry transitions between healthy and diseased regions and variations in angle and distance during image collection, and to optimize the model’s parameter size and computational complexity. Our experimental results show that CVW-Etr effectively segments diseased leaves and lesions, achieving a mean average precision (mAP) of 92.90% and an average severity estimation accuracy of 92.92% with only 2.6M parameters and 10.1G FLOPs. Through experiments, CVW-Etr proves robust in estimating cotton verticillium wilt severity, offering valuable insights for disease-resistant cotton breeding applications.

1. Introduction

Cotton, an essential textile fiber from the Gossypium genus in the Malvaceae family, contributes to approximately 35% of global annual fiber demand [1]. Diseases affecting cotton throughout its growth cycle can significantly reduce yield and quality, posing a serious economic threat to farmers [2]. One of the most significant challenges in cotton production is verticillium wilt, a persistent disease since it was first documented in 1918 [3]. This highly destructive disease presents substantial obstacles to the growth and development of cotton due to its widespread distribution and formidable pathogenicity under favorable conditions [4]. Economic losses can reach alarming rates of 30–50% or more in certain years, largely attributed to inadequate prevention measures and misguided interventions [5].
In cotton cultivation, controlling verticillium wilt often relies on fungicides and fumigation [6,7]. While widely embraced by farmers and generally effective, it is costly, hazardous, and not environmentally friendly [8]. In contrast to fungicides and fumigation, selecting cotton varieties with strong disease resistance and utilizing their resistance genes in breeding can more effectively reduce losses caused by cotton verticillium wilt [9]. To develop cotton varieties with high verticillium wilt resistance, breeders establish experimental plots to estimate disease resistance in various genotypes and progeny lines [10]. During estimation, parameters like disease incidence, severity level, and the time it takes from planting or inoculation to symptom manifestation are considered [11]. Among these, accurately estimating disease severity level is pivotal at every stage, spanning from germplasm selection and progeny screening to variety dissemination.
Ensuring accurate and precise estimates of disease severity levels is crucial for developing cotton varieties with high disease resistance. However, the current manual method, which relies on visually scoring lesions on cotton leaves, is prone to estimator fatigue, bias, and errors, and is time-consuming [12]. Moreover, it requires experienced estimators, and different estimators may yield significantly varied estimations of the same sampling unit [13]. These challenges hinder the facilitation and acceleration of disease-resistance breeding processes. Given the growing need for the accurate and large-scale estimation of disease severity levels for cotton verticillium wilt-resistance breeding, the urgency for automated methods is escalating [14]. Significant attention has been devoted to achieving high-precision estimates of cotton verticillium wilt disease severity level [15].
In recent years, researchers have employed methods such as spectral analysis [16] and unmanned aerial vehicle (UAV) remote sensing to estimate cotton verticillium wilt disease severity [17,18]. However, their effectiveness in disease-resistance breeding is often limited due to the high environmental requirements of spectral measurement devices and the inability of UAV remote sensing to accurately estimate individual disease-resistant breeding materials. Hence, further research is crucial to develop more suitable methods tailored to the demands of disease-resistance breeding applications. The segmentation of diseased leaves and lesions, utilized to calculate the diseased leaf area and lesion area, is the predominant approach for estimating cotton verticillium wilt disease severity levels in disease-resistance breeding [19]. However, accurately segmenting diseased leaves and lesions under field conditions persists as a primary challenge for this estimation method [20].
Disease segmentation involves segmenting crop disease or lesion targets from complex backgrounds, comprising tasks such as leaf and lesion segmentation. Traditional image processing techniques [21,22], region growth algorithms [23], and machine learning [24,25] have been proposed for segmenting crop diseases over the past two decades. However, these methods are only effective for diseased-leaf images with simple backgrounds. When the color of the diseased area closely resembles the background or the boundaries are unclear, these segmentation methods struggle to differentiate between the background and the lesions on diseased leaves, resulting in poor segmentation outcomes.
Deep learning enables computational models consisting of multiple processing layers to learn data representations with various levels of abstraction [26]. This technology has significantly advanced numerous fields, including autonomous driving, medical systems, and agricultural analysis. In this context, deep learning models have emerged for segmenting crop diseases and estimating disease severity levels based on segmentation results. Several noteworthy studies include the following: Ref. [27] introduced a pixel-level segmentation model using an attention mechanism-optimized DeepLabv3+ for accurate grape disease severity estimation. However, their study was limited to simple background disease images, potentially lacking applicability to more complex cases. Ref. [28] developed an improved Mask R-CNN network for the pixel-level segmentation of potato leaves and subsequent grading of late blight. Despite the model’s higher computational load, its performance in severity estimation still requires improvement. Refs. [29,30] presented a two-stage disease segmentation model combining DeepLabv3+ and U-net to sequentially segment diseased leaves and lesions. By calculating the pixel count from the segmentation results, they estimated the disease severity levels. This two-stage approach first extracts diseased leaf instances from images using a leaf segmentation model, followed by lesion segmentation. This method has shown excellent performance in crops like cucumber and corn, but its potential for crops with blurred disease presentations remains to be evaluated.
While these studies have advanced the field significantly, current methods face challenges in assessing cotton verticillium wilt severity: complex backgrounds and limited sample sizes complicate diseased-leaf segmentation; variations in angle and distance affect the location and size of the disease; and cotton verticillium wilt causes shape variations and color changes in leaves, leading to blurred transitions between diseased and healthy areas.
To address these issues, our study introduces a high-precision method for estimating cotton verticillium wilt disease severity. This method aims to provide timely and accurate estimations, facilitating and accelerating disease-resistant cotton breeding efforts. Our study offers the following contributions:
(1)
We constructed an image dataset of cotton diseases with complex backgrounds for leaf and lesion segmentation and disease severity level estimation.
(2)
We introduced the MobileSAM universal segmentation model, which pre-segments leaf images to enhance performance, especially with limited dataset availability.
(3)
We proposed an improved method based on YOLOv8-Seg for segmenting diseased leaves and lesions, addressing challenges such as blurred lesion boundaries and variations in angle and distance. Additionally, we optimized the model parameters and computational complexity.
(4)
Through experiments, we validated the effectiveness of the proposed disease severity estimation method. We also developed and deployed a cotton disease severity assessment app for smartphones, which was used in field validation experiments, demonstrating robustness in estimating cotton verticillium wilt severity in field environments.
This paper is organized as follows: Section 2 covers image acquisition, dataset production, the architectures of the methods used, and enhancements to the segmentation model. Section 3 presents the experimental results. Section 4 presents a discussion. Finally, Section 5 concludes this paper.

2. Materials and Methods

2.1. Materials

Image Data Acquisition and Dataset Production

The image dataset for this study was collected from two locations: the cotton fields at the Langfang Research Base of the Chinese Academy of Agricultural Sciences in Hebei Province, China (39°27′55.59″ N, 116°45′28.54″ E), and the Potianyang Base in Yazhou District, Sanya City, Hainan Province, China (18°23′49.71″ N, 109°10′39.84″ E). Data collection occurred between September 2020 and February 2023, under a variety of weather conditions, including clear and overcast skies, at different times of the day—morning, noon, and evening. Images were captured with a Canon EOS 850D digital camera and a Huawei P40 Pro smartphone from distances of 20–50 cm. The images, with a resolution of 4608 × 3456 pixels, were saved in JPG format and included backgrounds such as weeds, soil, leaves, stems, shadows, and human hands. To ensure the accuracy of the dataset, two expert cotton pathologists rigorously diagnosed and confirmed the symptoms in the images. The diagnosis was based on well-established morphological characteristics of Verticillium wilt, including foliar symptoms like leaf chlorosis, necrosis, and wilting. The pathologists carefully differentiated these symptoms from those caused by water stress and other abiotic factors. Water stress symptoms were identified and excluded by observing patterns such as uniform wilting across the entire plant rather than in isolated areas, along with environmental context, such as recent irrigation records and soil conditions. In cases where symptoms were ambiguous, the images were cross-checked with field notes and additional diagnostic methods, such as assessing stem cross-sections for vascular discoloration, which is characteristic of Verticillium wilt.
Following image collection, the dataset underwent data augmentation using the Albumentations library (http://github.com/albumentations-team/albumentations (accessed on 22 July 2024)) and the built-in augmentations in YOLOv8. The techniques included horizontal and vertical flips, random rotations, translations, scaling, random cropping, Gaussian noise, brightness and contrast adjustments, and elastic transformations. These augmentations simulated various field conditions, enhancing the model’s robustness and segmentation accuracy under different scenarios. Advanced methods such as grid distortion and optical distortion were also applied to further diversify the training data. For segmentation, we annotated the dataset using LabelMe, creating JSON files with the image size, label names, and points outlining lesions and diseased leaves. This process resulted in 11,310 images of cotton Verticillium Wilt. The dataset was divided into three subsets: 9050 images for training, and 1130 images each for validation and testing. Representative sample images from the dataset are shown in Figure 1.
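As a simplified illustration of the augmentation step, the sketch below reimplements two representative transforms (a horizontal flip and a linear brightness/contrast adjustment) in plain NumPy; the actual pipeline used the Albumentations library, and the function names here are ours:

```python
import numpy as np

def horizontal_flip(img: np.ndarray) -> np.ndarray:
    # Mirror the image along its width axis; img has shape (H, W, C).
    return img[:, ::-1, :]

def brightness_contrast(img: np.ndarray, alpha: float = 1.2, beta: float = 10.0) -> np.ndarray:
    # Linear adjustment: out = alpha * img + beta, clipped to the uint8 range.
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

# Toy 4x4 RGB image with a bright first column.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, 0, :] = 255
aug = brightness_contrast(horizontal_flip(img))
```

Note that for segmentation data, any geometric transform (flip, rotation, crop) must be applied identically to the annotation masks, which Albumentations handles automatically when masks are passed alongside the image.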

2.2. Methods

2.2.1. Overall Model

One of the primary challenges in automated cotton verticillium wilt severity estimation is accurately segmenting lesions under natural field conditions. Several difficulties arise in this context: (1) Complex backgrounds, such as soil, plastic film, and water pipes, pose significant challenges in segmenting diseased leaves and lesions from images with limited samples. (2) Verticillium wilt in cotton often exhibits excessively blurred lesion boundaries, and variations in angle and distance affect the location and size of the wilt, hindering accurate disease segmentation. (3) Segmentation models require optimization in parameter size and computational complexity to meet deployment needs.
In this context, we propose a composite framework, CVW-Etr, for estimating cotton verticillium wilt severity levels under natural field conditions. CVW-Etr is based on the YOLOv8-Seg model and incorporates the RFCBAMConv, C2f-RFCBAMConv, AWDownSample-Lite, and GSegment modules; additionally, it integrates MobileSAM for pre-segmentation. The architecture of CVW-Etr is visualized in Figure 2. The framework comprises six main components: input, pre-segmentation, backbone, neck, head, and disease severity level estimation. The methods and enhancements integrated into CVW-Etr are as follows:
(1)
Utilizing MobileSAM, which is suitable for resource-constrained devices, we conducted pre-segmentation on the images, roughly segmenting all leaves and setting the background to black. This enhanced the performance and efficiency of segmentation models trained on limited samples against complex field backgrounds.
(2)
Improved YOLOv8-Seg model for accurate and rapid segmentation: To enhance the segmentation accuracy and speed for cotton verticillium wilt leaves and lesions, several improvements were made to the YOLOv8-Seg model. The RFCBAMConv and C2f-RFCBAMConv modules replaced the Conv and C2f modules in the backbone network, addressing the blurry transitions between healthy and diseased regions. The AWDownSample-Lite module replaced the Conv module in the neck network, handling variations in angle and distance by aggregating information within each receptive field. The GSegment segmentation head, replacing the original segmentation head in YOLOv8-Seg, reduced the model parameters and computational complexity while improving the model’s ability to perceive diseased leaves and lesions at different scales.
(3)
The severity levels of cotton verticillium wilt were categorized into six levels, from L0 to L5, based on the proportion of lesion area to diseased-leaf area.

2.2.2. MobileSAM

Recently, Meta AI introduced the Segment Anything Model (SAM), a groundbreaking model for image segmentation. SAM is trained on the extensive SA-1B dataset, which includes over 11 million images and 1.1 billion masks, enabling it to generalize effectively. It utilizes a Transformer-based architecture and is designed to output a segmentation mask for a given input image based on one or several input prompts, such as points, bounding boxes, or segmentation masks [31]. Notably, SAM demonstrates exceptional performance and generalization in common scenes, surpassing the accuracy of prior supervised methods in certain application areas with few samples. Its impact extends across various computer vision applications, including remote sensing segmentation and medical image analysis [32]. In the application scenario of cotton leaf segmentation, the complex backgrounds in cotton fields, including soil, plastic film, and water pipes, pose challenges for visual feature extraction, and the limited sample size has made it difficult to directly segment diseased leaves from images. However, given SAM’s outstanding performance in zero-shot learning, its application in estimating cotton verticillium wilt disease severity levels holds promise for enhancing performance and efficiency.
Unfortunately, SAM tends to segment the entire object or its main parts, which results in unsatisfactory performance when segmenting verticillium wilt on cotton due to the blurry boundary transitions characteristic of this disease. Consequently, SAM can only segment the leaves from images, and struggles to separate the lesions from the diseased leaves. Furthermore, according to the technical specifications for identifying verticillium wilt resistance in cotton and the experience of disease-resistant cotton researchers, the area of the leaf stem is typically not considered in verticillium wilt disease severity level estimation. However, SAM fails to distinguish subtle visual cues like leaf stems, significantly impacting the accuracy of verticillium wilt disease severity level estimation in the context of disease-resistant cotton breeding. Combining SAM with specialized segmentation models like YOLO-Seg, which has been fine-tuned and optimized to address this task, appears to be a more appropriate approach.
Additionally, SAM remains challenging to apply in practice for diseased-leaf segmentation. The primary issue stems from the substantial computational resources required by the Vision Transformer (ViT) models that are integral to SAM’s architecture; further optimization is needed to meet the deployment requirements of edge computing devices. To address this challenge, ref. [33] proposed a “decoupled distillation” approach to distill the ViT image encoder of SAM, resulting in a lightweight version known as MobileSAM. MobileSAM performs satisfactorily on resource-constrained devices such as edge computing devices. Weighing these advantages and disadvantages, to ensure both the speed and accuracy of CVW-Etr, we rely on MobileSAM for image pre-segmentation. We conducted pre-segmentation on images, roughly segmenting all leaves in the image and setting the background to black, which supports the subsequent one-step segmentation of diseased leaves and lesions.
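In essence, the pre-segmentation step reduces to masking out every pixel MobileSAM does not assign to a leaf. A minimal NumPy sketch (the `blacken_background` helper and its boolean masks are illustrative stand-ins for MobileSAM's actual outputs):

```python
import numpy as np

def blacken_background(image: np.ndarray, leaf_masks: list) -> np.ndarray:
    # Keep pixels covered by any leaf mask; set everything else to black.
    # `leaf_masks` stands in for the per-leaf boolean masks a model like
    # MobileSAM would return.
    keep = np.zeros(image.shape[:2], dtype=bool)
    for m in leaf_masks:
        keep |= m
    out = image.copy()
    out[~keep] = 0
    return out

# Toy image: uniform gray, with one 2x2 "leaf" region in the middle.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
pre = blacken_background(img, [mask])
```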

2.2.3. YOLOv8-Seg

YOLOv8-Seg, developed by Ultralytics, represents the latest advancement in the YOLO (You Only Look Once) series tailored for segmentation tasks, debuting on 10 January 2023 [34]. Drawing inspiration from the principles of the YOLACT network, it comprises three core components: backbone, neck, and head. The model employs a CSPDarknet feature extractor as its backbone, with C2f modules replacing the C3 modules of earlier YOLO versions. Following the neck, the segmentation head predicts segmentation masks for the input image. Remarkably, the YOLOv8-Seg model has set new benchmarks in segmentation performance while maintaining exceptional speed and efficiency [35]. Therefore, this paper adopts YOLOv8-Seg as the baseline and proposes further enhancements based on this model.

2.2.4. RFCBAMConv Module and C2f-RFCBAMConv

In images captured from different angles and distances, the location and size of cotton verticillium wilt vary. If convolutional operations use the same parameters in each receptive field to extract information without considering differential information from different locations and the importance of each feature, it can limit the model’s performance in segmenting diseased leaves and lesions, thus affecting the speed and accuracy of disease severity level estimation. While attention mechanisms can address the parameter-sharing issue in convolutional operations, they fail to emphasize the importance of each feature within the receptive field, resulting in insufficient information in the generated attention maps for large-scale convolutions. Additionally, spatial attention mechanisms such as CBAM (the Convolutional Block Attention Module) and CA (Channel Attention) introduce excessive convolution operations and computational burden, rendering them unsuitable for cotton verticillium wilt-diseased leaf and lesion segmentation.
To address these challenges, this study proposes the RFCBAMConv module, which integrates a spatial attention CBAM focusing on receptive field features with convolution operations. This module not only emphasizes the importance of various features within the receptive field, but also resolves the parameter-sharing issue of convolutional kernels, with only a small increase in the parameters and computational cost. The specific structure of the RFCBAMConv module is illustrated in Figure 3. Regarding attention to receptive field features, the RFCBAMConv module assigns different weights to each position and feature channel within the receptive field using a receptive field weight matrix, thereby adjusting the weight distribution of different features within different receptive fields to highlight important disease details. Concerning parameter sharing, RFCBAMConv adaptively adjusts the shape and scope of the receptive field based on the size of the convolutional kernel, enabling more flexible adjustment of convolutional kernel parameters and providing different processing methods for different regions. Larger receptive fields are utilized to capture global information for heavily affected or larger areas of damage, while smaller receptive fields are generated for lightly affected or smaller targets to enhance the segmentation accuracy of small target lesions. Meanwhile, to further emphasize the detailed features of cotton verticillium wilt and reduce information loss, we integrated the RFCBAMConv module with the C2f module, proposing the C2f_RFCBAMConv module, which replaces the bottleneck in the C2f module with RFCBAMConv. The specific structure is shown in Figure 3.
We integrated the RFCBAMConv and C2f_RFCBAMConv modules into the backbone network of YOLOv8-Seg, replacing the original Conv and C2f modules. This enhancement significantly improves the model’s segmentation performance, particularly for cotton Verticillium wilt lesions. RFCBAMConv adaptively adjusts the receptive field to capture multi-scale lesion features, optimizing feature weighting for both small and large diseased areas. Paired with C2f_RFCBAMConv, the model retains and refines crucial features, enhancing segmentation accuracy in complex field conditions.
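The receptive-field weighting idea can be illustrated in isolation: for a single K × K window, a softmax over the raw responses yields a per-position weight matrix summing to 1, so salient positions (e.g., lesion pixels) dominate the aggregated feature. This toy NumPy sketch is our simplification of that idea, not the RFCBAMConv module itself:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def receptive_field_weights(window: np.ndarray) -> np.ndarray:
    # One K x K receptive field: convert raw responses into a per-position
    # weight matrix, replacing the uniform weighting of a shared kernel.
    return softmax(window.ravel()).reshape(window.shape)

# A 3x3 window with one strong response at the center.
win = np.array([[0.0, 0.0, 0.0],
                [0.0, 4.0, 0.0],
                [0.0, 0.0, 0.0]])
w = receptive_field_weights(win)
```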

2.2.5. AWDownSample-Lite

Cotton verticillium wilt disease induces shape variations and color changes in leaves, often leading to blurred transitions between diseased and healthy areas. Moreover, natural leaf changes, aging, or other physiological factors may resemble disease-induced alterations, making it challenging for models to accurately capture disease shapes and boundaries. The neck section of the YOLOv8-Seg model is crucial for integrating features from different scales. However, its downsampling module filters out what it deems unimportant information during downsampling, affecting the effective extraction of leaf and lesion features of cotton verticillium wilt disease.
To address the issue of information loss concerning diseased leaves and lesion features and achieve high-precision models, this study proposes the AWDownSample-Lite module. Inspired by the principle of enlarging the receptive field in the RFAConv design, this module adjusts the receptive field while maximizing the extraction of diseased leaf and lesion features. The specific structure is illustrated in Figure 4, and its implementation steps are as follows:
(1)
Input the feature.
(2)
Extract global information from the input feature through the AvgPool operation.
(3)
Extract information within the receptive field through the Group Conv operation.
(4)
Emphasize the importance of each feature within the receptive field through the SoftMax operation.
(5)
Fuse the extracted features with the spatial features of the receptive field and utilize them for adjusting the convolutional parameter weights.
(6)
Output the feature.
The AWDownSample-Lite module aggregates feature information within each receptive field. This choice ensures that both the global and local context are preserved, helping the model differentiate between disease-induced leaf changes and natural variations due to factors like aging or environmental stress. We integrated the AWDownSample-Lite module into the neck network of the YOLOv8-Seg model, replacing the Conv modules. This enhancement significantly improves the effective extraction of cotton verticillium wilt disease features by the segmentation model, thereby enhancing the accuracy of cotton verticillium wilt disease leaf and lesion segmentation.
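Steps (2)–(5) above amount to softmax-weighted aggregation within each receptive field. The following single-channel NumPy sketch is a deliberately simplified illustration of that idea, not the actual AWDownSample-Lite implementation: each non-overlapping 2 × 2 window is aggregated with softmax weights instead of a plain average, so strong lesion responses survive downsampling:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def weighted_downsample_2x(feat: np.ndarray) -> np.ndarray:
    # feat: (H, W) with even H and W. Each 2x2 receptive field is reduced
    # with softmax weights rather than a uniform average, preserving
    # informative responses during downsampling.
    H, W = feat.shape
    windows = (feat.reshape(H // 2, 2, W // 2, 2)
                   .transpose(0, 2, 1, 3)
                   .reshape(H // 2, W // 2, 4))
    weights = softmax(windows, axis=-1)
    return (windows * weights).sum(axis=-1)

feat = np.zeros((4, 4))
feat[0, 0] = 6.0  # a strong lesion-like response
down = weighted_downsample_2x(feat)
```

By contrast, plain average pooling would dilute the strong response to 1.5 in this example; the softmax weighting keeps it close to its original magnitude.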

2.2.6. GSegment

The YOLOv8-Seg model introduces a decoupled head mechanism, separating convolutional layers from fully connected layers. This technique utilizes the output features from the neck network to predict the category and position of targets through different branches. However, it includes twelve 3 × 3 Conv modules, which, while aiding in improving model convergence and accuracy, introduce a significant number of additional parameters and computational costs.
To enhance computational efficiency, we propose the GSegment module. Inspired by the group convolution design of AlexNet, GSegment divides both the convolutional kernels and the input feature maps into g groups and convolves each group independently (Group Conv), yielding a substantial reduction in parameters and computational complexity. Additionally, the coupling between feature maps obtained through different convolution paths is low, with each path attending to different primary features; this enhances the perception of diseased leaves and lesions of different scales and shapes. The GSegment module replaces each 3 × 3 convolution in the YOLOv8-Seg segmentation head with two 3 × 3 group convolutions, significantly reducing the parameter count and computational complexity of the segmentation model.
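The parameter saving follows directly from the Params formula in Section 3.1: within each of g groups, a grouped convolution connects only C_in/g input channels to C_out/g output channels. A small sketch with illustrative channel counts (g = 4 is our choice for the example, not a value from the paper):

```python
def conv_params(k: int, c_in: int, c_out: int, groups: int = 1) -> int:
    # Standard convolution: Params = K * K * C_in * C_out.
    # Grouped convolution: each of the g groups sees only C_in/g inputs
    # and produces C_out/g outputs, so Params shrink by a factor of g.
    assert c_in % groups == 0 and c_out % groups == 0
    return k * k * (c_in // groups) * (c_out // groups) * groups

standard = conv_params(3, 256, 256)                 # one 3x3 Conv
grouped = 2 * conv_params(3, 256, 256, groups=4)    # two 3x3 Group Convs
```

With g = 4, even replacing one standard convolution with two group convolutions halves the parameter count, which is the trade-off GSegment exploits in the segmentation head.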

2.2.7. Severity Level Estimation Method

CVW-Etr estimates the severity levels of cotton verticillium wilt disease by calculating the proportion of lesion area to diseased-leaf area. Following the technical specifications for evaluating cotton resistance to verticillium wilt disease [36] and recommendations from disease-resistant cotton breeding specialists, severity levels are categorized into six levels, ranging from L0 to L5, as illustrated in Table 1. The intervals of the proportion of lesion area to diseased-leaf area define the severity level.
The specific estimation steps are as follows:
(1)
Segmenting cotton verticillium wilt-diseased leaves and lesions.
(2)
Calculating the number of pixels of the diseased leaves and lesions based on the segment result.
(3)
Computing the proportion of the number of pixels of lesions to diseased leaves, which serves as the basis for grading the severity level of cotton verticillium wilt disease. The calculation formula is presented as Equation (1):
P = C_Disease / C_Leaf
where C_Disease represents the number of pixels in the segmented lesions, C_Leaf represents the number of pixels in the segmented diseased leaves, and P denotes the proportion of lesions to diseased leaves.
(4)
Estimating the cotton verticillium wilt disease severity level based on the proportion of lesions to diseased leaves.
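The steps above can be sketched as follows. The interval bounds in `LEVEL_UPPER_BOUNDS` are placeholders of our own choosing, since the actual L0–L5 intervals are defined in Table 1:

```python
import numpy as np

# Illustrative upper bounds of P for levels L0..L5 -- NOT the intervals
# from Table 1, which defines the actual grading standard.
LEVEL_UPPER_BOUNDS = [0.0, 0.10, 0.25, 0.50, 0.75, 1.0]

def severity_level(lesion_mask: np.ndarray, leaf_mask: np.ndarray) -> str:
    c_disease = int(lesion_mask.sum())  # pixels segmented as lesion
    c_leaf = int(leaf_mask.sum())       # pixels segmented as diseased leaf
    p = c_disease / c_leaf              # Equation (1): P = C_Disease / C_Leaf
    for level, bound in enumerate(LEVEL_UPPER_BOUNDS):
        if p <= bound:
            return f"L{level}"
    return "L5"

leaf = np.ones((10, 10), dtype=bool)    # 100 leaf pixels
lesion = np.zeros((10, 10), dtype=bool)
lesion[:2, :] = True                    # 20 lesion pixels -> P = 0.2
```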

2.2.8. APP Development

In large-scale assessments of cotton Verticillium wilt, evaluating numerous cotton varieties requires surveyors to complete assessments quickly and sync the results to the Cloud for further analysis. Processing data solely on smartphones proved inefficient, leading to increased wait times. Additionally, at tropical cotton breeding sites, where temperatures often exceed 25 °C, field tests revealed that running models locally caused smartphone processors to overheat and even shut down, significantly slowing the survey process and reducing efficiency. To address this, we developed an application for estimating cotton disease severity, implementing a solution that transfers data via HTTPS to a server or edge computing device, improving processing speed and enhancing system stability.
This application consists of a smartphone terminal and a server terminal, as shown in Figure 5. The smartphone terminal, built on the Uni-app framework, includes two modules: Information Acquisition and Result Display. In the Information Acquisition module, users can capture images of diseased cotton plants and input relevant data, such as the cotton variety, field details, and survey information. These inputs are automatically uploaded to the server via HTTPS for processing. The Result Display module allows users to filter and query the processed results based on criteria such as cotton variety, field location, and disease severity, providing clear and detailed disease assessments.
The server terminal, built on the Micronaut framework, performs key tasks such as segmenting diseased leaves and lesions and estimating disease severity. Once processing is complete, the results are transmitted back to the app for user access.

2.3. Model Training Procedures

This experiment was conducted on a Dell desktop workstation running Windows 11, equipped with a 12th Gen Intel Core i5-12500 processor (3.00 GHz), 32 GB RAM, and a 1 TB SSD. GPU acceleration was utilized through an NVIDIA GeForce RTX 3080 (10 GB memory). The software environment included Python 3.7.16, PyTorch 1.7.0, Torchvision 0.8.2, and CUDA 11.0.
The training process spanned 200 epochs, with an early stopping mechanism triggered if performance stagnated for 50 consecutive epochs to prevent overfitting. A batch size of 8 was used, and an Adam optimizer was applied with an initial learning rate of 1 × 10−3, which decayed to 1 × 10−5. The momentum was set at 0.937 with no weight decay, and the input image resolution was 640 × 640 pixels.
To prevent errors caused by overlapping segments in diseased-leaf segmentation, the overlap_mask parameter was set to False, so that overlapping instances keep separate masks rather than being merged. Additionally, we set the mask_ratio to 1, allowing the segmentation mask to train at the original resolution, enhancing segmentation accuracy for diseased leaves and lesions.
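The training settings above map roughly onto the keyword arguments of the Ultralytics YOLOv8-Seg trainer, as sketched below. The dataset YAML name is a placeholder, and the actual train call is left commented out:

```python
# Hyperparameters from Section 2.3 expressed as Ultralytics trainer arguments.
train_cfg = dict(
    data="cvw_dataset.yaml",  # hypothetical dataset config path
    epochs=200,
    patience=50,          # early stopping after 50 stagnant epochs
    batch=8,
    imgsz=640,
    optimizer="Adam",
    lr0=1e-3,             # initial learning rate
    lrf=1e-2,             # final LR factor: 1e-3 * 1e-2 = 1e-5
    momentum=0.937,
    weight_decay=0.0,     # no weight decay
    overlap_mask=False,   # keep per-instance masks separate
    mask_ratio=1,         # train masks at original resolution
)

# from ultralytics import YOLO
# YOLO("yolov8s-seg.pt").train(**train_cfg)

final_lr = train_cfg["lr0"] * train_cfg["lrf"]
```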

3. Results

3.1. Performance Evaluation

To evaluate the model’s performance, we used several metrics, including Precision, Recall, mAP@0.5, the number of parameters (Params), and computational cost (FLOPs).
Precision is the ratio of correctly predicted positive samples to all predicted positive samples, and is calculated as follows:
Precision = TP / (TP + FP)
where TP denotes true positives and FP denotes false positives.
Recall measures the proportion of actual positive samples correctly identified by the model, calculated as follows:
Recall = TP / (TP + FN)
where FN represents false negatives.
mAP (mean average precision) is calculated from a precision–recall curve and defined as follows:
mAP = (1/N) Σ_{i=1}^{N} AP_i
where AP_i is the average precision for class i and N is the number of classes.
Here, mAP@0.5 refers to the mAP computed at an Intersection over Union (IoU) threshold of 0.5.
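A minimal sketch of these metrics, computed from raw counts and per-class AP values (the function names are ours):

```python
def precision(tp: int, fp: int) -> float:
    # Correct positive predictions over all positive predictions.
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp: int, fn: int) -> float:
    # Correct positive predictions over all actual positives.
    return tp / (tp + fn) if tp + fn else 0.0

def mean_average_precision(ap_per_class):
    # mAP is the mean of the per-class average precisions.
    return sum(ap_per_class) / len(ap_per_class)

# Averaging the per-class AP@0.5 values from Table 2 recovers the overall mAP.
map50 = mean_average_precision([0.995, 0.863])  # diseased leaves, lesions
```

With the Table 2 values, `map50` evaluates to 0.929, consistent with the reported overall mAP@0.5 of 92.9%.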
The number of parameters (Params) reflects the model’s complexity and is given as follows:
Params = K × K × C_in × C_out
where K is the convolution kernel size, and C_in and C_out are the numbers of input and output channels of a convolutional layer, respectively.
FLOPs (floating-point operations) measures the computational cost and, for a convolutional layer, is calculated as follows:
FLOPs = K × K × C_in × C_out × H × W
where H × W is the size of the output feature map.
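The two cost formulas translate directly into code. This sketch covers a single convolutional layer and ignores bias terms, a simplification consistent with the formulas above:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    # Weight count of a k x k convolution (bias terms ignored).
    return k * k * c_in * c_out

def conv_flops(k: int, c_in: int, c_out: int, h: int, w: int) -> int:
    # One multiply-accumulate per weight per output position on an h x w map.
    return conv_params(k, c_in, c_out) * h * w
```

For example, a 3 × 3 convolution mapping 64 input channels to 128 output channels has 3 × 3 × 64 × 128 = 73,728 parameters.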

3.2. Segmentation Performance of Diseased Leaves and Lesions

Table 2 illustrates the segmentation performance of diseased leaves and lesions using CVW-Etr. The mean average precision (mAP) for diseased-leaf segmentation reaches 99.5%, surpassing the results reported in the existing literature for field environments [37,38,39]. This high performance is attributed to CVW-Etr’s integration of YOLOv8-Seg with MobileSAM, leveraging MobileSAM’s exceptional performance with limited sample sizes, and the fine-tuning and optimization of YOLOv8-Seg. For lesion segmentation, CVW-Etr incorporates the RFCBAMConv, C2f-RFCBAMConv, AWDownSample-Lite, and GSegment modules to handle lesions with blurred transitions effectively, resulting in high-precision segmentation.

3.3. Ablation Experiment

To further illustrate the effectiveness of the proposed enhancements, we conducted ablation experiments using YOLOv8-Seg as the baseline model to evaluate the contribution of each module in detail. The experimental results are summarized in Table 3.
Impact of RFCBAMConv, C2f-RFCBAMConv, and AWDownSample-Lite: Integrating RFCBAMConv, C2f-RFCBAMConv, and AWDownSample-Lite into the YOLOv8-Seg model resulted in a 1.2% increase in mAP. These modules improve model accuracy by handling blurry transitions between healthy and diseased regions and addressing variations in angle and distance during image collection.
Impact of GSegment: After integrating GSegment, the model’s parameter count and computational complexity decreased by 21.21% and 19.84%, respectively, while mAP increased by 0.7%. This suggests that GSegment effectively reduces model parameters and computational complexity, optimizing the model’s performance in accurately estimating cotton verticillium wilt severity.
Overall Effects: The enhanced version of YOLOv8-Seg, incorporating the RFCBAMConv, C2f-RFCBAMConv, AWDownSample-Lite, and GSegment modules, outperformed the original YOLOv8-Seg in terms of mAP, computational complexity, and parameter count. The improved YOLOv8-Seg achieved a 1.9% increase in mAP@0.5, reduced the parameter count by 18.75%, and lowered the computational complexity by 15.83%.
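The reductions quoted above follow from the baseline and final figures (3.2M → 2.6M parameters and 12.0G → 10.1G FLOPs, taking the Table 4 values); a quick check:

```python
def pct_reduction(before: float, after: float) -> float:
    # Relative reduction, as a percentage of the starting value.
    return (before - after) / before * 100

params_drop = pct_reduction(3.2, 2.6)    # parameters, in millions
flops_drop = pct_reduction(12.0, 10.1)   # FLOPs, in billions
```

These evaluate to 18.75% and 15.83%, matching the reductions reported for the improved YOLOv8-Seg.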

3.4. Performance Comparison with the State-of-the-Art Segmentation Models

To assess the effectiveness of our proposed method, we performed comparative experiments against well-known instance segmentation models, including YOLACT, Mask R-CNN, and YOLOv8-Seg. The same dataset was used for all models, comprising 9050 training images, 1130 validation images, and 1130 test images. Consistent experimental conditions were applied across all models to ensure a fair comparison. The results are presented in Figure 6 and Table 4.
CVW-Etr achieves a mAP@0.5 of 92.9%, outperforming YOLACT, Mask R-CNN, and YOLOv8-Seg in terms of segmentation accuracy. Additionally, our proposed model exhibits lower FLOPs and parameter counts, specifically 10.1G and 2.6M, respectively.
Several factors contribute to these results. Firstly, Mask R-CNN, a classic two-stage instance segmentation model, achieves high segmentation accuracy but requires a higher parameter count and increased computational resources, making it challenging to deploy on resource-constrained devices. Secondly, YOLACT, a one-stage instance segmentation model, excels in terms of parameter count and computational efficiency but lacks accuracy due to its insufficient prototype mask response. Finally, compared to the original YOLOv8-Seg, CVW-Etr achieves a reduction in both model size and computational costs while preserving comparable segmentation accuracy. This improvement is primarily attributed to the integration of the RFCBAMConv, C2f-RFCBAMConv, AWDownSample-Lite, and GSegment modules.
To further substantiate the performance of the CVW-Etr model, we randomly selected segmentation results from various instance segmentation models, as displayed in Figure 7. In these visualizations, green represents the segmentation of diseased leaves, yellow represents the segmentation of lesions, and orange or the absence of a mask indicates missegmentation.
In summary, our results indicate that the proposed model outperforms current mainstream instance segmentation models in three key respects: model size, segmentation accuracy, and computational cost. This suggests that CVW-Etr achieves a favorable trade-off between accuracy and efficiency, making it suitable for estimating the severity levels of cotton verticillium wilt.

3.5. Severity Level Estimation Results

This study utilized 113 non-augmented images of cotton verticillium wilt disease to compare the severity levels estimated by the proposed method with those manually estimated by disease-resistant cotton breeding experts, thereby computing the model’s estimation accuracy. The experimental results are presented in Table 5. To further substantiate the performance of the CVW-Etr model, we randomly selected segmentation results from all testing samples, as displayed in Figure 8. In these visualizations, the second column shows the segmentation results for diseased leaves (non-black areas), and the third column highlights the lesion segmentation results with orange masks.
To further validate the model, additional tests were conducted to evaluate its consistency across various environmental conditions and leaf appearances. The CVW-Etr model was tested on images with different lighting conditions, backgrounds, and leaf orientations, demonstrating consistent accuracy and robustness. Consequently, CVW-Etr proves to be reliable in estimating the severity levels of cotton verticillium wilt disease.

3.6. Field Validation Experiment

To validate the effectiveness of the proposed method, a field validation experiment was conducted from 10 to 17 December 2023 at the Potianyang Base in Yazhou District, Sanya City, Hainan Province, China. The validation involved 24 participants, including disease-resistant cotton breeding practitioners, experts, local cotton farmers, and graduate students specializing in crop protection or disease-resistance breeding.
Participants were divided into two groups: a manual estimation group consisting of four practitioners and local farmers who relied on their experience and standards to manually estimate cotton verticillium wilt severity, and an automatic estimation group comprising four graduate students who used the developed application for automated estimation. The validation process was as follows:
(1) Both groups independently estimated verticillium wilt severity in 300 cotton plants; in each plant, eight leaves were inspected from top to bottom.
(2) The time taken for estimation was recorded for each group.
(3) Sixteen disease-resistant cotton breeding experts conducted secondary estimations and evaluated the estimation accuracy of each group.
The experimental results, presented in Table 6, show that the automatic group required significantly less time than the manual group. Because of the large number of cotton plants assessed, the manual group’s efficiency decreased and its estimation errors increased after about 20 min, whereas the automatic group maintained higher efficiency and accuracy. These results indicate that CVW-Etr is well suited to accurate, large-scale estimation of cotton verticillium wilt severity, particularly in the context of disease-resistant cotton breeding.

4. Discussion

4.1. Contributions of This Study

Estimating cotton disease severity using deep learning has been extensively researched with two primary methods: spectral analysis and UAV-based approaches. However, both methods have notable limitations.
Previous studies [16,19,40] have primarily employed spectrometry-based methods. While these methods are effective for estimating disease severity, they face challenges in real field conditions, such as varying illumination and soil interference, which can reduce the accuracy and reliability of the spectral data. Furthermore, the high cost of spectrometric equipment and its susceptibility to environmental conditions limit its use in mobile applications.
Similarly, the studies in [41,42,43] used UAV-based approaches to estimate cotton disease severity. However, these methods often lack the precision needed to estimate the severity level of individual breeding materials, which is critical in disease-resistance breeding applications. Despite extensive research along both lines, a gap remains in segmenting diseased leaves and lesions for precise cotton disease severity estimation, especially in the context of disease-resistance breeding. This study aims to fill this gap with the introduction of CVW-Etr, which offers the following advantages:
(1) Enhanced Performance and Efficiency: To improve segmentation performance with limited datasets in complex field conditions, CVW-Etr uses MobileSAM for image pre-segmentation, which enhances model performance even when only a small dataset is available.
(2) Integrated Segmentation Modules: To address challenges such as blurry transitions between healthy and diseased regions, variations in shooting angle and distance, and the need for model optimization, CVW-Etr incorporates the RFCBAMConv, C2f-RFCBAMConv, AWDownSample-Lite, and GSegment modules into the YOLOv8-Seg model for more accurate segmentation.
(3) High-Precision Severity Estimation: Based on the segmentation results for diseased leaves and lesions, CVW-Etr classifies cotton Verticillium wilt severity into six levels (L0 to L5) by calculating the ratio of lesion area to diseased-leaf area. This method meets the high-precision requirements of severity estimation in disease-resistance breeding applications.

4.2. Limitations and Future Prospects

While the proposed method has shown promising results, several limitations must be addressed in future research:
(1) Challenges with Severe Infections: Cotton leaves severely infected with Verticillium wilt (L4 or L5) often exhibit significant damage or curling. Although CVW-Etr can still provide accurate estimates because the lesion areas are large, segmenting severely damaged leaves may introduce errors. Future work will focus on improving severity estimates for these cases by incorporating image recognition techniques.
(2) Data Limitations: This study used images from Hebei and Hainan provinces for training and validation, with field verification limited to Hainan. Given the scarcity of publicly available datasets for cotton Verticillium wilt, future research will expand our dataset by integrating the detection model of [44] with agricultural inspection robots to collect data under diverse conditions. We also plan to involve human experts to verify and correct model predictions and to use techniques such as conditional Generative Adversarial Networks (GANs) to generate synthetic images, augmenting the dataset and improving model robustness.
(3) Scope of CVW-Etr: Currently, CVW-Etr estimates only the severity of cotton Verticillium wilt and does not address other cotton diseases. Future research will focus on extending the model to additional cotton diseases.
Despite these limitations, CVW-Etr provides a valuable technical reference for estimating cotton Verticillium wilt severity in complex field environments.

5. Conclusions

This study presents CVW-Etr, a precise method for estimating cotton verticillium wilt severity. CVW-Etr categorizes severity into six levels (L0 to L5) based on the proportion of segmented lesions to diseased leaves. By integrating MobileSAM with the YOLOv8-Seg model, CVW-Etr enhances performance and efficiency, even with limited samples in complex field backgrounds. It incorporates the RFCBAMConv, C2f-RFCBAMConv, AWDownSample-Lite, and GSegment modules to handle blurry transitions between healthy and diseased regions and variations in angle and distance, and to optimize the model’s parameter size and computational complexity. Our experimental findings demonstrate CVW-Etr’s effectiveness in segmenting diseased leaves and lesions, achieving a mean average precision (mAP) of 92.90% and an average accuracy of 92.92% in disease severity estimation with only 2.6M parameters and 10.1G FLOPS. The main advantages of CVW-Etr over existing methods include its ability to handle complex field backgrounds, its efficiency with limited data, and its lower computational cost. Despite CVW-Etr’s robustness in estimating cotton Verticillium wilt severity and its valuable contributions to disease-resistant cotton breeding, the model currently has some limitations. For instance, it is specifically designed for cotton Verticillium wilt and has not yet been adapted to other cotton diseases or crop types. Future research will focus on expanding these severity estimation methods to cover additional cotton diseases and potentially different crops to broaden the model’s applicability. Additionally, we aim to develop a comprehensive management system for disease-resistant cotton breeding, integrating disease detection, identification, and severity estimation into a unified platform. This system will provide an efficient tool for the accurate management of disease-resistant breeding materials.

Author Contributions

P.P.: writing—original draft preparation, conceptualization, and methodology. Q.Y.: data curation. J.S.: data curation. L.H. (Lin Hu): writing—review and editing. S.Z.: validation. L.H. (Longyu Huang): formal analysis. G.Y.: project administration. G.Z.: project administration and funding acquisition. J.Z.: writing—review and editing, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China (No. 2022YFF0711805, No. 2022YFF0711801, No. 2021YFF0704204), the Project of Sanya Yazhou Bay Science and Technology City (No. SCKJ-JYRC-2023-45), the National Natural Science Foundation of China (No. 31971792, No. 32160421), the Innovation Project of the Chinese Academy of Agricultural Sciences (No. CAAS-ASTIP-2024-AII, No. ZDXM23011), the Special Fund of the Chinese Central Government for Basic Scientific Research Operations in Commonweal Research Institutes (No. JBYW-AII-2024-05), and the Nanfan Special Project of CAAS (Grant No. YBXM2312).

Data Availability Statement

The model weights, the codes, and any derivatives of the YOLOv8 component in this study are licensed under the Affero General Public License (AGPLv3). In adherence to the AGPLv3 license, the codes, the optimal model, and some of the data that were used and analyzed in this study can be accessed on the following website: https://github.com/cn-panpan/CVW-Etr (accessed on 22 July 2024). The data used to support this study are available from the corresponding author upon request.

Acknowledgments

Thanks are given to the editors and reviewers of the journal Plants.

Conflicts of Interest

The authors declare that this research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

1. Huang, G.; Huang, J.-Q.; Chen, X.-Y.; Zhu, Y.-X. Recent Advances and Future Perspectives in Cotton Research. Annu. Rev. Plant Biol. 2021, 72, 437–462.
2. Chi, B.-J.; Zhang, D.-M.; Dong, H.-Z. Control of cotton pests and diseases by intercropping: A review. J. Integr. Agric. 2021, 20, 3089–3100.
3. Carpenter, C.W. Wilt Diseases of Okra and the Verticillium-Wilt Problem; US Government Printing Office: Washington, DC, USA, 1918.
4. Zhu, Y.; Zhao, M.; Li, T.; Wang, L.; Liao, C.; Liu, D.; Zhang, H.; Zhao, Y.; Liu, L.; Ge, X.; et al. Interactions between Verticillium dahliae and cotton: Pathogenic mechanism and cotton resistance mechanism to Verticillium wilt. Front. Plant Sci. 2023, 14, 1174281.
5. Dadd-Daigle, P.; Kirkby, K.; Chowdhury, P.R.; Labbate, M.; Chapman, T.A. The Verticillium wilt problem in Australian cotton. Australas. Plant Pathol. 2021, 50, 129–135.
6. Shaban, M.; Miao, Y.; Ullah, A.; Khan, A.Q.; Menghwar, H.; Khan, A.H.; Ahmed, M.M.; Tabassum, M.A.; Zhu, L. Physiological and molecular mechanism of defense in cotton against Verticillium dahliae. Plant Physiol. Biochem. 2018, 125, 193–204.
7. Zheng, Y.; Xue, Q.-Y.; Xu, L.-L.; Xu, Q.; Lu, S.; Gu, C.; Guo, J.-H. A screening strategy of fungal biocontrol agents towards Verticillium wilt of cotton. Biol. Control 2011, 56, 209–216.
8. Huang, J.; Li, H.; Yuan, H. Effect of organic amendments on Verticillium wilt of cotton. Crop Prot. 2006, 25, 1167–1173.
9. Egan, L.M.; Stiller, W.N. The Past, Present, and Future of Host Plant Resistance in Cotton: An Australian Perspective. Front. Plant Sci. 2022, 13, 895877.
10. Wheeler, T.A.; Woodward, J.E. Field assessment of commercial cotton cultivars for Verticillium wilt resistance and yield. Crop Prot. 2016, 88, 1–6.
11. Zhou, H.; Fang, H.; Sanogo, S.; Hughs, S.E.; Jones, D.C.; Zhang, J. Evaluation of Verticillium wilt resistance in commercial cultivars and advanced breeding lines of cotton. Euphytica 2013, 196, 437–448.
12. Gao, J.; Westergaard, J.C.; Sundmark, E.H.R.; Bagge, M.; Liljeroth, E.; Alexandersson, E. Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning. Knowl.-Based Syst. 2021, 214, 106723.
13. Nutter, F., Jr.; Gleason, M.; Jenco, J.; Christians, N. Assessing the accuracy, intra-rater repeatability, and inter-rater reliability of disease assessment systems. Phytopathology 1993, 83, 806–812.
14. Pan, P.; Guo, W.; Zheng, X.; Hu, L.; Zhou, G.; Zhang, J. Xoo-YOLO: A detection method for wild rice bacterial blight in the field from the perspective of unmanned aerial vehicles. Front. Plant Sci. 2023, 14, 1256545.
15. Pan, P.A.; Jianhua, Z.H.; Xiaoming, Z.H.; Guomin, Z.H.; Lin, H.U.; Quan, F.E.; Xiujuan, C.H. Research progress of deep learning in intelligent identification of disease resistance of crops and their related species. Acta Agric. Zhejiangensis 2023, 35, 1993–2012.
16. Kang, X.; Huang, C.; Zhang, L.; Yang, M.; Zhang, Z.; Lyu, X. Assessing the severity of cotton Verticillium wilt disease from in situ canopy images and spectra using convolutional neural networks. Crop J. 2022, 11, 933–940.
17. Abdalla, A.; Wheeler, T.A.; Dever, J.; Lin, Z.; Arce, J.; Guo, W. Assessing fusarium oxysporum disease severity in cotton using unmanned aerial system images and a hybrid domain adaptation deep learning time series model. Biosyst. Eng. 2024, 237, 220–231.
18. Chen, B.; Li, S.; Wang, K.; Zhou, G.; Bai, J. Evaluating the severity level of cotton Verticillium using spectral signature analysis. Int. J. Remote Sens. 2012, 33, 2706–2724.
19. Zhang, N.; Zhang, X.; Shang, P.; Ma, R.; Yuan, X.; Li, L.; Bai, T. Detection of Cotton Verticillium Wilt Disease Severity Based on Hyperspectrum and GWO-SVM. Remote Sens. 2023, 15, 3373.
20. Zhang, Y.; Zhou, G.; Chen, A.; He, M.; Li, J.; Hu, Y. A precise apple leaf diseases detection using BCTNet under unconstrained environments. Comput. Electron. Agric. 2023, 212, 108132.
21. Rahman, K.S.; Rakib, R.I.; Salehin, M.M.; Ali, R.; Rahman, A. Assessment of paddy leaves disease severity level using image processing technique. Smart Agric. Technol. 2024, 7, 100410.
22. Zhang, J.-H.; Kong, F.-T.; Wu, J.-Z.; Han, S.-Q.; Zhai, Z.-F. Automatic image segmentation method for cotton leaves with disease under natural environment. J. Integr. Agric. 2018, 17, 1800–1814.
23. Pang, J.; Bai, Z.-Y.; Lai, J.-C.; Li, S.-K. Automatic Segmentation of Crop Leaf Spot Disease Images by Integrating Local Threshold and Seeded Region Growing. In Proceedings of the 2011 International Conference on Image Analysis and Signal Processing (IASP), Wuhan, China, 21–23 October 2011; pp. 590–594.
24. Banerjee, D.; Kukreja, V.; Hariharan, S.; Jain, V.; Dutta, S. An Intelligent Framework for Grassy Shoot Disease Severity Detection and Classification in Sugarcane Crop. In Proceedings of the 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, 4–6 May 2023; pp. 849–854.
25. Liang, X. Few-shot cotton leaf spots disease classification based on metric learning. Plant Methods 2021, 17, 1–11.
26. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
27. Shu, H.; Liu, J.; Hua, Y.; Chen, J.; Zhang, S.; Su, M.; Luo, Y. A grape disease identification and severity estimation system. Multimed. Tools Appl. 2023, 82, 23655–23672.
28. Gao, J.; Guo, M.; Yin, X.; Wang, L. Segmentation and Grading Method of Potato Late-Blight on field by Improved Mask R-CNN. In Proceedings of the 20th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), Madrid, Spain, 11–14 September 2023; pp. 14–19.
29. Divyanth, L.; Ahmad, A.; Saraswat, D. A two-stage deep-learning based segmentation model for crop disease quantification based on corn field imagery. Smart Agric. Technol. 2023, 3, 100108.
30. Wang, C.; Du, P.; Wu, H.; Li, J.; Zhao, C.; Zhu, H. A cucumber leaf disease severity classification method based on the fusion of DeepLabV3+ and U-Net. Comput. Electron. Agric. 2021, 189, 106373.
31. Carraro, A.; Sozzi, M.; Marinello, F. The Segment Anything Model (SAM) for accelerating the smart farming revolution. Smart Agric. Technol. 2023, 6, 100367.
32. Ji, W.; Li, J.; Bi, Q.; Liu, T.; Li, W.; Cheng, L. Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications. arXiv 2023, arXiv:2304.05750.
33. Zhang, C.; Han, D.; Qiao, Y.; Kim, J.U.; Bae, S.H.; Lee, S.; Hong, C.S. Faster Segment Anything: Towards Lightweight SAM for Mobile Applications. arXiv 2023, arXiv:2306.14289.
34. Sohan, M.; Sai Ram, T.; Rami Reddy, C.V. A Review on YOLOv8 and Its Advancements. In Data Intelligence and Cognitive Informatics; Jacob, I.J., Piramuthu, S., Eds.; Springer Nature: Singapore, 2022; pp. 529–545.
35. Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716.
36. General Administration of Quality Supervision. Technical Specification for Evaluating Resistance of Cotton to Diseases and Insect Pests—Part 5: Verticillium Wilt; China Standards Press: Beijing, China, 2009.
37. Li, L.; Wang, B.; Li, Y.; Yang, H. Diagnosis and Mobile Application of Apple Leaf Disease Degree Based on a Small-Sample Dataset. Plants 2023, 12, 786.
38. Zhu, S.; Ma, W.; Lu, J.; Ren, B.; Wang, C.; Wang, J. A novel approach for apple leaf disease image segmentation in complex scenes based on two-stage DeepLabv3+ with adaptive loss. Comput. Electron. Agric. 2023, 204, 107539.
39. Li, K.; Song, Y.; Zhu, X.; Zhang, L. A severity estimation method for lightweight cucumber leaf disease based on DM-BiSeNet. Inf. Process. Agric. 2024.
40. Yang, M.; Kang, X.; Qiu, X.; Ma, L.; Ren, H.; Huang, C.; Zhang, Z.; Lv, X. Method for early diagnosis of verticillium wilt in cotton based on chlorophyll fluorescence and hyperspectral technology. Comput. Electron. Agric. 2024, 216, 108497.
41. Ma, R.; Zhang, N.; Zhang, X.; Bai, T.; Yuan, X.; Bao, H.; He, D.; Sun, W.; He, Y. Cotton Verticillium wilt monitoring based on UAV multispectral-visible multi-source feature fusion. Comput. Electron. Agric. 2024, 217, 108628.
42. Chen, B.; Wang, J.; Wang, Q.; Yu, Y.; Song, Y.; Sun, L.; Han, H.; Wang, F. Yield Loss Estimation of Verticillium Wilt Cotton Field Based on UAV Multi-spectral and Regression Model. In Proceedings of the 2022 Global Conference on Robotics, Artificial Intelligence and Information Technology (GCRAIT), Chicago, IL, USA, 30–31 July 2022; pp. 62–67.
43. Chen, B.; Wang, Q.; Wang, J.; Liu, T.; Yu, Y.; Song, Y.; Chen, Z.; Bai, Z. The Estimate Severity Level of Cotton Verticillium Wilt Using New Multi-spectra of UAV Comprehensive Monitoring Disease Index. In Proceedings of the 2023 International Seminar on Computer Science and Engineering Technology (SCSET), New York, NY, USA, 29–30 April 2023; pp. 520–527.
44. Pan, P.; Shao, M.; He, P.; Hu, L.; Zhao, S.; Huang, L.; Zhou, G.; Zhang, J. Lightweight cotton diseases real-time detection model for resource-constrained devices in natural environments. Front. Plant Sci. 2024, 15, 1383863.
Figure 1. Representative sample images from the cotton Verticillium wilt dataset.
Figure 2. Overall structural diagram of the disease severity estimation model.
Figure 3. Architectural Diagram of C2f-RFCBAMConv and RFCBAMConv modules.
Figure 4. Architecture of AWDownSample-Lite Module.
Figure 5. System architecture of the disease severity estimation app.
Figure 6. Comparison of segmentation performance across different models.
Figure 7. Comparison of segmentation results from different models.
Figure 8. Visual comparison of disease severity estimation results.
Table 1. Criteria for estimating disease severity levels in cotton verticillium wilt.
| Disease Severity of Leaves | Proportion of Lesions to Diseased Leaves (p) |
|---|---|
| L0 | p = 0 |
| L1 | 0 < p ≤ 0.15 |
| L2 | 0.15 < p ≤ 0.25 |
| L3 | 0.25 < p ≤ 0.40 |
| L4 | 0.40 < p ≤ 0.60 |
| L5 | 0.60 < p ≤ 1.00 |
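The thresholds in Table 1 translate directly into a grading function; a minimal sketch (the function name is our own):

```python
def severity_level(p: float) -> str:
    """Map the lesion-to-diseased-leaf area ratio p to a level L0-L5 (Table 1)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("ratio must lie in [0, 1]")
    if p == 0.0:
        return "L0"
    # Each level covers the half-open interval up to and including its bound.
    for level, upper in (("L1", 0.15), ("L2", 0.25), ("L3", 0.40),
                         ("L4", 0.60), ("L5", 1.00)):
        if p <= upper:
            return level
```

For example, a leaf whose lesions cover half its area (p = 0.5) falls in L4.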
Table 2. Segmentation performance for diseased leaves and lesions.
| Class | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|
| Diseased leaves | 97.7% | 100% | 99.5% | 95.6% |
| Lesions | 88.3% | 82.0% | 86.3% | 54.8% |
| All | 93.0% | 91.0% | 92.9% | 75.2% |
Table 3. Performance comparison of ablation experiment results.
| Baseline | RFCBAMConv | AWDownSample-Lite | GSegment | mAP@0.5 | FLOPS/G | Params/M |
|---|---|---|---|---|---|---|
| ✓ | | | | 91.0% | 12.1 | 3.2 |
| ✓ | ✓ | | | 91.8% | 12.7 | 3.3 |
| ✓ | | ✓ | | 91.9% | 12.1 | 4.0 |
| ✓ | | | ✓ | 90.7% | 9.7 | 2.6 |
| ✓ | ✓ | ✓ | | 92.2% | 12.7 | 3.2 |
| ✓ | ✓ | | ✓ | 91.1% | 10.2 | 2.7 |
| ✓ | | ✓ | ✓ | 90.8% | 9.8 | 2.6 |
| ✓ | ✓ | ✓ | ✓ | 92.9% | 10.3 | 2.6 |
Table 4. Segmentation performance comparison of different models.
| Models | mAP@0.5 | FLOPS/G | Params/M |
|---|---|---|---|
| YOLACT | 64.8% | 96.4 | 30.7 |
| Mask R-CNN | 91.5% | 149.0 | 44.7 |
| YOLOv8-Seg | 91.0% | 12.0 | 3.2 |
| CVW-Etr | 92.9% | 10.1 | 2.6 |
Table 5. Accuracy of disease severity estimation at different levels.
| Disease Severity Level | Number | Correct Estimations | Accuracy/% |
|---|---|---|---|
| L0 | 20 | 18 | 90.00% |
| L1 | 20 | 18 | 90.00% |
| L2 | 20 | 19 | 95.00% |
| L3 | 14 | 13 | 92.86% |
| L4 | 19 | 17 | 89.47% |
| L5 | 20 | 20 | 100% |
| All | 113 | 105 | 92.92% |
Table 6. Field validation comparison of manual and CVW-Etr methods.
| Method | Accuracy/% | Time/s |
|---|---|---|
| Manual | 91.33% | 1770 |
| Automatic (CVW-Etr) | 92.66% | 1330 |