Article

Detecting Botrytis Cinerea Control Efficacy via Deep Learning

1 School of Software, Jiangxi Agricultural University, Nanchang 330045, China
2 Faculty of Computer Science and Technology, Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg 197022, Russia
3 Institute of Applied Physics, Jiangxi Academy of Sciences, Nanchang 330096, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(11), 2054; https://doi.org/10.3390/agriculture14112054
Submission received: 10 October 2024 / Revised: 9 November 2024 / Accepted: 12 November 2024 / Published: 14 November 2024
(This article belongs to the Section Digital Agriculture)

Abstract

This study proposes a deep learning-based method for monitoring the growth of Botrytis cinerea and evaluating the effectiveness of control measures. It aims to address the limitations of traditional statistical analysis methods in capturing non-linear relationships and multi-factor synergistic effects. The method integrates colony growth environment data and images as network inputs, achieving real-time prediction of colony area through an improved RepVGG network. The innovations include (1) combining a channel attention mechanism, a multi-head self-attention mechanism, and a multi-scale feature extractor to improve prediction accuracy and (2) introducing the Shapley value algorithm to achieve a precise quantitative analysis of the contribution of environmental variables to colony growth. Experimental results show that the validation loss of this method reaches 0.007, with a mean absolute error of 0.0148, outperforming other comparative models. This study enriches the theory of gray mold control and provides information-technology support for optimizing and selecting gray mold inhibitors.

1. Introduction

Botrytis cinerea is a widely distributed pathogenic fungus that can infect many crops, such as vegetables, fruit trees, and flowers, leading to gray mold disease after infection [1]. Gray mold is one of the major fungal diseases affecting crops, capable of infecting over 470 plant species, including Solanaceae, Cucurbitaceae, and Rosaceae [2]. When the disease occurs, Botrytis cinerea grows gray, fluffy mycelium and spores on the plant surface, causing fruits, leaves, and flowers to soften [3]. Botrytis cinerea can simultaneously infect multiple parts of the plant, including stems, leaves, flowers, and fruits, making preventing and controlling gray mold particularly crucial and challenging [4]. This disease causes global economic losses exceeding $10 billion annually. In the wine industry alone, grape yield can decrease by over 20% in years of severe gray mold infection [5]. Grapes in storage are also frequently affected by gray mold, with losses reaching up to 50% in severe cases. In China, gray mold affects agricultural and socio-economic development to varying degrees each year [6]. As one of the major diseases affecting critical economic crops, gray mold is prone to outbreaks under high temperature and humidity conditions, spreading rapidly with strong pathogenicity and severely impacting crop yields [7]. In recent years, climate changes in some main production areas have led to changes in precipitation and temperature, exacerbating gray mold outbreaks, with yield reductions of over 40% [8].
Furthermore, increased greenhouse cultivation has created warm and humid environments more conducive to fungal diseases, making prevention and control more difficult [9]. Farmers use large amounts of chemical pesticides to combat gray mold, resulting in some agricultural exports failing food pesticide residue tests. Therefore, researching more effective methods to prevent and control gray mold is necessary to meet agricultural production needs and ensure sustainable economic and social development.
In traditional methods for detecting colony growth and prevention effectiveness, statistical analysis techniques such as variance and regression analysis are often used to determine the influence of various factors on colony growth [10]. Using statistical analysis methods typically requires manual recording and analysis of data, as well as making assumptions and preprocessing the recorded data. However, this manual data processing approach has limitations, particularly in capturing non-linear relationships and colony growth under multi-factor synergistic effects, which can lead to inaccurate analysis results. Deep learning can extract relevant features from raw colony and environmental data without manual analysis [11]. It not only saves human and material resources but also ensures the accuracy of data processing and analysis.
Additionally, deep learning models are suitable for processing numerical, textual, and image data, and integrating multiple modal data helps improve model analysis accuracy [12]. Deep learning models can represent complex non-linear relationships, providing an essential avenue for analyzing the inhibitory effects of various environmental factors on Botrytis cinerea growth [13]. Furthermore, deep learning methods excel at handling large-scale data and processing complex conditional factor data, thereby improving the model’s generalization ability [14]. Therefore, this study proposes a deep learning-based method for detecting the prevention and control effectiveness of Botrytis cinerea, with the following main contributions:
  • We propose an efficient deep learning-based method for monitoring Botrytis cinerea growth and evaluating prevention effectiveness. By fusing colony growth environment data and images as network input, the method achieves real-time prediction of the Botrytis cinerea colony area, providing a quantitative basis for assessing prevention effectiveness.
  • We integrate a channel attention mechanism and a multi-head self-attention mechanism into the RepVGG network, combined with a multi-scale feature extractor, further improving the accuracy of colony growth area prediction and laying a solid foundation for the subsequent quantitative analysis of prevention effectiveness.
  • We introduce the Shapley value from game theory to achieve a precise quantitative analysis of the contribution of each environmental variable to Botrytis cinerea growth.

2. Related Work

In recent years, with the development of deep learning technology, various models have been applied to plant disease detection and prevention [15,16,17,18,19]. Hyperspectral imaging technology can collect plant spectral information for disease diagnosis [20,21,22,23]. In plant disease diagnosis, Kundu et al. [24] used deep transfer learning models to identify rice blast and pearl millet rust, achieving 98.78% accuracy in disease classification; Zhao et al. [25] added feature pyramids and attention mechanisms to Faster R-CNN, detecting multiple diseases in greenhouse strawberries with a mAP of 92.18%. To adapt to resource-constrained IoT devices, Ale et al. [26] proposed lightweight DNN techniques for plant disease detection; Tang et al. [27] designed a grape disease diagnosis model based on ShuffleNet V1/V2 and SE attention mechanism, achieving a maximum accuracy of 99.14%; Guo et al. [28] used CST, a convolutional form of Swin Transformer, to identify plant disease categories and severity, achieving 90.9% accuracy in natural environments.
Along with improving various network models, many excellent model improvement methods have emerged in plant disease detection [29,30,31,32,33,34,35]. First, Simonyan et al. [36] developed deep convolutional networks (VGG) for large-scale image recognition, evaluating deepened networks using tiny convolutional filters and employing a single-path architecture to speed up model inference. Subsequently, Ding et al. [37] proposed a simple but effective convolutional neural network architecture called RepVGG based on VGG, characterized by structural re-parameterization that decouples training and inference structures, using multi-branch structures during training and single-branch structures during inference. Therefore, the computational cost of the model’s inference phase is relatively low and suitable for inference tasks on resource-limited devices. Meanwhile, the model’s ability to switch between training and inference structures makes it more flexible and applicable to many scenarios. Thus, Zheng et al. [38] used a lightweight structure re-parameterization model called RepDI, which flexibly combines depth-separable convolution and structural re-parameterization techniques.
On the one hand, structural re-parameterization reduces both the computational complexity and the parameter count of a model, lowering its storage requirements and making it more lightweight. On the other hand, network models augmented with attention mechanisms have achieved better results. As a result, RepDI attains faster inference on CPU devices than comparable models, achieving a prediction accuracy of 98.92% on the Apple Leaf dataset.
Wang et al. [39] proposed an efficient channel attention (ECA) module, pointing out that appropriate cross-channel interaction can maintain network performance while reducing model complexity. Woo et al. [40] proposed a convolutional block attention module (CBAM) that infers attention maps sequentially along the channel and spatial dimensions independently and then multiplies the attention maps by the input feature maps for feature refinement. Zhao et al. [41] developed a vegetable disease recognition model called DTL-SE-ResNet50, which optimizes ResNet50 using the SE attention mechanism before transfer learning training. The addition of the SE attention mechanism increased the vegetable disease recognition model’s focus on critical features of the data, resulting in significant performance improvements in vegetable disease recognition tasks. Under the same experimental conditions, the model’s recognition accuracy reached 97.24%. Furthermore, Li et al. [42] proposed a network model based on spatial convolutional self-attention, using multi-head self-attention to capture long-range feature dependencies in strawberry disease images, thereby improving strawberry disease recognition efficiency and achieving an accuracy of 99.10% after validation on the strawberry disease dataset.
The above research shows that deep learning technology has been widely applied in plant disease detection and prevention, with good results. However, for analyzing the effectiveness of Botrytis cinerea prevention, several key issues still need to be resolved. Firstly, most of the above research focuses on improving model performance metrics and has achieved good results, but research on model interpretability remains insufficient [43]. In practical applications, understanding how a model makes decisions and its underlying reasoning process is crucial for researchers and plant disease protection experts. Secondly, the data types used for network training are limited, and the fusion of information from different data types has not been fully exploited. Previous studies relied on specific datasets, but in natural environments, plant disease manifestations are usually influenced by multiple factors, including temperature, light, and soil nutrient conditions. Therefore, to improve a model's robustness and generalization ability, research methods should introduce more diverse data and consider plant disease manifestations under different conditional factors more comprehensively, so that the model can adapt to complex and changing natural environments through effective information fusion strategies.

3. Materials and Methods

3.1. Experimental Materials

3.1.1. Data Collection

The structure of the apparatus for Botrytis cinerea prevention data acquisition is illustrated in Figure 1. The apparatus comprises four functional units:
  • A base module (Module 1).
  • A colony cultivation observation module (Module 2).
  • An image acquisition module (Module 3).
  • A data processing module (Module 4).
The base module, serving as the main structural component, provides support and enables precise adjustment of equipment positioning during experimentation. It is equipped with a wheel mechanism to ensure the mobility of the entire apparatus.
The colony cultivation observation and image acquisition modules are mounted to the base module via telescopic support rods, allowing for flexible height adjustment according to experimental requirements. The colony cultivation observation module has a specialized enclosure for Botrytis cinerea observation, with core components including a clean bench, supplementary lighting system, photometric sensor, and temperature sensing device. The clean bench accommodates Petri dishes, while the supplementary lighting system enables precise light intensity adjustment and irradiation angle. The photometric sensor monitors light intensity parameters within the observation chamber.
The specifications of each functional module are as follows:
  • The light incubator employs the MCD-B3G model from Shanghai Minquan Instruments Co., Ltd., Shanghai, China.
  • The biochemical cultivation device utilizes the SPX-150-II model from Shanghai Yuejin Medical Equipment Co., Ltd., Shanghai, China.
  • The clean bench implements the SW-CJ-2FD model from Suzhou Antai Air Technology Co., Ltd., Suzhou, China.
  • The temperature sensor employs the Jianda Renke COS-03 model with a measurement precision of 0.1 °C.
The image acquisition module captures Botrytis cinerea colony images under various cultivation conditions and transmits the data to the processing module for analysis. This module utilizes an Annai Technology AN-5502 industrial inspection camera (Shenzhen Anne Technology Co., Ltd., Shenzhen, China), equipped with a 16-megapixel CMOS image sensor (Universal Version), supporting HDMI/USB high-definition interface output. The data processing module employs a computer system configured with an AMD Ryzen 7 5800H @3.2 GHz processor, 16 GB RAM, and NVIDIA GEFORCE RTX3060 graphics processor (Advanced Micro Devices, Inc., Santa Clara, California, USA), primarily for receiving and processing colony images to achieve precise colony contour segmentation.
The experimental strain was isolated from a vineyard in Daxing Town, Qianwei County, Leshan City, Sichuan Province, China. Preliminary experimental preparations included the preparation of Potato Dextrose Agar (PDA) medium with a mass fraction of 2.5%, supplemented with 1% chloramphenicol, followed by moist-heat sterilization at 121 °C for 30 min before use. The Botrytis cinerea strain was inoculated onto the center of 90 mm disposable sterile Petri dishes containing PDA medium on a clean bench and then incubated in a culture chamber for 5 days.
This study employed the growth rate method to evaluate the inhibitory activity of 0.4% Polygonum cuspidatum extract suspension against Botrytis cinerea under various cultivation conditions, aiming to provide a theoretical foundation for its technological application. The experimental procedure incorporated the test agent into the medium at predetermined concentrations. Following medium cooling, colonized agar plugs were extracted using a 4 mm diameter cork borer and transferred to the agent-containing medium, followed by inverted cultivation under specified conditions.
A Botrytis cinerea growth monitoring model was established using light intensity (which affects fungal DNA and secondary metabolism), temperature (optimal growth range 18–25 °C), nutrient composition (which influences host resistance and pathogen survival), and agent concentration (which determines the inhibitory effect) as input variables. The agent concentration gradients are 0.4% Polygonum cuspidatum extract diluted 200-fold (25 ppm), 500-fold (8 ppm), and 800-fold (5 ppm); the light gradients are no light and light (16 L:8 D); the temperature gradients are 15, 20, 25, 30, and 35 °C; and the nutrient composition of the culture medium includes 2.5% PDA, 1.25% PDA, and 0% PDA, with four biological replicates for each experimental treatment.

3.1.2. Data Preprocessing

The dataset constructed for this study comprises training and validation sets, consisting of 700 RGB color images in total, with a random split ratio of 7:3 between training and validation sets. As illustrated in Figure 2, colony growth images were acquired using a specialized Botrytis cinerea collection apparatus while recording critical environmental parameters within the observation chamber, including light intensity, temperature, medium nutrient levels, and agent concentration.
During data preprocessing, the above four environmental factors were utilized as RGB parameters and intensity features of the images. The specific preprocessing workflow is as follows: initially, the collected environmental factor data underwent normalization, mapping all parameter values to the [0, 1] interval; subsequently, the normalized values of light intensity, temperature, and nutrient levels were correspondingly mapped to the red (R), green (G), and blue (B) channels of RGB images, while agent concentration was correlated with image contrast, thereby generating standardized RGB images; finally, the processed image dataset was archived for subsequent analysis and model training.
This preprocessing methodology achieves the visualization of environmental parameters and provides a standardized data foundation for subsequent image analysis and machine learning model construction.
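The channel-mapping step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the function names (`normalize`, `encode_environment`) and the specific contrast transform (a linear stretch about mid-gray scaled by agent concentration) are assumptions; the paper specifies only that light, temperature, and nutrient level map to the R, G, and B channels and that agent concentration is correlated with contrast.

```python
import numpy as np

def normalize(values, lo, hi):
    """Min-max normalize raw sensor readings to the [0, 1] interval."""
    return (np.asarray(values, dtype=float) - lo) / (hi - lo)

def encode_environment(light, temperature, nutrient, concentration, size=(64, 64)):
    """Encode four normalized environmental factors as one RGB image.

    light, temperature, nutrient -> R, G, B channel intensities;
    agent concentration scales contrast about the mid-gray level.
    All inputs are assumed to be already normalized to [0, 1].
    """
    h, w = size
    img = np.empty((h, w, 3), dtype=float)
    img[..., 0] = light        # R channel <- light intensity
    img[..., 1] = temperature  # G channel <- temperature
    img[..., 2] = nutrient     # B channel <- nutrient level
    # Illustrative contrast adjustment: stretch around 0.5 by the agent factor.
    img = np.clip(0.5 + (img - 0.5) * (0.5 + concentration), 0.0, 1.0)
    return img
```

For example, a 25 °C reading on the 15–35 °C gradient normalizes to 0.5 and fills the green channel uniformly.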

3.2. Proposed Method

3.2.1. Model Design

As shown in Figure 3, the Botrytis cinerea control effectiveness analysis and prediction model consists of the RepVGG network [37], a channel attention mechanism combined with a multi-head self-attention module [44,45], and a multi-scale feature extractor [46]. The output of the RepVGG network enters the channel attention mechanism combined with the multi-head self-attention module and then passes into the multi-scale feature extractor, which is composed of multiple convolutional layers. In the feature extraction model, the RepVGG network consists of three convolutional groups in series, each containing a 1 × 1 and a 3 × 3 two-dimensional convolution and a ReLU activation function; the output of the third convolutional group is fed into the multi-head self-attention module fused with the channel attention mechanism.
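The pipeline described above can be sketched in PyTorch. This is a simplified illustration under stated assumptions, not the published SE_MHSARepVGG architecture: the class names, channel widths, head count, and the single linear regressor are all invented for the sketch, and the multi-scale feature extractor is omitted for brevity.

```python
import torch
import torch.nn as nn

class ConvGroup(nn.Module):
    """RepVGG-style training block: parallel 3x3 and 1x1 convs summed, then ReLU.

    (In RepVGG proper, the branches are re-parameterized into a single
    3x3 conv for inference; only the multi-branch form is shown here.)
    """
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.conv3 = nn.Conv2d(cin, cout, 3, stride, 1)
        self.conv1 = nn.Conv2d(cin, cout, 1, stride, 0)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv3(x) + self.conv1(x))

class SE(nn.Module):
    """Squeeze-and-excitation channel attention: reweight channels by a gate."""
    def __init__(self, c, r=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(c, c // r), nn.ReLU(),
            nn.Linear(c // r, c), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x).unsqueeze(-1).unsqueeze(-1)

class SEMHSABackbone(nn.Module):
    """Sketch: three conv groups -> SE gate -> multi-head self-attention -> regressor."""
    def __init__(self, c=32, heads=4):
        super().__init__()
        self.stem = nn.Sequential(
            ConvGroup(3, c, 2), ConvGroup(c, c, 2), ConvGroup(c, c, 2))
        self.se = SE(c)
        self.mhsa = nn.MultiheadAttention(c, heads, batch_first=True)
        self.head = nn.Linear(c, 1)  # predicted colony area (one scalar)

    def forward(self, x):
        f = self.se(self.stem(x))
        seq = f.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        att, _ = self.mhsa(seq, seq, seq)
        return self.head(att.mean(dim=1))
```

A forward pass on a batch of 3-channel images returns one area prediction per image.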
The proposed model’s loss function L consists of a cross-entropy loss, a channel attention loss, a multi-head self-attention loss, and a multi-scale feature extractor loss, combined using the weight coefficients α–δ, as shown in Equation (1).
L = \alpha \left( -\sum_{i} y_{\mathrm{true}}^{i} \log ( y^{i} ) \right) + \beta \left( \sum_{i} ( A_{i} - 1 )^{2} \right) + \gamma \left( \sum_{\mathrm{head}_{1}, \mathrm{head}_{2}} \sum_{i,j} \left( y_{\mathrm{head}_{1}}^{i,j} - y_{\mathrm{head}_{2}}^{i,j} \right)^{2} \right) + \delta \left( \sum_{i} \left( y_{\mathrm{scale}_{1}}^{i} - y_{\mathrm{scale}_{2}}^{i} \right)^{2} \right)
where i and j represent the spatial positions of the input image, head1 and head2 represent different attention heads, y represents the model output, A represents the channel attention weights, and the weight coefficients α-δ are optimized and adjusted during the network training process.
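The four-term loss in Equation (1) can be implemented directly. This is a minimal sketch: the default weight values are illustrative placeholders (the paper tunes α–δ during training), and the tensor shapes assumed in the docstring are not specified by the source.

```python
import torch

def combined_loss(y_pred, y_true, attn, head1, head2, scale1, scale2,
                  alpha=1.0, beta=0.1, gamma=0.1, delta=0.1):
    """Weighted sum of the four loss terms in Equation (1).

    Assumed shapes (illustrative): y_pred/y_true are (B, K) class
    probabilities and one-hot targets; attn holds the channel attention
    weights A_i; head1/head2 are outputs of two attention heads;
    scale1/scale2 are two multi-scale feature maps of equal shape.
    """
    # Cross-entropy term: -sum_i y_true^i log(y^i), averaged over the batch.
    ce = -(y_true * torch.log(y_pred.clamp_min(1e-8))).sum(dim=1).mean()
    # Channel attention term: push attention weights toward 1.
    attn_loss = ((attn - 1.0) ** 2).sum()
    # Multi-head term: squared disagreement between two heads.
    head_loss = ((head1 - head2) ** 2).sum()
    # Multi-scale term: squared disagreement between two scales.
    scale_loss = ((scale1 - scale2) ** 2).sum()
    return alpha * ce + beta * attn_loss + gamma * head_loss + delta * scale_loss
```

In practice α–δ would be treated as hyperparameters and adjusted during training, as the text notes.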

3.2.2. Colony Area Calculation Method

Based on the camera imaging principle, the actual sizes of the Botrytis cinerea colony and of the Petri dish itself can be obtained independently, as shown in Figure 4. Equations (2) and (3) show the precise calculation process.
R = \frac{L \cdot R_{1}}{f}
R_{1} = \frac{R \cdot f}{L}
where R represents the radius of the Petri dish, f denotes the camera’s focal length, and L indicates the distance between the camera and the Petri dish. Using these established or quantifiable variables, the radius R1 of the Petri dish in the captured Botrytis cinerea images is determined. Subsequently, a U-Net segmentation network model is employed to identify and delineate the boundaries of the Petri dish and the Botrytis cinerea colony within the cultivation images. The radius ratio λ between the Petri dish and the colony can be calculated using the segmented images obtained through U-Net. Consequently, Equation (4) determines the Botrytis cinerea colony size R2 within the segmented image.
R_{2} = \lambda R_{1}
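The geometry of Equations (3) and (4) reduces to two one-line functions. This is a sketch with hypothetical function names; it assumes the pinhole-camera relation stated above (image-plane size = physical size × f / L) and a radius ratio λ measured from the U-Net segmentation masks.

```python
def petri_radius_in_image(R, f, L):
    """Image-plane radius R1 of a dish with physical radius R (Equation (3)).

    f is the camera focal length and L the camera-to-dish distance,
    expressed in the same units as R.
    """
    return R * f / L

def colony_radius_in_image(R, f, L, lam):
    """Colony radius R2 in the captured image (Equation (4)).

    lam is the colony-to-dish radius ratio obtained from the U-Net
    segmentation of the cultivation image.
    """
    return lam * petri_radius_in_image(R, f, L)
```

For instance, a 45 mm dish imaged with f = 10 mm at L = 100 mm projects to a 4.5 mm image-plane radius; a colony occupying λ = 0.2 of the dish then has an image-plane radius of 0.9 mm.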

3.2.3. Prevention Effectiveness Analysis Method

This study employs the Shapley value algorithm from game theory [47], known as Shapley Additive exPlanations (SHAP) in machine learning, which aims to improve neural network interpretability. For the four experimental input parameters, light intensity, temperature, nutrient composition, and agent concentration, it investigates their effects on colony area under various conditions and outputs an analysis graph of Botrytis cinerea growth control efficacy. The calculation process of the SHAP value is shown in Equation (5).
\varphi(x) = \sum_{S \subseteq N \setminus \{x\}} \frac{|S|! \, (|N| - |S| - 1)!}{|N|!} \times \left[ v(S \cup \{x\}) - v(S) \right]
where φ(x) represents the influence factor of the conditional factor x on the growth status of Botrytis cinerea; N = {1, 2, …, n} is the set of all n conditional factors; S is a subset of N that does not include factor x; |S| is the number of factors in S; |N| is the total number of factors; v(S) is the colony growth size under the factor set S; and v(S∪{x}) is the colony growth size after adding the conditional factor x to S.
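Equation (5) can be evaluated exactly for a small factor set such as the four used here. The sketch below is a direct transcription of the formula (not the SHAP library's approximation); the characteristic function `v` is assumed to map a set of active factors to the observed colony growth size.

```python
from itertools import combinations
from math import factorial

def shapley_value(x, factors, v):
    """Exact Shapley value of factor x per Equation (5).

    factors: the full list N of conditional factors; v: characteristic
    function mapping a frozenset of factors to the colony growth size
    observed under that factor combination.
    """
    n = len(factors)
    others = [f for f in factors if f != x]
    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            S = frozenset(subset)
            # Weight |S|! (|N| - |S| - 1)! / |N|! from Equation (5).
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            # Marginal contribution of x to coalition S.
            phi += weight * (v(S | {x}) - v(S))
    return phi
```

For an additive toy game where each factor contributes a fixed amount, the Shapley value of a factor recovers exactly its own contribution, which is a useful sanity check.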

4. Results

4.1. Experimental Environment

Hardware configuration: the processor is AMD Ryzen 7 5800H @3.2 GHz, memory is 16 GB, and the graphics processing unit (GPU) is NVIDIA GEFORCE RTX3060. Software configuration: the deep learning framework is Pytorch 1.12.1, and the programming language is Python 3.7. Initial environment variable settings:
  • The number of training epochs is 200, batch_size = 32.
  • Output results are displayed once every 20 training iterations.
  • The learning rate of the Adam optimizer is set to 0.000001.

4.2. Evaluation Metrics

The Loss Value serves as a crucial metric for evaluating machine learning model performance, quantifying the degree of deviation between model predictions and ground truth labels. As shown in Equation (6), this study employs Mean Square Error (MSE) as the loss function.
Given that this research aims to predict the inhibitory effects on Botrytis cinerea growth under various environmental parameters, which constitutes a typical regression task, Mean Absolute Error (MAE) is adopted as the primary evaluation metric (as shown in Equation (7)). MAE effectively reflects the model’s predictive performance in regression tasks, as it not only intuitively represents the average deviation between predicted and actual values but also exhibits lower sensitivity to outliers, making it particularly suitable for regression model assessment in this study.
\mathrm{Loss}_{\mathrm{MSE}} = \frac{\sum_{i=1}^{n} (y_{i} - \bar{y}_{i})^{2}}{n}
\mathrm{MAE} = \frac{\sum_{i=1}^{n} |y_{i} - \bar{y}_{i}|}{n}
where y_i represents the actual value, \bar{y}_i represents the predicted value, and n represents the number of samples in the experiment.
In addition, this experiment also uses accuracy as a critical indicator for evaluating the model’s predictive effect. Accuracy describes the degree of agreement between the model’s predicted Botrytis cinerea growth area and the actual area, as shown in Equation (8): it is computed from the relative deviation between the predicted and actual colony areas. This indicator was chosen to ensure the objectivity and accuracy of the experimental evaluation, thereby better reflecting the model’s practical application effect.
\mathrm{Accuracy} = \frac{1}{n} \sum_{i=1}^{n} \left( 1 - \frac{|A_{\mathrm{predicted},i} - A_{\mathrm{actual},i}|}{A_{\mathrm{actual},i}} \right) \times 100\%
where Apredicted,i represents the predicted area of the i-th data group; Aactual,i represents the actual area of the i-th data group; and n is the total number of data points.
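Equations (6)–(8) translate directly into NumPy. This is a straightforward transcription for reference; the function names are ours.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error, Equation (6)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error, Equation (7)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def area_accuracy(a_actual, a_pred):
    """Mean relative-agreement accuracy in percent, Equation (8)."""
    a_actual, a_pred = np.asarray(a_actual, float), np.asarray(a_pred, float)
    return float(np.mean(1.0 - np.abs(a_pred - a_actual) / a_actual) * 100.0)
```

For example, predicted areas of 9 and 22 against actual areas of 10 and 20 give relative deviations of 10% each, hence an accuracy of 90%.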

4.3. Experimental Results

This study selected the loss function (Loss) and Mean Absolute Error (MAE) as the primary evaluation indicators and conducted a systematic analysis of the experimental results to verify the effectiveness and accuracy of the proposed deep learning model for detecting the prevention and control effect of Botrytis cinerea.
As illustrated in Figure 5a, the loss convergence curves of the proposed model on both the training and validation sets exhibit the following characteristics. During the initial 17 iterations, loss values for both sets decrease sharply, followed by a steady downward trend. Notably, the loss for both sets ultimately approaches zero, indicating strong consistency between the model’s training and validation phases. To ensure experimental comparability, although early stopping thresholds were established for various evaluation metrics during training, all models were uniformly trained for 200 epochs. This standardization eliminates potential experimental bias arising from variations in training iterations, ensuring a fair comparison across network models.
Figure 5b further presents the experimental network model’s MAE curves on the training and validation sets. The study results show that in the first 40 iterations, the MAE values of the training and validation sets experienced a significant initial decrease and continued to decrease in subsequent iterations. It is particularly noteworthy that as the number of iterations increased, the MAE values of the training and validation set finally reached a convergence state at the 180th iteration. The above results confirm the stability and reliability of the model and provide essential references for determining the optimal training period of the model.
To further verify the accuracy of the model’s predictions, this study used statistical methods to compare the measured data with the model’s predicted results under 50 sample conditions. Specifically, we compared the actual measured colony area, colony diameter, and prevention effectiveness of Botrytis cinerea with the corresponding values predicted by the model. Figure 6 visually presents this comparison as scatter plots, with the horizontal axis representing measured values and the vertical axis representing predicted values. Each data point’s position is determined jointly by the measured and predicted values. Notably, the points exhibit a clear linear trend, which, after fitting, forms the gray line in the figure. This line represents the linear relationship between measured and predicted values, and its degree of alignment with the diagonal is a crucial indicator of prediction accuracy: theoretically, the closer the fitted line lies to the diagonal, the better the model fits the measured data. Statistical analysis shows a significant linear relationship between the model’s predicted and actual values, with coefficients of determination (R2) exceeding 0.95. This high correlation validates the prediction accuracy of the proposed model and further corroborates the conclusions drawn from the loss and MAE indicators.
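The reported coefficient of determination can be reproduced with a few lines of NumPy. This is a generic R² computation, not the authors' analysis script; the function name is ours.

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination for a measured-vs-predicted comparison."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    ss_res = np.sum((measured - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((measured - measured.mean()) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```

A perfect prediction yields R² = 1, and any deviation from the measured values lowers it, so values above 0.95 indicate a close fit.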

4.4. Model Prevention Effectiveness Analysis Results

After validating the overall accuracy of the model’s predictions, this study further explored the key factors affecting the growth control of Botrytis cinerea and their relative importance. To this end, we selected four groups of samples (sequentially adjusting one feature value among light intensity, temperature, nutrient composition, or agent concentration). We applied the SHAP value analysis method to interpret the model’s predicted effectiveness in controlling Botrytis cinerea growth. Figure 7a presents the quantitative analysis results of the impact of different conditional factors on colony growth. The horizontal axis represents the selected sample group number, while the vertical axis represents the absolute value of the SHAP value. The absolute magnitude of the SHAP value directly reflects the degree of influence of each conditional factor on colony growth. The analysis results show that sample 4 has the highest SHAP value, indicating that under the condition combination represented by sample 4, the selected factor has the most significant impact on colony growth. This finding provides an essential empirical basis for optimizing Botrytis cinerea prevention and control strategies.
Figure 7b displays the SHAP Force Plot, which further details the specific contributions of each feature to the model’s prediction results within a single sample. In this visualization, different colors represent the direction and intensity of feature influence: red indicates that an increase in feature value will raise the model output (i.e., promote colony growth), while blue indicates that an increase in feature value will lower the model output (i.e., inhibit colony growth). Based on SHAP value analysis indicating that light and temperature significantly affect Botrytis cinerea growth, optimal disease control can be achieved in production by regulating greenhouse light conditions and temperature combined with appropriate concentrations of Polygonum cuspidatum extract.

5. Discussion

5.1. Ablation Analysis

This study conducted ablation experiments to examine the impact of fusing different network variants or modules with the RepVGG network model [37] on the accuracy of analyzing Botrytis cinerea control. The experiments were mainly conducted from two aspects:
(1) Network structure variants: different structures from RepVGG_A0 to B2 [37] were adopted to explore the impact of network depth and complexity on performance.
(2) Attention modules: attention mechanisms such as ECA [39], CBAM [40], Self-Attention [44], MultiHead SelfAttention, and SE_MHSA were introduced to evaluate their enhancement effect on model performance.
Figure 8a shows the training Loss curves of various network variants. The results indicate that network models combining CBAM and SE_MHSA_block exhibited lower Loss values and prediction errors. The method proposed in this study performed best regarding convergence stability and Loss value.
Figure 8b presents the mean absolute error of different network variants after 19 training rounds. The results show that adding attention modules reduced the absolute training error and improved prediction accuracy. Among them, the combination of CBAM and SE_MHSA_block performed exceptionally well. Self-Attention and MultiHead SelfAttention modules demonstrated faster convergence speeds in the early stages of training.
Overall, the method proposed in this study outperformed other network variants in terms of average training error, stability, and initial convergence speed, confirming its accuracy in analyzing the control effect of Botrytis cinerea.
This study selected six representative RepVGG network variants that differ in the number of channels in residual blocks, resulting in different network depths and widths, and evaluated the performance of different RepVGG network structure variants in predicting Botrytis cinerea growth.
Figure 9 presents a bar graph comparing the average training absolute error among six network variant structures of the improved SE_MHSARepVGG network model (fusing SE_MHSA module and multi-scale feature extractor). The experiment used average training absolute error as the evaluation metric, and after 200 rounds of training, the following results were obtained:
(1) The B1 structure performed best, with an average training absolute error close to 0.012, showing high prediction accuracy.
(2) The A0 and A2 structures achieved similar training results.
(3) The remaining variants showed significant performance differences.
Weighing these results against the channel counts of the residual blocks, and balancing performance with computational efficiency, this study ultimately chose RepVGG_A0 as the primary network model.
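The width differences behind these variants can be reproduced from the per-stage multipliers (a, b) reported in the RepVGG paper [37]; the helper below is an illustrative sketch of that scaling rule (the multiplier values come from [37], while the function itself is ours):

```python
def repvgg_widths(a, b):
    """Per-stage output channels for a RepVGG variant, following Ding et al. [37]:
    base widths [64, 64, 128, 256, 512], with the first four stages scaled by `a`
    (stage 0 capped at 64 channels) and the final stage scaled by `b`."""
    return [min(64, int(64 * a)), int(64 * a), int(128 * a),
            int(256 * a), int(512 * b)]

# Width multipliers (a, b) for the six variants compared in Figure 9,
# as reported in the RepVGG paper [37].
variants = {"A0": (0.75, 2.5), "A1": (1.0, 2.5), "A2": (1.5, 2.75),
            "B0": (1.0, 2.5), "B1": (2.0, 4.0), "B2": (2.5, 5.0)}

a0_channels = repvgg_widths(*variants["A0"])
b1_channels = repvgg_widths(*variants["B1"])
```

This makes the trade-off in the text concrete: B1 is roughly twice as wide as A0 at every stage, which explains both its lower training error and its higher computational cost.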

5.2. Model Performance Discussion

To evaluate the performance of the improved SE_MHSARepVGG model proposed in this study, we selected five typical deep learning models for comparison: VGG16, VGG19 [36], ResNet50 [41], ShuffleNet V2 [27], and EfficientNet [48]. Figure 10a shows each model's loss function (Loss) curves. All models stabilized at Loss values below 0.01, demonstrating good performance, but the proposed method reached a Loss of about 0.002, the lowest of the group. Figure 10b presents the Mean Absolute Error (MAE) curves: most models stabilized at MAE values below 0.1, while the MAE of our method is about 0.02 lower than that of the other methods, further confirming its higher prediction accuracy. In terms of convergence speed, VGG16 was the slowest, whereas the proposed method converged relatively quickly. Moreover, our model showed the smallest curve fluctuations during training, reflecting its stability.
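The two curves in Figure 10 correspond to standard regression metrics; the sketch below shows how they are computed, using invented normalized colony-area values purely for illustration:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean squared error, the regression Loss tracked in Figure 10a."""
    return float(np.mean((y_pred - y_true) ** 2))

def mae(y_pred, y_true):
    """Mean absolute error, the metric tracked in Figure 10b."""
    return float(np.mean(np.abs(y_pred - y_true)))

# Toy normalized colony-area values (hypothetical, for illustration only)
y_true = np.array([0.20, 0.35, 0.50])
y_pred = np.array([0.22, 0.33, 0.49])

loss = mse_loss(y_pred, y_true)  # ≈ 0.0003
err = mae(y_pred, y_true)        # ≈ 0.0167
```

Because the colony area is normalized, an MAE of 0.0148 (as reported for the proposed model) corresponds to an average error of about 1.5% of the plate area.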
Based on a comprehensive evaluation of Loss value, MAE, convergence rate, and model stability, the proposed improved SE_MHSARepVGG model demonstrates superior performance on the training set. Compared with existing studies, such as Giakoumoglou et al. [49], who employed a U-Net++ model with a MobileViT-S encoder, our research achieves higher detection precision and recall in Botrytis cinerea detection tasks. Although Meng et al. [50] used multispectral imaging and phenotypic analysis platforms to monitor the Chlorophyll Index and the modified Anthocyanin Reflectance Index for Botrytis cinerea disease detection, their study did not adequately consider the impact of environmental factors on its growth and development.
Furthermore, this study differs significantly from existing work in methodology, dataset construction, and experimental results, providing new research perspectives for Botrytis cinerea disease prevention. It shows considerable potential in agricultural applications: first, the deep learning-based monitoring method enables precise prediction of disease development, supporting early warning and control decisions in agricultural production; second, by quantifying the impact of environmental factors on pathogen growth with the SHAP value algorithm, it provides a scientific basis for developing integrated disease control strategies.
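The SHAP attribution described above rests on the classical Shapley value [47]. The self-contained sketch below computes exact Shapley values for a toy additive value function (the feature names and contributions are invented for illustration); it also makes visible why the cost grows exponentially with the number of features, the complexity issue noted later in the text:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all 2^n coalitions.

    `value_fn` maps a frozenset of feature names to a model output
    (here a toy stand-in for predicted colony area). The enumeration
    is what makes exact SHAP computation expensive for many features.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for coal in combinations(others, k):
                s = frozenset(coal)
                # Classical Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive value function with hypothetical per-factor contributions
contrib = {"temperature": 0.4, "humidity": 0.2, "fungicide": -0.3}
value = lambda s: sum(contrib[f] for f in s)
phi = shapley_values(list(contrib), value)
```

For an additive value function the Shapley value of each feature recovers its individual contribution exactly; real models are not additive, which is precisely why the coalition-weighted average is needed.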
However, this method also faces challenges: the adaptability and stability of the model in actual field environments require further validation, and the technology demands substantial hardware resources, which may hinder deployment in grassroots agricultural extension. Addressing these issues will be an important direction for future research.

6. Conclusions

This study explored in depth the effects of various environmental factors and drug treatments on the colony growth of Botrytis cinerea, aiming to predict the prevention and control effects of gray mold under different conditions. We proposed a precise detection method based on an improved SE_MHSARepVGG network model, which not only accurately predicts the growth status of Botrytis cinerea under various conditions but also analyzes the contribution of different factors to the control effect through the SHAP value algorithm. Experimental results show that our model outperforms traditional models such as VGG, ResNet, and EfficientNet in prediction accuracy, convergence speed, and stability. This achievement provides a reliable theoretical basis and technical support for optimizing drug-based control strategies for gray mold, particularly in evaluating control efficacy against gray mold in grapes through deep learning. The method also facilitates targeted drug design in plant-derived research institutions and reduces drug development costs.
However, this study has several limitations. First, the sample size and diversity should be further increased to enhance the model's generalization ability. Second, SHAP value calculations suffer from computational complexity that grows exponentially with the number of features; in dynamic agricultural production environments, this may limit the ability to capture nonlinear interactions between features and the real-time accuracy of the results. Future research will therefore focus on (1) expanding the dataset's scale and diversity by collecting more Botrytis cinerea growth images under different disease severity levels and environmental conditions, to improve the model's generalization ability; and (2) optimizing the SHAP value algorithm and exploring lightweight feature attribution methods to improve computational efficiency in complex agricultural environments and better capture dynamic changes and nonlinear interactions of environmental factors.

Author Contributions

Conceptualization, W.Y.; methodology, W.Y. and I.G.; software, X.Z.; validation, S.K. and S.D.; formal analysis, W.Y., X.Z. and S.K.; investigation, W.Y.; resources, W.Y. and X.C.; data curation, X.Z.; writing—original draft, W.Y. and X.Z.; writing—review and editing, W.Y. and X.C.; visualization, X.Z. and S.K.; supervision, W.Y. and I.G.; project administration, W.Y.; funding acquisition, W.Y. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the National Natural Science Foundation of China (Grant Nos. 62462035 and 61762048), the Natural Science Foundation of Jiangxi Province (Grant No. 20212BAB202015), and the Jiangxi Provincial Special Program 03 and 5G Projects (Grant No. 20232ABC03A18).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Yin, Z.; Liu, J.; Zhao, H.; Chu, X.; Liu, H.; Ding, X.; Lu, C.; Wang, X.; Zhao, X.; Li, Y. SlMYB1 regulates the accumulation of lycopene, fruit shape, and resistance to Botrytis cinerea in tomato. Hortic. Res. 2023, 10, uhac282.
2. Wang, H.-C.; Li, L.-C.; Cai, B.; Cai, L.-T.; Chen, X.-J.; Yu, Z.-H.; Zhang, C.-Q. Metabolic phenotype characterization of Botrytis cinerea, the causal agent of gray mold. Front. Microbiol. 2018, 9, 470.
3. Bi, K.; Liang, Y.; Mengiste, T.; Sharon, A. Killing softly: A roadmap of Botrytis cinerea pathogenicity. Trends Plant Sci. 2023, 28, 211–222.
4. Breia, R.; Conde, A.; Badim, H.; Fortes, A.M.; Gerós, H.; Granell, A. Plant SWEETs: From sugar transport to plant–pathogen interaction and more unexpected physiological roles. Plant Physiol. 2021, 186, 836–852.
5. Steel, C.C.; Blackman, J.W.; Schmidtke, L.M. Grapevine bunch rots: Impacts on wine composition, quality, and potential procedures for the removal of wine faults. J. Agric. Food Chem. 2013, 61, 5189–5206.
6. Mincuzzi, A.; Ippolito, A. Pomegranate: Postharvest Fungal Diseases and Control; IntechOpen: Rijeka, Croatia, 2023.
7. Liu, J.; Zhu, X.; Chen, X.; Liu, Y.; Gong, Y.; Yuan, G.; Liu, J.; Chen, L. Defense and inhibition integrated mesoporous nanoselenium delivery system against tomato gray mold. Environ. Sci. Nano 2020, 7, 210–227.
8. Rienth, M.; Vigneron, N.; Walker, R.P.; Castellarin, S.D.; Sweetman, C.; Burbidge, C.A.; Bonghi, C.; Famiani, F.; Darriet, P. Modifications of grapevine berry composition induced by main viral and fungal pathogens in a climate change scenario. Front. Plant Sci. 2021, 12, 717223.
9. Singh, R.; Tiwari, S.; Singh, M.; Singh, A.; Singh, A. Important diseases of greenhouse crops and their integrated management: A review. J. Entomol. Zool. Stud. 2020, 8, 962–970.
10. Anyansi, C.; Straub, T.J.; Manson, A.L.; Earl, A.M.; Abeel, T. Computational methods for strain-level microbial detection in colony and metagenome sequencing data. Front. Microbiol. 2020, 11, 1925.
11. Mishra, G.; Panda, B.K.; Ramirez, W.A.; Jung, H.; Singh, C.B.; Lee, S.H.; Lee, I. Research advancements in optical imaging and spectroscopic techniques for nondestructive detection of mold infection and mycotoxins in cereal grains and nuts. Compr. Rev. Food Sci. Food Saf. 2021, 20, 4612–4651.
12. Iqbal, T.; Qureshi, S. The survey: Text generation models in deep learning. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 2515–2528.
13. Xiaolin, X.; Xiaozhi, L.; Guoping, H.; Hongwei, L.; Jinkuo, G.; Xiyun, B.; Zhen, T.; Xiaofang, M.; Yanxia, L.; Na, X. Overfit deep neural network for predicting drug-target interactions. iScience 2023, 26, 107646.
14. Li, Y.; Zhan, X.; Liu, S.; Lu, H.; Jiang, R.; Guo, W.; Chapman, S.; Ge, Y.; Solan, B.; Ding, Y. Self-supervised plant phenotyping by combining domain adaptation with 3D plant model simulations: Application to wheat leaf counting at seedling stage. Plant Phenomics 2023, 5, 41.
15. Wani, J.A.; Sharma, S.; Muzamil, M.; Ahmed, S.; Sharma, S.; Singh, S. Machine learning and deep learning based computational techniques in automatic agricultural diseases detection: Methodologies, applications, and challenges. Arch. Comput. Methods Eng. 2022, 29, 641–677.
16. Sai Reddy, B.; Neeraja, S. Plant leaf disease classification and damage detection system using deep learning models. Multimed. Tools Appl. 2022, 81, 24021–24040.
17. Chug, A.; Bhatia, A.; Singh, A.P.; Singh, D. A novel framework for image-based plant disease detection using hybrid deep learning approach. Soft Comput. 2023, 27, 13613–13638.
18. Shoaib, M.; Hussain, T.; Shah, B.; Ullah, I.; Shah, S.M.; Ali, F.; Park, S.H. Deep learning-based segmentation and classification of leaf images for detection of tomato plant disease. Front. Plant Sci. 2022, 13, 1031748.
19. Anandhakrishnan, T.; Jaisakthi, S. Deep Convolutional Neural Networks for image based tomato leaf disease detection. Sustain. Chem. Pharm. 2022, 30, 100793.
20. Zhang, C.; Zhou, L.; Zhao, Y.; Zhu, S.; Liu, F.; He, Y. Noise reduction in the spectral domain of hyperspectral images using denoising autoencoder methods. Chemom. Intell. Lab. Syst. 2020, 203, 104063.
21. Golhani, K.; Balasundram, S.K.; Vadamalai, G.; Pradhan, B. A review of neural networks in plant disease detection using hyperspectral data. Inf. Process. Agric. 2018, 5, 354–371.
22. Xie, C.; Yang, C.; He, Y. Hyperspectral imaging for classification of healthy and gray mold diseased tomato leaves with different infection severities. Comput. Electron. Agric. 2017, 135, 154–162.
23. Zhu, H.; Chu, B.; Zhang, C.; Liu, F.; Jiang, L.; He, Y. Hyperspectral imaging for presymptomatic detection of tobacco disease with successive projections algorithm and machine-learning classifiers. Sci. Rep. 2017, 7, 4125.
24. Kundu, N.; Rani, G.; Dhaka, V.S.; Gupta, K.; Nayak, S.C.; Verma, S.; Ijaz, M.F.; Woźniak, M. IoT and interpretable machine learning based framework for disease prediction in pearl millet. Sensors 2021, 21, 5386.
25. Zhao, S.; Liu, J.; Wu, S. Multiple disease detection method for greenhouse-cultivated strawberry based on multiscale feature fusion Faster R_CNN. Comput. Electron. Agric. 2022, 199, 107176.
26. Ale, L.; Sheta, A.; Li, L.; Wang, Y.; Zhang, N. Deep learning based plant disease detection for smart agriculture. In Proceedings of the 2019 IEEE Globecom Workshops (GC Wkshps), Waikoloa, HI, USA, 9–13 December 2019; pp. 1–6.
27. Tang, Z.; Yang, J.; Li, Z.; Qi, F. Grape disease image classification based on lightweight convolution neural networks and channelwise attention. Comput. Electron. Agric. 2020, 178, 105735.
28. Guo, Y.; Lan, Y.; Chen, X. CST: Convolutional Swin Transformer for detecting the degree and types of plant diseases. Comput. Electron. Agric. 2022, 202, 107407.
29. Sutaji, D.; Yıldız, O. LEMOXINET: Lite ensemble MobileNetV2 and Xception models to predict plant disease. Ecol. Inform. 2022, 70, 101698.
30. Jin, H.; Li, Y.; Qi, J.; Feng, J.; Tian, D.; Mu, W. GrapeGAN: Unsupervised image enhancement for improved grape leaf disease recognition. Comput. Electron. Agric. 2022, 198, 107055.
31. Li, M.; Zhou, G.; Chen, A.; Yi, J.; Lu, C.; He, M.; Hu, Y. FWDGAN-based data augmentation for tomato leaf disease identification. Comput. Electron. Agric. 2022, 194, 106779.
32. Fan, X.; Luo, P.; Mu, Y.; Zhou, R.; Tjahjadi, T.; Ren, Y. Leaf image based plant disease identification using transfer learning and feature fusion. Comput. Electron. Agric. 2022, 196, 106892.
33. Yi, W.; Zhang, L.; Xu, Y.; Cheng, X.; Chen, T. MIDF-DMAP: Multimodal information dynamic fusion for drug molecule activity prediction. Expert Syst. Appl. 2025, 260, 125403.
34. Yi, W.; Xia, S.; Kuzmin, S.; Gerasimov, I.; Cheng, X. YOLOv7-KDT: An ensemble model for pomelo counting in complex environment. Comput. Electron. Agric. 2024, 227, 109469.
35. Yi, W.; Zhang, X.; Dai, S.; Kuzmin, S.; Gerasimov, I.; Cheng, X. MV-SSRP: Machine Vision Approach for Stress–Strain Measurement in Rice Plants. Agronomy 2024, 14, 1443.
36. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
37. Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. RepVGG: Making VGG-style convnets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13733–13742.
38. Zheng, J.; Li, K.; Wu, W.; Ruan, H. RepDI: A light-weight CPU network for apple leaf disease identification. Comput. Electron. Agric. 2023, 212, 108122.
39. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542.
40. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
41. Zhao, X.; Li, K.; Li, Y.; Ma, J.; Zhang, L. Identification method of vegetable diseases based on transfer learning and attention mechanism. Comput. Electron. Agric. 2022, 193, 106703.
42. Li, G.; Jiao, L.; Chen, P.; Liu, K.; Wang, R.; Dong, S.; Kang, C. Spatial convolutional self-attention-based transformer module for strawberry disease identification under complex background. Comput. Electron. Agric. 2023, 212, 108121.
43. Thakur, P.S.; Chaturvedi, S.; Khanna, P.; Sheorey, T.; Ojha, A. Vision transformer meets convolutional neural network for plant disease classification. Ecol. Inform. 2023, 77, 102245.
44. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
45. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: New York, NY, USA, 2017; Volume 30.
46. Li, X.; Zhang, W.; Ding, Q. Deep learning-based remaining useful life estimation of bearings using multi-scale feature extraction. Reliab. Eng. Syst. Saf. 2019, 182, 208–218.
47. Winter, E. The Shapley value. In Handbook of Game Theory with Economic Applications; Elsevier: Amsterdam, The Netherlands, 2002; Volume 3, pp. 2025–2054.
48. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
49. Giakoumoglou, N.; Kalogeropoulou, E.; Klaridopoulos, C.; Pechlivani, E.M.; Christakakis, P.; Markellou, E.; Frangakis, N.; Tzovaras, D. Early detection of Botrytis cinerea symptoms using deep learning multi-spectral image segmentation. Smart Agric. Technol. 2024, 8, 100481.
50. Meng, L.; Audenaert, K.; Van Labeke, M.-C.; Höfte, M. Detection of Botrytis cinerea on strawberry leaves upon mycelial infection through imaging technique. Sci. Hortic. 2024, 330, 113071.
Figure 1. Data collection device for the control efficacy of Botrytis cinerea.
Figure 2. Semantic segmentation of Botrytis cinerea colonies.
Figure 3. Network for detecting Botrytis cinerea prevention results.
Figure 4. Calculation of Botrytis cinerea colony area.
Figure 5. Proposed network model training and validation results. (a) Loss value; (b) MAE value.
Figure 6. Results of statistical analysis.
Figure 7. Analysis of Botrytis cinerea control efficacy. (a) Impact of varying conditions on colony growth; (b) Impact of single sample on the proposed model’s performance.
Figure 8. Training Loss results of different improved RepVGG networks. (a) Loss value; (b) MAE value.
Figure 9. Comparison of training average absolute errors for various network architecture variants.
Figure 10. Comparison of different network models. (a) Loss value; (b) MAE value.