Corn has the highest production of any crop worldwide and is an important cereal, feed, and industrial raw material [
1]. Throughout its growth stages, corn is prone to various diseases, which can severely degrade grain quality and cause substantial yield losses. One common disease is northern corn leaf blight (NCLB), caused by the fungus Exserohilum turcicum. It primarily damages the leaves and, in severe cases, can also affect the sheaths and bracts. The disease usually starts on the lower leaves and gradually spreads upward, potentially engulfing the entire plant. NCLB is one of the most important diseases of corn and occurs in corn-growing regions worldwide. In outbreak years, it can reduce yield by 15–20%, and in severe cases by more than 50% [
1]. The occurrence and prevalence of this disease are influenced by a combination of factors, including the resistance of inbred lines, crop rotation systems, climatic conditions, and cultivation practices. Timely and accurate diagnosis of crop diseases is therefore vital for the development of sustainable agriculture. Currently, the identification of corn leaf diseases relies mainly on manual inspection and the planting experience of farmers, which is highly subjective and, given the large number of disease types, often fails to identify diseases rapidly and accurately. Hence, achieving rapid, efficient, and intelligent identification of corn diseases is of great importance [
2]. Computerized systems outperform manual methods in detecting and pinpointing corn leaf diseases, allowing for swift, targeted interventions that bolster crop yield and national food security [
3].
Conventional image segmentation techniques, such as threshold segmentation, use color and texture characteristics to distinguish diseased regions from the background. Every pixel in the image is compared against a threshold: pixels with gray values above the threshold are assigned to the disease class, that is, to regions exhibiting visual signs of disease such as discoloration, irregular shapes, or texture changes, which are the primary targets of diagnosis in plant pathology. Pixels with gray values below the threshold are assigned to the background or healthy-tissue class, which covers the normal green tissue of the leaf as well as non-leaf elements in the background, such as soil, other plants, or the imaging environment itself. By modifying VI and Otsu thresholds, Talukdar [
3] proposed a technique for effective, real-time segmentation of diseased cabbage leaves. To obtain complete corn leaf images, Tong [
4] used the Otsu method together with OpenCV morphological operations and transformations to outline the contours of healthy corn leaves, large spot lesions, corn rust, and corn gray spot, and then used these contours to compute the difference set between the corn leaves and the background. Traditional image segmentation techniques require high image quality; when external factors degrade the image, the recognition results become poor or even incorrect. Consequently, the robustness and generalizability of these methods are inadequate, and they cannot guarantee accuracy in real-world applications.
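To make the thresholding idea above concrete, the following minimal Python sketch applies Otsu's method with OpenCV and cleans the resulting mask with morphological opening and closing. It is an illustrative example under assumed conditions (lesions darker than healthy tissue in the gray channel, a hypothetical file name), not the exact procedure used in the cited works.

```python
# Minimal sketch of a classical pipeline: Otsu thresholding followed by
# morphological clean-up to separate lesion-like regions from the background.
import cv2
import numpy as np

def segment_lesions_otsu(image_path: str) -> np.ndarray:
    """Return a binary mask where 255 marks candidate diseased pixels."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)

    # Work on a single gray channel; lesions are assumed to be darker than
    # healthy green tissue, so the gray-level histogram is roughly bimodal.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Otsu picks the threshold that maximizes between-class variance;
    # THRESH_BINARY_INV marks the darker (assumed diseased) pixels as 255.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Opening removes isolated noise pixels; closing fills small holes
    # inside lesion regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Example usage (hypothetical file name):
# lesion_mask = segment_lesions_otsu("corn_leaf.jpg")
```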
In intelligent crop disease identification based on images, leaf image segmentation is a key step [
5]. To improve the accuracy and efficiency of intelligent crop disease detection, numerous scholars have conducted research and made improvements [
6,
7]. With the advancement of machine learning, numerous researchers have begun applying it to disease spot segmentation. A crop disease detection and classification model, EDLFM-RPD, was developed by Almasoud [
6]. EDLFM-RPD applies median filtering and k-means segmentation as preprocessing to locate diseased regions, then combines Inception-based deep features with hand-crafted gray-level co-occurrence matrix features, and finally performs classification with an FSVM. Ambarwari [
7] identified plant species with an accuracy of 82.67% using an RBF-kernel support vector machine (SVM). Machine learning techniques can yield good segmentation results from small sample sizes, but they are difficult to implement and require several stages of image preprocessing. Machine learning techniques for image-based disease detection also face issues unrelated to human visual perception. Moreover, machine learning-based segmentation algorithms typically perform poorly in unstructured environments and require researchers to manually design feature extractors and classifiers, which further complicates the process. Machine learning techniques have also been used to detect various seeds: numerous methodologies exist for identifying different seed types, including content-based image retrieval (CBIR) techniques, and random forest classifiers have been used to detect corn seeds. However, these techniques require a large amount of supporting data, which is particularly challenging in agriculture, where data are often incomplete and difficult to access [8].
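As an illustration of the hand-crafted-feature pipelines discussed above, the sketch below extracts gray-level co-occurrence matrix (GLCM) statistics from grayscale patches and trains an RBF-kernel SVM. It follows the scikit-image (>= 0.19) and scikit-learn APIs; the chosen GLCM properties and patch preparation are illustrative assumptions and do not reproduce the EDLFM-RPD or Ambarwari pipelines.

```python
# Minimal sketch of a hand-crafted texture pipeline: GLCM statistics fed to
# an RBF-kernel SVM classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

GLCM_PROPS = ("contrast", "homogeneity", "energy", "correlation")

def glcm_features(gray_patch: np.ndarray) -> np.ndarray:
    """Compute a small GLCM descriptor for one 8-bit grayscale patch (0-255)."""
    glcm = graycomatrix(
        gray_patch,
        distances=[1],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    # Average each property over the four angles for rotation robustness.
    return np.array([graycoprops(glcm, p).mean() for p in GLCM_PROPS])

def train_rbf_svm(patches, labels) -> SVC:
    """Fit an RBF-kernel SVM on GLCM descriptors of labeled patches."""
    X = np.stack([glcm_features(p) for p in patches])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, labels)
    return clf
```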
The rapid development of deep learning provides new approaches for researchers in agriculture. To improve the precision and efficiency of corn disease detection, researchers worldwide have continually investigated and refined network models, achieving significant advances. Wang [9] introduced the ADSNN-BO model, which is based on the MobileNet architecture and augmented with attention mechanisms, for the detection and classification of crop diseases in the field. The efficacy of this approach was verified through cross-validated classification experiments on publicly available datasets, yielding an accuracy of 94.65%. Nonetheless, the precision of crop disease identification still requires further improvement. Wang [
10] selected apple black rot images from the PlantVillage dataset and trained a deep convolutional neural network (DCNN) to diagnose the severity of apple black rot. Their results show that the method eliminates the need for labor-intensive feature engineering and threshold-based segmentation; training with the VGG-16 model achieved the highest overall accuracy of 90.4%. Sibiya et al. [
11] introduced fuzzy logic decision rules that use the ratio of lesion area to leaf area as an index to determine the severity of leaf diseases. Joshi [
12] proposed a convolutional neural network (CNN), VirLeafNet, for the diagnosis of Clitoria ternatea mosaic virus: traditional image segmentation techniques are first used to separate the diseased regions from the background, and VirLeafNet then assesses disease severity, achieving a diagnostic accuracy of 90.48%. Subramanian [
13] investigated the use of VGG16, a deep CNN, for efficient classification of corn leaf diseases, employing transfer learning and Bayesian optimization for hyperparameter tuning. The images were sourced from the PlantVillage dataset and online resources, and the method was assessed using precision, accuracy, and recall. A comparison with alternative methods showed that the approach improved precision while reducing computational time; nevertheless, it did not cover all pathologies of corn foliage.
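The following sketch illustrates the transfer-learning setup referred to above: an ImageNet-pretrained VGG16 from torchvision whose classifier head is replaced for corn-leaf disease classes, with the convolutional backbone frozen. The class count, frozen layers, and optimizer settings are illustrative assumptions; the cited work tuned its hyperparameters with Bayesian optimization.

```python
# Minimal sketch of VGG16-based transfer learning for corn-leaf disease
# classification. Settings below are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumption: e.g. healthy, NCLB, rust, gray leaf spot

def build_vgg16_classifier(num_classes: int = NUM_CLASSES) -> nn.Module:
    # Load ImageNet-pretrained weights (torchvision >= 0.13 weights API).
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)

    # Freeze the convolutional backbone so only the new head is trained.
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer with a task-specific head.
    model.classifier[6] = nn.Linear(4096, num_classes)
    return model

model = build_vgg16_classifier()
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```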
Entuni [14] proposed an automatic identification method for corn leaf diseases based on DenseNet-201, with images collected from the PlantVillage dataset. The DenseNet-201 model reached a recognition accuracy of 95.11%, demonstrating high precision and robustness and outperforming prior techniques such as ResNet-50, ResNet-101, and the bag-of-features approach. Nonetheless, the method did not analyze other plant components when identifying infections. Akila [
15] analyzed the application of deep convolutional neural networks (DCNNs) to the classification of corn leaf diseases. Atila [
16] elaborated on the use of a DONN based on feature extraction for predicting leaf disease types. The method could accurately locate diseases within leaves and classify them. The results indicate that the developed method achieved an accuracy of 96.88%, making it suitable for prediction in practical agricultural settings, although it could not forecast diseases affecting other crop species. Subsequently, Chen et al. introduced a series of Deeplab architectures, namely Deeplab [
17], DeeplabV2 [
18], DeeplabV3 [
19], and DeeplabV3+ [
20], which can efficiently extract multi-scale semantic information from images. Ronneberger [
21] proposed the U-Net model, which advanced the fully convolutional network (FCN) by integrating an encoding pathway that captures contextual information with a decoding pathway designed for accurate localization. In the decoding pathway, U-Net concatenates high-resolution encoder features with the upsampled decoder features through skip connections.
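To illustrate the encoder-decoder-with-skip-connections idea, the following PyTorch sketch builds a deliberately tiny U-Net-style network in which upsampled decoder features are concatenated with same-resolution encoder features before the segmentation head. It is a generic toy example, not the U-Net of Ronneberger et al. or the NCLB-net proposed in this paper; channel widths and depth are arbitrary.

```python
# Minimal sketch of a U-Net-style encoder-decoder with one skip connection.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 3, num_classes: int = 2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)  # 64 = 32 upsampled + 32 skip channels
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                 # high-resolution encoder features
        e2 = self.enc2(self.pool(e1))     # coarser, more contextual features
        d1 = self.up(e2)                  # upsample back to e1 resolution
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection: concat
        return self.head(d1)              # per-pixel class logits

# logits = TinyUNet()(torch.randn(1, 3, 256, 256))  # -> shape (1, 2, 256, 256)
```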
Chen [22] subsequently proposed BLSNet, which is based on the U-Net architecture and further improves the accuracy of crop damage segmentation by incorporating attention mechanisms and multi-scale feature extraction. In response to the challenges faced by traditional models, such as missed segmentations, erroneous segmentations, and low segmentation precision, an improved U-Net network model, termed NCLB-net, was developed. Compared with traditional methods, the improved model excels in the segmentation of northern corn leaf blight, confirming its strong performance and generalization ability in segmenting corn leaf lesions and highlighting its potential value in practical applications.
The contributions of this paper primarily comprise the following three aspects: