Article

Recognition of Rice Sheath Blight Based on a Backpropagation Neural Network

1 School of Mechanical Engineering, Nantong University, Nantong 226019, China
2 School of Life Sciences, Nantong University, Nantong 226019, China
* Author to whom correspondence should be addressed.
Electronics 2021, 10(23), 2907; https://doi.org/10.3390/electronics10232907
Submission received: 2 November 2021 / Revised: 22 November 2021 / Accepted: 23 November 2021 / Published: 24 November 2021
(This article belongs to the Collection Electronics for Agriculture)

Abstract

Rice sheath blight is one of the main diseases in rice production. The traditional detection method, which relies on manual recognition, is usually inefficient and slow. In this study, a recognition method for identifying rice sheath blight based on a backpropagation (BP) neural network is proposed. Firstly, the sample image is smoothed by median filtering and histogram equalization, and the edge of the lesion is segmented using a Sobel operator, which largely removes background information and significantly improves image quality. Then, the corresponding feature parameters of the image are extracted based on color and texture features. Finally, a BP neural network, which is easily tuned and optimized, is built for training and testing. The results demonstrate that when the number of hidden layer nodes is set to 90, the recognition accuracy of the BP neural network reaches 85.8%. Based on the color and texture features of rice sheath blight images, the recognition algorithm constructed with a BP neural network achieves high accuracy and can effectively make up for the deficiencies of manual recognition.

1. Introduction

Rice is one of the most important food crops for mankind, with a long history of cultivation and consumption. Half of the world's population eats rice, mainly in Asia, southern Europe, tropical America and parts of Africa [1,2]. Rice sheath blight, caused by Rhizoctonia solani, is one of the main diseases in rice production [3]. Over the past 20 years, the occurrence of sheath blight in rice-growing areas of China has gradually increased. The disease mainly affects leaf sheaths and leaves, can occur over the whole growth period of rice, and has a great impact on yield [4,5]. If left uncontrolled, it generally reduces yield by 10% to 30%, and by as much as 50% in severe cases. In the early stage of the disease, oval, dark-green, water-stained lesions appear on the leaf sheath and then gradually expand into a moiré pattern with a grayish-white center [6,7]. Traditionally, agricultural practitioners recognize rice sheath blight by judging the health of the plant with the naked eye. This requires practitioners to travel to the paddy field for observation, demands considerable manpower and yields low accuracy. Such traditional recognition is time consuming, labor intensive and subjective, and cannot meet the real-time, rapid monitoring standards required by modern society [8]. Therefore, applying computer and image processing technology to the recognition and detection of rice sheath blight is of great value to rice production, because it is fast, accurate and works in real time.
With the development of computer image recognition and deep learning technology in recent years, many studies worldwide have addressed the recognition and classification of crop diseases. Rozaqi et al. [9] recognized the early blight and late blight of potato leaves using deep learning with a convolutional neural network architecture; with a 7:3 split between training and validation sets, and training on batches of 20 images over 10 epochs, the final accuracy rate was 92%. Addressing apple gray spot and cedar rust, Li et al. [10] used image segmentation methods and compared a support vector machine (SVM) with the ResNet and VGG convolutional neural network models; they finally chose the ResNet-18 deep learning model and achieved a higher recognition rate. Almadhor et al. [11] proposed an artificial intelligence-driven (AI-driven) framework to detect and classify the most common guava plant diseases. ΔE color-difference image segmentation was employed to segregate the infected areas; color (RGB, HSV), histogram and texture (LBP) features were used to build feature vectors, and advanced machine learning classifiers such as fine KNN, complex tree and boosted tree were used for recognition, reaching a classification accuracy of 99%. Moreover, Oyewola et al. [12] developed a novel deep residual convolutional neural network (DRNN) for cassava mosaic disease (CMD) detection in cassava leaf images, which utilized a balanced image dataset to improve classification accuracy; the DRNN model achieved an accuracy of up to 96.75%. In addition, Abayomi-Alli et al. [13] studied the recognition of cassava leaf disease using a novel image color histogram transformation technique for data augmentation in image classification tasks, in order to enable the use of neural networks when only low-quality images are available.
Four degradation methods (resolution down-sampling, Gaussian blurring, motion blur and overexposure) were carefully applied for verification. Furthermore, Kundu et al. [14] proposed a 'Custom-Net' model to identify blast and rust on pearl millet, obtaining images in real time through the 'Automatic and Intelligent Data Collector and Classifier' framework; the Custom-Net model reached a classification accuracy of 98.78%. Finally, Hu et al. [15] proposed a convolutional neural network based on data augmentation and transfer learning to efficiently recognize corn leaf diseases. By fine-tuning the pretrained GoogLeNet network and adjusting parameters such as the optimizer and learning rate, the average recognition accuracy for corn diseases, comprising corn common leaf rust, corn common rust and corn northern leaf blight, exceeded 95%.
Although image recognition technology has been widely used in agriculture, due to the planting characteristics of rice and the long incidence cycle of sheath blight, it has been applied relatively little to rice disease recognition, especially rice sheath blight. To date, little direct work has been reported on rice disease recognition. Phadikar et al. [16] used image segmentation technology for preprocessing, then employed a neural network as a classifier and proposed a system for detecting rice disease images. Anthonys et al. [17] selected rice blast and brown spot as target diseases for the development of recognition and classification processes: digital morphology methods were used to preprocess the rice disease images, and a membership function was used to classify 50 sample images; the recognition accuracy reached 70%, and a working system that can recognize the types of rice diseases was developed. In addition, Majid et al. [18] used the fuzzy entropy method for feature extraction and a probabilistic neural network to recognize rice disease images, developing a rice disease recognition application. Furthermore, Suman et al. [19] performed histogram preprocessing on leaf images of rice blast and Cercospora leaf spot, and used an SVM for classification and recognition. In summary, rice disease recognition methods based on image recognition and neural network technology have become a focus of significant present and future research.
In this study, a novel and facile method based on neural networks was designed for the recognition and detection of rice sheath blight. Firstly, given the quality of real rice sheath blight images, preprocessing was carried out, including image smoothing, image enhancement and image segmentation, to greatly weaken the influence of complex backgrounds on recognition. Secondly, the color and texture features of the rice sheath blight images were extracted to create the parameters used in subsequent training. Then, a backpropagation (BP) neural network model was built in MATLAB: 480 pictures were used as the training set, 120 pictures as the verification set, and the input and output matrices were established for training and testing. Finally, based on the actual recognition results, we show how the number of hidden layer nodes of the BP neural network can be efficiently adjusted for optimization. In China, few studies so far have concerned the recognition and detection of rice sheath blight using computer image recognition technology. The BP neural network model proposed in this paper provides a new platform for the recognition of rice sheath blight that is fast, accurate and works in real time, thereby promoting the rapid development of sustainable, green and automatic agriculture.
The rest of the paper is arranged as follows: Section 2 introduces the proposed method for processing the image of rice sheath blight. Section 3 presents the experiment and results. The discussion on the proposed method is presented in Section 4. Section 5 concludes this paper.

2. Proposed Methods

2.1. Image Preprocessing

The quality of the acquired images is affected by many factors, such as illumination and noise, which will severely affect the boundary recognition [20,21]. To alleviate this issue, image smoothing can efficiently weaken noise or distortion of the image. Common processing methods include mean filtering and median filtering. The principle of mean filtering is based on using a region in the image as a template, with the data in the region averaged and assigned to the center of the region [22]. In contrast, median filtering is more widely used. The basic principle of median filtering is to replace the value of a point in a digital image or digital sequence with the median value of each point in the neighborhood surrounding that point, so that the surrounding pixel values are close to the true value, which thus reduces the influence of isolated noise points [23].
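The median-filtering principle described above can be illustrated in a few lines. This is a sketch using SciPy's `median_filter` rather than the paper's MATLAB `medfilt2`; the toy patch and the single noise pixel are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

# Toy 5x5 grayscale patch with one isolated salt-noise pixel in the center.
patch = np.full((5, 5), 50, dtype=np.uint8)
patch[2, 2] = 255  # isolated noise point

# 3x3 median filtering: each pixel is replaced by the median of its neighborhood,
# so the isolated spike is pulled back to the surrounding value (50)
# while the uniform background is left untouched.
smoothed = median_filter(patch, size=3)
```

Unlike mean filtering, the median is insensitive to a single outlier in the window, which is why it suppresses isolated noise points without blurring the true value.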
Image enhancement is used to strengthen the contour edge and feature information in the image, enhance the contrast of gray at the edges, and facilitate the analysis of information. Through image enhancement, we can remarkably improve the quality and recognizability of the image for better display. Common image enhancement methods include gray change enhancement and histogram equalization. Gray change enhancement makes the image clearer by simply adjusting the light and dark contrast of the gray image. Histogram equalization is a method used to enhance the image contrast. The main idea is to change the histogram distribution of an image into an approximately uniform distribution, so as to enhance the image contrast [24,25].
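The histogram-equalization idea above can be sketched with the classic cumulative-histogram mapping, assuming an 8-bit grayscale image; the low-contrast test image is a synthetic stand-in.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image: map gray levels
    through the normalized cumulative histogram so the output distribution
    becomes approximately uniform (i.e., contrast is stretched)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied gray level
    # Classic equalization mapping, scaled back to 0..255.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast image whose gray levels sit in a narrow band (100..120).
low_contrast = np.random.default_rng(0).integers(
    100, 121, size=(64, 64)).astype(np.uint8)
equalized = equalize_hist(low_contrast)
# After equalization the gray levels are spread across the full 0..255 range.
```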
Image segmentation separates diseased spots on leaves from the image background. It refers to dividing an image into several disjoint regions based on image features such as grayscale, color, spatial texture and geometric shape, so that features are consistent or similar within a region and clearly different between regions. Through reasonable threshold selection, the diseased spots can be separated from the leaves [26,27,28].
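A threshold on the Sobel gradient magnitude gives a simple edge-based segmentation of a bright region, in the spirit of the approach described above. The synthetic "lesion" image and the 0.5 × max threshold are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import sobel

# Synthetic image: a bright square "lesion" on a dark background.
img = np.zeros((32, 32), dtype=float)
img[8:24, 8:24] = 200.0

# Sobel gradients along both axes; the magnitude peaks at region boundaries.
gx = sobel(img, axis=1)
gy = sobel(img, axis=0)
magnitude = np.hypot(gx, gy)

# A simple global threshold on the gradient magnitude yields a binary edge map
# that traces the lesion boundary while leaving interior and background empty.
edges = magnitude > 0.5 * magnitude.max()
```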

2.2. Feature Extraction

Image features can be divided into visual features and statistical features. Visual features refer mainly to the natural features of human visual perception, such as color, texture and shape. Statistical features refer to features that need to be measured through transformation, such as the frequency spectrum and histogram [29,30,31].
Color features are scale invariant; that is, they depend relatively little on image size, direction and viewing angle, and are relatively insensitive to various deformations, making them the most direct features for describing an image [32]. Color features can be used to effectively recognize different rice diseases. In image processing, RGB is the most important and common color model [33]. Defined in a Cartesian coordinate system, it superimposes the three basic colors of red, green and blue in different proportions to produce a rich range of colors, and is also called the three primary colors model.
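The first two color moments per RGB channel can be computed as sketched below. The random red-dominated patch is a hypothetical stand-in for a real lesion image, and taking the per-channel standard deviation as the second moment is a common convention.

```python
import numpy as np

def color_moments(rgb):
    """First two color moments per RGB channel:
    first moment = channel mean, second moment = channel standard deviation."""
    chans = rgb.reshape(-1, 3).astype(float)
    return chans.mean(axis=0), chans.std(axis=0)

# Hypothetical 50x50 patch dominated by red, as with the lesion images
# described in Section 3 (Table 1 reports R > G > B for diseased spots).
rng = np.random.default_rng(1)
patch = np.stack([
    rng.integers(80, 121, (50, 50)),   # R channel
    rng.integers(70, 111, (50, 50)),   # G channel
    rng.integers(30, 61, (50, 50)),    # B channel
], axis=-1)

first, second = color_moments(patch)
# first[0] (red mean) comes out largest, mirroring the R > G > B ordering.
```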
Texture features are global features that describe the surface properties of the scene corresponding to an image or image region. They are not based on individual pixels; rather, they are computed statistically over a region containing multiple pixels [34,35]. These features are robust because local deviations do not cause them to fail. Extracting texture features means mapping different statistics, spatial structures and sizes to different gray values. Texture features are rotation invariant and relatively resistant to noise. Commonly used texture feature extraction methods include gray difference statistics, the gray level co-occurrence matrix and gray-level run-length statistics. Gray difference statistics use the gray histogram of the texture area as the texture feature, from which the mean, variance, energy and entropy are extracted for texture description.
Assuming that (x,y) is a point in the image, the gray level difference of the nearby point (x + Δx,y + Δy) is as follows:
g_Δ(x, y) = g(x, y) − g(x + Δx, y + Δy)  (1)
where g_Δ is the gray difference. Assume that the gray difference can take m possible levels. As the point (x, y) moves across the whole image, the number of times g_Δ takes each value is accumulated, from which a histogram can be constructed. The probability p(k) of each value of g_Δ is then obtained from the histogram. When k is small, a large p(k) indicates a rough texture, while a small p(k) indicates a fine texture.
Other related texture features are shown in Equations (2)–(4):
Mean value: mean = (1/m) Σ_i i·p(i)  (2)
Contrast ratio: con = Σ_i i²·p(i)  (3)
Entropy: Ent = −Σ_i p(i)·log₂ p(i)  (4)
In the above equations, when the values of p(k) are concentrated near the origin, the mean value is small; when p(k) is distributed more evenly, the entropy is larger and the energy is comparatively smaller [36].
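The gray-difference statistics of Equations (1)–(4) can be sketched as follows. Using the horizontal neighbor (Δx = 1, Δy = 0) is an assumption made here for illustration; any fixed displacement works the same way.

```python
import numpy as np

def gray_difference_features(img, levels=256):
    """Gray-difference statistics (Equations (2)-(4)): histogram the absolute
    differences between horizontally adjacent pixels, then compute mean,
    contrast and entropy from the normalized histogram p(i)."""
    a = img.astype(int)
    diff = np.abs(a[:, :-1] - a[:, 1:])           # horizontal gray differences
    p = np.bincount(diff.ravel(), minlength=levels) / diff.size
    i = np.arange(levels)
    mean = (i * p).sum() / levels                 # Equation (2)
    con = (i ** 2 * p).sum()                      # Equation (3)
    nz = p[p > 0]
    ent = -(nz * np.log2(nz)).sum()               # Equation (4)
    return mean, con, ent

# A flat (fine-textured) patch yields zero difference statistics,
# while a noisy (rough) patch yields larger mean, contrast and entropy.
```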

2.3. The BP Neural Network

A BP neural network is a multi-layer feedforward neural network that is trained according to the error backpropagation algorithm. As shown in Figure 1, the structure mainly includes the input layer, output layer and hidden layer. The basic BP algorithm is composed of two processes. One process is the forward transfer of data; that is, the sample data enter from the input layer, are processed layer by layer in the hidden layer and are then transferred to the output layer. The other process is the reverse transmission of errors, which is generally used when the actual output value is inconsistent with the expected output value in order to correct the weights between neurons [37,38,39,40]. Neural network training refers to the continuous adjustment of the weights of each layer and the reduction of output errors through the forward and reverse transmission of multiple sets of samples, so as to achieve network optimization [41].
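The two passes described above (forward data transfer and reverse error transmission with weight correction) can be sketched with a minimal one-hidden-layer network trained on XOR. This is a toy illustration with assumed sizes (2 inputs, 4 hidden nodes, learning rate 0.5), not the paper's 7503-input, 90-hidden-node network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output weights
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sig(X @ W1 + b1)          # forward pass: input layer -> hidden layer
    return h, sig(h @ W2 + b2)    # forward pass: hidden layer -> output layer

_, out0 = forward(X)
loss_before = np.mean((out0 - y) ** 2)

for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)     # output-layer error term
    d_h = (d_out @ W2.T) * h * (1 - h)      # error backpropagated to hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

_, out1 = forward(X)
loss_after = np.mean((out1 - y) ** 2)  # error shrinks as weights are corrected
```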
In order to improve training efficiency, the input matrix is normalized. More specifically, the input and output data of the neural network are limited to between 0 and 1. Equation (5) is generally used to calculate this value:
x̄_i = (x_i − x_min) / (x_max − x_min)  (5)
where x_i represents the input or output data, x_min is the minimum value of the data variation range and x_max is the maximum. The number of hidden layer nodes M is then estimated using Equation (6):
M = √(m + n) + a  (6)
where m and n are the numbers of neurons in the output layer and input layer, respectively, and a is a constant between 0 and 10.
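Equations (5) and (6) can be sketched directly. The value a = 3 below is an assumption chosen so that, with the paper's 7503 input features and 1 output neuron, the estimate lands near the 90 hidden nodes used in Section 3.

```python
import numpy as np

def minmax_normalize(x):
    """Equation (5): scale data linearly into the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def hidden_nodes(n_in, n_out, a=3):
    """Empirical rule of Equation (6): M = sqrt(m + n) + a, with a in [0, 10].
    a = 3 is an assumed value, not one stated in the paper."""
    return round(np.sqrt(n_in + n_out) + a)

# sqrt(7503 + 1) + 3 is approximately 89.6, i.e. about 90 hidden nodes.
```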

2.4. Process of the Proposed Method

The block diagram of the proposed method is illustrated in Figure 2, which can be divided into four stages.
(1) Image acquisition: a camera is used to collect pictures of rice, including rice with sheath blight and healthy rice.
(2) Image preprocessing: median filtering, histogram equalization and edge segmentation are performed on the images, which have been resized to the same dimensions, to remove the complex background.
(3) Feature extraction: based on the color and texture features of the rice sheath blight images, the feature parameters of each image are extracted for the training unit.
(4) Image recognition: after building the BP neural network, training and recognition tests are carried out, and the number of hidden layer nodes is optimized to improve recognition efficiency.
Figure 2. Block diagram of the proposed method.

3. Results and Discussion

Regarding the hardware configuration, we used an AMD Ryzen 7 4800H CPU with 16 GB RAM and an NVIDIA GeForce RTX 2060 GPU; the software environment was MATLAB R2020a.

3.1. Data Collection

A total of 230 photos of rice with sheath blight were taken under natural light and saved in JPG format. To preserve the main diseased spots without compromising image integrity, each image was cropped to 50 × 50 pixels, yielding a total of 600 sample pictures. Representative samples are shown in Figure 3.

3.2. Preprocessing and Feature Extraction

As shown in Figure 4, experiments showed that the medfilt2 operator gave the best median filtering result. The histeq operator was then used to equalize the image histogram. Finally, a comparison of edge-based segmentation, the Otsu method and region-based segmentation showed that edge segmentation with the Sobel operator performed best.
Figure 5 shows feature extraction in the RGB color space, which facilitates recognition of rice sheath blight from color information. According to Table 1, the first-order moments of the diseased-spot picture are R = 97.8240, G = 88.8168 and B = 45.9972. The red component is highest and the blue component lowest, indicating that red is the most prominent color in the diseased-spot picture.
Figure 4. Image during preprocessing: (a) the original image; (b) median filter image; (c) histogram equalization image; (d) edge segmentation image.
Figure 5. RGB component and histogram of rice sheath blight sample image: (a) red component; (b) green component; (c) blue component; (d) histogram of red, green and blue components.
Table 1. Color moments of rice sheath blight.

Label | R | G | B
Mean value of first moment | 97.8240 | 88.8168 | 45.9972
Variance of second moment | 41.1104 | 34.4180 | 26.0005
The results of the gray difference statistical features comparison of leaves and diseased spots are shown in Table 2.
As Table 2 shows, both the mean value (Mean) and entropy (Ent) of healthy leaves are smaller than those of diseased spots, indicating that the texture of diseased spots is relatively rough. In addition, the differences between the mean values (Mean) and contrast ratios (Con) of the two are large.

3.3. Sample Training and Testing of the BP Neural Network

The number of input elements of the BP neural network equals the dimension of the feature vector of the recognition object. Each picture has a resolution of 50 × 50, i.e., 2500 pixels; with three color channels (R, G and B), this gives 7500 color features. Adding the three texture features extracted from each image, namely the mean value (Mean), contrast ratio (Con) and entropy (Ent), each picture has a total of 7503 features. Of the 600 rice pictures, 480 were used as the training set, forming a 7503 × 480 input matrix, and the other 120 were used as the verification set, forming a 7503 × 120 verification input matrix. Correspondingly, a 1 × 480 output matrix and a 1 × 120 verification output matrix were established. According to Equation (6), the number of hidden layer nodes is approximately 90.
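The matrix dimensions described above can be assembled as follows; the random values are placeholders standing in for the real feature vectors and labels.

```python
import numpy as np

# Feature count per 50x50 picture: 2500 pixels x 3 color channels + 3 texture stats.
n_features = 50 * 50 * 3 + 3                     # = 7503
n_train, n_test = 480, 120

rng = np.random.default_rng(0)
X_train = rng.random((n_features, n_train))      # 7503 x 480 input matrix
Y_train = rng.integers(0, 2, (1, n_train))       # 1 x 480 output matrix
X_test = rng.random((n_features, n_test))        # 7503 x 120 verification input
Y_test = rng.integers(0, 2, (1, n_test))         # 1 x 120 verification output
```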
Table 3, Table 4 and Table 5 show the experimental results when the number of hidden layer nodes is 80, 90 and 100, respectively.
When the number of hidden layer nodes was set to 80, around 100 of the 120 test samples were successfully recognized across the five experiments conducted under the same conditions, with a mean recognition rate of 83.5%.
Table 4. Test results with 90 hidden layer nodes.

Label | Number of Samples | Recognition Quantity | Recognition Rate (%)
1 | 120 | 106 | 88.3
2 | 120 | 103 | 85.8
3 | 120 | 102 | 85.0
4 | 120 | 101 | 84.2
5 | 120 | 103 | 85.8
Mean value | \ | \ | 85.8
When the number of hidden layer nodes was set to 90, around 103 of the 120 test samples were successfully recognized across the five experiments conducted under the same conditions, with a mean recognition rate of 85.8%.
Table 5. Test results with 100 hidden layer nodes.

Label | Number of Samples | Recognition Quantity | Recognition Rate (%)
1 | 120 | 104 | 86.7
2 | 120 | 98 | 81.7
3 | 120 | 102 | 85.0
4 | 120 | 100 | 83.3
5 | 120 | 104 | 86.7
Mean value | \ | \ | 84.7
When the number of hidden layer nodes was set to 100, around 102 of the 120 test samples were successfully recognized across the five experiments conducted under the same conditions, with a mean recognition rate of 84.7%.
Generally, the recognition rates for the three hidden layer node settings are similar. With 90 nodes the recognition rate is highest: 2.3% higher than with 80 nodes and 1.1% higher than with 100 nodes. It can be concluded that, as the number of hidden layer nodes increases up to a critical point, the memory and learning ability of the network are enhanced and the training recognition rate improves. Beyond the critical point, further increases degrade the BP neural network's learning and recognition ability, and the generalization ability and training recognition accuracy decrease. Therefore, the number of hidden layer nodes of the BP neural network was finally set to 90, with 85.8% accuracy.

4. Discussion

Table 6 shows the confusion matrix for the detection results of the 120 validation pictures (96 sheath blight pictures and 24 healthy rice pictures) when the number of hidden layer nodes is 90. The image recognition accuracy for rice sheath blight is about 88%, while the correct recognition rate for healthy rice pictures is relatively low, at about 75%. There may be two reasons for this. Firstly, there are only 24 pictures of healthy rice in the verification set, so misclassifying even one or two pictures leads to a relatively large error. Secondly, some healthy rice also shows traces or spots similar to sheath blight, which the neural network encounters during training, resulting in inevitable errors in the final recognition.
In Figure 6, the second group of confusion matrices, whose recognition rate is close to the average value, is selected; the ROC curve is drawn and the AUC value calculated. The ordinate is the true positive rate, the ratio of correctly predicted positive samples to actual positive samples, while the abscissa is the false positive rate, the ratio of negative samples incorrectly predicted as positive to actual negative samples. The closer the curve is to the upper left corner, the larger the area under the curve, and the stronger the recognition ability of the classification method. It can be seen that the proposed method has good classification ability for rice sheath blight images.
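These rates can be recomputed directly from the second group of Table 6, taking a sheath blight picture as the positive class; the arithmetic below is a check against the reported figures.

```python
# Counts from the second confusion matrix in Table 6.
tp, fn = 83, 13        # diseased pictures predicted diseased / healthy
fp, tn = 4, 20         # healthy pictures predicted diseased / healthy

tpr = tp / (tp + fn)   # true positive rate (ordinate of the ROC plot)
fpr = fp / (fp + tn)   # false positive rate (abscissa of the ROC plot)
accuracy = (tp + tn) / (tp + fn + fp + tn)
# accuracy = 103/120, i.e. about 0.858, matching the 85.8% mean recognition rate
```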
Table 6. Confusion matrix for detection results.

Label | Actual Class | Predicted: Sheath Blight Picture | Predicted: Healthy Rice Picture
1 | Sheath blight picture | 87 | 9
1 | Healthy rice picture | 5 | 19
2 | Sheath blight picture | 83 | 13
2 | Healthy rice picture | 4 | 20
3 | Sheath blight picture | 85 | 11
3 | Healthy rice picture | 7 | 17
4 | Sheath blight picture | 84 | 12
4 | Healthy rice picture | 7 | 17
5 | Sheath blight picture | 85 | 11
5 | Healthy rice picture | 6 | 18
This study has therefore realized the recognition of rice sheath blight images based on a BP neural network, with a recognition accuracy of 85.8%, which is superior to traditional manual recognition. In other plant disease recognition experiments using neural networks, the recognition rate of early and late blight of potato leaves reached 92% [9], the detection rate of cassava mosaic disease was 96.75% [12], and the average recognition accuracy of corn common leaf rust, corn common rust and corn northern leaf blight exceeded 95% [15]. Several points in this research deserve attention. First, the number of pictures collected is limited; the recognition rate can be enhanced by continuously enlarging the training set. Second, the diseased spots of rice sheath blight are complex, changeable and without a fixed shape, which complicates recognition. Further optimization of the BP neural network, such as improving the transfer function and optimizing the network structure, could significantly improve its ability to recognize crop diseases.

5. Conclusions

Rice disease recognition is an important field of research for agricultural pest control. In this paper, a recognition method for identifying rice sheath blight based on the BP neural network has been proposed. The image is preprocessed through median filtering, histogram equalization and segmentation. Then, according to the characteristics of the picture, the color features and texture features are extracted. Finally, a BP neural network model has been built for training image recognition. The conclusions can be summarized as follows:
(1) In the discovery and control of rice sheath blight, the BP neural network can play an important role in detecting the disease from images of crops. Its recognition speed is fast and its recognition rate is high; after preprocessing, feature extraction and network optimization, the training recognition rate can reach 85.8%.
(2) Preprocessing the sample images effectively and significantly enhances the disease recognition rate. During image preprocessing, the recognition rate after median filtering is better than that after mean filtering.
(3) Analyzing the color and texture features of rice sheath blight significantly improves the recognition rate; the color and texture of diseased spots differ substantially from those of healthy rice.
(4) This method realizes the rapid and accurate recognition of rice sheath blight, which can dramatically reduce the labor intensity of agricultural workers and improve work efficiency. In the future, it will be necessary to collect more pictures of rice sheath blight, covering the whole process of disease evolution, and to increase the number of training samples to improve the recognition rate. Other common disease data can also be added.
The BP neural network model proposed in this contribution will shed new light on crop disease recognition due to its unique merits. It can work in real time quickly and accurately, and we believe that it can make huge contributions toward sustainable, green and automatic agriculture in the future.

Author Contributions

Conceptualization, Z.L. and Y.L.; methodology, Y.L. and H.N.; software, Y.L.; validation, Y.L. and X.Z.; formal analysis, Y.L. and S.L.; investigation, X.W.; resources, H.N.; data curation, K.W.; writing—original draft preparation, Y.L.; writing—review and editing, K.W.; visualization, Z.L.; supervision, Z.L. and X.Z.; project administration, Z.L.; funding acquisition, H.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions, grant number PAPD; Key R&D Projects in Jiangsu Province, grant number BE2019060; Science and technology project of Nantong, Jiangsu Province, grant number MS12019064.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. BP neural network structure.
Figure 3. Images of rice sheath blight.
Figure 6. ROC plots.
Table 2. Gray difference statistics of diseased spots and leaves.

Label            Mean     Con        Ent
Healthy plants   0.0492   968.5037   3.6157
Diseased spots   0.0573   783.3914   3.6813
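The gray difference statistics in Table 2 (mean, contrast, entropy) are texture features computed from the histogram of absolute gray-level differences between neighboring pixels. A minimal sketch under common textbook definitions; the paper's exact pixel offset and normalization are not specified here, so the function name `gray_difference_stats`, the (0, 1) offset, and the [0, 1] mean normalization are illustrative assumptions:

```python
import numpy as np

def gray_difference_stats(img, offset=(0, 1)):
    """Gray-difference statistics of an 8-bit grayscale image.

    Returns (mean, contrast, entropy) of the normalized histogram of
    absolute gray-level differences at the given (row, col) offset.
    """
    img = np.asarray(img, dtype=np.int16)  # avoid uint8 wrap-around
    dy, dx = offset
    h, w = img.shape
    # Absolute differences between each pixel and its offset neighbor.
    diff = np.abs(img[:h - dy, :w - dx] - img[dy:, dx:])
    hist = np.bincount(diff.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # normalized difference histogram
    levels = np.arange(256)
    mean = float((levels / 255.0 * p).sum())      # mean difference, scaled to [0, 1]
    contrast = float((levels ** 2 * p).sum())     # second moment of differences
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())    # histogram entropy in bits
    return mean, contrast, entropy
```

A uniform image yields (0, 0, 0), while a column-wise 0/255 checkerboard yields the extreme values (1.0, 65025.0, 0.0), which brackets the magnitudes seen in Table 2.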
Table 3. Test results with 80 hidden layer nodes.

Label        Number of Samples   Recognition Quantity   Recognition Rates (%)
1            120                 101                    84.2
2            120                 99                     82.5
3            120                 101                    84.2
4            120                 98                     81.7
5            120                 102                    85.0
Mean value   \                   \                      83.5
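Each recognition rate in Table 3 is the recognition quantity divided by the number of samples, expressed as a percentage, and the final row averages the five per-group rates. A quick check of the arithmetic:

```python
samples = 120
recognized = [101, 99, 101, 98, 102]  # recognition quantity per test group

rates = [round(100 * r / samples, 1) for r in recognized]
mean_rate = round(sum(rates) / len(rates), 1)
# rates -> [84.2, 82.5, 84.2, 81.7, 85.0], mean_rate -> 83.5
```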
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Lu, Y.; Li, Z.; Zhao, X.; Lv, S.; Wang, X.; Wang, K.; Ni, H. Recognition of Rice Sheath Blight Based on a Backpropagation Neural Network. Electronics 2021, 10, 2907. https://doi.org/10.3390/electronics10232907
