Article

Detection of Black Spot of Rose Based on Hyperspectral Imaging and Convolutional Neural Network

School of Technology, Beijing Forestry University, No. 35 Tsinghua East Road, Beijing 100083, China
* Author to whom correspondence should be addressed.
AgriEngineering 2020, 2(4), 556-567; https://doi.org/10.3390/agriengineering2040037
Submission received: 25 August 2020 / Revised: 30 October 2020 / Accepted: 3 November 2020 / Published: 17 November 2020
(This article belongs to the Special Issue Precision Agriculture Technologies for Management of Plant Diseases)

Abstract

Black spot is one of the most seriously damaging plant diseases in China, especially in rose production. Hyperspectral technology captures both the external features and the internal structure information of measured samples, so it can be used to identify the disease. In this research, both the spectral and image features of two rose cultivars infected with black spot were used to train a convolutional neural network (CNN) model. Multiple scattering correction (MSC) and standard normal variable (SNV) methods were applied to preprocess the spectral data. Cropping, median filtering and binarization were used to pretreat the hyperspectral images. Three CNN models based on AlexNet, VGG16 and neural discriminative dimensionality reduction (NDDR) were evaluated by analyzing classification accuracy and the loss function. The results show that the CNN model based on fused features has higher accuracy. The highest detection accuracies were 100% for cultivar 12–26 and 99.95% for cultivar 13–54, both obtained with the NDDR-CNN model. This research therefore indicates that CNN-based spectral analysis can detect black spot of rose, provides a reference for the detection of other plant diseases, and shows favorable prospects for further development.

1. Introduction

Roses have great commercial value in China as ornamental plants, as food and as cut flowers for export [1]. Black spot is one of the most severe and devastating diseases of roses [2]. It is caused by the black spot fungus. Symptoms of infection mainly include round spots with black, feathery edges on the upper side of the leaves [3]. At present, the disease is managed with fungicides or pesticides. Conventional detection relies on visual ratings by gardeners after the spots have spread, which is inefficient and inaccurate [4,5]. If the disease could be identified accurately at an early stage, it could be controlled in a timely manner, improving economic returns. It is therefore urgent to improve the ability to detect black spot disease comprehensively.
Biological methods are also used to detect plant diseases. Ju et al. developed an assay combining recombinase polymerase amplification with lateral-flow dipstick technology (RPA-LFD) for the rapid and sensitive detection of V. dahliae [6]. Shi et al. applied loop-mediated isothermal amplification (LAMP) to detect P. carotovorum in celery with soft rot, using a primer set designed from the pmrA conserved sequence of P. carotovorum [7]. PCR analysis was used to identify and detect powdery and downy mildew on cucumber as well as Pectobacterium brasiliense on potato [8,9]. These methods are rapid and sensitive, but they are also expensive, destructive to the leaves, and require cumbersome procedures and specialist biochemical knowledge. Hyperspectral technology is an advanced technique that organically combines traditional spectroscopy with two-dimensional imaging. It reflects both external feature information and internal structure information of the samples. Compared with manual and biological detection, it is fast, non-destructive, non-polluting and easy to operate. It has gradually been applied in related fields and offers a potential solution to many of the problems faced by human visual detection of plant pathology in the field. Roscher et al. combined hyperspectral features with 3D geometry features to detect Cercospora leaf spot [10]. Ban et al. studied the SPAD value of apple leaves infected with apple mosaic virus using hyperspectral transmission measurement [11]. Mahlein et al. reviewed hyperspectral imaging technologies used to evaluate the relationship between plants and pathogens [12]. Mehrubeoglu et al. used hyperspectral images of grape leaves to identify red blotch disease and its different infection stages [13]. Laurel wilt disease of avocado, early Ganoderma boninense disease of oil palm trees, Fusarium head blight in wheat kernels and early-stage black Sigatoka in banana leaves have been detected automatically based on hyperspectral imaging [14,15,16,17]. Wahabzada et al. presented several data mining techniques for discovering the spectral characteristics of specific diseases [18]. Hariharan et al. developed a novel method for analyzing hyperspectral data using finite difference approximation (FDA) and bivariate correlation (BC) to distinguish laurel wilt infected avocado trees from healthy ones with an overall accuracy of 100% [19]. Hyperspectral imaging and machine learning have gradually been used in many studies to detect symptoms at early disease stages. Zhang et al. chose the classification and regression tree (CRT) algorithm to establish a spectral prediction model of wheat powdery mildew, considering the effects of the wheat ear and leaf shadow [20]. The K-nearest neighbor (KNN) method was used to establish discriminant models to classify healthy and gray mold infected tomato leaves and muskmelon Cercospora leaf spot with hyperspectral imaging techniques [21,22]. Chen et al. established and selected the most appropriate leaf-level reflectance-based vegetation indices for bacterial wilt detection in peanuts; ANOVA, multilayer perceptrons and a reduced sampling method were used to analyze the spectral data [23]. Garhwal, Park and others developed partial least squares discriminant analysis (PLS-DA) models to predict Zebra Chip in potatoes, Marssonina blotch in apple leaves, oak wilt and yellow rust on wheat leaves, respectively; the spectral signatures were extracted by segmentation and morphological operations [24,25,26,27]. Zhang and Pan et al. obtained hyperspectral images of infected rice leaves and pear fruit, and support vector machine (SVM) models were constructed to identify different infection severities based on the transformed data [28,29].
The convolutional neural network (CNN) rarely appears among the existing methods for detecting plant diseases. Those methods achieve early detection, but there is still room for improvement in accuracy. In addition, classic machine learning algorithms usually require complex feature engineering, whereas a CNN does not: the data can be fed directly into the network, which usually achieves good performance.
This study used hyperspectral imaging to detect black spot in two rose cultivars. Spectral and image features were extracted and different CNN structures were adopted. The resulting accuracy and efficiency show that CNNs have great potential for detecting plant diseases.

2. Materials and Methods

2.1. Hyperspectral Imaging System

The hyperspectral imaging system is shown in Figure 1. It contains an SOC710VP HS line-scanning imaging spectrograph (AZUP Scientific Ltd., Beijing, China) fixed to a bracket, a C-type infrared-corrected lens, an object stage, a notebook computer running HyperScanner_2.0.127 software (Surface Optics Corporation, San Diego, CA, USA) for image acquisition, and two 150 W tungsten halogen lamps that provide stable, broadband illumination. The imager acquires 128 wavebands over a spectral range of 370–1042 nm, with a spectral resolution of 4.7 nm and a spatial resolution of 696 × 520 pixels. The exposure time of the imaging system was set to 3 ms. The lamps were placed at a 45° angle, 65 cm from the stage, and their location and elevation were adjusted according to the imaging results of the samples.

2.2. Plant Samples

This study used two rose cultivars, 12–26 (susceptible) and 13–54 (resistant), both hybrids from the rose black spot breeding laboratory in the School of Landscape Architecture. Ninety plants of each cultivar were grown in a greenhouse under identical conditions: 20 °C, 60% relative humidity and a 12 h photoperiod. Five true leaves were collected from the same stem position, 20 cm above the ground, with petiole lengths of 2–4 cm. This gave 450 leaves per cultivar, 900 healthy leaves in total, which were set as the control group. Mycelium was scraped from naturally infected plants and prepared under the microscope as spore suspensions with concentrations of 0.5, 1 and 1.5 mol/L. Each cultivar was divided into three groups, and each group was inoculated with one of the suspension concentrations. Four drops of suspension were inoculated symmetrically at four positions on each leaf. It should be noted that inoculation was carried out only after the spectral data of all healthy leaves had been collected. The inoculated leaves were placed in petri dishes lined with filter paper and culture fluid, and then put into an incubator at 25 °C and 80% relative humidity, in darkness, to promote the growth of the black spot fungus. The leaves were taken out every 24 h for spectral data collection over the 7 days of cultivation. After three weeks of continuous culture, the incidence of black spot on each leaf was recorded, and the leaves were divided into two classes, healthy and infected, according to whether spots were present on the surface; these classes served as the labels for later training. The infected samples are shown in Figure 2. No obvious abnormality was observed on the leaf surface at the beginning of the inoculation period.

2.3. Hyperspectral Imaging Acquisition and Calibration

Ambrose et al. pointed out that raw hyperspectral images may be unusable because of factors such as systematic noise and environmental influence [30]. Therefore, dark and white calibration of the images was needed. The calibration was performed according to Equation (1):
Ic = (Ir − Id) / (Iw − Id)   (1)
where Ic is the calibrated reflectance image; Ir is the raw hyperspectral image; Id is the hyperspectral image of dark reference, which has almost 0% reflectance; and Iw is the white reference hyperspectral image, which has a reflectance of over 99%.
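As an illustration of Equation (1), the reflectance calibration can be applied per pixel and per band with NumPy. This is a minimal sketch under stated assumptions; the function and array names are placeholders, not the authors' code.

```python
import numpy as np

def calibrate(raw, dark, white):
    """Radiometric calibration of a hyperspectral cube (rows, cols, bands).

    raw   -- raw hyperspectral image Ir
    dark  -- dark reference Id (lens covered, ~0% reflectance)
    white -- white reference Iw (~99% reflectance panel)
    """
    raw = raw.astype(np.float32)
    dark = dark.astype(np.float32)
    white = white.astype(np.float32)
    # Equation (1): Ic = (Ir - Id) / (Iw - Id); guard against division by zero
    return (raw - dark) / np.maximum(white - dark, 1e-6)
```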

2.4. CNN Models for Detection

In this study, three CNN structures are used for model training.
a. AlexNet
AlexNet was designed by Alex Krizhevsky, together with Ilya Sutskever and Geoffrey Hinton, and won the ImageNet competition in 2012. Compared with traditional machine learning classification algorithms, its main innovations are: (1) ReLU was successfully used as the activation function of the CNN, verifying that it outperforms the sigmoid function in deeper networks and avoiding the vanishing-gradient problem that the sigmoid suffers from there; (2) dropout is used to randomly ignore some neurons during training to avoid overfitting the model; in AlexNet, the last few fully connected layers use dropout; and (3) data augmentation. Without data augmentation, a CNN with many parameters trained only on the original data volume will overfit; with augmentation, overfitting is greatly reduced and the generalization ability is improved.
The structure used in this study is shown in Figure 3. It contains 5 convolutional layers, 3 fully connected layers and 3 pooling layers. Most importantly, 2 dropout layers are added to prevent overfitting. In this study, the input to this network is the spectral feature.
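For reference, a minimal Keras sketch of such an AlexNet-style network for a 128-band spectral input is given below. The filter counts, kernel sizes and dense-layer widths are illustrative assumptions, since the paper only specifies the layer counts.

```python
from tensorflow.keras import layers, models

def build_alexnet_1d(n_bands=128, n_classes=2):
    """AlexNet-style 1D CNN for a single spectrum of n_bands reflectance values."""
    return models.Sequential([
        # 5 convolutional layers and 3 pooling layers, mirroring the described structure
        layers.Conv1D(96, 11, strides=2, padding="same", activation="relu",
                      input_shape=(n_bands, 1)),
        layers.MaxPooling1D(2),
        layers.Conv1D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(384, 3, padding="same", activation="relu"),
        layers.Conv1D(384, 3, padding="same", activation="relu"),
        layers.Conv1D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        # 3 fully connected layers with 2 dropout layers against overfitting
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
```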
b. VGG16
VGG is a CNN model proposed by Simonyan and Zisserman in the paper “Very Deep Convolutional Networks for Large-Scale Image Recognition”. The model participated in the 2014 ImageNet image classification and localization challenge, ranking second in the classification task and first in the localization task. The features of VGG are:
(1)
Small convolution kernels. All convolution kernels are 3 × 3 (with 1 × 1 used only rarely).
(2)
Small pooling kernels. Compared with AlexNet’s 3 × 3 pooling kernels, VGG uses 2 × 2 pooling kernels throughout.
(3)
Fully connected layers converted to convolutions. In the test phase, the three fully connected layers used during training are replaced with three convolutional layers that reuse the trained parameters. The resulting fully convolutional network is no longer limited by the fully connected layers and can accept input of any width or height.
According to the convolution kernel size and the number of convolutional layers, VGG can be divided into 6 configurations (ConvNet configurations): A, A-LRN, B, C, D and E. D and E are the most commonly used and are called VGG16 and VGG19.
In this study, VGG16 was selected as the training model; Figure 4 shows the structure. It contains 13 convolutional layers, 3 fully connected layers and 5 pooling layers. The input to this network structure is the hyperspectral image feature.
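A hedged sketch of how such a VGG16-based classifier could be assembled with the Keras applications API in the authors' stated TensorFlow/Keras environment is shown below; the input size, replication of the grayscale band image to three channels, and the head widths are assumptions for illustration, not the paper's configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_vgg16_classifier(input_shape=(224, 224, 3), n_classes=2):
    """VGG16 backbone (13 conv + 5 pooling layers) with a 3-layer fully connected head."""
    base = VGG16(include_top=False, weights=None, input_shape=input_shape)
    return models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
```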
c. Neural discriminative dimensionality reduction (NDDR)-CNN
NDDR-CNN was proposed by Gao et al. in 2019. It is a general-purpose multi-task CNN learning framework whose NDDR module automatically fuses features from different layers of different tasks; no hand-crafted design is required, so it is plug-and-play.
(1)
NDDR layer. Used for multi-task feature fusion and feature dimensionality reduction. When features from different layers of multiple tasks enter the NDDR layer, they are first concatenated along the last (channel) dimension and then convolved separately for each task. After the convolution, the resulting features are fed back into each task’s original network for the subsequent convolution operations.
(2)
Shortcuts. To prevent the gradients in the lower layers from vanishing, the shortcut module passes the gradient from the last layer directly to the lower layers. Each task branch receives features from the NDDR layers multiple times; the shortcut layer of each task re-splices the NDDR features received by that task to match the last NDDR feature and then concatenates them together.
Experiments show that the multi-task framework with NDDR achieves a certain degree of improvement over other frameworks. In addition, many details, including the initialization of the NDDR-layer weights and the choice of learning rate, affect how well the network learns and were handled carefully.
The network structure of NDDR-CNN is shown in Figure 5. It uses cascaded NDDR layers to achieve multi-task learning. In this study, the input to this network structure is a fusion of features, combining spectral features and image features.
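To make the fusion idea concrete, the sketch below shows one NDDR-style fusion step in Keras: features from the two branches are concatenated along the channel axis and reduced back per branch with 1 × 1 convolutions. This is an illustrative reading of the NDDR module, not the authors' implementation; the channel widths and the assumption that both branches produce feature maps of the same spatial size are placeholders.

```python
from tensorflow.keras import layers

def nddr_fuse(spectral_feat, image_feat, out_channels_a, out_channels_b, name):
    """One NDDR-style fusion step between a spectral branch and an image branch."""
    # concatenate the two branches' feature maps along the channel axis
    merged = layers.Concatenate(axis=-1, name=name + "_concat")([spectral_feat, image_feat])
    # 1x1 convolutions reduce the merged features back to each branch's width,
    # so each branch can continue its own convolutions
    branch_a = layers.Conv2D(out_channels_a, 1, activation="relu", name=name + "_a")(merged)
    branch_b = layers.Conv2D(out_channels_b, 1, activation="relu", name=name + "_b")(merged)
    return branch_a, branch_b
```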

2.5. Data Processing

This study was implemented in Python 3.8, and the data were processed in PyCharm with TensorFlow 2.1.0 and Keras 2.3.1. The data processing pipeline is shown in Figure 6.
First, hyperspectral images were obtained of both healthy roses and roses inoculated with black spot. Secondly, image calibration was achieved by subtracting dark reference images acquired in complete darkness. The third step was to extract the regions of interest (ROIs) from the hyperspectral images and extract the spectra of the ROIs, as shown in Figure 7a. The average spectrum of the sample points in each ROI was used to represent the spectrum of that ROI. The image data are also affected by the background and noise of the petri dish, so the images were preprocessed by cropping, median-filter denoising and binarization. The processing flow and results are shown in Figure 7b. A 3 × 3 median filter was chosen for denoising because it detects more noise features and its denoising effect is better. For each waveband, 360 spectral samples obtained from the ROIs of each hyperspectral image were used as the training set for CNN modeling, and 90 new samples were used as the testing set (Table 1). Modeling was then performed based on spectral features, image features and fused features, respectively. Spectral features represent the reflectance of the leaf at different wavelengths. Image features are the grayscale pictures of the samples, containing shape and texture information. When training the models, the spectrum, the image and a combination of the two were used as input, and the parameters of the different CNN models were adjusted during training to obtain the optimal detection model.
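A minimal sketch of the image pretreatment (cropping, 3 × 3 median filtering and binarization) using OpenCV is shown below; the crop coordinates and the use of Otsu thresholding for the binarization step are assumptions, since the paper does not specify them.

```python
import cv2
import numpy as np

def preprocess_band(band, crop_box=(50, 450, 100, 500)):
    """Crop one waveband image, apply a 3x3 median filter and Otsu binarization.

    crop_box = (row0, row1, col0, col1) -- placeholder coordinates, not the paper's values.
    """
    r0, r1, c0, c1 = crop_box
    img = band[r0:r1, c0:c1]
    # rescale reflectance to 8-bit so OpenCV filtering/thresholding can be applied
    img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    denoised = cv2.medianBlur(img8, 3)  # 3 x 3 median filter
    _, mask = cv2.threshold(denoised, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # leaf/background mask
    return denoised, mask
```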

2.6. Analysis

As can be seen from Figure 7a, the spectral reflectance of different samples differs in value, but the trends are consistent, with the same absorption peaks and valleys. After extracting the ROI, the average spectrum of the sample was taken as the research object. Figure 8a shows that the spectral reflectance of the infected sample is always lower than that of the healthy sample. At the same time, as the number of days since infection increased, that is, as the degree of infection increased, the reflectance continued to decrease; both cultivars showed this trend. The spectral curve also reflects the light absorption and reflection characteristics of rose leaves. At 580 nm there is an obvious absorption peak, caused by the nitrogen response of material in the leaf. The overall low reflectance below 800 nm is caused by strong absorption of light by leaf pigments (chlorophyll, anthocyanin, carotene, etc.). Within 800–1000 nm the spectral reflectance increases sharply and then remains high, which is caused by multiple scattering of light by the leaf cells. The cell structure of the infected sample is destroyed and its scattering ability reduced, so the reflectance in this range also decreases to a certain extent. Because of the experimental instruments and environment, there is considerable noise in the original spectral curves extracted from the ROIs, which degrades data quality and hinders subsequent analysis and modeling. Therefore, in this paper, multiple scattering correction (MSC) and standard normal variable (SNV) methods are combined for preprocessing to eliminate the effects of high-frequency noise and baseline offset. After pretreatment, the average spectral curves of the healthy and infected samples of the two rose cultivars can be distinguished more clearly, as shown in Figure 8b.
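MSC and SNV are standard chemometric corrections; the NumPy sketch below shows their usual definitions, with each row holding one spectrum. Using the mean spectrum as the MSC reference is an assumption, since the paper does not state its reference choice.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (rows = samples)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def msc(spectra, reference=None):
    """Multiplicative scatter correction against the mean spectrum (or a given reference)."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        # fit s = slope * ref + intercept by least squares, then remove the scatter terms
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected
```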

3. Results and Discussion

3.1. Optimizer Algorithm in the CNN Model

The optimizer algorithm is used to find the optimal solution of the model. In this study, the stochastic gradient descent (SGD) optimizer was used when training the fully connected layers of the CNN models. SGD uses redundant information effectively and performs well in the early iterations. A large body of theoretical and practical work shows that SGD converges well in most cases, and training is fast even on large datasets. The SGD optimizer was trained at different learning rates. The momentum, the weight on the historical gradient, was set to 0.9, the batch size to 32, and two iterations were performed first. The loss function is the other critical element in model training; it also needs to be defined and optimized, and the smaller the loss, the better the model. Taking the image feature in VGG16 as an example, five loss functions were compared; Table 2 shows the results.
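A minimal Keras sketch of this training setup is shown below, using the reported momentum, batch size and iteration count together with the loss function and learning rate selected later (Tables 2 and 3). The model builder and the data arrays are placeholder names carried over from the earlier sketches, not the authors' code.

```python
from tensorflow.keras.optimizers import SGD

# Reported settings: momentum 0.9, batch size 32, two epochs for the initial comparison;
# categorical_crossentropy and a learning rate of 0.0004 were selected afterwards.
model = build_vgg16_classifier()  # placeholder builder from the sketch above
model.compile(optimizer=SGD(learning_rate=0.0004, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(x_train, y_train,              # labels assumed one-hot encoded
                    batch_size=32, epochs=2,
                    validation_data=(x_test, y_test))
```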
All five loss functions reach an accuracy of over 80%, and, with the exception of the hinge loss, the training and test losses stay below 0.23, which is relatively good performance. Table 2 shows that with categorical_crossentropy the training and test accuracies are both higher than with the other loss functions. Therefore, this paper selects categorical_crossentropy as the loss function for the fully connected layers. Keeping two iterations, different learning rates were then compared, as shown in Table 3.
Table 3 shows that, as the learning rate increases, there is no obvious pattern in the other metrics. When the learning rate is set to 0.0004, the best overall results are obtained: the training accuracy is highest and both the training and test losses are lowest. The number of training iterations was then increased, and the results are shown in Figure 9.
Figure 9 shows that, as the number of iterations increases, the training loss and accuracy change steadily, while the test loss and accuracy fluctuate repeatedly. The best accuracy on the training set reaches 99.60% with a loss of 0.0154, and the accuracy on the test set is 97.20% with a loss of 0.0820. This shows that the adopted network can learn the sample features in greater detail and with higher accuracy.

3.2. The Test Result of CNN

Table 4 shows the classification accuracy of the three CNN models. With AlexNet, the detection accuracy for both rose cultivars was above 80%. After MSC and SNV preprocessing, the accuracies of the training and test sets improved to a certain extent, and the detection accuracy for 12–26 reached a maximum of 100%, indicating that denoising preprocessing can effectively improve the accuracy of the model. The table also shows that, for the spectral feature, 12–26 performs better after SNV treatment and 13–54 after MSC treatment, so different pretreatments were applied to the two cultivars in the subsequent modeling. VGG16 gives the detection results based on the image feature. After image preprocessing, the accuracies of the training and test sets also improved, with the maximum of 99.6% obtained for 12–26. Based on the image feature, the detection accuracy for the susceptible cultivar is slightly higher overall than for the resistant cultivar, probably because the fungus infects the susceptible cultivar faster and the resulting changes appear more clearly in the spectrum. Finally, following the processing steps for the two features, the two cultivars were pretreated differently and the NDDR detection model was established on the fused features. The detection accuracy based on the fused features reaches more than 95%, higher than the previous two methods; compared with AlexNet and VGG16, the fused features perform better than a single feature. The model performs better for two reasons. One is that the hyperspectral data based on spectral and image features contain rich information about the samples, so multiple kinds of features are reflected in the network input. The other is that the NDDR hierarchy is richer, so the model performs well in multi-task feature fusion and feature dimensionality reduction, extracting features effectively and preventing the lower-layer gradients from vanishing.
Therefore, the NDDR-CNN model based on the fused features gives the best detection result for rose black spot in this paper. The CNN model handles the high-dimensional, nonlinear nature of hyperspectral data well; it effectively improves the detection results and avoids overfitting and underfitting. This research achieves early, non-destructive detection of rose black spot and provides a basis for pathological detection in other plants. In the future, more work is needed on plant pathological detection based on hyperspectral images.

4. Conclusions

The development of the forestry and flower industries requires effective identification of plant diseases, but traditional machine learning recognition models based on a single feature have low accuracy, low efficiency and strong randomness. To address these problems, this study explored the early non-destructive detection of rose black spot based on CNN models. In establishing the detection models, hyperspectral data and image preprocessing methods were introduced, and the spectral and image features of the leaves of the two rose cultivars were extracted. Given the small number of samples available for black spot detection, CNNs were applied and three network structures were constructed. The effects of the loss function, the learning rate and different initialization methods on network performance were analyzed. Combined with the NDDR strategy, the accuracy of the detection model was improved, and the effectiveness of the preprocessing and feature extraction methods was verified. All three models performed well; the results show that the NDDR-CNN model based on fused features detected black spot in the two cultivars with accuracies of 100% (12–26) and 99.95% (13–54), agreeing most closely with the real results. Further work will combine the physical and chemical indexes and microstructure of rose leaves to establish correlations with hyperspectral images, explain the spectral changes from a biological point of view, and establish a more effective and accurate detection model.

Author Contributions

Conceptualization, J.M. and L.Y.; methodology, J.M. and L.P.; software, J.M.; validation, J.M. and L.P.; formal analysis, L.Y.; investigation, L.P. and L.Y.; resources, L.Y.; data curation, J.M., L.P. and L.Y.; writing—original draft preparation, J.M.; writing—review and editing, J.M., L.P. and L.Y.; visualization, J.M.; supervision, L.Y. and J.X.; project administration, L.P. and L.Y.; funding acquisition, L.Y. and J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China [31770769] and the Fundamental Research Funds for the Central Universities [NO.2015ZCQ-GX-03].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Palou, L.; Taberner, V.; Guardado, A.; Montesinos-Herrero, C. First report of Alternaria alternata causing postharvest black spot of persimmon in Spain. Australas. Plant Dis. Notes 2012, 7, 41–42.
2. Debener, T. The beast and the beauty: What do we know about black spot in roses? Crit. Rev. Plant Sci. 2019, 2019, 1–14.
3. Blechert, O.; Debener, T. Morphological characterization of the interaction between Diplocarpon rosae and various rose species. Plant Pathol. 2005, 54, 82–90.
4. Zurn, J.D.; Zlesak, D.; Holen, M.; Bradeen, J.M.; Hokanson, S.C.; Bassil, N.V. Mapping a Novel Black Spot Resistance Locus in the Climbing Rose Brite Eyes™ (‘RADbrite’). Front. Plant Sci. 2018, 9, 1730.
5. Nagasubramanian, K.; Jones, S.; Singh, A.K.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B. Plant disease identification using explainable 3D deep learning on hyperspectral images. Plant Methods 2019, 15, 98.
6. Ju, Y.; Li, C.; Shen, P.; Wan, N.; Han, W.; Pan, Y. Rapid and visual detection of Verticillium dahliae using recombinase polymerase amplification combined with lateral flow dipstick. Crop Prot. 2020, 136, 105226.
7. Shi, Y.; Jin, Z.; Meng, X.; Wang, L.; Xie, X.; Chai, A.; Li, B. Development and evaluation of a loop-mediated isothermal amplification assay for the rapid detection and identification of Pectobacterium carotovorum on celery in the field. Hortic. Plant J. 2020, 6, 313–320.
8. Bandamaravuri, K.; Nayak, A.; Bandamaravuri, A.; Samad, A. Simultaneous detection of downy mildew and powdery mildew pathogens on Cucumis sativus and other cucurbits using duplex-qPCR and HRM analysis. AMB Express 2020, 10, 1–11.
9. Muzhinji, N.; Dube, J.P.; de Haan, E.G.; Woodhall, J.W.; van der Waals, J.E. Development of a TaqMan PCR assay for specific detection and quantification of Pectobacterium brasiliense in potato tubers and soil. Eur. J. Plant Pathol. 2020, 158, 521–532.
10. Roscher, R.; Behmann, J.; Mahlein, A.-K.; Dupuis, J.; Kuhlmann, H.; Plümer, L. Detection of disease symptoms on hyperspectral 3D plant models. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 88–96.
11. Ban, S.T.; Tian, M.L.; Chang, Q.R. Estimating the severity of apple mosaic disease with hyperspectral images. Int. J. Agric. Biol. Eng. 2019, 12, 148–153.
12. Mahlein, A.K.; Kuska, M.T.; Behmann, J.; Polder, G.; Walter, A. Hyperspectral sensors and imaging technologies in phytopathology: State of the art. Annu. Rev. Phytopathol. 2018, 56, 535–558.
13. Mehrubeoglu, M.; Orlebeck, K.; Zemlan, M.J.; Autran, W. Detecting red blotch disease in grape leaves using hyperspectral imaging. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXII; SPIE: Baltimore, MD, USA, 2016.
14. Ahmadi, P.; Muharam, F.M.; Ahmad, K.; Mansor, S.; Abu Seman, I. Early detection of ganoderma basal stem rot of oil palms using artificial neural network spectral analysis. Plant Dis. 2017, 101, 1009–1016.
15. Abdulridha, J.; Ehsani, R.; Castro, A. Detection and differentiation between laurel wilt disease, phytophthora disease, and salinity damage using a hyperspectral sensing technique. Agriculture 2016, 6, 56.
16. Ropelewska, E.; Zapotoczny, P. Classification of Fusarium-infected and healthy wheat kernels based on features from hyperspectral images and flatbed scanner images: A comparative analysis. Eur. Food Res. Technol. 2018, 2018, 1453–1462.
17. Fajardo, J.U.; Andrade, O.B.; Bonilla, R.C.; Cevallos-Cevallos, J.; Mariduena-Zavala, M.; Donoso, D.O.; Villardón, J.L.V. Early detection of black Sigatoka in banana leaves using hyperspectral images. Appl. Plant Sci. 2020, 8, 8.
18. Wahabzada, M.; Mahlein, A.K.; Bauckhage, C.; Steiner, U.; Oerke, E.-C.; Kersting, K. Metro maps of plant disease dynamics—Automated mining of differences using hyperspectral images. PLoS ONE 2015, 10, e0116902.
19. Hariharan, J.; Fuller, J.; Ampatzidis, Y.; Abdulridha, J.; Lerwill, A. Finite difference analysis and bivariate correlation of hyperspectral data for detecting laurel wilt disease and nutritional deficiency in avocado. Remote Sens. 2019, 11, 1748.
20. Zhang, D.; Lin, F.; Huang, Y.; Zhang, L. Detection of wheat powdery mildew by differentiating background factors using hyperspectral imaging. Int. J. Agric. Biol. 2016, 18, 747–756.
21. Xie, C.Q.; Yang, C.; He, Y. Hyperspectral imaging for classification of healthy and gray mold diseased tomato leaves with different infection severities. Comput. Electron. Agric. 2017, 135, 154–162.
22. Zhang, J.Y.; Chen, J.C.; Fu, X.P.; Ye, Y.F.; Fu, G.; Hong, R.X. Hyperspectral imaging detection of cercospora leaf spot of muskmelon. Spectrosc. Spectr. Anal. 2019, 10, 3184–3188.
23. Chen, T.; Yang, W.; Zhang, H.; Zhu, B.; Zeng, R.; Wang, X.; Wang, S.; Wang, L.; Qi, H.; Lan, Y.; et al. Early detection of bacterial wilt in peanut plants through leaf-level hyperspectral and unmanned aerial vehicle data. Comput. Electron. Agric. 2020, 177, 105708.
24. Garhwal, A.S.; Pullanagari, R.R.; Li, M.; Reis, M.M.; Archer, R. Hyperspectral imaging for identification of Zebra Chip disease in potatoes. Biosyst. Eng. 2020, 197, 306–317.
25. Park, S.H.; Hong, Y.; Shuaibu, M.; Kim, S.; Lee, W.S. Detection of apple marssonina blotch with PLSR, PCA, and LDA using outdoor hyperspectral imaging. Spectrosc. Spectr. Anal. 2020, 40, 319–324.
26. Fallon, B.; Yang, A.; Lapadat, C.; Armour, I.; Juzwik, J.; Montgomery, R.A.; Cavender-Bares, J. Spectral differentiation of oak wilt from foliar fungal disease and drought is correlated with physiological changes. Tree Physiol. 2020, 40, 377–390.
27. Bohnenkamp, D.; Kuska, M.T.; Mahlein, A.K.; Behmann, J. Hyperspectral signal decomposition and symptom detection of wheat rust disease at the leaf scale using pure fungal spore spectra as reference. Plant Pathol. 2019, 68, 1188–1195.
28. Zhang, G.; Xu, T.; Tian, Y.; Xu, H.; Song, J.; Lan, Y. Assessment of rice leaf blast severity using hyperspectral imaging during late vegetative growth. Australas. Plant Pathol. 2020, 49, 1–8.
29. Pan, T.-T.; Chyngyz, E.; Sun, D.-W.; Paliwal, J.; Pu, H. Pathogenetic process monitoring and early detection of pear black spot disease caused by Alternaria alternata using hyperspectral imaging. Postharvest Biol. Technol. 2019, 154, 96–104.
30. Dai, L.R.; Zhang, S.L. Deep speech signal and information processing: Research progress and prospects. Data Acquis. Process. 2014, 29, 171–179.
Figure 1. Hyperspectral imaging system.
Figure 2. Rose leaves infected with black spot fungus.
Figure 3. The architecture of AlexNet with dropout.
Figure 4. The basic structure of VGG16.
Figure 5. Neural discriminative dimensionality reduction (NDDR)-CNN architecture for black spot detection.
Figure 6. Schematic overview of the data procedures.
Figure 7. (a) Spectra extraction from ROIs; (b) noise removal from hyperspectral images.
Figure 8. (a) Raw mean spectra of the healthy and infected samples and (b) mean spectra preprocessed by multiple scattering correction (MSC) + standard normal variable (SNV).
Figure 9. (a) The training and test loss of different iterations and (b) the training and test accuracy of different iterations.
Table 1. Information regarding healthy and infected samples for modeling.

Variety | Treatment | Training Size | Testing Size
12–26 (Susceptible) | Healthy | 360 | 90
12–26 (Susceptible) | Infected | 360 | 90
12–26 (Susceptible) | Total | 720 | 180
13–54 (Resistant) | Healthy | 360 | 90
13–54 (Resistant) | Infected | 360 | 90
13–54 (Resistant) | Total | 720 | 180
Table 2. The loss and accuracy results of different loss functions.

Loss Function | Train Loss | Train Accuracy (%) | Test Loss | Test Accuracy (%)
categorical_crossentropy | 0.2231 | 95.63 | 0.2277 | 92.95
mean_squared_error | 0.0910 | 92.05 | 0.1107 | 89.50
mean_absolute_error | 0.1433 | 90.84 | 0.2122 | 85.75
mean_squared_logarithmic_error | 0.0464 | 87.12 | 0.0557 | 83.95
hinge | 0.6555 | 85.83 | 0.7040 | 80.10
Table 3. The loss and accuracy results of different learning rates.

Learning Rate | Train Loss | Train Accuracy (%) | Test Loss | Test Accuracy (%)
0.00009 | 0.2994 | 86.50 | 0.3269 | 83.75
0.0001 | 0.2523 | 90.50 | 0.2810 | 87.10
0.0002 | 0.2485 | 88.92 | 0.2903 | 87.30
0.0003 | 0.2766 | 87.87 | 0.2854 | 99.05
0.0004 | 0.2231 | 98.63 | 0.2277 | 97.75
0.0005 | 0.2599 | 92.68 | 0.2429 | 92.40
0.0006 | 0.2352 | 96.87 | 0.3297 | 90.95
0.001 | 0.2633 | 98.35 | 0.2528 | 93.30
0.01 | 0.6940 | 79.53 | 0.6932 | 80.00
Table 4. The classification accuracy of the three models to detect black spot.

AlexNet
Variety | Data Set | Train Accuracy (%) | Test Accuracy (%)
12–26 (Susceptible) | Raw | 92.36 | 89.58
12–26 (Susceptible) | Raw + MSC | 96.53 | 95.83
12–26 (Susceptible) | Raw + SNV | 100 | 97.92
13–54 (Resistant) | Raw | 87.50 | 83.33
13–54 (Resistant) | Raw + MSC | 95.83 | 93.75
13–54 (Resistant) | Raw + SNV | 94.44 | 91.67

VGG16
Variety | Data Set | Train Accuracy (%) | Test Accuracy (%)
12–26 (Susceptible) | Raw | 97.40 | 90.95
12–26 (Susceptible) | Raw + preprocessing | 99.60 | 97.20
13–54 (Resistant) | Raw | 97.10 | 93.53
13–54 (Resistant) | Raw + preprocessing | 98.80 | 97.12

NDDR-CNN
Variety | Data Set | Train Accuracy (%) | Test Accuracy (%)
12–26 (Susceptible) | Raw | 98.50 | 98.95
12–26 (Susceptible) | Raw + SNV + preprocessing | 100.00 | 99.63
13–54 (Resistant) | Raw | 97.87 | 96.57
13–54 (Resistant) | Raw + MSC + preprocessing | 99.95 | 99.10
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Ma, J.; Pang, L.; Yan, L.; Xiao, J. Detection of Black Spot of Rose Based on Hyperspectral Imaging and Convolutional Neural Network. AgriEngineering 2020, 2, 556-567. https://doi.org/10.3390/agriengineering2040037