Article

A Neural Network Method for Classification of Sunlit and Shaded Components of Wheat Canopies in the Field Using High-Resolution Hyperspectral Imagery

by Pouria Sadeghi-Tehran *, Nicolas Virlet and Malcolm J. Hawkesford
Department of Plant Sciences, Rothamsted Research, Harpenden AL5 2JQ, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(5), 898; https://doi.org/10.3390/rs13050898
Submission received: 8 February 2021 / Revised: 22 February 2021 / Accepted: 23 February 2021 / Published: 27 February 2021
(This article belongs to the Special Issue Precision Agriculture Using Hyperspectral Images)

Abstract
(1) Background: Information-rich hyperspectral sensing, together with robust image analysis, is providing new research pathways in plant phenotyping. This combination facilitates the acquisition of spectral signatures of individual plant organs as well as providing detailed information about the physiological status of plants. Despite the advances in hyperspectral technology in field-based plant phenotyping, little is known about the characteristic spectral signatures of shaded and sunlit components in wheat canopies. Non-imaging hyperspectral sensors cannot provide spatial information; thus, they are not able to distinguish the spectral reflectance differences between canopy components. On the other hand, the rapid development of high-resolution imaging spectroscopy sensors opens new opportunities to investigate the reflectance spectra of individual plant organs, which leads to an understanding of canopy biophysical and chemical characteristics. (2) Method: This study reports the development of a computer vision pipeline to analyze ground-acquired imaging spectrometry with high spatial and spectral resolutions for plant phenotyping. The work focuses on the critical steps in the image analysis pipeline, from pre-processing to the classification of hyperspectral images. In this paper, two convolutional neural networks (CNNs) are employed to automatically map wheat canopy components in shaded and sunlit regions and to determine their specific spectral signatures. The first method uses pixel vectors of the full spectral features as inputs to the CNN model, and the second method integrates the dimension reduction technique known as linear discriminant analysis (LDA) with the CNN to increase the feature discrimination and improve computational efficiency. (3) Results: The proposed technique alleviates the limitations and lack of separability inherent in existing pre-defined hyperspectral classification methods. It optimizes the use of hyperspectral imaging and ensures that the data provide information about the spectral characteristics of the targeted plant organs rather than the background. We demonstrated that high-resolution hyperspectral imagery, along with the proposed CNN model, can be a powerful tool for characterizing sunlit and shaded components of wheat canopies in the field. The presented method will provide significant advances in the determination and relevance of spectral properties of shaded and sunlit canopy components under natural light conditions.

Graphical Abstract

1. Introduction

Hyperspectral imaging (HSI) has been a breakthrough for remote sensing applications [1,2,3,4]. It combines imaging and spectroscopy to attain spatial and spectral information simultaneously and non-invasively, forming a three-dimensional data cube. HSI provides a vast source of information by sampling the reflective portion of the electromagnetic spectrum over a wide range, from the visible region to the short-wave infrared region. These optical datasets, known as hypercubes, comprise two spatial dimensions and one spectral dimension. Each plane of the hypercube is a grayscale image corresponding to a single wavelength, with each pixel recording the radiance intensity reflected by the observed scene. Thus, each pixel of the hypercube contains the spectral signature of the underlying object. Since the spatial information is available, the source of each spectrum can be located, which makes it possible to investigate the light interactions with the material surface, vegetation, plant components, etc.
In the past decades, a surge of interest in hyperspectral imaging has been seen in the life sciences, with applications in fields as diverse as food quality and control [5], pharmaceuticals [6], healthcare [7] and agriculture [1,8,9]. It has been applied in precision agriculture for weed detection [10], plant disease and stress detection [2], and estimation of plant water and nitrogen content [11]. Hyperspectral technologies are becoming one of the most promising techniques to assess functional plant traits [12] in plant phenotyping [8,9]. Despite the advantages of using HSI in these areas of research, hyperspectral imaging still faces various tradeoffs and is not exempt from issues and drawbacks. For instance, spectral reflectance captured by HSI at the canopy scale is more complex and influenced by multiple sources of variability, such as factors associated with plant architecture and geometry, soil background, and leaf scattering properties [13,14]. The acquisition of HSI in uncontrolled environments produces additional challenges, including rapidly varying light exposure and the influence of wind turbulence [15], especially for push-broom cameras. Also, the high dimensionality of the spectral bands and the high spatial resolution pose serious challenges for the quantitative analysis of the data. Although high-dimensional features may provide some advantages for more accurate classification, they may cause algorithmic instability, i.e., the Hughes phenomenon [16], which negatively impacts the accuracy and efficiency of data analysis models.
Computer vision is a fundamental step in extracting quantitative and qualitative information from hyperspectral imaging. The image analysis techniques range from the straightforward extraction of average spectra from entire images (which is equivalent to the use of non-imaging spectrometers) to segmenting plants of interest. One aim of segmentation is to eliminate complex and varying backgrounds (e.g., soil, rock, etc.) within the sensor field of view that do not correspond to the object(s) of interest, while retaining spatial information about the patterns of spectral variation. HSI segmentation methods can be grouped into three main categories: threshold-based methods, model-based methods and feature-based methods. In threshold-based techniques, a fixed threshold is applied to a selected vegetation index, such as the normalized difference vegetation index (NDVI) [17] or the Photochemical Reflectance Index (PRI) [18], to segment residual background or target plants. These methods are very sensitive to the choice of threshold, which tends to be subjective or requires a trial-and-error approach. Model-based methods, on the other hand, use prior information such as geometric features of objects and digital surface model data to reconstruct 3D models. Due to the complexity associated with 3D models, model-based methods are computationally slow and difficult to implement in practical applications [18,19]. Moreover, 3D models are often not very accurate due to misalignment between the 3D data and the hyperspectral images, as they are obtained from different sources and geo-referenced independently. The feature-based methods mainly consist of spectral classification and spatial classification. Spectral classification methods rely on the spectral signature of each pixel in hyperspectral images, whereas spatial classification techniques only employ the spatial information and are limited to one spectral band, without fully exploiting the spectral information in HSI. The difficulty associated with these models involves algorithmic instability [20] due to the high dimensionality of using an entire spectral signature.
In the case of vegetation mapping, the identification of shaded and sunlit (non-shadowed) regions has also become an important part of remote sensing. Vegetation canopies represent complex dynamic spaces in which light is absorbed or transmitted by leaves; thus, shadows might have a considerable impact on the prediction of biochemical or physiological status [21,22]. Under natural conditions, shadows may occur due to the canopy structure, known as self-shadows, or when a fraction of direct solar illumination is blocked by objects present in the scene (e.g., the observing platform in our case), known as cast-shadows. Shadows may cause a reduction or total loss of information in a hyperspectral image, which leads to the corruption of biophysical parameters derived from pixel values, such as vegetation indices [23,24].
Several studies have investigated the impact of shadow on plant predictors, but they focused mainly on forestry applications, with an emphasis on the effect of the shaded fraction (self-shadows) on the PRI for biomass, light use efficiency and photosynthesis [25,26], and on orchards, where analysis was restricted to pure vegetation pixels [27,28,29]. Camino et al. [30] investigated the effect of using pure vegetation pixels or mixed pixels from tree crowns with HSI and thermal imaging to estimate water stress indicators. Maimaitiyiming et al. [31] developed a weighted index to account for the respective weights of sunlit and shaded pixels in the calculation of the sun-induced fluorescence index and the crop water stress index. However, to date, little has been done in cereal crops to study the impact of shadow on vegetation indices and their importance as predictors of leaf area index, vegetative biomass, chlorophyll and nitrogen content, and grain yield. Most of the studies are based on non-imaging spectrometers, which cannot provide spatial information and do not allow investigation of the effect of shade on the aforementioned yield components. However, the rapid development of high-resolution imaging spectroscopy sensors mounted on close-range platforms opens new opportunities to investigate the effect of shade on individual plant organs, such as leaves and reproductive organs. Yang et al. [32] investigated the effect of shaded areas on PRI to evaluate water stress status in winter wheat. In rice, Zhou et al. [33] investigated the effect of shade on chlorophyll content prediction; a threshold-based method was proposed to segment panicles from vegetation in both sunlit and shaded areas.
Spectral-based classification techniques utilize spectral features to identify objects in HSI. These techniques can be grouped into spectral matching and statistical characteristics categories. In spectral matching methods, the discrimination of targets such as plant variety is based on comparing the similarity of a given spectrum with a reference spectrum. Based on the fine spectral information of HSI, methods such as spectral information divergence (SID) [34] and spectral angle mapper (SAM) [35] do not require complex analysis and dimension reduction; however, they rely heavily on reference spectral data. On the other hand, the classification methods based on statistical characteristics, such as support vector machines (SVM) and logistic regression classifiers, offer state-of-the-art performance in hyperspectral image classification [36,37] by finding the optimal decision boundaries among different classes. However, the effectiveness of such classifiers depends on the selection of some critical hyperparameters to control the learning process, which defines the classification model. There is a growing interest in applying convolutional neural networks (CNNs) as promising tools for computer vision analysis. CNNs offer powerful visual analysis capabilities that have outperformed traditional computer vision techniques [38,39]; however, the complexity of the HSI data structure and the lack of available training samples pose a challenge in employing CNN models for HSI. Recently, CNN techniques have made breakthroughs in processing hyperspectral images [40,41,42]. Some models used the spectral domain directly, whereas other methods took the spatial features of hyperspectral images into account [43]. Nonetheless, both approaches have shown promising performance in classifying hyperspectral imagery.
In this study, we introduce an image analysis pipeline along with a 1D-CNN model to automatically characterize the spectral variation of wheat leaves and spikes in shadowed and non-shadowed regions. To fulfil these objectives, two CNN models are presented for feature extraction and spectral-based HSI classification of shaded and sunlit components at the canopy level. In the first method, we incorporate pixel vectors of the full spectral features as inputs to the CNN model. The second model integrates the dimension reduction technique known as linear discriminant analysis (LDA) with the CNN to increase the feature discrimination and improve computational efficiency. Finally, the performance of the CNN-based techniques is evaluated from different perspectives, such as classification accuracy and computational time.

2. Materials and Methods

In this study, several image analysis steps are developed to extract relevant information from the HSI data. The spectral properties are extracted at different growth stages to evaluate spectral variations of shaded and sunlit canopy components. The entire pre-processing and image analysis pipeline is coded in a Python environment using the PyTorch [44] and OpenCV [45] libraries. Together, these steps create an end-to-end open-source pipeline for the analysis of HSI data of multiple crop cultivars at the plot level.

2.1. Imaging Setup and Data Acquisition

The hyperspectral images at the canopy level were collected using a Hyperspec® Inspector™ VNIR camera (Headwall Photonics). The VNIR camera is a push-broom imaging system that collects reflected light through an imaging slit. One row of spatial pixels is collected per frame as motion occurs, with each pixel containing the spectral data [46]. The target is scanned line by line, and spatial images are formed by simultaneously recording the spectral information of pixels distributed along a scan line (across-track direction) while the mirrors move horizontally. The sensor collects data within the 400 to 1000 nm region of the electromagnetic spectrum with a 0.7 nm step and an FWHM (full width at half maximum) image slit of 2.5 nm. This results in a hyperspectral data cube (hypercube) of 925 spectral bands with a dynamic range of 16 bits. The hyperspectral images were collected under natural light conditions using the fully automated LemnaTec Field Scanalyzer platform (Table 1). The data were collected at different growth stages according to the scale used by the AHDB wheat growth guide (AHDB Wheat Growth Guide, 2018 [47]): flag leaves fully emerged (GS39), advanced heading time (GS57/59), 7 days after anthesis, and 22 days after anthesis (corresponding to the early/mid stage of the senescence period). The VNIR camera (Figure 1A) was set up with two spatial configurations. The first configuration acquired images at a resolution of 533 × 667 pixels with 925 bands, and the second configuration collected images at a resolution of 1600 × 1846 pixels with 925 bands. In both settings, the exposure time was fixed manually to adapt to brightness variations between scans.
The experiment was conducted in 2018–2019 at the Field Scanalyzer platform [48] located at Rothamsted Research, UK (51°48′34.56′′N, 0°21′22.68′′W). On 25 October 2018, four commercial wheat cultivars (Triticum aestivum L. cv. Crusoe, Hereward, Istabraq and Maris Widgeon) were sown in three blocks according to a split-plot design, for a total of 72 plots of 3 m × 1 m with a planting density of 350 seeds/m2. Nitrogen (N) treatments were applied as ammonium nitrate in two splits during the spring, at total rates of 50 kg N ha−1 (N1), 100 kg N ha−1 (N2), 150 kg N ha−1 (N3), 200 kg N ha−1 (N4), 275 kg N ha−1 (N5) and 350 kg N ha−1 (N6). A first split of 50 kg N ha−1 was applied on 8 March 2019, and the remaining nitrogen was applied on 10 April 2019. The experiment was managed according to local agronomic practices.

2.2. Pre-Processing of Raw Hyperspectral Images

Prior to any analysis, pre-processing of the raw hypercube is mandatory to reduce artefacts generated during measurement and to normalize spectra for ambient illumination. In this study, the pre-processing steps involve (i) removing the effect of the illumination system, (ii) down-sampling by integrating/averaging hypercube images with similar wavelengths, and (iii) reducing noise:
Reflectance factors: Imaging under natural light conditions results in the additional challenge of rapidly varying light exposure. Thus, radiometric correction of the data is required to eliminate the spectral non-uniformity of the illumination and the influence of the dark current. In our experiment, the calibration was performed based on the flat field calibration method using a white reference panel (Zenith Lite™ Ultralight Targets 95%R, Sphereoptics®) mounted on a tripod (Figure 1B). The reference panel was scanned after every seven plots (~15 min intervals). Dark reference images were collected during the night without any light source. Then, for each scan, the reflectance (R) at each wavelength (w) was calculated using the closest reference scan in terms of time and/or light intensity (Photosynthetic Active Radiation, collected simultaneously with the scan), following Equation (1) [49,50].
$$R_w = \frac{DN_w^{raw\_sample} - DN_w^{DR}}{DN_w^{WR} - DN_w^{DR}} \times Target\ coefficient \qquad (1)$$

where $DN_w^{raw\_sample}$ is the digital number of a pixel sample, and $DN_w^{DR}$ and $DN_w^{WR}$ are the average values of the dark and white reference images at the same wavelength ($w$) as the sample image.
The target coefficients are correction factors associated with each wavelength. The values are obtained after calibration with an integrating sphere by the manufacturer.
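As an illustration, the following minimal NumPy sketch applies Equation (1) to a raw hypercube. The array layout, the function name, and the epsilon guard are assumptions for illustration, not the published pipeline code.

```python
import numpy as np

def calibrate_reflectance(raw, dark, white, target_coeff):
    """Flat-field calibration following Equation (1).

    A minimal sketch, assuming `raw`, `dark` and `white` are hypercubes
    of shape (rows, cols, bands) and `target_coeff` is the per-band
    correction vector supplied by the manufacturer.
    """
    # Average the dark and white reference cubes per band.
    dark_ref = dark.mean(axis=(0, 1))    # shape: (bands,)
    white_ref = white.mean(axis=(0, 1))  # shape: (bands,)

    # Equation (1): (DN_raw - DN_dark) / (DN_white - DN_dark) * coefficient
    eps = 1e-9  # guard against division by zero in dead bands (assumption)
    return (raw - dark_ref) / (white_ref - dark_ref + eps) * target_coeff
```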
Down-sampling: In order to match the spectral resolution of the reference target, spectra were down-sampled with an averaging window with a spectral width of 1 nm. As a result, the number of spectral bands was reduced to 600. This also reduces the computational complexity and the instrumentation noise from the spectrometer.
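One possible down-sampling step is sketched below: the native 0.7 nm bands are grouped into 1 nm bins and averaged. The binning strategy is an assumption for illustration; the paper specifies only the width of the averaging window.

```python
import numpy as np

def downsample_bands(cube, wavelengths, step=1.0):
    """Average native bands into `step`-nm bins (illustrative sketch)."""
    bins = np.arange(wavelengths.min(), wavelengths.max() + step, step)
    idx = np.digitize(wavelengths, bins)
    # Average every group of native bands that falls into the same bin.
    return np.stack([cube[..., idx == b].mean(axis=-1)
                     for b in np.unique(idx)], axis=-1)
```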
Smoothing of spectral data: To correct baseline drifts in NIR spectra and aid denoising, a low-pass filter is frequently employed in HSI. Denoising aims to eliminate spikes and to smooth the spectral curves of each pixel while retaining the variations across different wavelengths. In a low-pass filter, noisy spectral values are replaced by the local average of neighboring data points. Since nearby spectral values measure very nearly the same underlying value, averaging can reduce the level of noise with minimal bias. The simplest low-pass filter computes a moving average over a fixed number of spectral data points. However, the moving average filter is particularly aggressive and damaging when the filter passes through peaks that are narrower than the filter width. In this work, we used the Savitzky–Golay filter [51], an exceptionally effective and computationally fast smoothing filter. The Savitzky–Golay filter performs a least-squares fit of a small set of consecutive spectral data points to a polynomial and takes the central point of the fitted polynomial curve as the new smoothed spectral data point. It should be noted that too small a window will introduce large artifacts in the corrected spectra and reduce the signal-to-noise ratio. On the other hand, the larger the size of the window, the smaller the distinction between full and moving window pre-processing [52]. In our case, a filter width w = 11 and polynomial degree d = 2 were the optimal values for this study.
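SciPy provides this filter directly; a one-line sketch with the window width and polynomial degree reported above (the variable name `cube` is illustrative):

```python
from scipy.signal import savgol_filter

# Smooth each pixel spectrum along the band axis with the parameters
# found optimal in this study: window width w = 11, polynomial degree d = 2.
smoothed = savgol_filter(cube, window_length=11, polyorder=2, axis=-1)
```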

2.3. Training Dataset and Dimensionality Reduction

Upon completion of the pre-processing step, a supervised model is developed to classify crop canopy components into five classes: shaded leaves (SHL), shaded ears (SHE), sunlit leaves (SL), sunlit ears (SE), and background (BG). To build a supervised classification model, we first obtained a manually annotated dataset from shaded and sunlit wheat canopy components. Due to the sufficiently high spatial resolution of the hyperspectral images of the VNIR camera, we were able to build spectral libraries of leaves and ears in both shaded and sunlit areas derived from user-defined regions of interest (ROIs). ROIs were defined manually as rectangular areas based on visual identification in the HSI "pseudo" RGB data composed of the three bands 620 nm (Red), 535 nm (Green), and 445 nm (Blue). The annotated data were collected from 23 hyperspectral images of wheat cultivars at different crop growth stages. The total numbers of annotated pixels for the five classes SL, SE, SHL, SHE, and BG were 119,447, 164,223, 11,644, 4361, and 227,232, respectively. Each annotated patch is represented by w × h × λ, where w × h is the width and height of the rectangle and λ is the number of wavelengths (λ = 600). It should be noted that cast shadows did not occur during the data collection on 6 and 19 June; thus, the shadow samples (SHL and SHE) were derived only from the data collected on 21 May and 4 July.
The high resolution of the hyperspectral imagery employed in this study poses a significant challenge to the quantitative analysis of the hypercube. It can significantly increase the computational burden and storage space, leading to longer data processing times. As shown in similar studies [53], it is desirable to reduce spectral redundancy and select the most characteristic compact feature set. First, the hypercube (x × y × λ) is rearranged into a 2-D spectral matrix of dimension N × λ, where N = x × y is the total number of pixels; thus, each row represents the reflectance values from all bands at one pixel. Such a vectorization process allows us to employ second-order analysis procedures for reducing the data complexity. We applied a linear discriminant analysis (LDA) to find a linear combination of features that characterizes our pre-selected classes (SHE, SHL, SL, SE, BG). As opposed to principal component analysis (PCA), which takes into account only the spectral data and its variance regardless of grouping, LDA explicitly makes use of the labels to maximize the distance between the five classes (SHL, SHE, SL, SE, BG). Figure 2 illustrates scatter plots of PCA and LDA to demonstrate the grouping, similarities and differences among classes. Figure 2A,B visualize the data in the first two discriminant coordinates found by LDA and the first two principal components found by PCA, respectively. As shown in both figures, LDA performed better in separating the classes; the classes are not as clearly separated using the first two principal components found by PCA, even though together the first two principal components explain over 90% of the variance.
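The comparison in Figure 2 can be reproduced with scikit-learn; the sketch below assumes the annotated spectra have already been flattened to an (N, 600) matrix with one label per pixel (the variable names are illustrative):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA

# X_train: (N, 600) matrix of annotated pixel spectra;
# y_train: (N,) labels in {SHL, SHE, SL, SE, BG}.

# LDA is supervised: it projects onto directions that maximize
# between-class separation (at most n_classes - 1 = 4 components).
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X_train, y_train)

# PCA is unsupervised: it maximizes variance regardless of the labels,
# which is why the classes overlap more in Figure 2B.
X_pca = PCA(n_components=2).fit_transform(X_train)
```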

2.4. The CNNs Framework for Spectral Classification

HSI classification is a fundamental step that provides primary information for the subsequent tasks. The hierarchical architecture of CNNs can be an effective way to learn spectral signatures for HSI classification. In this study, we employed a 1D-CNN to model the interclass appearance and shape variations of the spectral channels obtained by the VNIR camera, improving the power to accurately differentiate wheat canopy components. The proposed CNN technique extracts effective features from the spectrum with the help of class-specific information, which is provided by the training samples.
The CNN framework was constructed by stacking several convolutional layers and max pooling layers to form a deep architecture (Figure 3). Two CNN models were tested for characterizing spectral signatures. In the first method (hereafter referred to as CNN-RAW), the full spectral bands (λ = 600) were selected as the input to the network, whereas the second method (hereafter referred to as CNN-LDA) integrated linear discriminant analysis (LDA) with the 1D-CNN to increase the feature discrimination and improve computational efficiency. Both networks contain two convolutional layers, two pooling layers, a fully connected layer, and the output layer, which assigns each pixel vector the label of a canopy component (SHL, SHE, SL, SE, or BG). Also, to address the problem of overfitting, regularization strategies were added to the network, including Rectified Linear Units (ReLU) and dropout, to achieve better model generalization. The input of each convolutional neuron is computed as
$$x_i^l = b_i^l + \sum_{k=1}^{N_{l-1}} \mathrm{conv1D}\!\left(w_{ki}^{l-1},\, s_k^{l-1}\right) \qquad (2)$$

where $x_i^l$ represents the input and $b_i^l$ the bias of the $i$th neuron at layer $l$, $s_k^{l-1}$ is the output of the $k$th neuron at layer $l-1$, and $w_{ki}^{l-1}$ is the kernel from the $k$th neuron at layer $l-1$ to the $i$th neuron at layer $l$. $\mathrm{conv1D}(\cdot,\cdot)$ performs 1D convolution without zero-padding.
Figure 3 illustrates the proposed 1D-CNN framework. ReLU is selected as the activation function to increase the non-linear representation capacity of the network. After the convolutional layers, ReLU layers and dropouts, the input pixel vector is converted into a feature vector. Then, the fully connected layer merges the features obtained in the previous layer. Finally, a logistic regression classifier fulfils the classification step. The logistic regression classifier uses softmax as its output-layer activation to ensure that the activations of the output units sum to 1, so that the output can be interpreted as a set of conditional probabilities. The logistic loss function is used to calculate the error between the predicted label and the ground truth, and the goal is to minimize this loss. We tracked the loss after each epoch; the number of epochs (full passes over the training data) was set to 15.
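A minimal PyTorch sketch of a spectral 1D-CNN in the spirit of CNN-RAW is given below. The filter counts, kernel sizes and dropout rate are illustrative assumptions, not the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    """A 1D-CNN sketch in the spirit of CNN-RAW: convolution + ReLU +
    max-pooling blocks, dropout, and a fully connected classifier.
    Layer sizes are illustrative assumptions."""

    def __init__(self, n_bands=600, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # infer the flattened feature size once
            n_flat = self.features(torch.zeros(1, 1, n_bands)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, n_bands) pixel vectors
        feats = self.features(x.unsqueeze(1))     # add a channel dimension
        return self.classifier(feats.flatten(1))  # class logits

model = SpectralCNN()
# CrossEntropyLoss combines softmax with the logistic loss described above.
loss_fn = nn.CrossEntropyLoss()
```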
Figure 4 and Figure 5 present the corresponding classification maps used to label canopy components with the proposed method (CNN-RAW). The classification result is a pseudo-color map in which each pixel vector of the spectral cube is assigned a unique label. In Figure 4, shadowed vegetation (SH-All), sunlit vegetation (S-All), and background (BG) pixels were segmented and assigned unique labels, whereas in Figure 5, the classification map is shown for the three classes of sunlit wheat ears (SE), sunlit leaves (SL) and background (BG). Both figures illustrate the feasibility of classifying HSI in field environments at the canopy scale with high accuracy. Finally, to capture the spectral properties of each canopy component, each assigned label is used as a binary mask over the hypercube to obtain an average reflectance value for each canopy component at all wavelengths (400–1000 nm).
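This masking step amounts to boolean indexing of the hypercube with the predicted label map; a sketch, with class codes assumed for illustration:

```python
import numpy as np

# `labels`: (rows, cols) classification map predicted by the CNN;
# `cube`:   (rows, cols, bands) calibrated hypercube.
# The integer class codes below are assumptions for illustration.
CLASSES = {0: "BG", 1: "SL", 2: "SE", 3: "SHL", 4: "SHE"}
mean_spectra = {name: cube[labels == code].mean(axis=0)  # (bands,) average
                for code, name in CLASSES.items()}
```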

3. Results

3.1. Classification Accuracy Assessment

To objectively assess the performance of HSI classification, we selected three widely used classification measurements: average accuracy (AA), F-score and recall (Equation (3)). The classification accuracy of the proposed methods was compared with conventional classification methods, namely stochastic gradient descent (SGD) and support vector machine (SVM) classifiers. It should be noted that the default parameters of the scikit-learn API were used for both models.
To validate the effectiveness of the presented classifiers, we conducted stratified k-fold cross-validation with 10 folds and three repeats. Table 2 summarizes the AA, F-score, and recall for each model, whereas Figure 6 depicts a box and whisker plot summarizing the distribution of accuracy scores. As shown in both Table 2 and Figure 6, the CNN model outperformed the SVM and SGD classifiers when using the RAW spectral features, with an average accuracy of 98.6%. On the other hand, SVM outperformed the other two methods when using LDA, with an average accuracy of 97.4%; the CNN and SGD classifiers came second and third, with average accuracies of 97.3% and 96.4%, respectively. Overall, the classifiers using the full spectrum (RAW) achieved higher HSI classification accuracy than those using the LDA dimensionality reduction.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}; \quad \mathrm{Precision} = \frac{TP}{TP + FP}; \quad \mathrm{Recall} = \frac{TP}{TP + FN}; \quad F\text{-}score = \frac{2 \times P \times R}{P + R} \qquad (3)$$
where TP: true positive; TN: true negative; FP: false positive; FN: false negative.
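The evaluation protocol maps directly onto scikit-learn; a sketch of the baseline comparison under the stated settings (scikit-learn defaults for both classifiers; `X`, `y` and the random seed are illustrative):

```python
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier

# Stratified 10-fold cross-validation with three repeats, as in Section 3.1.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
for name, clf in [("SVM", SVC()), ("SGD", SGDClassifier())]:
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```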

3.2. Computational Cost

In this section, the computational cost of the proposed methods is presented. As discussed in Section 3.1, the CNN-RAW method achieved a higher HSI classification accuracy; however, it is computationally expensive since it uses the full spectral features. The computational complexity of the CNN-RAW classification is O(λN), where λ is the number of wavelengths and N is the total number of pixels. On the other hand, the computational cost of the CNN-LDA method is O(mN), where m is the number of LDA components (m = 2), which is considerably lower than computing the full wavelength range of 400–1000 nm. The CNN-LDA method is also computationally faster than the CNN-RAW model. For this test, the average computational time was calculated over processing several HSI images with spatial resolutions of 533 × 667 and 600 bands. All the tests were performed on a PC with a 3.2 GHz Quad-Core Intel Core i5 and 16 GB of memory, running macOS 11. As expected, the introduction of the LDA dimensionality reduction method greatly improves the computational time: the CNN-LDA method is nearly twice as fast on average (5.9 s) as the CNN-RAW algorithm (11.12 s).

3.3. Comparison of the Reflectance Amplitude and Absorption Feature between Shaded and Sunlit Canopy Components

In this section, we present the spectral signatures of the five classes extracted with the CNN-RAW method. The classes include sunlit leaves and ears (SL and SE), shaded leaves and ears (SHL and SHE), and background (BG, Figure 7A). As the reflectance spectra from the shaded organs display low amplitudes, a normalization method known as continuum removal is applied to compare the shape of the absorption features (Figure 7B). This approach allows comparison of individual absorption features from a common baseline by enhancing differences in absorption strength while normalizing for absolute differences of the reflectance peaks in SHL, SHE, SL, SE, and BG.
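Continuum removal divides each spectrum by its upper convex hull so that absorption features are measured from a common baseline; a generic sketch (not the exact implementation used here):

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a spectrum by its upper convex hull. Assumes `wl` is
    sorted ascending. A generic sketch, not the paper's own code."""
    hull = []  # upper-hull vertices, built with a monotone-chain scan
    for x, y in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop the middle point if it falls below the chord to (x, y).
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((x, y))
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)  # piecewise-linear upper hull
    return refl / continuum           # continuum-removed reflectance (CRR)
```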
The reflectance signatures of the sunlit fractions, SL and SE, show higher reflectance values across the 400–1000 nm range than their shaded counterparts (SHE and SHL), which can be attributed to the lower irradiance of the shadowed classes and thus a lower reflected signal (Figure 7A). While the amplitude and the reflectance values of the shaded organs are lower than those of the sunlit counterparts, all classes display a similar spectral pattern, with lower reflectance in the visible domain (with a small peak in the green region), due to the light absorbed by chlorophyll, and higher reflectance in the NIR domain. This is confirmed in Figure 7B by the continuum-removed reflectance values (CRR), which show a drop in intensity at ~490 nm in all four vegetation classes, a peak around ~540 nm, followed by a continuous drop to 680 nm, before increasing sharply in the red-edge to reach a maximum in the NIR region. Interestingly, the raw reflectance of the sunlit ears and leaves displays the highest intensities in the 510 to 570 nm range, but this is not the case for the sunlit leaves with the CRR. Indeed, ears obtained the highest CRR intensities in the 510–570 nm range in both sunlit and shaded conditions. This may be attributed to the CRR maximizing the contrasting features of the reflectance spectra by normalizing them to a common baseline; in our case, the relative difference between the peak in the green region and the baseline is higher for shaded ears than for sunlit leaves.

3.4. Effects of Shadows in Vegetation Indices

The output of the CNN-RAW has been used to compute a set of vegetation indices for each of the five classes: background, sunlit leaves and ears, and shaded leaves and ears (Table 3, Figure 8). The aim here was to investigate the sensitivity of different VIs to the canopy components and how they are affected by shadows. The selection of VIs was inspired by the works in [33,54]. Figure 8A shows the pixel distributions of the sunlit and shaded fractions regardless of the organ. It clearly shows that indices such as DVI, EVI, MSAVI, MTVI, OSAVI, SARVI and TVI are able to discriminate the background from the vegetation as well as the sunlit fraction from the shaded fraction. All these indices display a similar pattern in pixel distribution, with low values for the background, intermediate values for the shaded fraction and the highest values for the sunlit fraction. Conversely, NDVI, G, MSR, PRI and VS display higher values for the shaded areas than for the sunlit areas, but their pixel distributions overlap too much to clearly discriminate between the two.
Figure 8B,C show the pixel distributions of the VIs for the organs, leaves and ears, under shaded and sunlit conditions, respectively. Only four indices are able to discriminate between ears and leaves within the shaded area: MSR, NDVI, PRI and VS (Figure 8B). For the sunlit area, MSAVI, MSR, NDVI, OSAVI, PRI, SARVI and VS show potential to discriminate the ears from the leaves. However, most of them show an overlap between the pixel distributions of the two organs. Only the PRI seems to clearly distinguish the ears from the leaves in sunlit conditions.
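Each index in Table 3 reduces to arithmetic on a few reflectance planes of the hypercube; for example, NDVI and PRI can be computed per pixel as below, where the `band` helper and the class masks are assumptions for illustration:

```python
import numpy as np

def band(cube, wl, target_nm):
    """Return the reflectance plane closest to `target_nm`
    (an illustrative helper, not from the published pipeline)."""
    return cube[..., np.argmin(np.abs(wl - target_nm))]

# NDVI and PRI from Table 3, computed per pixel over the hypercube.
ndvi = ((band(cube, wl, 800) - band(cube, wl, 680))
        / (band(cube, wl, 800) + band(cube, wl, 680)))
pri = ((band(cube, wl, 531) - band(cube, wl, 570))
       / (band(cube, wl, 531) + band(cube, wl, 570)))

# Compare index distributions under the CNN class masks; `mask_SE` and
# `mask_SL` are assumed boolean maps for sunlit ears and leaves.
print("PRI sunlit ears:", pri[mask_SE].mean(), "leaves:", pri[mask_SL].mean())
```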

4. Discussion

This study aimed to produce an accurate classification of wheat canopy components in shadowed and non-shadowed areas using high-resolution hyperspectral images. This was achieved by developing CNN-based models to automatically learn and evaluate the spectral characteristics of shaded and sunlit wheat canopy components. The analysis was carried out on high-resolution hyperspectral images acquired from the ground-based phenotyping platform, known as the Field Scanalyzer [48].
Two CNN-based techniques were presented. In the first technique (CNN-RAW), pixels from the full spectral bands are used as the input layer. In the second method (CNN-LDA), the dimensionality of the spectral data was first reduced using the LDA technique and then the CNN model was deployed. The performance of the CNN techniques was validated in terms of classification metrics and computational time. The accuracy of the proposed models was also compared against conventional classification methods (Section 3.1). As shown in Table 2, the proposed methods achieved over 98% (CNN-RAW) and 97% accuracy (CNN-LDA). Although CNN-LDA achieved slightly lower classification accuracy, it enhances the interpretability of the spectral information by replacing the original variables with a small group of new variables while preserving the original information. CNN-LDA also reduced the computational complexity and halved the processing time compared to the CNN-RAW method, as described in detail in Section 3.2.
Previous studies on HSI classification often focused on the non-shadowed portions of the canopy and neglected the importance of shaded regions [15,67,68]. This is partially due to the insufficient spatial resolution of HSI or the poor signal-to-noise ratio in the shaded pixels. In this study, we demonstrated that high-resolution hyperspectral imagery can be used to characterize shaded and sunlit components in wheat canopies. The presented method can be used as a powerful tool to extract the reflectance signal across the spectrum from individual wheat canopy components. The results showed that shaded and sunlit components display separate spectral signatures, which can be used to understand canopy biophysical and chemical characteristics. As shown in Figure 7A, the sunlit components (SE and SL) exhibited higher reflectance values than their shaded counterparts (SHL and SHE). In particular, the NIR reflectance of non-shadowed spikes and leaves was nearly three times higher than that of their shaded counterparts. The absorption features were also investigated via continuum-removed reflectance.
Finally, it was shown that shade had significant effects on the estimation of vegetation parameters at the canopy scale. Some vegetation indices exhibited more distinctive distributions between shaded and sunlit components (Figure 8). For instance, DVI, EVI, MSAVI, MTVI, and TVI in SH-All vs. S-All (Figure 8A), MSR and VS in SHL vs. SHE (Figure 8B), and finally PRI and VS in SL vs. SE (Figure 8C) showed separate distribution intervals between canopy components. This illustrates that the aforementioned VIs can play a role in discerning shaded and sunlit components as well. The results also show that VIs can be used to find key features that uniquely characterize shaded and sunlit pixels within canopies.

5. Conclusions

In this study, a one-dimensional convolutional neural network method was proposed to map shadowed and non-shadowed wheat canopy components from high-resolution hyperspectral imaging. The presented method has the unique capability to learn directly from the raw spectral signatures, avoids the need for designing handcrafted feature extraction, and can surpass traditional classification approaches. Moreover, the 1D-CNN has a relatively shallow architecture with a smaller number of hidden layers and neurons compared to state-of-the-art 2D-CNN techniques. Thus, it is much easier to train and offers minimal computational complexity, which makes it suitable for hand-held hyperspectral devices with limited computational power. However, the spectral-based 1D-CNN method does not take full advantage of the three-dimensional data cube characteristics. In future work, we aim to incorporate spatial information into the spectral-based feature extraction model to improve the classification performance of HSI in more complex scenes.
Furthermore, in terms of canopy component characterization, canopy physiological response to changing light conditions can cause additional complications to the assessment of component mapping. For instance, as discussed in another study [33], the spectral differences between shaded lower layer canopy components and sunlit upper layer counterparts likely depend not only on the illumination variations, but also on the non-uniform distribution of chlorophyll and nitrogen. Therefore, multi-angle viewing hyperspectral imaging is likely to be more effective than only vertically downward-facing sensors.

Author Contributions

P.S.-T. developed the method; N.V. planned and conducted the experiment; M.J.H. contributed to the revision of the manuscript and supervised the project; P.S.-T. and N.V. contributed to writing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

Rothamsted Research receives grant-aided support from the Biotechnology and Biological Sciences Research Council (BBSRC) and the project was directly funded by the Designing Future Wheat strategic program (BB/P016855/1).

Data Availability Statement

The materials used in this study can be accessed from the link: https://www.rothamsted.ac.uk/field-scanalyzer. All users need to verify their details before accessing the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stuart, M.B.; McGonigle, A.J.S.; Willmott, J.R. Hyperspectral Imaging in Environmental Monitoring: A Review of Recent Developments and Technological Advances in Compact Field Deployable Systems. Sensors 2019, 19, 3071.
  2. Lowe, A.; Harrison, N.; French, A.P. Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress. Plant Methods 2017, 13, 80.
  3. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sens. 2017, 9, 1110.
  4. Goetz, A.F.H. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16.
  5. Amigo, J.M.; Marti, I.; Gowen, A. Hyperspectral Imaging for Food Quality Analysis and Control; Elsevier: Amsterdam, The Netherlands, 2010.
  6. Amigo, J.M. Practical issues of hyperspectral imaging analysis of solid dosage forms. Anal. Bioanal. Chem. 2010, 398, 93–109.
  7. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901.
  8. Thorp, K.R.; Gore, M.A.; Andrade-Sanchez, P.; Carmo-Silva, A.E.; Welch, S.M.; White, J.W.; French, A.N. Proximal hyperspectral sensing and data analysis approaches for field-based plant phenomics. Comput. Electron. Agric. 2015, 118, 225–236.
  9. Agapiou, A.; Hadjimitsis, D.G.; Alexakis, D.D. Evaluation of Broadband and Narrowband Vegetation Indices for the Identification of Archaeological Crop Marks. Remote Sens. 2012, 4, 3892–3919.
  10. Okamoto, H.; Murata, T.; Kataoka, T.; Hata, S.-I. Plant classification for weed detection using hyperspectral imaging with wavelet analysis. Weed Biol. Manag. 2007, 7, 31–37.
  11. Vigneau, N.; Ecarnot, M.; Rabatel, G.; Roumet, P. Potential of field hyperspectral imaging as a non destructive method to assess leaf nitrogen content in Wheat. Field Crop. Res. 2011, 122, 25–31.
  12. Ustin, S.L.; Gamon, J.A. Remote sensing of plant functional types. New Phytol. 2010, 186, 795–816.
  13. Asaari, M.S.M.; Mishra, P.; Mertens, S.; Dhondt, S.; Inzé, D.; Wuyts, N.; Scheunders, P. Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform. ISPRS J. Photogramm. Remote Sens. 2018, 138, 121–138.
  14. Fu, P.; Meacham-Hensold, K.; Guan, K.; Wu, J.; Bernacchi, C. Estimating photosynthetic traits from reflectance spectra: A synthesis of spectral indices, numerical inversion, and partial least square regression. Plant Cell Environ. 2020, 43, 1241–1258.
  15. Williams, D.; Britten, A.; McCallum, S.; Jones, H.; Aitkenhead, M.; Karley, A.; Loades, K.; Prashar, A.; Graham, J. A method for automatic segmentation and splitting of hyperspectral images of raspberry plants collected in field conditions. Plant Methods 2017, 13, 74.
  16. Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63.
  17. Carlson, T.N.; Ripley, D.A. On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote Sens. Environ. 1997, 62, 241–252.
  18. Liu, X.; Hou, Z.; Shi, Z.; Bo, Y.; Cheng, J. A shadow identification method using vegetation indices derived from hyperspectral data. Int. J. Remote Sens. 2017, 38, 5357–5373.
  19. Luo, H.; Wang, L.; Shao, Z.; Li, D. Development of a multi-scale object-based shadow detection method for high spatial resolution image. Remote Sens. Lett. 2015, 6, 59–68.
  20. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28.
  21. Jay, S.; Maupas, F.; Bendoula, R.; Gorretta, N. Retrieving LAI, chlorophyll and nitrogen contents in sugar beet crops from multi-angular optical remote sensing: Comparison of vegetation indices and PROSAIL inversion for field phenotyping. Field Crop. Res. 2017, 210, 33–46.
  22. Gerard, F.F.; North, P.R.J. Analyzing the effect of structural variability and canopy gaps on forest BRDF using a geometric-optical model. Remote Sens. Environ. 1997, 62, 46–62.
  23. Leblon, B.; Gallant, L.; Granberg, H. Effects of shadowing types on ground-measured visible and near-infrared shadow reflectances. Remote Sens. Environ. 1996, 58, 322–328.
  24. Zhang, L.; Sun, X.; Wu, T.; Zhang, H. An Analysis of Shadow Effects on Spectral Vegetation Indexes Using a Ground-Based Imaging Spectrometer. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2188–2192.
  25. Hall, F.G.; Hilker, T.; Coops, N.C.; Lyapustin, A.; Huemmrich, K.F.; Middleton, E.; Margolis, H.; Drolet, G.; Black, T.A. Multi-angle remote sensing of forest light use efficiency by observing PRI variation with canopy shadow fraction. Remote Sens. Environ. 2008, 112, 3201–3211.
  26. Hilker, T.; Coops, N.C.; Hall, F.G.; Black, T.A.; Wulder, M.A.; Nesic, Z.; Krishnan, P. Separating physiologically and directionally induced changes in PRI using BRDF models. Remote Sens. Environ. 2008, 112, 2777–2788.
  27. Berni, J.A.J.; Zarco-Tejada, P.J.; Sepulcre-Cantó, G.; Fereres, E.; Villalobos, F. Mapping canopy conductance and CWSI in olive orchards using high resolution thermal remote sensing imagery. Remote Sens. Environ. 2009, 113, 2380–2388.
  28. Zarco-Tejada, P.J.; González-Dugo, V.; Berni, J.A.J. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 2012, 117, 322–337.
  29. Zarco-Tejada, P.J.; Guillén-Climent, M.L.; Hernández-Clemente, R.; Catalina, A.; González, M.R.; Martín, P. Estimating leaf carotenoid content in vineyards using high resolution hyperspectral imagery acquired from an unmanned aerial vehicle (UAV). Agric. For. Meteorol. 2013, 171, 281–294.
  30. Camino, C.; Zarco-Tejada, P.J.; Gonzalez-Dugo, V. Effects of Heterogeneity within Tree Crowns on Airborne-Quantified SIF and the CWSI as Indicators of Water Stress in the Context of Precision Agriculture. Remote Sens. 2018, 10, 604.
  31. Maimaitiyiming, M.; Sagan, V.; Sidike, P.; Maimaitijiang, M.; Miller, A.J.; Kwasniewski, M. Leveraging Very-High Spatial Resolution Hyperspectral and Thermal UAV Imageries for Characterizing Diurnal Indicators of Grapevine Physiology. Remote Sens. 2020, 12, 3216.
  32. Yang, X.; Liu, S.; Liu, Y.; Ren, X.; Su, H. Assessing shaded-leaf effects on photochemical reflectance index (PRI) for water stress detection in winter wheat. Biogeosciences 2019, 16, 2937–2947.
  33. Zhou, K.; Deng, X.; Yao, X.; Tian, Y.; Cao, W.; Zhu, Y.; Ustin, S.L.; Cheng, T. Assessing the Spectral Properties of Sunlit and Shaded Components in Rice Canopies with Near-Ground Imaging Spectroscopy Data. Sensors 2017, 17, 578.
  34. Van der Meer, F. The effectiveness of spectral similarity measures for the analysis of hyperspectral imagery. Int. J. Appl. Earth Obs. 2006, 8, 3–17.
  35. Yang, C.; Everitt, J.H.; Bradford, J.M. Yield Estimation from Hyperspectral Imagery Using Spectral Angle Mapper (SAM). Trans. ASABE 2008, 51, 729–737.
  36. Camps-Valls, G.; Bruzzone, L. Kernel-Based Methods for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362.
  37. Melgani, F.; Bruzzone, L. Classification of Hyperspectral Remote Sensing Images with Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
  38. Hay, E.A.; Parthasarathy, R. Performance of convolutional neural networks for identification of bacteria in 3D microscopy datasets. PLoS Comput. Biol. 2018, 14, e1006628.
  39. Yu, Y.; Xu, T.; Shen, Z.; Zhang, Y.; Wang, X. Compressive spectral imaging system for soil classification with three-dimensional convolutional neural network. Opt. Express 2019, 27, 23029.
  40. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88.
  41. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
  42. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1–20.
  43. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619.
  44. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. arXiv 2019, arXiv:1912.01703.
  45. Home—OpenCV. Available online: https://opencv.org/ (accessed on 1 February 2021).
  46. Hyperspectral Sensors | Hyperspectral Cameras. Available online: https://www.headwallphotonics.com/hyperspectral-sensors (accessed on 3 August 2020).
  47. Wheat Growth Guide | AHDB. Available online: https://ahdb.org.uk/wheatgg (accessed on 22 February 2021).
  48. Virlet, N.; Sabermanesh, K.; Sadeghi-Tehran, P.; Hawkesford, M.J. Field Scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring. Funct. Plant Biol. 2017, 44, 143–153.
  49. Malenovský, Z.; Turnbull, J.D.; Lucieer, A.; Robinson, S.A. Antarctic moss stress assessment based on chlorophyll content and leaf density retrieved from imaging spectroscopy data. New Phytol. 2015, 208, 608–624.
  50. Kim, M.S.; Chen, Y.R.; Mehl, P.M. Hyperspectral reflectance and fluorescence imaging system for food quality and safety. Trans. ASAE 2001, 44, 721.
  51. Savitzky, A.; Golay, M.J. Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem. 1964, 36, 1627–1639.
  52. Rinnan, Å.; van den Berg, F.; Engelsen, S.B. Review of the most common pre-processing techniques for near-infrared spectra. TrAC Trends Anal. Chem. 2009, 28, 1201–1222.
  53. Li, X.; Zhang, L.; You, J. Hyperspectral Image Classification Based on Two-Stage Subspace Projection. Remote Sens. 2018, 10, 1565.
  54. Zarco-Tejada, P.J.; Berjón, A.; López-Lozano, R.; Miller, J.R.; Martín, P.; Cachorro, V.; González, M.R.; De Frutos, A. Assessing vineyard condition with hyperspectral indices: Leaf and canopy reflectance simulation in a row-structured discontinuous canopy. Remote Sens. Environ. 2005, 99, 271–287.
  55. Perry, C.R.; Lautenschlager, L.F. Functional equivalence of spectral vegetation indices. Remote Sens. Environ. 1984, 14, 169–182.
  56. Buschmann, C.; Nagel, E. In vivo spectroscopy and internal optics of leaves as basis for remote sensing of vegetation. Int. J. Remote Sens. 1993, 14, 711–722.
  57. Smith, R.; Adams, J.; Stephens, D.; Hick, P. Forecasting wheat yield in a Mediterranean-type environment from the NOAA satellite. Aust. J. Agric. Res. 1995, 46, 113.
  58. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126.
  59. Chen, J.M. Evaluation of Vegetation Indices and a Modified Simple Ratio for Boreal Applications. Can. J. Remote Sens. 1996, 22, 229–242.
  60. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352.
  61. Lichtenthaler, H.K.; Lang, M.; Sowinska, M.; Heisel, F.; Miehé, J.A. Detection of Vegetation Stress Via a New High Resolution Fluorescence Imaging System. J. Plant Physiol. 1996, 148, 599–612.
  62. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107.
  63. Gamon, J.A.; Peñuelas, J.; Field, C.B. A narrow-waveband spectral index that tracks diurnal changes in photosynthetic efficiency. Remote Sens. Environ. 1992, 41, 35–44.
  64. Kaufman, Y.J.; Tanre, D. Atmospherically resistant vegetation index (ARVI) for EOS-MODIS. IEEE Trans. Geosci. Remote Sens. 1992, 30, 261–270.
  65. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87.
  66. White, D.C.; Williams, M.; Barr, S.L. Detecting sub-surface soil disturbance using hyperspectral first derivative band ratios of associated vegetation stress. Int. Soc. Photogramm. Remote Sens. 2008, 27, 243–248.
  67. Liang, H.; Li, Q. Hyperspectral Imagery Classification Using Sparse Representations of Convolutional Neural Network Features. Remote Sens. 2016, 8, 99.
  68. Pinto, F.; Damm, A.; Schickling, A.; Panigada, C.; Cogliati, S.; Müller-Linow, M.; Balvora, A.; Rascher, U. Sun-induced chlorophyll fluorescence from high-resolution imaging spectroscopy data to quantify spatio-temporal patterns of photosynthetic function in crop canopies. Plant Cell Environ. 2016, 39, 1500–1512.
Figure 1. (A) Camera bay of the Field Scanalyzer accommodating the VNIR sensor; (B) the white referencing Ultra-lightweight Reflectance Target in pseudo-RGB (R: 620 nm, G: 535 nm, B: 445 nm).
Figure 2. Score scatter plots of the first and second LDA and PCA components. (A) shows the LDA analysis and (B) shows the PCA analysis of the 5 categories of shaded-ear (SHE), shaded-leaf (SHL), sunlit-ear (SE), sunlit-leaf (SL), and background (BG).
Figure 3. Schematic representation of the proposed 1D-CNN framework with the full spectrum bands as inputs.
Figure 4. Classification maps; (A1,B1,C1) examples of digital images of the same wheat cultivar over the growing season. The pseudo-RGB images (R: 620 nm, G: 535 nm, B: 445 nm) are generated from the hyperspectral imaging; (A2,B2,C2) the classification maps obtained by the proposed algorithm, which separates sunlit (S-All) and shaded (SH-All) pixels from the background.
Figure 5. Classification maps; (A1,B1,C1) examples of digital images from a wheat cultivar at the heading stage. The pseudo-RGB images (R: 620 nm, G: 535 nm, B: 445 nm) are generated from the hyperspectral imaging; (A2,B2,C2) the classification maps obtained by the proposed algorithm, which separates sunlit-ear (SE) and sunlit-leaf (SL) pixels from the background (BG).
Figure 6. Box and whisker plot of classification accuracy scores for three algorithms: one-dimensional convolutional neural network (1D-CNN), support vector machine (SVM) and stochastic gradient descent (SGD).
Figure 7. (A) Reflectance spectral profiles of the averaged sunlit and shaded canopy components from manually annotated sample points (the x-axis is the wavelength in the range of 400–1000 nm, and the y-axis is the reflectance in the range of 0–1); (B) continuum-removed reflectance spectra of different canopy components in wheat. Spectral values are shown as mean ± standard deviation (SD). SE: sunlit-ear; SL: sunlit-leaf; SHE: shaded-ear; SHL: shaded-leaf; BG: background.
Figure 8. Histograms of normalized vegetation indices; (A) shaded vegetation (red), sunlit vegetation (green), and background (blue); (B) shaded-leaf (cyan), shaded-ear (purple); (C) sunlit-leaf (yellow), sunlit-ear (black).
Table 1. Summary of image acquisition conditions for the data collected in this study.

| Date | Growth Stage | Time (GMT + 1) | Range of PAR Values (μmol/m²·s) | Spatial and Spectral Resolution |
|---|---|---|---|---|
| 21 May 2019 | GS39: flag leaf fully emerged | 9:30–12:30 | 1204–1856 | 533 × 667 × 925 |
| 6 June 2019 | GS57/59: advanced heading time | 16:18–17:33 | 975–1429 | 1600 × 1846 × 925 |
| 19 June 2019 | 7 days after anthesis | 17:44–19:28 | 230–472 | 1600 × 1846 × 925 |
| 4 July 2019 | 22 days after anthesis | 9:36–11:36 | 1267–1645 | 533 × 667 × 925 |
Table 2. Classification accuracy assessment.

| Classifier | RAW Acc. | RAW F-score | RAW Recall | LDA Acc. | LDA F-score | LDA Recall |
|---|---|---|---|---|---|---|
| CNN | 0.986 | 0.979 | 0.983 | 0.973 | 0.953 | 0.945 |
| SVM | 0.980 | 0.972 | 0.972 | 0.974 | 0.957 | 0.956 |
| SGD | 0.981 | 0.974 | 0.973 | 0.964 | 0.950 | 0.950 |
Table 3. List of vegetation indices used in this study.

| Vegetation Index | Calculation Formula |
|---|---|
| Difference Vegetation Index (DVI) [55] | $R_{800} - R_{670}$ |
| Enhanced Vegetation Index (EVI) [56] | $2.5 \times (R_{800} - R_{680})/(R_{800} + 6R_{680} - 7.5R_{450} + 1)$ |
| Greenness Index (G) [57] | $R_{554}/R_{677}$ |
| Improved SAVI with self-adjustment factor L (MSAVI) [58] | $0.5 \times \left[2R_{800} + 1 - \sqrt{(2R_{800} + 1)^2 - 8(R_{800} - R_{670})}\right]$ |
| Modified Simple Ratio (MSR) [59] | $(R_{800}/R_{670} - 1)/\sqrt{R_{800}/R_{670} + 1}$ |
| Modified Triangular Vegetation Index (MTVI) [60] | $1.2 \times [1.2(R_{800} - R_{550}) - 2.5(R_{670} - R_{550})]$ |
| Normalized Difference Vegetation Index (NDVI) [61] | $(R_{800} - R_{680})/(R_{800} + R_{680})$ |
| Optimized Soil Adjusted Vegetation Index (OSAVI) [62] | $(1 + 0.16)(R_{800} - R_{670})/(R_{800} + R_{670} + 0.16)$ |
| Photochemical Reflectance Index (PRI) [63] | $(R_{531} - R_{570})/(R_{531} + R_{570})$ |
| Soil-adjusted Atmospherically Resistant Vegetation Index (SARVI) [64] | $(1 + 0.5) \times \dfrac{R_{800} - [R_{670} - 1 \times (R_{445} - R_{670})]}{R_{800} + [R_{670} - 1 \times (R_{445} - R_{670})] + 0.5}$ |
| Triangular Vegetation Index (TVI) [65] | $0.5 \times [120(R_{750} - R_{550}) - 200(R_{670} - R_{550})]$ |
| Vegetation Stress Ratio (VS) [66] | $R_{725}/R_{702}$ |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
