Article

Reshaping Leaf-Level Reflectance Data for Plant Species Discrimination: Exploring Image Shape’s Impact on Deep Learning Results

1 Guangdong Provincial Public Laboratory of Geospatial Information Technology and Application, Guangzhou Institute of Geography, Guangdong Academy of Sciences, Guangzhou 510070, China
2 Faculty of Agriculture, Shizuoka University, Shizuoka 422-8529, Japan
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(24), 5628; https://doi.org/10.3390/rs15245628
Submission received: 26 September 2023 / Revised: 22 November 2023 / Accepted: 26 November 2023 / Published: 5 December 2023
(This article belongs to the Special Issue Computational Intelligence in Hyperspectral Remote Sensing)

Abstract

The application of hyperspectral imagery coupled with deep learning shows vast promise in plant species discrimination. Reshaping one-dimensional (1D) leaf-level reflectance data (LLRD) into two-dimensional (2D) grayscale images as convolutional neural network (CNN) model input has proved markedly effective for plant species distinction. However, the impact of image shape on CNN model performance remained unexplored. This study addressed this by reshaping the data into fifteen distinct rectangular formats and building nine CNN models to examine the effect of image structure. Results demonstrated that, irrespective of CNN model structure, elongated narrow images yielded superior species identification results. The ‘l’-shaped images at 225 × 9 pixels outperformed other configurations, with 93.95% accuracy, 94.55% precision, and an F1 score of 0.94. Furthermore, ‘l’-shaped hyperspectral images consistently produced high classification precision across species. The results suggest this image shape supports robust predictive performance, paving the way for enhanced leaf trait estimation and offering a practical approach to pixel-level classification within hyperspectral imagery (HSIs).

Graphical Abstract

1. Introduction

Remote sensing has increasingly been adopted for plant species classification in recent years [1,2,3]. However, most prior work in this domain has focused predominantly on digital image processing techniques applied to imaging spectra in the visible region [4,5]. Despite hyperspectral imagery providing a richer characterization of target spectra across a more extensive electromagnetic range, relatively few studies have fully leveraged this abundant spectral information [6]. While visual information carries diagnostic value for discrimination, the underutilization of hyperspectral signatures is notable given the potential for comprehensive discrimination when considering the full spectral profile captured [7,8,9]. A more comprehensive exploitation of the inherent biochemical specificities encoded in hyperspectral datasets may further advance automated plant taxonomy via remote observation methods.
The use of hyperspectral imagery (HSIs) combined with machine learning techniques has shown the potential to distinguish different plant species [2,6,10]. Badola et al. [11] leveraged hyperspectral data and a modified random forest classifier combined with Principal Component Analysis (PCA) to map tropical tree species. Hu et al. [12] combined a fluorescence hyperspectral device with machine learning to classify Oolong tea, attaining commendable accuracy via Random Forest-Recursive Feature Elimination (RF-RFE) and a Support Vector Machine (SVM). Cao et al. [13] employed close-range hyperspectral imaging in conjunction with machine learning techniques to discern various mangrove species, achieving an accuracy of 93.54% with an SVM model that used wavebands selected by the successive projections algorithm (SPA). Employing LiDAR and hyperspectral data together with machine learning methods, Marrs and Ni-Meister [14] classified dominant tree species with greater accuracy than either dataset could achieve alone. Al-Awadhi and Deshmukh [15] proposed a machine learning methodology for classifying the botanical origin of honey using hyperspectral imaging, achieving high accuracy with Linear Discriminant Analysis (LDA), SVM, and K-Nearest Neighbors (KNN). Song and Wang [6] combined a Bayesian optimization-based SVM with the Recursive Feature Elimination (RFE) method for feature selection and achieved a classification accuracy of 86% across 52 different species.
Owing to their hierarchical learning properties [16], deep learning algorithms have been applied effectively to plant species classification [2,17,18]. Among these, CNN-based models have generally yielded better results because they can extract highly discriminative features and leverage both spatial and spectral information [2]. Khokhar et al. [19] applied a deep learning CNN methodology for plant species recognition, achieving an accuracy of 96.95% in differentiating plant leaf images. Similarly, Gawli and Gaikwad [20] employed a CNN-based deep learning approach for the automatic classification of 17 distinct plant species based on the texture and color characteristics of their leaves, reaching an accuracy above 94.26%. The suitability and superior performance of deep learning, particularly CNNs, in automatic plant species recognition and classification highlight its distinct advantage over conventional, hand-crafted methods [21,22,23].
Despite their prevalent application in multidimensional image data analysis [24,25], CNNs prove impractical for analyzing one-dimensional (1D) structured data, such as leaf-level reflectance data (LLRD), due to implicit constraints of lower dimensionality and limited sample size [26]. A plausible solution could be to transform 1D LLRD into a two-dimensional (2D) matrix [27,28]. The underlying difference between deploying a 1D array and a 2D image within a neural network lies in the data representation and process management [29]. Compared to 1D arrays, 2D images offer more robust data representation [30]. By converting 1D array data into a 2D array, spatial correlation between attributes can be inferred [31], including metric relationships (distance, direction or angle, and area) and topological links such as connectedness, containment, and relative location [32].
Converting the original 1D LLRD into 2D image formats changes the spatial relationships between wavelength features. Using images of varying shapes and dimensions enables CNNs to detect informative features at diverse scales [33]. However, improperly sized images can negatively impact both model training duration and effectiveness [34]. By reshaping the spectra into a spectral feature matrix, CNNs can capture meaningful spatial patterns and correlations within the data that are important for accurate classification, especially between samples with similar compositions [27]. This transformation from 1D vectors to 2D matrices enhances CNN performance for HSIs analysis: it facilitates comprehensive use of spectral information, precise feature extraction, enhanced differentiation among classes, and reduced interference from highly correlated bands in the proposed network architecture [28]. CNNs have demonstrated notable accuracy in discriminating among six plant species when LLRD were reshaped into 2D images, outperforming other models such as SVM [29]. Nevertheless, the degree to which CNN performance depends on the shape of the input images used for plant species classification remains insufficiently explored. Therefore, image dimensions and quality must be selected rigorously so that they align with the specific task requirements and the corresponding network architecture.
In the present study, we investigate the effect of different image shapes on the ability of CNN models to discriminate plant species using LLRD. The primary research question we address is: how does the shape of reshaped hyperspectral images influence the performance of CNN models in plant species discrimination? Our objectives are to (1) propose a method for transforming one-dimensional, leaf-level hyperspectral data into two-dimensional images of different shapes, (2) assess the performance of CNN models using the reshaped images, and (3) evaluate the effect of image shape on model performance.
This study applies a novel technique that transforms the 1D structured LLRD into two-dimensional grayscale images, turning conventional 1D data into a visual representation that enables richer extraction of complex patterns and characteristics. An innovative aspect is the exploration of different rectangular image shapes for training CNN models: examining how diverse image shapes affect classification performance broadens the scope of CNN applications to spectral data. Specifically, the finding that ‘l’-shaped HSIs outperform other shapes is a novel perspective. Another key contribution is the use of multiple CNN models to determine the most effective architectures for plant classification. In particular, cnn2A and cnn3B outperformed the other models, emphasizing the importance of selecting an appropriate model for a given task.

2. Data and Methods

2.1. Data Source

The compiled database comprises six independent datasets: ANGERS (AN, ANGERS Leaf Optical Properties Database (2003)) [35], KARLSRUHE (KA, Leaf reflectance plant functional gradient IFGG/KIT) [36], CANADA (CA, CABO 2018–2019 Leaf-Level Spectra v2) [37], NEON (NE, Fresh Leaf Spectra to Estimate LMA over NEON domains in eastern United States) [38], NCNE (NC, NASA FFT Project Leaf Reflectance Morphology and Biochemistry for Northern Temperate Forests) [39], and UPTON (UP, Hyperspectral leaf reflectance, biochemistry, and physiology of droughted and watered crops) [40]. Figure 1 shows the distribution of the data sources used.
The reflectance spectra of four datasets, namely AN, NE, NC, and KA, were obtained using an ASD FieldSpec spectrometer. The reflectance spectra of the UP species were measured with a Spectral Evolution PSR+ (or SE_PSR+ or HR-1024I) instrument, while those of the CA species were measured solely with an HR-1024I instrument. To incorporate variability attributed to developmental stage, certain species within the CA, KA, NC, and UP datasets were sampled multiple times. The datasets were measured at wavelength ranges of 350 (or 400) nm to 2450 (or 2500) nm, with the exception of the CA dataset, which was measured between 400 nm and 2400 nm. To ensure consistency of the analysis, we excluded any data points falling outside the range of 400–2450 nm. For the CA dataset, we used the value at 2400 nm to fill the spectral region between 2401 and 2450 nm.
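The clipping and padding steps can be summarized as a minimal sketch, assuming each spectrum is stored at 1 nm resolution; the function name and array layout below are illustrative, not the authors' code.

```python
import numpy as np

# Illustrative sketch (not the authors' code) of the wavelength harmonization
# described above. `wavelengths` and `reflectance` are assumed to be 1 nm
# resolution arrays for a single leaf sample.
def harmonize_spectrum(wavelengths, reflectance, lower=400, upper=2450):
    """Clip a spectrum to 400-2450 nm and pad missing long-wave values."""
    wavelengths = np.asarray(wavelengths)
    reflectance = np.asarray(reflectance, dtype=float)

    # Keep only the 400-2450 nm region retained in the analysis.
    mask = (wavelengths >= lower) & (wavelengths <= upper)
    wl, refl = wavelengths[mask], reflectance[mask]

    # For a dataset that stops at 2400 nm (e.g., CA), repeat the last measured
    # value (at 2400 nm) up to 2450 nm, as described in the text.
    if wl.max() < upper:
        pad_wl = np.arange(wl.max() + 1, upper + 1)
        pad_refl = np.full(pad_wl.shape, refl[-1])
        wl = np.concatenate([wl, pad_wl])
        refl = np.concatenate([refl, pad_refl])
    return wl, refl


# Example: a CA-style spectrum covering 400-2400 nm is extended to 2450 nm.
wl, refl = harmonize_spectrum(np.arange(400, 2401), np.random.rand(2001))
print(wl.min(), wl.max(), refl.size)  # 400 2450 2051
```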

2.2. Data Preprocessing

2.2.1. Selection of Plant Species

Adequate sample quantities are crucial for both the training and validation stages of deep learning models; BeamLab [41] recommends an optimal range of 100 to 1000 samples. Following this guidance and building on our previous study, we selected species based on the sample sizes available in the collected dataset, ensuring an adequate number of instances for each species. From the six datasets, we identified 22 focal species, each with a sample size exceeding 90, resulting in a total of 3102 samples included in this study. Table 1 provides the Latin name, sample size, symbol, code, and group assigned to each species. The average reflectance spectra of the selected species are displayed in Figure 2.
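The screening step amounts to a simple count-and-filter over the sample table; the sketch below is a minimal illustration in which the DataFrame, column name, and species codes are placeholders rather than the study's actual data structures.

```python
import pandas as pd

# Minimal sketch of the species screening step. `samples` stands in for the
# compiled database: one row per leaf sample with a 'species' label column
# (the real table also carries the reflectance features); the codes are placeholders.
samples = pd.DataFrame({"species": ["BEPA"] * 92 + ["XXYY"] * 45})

MIN_SAMPLES = 90  # species with more than 90 samples are retained
counts = samples["species"].value_counts()
selected = counts[counts > MIN_SAMPLES].index.tolist()
subset = samples[samples["species"].isin(selected)]
print(selected, len(subset))  # ['BEPA'] 92 -- in the study: 22 species, 3102 samples
```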

2.2.2. Reflectance Data Preprocessing

In alignment with previous studies, the one-dimensional LLRD was converted to two-dimensional grayscale image data to serve as the input for the CNN models. To ascertain the most effective image shape for the reflectance data, we experimented with fifteen distinct image configurations (Figure 3) within the CNN models. The transformation of the LLRD into a two-dimensional array was carried out in Python using NumPy’s reshape function, and the reshaped array was then saved as a grayscale image using Keras’ image preprocessing utilities. The chosen wavelengths spanned from 400 to 2424 nm, covering a total of 2025 features, which were transformed into the intended grayscale image with each pixel representing a unique feature. During this implementation, leaf reflectance values ranging from 0 to 1 were rescaled to fit within the 0 to 255 range [29] (as shown in Figure 3).
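As a concrete illustration, the sketch below reshapes a single spectrum into the ‘l’ configuration and writes it out as a grayscale image. The exact Keras call used by the authors is not specified beyond "image preprocessing", so save_img here is an assumption, and the random spectrum and file name are placeholders.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import save_img

# Sketch of the reshaping step described above. `reflectance` stands in for the
# 2025 reflectance values (400-2424 nm, scaled 0-1) of one leaf sample.
reflectance = np.random.rand(2025)  # placeholder spectrum for illustration

shape = (225, 9)  # the 'l' configuration; any factor pair of 2025 works, e.g., (45, 45)
pixels = np.round(reflectance * 255).astype(np.uint8)  # rescale 0-1 to 0-255
image_2d = pixels.reshape(shape)                        # 1D vector -> 2D matrix

# Save as a grayscale image via Keras' image preprocessing utilities
# (a single channel axis is added; scale=False keeps the 0-255 values as-is).
save_img("sample_l_shape.png", image_2d[..., np.newaxis], scale=False)
```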

2.3. CNN Model Architectures

A comprehensive summary of the CNN model architectures is given in Table 2. A detailed illustration of the architectures can be found in Figure 4.
We developed a total of nine CNNs for species detection using LLRD. Each CNN architecture contains several underlying layers, starting with an input layer that accepts the preprocessed grayscale images. Several convolutional layers (Conv) are then applied to learn image features: the CNN1 family has one Conv layer, the CNN2 family has two, and the CNN3 family has three. The ‘A’-type models (CNN1A, CNN2A, and CNN3A) have no Max-Pooling layer; the other types follow each Conv layer with a pooling layer that reduces spatial dimensions while retaining important information. After the Conv or Max-Pooling layers, a flattening layer transforms the output volume of the previous layers into a 1D vector, which is used as input for the following dense layers. Next, a dense layer (Dense) performs classification via multiclass non-linear projections. Finally, an output layer with twenty-two units, corresponding to the twenty-two plant species considered, produces class predictions. The CNN models are differentiated by the number and configuration of the alternating Conv and Pooling layers used to process the images, allowing the impact of network depth on discrimination performance to be assessed. A sketch of one such architecture is given below.
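The following is a minimal Keras sketch of a cnn2A-style architecture (two Conv layers, no Max-Pooling). Filter counts, kernel sizes, and the dense-layer width are assumptions for illustration; Table 2 and Figure 4 give the configurations actually used in the study.

```python
from tensorflow.keras import layers, models

# Illustrative cnn2A-style model: Input -> Conv x2 -> Flatten -> Dense -> 22-way softmax.
def build_cnn2a(input_shape=(225, 9, 1), n_classes=22):
    model = models.Sequential([
        layers.Input(shape=input_shape),               # reshaped grayscale image
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        # A 'B'-type model would insert layers.MaxPooling2D((2, 2)) after each Conv2D.
        layers.Flatten(),                              # feature maps -> 1D vector
        layers.Dense(128, activation="relu"),          # multiclass non-linear projection
        layers.Dense(n_classes, activation="softmax"), # one unit per plant species
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_cnn2a()
model.summary()
```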

2.4. Model, Image Shape Comparison and Evaluation

As the optimal pairing of image shape and CNN model remains uncertain, it is reasonable to train and validate several models using datasets derived from different image shapes. By comparing the precision of these distinct models, we can identify the optimal model together with its corresponding optimal image shape. Comparing the highest accuracy achieved across the fifteen image shapes also reveals the most effective reshaped image shape for species identification, irrespective of the CNN model. To compare models and image shapes, we transform all LLRD into fifteen distinct datasets with different shapes. These datasets serve as the training data for all nine models. Subsequently, each model is assessed for accuracy, precision, and F1-score for every species (Figure 5). We compare the highest average accuracy, precision, and F1-score for each model, along with the dataset used, while ignoring specific model details. The F1-score can be interpreted as the harmonic mean of precision and recall, reaching its best value at 1 and its worst value at 0. Recall is defined as the number of true positives divided by the number of true positives plus false negatives. The formula for the F1-score is: F1 = 2 × (precision × recall)/(precision + recall).
In the current study, to assess the predictive models’ effectiveness, a random sample of 50 specimens was selected from each species, instead of the complete dataset, for the prediction phase of each dataset. For the evaluation, the confusion matrix was obtained by comparing actual species values with those predicted by the trained models. This was achieved using classification metric functions from the scikit-learn library [43], including the accuracy classification score, precision, and F1-score (balanced F1-score) functions. The primary goal was to determine the accuracy, precision, and F1-score of identification.
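A minimal sketch of this evaluation step is given below. The toy label arrays stand in for the actual and predicted species codes of the 50-samples-per-species prediction set, and macro averaging across species is an assumption; the paper does not state which averaging mode was used.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, precision_score

# Toy labels for illustration; in the study these would be the true and
# predicted species codes of the prediction set.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

cm = confusion_matrix(y_true, y_pred)                    # species-by-species matrix
acc = accuracy_score(y_true, y_pred)                     # overall accuracy
prec = precision_score(y_true, y_pred, average="macro")  # precision averaged over species
f1 = f1_score(y_true, y_pred, average="macro")           # harmonic mean of precision and recall
print(cm, acc, prec, f1, sep="\n")
```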
Each dataset was partitioned into training and validation subsets in a stochastic manner, with a ratio of 3:1. The models were trained on the training data and subsequently validated on the held-out validation data, with the implementation of a train-validation procedure that involved 30 iterations. The model exhibiting the best validation performance was saved for application to an independent prediction dataset composed of 50 randomly sampled examples per species from the original training or validation data. Predicted species labels from this application phase were then evaluated against true species labels to quantify prediction accuracy.
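A minimal sketch of this repeated split-and-train procedure follows, assuming the reshaped images and labels are already loaded as arrays and reusing the hypothetical build_cnn2a from Section 2.3. Whether the 30 iterations used a fresh random split each time, and the epoch and batch settings, are assumptions; the loop below is one plausible reading.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Repeated 3:1 stochastic split, keeping the model with the best validation accuracy.
def train_with_repeats(images, labels, build_model, n_repeats=30):
    best_model, best_val_acc = None, -np.inf
    for seed in range(n_repeats):
        X_tr, X_val, y_tr, y_val = train_test_split(
            images, labels, test_size=0.25, random_state=seed, stratify=labels)
        model = build_model()
        model.fit(X_tr, y_tr, validation_data=(X_val, y_val),
                  epochs=50, batch_size=32, verbose=0)
        val_acc = model.evaluate(X_val, y_val, verbose=0)[1]
        if val_acc > best_val_acc:   # retain the best-validated model for prediction
            best_val_acc, best_model = val_acc, model
    return best_model, best_val_acc
```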

2.5. Flowchart of the Process

In this study, the whole process encompasses three phases, namely training and validation, simulation of application, and evaluation. During the training phase (depicted in orange in Figure 6), we reshape the training and validation data into 2D images to facilitate the generation of predictive models using CNN. For identification purposes, we randomly select 50 examples per species and simulate this phase (as shown in green in Figure 6) to assess the overall performance of each model. Subsequently, we leverage both the actual species labels and identified labels to calculate confusion metrics. Finally, the performance of each model is evaluated by computing metrics such as accuracy, precision, and F1-score (as illustrated in blue in Figure 6).

3. Results

3.1. The Performance of Models

The CNN models whose best results were obtained with ‘l’-shaped images (cnn2A, cnn2B, cnn3A, cnn3B, cnn3C, cnn3D) demonstrate higher accuracy, precision, and F1-scores than the models whose best results came from ‘k’- or ‘j’-shaped images (Table 3). Specifically, cnn2A and cnn3B achieve the highest accuracy, with 93.82% and 93.95%, respectively. Cnn3B attains the maximal precision of 94.55%, while cnn2A obtains the second-highest precision of 94.45%. Cnn2A and cnn3B obtain the same peak F1-score of 0.94. In contrast, cnn1A and cnn1B, the only two models whose best results came from ‘k’- and ‘j’-shaped images, produce the lowest accuracy, precision, and F1-scores. All models exhibit relatively small standard deviations for the performance metrics.
Overall, the cnn2 and cnn3 model families demonstrate better performance than the cnn1 family. Within each family, models A and B tend to achieve the best results. However, models C and D, featuring more Max-Pooling layers, did not produce superior results. This suggests that the LLRD, with only 2025 features per sample, may be too small to benefit from deeper CNN models; it may therefore be unnecessary to increase network depth for classification or regression on such data. In summary, the CNNs trained on ‘l’-shaped images, especially cnn2A and cnn3B, achieved the highest accuracy, precision, and F1-scores, whereas the models whose best results came from ‘k’-shaped images performed markedly worse.

3.2. The Best Performance of Each Image Shape (Regardless of the CNN Models)

The image shapes ‘l’, ‘h’, and ‘f’ achieved the highest accuracy, precision, and F1-scores, as illustrated in Table 4. In particular, shape ‘l’ attains the highest accuracy of 93.95%, precision of 94.55%, and F1-score of 0.94. Conversely, shape ‘a’ displays the lowest accuracy, precision, and balanced F1-score of all the shapes. Shape ‘h’ also achieves a strong accuracy of 93.07%, precision of 94.00%, and F1-score of 0.93, indicating that in some situations a square image (45 × 45 pixels) can also be a suitable choice for deep learning models.
The accuracy, precision, and F1-score show a general improvement as we move from shapes labeled ‘a’ to ‘l’, indicating enhanced performance on shapes occurring later in the alphabet. However, shapes ‘m’ and ‘n’ exhibit inferior scores in comparison to most other shapes, with performance similar to early alphabet shapes such as ‘b’ and ‘c’. This implies that overly elongated or truncated shapes tend to lead to lower performance for the purpose of species identification.
All metrics exhibit relatively low standard deviations, generally within the range of 1–3%, which indicates consistent performance across experimental runs. Additionally, the measurements demonstrate a comparable trend, as anticipated considering the inherent relationships between accuracy, precision, and F1-score. Increased accuracy is associated with elevated precision and F1-score.

3.3. Identification Results

Figure 7 illustrates a circular diagram showing the relationships between the reshaped image datasets and the identified species. Nodes in the upper portion of the diagram represent the different reshaped image datasets, while lower nodes correspond to the identified species. Edges indicate links between them. Larger node size represents a higher average correct prediction ratio accumulated across datasets or species. Thicker edges reflect a greater average correct prediction ratio between the connected dataset and species.
The datasets labelled ‘h’, ‘i’, ‘j’, ‘k’, and ‘l’ exhibit consistently higher accuracy across a wide range of species (refer to Figure 7 and Supplementary File Table S1). For example, the model achieves a predictive accuracy greater than 95% for most species examined when utilizing the ‘l’ shape. It still struggles to differentiate between species such as TSCA and ACSM, or PIST and CUPE, although these are the best results obtained so far. More detailed results can be found in Figure 8 and Supplementary File Figure S2, which provide a comprehensive view of the prediction matrix. This finding provides evidence of the model’s strong ability to predict difficult species using the ‘l’ configuration. Conversely, shapes ‘a’, ‘b’, ‘c’, ‘n’, and ‘o’ were omitted from the diagram, as their average prediction accuracy consistently fell below the 90% benchmark. Please refer to Supplementary File Table S1 for a comprehensive analysis and assessment.
In contrast, shapes ‘g’, ‘j’, and ‘m’ demonstrate relatively lower accuracy rates across many species. For example, shape ‘m’ scores below 90% accuracy for 6 out of the 20 evaluated species. Consequently, these shapes present a substantial challenge to the model’s predictive performance across a broad range of species. Shape ‘d’, however, demonstrates a bifurcated performance, achieving high accuracy (greater than 90%) for approximately half of the species, yet underperforming for the remaining half, placing it in an intermediate difficulty category.
When examining individual species, RASA and PHAR exhibit high accuracy across all shapes (Supplementary File Table S1). In contrast, PIST records low accuracy rates for all shapes. In the case of species with moderate accuracy, such as BEPA and QURU, a considerable disparity is observed between high (95% or more) and low (81–89%) accuracy rates across configurations.

4. Discussion and Future Work

4.1. Comparative Analysis with Prior Research

The utilization of 2D CNNs necessitated the preprocessing of input data. Chen et al. [27] notably reformatted laser-induced breakdown spectroscopy spectra into a spectral matrix (2D array) to facilitate model training, achieving a commendable validation accuracy of 98.77% in the classification of 5 geological samples. Similarly, Gao et al. [28] adopted a strategy wherein individual 1D spectral vectors, corresponding to pixels in hyperspectral imagery, were transformed into 2D spectral feature matrices, subsequently employed as inputs for a small convolutional kernel CNN. This approach yielded an impressive overall accuracy of 89.88% when classifying data from the Indian Pines dataset, encompassing 16 distinct classes. In our previous research [29], we extended the paradigm of transformation techniques by converting 1D LLRD into 2D grayscale images. These transformed images served as input data for CNN models tasked with species identification, and the results were promising, yielding an accuracy of 98.60%. It is essential to acknowledge, however, that these studies were constrained by their focus on a specific spectral matrix shape where the number of rows equaled the number of columns, which presents a limitation worth considering.
The present study expands upon our previous work in one crucial aspect: in addition to the 45 × 45 pixel shape, we evaluate the 14 other potential image shapes generated during the rescaling process. Each shape configuration was used to train and validate CNN models to assess its impact on classification performance. In our previous study, the 15 potential image shapes derived from the rescaling were already evaluated using the original six-species dataset. Those results demonstrated that the ‘j’ and ‘k’ shaped datasets produced the highest prediction accuracy and precision when input to CNN models. We therefore concluded that a square leaf hyperspectral image may not necessarily optimize deep learning-based species classification and that taller, narrower rectangular formats could yield superior results. The current study reinforces this finding, with the ‘k’, ‘j’, and ‘l’ shaped datasets enabling CNN models to significantly outperform models trained on the other rescaled image configurations according to our comparative evaluation (see Table 3).
We found that the ‘l’-shaped images measuring 225 × 9 pixels yielded better results in terms of accuracy, precision, and F1 metrics than the other configurations. The cnn2A and cnn3B models, trained on ‘l’-shaped images, achieved the highest accuracy, precision, and F1-scores. The cnn2 and cnn3 model families generally outperformed the cnn1 family, and within each family, models A and B tended to provide the best results. However, models C and D, which featured more Max-Pooling layers, did not achieve superior results, implying that the LLRD was not large enough to benefit from deeper CNN models. These results demonstrate the effect of image shape on the performance of CNN models in discriminating between plant species.

4.2. Image Shape’s Impact on Species Discrimination Results

The primary difference among the reshaped image datasets lies in their dimensions, wherein the length and width of the image are adjusted (assuming that the width corresponds to the left and right sides of the image, while the length corresponds to the top and bottom). Consequently, the position of each reflectance band also changes. As shown in Figure 3, as the width decreases and the length increases, the regions corresponding to leaf pigments (400–700 nm), cell structure (700–1300 nm), and water content (1300–2500 nm) transform from a wide, short shape to a narrow, tall shape.
Figure 9 presents images of the twenty-two species in both ‘d’ and ‘l’ shapes, with the left panel displaying the stacked ‘d’-shaped images. There are obvious dissimilarities between the species’ images, yet precisely identifying the specific locational differences is challenging. The right panel displays the stacked ‘l’-shaped images, in which noticeable differences in reflectance can be observed in the leaf pigment and water content sections. From this, it becomes apparent that the characteristics of the wide, short images are not as distinctive as those of the narrow, tall ones. This may explain why CNN models can effectively extract each species’ characteristics and distinguish them when using the narrow, tall images.
The findings of this research highlight the importance of hyperspectral image data morphology for identifying diverse plant species. As shown in Table 3 and Figure 10, CNN models using ‘l’, ‘k’, and ‘j’-shaped visualizations outperformed those employing alternative image shapes, indicating that slender images are ideal for discriminating between species irrespective of the CNN model structure. This demonstrates that taller, narrower images allow CNNs to better learn distinguishing characteristics compared to wider, shorter images when classifying plant species using hyperspectral data. The results emphasize the need to optimize input data preprocessing for deep learning applications in species identification.
There is a clear correlation between image structure and its impact on precision, accuracy, and F1-scores. This is supported by the superior mean performance of the ‘l’, ‘h’, and ‘f’ formations in comparison to ‘a’. Additionally, when transitioning from image shape ‘a’ to ‘k’, there is a noticeable increase in the precision of species prediction, rising from a modest 81.09% to 92.17%. Beyond this point, precision declines rapidly, reaching a minimum of 79.82% for the ‘o’-shaped images. In this study, we gradually altered the configuration of the LLRD from a wide, short format to a tall, narrow one. This modification comprised 15 different shapes, and for systematic reference, we labelled the dataset for each shape in alphabetical order. Because the labels follow this progression, the alphabetical order roughly tracks the shape-related performance trend (see Figure 11).
Regarding species differentiation, the ‘l’, ‘f’, and ‘h’ shapes consistently achieved high accuracy for a diverse array of species, experiencing only minimal deviations in isolated cases. This finding suggests a level of uniformity in classifications across several iterations, thereby demonstrating the models’ robustness.
The results suggest that the complexity of an image’s shape impacts the model’s accuracy rate. However, the effect of image shape does not apply universally across all species. For instance, species like RASA and PHAR achieved high accuracy regardless of shape. In contrast, shape had little effect on the mediocre performance of PIST. Species displaying modest accuracies, such as BEPA and QURU, showed more variation in accuracy between shapes. In these cases, certain shapes elicited higher performance compared to others. Overall, the influence of shape depends on the individual characteristics and distinguishability of each plant species.

4.3. Uncertainty of Approach and Future Studies

Spectral omics is a research field that links the optical properties of leaves with plant diversity and traits [9,44,45]. Leaf spectra can capture a wide range of functional traits and can be used to differentiate between species [9,46]. Converting 1D LLRD into 2D greyscale images as the input of a CNN model for tree species discrimination yields results that outperform support vector machine and DCN models [29]. In the present study, we examine the influence of various leaf-level hyperspectral image shapes on the performance of CNN models tasked with plant species differentiation. We find that, regardless of the CNN model’s structure, long, narrow images prove particularly effective for species discrimination.
This research has provided valuable insights while also revealing opportunities for additional investigation. Firstly, to maximize model performance across taxonomic classifications, optimal techniques for shape selection warrant further study. While fifteen rectangular geometries were examined herein, it remains untested whether alternative morphologies like circular transformations may prove to be preferable. Moreover, the findings suggest that certain higher-dimensional configurations may more comprehensively represent species characteristics and merit exploration. Secondly, the TensorFlow architecture employed here may not achieve the upper bounds of effectiveness, signifying that alternate paradigms including Transformers, deserve consideration in future work. Broadly, a more exhaustive design of shape manipulations and network formulations offers potential avenues to refine predictive capacity. Ongoing refinement of preprocessing and modeling techniques may also benefit inference extensions to cross-domain plant identification challenges. Overall, the evaluation of shape impacts lays a foundation for continued methodological and implementation advances in hyperspectral-based vegetative classification via deep learning.
Nevertheless, this study has certain limitations. It was conducted using a limited database of 22 species from six laboratories, so the generalizability of these findings to other plant species or ecosystems remains uncertain. Additionally, only 15 rectangular image shapes were utilized, and other shapes, such as circular transformations or higher-dimensional configurations, could potentially yield better outcomes. Furthermore, the CNN model’s performance could be improved by utilizing alternative CNN architectures or preprocessing techniques.

5. Conclusions

Transforming the one-dimensional LLRD into a two-dimensional grayscale image for use as CNN model input was shown to be highly effective in differentiating plant species. By carefully investigating the effect of different image shapes on the classification efficiency of CNN models, we found a significant performance difference related to image shape. The results show that CNN models trained on ‘l’-shaped hyperspectral images significantly outperformed those trained on other image shapes in terms of plant species classification. In particular, the cnn2A and cnn3B models achieved the highest precision, accuracy, and F1-scores. However, as the LLRD contains only about two thousand features (pixels), it may not be necessary to develop a deeper CNN model for classification purposes.
The ‘l’-shape images produced superior overall plant species classification performance based on metrics like accuracy, precision, and F1-scores, outperforming shapes such as ‘a’, ‘b’, ‘n’, and ‘o’. As the shape changes from ‘a’ to ‘l’, the reflectance characteristics of leaf pigments, cell structure and water content become more specific, potentially explaining shape’s impact on model performance. This important finding lays the foundation for further developing leaf trait estimation and provides a feasible path for pixel-level classification within HSIs.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs15245628/s1, Figure S1: Greyscale sample images generated from reshaping a full-wavelength leaf reflectance measurement containing 2025 features ranging from 400 to 2424 nm. Figure S2: Confusion matrices of predicted species and true species using different image shape datasets. Figure S3: Difference between the (d) 9 × 225 pixel and (l) 225 × 9 pixel shaped greyscale sample images of twenty-two species. Figure S4: Comparison of the best hyperspectral image shapes for species discrimination based on CNN models. Table S1: Average ratio of correct predictions for the compared image shapes (%).

Author Contributions

Conceptualization, S.Y. and Q.W.; Data curation, J.C.; Formal analysis, S.Y. and G.S.; Resources, Q.G.; Supervision, Q.G.; Funding acquisition, Q.G., S.Y., J.W. and J.C.; Writing—original draft, S.Y.; Writing—review and editing, G.S., Q.G., Q.W. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Natural Science Foundation of China (42271091, 41977413, 42101084), the Natural Science Foundation of Guangdong Province (2022A1515011898), and the Zhuhai Science and Technology Plan Project in the Social Development Field (2320004000189).

Data Availability Statement

The original reflectance data can be downloaded from the website https://ecosis.org (accessed on 5 June 2022). For access to the reshaped reflectance image data and the Python codes used to generate the images, please contact S.Y.

Acknowledgments

We express our gratitude to Qiao Zeng for providing valuable advice on the discussion section. Additionally, we extend our thanks to the members of the Department of Physical Geography and Resources and Environment at the Guangzhou Institute of Geography, Guangdong Academy of Sciences, for their support in conducting the resource and data analyses. We would also like to acknowledge the Ecological Spectral Information System public datasets for generously providing the necessary data for our research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Farmonov, N.; Amankulova, K.; Szatmari, J.; Sharifi, A.; Abbasi-Moghadam, D.; Mirhoseini Nejad, S.M.; Mucsi, L. Crop Type Classification by DESIS Hyperspectral Imagery and Machine Learning Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1576–1588. [Google Scholar] [CrossRef]
  2. Liu, K.H.; Yang, M.H.; Huang, S.T.; Lin, C. Plant Species Classification Based on Hyperspectral Imaging via a Lightweight Convolutional Neural Network Model. Front. Plant Sci. 2022, 13, 763. [Google Scholar] [CrossRef] [PubMed]
  3. Mäyrä, J.; Keski-Saari, S.; Kivinen, S.; Tanhuanpää, T.; Hurskainen, P.; Kullberg, P.; Poikolainen, L.; Viinikka, A.; Tuominen, S.; Kumpula, T.; et al. Tree species classification from airborne hyperspectral and LiDAR data using 3D convolutional neural networks. Remote Sens. Environ. 2021, 256, 112322. [Google Scholar] [CrossRef]
  4. Hassoon, I.M.; Kassir, S.A.; Altaie, S.M. A Review of Plant Species Identification Techniques. Int. J. Sci. Res. 2018, 7, 2016–2019. [Google Scholar] [CrossRef]
  5. Cope, J.S.; Corney, D.; Clark, J.Y.; Remagnino, P.; Wilkin, P. Plant species identification using digital morphometrics: A review. Expert Syst. Appl. 2012, 39, 7562–7573. [Google Scholar] [CrossRef]
  6. Song, G.; Wang, Q. Species classification from hyperspectral leaf information using machine learning approaches. Ecol. Inform. 2023, 76, 102141. [Google Scholar] [CrossRef]
  7. Bahrami, M.; Mobasheri, M.R. Plant species determination by coding leaf reflectance spectrum and its derivatives. Eur. J. Remote Sens. 2020, 53, 258–273. [Google Scholar] [CrossRef]
  8. Meireles, J.E.; Cavender-Bares, J.; Townsend, P.A.; Ustin, S.; Gamon, J.A.; Schweiger, A.K.; Schaepman, M.E.; Asner, G.P.; Martin, R.E.; Singh, A.; et al. Leaf reflectance spectra capture the evolutionary history of seed plants. New Phytol. 2020, 228, 485–493. [Google Scholar] [CrossRef]
  9. Jantzen, J.R.; Laliberté, E.; Carteron, A.; Beauchamp-Rioux, R.; Blanchard, F.; Crofts, A.L.; Girard, A.; Hacker, P.W.; Pardo, J.; Schweiger, A.K.; et al. Evolutionary history explains foliar spectral differences between arbuscular and ectomycorrhizal plant species. New Phytol. 2023, 238, 2651–2667. [Google Scholar] [CrossRef]
  10. Hycza, T.; Stereńczak, K.; Bałazy, R. Potential use of hyperspectral data to classify forest tree species. N. Z. J. For. Sci. 2018, 48, 18. [Google Scholar] [CrossRef]
  11. Badola, A.; Padalia, H.; Belgiu, M.; Verma, P.A. Tree Species Mapping in Tropical Forests Using Hyperspectral Remote Sensing and Machine Learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 5421–5424. [Google Scholar] [CrossRef]
  12. Hu, Y.; Xu, L.; Huang, P.; Luo, X.; Wang, P.; Kang, Z. Reliable identification of oolong tea species: Nondestructive testing classification based on fluorescence hyperspectral technology and machine learning. Agriculture 2021, 11, 1106. [Google Scholar] [CrossRef]
  13. Cao, J.; Liu, K.; Liu, L.; Zhu, Y.; Li, J.; He, Z. Identifying mangrove species using field close-range snapshot hyperspectral imaging and machine-learning techniques. Remote Sens. 2018, 10, 2047. [Google Scholar] [CrossRef]
  14. Marrs, J.; Ni-Meister, W. Machine learning techniques for tree species classification using co-registered LiDAR and hyperspectral data. Remote Sens. 2019, 11, 819. [Google Scholar] [CrossRef]
  15. Al-Awadhi, M.A.; Deshmukh, R.R. Honey Classification using Hyperspectral Imaging and Machine Learning. In Proceedings of the 2021 Smart Technologies, Communication and Robotics (STCR), Tamil Nadu, India, 9–10 October 2021. [Google Scholar] [CrossRef]
  16. Shenming, Q.; Xiang, L.; Zhihua, G. A new hyperspectral image classification method based on spatial-spectral features. Sci. Rep. 2022, 12, 1541. [Google Scholar] [CrossRef] [PubMed]
  17. Nezami, S.; Khoramshahi, E.; Nevalainen, O.; Pölönen, I.; Honkavaara, E. Tree species classification of drone hyperspectral and RGB imagery with deep learning convolutional neural networks. Remote Sens. 2020, 12, 1070. [Google Scholar] [CrossRef]
  18. Fricker, G.A.; Ventura, J.D.; Wolf, J.A.; North, M.P.; Davis, F.W.; Franklin, J. A convolutional neural network classifier identifies tree species in mixed-conifer forest from hyperspectral imagery. Remote Sens. 2019, 11, 2326. [Google Scholar] [CrossRef]
  19. Khokhar, A.A.; Yadav, S.; Khan, F.; Gindi, S. Plant Species Classification with CNN. Int. J. Emerg. Technol. Innov. Res. 2021, 8, 236–240. [Google Scholar]
  20. Kiran, S.G.; Ashwini, S.G. Deep Learning for Plant Species Classification. Int. J. Emerg. Technol. Innov. Res. 2020, 7, 99–105. [Google Scholar]
  21. Sobha, P.G.M.; Thomas, P.A. Deep Learning for Plant Species Classification Survey. In Proceedings of the 2019 International Conference on Advances in Computing, Communication and Control (ICAC3), Mumbai, India, 20–21 December 2019. [Google Scholar] [CrossRef]
  22. Kiss, N.; Czuni, L. Mushroom image classification with CNNs: A case-study of different learning strategies. In Proceedings of the 2021 12th International Symposium on Image and Signal Processing and Analysis (ISPA), Zagreb, Croatia, 13–15 September 2021; pp. 165–170. [Google Scholar] [CrossRef]
  23. Liu, Q. The Development of Image Classification Algorithms Based on CNNs. Highlights Sci. Eng. Technol. 2023, 34, 275–280. [Google Scholar] [CrossRef]
  24. Tropea, M.; Fedele, G. Classifiers Comparison for Convolutional Neural Networks (CNNs) in Image Classification. In Proceedings of the 2019 IEEE/ACM 23rd International Symposium on Distributed Simulation and Real Time Applications (DS-RT), Cosenza, Italy, 7–9 October 2019. [Google Scholar] [CrossRef]
  25. Kan, M.; Aliev, R.; Rudenko, A.; Drobyshev, N.; Petrashen, N.; Kondrateva, E.; Sharaev, M.; Bernstein, A.; Burnaev, E. Interpretation of 3D CNNs for Brain MRI Data Classification. In Proceedings of the Communications in Computer and Information Science, Virtual Event, 27–30 September 2021; Volume 1357 CCIS, pp. 229–241, ISBN 9783030712136. [Google Scholar] [CrossRef]
  26. Zeng, F.; Peng, W.; Kang, G.; Feng, Z.; Yue, X. Spectral Data Classification by One-Dimensional Convolutional Neural Networks. In Proceedings of the 2021 IEEE International Performance, Computing, and Communications Conference (IPCCC), Austin, TX, USA, 28–30 October 2021. [Google Scholar]
  27. Chen, J.; Pisonero, J.; Chen, S.; Wang, X.; Fan, Q.; Duan, Y. Convolutional neural network as a novel classification approach for laser-induced breakdown spectroscopy applications in lithological recognition. Spectrochim. Acta-Part B At. Spectrosc. 2020, 166, 105801. [Google Scholar] [CrossRef]
  28. Gao, H.; Yang, Y.; Li, C.; Zhou, H.; Qu, X. Joint alternate small convolution and feature reuse for hyperspectral image classification. ISPRS Int. J. Geo-Inf. 2018, 7, 349. [Google Scholar] [CrossRef]
  29. Yuan, S.; Song, G.; Huang, G.; Wang, Q. Reshaping Hyperspectral Data into a Two-Dimensional Image for a CNN Model to Classify Plant Species from Reflectance. Remote Sens. 2022, 14, 3972. [Google Scholar] [CrossRef]
  30. Shahid, S.M.; Ko, S.; Kwon, S. Performance Comparison of 1D and 2D Convolutional Neural Networks for Real-Time Classification of Time Series Sensor Data. In Proceedings of the 2022 International Conference on Information Networking (ICOIN), Jeju-si, Republic of Korea, 12–15 January 2022; pp. 507–511. [Google Scholar] [CrossRef]
  31. Zhu, Y.; Brettin, T.; Xia, F.; Partin, A.; Shukla, M.; Yoo, H.; Evrard, Y.A.; Doroshow, J.H.; Stevens, R.L. Converting tabular data into images for deep learning with convolutional neural networks. Sci. Rep. 2021, 11, 11325. [Google Scholar] [CrossRef] [PubMed]
  32. Olson, J.M.  Cartography. In International Encyclopedia of the Social & Behavioral Sciences; Smelser, N.J., Baltes, P.B., Eds.; Pergamon: Oxford, UK, 2001; pp. 1495–1501. ISBN 978-0-08-043076-8. [Google Scholar] [CrossRef]
  33. Opiyo, J. How Does the Size of Input Affect the Performance of a Convolutional NEURAL Network (CNN)? Available online: https://www.quora.com/How-does-the-size-of-input-affect-the-performance-of-a-convolutional-neural-network-CNN (accessed on 24 August 2023).
  34. Aravind, R. How to Pick the Optimal Image Size for Training Convolution Neural Network? Available online: https://medium.com/analytics-vidhya/how-to-pick-the-optimal-image-size-for-training-convolution-neural-network-65702b880f05 (accessed on 24 August 2023).
  35. Jacquemoud, S.; Bidel, L.; Francois, C.; Pavan, G. ANGERS Leaf Optical Properties Database. 2003. Available online: https://ecosis.org/package/angers-leaf-optical-properties-database--2003- (accessed on 5 February 2021).
  36. Kattenborn, T.; Schiefer, F.; Schmidtlein, S. Leaf Reflectance Plant Functional Gradient IFGG/KIT. Available online: https://ecosis.org/package/leaf-reflectance-plant-functional-gradient-ifgg-kit (accessed on 14 May 2022).
  37. Kothari, S.; Beauchamp-Rioux, R.; Blanchard, F.; Crofts, A.L.; Girard, A.; Guilbeault-Mayers, X.; Hacker, P.W.; Pardo, U.; Schweiger, A.K.; Demers-Thibeault, S.; et al. CABO 2018–2019 Leaf-Level Spectra v2. Available online: https://ecosis.org/package/cabo-2018-2019-leaf-level-spectra-v2 (accessed on 14 May 2022).
  38. Wang, Z. Fresh Leaf Spectra to Estimate LMA over NEON Domains in Eastern United States. Available online: https://ecosis.org/package/fresh-leaf-spectra-to-estimate-lma-over-neon-domains-in-eastern-united-states (accessed on 5 February 2021).
  39. Serbin, S.P.; Townsend, P.A. NASA FFT Project Leaf Reflectance Morphology and Biochemistry for Northern Temperate Forests. Available online: https://ecosis.org/package/nasa-fft-project-leaf-reflectance-morphology-and-biochemistry-for-northern-temperate-forests (accessed on 14 May 2022).
  40. Burnett, A.C.; Serbin, S.P.; Davidson, K.J.; Ely, K.S.; Rogers, A. Hyperspectral Leaf Reflectance, Biochemistry, and Physiology of Droughted and Watered Crops. Available online: https://ecosis.org/package/hyperspectral-leaf-reflectance--biochemistry--and-physiology-of-droughted-and-watered-crops (accessed on 14 May 2022).
  41. Beamlab You Can Probably Use Deep Learning Even If Your Data Isn’t that Big. Available online: https://beamandrew.github.io/deeplearning/2017/06/04/deep_learning_works.html (accessed on 30 June 2021).
  42. Gavrikov, P. Visualkeras. Available online: https://github.com/paulgavrikov/visualkeras (accessed on 25 November 2023).
  43. Buitinck, L.; Louppe, G.; Blondel, M.; Pedregosa, F.; Mueller, A.; Grisel, O.; Niculae, V.; Prettenhofer, P.; Gramfort, A.; Grobler, J.; et al. {API} design for machine learning software: Experiences from the scikit-learn project. In Proceedings of the ECML PKDD Workshop: Languages for Data Mining and Machine Learning, Prague, Czech Republic, 23–27 September 2013; pp. 108–122. [Google Scholar]
  44. Song, G.; Wang, Q. Including leaf traits improves a deep neural network model for predicting photosynthetic capacity from reflectance. Remote Sens. 2021, 13, 4467. [Google Scholar] [CrossRef]
  45. Wang, Z.; Skidmore, A.K.; Wang, T.; Darvishzadeh, R.; Hearne, J. Applicability of the PROSPECT model for estimating protein and cellulose + lignin in fresh leaves. Remote Sens. Environ. 2015, 168, 205–218. [Google Scholar] [CrossRef]
  46. Castro-Esau, K.L.; Sánchez-Azofeifa, G.A.; Caelli, T. Discrimination of lianas and trees with leaf-level hyperspectral data. Remote Sens. Environ. 2004, 90, 353–372. [Google Scholar] [CrossRef]
Figure 1. Data source and distribution of different datasets, including ANGERS, KARLSRUHE, CANADA, NEON, NCNE, and UPTON.
Figure 2. Average reflectance of the selected species. The legend shows the symbol of each species; for the species corresponding to each symbol, refer to Table 1.
Figure 3. Sample images generated from reshaping a full-wavelength leaf reflectance measurement containing 2025 features ranging from 400 to 2424 nm. (A): colorized reflectance structure data; (B): reshaped 2D image data. Fifteen image shapes were created by selecting factorizations of the total number of features (2025). Specifically, the feature vectors were reshaped into images with the following pixel dimensions: (a) 1 × 2025; (b) 3 × 675; (c) 5 × 405; (d) 9 × 225; (e) 15 × 135; (f) 25 × 81; (g) 27 × 75; (h) 45 × 45 as shown; (i) 75 × 27; (j) 81 × 25; (k) 135 × 15; (l) 225 × 9; (m) 405 × 5; (n) 675 × 3; (o) 2025 × 1 [29]. Each of these image shape variations generated a separate dataset that was then used individually to train and evaluate the CNN classification models. The shapes (a), (b), (c), (m), (n), and (o) that are either too long or too high were subsequently excluded in this figure. The original images were in greyscale hence we colorized all fifteen images for improved visualization purposes. For the greyscale figure please refer to Supplementary File Figure S1.
Figure 4. Proposed CNN3 model architecture. CNN3 architecture differs from CNN1 and CNN2 only in the number of Conv2D and MaxPooling2D layers used. The figure is generated by visualkeras [42].
Figure 5. Schematic representation of the model, diverse image datasets of various shapes and evaluation parameters. From left to right, each species is reshaped into a different-shaped image dataset (letters a–o refer to a certain shape, see Figure 3). Each dataset is trained by nine CNN models. The trained model is then used to evaluate accuracy, precision, and F1-score for each species. The greatest mean accuracy, precision, and F1-score for each model, along with the dataset (disregarding the model specifics), will be compared.
Figure 6. Flowchart of the process followed in this study. In the training stage, the 1D structured LLRD undergoes reshaping into a variety of 2D grayscale images with differing shapes, which serve as input data for CNN models. During the application stage, randomly selected examples are employed to make predictions using trained models. The predicted and actual species are then compared to produce prediction performance metrics.
Figure 7. Circular diagram showing the relationships between reshaped image datasets and identified species. Nodes in the upper portion represent different reshaped image datasets (lowercase letters d–m denote distinct shapes, refer to Figure 3 for visuals; shapes such as ‘a’, ‘b’, ‘c’, ‘n’, ‘o’ were excluded from the diagram as their prediction accuracy fell below 90%), while lower nodes correspond to identified species. Edges indicate connections between datasets and species. Larger node size represents a higher accumulated average correct prediction ratio. Thicker edges signify greater weights for the average correct prediction ratio between linked datasets and species.
Figure 8. Accuracy matrix of prediction results for 22 species using the optimal model trained with ‘l’ shape. Both the y-axis and the x-axis are species codes, refer to Table 1 for the corresponding species.
Figure 9. Difference between the (d) 9 × 225 pixel and (l) 225 × 9 pixel shaped sample images of twenty-two species. The original images were in greyscale, images are shown in color only for visualization purposes. The input data provided to CNN models consisted of the original greyscale hyperspectral reflectance values. For the greyscale figure, please refer to Supplementary File Figure S3.
Figure 10. Comparison of the best hyperspectral image shapes for species discrimination based on the CNN models. (Panel A) shows a sample original reflectance spectrum. (Panel B) displays the optimal reshaped images across all models, namely the ‘j’, ‘k’, and ‘l’ shapes. To aid interpretation of each reflectance component, the generated images are colorized. Reflectance values ranging from 0 to 1 were reshaped into a grayscale image and scaled to integer values between 0 and 255. Consequently, the top-left corner of each image (row 0, column 0) corresponds to the scaled value at the 400 nm wavelength, while the bottom-right corner represents the scaled value at 2424 nm (row 81, column 25 for ‘k’; row 135, column 15 for ‘j’; row 225, column 9 for ‘l’). Images are shown in color for visualization only; please refer to Supplementary File Figure S4 for the grayscale images.
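As a concrete illustration of the reshaping described above, the following NumPy sketch (an illustrative example under stated assumptions, not the authors' implementation) turns a 0–1 reflectance spectrum with 2025 bands into an 8-bit grayscale image in row-major order, so that 400 nm maps to the top-left pixel and 2424 nm to the bottom-right. The 225 × 9 (‘l’) and 9 × 225 (‘d’) shapes are used as examples, and the input spectrum is synthetic.

```python
import numpy as np

def spectrum_to_image(reflectance, shape=(225, 9)):
    """Reshape a 1D leaf reflectance spectrum (values in 0-1) into a 2D
    grayscale image, scaling to 8-bit integers (0-255).

    Row-major order: band 0 (400 nm) lands at row 0, column 0, and the
    last band (2424 nm) at the bottom-right pixel.
    """
    reflectance = np.asarray(reflectance, dtype=float)
    rows, cols = shape
    if reflectance.size != rows * cols:
        raise ValueError(f"expected {rows * cols} bands, got {reflectance.size}")
    gray = np.clip(reflectance, 0.0, 1.0) * 255.0
    return gray.round().astype(np.uint8).reshape(rows, cols)

# Example with a synthetic spectrum covering 400-2424 nm at 1 nm steps (2025 bands).
wavelengths = np.arange(400, 2425)                      # 2025 values
spectrum = np.random.default_rng(0).uniform(0.0, 1.0, wavelengths.size)
img_l = spectrum_to_image(spectrum, (225, 9))           # 'l' shape
img_d = spectrum_to_image(spectrum, (9, 225))           # 'd' shape
print(img_l.shape, img_d.shape)
```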
Figure 11. Average prediction accuracy and precision for each image shape, irrespective of the convolutional neural network (CNN) models used. The x-axis represents the different image shapes labelled ‘a’ through ‘o’. The y-axis indicates the range of accuracy and precision values achieved.
Table 1. Species Latin name, symbol, code, group, and source. AN: ANGERS Leaf Optical Properties Database (2003); KA: KARLSRUHE leaf reflectance plant functional gradient (IFGG/KIT); CA: CABO 2018-2019 Leaf-Level Spectra v2; NE: Fresh Leaf Spectra to Estimate LMA over NEON domains in eastern United States; NC: NCNE, NASA FFT Project Leaf Reflectance Morphology and Biochemistry for Northern Temperate Forests; UP: UPTON, Hyperspectral leaf reflectance, biochemistry, and physiology of droughted and watered crops.
| Latin Name | Symbol | Code | Group | CA | KA | AN | UP | NE | NC | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Betula papyrifera Marshall | BEPA | 0 | Tree | 31 | | | | 51 | 10 | 92 |
| Quercus rubra L. | QURU | 1 | Tree | 21 | | | | 90 | 84 | 195 |
| Raphanus sativus L. | RASA | 2 | Herb | | | | 195 | | | 195 |
| Acer saccharum Marshall | ACSA | 3 | Tree | 81 | | | | 18 | | 99 |
| Betula populifolia Marshall | BEPO | 4 | Tree, Shrub | 104 | | | | | | 104 |
| Phalaris arundinacea Linnaeus | PHAR | 5 | Herb | 75 | 22 | | | | | 97 |
| Andropogon gerardii Vitman | ANGE | 6 | Herb | | | | | 89 | 2 | 91 |
| Acer rubrum L. | ACRU | 7 | Tree | 57 | | | | 96 | 42 | 195 |
| Tsuga canadensis (L.) Carrière | TSCA | 8 | Tree | | | | | | 112 | 112 |
| Acacia smallii Isely | ACSM | 9 | Tree, Shrub | | | | | | 119 | 119 |
| Capsicum annuum L. | CAAN | 10 | Herb | | | | 195 | | | 195 |
| Populus ×canadensis Moench | POCA | 11 | Tree | | | | 195 | | | 195 |
| Helianthus annuus L. | HEAN | 12 | Herb | | | | 172 | | | 172 |
| Populus tremuloides Michx. | POTR | 13 | Tree | 120 | | | | | | 120 |
| Quercus alba L. | QUAL | 14 | Tree | 10 | | | | 78 | 60 | 148 |
| Acer pseudoplatanus L. | ACPS | 15 | Tree | | | 181 | | | | 181 |
| Pinus strobus L. | PIST | 16 | Tree | 12 | | | | | 82 | 94 |
| Cucurbita pepo L. | CUPE | 17 | Vine, Herb | | | | 195 | | | 195 |
| Abies balsamea (L.) Mill. | ABBA | 18 | Tree | 19 | | | | | 133 | 152 |
| Setaria italica (L.) P. Beauv. | SEIT | 19 | Herb | | | | 96 | | | 96 |
| Sorghum bicolor (L.) Moench | SOBI | 20 | Herb | | | | 151 | | | 151 |
| Fagus grandifolia Ehrh. | FAGR | 21 | Tree | 47 | | | | 39 | 18 | 104 |
| Total | | | | 577 | 22 | 181 | 1199 | 461 | 662 | 3102 |
Table 2. Summary of the CNN model architectures. The digit following “CNN” indicates the number of convolutional layers (1, 2, or 3), and the letters A–D denote the different pooling-layer configurations. ‘L × W’ is the pixel dimension of the input image dataset. When the max-pooling function is applied, the image dimensions change to ‘L1 × W1’ and/or ‘L2 × W2’. The last calculated output value varies with the dataset and model used. The output layer consists of 22 dense neurons, each corresponding to one of the 22 species.
| Layer | Parameter | CNN1A | CNN1B | CNN2A | CNN2B | CNN2C | CNN3A | CNN3B | CNN3C | CNN3D |
|---|---|---|---|---|---|---|---|---|---|---|
| Input | | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 |
| Rescaling | | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 | L × W × 3 |
| Conv1 | Kernel | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 |
| | Stride | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 |
| | Output | L × W × 32 | L × W × 32 | L × W × 32 | L × W × 32 | L × W × 32 | L × W × 32 | L × W × 32 | L × W × 32 | L × W × 32 |
| Pooling | Output | - | L1 × W1 × 32 | - | L1 × W1 × 32 | L1 × W1 × 32 | - | L1 × W1 × 32 | L1 × W1 × 32 | L1 × W1 × 32 |
| Conv2 | Kernel | - | - | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 |
| | Stride | - | - | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 |
| | Output | - | - | L × W × 32 | L1 × W1 × 32 | L1 × W1 × 32 | L × W × 32 | L1 × W1 × 32 | L1 × W1 × 32 | L1 × W1 × 32 |
| Pooling | Output | - | - | - | - | L2 × W2 × 32 | - | - | L2 × W2 × 32 | L2 × W2 × 32 |
| Conv3 | Kernel | - | - | - | - | - | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 |
| | Stride | - | - | - | - | - | 1 × 1 | 1 × 1 | 1 × 1 | 1 × 1 |
| | Output | - | - | - | - | - | L × W × 32 | L1 × W1 × 32 | L2 × W2 × 32 | L2 × W2 × 32 |
| Pooling | Output | - | - | - | - | - | - | - | - | L3 × W3 × 64 |
| Dropout | Rate (%) | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
| Flatten | | Last calculated output value | | | | | | | | |
| Dense | | Flatten × 128 | | | | | | | | |
| Output | | 1 × 22 | | | | | | | | |
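To make Table 2 easier to relate to actual model code, the following is a minimal TensorFlow/Keras sketch of one column, CNN3B (three 3 × 3 convolutions with a single max-pooling step after Conv1, dropout of 0.2, a 128-unit dense layer, and a 22-class output). The ReLU activations, ‘same’ padding, 32 filters per convolution, 2 × 2 pooling window, softmax output, and the optimizer/loss choices are assumptions not stated in the table.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn3b(input_shape=(225, 9, 3), n_classes=22):
    """Sketch of the CNN3B column of Table 2: Conv1 -> MaxPool -> Conv2 -> Conv3."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),                      # map 0-255 grayscale to 0-1
        layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),            # L1 x W1 x 32
        layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu"),
        layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu"),
        layers.Dropout(0.2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),    # 22 species
    ])

model = build_cnn3b()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```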
Table 3. The best predictive result for each convolutional neural network (CNN) model and the image shape with which it was obtained, reported as the mean and standard deviation of prediction accuracy, precision, and F1-score across a total of thirty models.
| Model | Image Shape | Accuracy (%) | Precision (%) | F1-Score |
|---|---|---|---|---|
| cnn1A | k | 89.77 ± 4.69 | 91.03 ± 3.94 | 0.90 ± 0.05 |
| cnn1B | j | 91.15 ± 1.50 | 92.07 ± 1.21 | 0.91 ± 0.02 |
| cnn2A | l | 93.82 ± 1.32 | 94.45 ± 1.09 | 0.94 ± 0.01 |
| cnn2B | l | 93.63 ± 1.16 | 94.23 ± 0.98 | 0.94 ± 0.01 |
| cnn2C | k | 92.55 ± 1.60 | 93.23 ± 1.33 | 0.93 ± 0.02 |
| cnn3A | l | 93.84 ± 1.65 | 94.41 ± 1.30 | 0.94 ± 0.02 |
| cnn3B | l | 93.95 ± 1.30 | 94.55 ± 1.03 | 0.94 ± 0.01 |
| cnn3C | l | 92.67 ± 1.61 | 93.45 ± 1.17 | 0.93 ± 0.02 |
| cnn3D | l | 92.76 ± 1.36 | 93.31 ± 1.15 | 0.93 ± 0.01 |
Table 4. The optimal predictive results for each image shape, irrespective of the convolutional neural network (CNN) model employed, reported as the mean and standard deviation of prediction accuracy, precision, and F1-score across 30 models.
| Image Shape | Accuracy (%) | Precision (%) | F1-Score |
|---|---|---|---|
| a | 79.27 ± 4.75 | 81.09 ± 4.94 | 0.79 ± 0.05 |
| b | 88.17 ± 2.53 | 89.62 ± 2.02 | 0.88 ± 0.03 |
| c | 87.69 ± 2.32 | 89.15 ± 1.86 | 0.88 ± 0.02 |
| d | 89.78 ± 2.17 | 90.88 ± 1.69 | 0.90 ± 0.02 |
| e | 90.51 ± 2.19 | 91.44 ± 1.87 | 0.90 ± 0.02 |
| f | 91.30 ± 2.00 | 92.43 ± 1.47 | 0.91 ± 0.02 |
| g | 91.40 ± 2.06 | 92.64 ± 1.58 | 0.91 ± 0.02 |
| h | 93.07 ± 1.87 | 94.00 ± 1.26 | 0.93 ± 0.02 |
| i | 92.62 ± 1.52 | 93.38 ± 1.21 | 0.93 ± 0.02 |
| j | 92.82 ± 2.19 | 93.61 ± 1.72 | 0.93 ± 0.02 |
| k | 92.82 ± 1.64 | 93.58 ± 1.31 | 0.93 ± 0.02 |
| l | 93.95 ± 1.30 | 94.55 ± 1.03 | 0.94 ± 0.01 |
| m | 91.08 ± 2.94 | 92.09 ± 2.58 | 0.91 ± 0.03 |
| n | 88.54 ± 3.09 | 89.92 ± 2.68 | 0.88 ± 0.03 |
| o | 77.99 ± 6.72 | 79.82 ± 7.51 | 0.77 ± 0.07 |
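The mean ± standard deviation entries of Tables 3 and 4 summarize repeated evaluation runs. A small illustrative sketch of how such summaries can be computed with scikit-learn and NumPy is given below; the predictions are random placeholders, and the macro averaging of precision and F1-score is an assumption rather than a detail stated in these tables.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, f1_score

rng = np.random.default_rng(7)
accs, precs, f1s = [], [], []

for run in range(30):                       # e.g., thirty evaluation runs
    # Placeholder labels for 22 species; replace with real test-set predictions.
    y_true = rng.integers(0, 22, size=400)
    y_pred = np.where(rng.random(400) < 0.94, y_true, rng.integers(0, 22, size=400))

    accs.append(accuracy_score(y_true, y_pred))
    precs.append(precision_score(y_true, y_pred, average="macro", zero_division=0))
    f1s.append(f1_score(y_true, y_pred, average="macro", zero_division=0))

print(f"Accuracy  {100 * np.mean(accs):.2f} ± {100 * np.std(accs):.2f} %")
print(f"Precision {100 * np.mean(precs):.2f} ± {100 * np.std(precs):.2f} %")
print(f"F1-score  {np.mean(f1s):.2f} ± {np.std(f1s):.2f}")
```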
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
