Article

Inference in Supervised Spectral Classifiers for On-Board Hyperspectral Imaging: An Overview

1 Computer Architecture Group (gaZ), Department of Computer Science and Systems Engineering, Ada Byron Building, University of Zaragoza, C/María de Luna 1, E-50018 Zaragoza, Spain
2 Hyperspectral Computing Laboratory (HyperComp), Department of Computer Technology and Communications, Escuela Politecnica de Caceres, University of Extremadura, Avenida de la Universidad sn, E-10003 Caceres, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(3), 534; https://doi.org/10.3390/rs12030534
Submission received: 13 December 2019 / Revised: 31 January 2020 / Accepted: 4 February 2020 / Published: 6 February 2020
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Applications)

Abstract:
Machine learning techniques are widely used for pixel-wise classification of hyperspectral images. These methods can achieve high accuracy, but most of them are computationally intensive models. This poses a problem for their implementation in low-power and embedded systems intended for on-board processing, in which energy consumption and model size are as important as accuracy. With a focus on embedded and on-board systems (in which only the inference step is performed after an off-line training process), in this paper we provide a comprehensive overview of the inference properties of the most relevant techniques for hyperspectral image classification. For this purpose, we compare the size of the trained models and the operations required during the inference step (which are directly related to the hardware and energy requirements). Our goal is to search for appropriate trade-offs between on-board implementation (such as model size and energy consumption) and classification accuracy.

1. Introduction

Fostered by significant advances in computer technology from the end of the last century to the present, the Earth observation (EO) field has greatly evolved over the last 20 years [1]. Improvements in hardware and software have allowed for the development of more sophisticated and powerful remote sensing systems [2], which in turn have enhanced the acquisition of remote sensing data in terms of both quantity and quality, and have also improved the analysis and processing of these data [3]. In fact, remote sensing technology has become a fundamental tool to increase our knowledge of the Earth and of how human factors, such as globalization, industrialization and urbanization, can affect the environment [4]. It provides relevant information to address current environmental problems such as desertification [5,6], deforestation [7,8], depletion of water resources [9], soil erosion [10,11,12], eutrophication of freshwater and coastal marine ecosystems [13,14], warming of seas and oceans [15], global warming and abnormal climate change [16] and the degradation of urban areas [17], among others.
In particular, advances in optical remote sensing imaging [18] have allowed for the acquisition of high spatial, spectral and temporal resolution images, gathered from the Earth's surface in multiple formats, ranging from very-high spatial-resolution (VHR) panchromatic images to hyperspectral images with hundreds of narrow and continuous spectral bands. Focusing on hyperspectral imaging (HSI) [19], this kind of data comprises abundant spectral–spatial information for large coverage, obtained by capturing the solar radiation that is absorbed and reflected by ground targets at different wavelengths, usually ranging from the visible to the near (NIR) and short wavelength infrared (SWIR) [20]. In this sense, HSI data obtained by airborne and satellite platforms consist of huge data cubes, where each pixel represents the spectral signature of the captured object. The shape of these spectral signatures depends on the physical and chemical behavior of the materials that compose the object, working as a fingerprint for each terrestrial material. This signature allows for a precise characterization of the land cover, and is currently widely exploited in the fields of image analysis and pattern recognition [21]. Advances in HSI processing and analysis methods have enabled the widespread incorporation of these data into a vast range of applications. Regarding forest preservation and management [22,23,24,25], HSI data can be applied to invasive species detection [26,27,28], forest health and diseases [29,30,31] and analyses of the relationship between precipitation, atmospheric conditions and forest health [32,33]. Also, regarding the management of other natural resources, there are works focused on freshwater and maritime resources [34,35,36,37] and geological and mineralogical resources [38,39,40,41]. In relation to agricultural and livestock farming activities [42], the available literature compiles a large number of works about HSI applied to precision agriculture [43,44], analyzing soil properties and status [45,46,47], investigating diseases and pests affecting crops [48,49] and developing libraries of spectral signatures specialized in crops [50]. Moreover, HSI data can be applied to urban planning [51,52,53], military and defense applications [54,55,56] and disaster prediction and management [57,58], among others.
The wide range of HSI applications calls for highly efficient and accurate methods to make the most of the rich spectral information contained in HSI data. In this context, machine learning algorithms have been adopted to process and analyze HSI data. These algorithms include spectral unmixing [59,60], image segmentation [61,62,63], feature extraction [64,65], spectral reduction [66,67], anomaly, change and target detection [68,69,70,71,72,73] and land-cover classification methods [74,75], among others. Among these algorithms, supervised pixel-wise classifiers can derive more accurate results and are hence more widely used for image classification than unsupervised approaches. This higher accuracy is mainly due to the class-specific information provided during the training stage.
In order to define the classification problem in mathematical terms, let $\mathbf{X} \in \mathbb{R}^{N \times B} \equiv \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ denote the HSI scene, considered as an array of $N$ vectors in which each $\mathbf{x}_i \in \mathbb{R}^{B} \equiv \{x_{i,1}, \dots, x_{i,B}\}$ is composed of $B$ spectral bands, and let $\mathcal{Y} \equiv \{1, \dots, K\}$ be a set of $K$ land-cover classes. Classification methods define a mapping function $f(\cdot, \Theta): \mathbf{X} \rightarrow \mathcal{Y}$ with learnable parameters $\Theta$ that essentially describes the relationship between the spectral vector $\mathbf{x}_i$ (input) and its corresponding label $y_i \in \mathcal{Y}$ (output), creating feature–label pairs $\{\mathbf{x}_i, y_i\}_{i=1}^{N}$. The final goal is to obtain the classification map $\mathbf{Y} \in \mathbb{R}^{N} \equiv \{y_1, \dots, y_N\}$ by modeling the conditional distribution $P(y \in \mathcal{Y} \mid \mathbf{x} \in \mathbf{X}, \Theta)$ in order to infer the class label of each pixel. Usually, this posterior distribution is optimized by training the classifier on a subset $\mathcal{D}_{train}$ composed of $M$ random, independent and identically distributed (i.i.d.) observations that follow the joint distribution $P(\mathbf{x}, y) = P(\mathbf{x})P(y \mid \mathbf{x})$, i.e., a subset of known and representative labeled data, adjusting the parameters $\Theta$ to minimize the empirical risk $R(f)$ [76], defined as Equation (1) indicates:
$$R(f) = \int L\big(f(\mathbf{x}, \Theta), y\big)\, dP(\mathbf{x}, y) \approx \frac{1}{M} \sum_{i=1}^{M} L\big(f(\mathbf{x}_i, \Theta), y_i\big) \qquad (1)$$
where $L$ is the loss function, defined over $P(\mathbf{x}, y)$ as the discrepancy between the expected label $y$ and the obtained classifier's output $f(\mathbf{x}, \Theta)$. A wide variety of supervised-spectral techniques have been developed within the machine learning field to perform the classification of HSI data [77]. Some of the most popular ones can be categorized into [74]: (i) probabilistic approaches, such as the multinomial logistic regression (MLR) [62,78] and its variants (sparse MLR (SMLR) [79,80] and subspace MLR (MLRsub) [81,82]), the logistic regression via variable splitting and augmented Lagrangian (LORSAL) [63,83] or the maximum likelihood estimation (MLE) [84], among others, which obtain as a result the probability of $\mathbf{x}_i$ belonging to each of the $K$ considered classes [85]; (ii) the decision tree (DT) [86,87,88], which defines a non-parametric classification/regression method with a hierarchical structure of branches and leaves; (iii) ensemble methods, which combine multiple classifiers to enhance the classification performance, for instance random forests (RFs) [89,90], whose output is composed of the collective decisions of several DTs to which majority voting is applied, or boosting- and bagging-based methods such as RealBoost [91,92], AdaBoost [93,94,95,96], Gradient Boosting [97,98] or the ensemble extreme learning machine (E$^2$LM) [99], among others; (iv) kernel approaches, such as the non-probabilistic support vector machine (SVM) [100,101], which exhibits a good performance when handling high-dimensional data and limited training samples (although its performance is greatly affected by the kernel selection and the initial hyperparameter setting); and (v) the non-parametric artificial neural networks (ANNs), which exhibit a great generalization power without prior knowledge about the statistical properties of the data, also offering a great variety of architectures thanks to their flexible structure based on the stacking of layers composed of computing neurons [75], allowing for the implementation of traditional shallow fully-connected models (such as the multilayer perceptron (MLP) [102,103]) and deep convolutional models (such as convolutional neural networks (CNNs) [104], and complex models such as residual networks (ResNets) [105] and capsule models [106]).
These methods need to face the intrinsic complexity of processing HSI data, related to the huge amount of available spectral information (the curse of dimensionality [107]), the correlation and redundancy of spectral bands [108], the lack of enough labeled samples to perform supervised training [109] and overfitting problems. Moreover, current HSI classification methods must satisfy a growing demand for effective and efficient methodologies from a computational point of view [110,111,112], with the idea of being executed on low-power platforms that allow for on-board processing of data (e.g., smallsats [113,114]). In this sense, high performance computing (HPC) approaches such as commodity clusters [115,116] and graphics processing units (GPUs) have been widely employed to process HSI data [117]. However, the adaptation of these computing platforms to on-board processing is quite difficult due to their high requirements in terms of energy consumption.
Traditionally, the data gathered by remote sensors have to be downloaded to the ground segment when the aircraft or spacecraft platform is within range of the ground stations, in order to be pre-processed by applying registration and correction techniques and then distributed to the final users, who perform the final processing (classification, unmixing and object detection). Nevertheless, this procedure introduces important delays related to the communication of a large amount of remote sensing data (usually in the range of GB–TB) between the source and the final target, producing a bottleneck that can seriously reduce the effectiveness of real-time applications [118]. In this regard, real-time on-board processing is a very interesting topic within the remote sensing field that has significantly grown in recent years to mitigate these limitations and to provide a solution to these types of applications [119,120,121,122,123]. In addition to avoiding communication latencies, on-board processing can considerably reduce the amount of bandwidth and storage required in the collection of HSI data, allowing for more selective data acquisition and reducing the cost of on-the-ground processing systems [124]. As a result, low-power consumption architectures such as field-programmable gate arrays (FPGAs) [125,126] and efficient GPU architectures [110] have emerged as an alternative to transfer part of the processing from the ground segment to the remote sensing sensor. A variety of techniques have been adapted to be carried out on-board [127], ranging from pre-processing methods, such as data calibration [128], correction [129], compression [123,130] and georeferencing [131], to final user applications, for instance data unmixing [126], object detection [132] and classification [110,133]. In the context of classification, the training of supervised methods is usually performed offline (in external systems), so that only the trained model is implemented in the device, which only performs the inference operation. On embedded and on-board systems, the size and energy consumption of the model are crucial parameters, so it is necessary to find an appropriate trade-off between performance (in terms of accuracy measurements) and energy consumption (in terms of power consumption and execution times). In this paper, we perform a detailed analysis and study of the performance of machine learning methods in the task of supervised, spectral-based classification of HSI data, with particular emphasis on the inference stage, as it is the part that is implemented in on-board systems. Specifically, we conduct an in-depth review and analysis of the advantages and disadvantages of these methods in the aforementioned context.
The remainder of this paper is organized as follows. Section 2 provides an overview of the considered machine learning methods to perform supervised, spectral-based HSI classification. Section 3 presents the considered HSI scenes and the experimental setting configurations adopted to conduct the analysis among the selected HSI classifiers. Section 4 provides a detailed experimental discussion, highlighting the advantages and disadvantages of each method in terms of accuracy and computational measurements. Finally, Section 5 concludes the paper with some remarks and hints at plausible future research lines.

2. Inference Characteristics of Models for Hyperspectral Images Classification

We selected some of the most relevant techniques for HSI data classification to be compared in the inference stage. These techniques are: multinomial logistic regression (MLR), random forest (RF), support vector machine (SVM), multi-layer perceptron (MLP) and a shallow convolutional neural network (CNN) with 1D kernels, as well as gradient boosting decision trees (GBDT), a tree-based technique that has been successfully used in other classification problems. In order to compare them, it is necessary to characterize each algorithm in the inference stage, measuring the size in memory of the trained model and analyzing the number and type of operations needed to perform the complete inference stage for the input data.
In the case of HSI classification, the input is a single pixel vector composed of a series of features, and each one of these features is a 16-bit integer value. Each model treats the data in different ways so, for instance, the size of the layers of a neural network will depend on the number of features of the pixels of each data set, while the size of a tree-based model will not. We will explain the characteristics of the different models and the inference process for each of them.

2.1. Multinomial Logistic Regression

The MLR classifier is a probabilistic model that extends binomial logistic regression to multi-class classification, approximating the posterior probability of each class by a softmax transformation. In particular, for a given HSI training set $\mathcal{D}_{train} = \{\mathbf{x}_i, y_i\}_{i=1}^{M}$ composed of $M$ pairs of spectral pixels $\mathbf{x}_i \in \mathbb{R}^{B}$ and their corresponding labels $y_i \in \mathcal{Y} = \{1, \dots, K\}$, the posterior probability $P(y_i = k \mid \mathbf{x}_i, \Theta)$ of the $k$-th class is given by Equation (2) [134]:
$$P(y_i = k \mid \mathbf{x}_i, \Theta) = \frac{\exp\big(\boldsymbol{\theta}_k \cdot h(\mathbf{x}_i)\big)}{\sum_{j=1}^{K} \exp\big(\boldsymbol{\theta}_j \cdot h(\mathbf{x}_i)\big)} \qquad (2)$$
where $\boldsymbol{\theta}_k$ is the set of logistic regressors for class $k$, considering $\Theta = \{\boldsymbol{\theta}_1, \dots, \boldsymbol{\theta}_{K-1}, \mathbf{0}\}$ as all the coefficients of the MLR, while $h(\cdot)$ is a feature extraction function defined over the spectral data $\mathbf{x}_i$, which can be linear, i.e., $h(\mathbf{x}_i) = \{1, x_{i,1}, \dots, x_{i,B}\}$, or non-linear (for instance, kernel approaches [135]). In this work, the linear MLR is considered.
Standardization of the data set is needed before training, so that the data are compacted and centered around the average value. This process implies calculating the average ($\bar{x}$) and standard deviation ($s$) values of the entire data set $\mathbf{X}$ and applying Equation (3) to each pixel $\mathbf{x}_i$. In HSI processing, it is common to pre-process the entire data set before splitting it into the training and testing subsets, so $\bar{x}$ and $s$ include the test set, which is already standardized to perform the inference after training. Nevertheless, in a real environment, the $\bar{x}$ and $s$ values would be calculated from the training data, and the standardization would then be applied on-the-fly to the input data received from the sensor. This implies not only some extra calculations to perform the inference for each pixel, but also some extra assumptions on the representativeness of the training data distribution. These extra calculations are not included in the measurements of Section 4.2.
$$\mathbf{x}_i = \frac{\mathbf{x}_i - \bar{x}}{s} \qquad (3)$$
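For illustration, the following minimal sketch shows how this on-the-fly standardization could look on the target device, assuming per-band statistics stored alongside the trained model (all names and values are illustrative):

```python
import numpy as np

def standardize_pixel(pixel, train_mean, train_std):
    # Apply Equation (3) on-the-fly to one incoming pixel. The statistics
    # are computed offline from the training data only and shipped to the
    # device together with the trained model.
    return (pixel - train_mean) / train_std

# Illustrative use: a 200-band pixel arriving as 16-bit integers
pixel = np.random.randint(0, 2**16, size=200).astype(np.float32)
train_mean = np.full(200, 3000.0, dtype=np.float32)  # hypothetical statistics
train_std = np.full(200, 500.0, dtype=np.float32)
x = standardize_pixel(pixel, train_mean, train_std)
```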
The MLR model has been implemented in this work with the scikit learn logistic regression model, using a multinomial approach and the lbfgs solver [136]. The trained model consists of one estimator per class, and the output of each estimator represents the probability of the input belonging to that class. The formulation of the inference for the class-$k$ estimator ($y_k$) corresponds to Equation (4), where $\mathbf{x}_i = \{1, x_{i,1}, \dots, x_{i,B}\}$ is the input pixel and $\boldsymbol{\theta}_k = \{\theta_{k,0}, \dots, \theta_{k,B}\}$ contains the bias value and the coefficients of the estimator of class $k$.
$$y_{k,i} = \boldsymbol{\theta}_k \cdot \mathbf{x}_i = \theta_{k,0} + \theta_{k,1} x_{i,1} + \theta_{k,2} x_{i,2} + \dots + \theta_{k,B} x_{i,B} \qquad (4)$$
As a result, the model size depends on the number of classes ($K$) and features ($B$), having $K(B+1)$ parameters. The inference of one pixel requires $KB$ floating point multiplications and $KB$ floating point accumulations. In this case, we have a very small model that does not require many calculations. However, since it is a linear probabilistic model, its accuracy may be limited in practice, although it can be very accurate when there is a linear relation between the inputs and the outputs.
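As a reference, a minimal sketch of this inference step could look as follows; the sizes are illustrative, and the softmax of Equation (2) is omitted because it does not change which class obtains the highest score:

```python
import numpy as np

def mlr_inference(pixel, theta):
    # Equation (4) for every class: theta has shape (K, B + 1), i.e., the
    # K(B + 1) parameters mentioned above (one bias plus B coefficients
    # per class). The product costs K*B float multiply-accumulates.
    scores = theta[:, 0] + theta[:, 1:] @ pixel
    return int(np.argmax(scores))  # index of the most probable class

# Illustrative sizes: K = 16 classes, B = 200 spectral bands
theta = np.random.randn(16, 201).astype(np.float32)
pixel = np.random.randn(200).astype(np.float32)
label = mlr_inference(pixel, theta)
```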

2.2. Decision Trees

A decision tree is a prediction algorithm based on a series of comparisons connected in a binary tree structure, so that each node's comparison leads the search to one of its child nodes, and so on, until reaching a leaf node that contains the result of the prediction. During training, the most meaningful features are selected and used for the comparisons in the tree. Hence, the features that contain more information will be used more frequently in comparisons, and those that do not provide useful information for the classification problem will simply be ignored [137]. This is an interesting property of this algorithm since, based on the same decisions made during training to choose features, we can easily determine the feature importance. This means that decision trees can also be used to find out which features carry the main information load, and that information can be used to train even smaller models that keep most of the information of the image with much less memory impact.
Figure 1 shows the inference operation of a trained decision tree on a series of feature inputs with a toy example. First, this tree takes feature 3 of the input and compares its value with the threshold value 14,300; as the input value is lower than the threshold, it continues to the left child, and it repeats the same procedure until it reaches the leaf with 0.15 as its output value.
One of the benefits of using decision trees over other techniques is that they do not need any input pre-processing, such as data normalization, scaling or centering: they work with the input data as they are [138]. The reason is that features are never mixed. As can be seen in Figure 1, in each comparison the tree compares the value of an input feature with another value of the same feature, so different features can have different scales. In other machine learning models, as we just saw in the MLR for example, features are mixed to generate a single value, so if their values belong to different orders of magnitude, some features will initially dominate the result. This can be compensated for during the training process, but in general normalization or another pre-processing technique will be needed to speed up training and improve the results. Besides, the size of the input data does not affect the size of the model, so dimensionality reduction techniques such as principal component analysis (PCA) are not needed to reduce the model size, which substantially reduces the amount of calculation needed at inference.
Nevertheless, a single decision tree does not provide accurate results for complex classification tasks. The solution is to use an ensemble method that combines the results of several trees in order to improve the accuracy levels. We will analyze two of these techniques: random forest (RF) and gradient boosting decision trees (GBDT). In terms of computation, most machine learning algorithms need a significant amount of floating point operations on inference, most of them multiplications. By contrast, the inference with an ensemble of decision trees just needs a few comparisons per tree. In terms of memory requirements, the size of these models depends on the number of trees and the number of nodes per tree, but the memory accesses, and therefore the bandwidth used, are much smaller than the size of the model, because decision trees only need to access a small part of the model to perform an inference.
In the case of hyperspectral-image pixel classification, the input is a single pixel composed of a series of features. Each node specializes in a particular feature during training, meaning that, at the time of inference, each node performs a single comparison between its trained value and the value of the corresponding feature. Since the feature values of hyperspectral images are 16-bit integers, each node just needs an integer comparison to make its decision, i.e., left or right child. This is a very important characteristic for embedded and on-board systems. In most ML models the inputs are multiplied by a floating-point value, hence even when the model input is an integer, all the computations will be floating-point. However, a tree only needs to know whether the input is smaller or greater than a given value, and that value can be an integer without any accuracy loss.
So, in the case of hyperspectral image pixel-classification, this technique performs exceptionally well in terms of computation. Decision trees are fast and efficient during inference and can be executed even by low-power microcontrollers.
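A minimal sketch of this inference path, reproducing the toy example of Figure 1, could be as follows; the flat node layout is illustrative, not the layout of any specific library, and the right-leaf output is invented for the example:

```python
def tree_inference(pixel, feature, threshold, left, right, value):
    # Walk one trained binary tree for a single input pixel. For node n,
    # feature[n] is the band index to test (-1 marks a leaf), threshold[n]
    # is the trained split value (an integer for 16-bit HSI data), and
    # left[n]/right[n] are child indices. Each step is one integer
    # comparison; only the leaf output value[n] is a float.
    n = 0
    while feature[n] >= 0:
        n = left[n] if pixel[feature[n]] <= threshold[n] else right[n]
    return value[n]

# Toy tree of Figure 1: the root compares feature 3 against 14,300
feature = [3, -1, -1]
threshold = [14300, 0, 0]
left = [1, -1, -1]
right = [2, -1, -1]
value = [0.0, 0.15, 0.8]
print(tree_inference([0, 0, 0, 12000], feature, threshold, left, right, value))  # 0.15
```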

2.2.1. Random Forest

A typical ensemble approach is the RF, where several trees are trained from the same data set, but each one of them on a random subsample of the entire data set. Moreover, the search for the best split feature of each node is done on a random subset of the features. Each classifier then votes for the selected class [139].
The RF model has been implemented with the scikit learn random forest classifier model [140]. In this implementation, the final selection is done by averaging the predictions of every classifier instead of voting, which implies that each leaf node must keep a prediction value for each class; after every tree performs its inference, the class is selected from the average of all the predictions. This generates big models but, as we said, only a small part of the model needs to be accessed during inference.
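A minimal sketch of this averaging scheme, assuming each tree is available as a callable that returns the per-class values stored in its reached leaf, could be:

```python
import numpy as np

def rf_inference(pixel, trees, num_classes):
    # Average the per-class leaf vectors of every tree (scikit learn
    # style). Storing one value per class in every leaf is what makes
    # RF models large; only this averaging and the final argmax are
    # floating point operations.
    probs = np.zeros(num_classes)
    for tree in trees:
        probs += tree(pixel)
    return int(np.argmax(probs / len(trees)))

# Two toy trees returning fixed leaf vectors (illustrative)
trees = [lambda p: np.array([0.1, 0.9]), lambda p: np.array([0.4, 0.6])]
print(rf_inference(None, trees, 2))  # class 1
```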

2.2.2. Gradient Boosting

Even better results can be obtained by applying a different ensemble approach called gradient boosting. This technique combines the results of different predictors in such a way that each tree attempts to improve the results of the previous ones. Specifically, the gradient boosting method consists of training predictors sequentially, so that each new iteration tries to correct the residual error generated in the previous one; that is, each predictor is trained to correct the residual error of its predecessor. Once the trees are trained, they can be used for prediction by simply adding the results of all the trees [138,141].
The GBDT model has been implemented with the LightGBM library classifier [142]. For multi-class classification, a one-vs-all approach is used in the GBDT implementation, which means that the model trains a different estimator for each class. The output of the corresponding estimator represents the probability that the pixel belongs to that class, and the estimator with the highest result is the one that corresponds to the selected class. On each iteration, the model adds a new tree to each estimator. The one-vs-all approach makes it much easier to combine the results, given that each class has its own trees, so we just need to add the results of the trees of each class separately, as shown in Figure 2. These accumulations of the tree output values are the only floating point operations that the GBDT needs to perform.
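A minimal sketch of this combination step, assuming each tree is available as a callable that reaches its leaf through integer comparisons and returns the scalar output, could be:

```python
import numpy as np

def gbdt_inference(pixel, estimators):
    # One-vs-all combination of Figure 2: estimators[k] holds the trees
    # of class k. The per-class sums below are the only floating point
    # operations the model performs.
    scores = [sum(tree(pixel) for tree in class_trees)
              for class_trees in estimators]
    return int(np.argmax(scores))
```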
Due to its iterative approach, the GBDT model also allows designers to trade off accuracy for computation and model size. For example, if a GBDT is trained for 200 iterations, it will generate 200 trees for each class. Afterwards, the designer can decide whether to use all of them or to discard the final ones. It is possible to find similar trade-offs with other ML models, for instance by reducing the number of convolutional layers in a CNN or the number of hidden neurons in an MLP. However, in those cases each possible design must be trained again, whereas with GBDT only one training run is needed; afterwards, the designer can simply evaluate the results when using different numbers of trees and generate a Pareto curve with the different trade-offs.
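With LightGBM, this train-once/evaluate-many exploration can be sketched as follows (synthetic stand-in data; the num_iteration argument limits how many of the trained iterations are used at prediction time):

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a labeled HSI training/testing split
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 200)), rng.integers(0, 16, 500)
X_test, y_test = rng.normal(size=(200, 200)), rng.integers(0, 16, 200)

# Train once with the full budget of boosting iterations
model = LGBMClassifier(n_estimators=200)
model.fit(X_train, y_train)

# Evaluate progressively truncated models without retraining: with
# one-vs-all, keeping n iterations keeps the first n trees per class
for n in (25, 50, 100, 200):
    y_pred = model.predict(X_test, num_iteration=n)
    print(n, accuracy_score(y_test, y_pred))
```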

2.3. Support Vector Machine

A support vector machine (SVM) is a kernel-based method commonly used for classification and regression problems. It is based on a two-class classification approach, the support vector network algorithm. To find the smallest generalization error, this algorithm searches for the optimal hyperplane, i.e., a linear decision function that maximizes the margin among the support vectors, which are the samples that define the decision boundaries of each class [143]. In the case of pixel-based classification of hyperspectral images, we need to generalize this algorithm to a multi-class classification problem. This can be done by following a one-vs-rest (or one-vs-all) approach, training $K$ separate SVMs, one for each class, so that each two-class classifier interprets the data from its own class as positive examples and the rest of the data as negative examples [134].
The SVM model has been implemented with the scikit learn support vector classification (SVC) algorithm [144], which implements the one-vs-rest approach for multi-class classification. The SVM model also requires pre-processing of the data by applying the standardization of Equation (3), with the same implications explained in Section 2.1. According to the scikit learn SVC mathematical formulation [144], the decision function is given by Equation (5), where $K(\mathbf{v}_i, \mathbf{x})$ is the kernel. We used the radial basis function (RBF) kernel, formulated in Equation (6). The complete formulation of the inference operations is therefore given by Equation (7), where $\mathbf{v}_i$ is the $i$-th support vector, the product $y_i \alpha_i$ is the coefficient of this support vector, $\mathbf{x}$ is the input pixel, $\rho$ is the bias value and $\gamma$ is the value of the gamma training parameter.
$$\mathrm{sgn}\left( \sum_{i=1}^{M} y_i \alpha_i K(\mathbf{v}_i, \mathbf{x}) + \rho \right) \qquad (5)$$

$$K(\mathbf{v}_i, \mathbf{x}) = \exp\left( -\gamma \, \lVert \mathbf{v}_i - \mathbf{x} \rVert^2 \right) \qquad (6)$$

$$\mathrm{sgn}\left( \sum_{i=1}^{M} y_i \alpha_i \exp\left( -\gamma \, \lVert \mathbf{v}_i - \mathbf{x} \rVert^2 \right) + \rho \right) \qquad (7)$$
The number of support vectors, denoted as $M$ in Equation (7), corresponds to the amount of data used for the training set. The model therefore does not keep too many parameters, which makes it small in memory size, but in terms of computation it requires a great amount of calculation to perform one inference. The number of operations depends on the number of features and the amount of training data, which makes it unaffordable in terms of computation for really big data sets. Moreover, as it uses the one-vs-all approach, it also depends on the number of classes, because an estimator is trained for each one of them.
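A minimal sketch of the decision value of one binary estimator (Equation (7)), with illustrative shapes, makes this cost explicit: every inference needs $M$ kernel evaluations, i.e., on the order of $2MB$ float operations plus $M$ exponentials.

```python
import numpy as np

def svm_decision(pixel, support_vectors, dual_coef, rho, gamma):
    # Equation (7) for one binary estimator: support_vectors has shape
    # (M, B) and dual_coef holds the M products y_i * alpha_i. The sign
    # (or, in one-vs-rest, the largest decision value among the K
    # estimators) selects the class.
    diff = support_vectors - pixel  # broadcast against all M vectors
    kernel = np.exp(-gamma * np.sum(diff * diff, axis=1))
    return float(dual_coef @ kernel + rho)

# Illustrative sizes: M = 1000 support vectors with B = 200 bands
sv = np.random.randn(1000, 200).astype(np.float32)
coef = np.random.randn(1000).astype(np.float32)
d = svm_decision(np.random.randn(200).astype(np.float32), sv, coef, -0.1, 0.01)
```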

2.4. Neural Networks

Neural networks have become one of the most widely used machine learning techniques for image classification, and they have also proven to be a good choice for hyperspectral image classification. A neural network consists of several layers sequentially connected, so that the output of one layer becomes the input of the next one. Some of the layers can be dedicated to intermediate functions, like pooling layers that reduce dimensionality by highlighting the principal values, but the main operation of a neural network, as well as most of its calculations, resides in the layers based on neurons. Each neuron implements Equation (8), where $x$ is the input value and $w$ and $b$ are the learned weight and bias, respectively, which are float values.
$$y = xw + b \qquad (8)$$
Usually, when neurons are grouped into more complex layers, the results of several neurons are combined, such as in a dot product operation (as we will see, for example, in Section 2.4.1), and these $w$ and $b$ values become float vectors, matrices or tensors, depending on the concrete scenario. The main calculations in neural networks are therefore float multiplications and accumulations, and the magnitude of these computations depends on the number and size of the layers of the neural network. The information we need to keep in memory for inference consists of all these learned values, so the size of the model also depends on the number and size of the layers.
Neural network models also require pre-processing of the data. Without it, the features with larger value ranges would initially dominate the result. This can be compensated for during the training process, but in general normalization is needed to speed up training and improve the results. As for the MLR and SVM models, the standardization of Equation (3) was applied to the data sets.

2.4.1. Multi Layer Perceptron

A multi layer perceptron (MLP) is a neural network with at least one hidden layer, i.e., intermediate activation values, which requires at least two fully-connected layers. Considering the $l$-th fully connected layer, its operation corresponds to Equation (9), where $\mathbf{X}^{(l-1)}$ is the layer's input, which can come directly from the original input or from a previous hidden layer $l-1$, and $\mathbf{X}^{(l)}$ is the output of the current layer, resulting from applying the weights $\mathbf{W}^{(l)}$ and biases $\boldsymbol{\rho}^{(l)}$ of the layer. If the size of the input $\mathbf{X}^{(l-1)}$ is $(M, N^{(l-1)})$, with $M$ being the number of input samples and $N^{(l-1)}$ the dimension of the feature space, and the size of the weights $\mathbf{W}^{(l)}$ is $(N^{(l-1)}, N^{(l)})$, the output size will be $(M, N^{(l)})$, i.e., the $M$ samples represented in the feature space of dimension $N^{(l)}$ defined by the $l$-th layer. In the case of hyperspectral imaging classification, the input size for one spectral pixel will be $(1, B)$, where $B$ is the number of spectral channels, while the size of the final output of the model will be $(1, K)$, where $K$ is the number of considered classes.
$$\mathbf{X}^{(l)} = \mathbf{X}^{(l-1)} \mathbf{W}^{(l)} + \boldsymbol{\rho}^{(l)} \qquad (9)$$
The MLP model was implemented with the PyTorch neural network library [145], using the Linear classes to implement two fully-connected layers. The number of neurons of the first fully-connected layer is a parameter of the network, and the size of each neuron of the last fully-connected layer depends on it. In the case of hyperspectral image pixel classification, the input on inference will be a single pixel ($M = 1$ according to the previous explanation) with $B$ features, and the final output will be the classification for the $K$ classes, so the size of each neuron of the first fully-connected layer depends on the number of features, while the number of neurons of the last fully-connected layer is the number of classes.
As the input for pixel classification is not very big, this model keeps a small size once trained. During inference it needs to perform a float multiplication and a float accumulation for each one of its parameters, among other operations, so even though the model is small, the required operations are expensive in terms of computation.
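A minimal sketch of this two-layer inference (Equation (9)) for a single pixel could be as follows; the ReLU hidden activation and the layer sizes are assumptions of the sketch:

```python
import numpy as np

def mlp_inference(pixel, W1, b1, W2, b2):
    # Two fully-connected layers: W1 has shape (B, H) and W2 has shape
    # (H, K). Every weight costs one float multiplication and one float
    # accumulation, which dominates the inference cost.
    hidden = np.maximum(pixel @ W1 + b1, 0.0)  # hidden layer, shape (H,)
    scores = hidden @ W2 + b2                  # output layer, shape (K,)
    return int(np.argmax(scores))

# Illustrative sizes: B = 200 bands, H = 64 hidden neurons, K = 16 classes
W1, b1 = np.random.randn(200, 64), np.random.randn(64)
W2, b2 = np.random.randn(64, 16), np.random.randn(16)
label = mlp_inference(np.random.randn(200), W1, b1, W2, b2)
```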

2.4.2. Convolutional Neural Network

A convolutional neural network (CNN) is a neural network with at least one convolutional layer. Instead of fully-connected neurons, convolutional layers apply locally-connected filters per layer. These filters are smaller than the input, and each one of them performs a convolution operation on it. During a convolution, the filter performs dot product operations with different sections of the input while it moves along it. For hyperspectral image pixel classification, whose input consists of a single pixel, the 1D convolutional layer operation can be described with Algorithm 1, where the input pixel $\mathbf{x}$ has $B$ features, the layer has $F$ filters and each filter $Q$ has $q$ values (i.e., weights, in the case of a 1D convolution) and one bias $\rho$, so the output $\mathbf{X}$ will be of shape $(B - q + 1, F)$. The initialization values LAYER_FILTERS and LAYER_BIASES correspond, respectively, to the learned weights and biases of the layer.
Algorithm 1 1D convolutional layer algorithm.
 1: Input:
 2:   x ← INPUT_PIXEL                        ▹ The input pixel is an array of B features
 3: Initialize:
 4:   filters ← LAYER_FILTERS                ▹ Array of F filters, each one with q weights
 5:   bias ← LAYER_BIASES                    ▹ Array of F bias values, one per filter
 6:   X ← New_matrix(size: [B − q + 1, F])   ▹ Output structure generation
 7: for (f = 0; f < F; f++) do               ▹ For each filter
 8:   Q ← filters[f]                         ▹ Get current filter
 9:   ρ ← bias[f]                            ▹ Get current bias value
10:   for (i = 0; i ≤ B − q; i++) do         ▹ Movement of the filter along the input
11:     X[i, f] ← 0
12:     for (j = 0; j < q; j++) do           ▹ Dot product along the filter in current position
13:       X[i, f] += Q[j] · x[i + j]         ▹ Q[j] corresponds to the weight value
14:     end for
15:     X[i, f] += ρ                         ▹ ρ corresponds to the bias value
16:   end for
17: end for
18: Return:
19:   X                                      ▹ The output matrix of shape (B − q + 1, F)
The CNN model was implemented with the PyTorch neural network library [145], using the convolution, pooling and linear classes to define a network with one 1D convolutional layer, one max pooling layer and two fully connected layers at the end. The input of the 1D convolutional layer is the input pixel, while the input of each of the remaining layers is the output of the previous one, in the specified order. The number and size of the filters of the 1D convolutional layer are parameters of the network; nevertheless, the relation between the number of filters and the number of features determines the size of the first fully connected layer, which is the biggest one. The max pooling layer does not affect the size of the model, since it only performs a size reduction by selecting the maximum value within small sub-sections of the input, but it does affect the number of operations, as it needs to perform several comparisons. The fully connected layers are actually an MLP, as explained in Section 2.4.1. The size of the last fully connected layer also depends on the number of classes. In terms of computation, the convolutional layer is very intensive in calculations, as can be observed in Algorithm 1, and most of them are floating point multiplications and accumulations.
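For reference, a NumPy equivalent of Algorithm 1 (with illustrative sizes) makes the multiply–accumulate count of the convolutional layer explicit:

```python
import numpy as np

def conv1d_inference(pixel, filters, biases):
    # Algorithm 1 for one standardized pixel: pixel has shape (B,),
    # filters (F, q) and biases (F,). The output has shape (B - q + 1, F)
    # and costs (B - q + 1) * F * q float multiply-accumulates.
    B, (F, q) = pixel.shape[0], filters.shape
    out = np.empty((B - q + 1, F), dtype=np.float32)
    for i in range(B - q + 1):  # filter positions along the pixel
        out[i] = filters @ pixel[i:i + q] + biases
    return out

# Illustrative sizes: B = 200 bands, F = 20 filters of q = 11 weights each
out = conv1d_inference(np.random.randn(200).astype(np.float32),
                       np.random.randn(20, 11).astype(np.float32),
                       np.random.randn(20).astype(np.float32))
```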

2.5. Summary of the Relation of the Models with the Input

Each discussed model has different characteristics in its inference operation, and the size and computations of each one depend on different aspects of the input and the selected parameters. Table 1 summarizes the importance of the data set size (in the case of hyperspectral images, the number of pixels of the image), the number of features (the number of spectral bands of each hyperspectral pixel) and the number of classes (labels) in relation to the size and the computations of each model. The dots in Table 1 correspond to a qualitative interpretation, from not influenced at all (zero dots) to very influenced (three dots), of how the size and the number of computations of each model are influenced by the size of the data set, the number of features of each pixel and the number of classes. This interpretation is not intended to be quantitative but qualitative, i.e., just a visual support for the following explanations.
The number of classes is an important parameter for every model, but it affects each of them in a very different way. Regarding the size of the model, the number of classes defines the size of the output layer in the MLP and CNN, while for the MLR, GBDT and SVM the entire estimator is replicated as many times as there are classes. Since the RF needs to keep the prediction for each class on every leaf node, the number of classes is crucial to determine the final size of the model, and affects it much more. Regarding the computation, in the MLR, GBDT and SVM models the entire number of computations is multiplied by the number of classes, so it affects them very much. Furthermore, in the SVM model the number of classes also affects the number of support vectors needed, because it is necessary to have enough training data for every class, so each new class not only increases the number of estimators, but also increases the computational cost by adding new support vectors. In neural networks, the number of classes defines the size of the output (fully connected) layer, which implies multiply and accumulate floating point operations, but this is the smallest layer of both models. In the case of the RF, it only affects the final combination of the results, but it is important to remark that these are precisely the floating point operations of this model.
The number of features is not relevant for the decision tree models during inference, which is why they do not need any dimensionality reduction technique. The size of each estimator of the MLR and SVM models depends directly on the number of features, so it influences the size as much as the number of classes does. In neural networks, it affects the size of the first fully connected layer (which is the biggest one), so the size of these models is highly influenced by the number of features. Nevertheless, in the case of the MLP it only multiplies the dimension of the fully connected layer, so its impact is not as large as in the case of the CNN, where it is also multiplied by the number of filters of the convolutional layer. In a similar way, the number of operations of each estimator of the MLR and SVM models is directly influenced by the number of features. Again, for the MLP it increases the number of operations of the first fully connected layer, and for the CNN also those of the convolutional layer, which is very intensive in terms of calculations.
The size of the data set (and specifically the size of the training set) only affects the SVM model, because it will generate as many support vectors as the number of different data samples used in the training process. Regarding the size of the model, it multiplies the number of parameters of each estimator, so it will affect the size of the model as much as the number of classes. Actually, both the training set and the number of classes are related to each other. Regarding the number of operations, the core of Equation (7) depends on the number of support vectors, so its influence is very high.
It is also worth noting that decision trees are the only models that do not require any pre-processing of the input data. As already explained in Section 2.1, this pre-processing implies some extra calculations that are not included in the measurements of Section 4.2, but it can also be a source of inaccuracies once applied to a real system, with entirely new data taken at different moments and under different conditions. For instance, applying standardization means that we subtract the mean value of our training set from the incoming data and divide the result by the standard deviation of our training set.
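To make these relations concrete, the explicit counts given in this paper can be turned into simple sizing formulas. The sketch below does so for the MLR (whose counts are stated in Section 2.1) and, as a simplification that ignores activations and other minor operations, for the MLP of Section 2.4.1; the hidden layer size is illustrative:

```python
def mlr_footprint(K, B, bytes_per_param=4):
    # K*(B + 1) float parameters; K*B multiplications plus K*B
    # accumulations per inference (Section 2.1).
    params = K * (B + 1)
    return params * bytes_per_param, 2 * K * B

def mlp_footprint(K, B, H, bytes_per_param=4):
    # Two fully-connected layers with H hidden neurons (Section 2.4.1):
    # (B + 1)*H + (H + 1)*K parameters, one multiplication and one
    # accumulation per weight (activations and pooling are ignored).
    params = (B + 1) * H + (H + 1) * K
    return params * bytes_per_param, 2 * (B * H + H * K)

# Example with Indian Pines dimensions (B = 200, K = 16) and H = 64
print(mlr_footprint(16, 200))      # (12864, 6400)
print(mlp_footprint(16, 200, 64))  # (55616, 27648)
```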

3. Data Sets and Training Configurations

The data sets selected for the experiments are Indian Pines (IP) [146], Pavia University (UP) [146], Kennedy Space Center (KSC) [146], Salinas Valley (SV) [146] and University of Houston (UH) [147]. Table 2 shows the ground truth and the number of pixels per class for each image.
  • The IP data set is an image of an agricultural region, mainly composed of crop fields, collected by the Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) sensor in Northwestern Indiana, USA. It has 145 × 145 pixels with 200 spectral bands after removal of the noise and water absorption bands. Of the total 21,025 pixels, 10,249 are labeled into 16 different classes.
  • The UP data set is an image of an urban area, the city of Pavia in Italy, collected by the Reflective Optics Spectrographic Imaging System (ROSIS), a compact airborne imaging spectrometer. It is composed of 610 × 340 pixels with 103 spectral bands. Only 42,776 pixels from the total 207,400 are labeled into nine classes.
  • The KSC data set is an image with water and vegetation collected by the AVIRIS sensor over the Kennedy Space Center in Florida, USA. It has 512 × 614 pixels with 176 spectral bands after removing the water absorption and low signal-to-noise bands. Only 5211 out of the available 314,368 pixels are labeled into 13 classes.
  • The SV data set is an image composed of agricultural fields and vegetation, collected by the AVIRIS sensor in Western California, USA. It has 512 × 217 pixels with 204 spectral bands after removing the noise and water absorption bands. Of the total 111,104 pixels, 56,975 are labeled into 16 classes.
  • The UH data set is an image of an urban area collected by the Compact Airborne Spectrographic Imager (CASI) sensor over the University of Houston, USA. It has 349 × 1905 pixels with 144 spectral bands. Only 15,029 of the total 664,845 pixels are labeled into 15 classes. As it was proposed as the benchmark data set for the 2013 IEEE Geoscience and Remote Sensing Society data fusion contest [148], it is already divided into training and testing sets, with 2832 and 12,197 pixels, respectively.
The implementations of the algorithms used in this review were developed and tested on a hardware environment with an Intel® Core™ i9-9940X X-series processor with 19.25 MB of cache and up to 4.40 GHz (14 cores/28 threads), installed on a Gigabyte X299 Aorus motherboard with 128 GB of DDR4 RAM. Also, an NVIDIA Titan RTX graphics processing unit (GPU) with 24 GB of GDDR6 video memory and 4608 cores was used. We detailed in Section 2 the libraries and classes used for the implementation of each model: the MLR with the scikit learn logistic regression, the random forest with the scikit learn random forest classifier, the GBDT with the LightGBM classifier, the SVM with the scikit learn support vector classification, the MLP with PyTorch neural network linear layers and the CNN1D with PyTorch neural network convolutional, pooling and linear layers.
For each data set, we trained the models applying cross-validation techniques to select the final training hyperparameters. After the cross-validation, the selected values did not always correspond to the best accuracy, but to the best relation between accuracy and model size and requirements. The selected hyperparameters shown in Table 3 are: the penalty of the error (C) for the MLR; the number of trees (n), the minimum number of data to split a node (m) and the maximum depth (d) for both the RF and the GBDT, plus the maximum number of features to consider at each split (f) for the RF; the penalty of the error (C) and the kernel coefficient ($\gamma$) for the SVM; the number of neurons in the hidden layer (h.l.) for the MLP; and, for the CNN, the number of filters of the convolutional layer (f), the number of values of each filter (q), the size of the kernel of the max pooling layer (p) and the number of neurons of the first and last fully connected layers ($f_1$ and $f_2$, respectively).
The final configurations of some models not only depend on the selected hyperparameters, but also on the training data set (for the SVM model) and the training process itself (for the RF and GBDT models). Table 4 shows the number of features (ft.) and classes (cl.) of each image and the final configurations of RF, GBDT and SVM models. For the tree models, the shown values are the total number of trees (trees), which in the case of the GBDT model depends on the number of classes of each image, the total number of non-leaf nodes (nodes) and leaf nodes (leaves) and the average depth of the trees of the entire model (depth). For the SVM model, the number of support vectors (s.v.) depends on the amount of training data.

4. Discussion of Results

First, we present the accuracy results for all models and images; next, we report the size and computational measurements on inference. Finally, we summarize and analyze the characteristics of each model in order to target an embedded or on-board system.

4.1. Accuracy Results

Figure 3 depicts the accuracy evolution of each model when increasing the percentage of pixels of each class selected for training. The neural network models always achieve high accuracy values, with the CNN model outperforming all the others, and the SVM, as a kernel-based model, is always the next one, even outperforming the MLP in some cases. The only behavior that does not follow this pattern is the high accuracy achieved by the MLR model on the KSC data set. Except for this case, the results obtained by the neural networks, the kernel-based models and the other models were expected [75]. Nevertheless, it is worth mentioning that, for a tree based model, the GBDT achieves great accuracy values, very close to those obtained by the neural networks and the SVM, and always higher than those of the RF, which is also a tree based model.
The results obtained with the UH data set are quite particular, since it does not provide an entire image to work with, but two separate structures already prepared as training and testing sets. As we can observe in the overall accuracy values in Table 9, the accuracy of all models is below the scores obtained for the other images. However, the distribution of the different models follows the same pattern described for the rest of the data sets, with the particularity that the MLR model outperforms the GBDT in this case.
Table 5, Table 6, Table 7, Table 8 and Table 9 show the accuracy results of the selected configurations of each model. For the IP and KSC images, the selected training set is composed of 15% of the pixels from each class, while for the UP and SV images it consists of only 10% of the pixels from each class. The fixed training set for the UH image is composed of around 19% of the total pixels.
Figure 4 shows the classification maps obtained for the different data sets by all the models. As we can observe, most of the classification maps exhibit the typical salt and pepper effect of spectral models, i.e., of classification performed on individual pixels. Some particular classes are better modeled by certain models. For instance, the GBDT and SVM perfectly define the contour of the soil–vinyard–develop class of SV, while the CNN1D exhibits a very good behavior on the cloudy zone on the right side of the UH data set, and both tree based models (RF and GBDT) perform very well on the swampy area on the right side of the river in the KSC data set. Nevertheless, the most significant conclusion that can be derived from these class maps is that the errors of each model are distributed in a similar way along classes, as can be seen in Table 5, Table 6, Table 7, Table 8 and Table 9, and here we can confirm that this is consistent across the entire classification map. In general, all the classification maps are quite similar and well defined in terms of contours, and the main classes are properly classified. We can conclude that the obtained accuracy levels are satisfactory and the errors are well distributed, without significant deviations due to a particular class or significant overfitting of the models.

4.2. Size and Computational Measurements

To perform the characterization of each algorithm in inference, it is necessary to analyze its structure and operation. The operation of every model during inference has been explained in Section 2, and the final sizes and configurations of the trained models after cross-validation for parameter selection have been detailed in Section 3. Figure 5 reports the sizes in bytes of the trained models, while Figure 6 shows the number and type of operations performed during the inference stage.
It is very important to remark that these measurements have been obtained analytically, based on the described operations and model configurations. For instance, the size measurements do not correspond to the size of a file with the model dumped on it, which is software-dependent, i.e., it depends on data structures that keep much more information for the framework than the actual learned parameters needed for inference. As a result, Figure 5 shows the theoretical size required in memory to store all the structures necessary for inference, based on the analysis of the models, exactly as it would be developed for a specific hardware accelerator or an embedded system.
As we can observe, the size of the RF models is one order of magnitude bigger than the others. This is due to their need to store the prediction values for every class in each leaf node. This is a huge amount of information, even compared to models that train an entire estimator for each class, like the GBDT. Actually, the size of the MLR and SVM models is one order of magnitude smaller than that of the GBDT, MLP and CNN1D models. Nevertheless, all the models (except the RF) are below 500 kilobytes, which makes them very affordable even for small low-power embedded devices.
In a similar way, the operational measurements shown in Figure 6 are based on the analysis of each algorithm, not on software executions (which depend on the architecture, the system and the framework), and they are divided into four groups according to their computational complexity. The only models that use integers for the inference computations are the decision trees, and they only need integer comparisons. Floating point operations are the most common in the rest of the models, but they are also divided into three different categories: FP Add refers to accumulations, subtractions and comparisons, which can be performed on an adder and are less complex; FP Mul refers to multiplications and divisions; and FP Exp refers to exponentials, which are only performed by the SVM model. High-performance processors include powerful floating point arithmetic units, but for low-power processors and embedded devices these computations can be very expensive.
Focusing on the operations, the SVM model requires two or even three orders of magnitude more operations than the other models. Moreover, most of its operations are floating point multiplications and additions, but it also requires a great amount of complex operations such as exponentials. In most of the data sets, it requires more exponential operations than the entire number of operations of the other models, except for the CNN. The number of operations required by the CNN model is one order of magnitude higher than that of the rest of the models, and it is basically composed of floating point multiplications and accumulations. The MLR and RF models are the ones that require fewer operations during inference, while the GBDT and MLP require several times that number of operations, sometimes even one order of magnitude more.

4.3. Characteristics of the Models in Relation to the Results

In this section, we review the characteristics of every model in relation to these results. The RF and GBDT models are composed of binary trees. The number of trees of each model is decided at training time according to the results of the cross-validation methods explained above. The non-leaf nodes of each tree keep the threshold value and the index of the feature to compare, which are integer values, while the leaf nodes keep the prediction value, which is a float. In the case of the RF, leaf nodes keep the prediction for every class, which makes them very big models. Although these models are not the smallest, during inference they do not need to operate with the entire structure; they just need to follow the selected path of each tree. In terms of operations, each non-leaf node of a selected path implies an integer comparison, while the reached leaf node implies a float addition.
Notice that addressing operations, such as using the feature index to access the corresponding feature value, are not taken into account and are not considered in Figure 6. The same holds for the rest of the models; assuming that every computational operation needs its related addressing, the comparison is fair.
During inference, the MLR model only requires one float structure of the same size and shape as the input, i.e., one hyperspectral pixel, for each class. The operations performed are the dot products of the input with these structures, and the result of each one of them is the prediction for the corresponding class.
The SVM model is small, in the same order of magnitude as the MLR, because it only needs the support vectors and the constants, some of which can be pre-calculated together into just one value. But, in terms of computation, the calculation of Equation (7) requires an enormous amount of operations compared to the rest of the methods.
The size and number of operations of the MLP model depend on the number of neurons in the hidden layer and the number of classes. For each neuron, there is a float structure of the same size and shape as the input, and then, for each class, there is a float structure of the same size and shape as the result of the hidden layer. The operations performed correspond to all these dot products.
In the case of the CNN, the size corresponds to the filters of the convolutional layer plus the structures of the MLP at the end of the model, but this MLP is much bigger than the standalone MLP model, because its input is the output of the convolutional layer, which is much bigger than the original input pixel. The main difference with the MLP model (in terms of operations) lies in the behavior of the convolutional layer: it requires a dot product between each filter and the corresponding part of the input for each step of the convolutional filters across the entire input. This model also has a max pooling layer that slightly reduces the size of the model (because it is executed on the fly), but adds some extra comparisons to the operations.
Since embedded and on-board systems require small, efficient models, we analyze the trade-off between the hardware requirements of each model and its accuracy results. In summary, neural networks and SVMs are very accurate models and, while they do not have large memory requirements, they require a great amount of floating point operations during inference. Furthermore, most of them are multiplications or other operations that are very expensive in terms of resources. Hence, they are the best option when using high-performance processors, but they may not be suitable for low-power processors or embedded systems. In the case of the RF, the number of operations is really small, and most of them are just integer comparisons, but the size of the model is very big compared to the other models, and it also achieves the lowest accuracy values.
According to our comparison, the best trade-off is obtained by the MLR and GBDT models. Both are reasonably small for embedded systems and require very few operations during inference. GBDT is bigger, but its dimensions are still very small. In terms of operations, even if GBDT needs to perform somewhat more operations than the MLR, it is important to remark that MLR operations are floating point multiplications and additions, while most GBDT operations are integer comparisons, which makes GBDT particularly well suited to on-board and embedded systems. In terms of accuracy, GBDT achieves better values in most scenarios.
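As a back-of-the-envelope illustration of this trade-off, the following counts follow directly from the descriptions above (all parameters are illustrative, and addressing operations are ignored, as in Figure 6):

    def mlr_ops(n_bands, n_classes):
        # one float dot product per class
        return {"float_mul_add": n_bands * n_classes}

    def gbdt_ops(n_trees, avg_depth):
        # one integer comparison per visited node, one float addition per tree
        return {"int_cmp": n_trees * avg_depth, "float_add": n_trees}

    # e.g., a 200-band pixel with 16 classes vs. an ensemble of 3200 shallow trees
    print(mlr_ops(200, 16))   # {'float_mul_add': 3200}
    print(gbdt_ops(3200, 6))  # {'int_cmp': 19200, 'float_add': 3200}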

5. Conclusions

In this work, we analyze the size of, and the operations performed during inference by, several state-of-the-art machine learning techniques applied to hyperspectral image classification. The main aim of this study is to characterize them in terms of energy consumption and hardware requirements for implementation in embedded systems or on-board devices, with the goal of developing specific hardware accelerators for the techniques that achieve a good trade-off between hardware requirements and accuracy. Our main observations can be summarized as follows:
  • In terms of accuracy, neural networks and kernel-based methods (such as SVMs) usually achieve higher values than the rest of the methods, while the RF obtains the lowest values on every data set. The behavior of the MLR model is not very robust, obtaining high accuracy on some data sets and low values on others. The GBDT model always achieves higher accuracy than the RF and comes very close to the accuracies obtained by some of the SVMs and neural networks.
  • Regarding the size of the trained models, most of them are small enough to fit into embedded and reconfigurable devices, except for the RF, which is one order of magnitude bigger than the rest of the models. The SVM and MLR models are especially small, in some cases one order of magnitude smaller than the CNN, the MLP and the GBDT.
  • Regarding the number and type of operations needed during inference, the RF and GBDT models clearly stand out from the rest, not only because they need very few operations during inference, but especially because most of these operations are integer comparisons. The rest of the models need floating point operations, most of them multiplications, which are more expensive in terms of hardware resources and power consumption. Even when some models (such as MLR and MLP) need few operations to perform the inference, the type of operations is not the most suitable for low-power embedded devices.
  • Neural networks and SVMs, in turn, are very expensive in terms of computation, not only in the number of operations but also in the type of operations they perform. As a result, they are not the best choice for small, energy-aware embedded systems. Depending on the specific characteristics of the target device and the accuracy requirements of the problem at hand, an MLP could be an interesting option. The RF model is too big for an embedded system and generally achieves low accuracy values.
  • The MLR is one of the smallest models, and it also performs very few operations during inference. Nevertheless, even though the number of operations is small, they are expensive ones, since they are entirely floating point additions and multiplications. Furthermore, it achieves high accuracy on some data sets but low values on others, so its behavior is highly dependent on the characteristics of the data. If it adapts well to the target problem, it can be a good choice, depending on the embedded system characteristics.
  • From our experimental assessment, we conclude that GBDTs present a very interesting trade-off between the use of computational and hardware resources and the obtained accuracy. They perform very well in terms of accuracy, in many cases achieving better results than the other techniques not based on kernels or neurons (i.e., RF and MLR), while using fewer computational resources than the techniques based on kernels or neurons (i.e., SVM, MLP and CNN). Moreover, most of their operations during inference are integer comparisons, which can be computed efficiently even by very simple low-power processors, so they represent a good option for an embedded on-board system.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by: Agencia Estatal de Investigación (AEI) and European Regional Development Fund (ERDF): TIN2016-76635-C2-1-R. Gobierno de Aragón and European Social Fund (ESF): T58_17R research group (gaZ). Gobierno de Aragón: Order IIU/2023/2017 of 14 December, announcing grants for the recruitment of predoctoral researchers in training for the period 2017–2021, co-financed by the ESF Aragón Operational Programme 2014–2020. Ministerio de Educación: Resolution of 19 November 2015 of the Secretariat of State for Education, Vocational Training and Universities, announcing grants for university teacher training under the Training and Mobility subprogrammes of the State Programme for the Promotion of Talent and its Employability, within the State Plan for Scientific and Technical Research and Innovation 2013–2016; FPU15/02090. Junta de Extremadura: Decree 14/2018 of 6 February, establishing the regulatory bases of grants for research and technological development, dissemination, and knowledge transfer activities carried out by the Research Groups of Extremadura; Ref. GR18060. The funders had no role in the design of the study, in the collection and analysis of data, in the decision to publish, or in the preparation of the manuscript.

Acknowledgments

We gratefully thank the Associate Editor and the five Anonymous Reviewers for their outstanding comments and suggestions, which greatly helped us to improve the technical quality and presentation of our work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liang, S. Advances in Land Remote Sensing: System, Modeling, Inversion and Application; Springer Science & Business Media: Berlin, Germany, 2008. [Google Scholar]
  2. Liang, S. Comprehensive Remote Sensing; Elsevier: Amsterdam, The Netherlands, 2017. [Google Scholar]
  3. Gruen, A. Scientific-technological developments in photogrammetry and remote sensing between 2004 and 2008. In Advances in Photogrammetry, Remote Sensing and Spatial Information Sciences: 2008 ISPRS Congress Book; CRC Press: Boca Raton, FL, USA, 2008; pp. 39–44. [Google Scholar]
  4. Xu, H.; Wang, Y.; Guan, H.; Shi, T.; Hu, X. Detecting Ecological Changes with a Remote Sensing Based Ecological Index (RSEI) Produced Time Series and Change Vector Analysis. Remote Sens. 2019, 11, 2345. [Google Scholar] [CrossRef] [Green Version]
  5. Yang, X.; Zhang, K.; Jia, B.; Ci, L. Desertification assessment in China: An overview. J. Arid Environ. 2005, 63, 517–531. [Google Scholar] [CrossRef]
  6. Mariano, D.A.; dos Santos, C.A.; Wardlow, B.D.; Anderson, M.C.; Schiltmeyer, A.V.; Tadesse, T.; Svoboda, M.D. Use of remote sensing indicators to assess effects of drought and human-induced land degradation on ecosystem health in Northeastern Brazil. Remote Sens. Environ. 2018, 213, 129–143. [Google Scholar] [CrossRef]
  7. Grinand, C.; Rakotomalala, F.; Gond, V.; Vaudry, R.; Bernoux, M.; Vieilledent, G. Estimating deforestation in tropical humid and dry forests in Madagascar from 2000 to 2010 using multi-date Landsat satellite images and the random forests classifier. Remote Sens. Environ. 2013, 139, 68–80. [Google Scholar] [CrossRef]
  8. Reiche, J.; Hamunyela, E.; Verbesselt, J.; Hoekman, D.; Herold, M. Improving near-real time deforestation monitoring in tropical dry forests by combining dense Sentinel-1 time series with Landsat and ALOS-2 PALSAR-2. Remote Sens. Environ. 2018, 204, 147–161. [Google Scholar] [CrossRef]
  9. Castellazzi, P.; Longuevergne, L.; Martel, R.; Rivera, A.; Brouard, C.; Chaussard, E. Quantitative mapping of groundwater depletion at the water management scale using a combined GRACE/InSAR approach. Remote Sens. Environ. 2018, 205, 408–418. [Google Scholar] [CrossRef]
  10. Zweifel, L.; Meusburger, K.; Alewell, C. Spatio-temporal pattern of soil degradation in a Swiss Alpine grassland catchment. Remote Sens. Environ. 2019, 235, 111441. [Google Scholar] [CrossRef]
  11. Xu, H.; Hu, X.; Guan, H.; Zhang, B.; Wang, M.; Chen, S.; Chen, M. A Remote Sensing Based Method to Detect Soil Erosion in Forests. Remote Sens. 2019, 11, 513. [Google Scholar] [CrossRef] [Green Version]
  12. Gonsamo, A.; Ter-Mikaelian, M.T.; Chen, J.M.; Chen, J. Does Earlier and Increased Spring Plant Growth Lead to Reduced Summer Soil Moisture and Plant Growth on Landscapes Typical of Tundra-Taiga Interface? Remote Sens. 2019, 11, 1989. [Google Scholar] [CrossRef] [Green Version]
  13. Liu, X.; Lee, Z.; Zhang, Y.; Lin, J.; Shi, K.; Zhou, Y.; Qin, B.; Sun, Z. Remote Sensing of Secchi Depth in Highly Turbid Lake Waters and Its Application with MERIS Data. Remote Sens. 2019, 11, 2226. [Google Scholar] [CrossRef] [Green Version]
  14. Kratzer, S.; Kyryliuk, D.; Edman, M.; Philipson, P.; Lyon, S.W. Synergy of Satellite, In Situ and Modelled Data for Addressing the Scarcity of Water Quality Information for Eutrophication Assessment and Monitoring of Swedish Coastal Waters. Remote Sens. 2019, 11, 2051. [Google Scholar] [CrossRef] [Green Version]
  15. Zhang, H.; Beggs, H.; Majewski, L.; Wang, X.H.; Kiss, A. Investigating sea surface temperature diurnal variation over the Tropical Warm Pool using MTSAT-1R data. Remote Sens. Environ. 2016, 183, 1–12. [Google Scholar] [CrossRef]
  16. Du, J.; Watts, J.D.; Jiang, L.; Lu, H.; Cheng, X.; Duguay, C.; Farina, M.; Qiu, Y.; Kim, Y.; Kimball, J.S.; et al. Remote Sensing of Environmental Changes in Cold Regions: Methods, Achievements and Challenges. Remote Sens. 2019, 11, 1952. [Google Scholar] [CrossRef] [Green Version]
  17. He, C.; Gao, B.; Huang, Q.; Ma, Q.; Dou, Y. Environmental degradation in the urban areas of China: Evidence from multi-source remote sensing data. Remote Sens. Environ. 2017, 193, 65–75. [Google Scholar] [CrossRef]
  18. Prasad, S.; Bruce, L.M.; Chanussot, J. Optical remote sensing. In Advances in Signal Processing and Exploitation Techniques; Springer: Berlin, Germany, 2011. [Google Scholar]
  19. Goetz, A.F.; Vane, G.; Solomon, J.E.; Rock, B.N. Imaging spectrometry for earth remote sensing. Science 1985, 228, 1147–1153. [Google Scholar] [CrossRef]
  20. Goetz, A.F. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16. [Google Scholar] [CrossRef]
  21. Van der Meer, F.D.; Van der Werff, H.M.; Van Ruitenbeek, F.J.; Hecker, C.A.; Bakker, W.H.; Noomen, M.F.; Van Der Meijde, M.; Carranza, E.J.M.; De Smeth, J.B.; Woldai, T. Multi-and hyperspectral geologic remote sensing: A review. Int. J. Appl. Earth Obs. Geoinf. 2012, 14, 112–128. [Google Scholar] [CrossRef]
  22. Dalponte, M.; Bruzzone, L.; Gianelle, D. Fusion of hyperspectral and LIDAR remote sensing data for classification of complex forest areas. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1416–1427. [Google Scholar] [CrossRef] [Green Version]
  23. Asner, G.P.; Jones, M.O.; Martin, R.E.; Knapp, D.E.; Hughes, R.F. Remote sensing of native and invasive species in Hawaiian forests. Remote Sens. Environ. 2008, 112, 1912–1926. [Google Scholar] [CrossRef]
  24. Shang, X.; Chisholm, L.A. Classification of Australian native forest species using hyperspectral remote sensing and machine-learning classification algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 2481–2489. [Google Scholar] [CrossRef]
  25. Corbane, C.; Lang, S.; Pipkins, K.; Alleaume, S.; Deshayes, M.; Millán, V.E.G.; Strasser, T.; Borre, J.V.; Toon, S.; Michael, F. Remote sensing for mapping natural habitats and their conservation status–New opportunities and challenges. Int. J. Appl. Earth Obs. Geoinf. 2015, 37, 7–16. [Google Scholar] [CrossRef]
  26. Underwood, E.; Ustin, S.; DiPietro, D. Mapping nonnative plants using hyperspectral imagery. Remote Sens. Environ. 2003, 86, 150–161. [Google Scholar] [CrossRef]
  27. Somers, B.; Asner, G.P. Hyperspectral time series analysis of native and invasive species in Hawaiian rainforests. Remote Sens. 2012, 4, 2510–2529. [Google Scholar] [CrossRef] [Green Version]
  28. Somers, B.; Asner, G.P. Multi-temporal hyperspectral mixture analysis and feature selection for invasive species mapping in rainforests. Remote Sens. Environ. 2013, 136, 14–27. [Google Scholar] [CrossRef]
  29. Peerbhay, K.Y.; Mutanga, O.; Ismail, R. Random Forests Unsupervised Classification: The Detection and Mapping of Solanum mauritianum Infestations in Plantation Forestry Using Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3107–3122. [Google Scholar] [CrossRef]
  30. Lin, C.; Chen, S.Y.; Chen, C.C.; Tai, C.H. Detecting newly grown tree leaves from unmanned-aerial-vehicle images using hyperspectral target detection techniques. ISPRS J. Photogramm. Remote Sens. 2018, 142, 174–189. [Google Scholar] [CrossRef]
  31. Lin, Q.; Huang, H.; Wang, J.; Huang, K.; Liu, Y. Detection of Pine Shoot Beetle (PSB) Stress on Pine Forests at Individual Tree Level using UAV-Based Hyperspectral Imagery and Lidar. Remote Sens. 2019, 11, 2540. [Google Scholar] [CrossRef] [Green Version]
  32. Asner, G.P.; Carlson, K.M.; Martin, R.E. Substrate age and precipitation effects on Hawaiian forest canopies from spaceborne imaging spectroscopy. Remote Sens. Environ. 2005, 98, 457–467. [Google Scholar] [CrossRef]
  33. Zhou, X.M.; Wang, N.; Wu, H.; Tang, B.H.; Li, Z.L. Estimation of precipitable water from the thermal infrared hyperspectral data. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24 July 2011; pp. 3241–3244. [Google Scholar]
  34. Koponen, S.; Pulliainen, J.; Kallio, K.; Hallikainen, M. Lake water quality classification with airborne hyperspectral spectrometer and simulated MERIS data. Remote Sens. Environ. 2002, 79, 51–59. [Google Scholar] [CrossRef]
  35. Olmanson, L.G.; Brezonik, P.L.; Bauer, M.E. Airborne hyperspectral remote sensing to assess spatial distribution of water quality characteristics in large rivers: The Mississippi River and its tributaries in Minnesota. Remote Sens. Environ. 2013, 130, 254–265. [Google Scholar] [CrossRef]
  36. Underwood, E.; Mulitsch, M.; Greenberg, J.; Whiting, M.; Ustin, S.L.; Kefauver, S. Mapping invasive aquatic vegetation in the Sacramento-San Joaquin Delta using hyperspectral imagery. Environ. Monit. Assess. 2006, 121, 47–64. [Google Scholar] [CrossRef] [PubMed]
  37. El-Magd, I.A.; El-Zeiny, A. Quantitative hyperspectral analysis for characterization of the coastal water from Damietta to Port Said, Egypt. Egypt. J. Remote Sens. Space Sci. 2014, 17, 61–76. [Google Scholar] [CrossRef] [Green Version]
  38. Resmini, R.; Kappus, M.; Aldrich, W.; Harsanyi, J.; Anderson, M. Mineral mapping with hyperspectral digital imagery collection experiment (HYDICE) sensor data at Cuprite, Nevada, USA. Int. J. Remote Sens. 1997, 18, 1553–1570. [Google Scholar] [CrossRef]
  39. Kruse, F.A.; Taranik, J.V.; Coolbaugh, M.; Michaels, J.; Littlefield, E.F.; Calvin, W.M.; Martini, B.A. Effect of reduced spatial resolution on mineral mapping using imaging spectrometry—Examples using Hyperspectral Infrared Imager (HyspIRI)-simulated data. Remote Sens. 2011, 3, 1584–1602. [Google Scholar] [CrossRef] [Green Version]
  40. Mielke, C.; Rogass, C.; Boesche, N.; Segl, K.; Altenberger, U. EnGeoMAP 2.0–Automated Hyperspectral Mineral Identification for the German EnMAP Space Mission. Remote Sens. 2016, 8, 127. [Google Scholar] [CrossRef] [Green Version]
  41. Scafutto, R.D.M.; de Souza Filho, C.R.; de Oliveira, W.J. Hyperspectral remote sensing detection of petroleum hydrocarbons in mixtures with mineral substrates: Implications for onshore exploration and monitoring. ISPRS J. Photogramm. Remote Sens. 2017, 128, 146–157. [Google Scholar] [CrossRef]
  42. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sens. 2017, 9, 1110. [Google Scholar] [CrossRef] [Green Version]
  43. Haboudane, D.; Miller, J.R.; Tremblay, N.; Zarco-Tejada, P.J.; Dextraze, L. Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens. Environ. 2002, 81, 416–426. [Google Scholar] [CrossRef]
  44. Liaghat, S.; Balasundram, S.K. A review: The role of remote sensing in precision agriculture. Am. J. Agric. Biol. Sci. 2010, 5, 50–55. [Google Scholar] [CrossRef] [Green Version]
  45. Bannari, A.; Pacheco, A.; Staenz, K.; McNairn, H.; Omari, K. Estimating and mapping crop residues cover on agricultural lands using hyperspectral and IKONOS data. Remote Sens. Environ. 2006, 104, 447–459. [Google Scholar] [CrossRef]
  46. Ge, Y.; Thomasson, J.A.; Sui, R. Remote sensing of soil properties in precision agriculture: A review. Front. Earth Sci. 2011, 5, 229–238. [Google Scholar] [CrossRef]
  47. Schmid, T.; Rodríguez-Rastrero, M.; Escribano, P.; Palacios-Orueta, A.; Ben-Dor, E.; Plaza, A.; Milewski, R.; Huesca, M.; Bracken, A.; Cicuéndez, V. Characterization of soil erosion indicators using hyperspectral data from a Mediterranean rainfed cultivated region. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 9, 845–860. [Google Scholar] [CrossRef]
  48. Zhang, M.; Qin, Z.; Liu, X.; Ustin, S.L. Detection of stress in tomatoes induced by late blight disease in California, USA, using hyperspectral remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2003, 4, 295–310. [Google Scholar] [CrossRef]
  49. Apan, A.; Held, A.; Phinn, S.; Markley, J. Detecting sugarcane ‘orange rust’disease using EO-1 Hyperion hyperspectral imagery. Int. J. Remote Sens. 2004, 25, 489–498. [Google Scholar] [CrossRef] [Green Version]
  50. Rao, N.R.; Garg, P.; Ghosh, S.K. Development of an agricultural crops spectral library and classification of crops at cultivar level using hyperspectral data. Precis. Agric. 2007, 8, 173–185. [Google Scholar] [CrossRef]
  51. Van der Linden, S.; Hostert, P. The influence of urban structures on impervious surface maps from airborne hyperspectral data. Remote Sens. Environ. 2009, 113, 2298–2305. [Google Scholar] [CrossRef]
  52. Heldens, W.; Heiden, U.; Esch, T.; Stein, E.; Müller, A. Can the future EnMAP mission contribute to urban applications? A literature survey. Remote Sens. 2011, 3, 1817–1846. [Google Scholar] [CrossRef] [Green Version]
  53. Heiden, U.; Heldens, W.; Roessner, S.; Segl, K.; Esch, T.; Mueller, A. Urban structure type characterization using hyperspectral remote sensing and height information. Landsc. Urban Plan. 2012, 105, 361–375. [Google Scholar] [CrossRef]
  54. Ardouin, J.P.; Lévesque, J.; Rea, T.A. A demonstration of hyperspectral image exploitation for military applications. In Proceedings of the 2007 10th International Conference on Information Fusion, Quebec, ON, Canada, 9–12 July 2007; pp. 1–8. [Google Scholar]
  55. Tiwari, K.; Arora, M.K.; Singh, D. An assessment of independent component analysis for detection of military targets from hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 730–740. [Google Scholar] [CrossRef]
  56. Ardouin, J.P.; Lévesque, J.; Roy, V.; Van Chestein, Y.; Faust, A. Demonstration of hyperspectral image exploitation for military applications. In Remote Sensing-Applications; IntechOpen: London, UK, 2012. [Google Scholar]
  57. Tralli, D.M.; Blom, R.G.; Zlotnicki, V.; Donnellan, A.; Evans, D.L. Satellite remote sensing of earthquake, volcano, flood, landslide and coastal inundation hazards. ISPRS J. Photogramm. Remote Sens. 2005, 59, 185–198. [Google Scholar] [CrossRef]
  58. Veraverbeke, S.; Dennison, P.; Gitas, I.; Hulley, G.; Kalashnikova, O.; Katagis, T.; Kuai, L.; Meng, R.; Roberts, D.; Stavros, N. Hyperspectral remote sensing of fire: State-of-the-art and future perspectives. Remote Sens. Environ. 2018, 216, 105–121. [Google Scholar] [CrossRef]
  59. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  60. Heylen, R.; Parente, M.; Gader, P. A review of nonlinear hyperspectral unmixing methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1844–1868. [Google Scholar] [CrossRef]
  61. Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognit. 2010, 43, 2367–2379. [Google Scholar] [CrossRef] [Green Version]
  62. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4085–4098. [Google Scholar] [CrossRef] [Green Version]
  63. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Hyperspectral image segmentation using a new Bayesian approach with active learning. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3947–3960. [Google Scholar] [CrossRef] [Green Version]
  64. Kumar, S.; Ghosh, J.; Crawford, M.M. Best-bases feature extraction algorithms for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1368–1379. [Google Scholar] [CrossRef] [Green Version]
  65. Ren, J.; Zabalza, J.; Marshall, S.; Zheng, J. Effective feature extraction and data reduction in remote sensing using hyperspectral imaging [applications corner]. IEEE Signal Process. Mag. 2014, 31, 149–154. [Google Scholar] [CrossRef] [Green Version]
  66. Bruce, L.M.; Koger, C.H.; Li, J. Dimensionality reduction of hyperspectral data using discrete wavelet transform feature extraction. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2331–2338. [Google Scholar] [CrossRef]
  67. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A. Fast dimensionality reduction and classification of hyperspectral images with extreme learning machines. J. Real-Time Image Process. 2018, 15, 439–462. [Google Scholar] [CrossRef]
  68. Li, J.; Zhang, H.; Zhang, L.; Ma, L. Hyperspectral anomaly detection by the use of background joint sparse representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2523–2533. [Google Scholar] [CrossRef]
  69. Ertürk, A.; Iordache, M.D.; Plaza, A. Sparse unmixing-based change detection for multitemporal hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 9, 708–719. [Google Scholar] [CrossRef]
  70. Ertürk, A.; Plaza, A. Informative change detection by unmixing for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1252–1256. [Google Scholar] [CrossRef]
  71. Zhou, J.; Kwan, C.; Ayhan, B.; Eismann, M.T. A novel cluster kernel RX algorithm for anomaly and change detection using hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6497–6504. [Google Scholar] [CrossRef]
  72. Li, C.; Gao, L.; Wu, Y.; Zhang, B.; Plaza, J.; Plaza, A. A real-time unsupervised background extraction-based target detection method for hyperspectral imagery. J. Real-Time Image Process. 2018, 15, 597–615. [Google Scholar] [CrossRef]
  73. Bernabé, S.; García, C.; Igual, F.D.; Botella, G.; Prieto-Matias, M.; Plaza, A. Portability Study of an OpenCL Algorithm for Automatic Target Detection in Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9499–9511. [Google Scholar] [CrossRef]
  74. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced spectral classifiers for hyperspectral images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32. [Google Scholar] [CrossRef] [Green Version]
  75. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317. [Google Scholar] [CrossRef]
  76. Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999. [Google Scholar] [CrossRef] [Green Version]
  77. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in hyperspectral image classification: Earth monitoring with statistical learning methods. IEEE Signal Process. Mag. 2013, 31, 45–54. [Google Scholar] [CrossRef] [Green Version]
  78. Pal, M. Multinomial logistic regression-based feature selection for hyperspectral data. Int. J. Appl. Earth Obs. Geoinf. 2012, 14, 214–220. [Google Scholar] [CrossRef]
  79. Borges, J.S.; Bioucas-Dias, J.M.; Marçal, A.R. Fast Sparse Multinomial Regression Applied to Hyperspectral Data; Springer: Berlin, Germany, 2006; pp. 700–709. [Google Scholar]
  80. Wu, Z.; Wang, Q.; Plaza, A.; Li, J.; Sun, L.; Wei, Z. Real-time implementation of the sparse multinomial logistic regression for hyperspectral image classification on GPUs. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1456–1460. [Google Scholar]
  81. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Spectral–spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields. IEEE Trans. Geosci. Remote Sens. 2011, 50, 809–823. [Google Scholar] [CrossRef]
  82. Khodadadzadeh, M.; Li, J.; Plaza, A.; Bioucas-Dias, J.M. A subspace-based multinomial logistic regression for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2105–2109. [Google Scholar] [CrossRef]
  83. Bioucas-Dias, J.; Figueiredo, M. Logistic Regression via Variable Splitting and Augmented Lagrangian Tools; Instituto Superior Técnico, TULisbon: Lisbon, Portugal, 2009. [Google Scholar]
  84. Richards, J.A.; Jia, X. Using suitable neighbors to augment the training set in hyperspectral maximum likelihood classification. IEEE Geosci. Remote Sens. Lett. 2008, 5, 774–777. [Google Scholar] [CrossRef]
  85. Waske, B.; Benediktsson, J.A. Pattern recognition and classification. In Encyclopedia of Remote Sensing; Springer: Berlin, Germany, 2014; pp. 503–509. [Google Scholar]
  86. Kuching, S. The performance of maximum likelihood, spectral angle mapper, neural network and decision tree classifiers in hyperspectral image analysis. J. Comput. Sci. 2007, 3, 419–423. [Google Scholar]
  87. Wang, Y.; Li, J. Feature-selection ability of the decision-tree algorithm and the impact of feature-selection/extraction on decision-tree results based on hyperspectral data. Int. J. Remote Sens. 2008, 29, 2993–3010. [Google Scholar] [CrossRef]
  88. Delalieux, S.; Somers, B.; Haest, B.; Spanhove, T.; Borre, J.V.; Mücher, C. Heathland conservation status mapping through integration of hyperspectral mixture analysis and decision tree classifiers. Remote Sens. Environ. 2012, 126, 222–231. [Google Scholar] [CrossRef]
  89. Joelsson, S.R.; Benediktsson, J.A.; Sveinsson, J.R. Random forest classifiers for hyperspectral data. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 29 July 2005; Volume 1, p. 4. [Google Scholar]
  90. Chan, J.C.W.; Paelinckx, D. Evaluation of Random Forest and Adaboost tree-based ensemble classification and spectral band selection for ecotope mapping using airborne hyperspectral imagery. Remote Sens. Environ. 2008, 112, 2999–3011. [Google Scholar] [CrossRef]
  91. Schapire, R.E.; Singer, Y. Improved boosting algorithms using confidence-rated predictions. Mach. Learn. 1999, 37, 297–336. [Google Scholar] [CrossRef] [Green Version]
  92. Fu, Z.; Caelli, T.; Liu, N.; Robles-Kelly, A. Boosted band ratio feature selection for hyperspectral image classification. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 24 August 2006; Volume 1, pp. 1059–1062. [Google Scholar]
  93. Freund, Y.; Schapire, R.E. Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, 3–6 July 1996; Volume 96, pp. 148–156. [Google Scholar]
  94. Kawaguchi, S.; Nishii, R. Hyperspectral image classification by bootstrap AdaBoost with random decision stumps. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3845–3851. [Google Scholar] [CrossRef]
  95. Ul Haq, Q.S.; Tao, L.; Yang, S. Neural network based adaboosting approach for hyperspectral data classification. In Proceedings of the 2011 International Conference on Computer Science and Network Technology, Harbin, China, 26 December 2011; Volume 1, pp. 241–245. [Google Scholar]
  96. Ramzi, P.; Samadzadegan, F.; Reinartz, P. Classification of hyperspectral data using an AdaBoostSVM technique applied on band clusters. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 2066–2079. [Google Scholar] [CrossRef]
  97. Lawrence, R.; Bunn, A.; Powell, S.; Zambon, M. Classification of remotely sensed imagery using stochastic gradient boosting as a refinement of classification tree analysis. Remote Sens. Environ. 2004, 90, 331–336. [Google Scholar] [CrossRef]
  98. Filippi, A.M.; Güneralp, İ.; Randall, J. Hyperspectral remote sensing of aboveground biomass on a river meander bend using multivariate adaptive regression splines and stochastic gradient boosting. Remote Sens. Lett. 2014, 5, 432–441. [Google Scholar] [CrossRef]
  99. Samat, A.; Du, P.; Liu, S.; Li, J.; Cheng, L. E2LMs: Ensemble Extreme Learning Machines for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1060–1069. [Google Scholar] [CrossRef]
  100. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  101. Colgan, M.; Baldeck, C.; Féret, J.B.; Asner, G. Mapping savanna tree species at ecosystem scales using support vector machine classification and BRDF correction on airborne hyperspectral and LiDAR data. Remote Sens. 2012, 4, 3462–3480. [Google Scholar] [CrossRef] [Green Version]
  102. Goel, P.K.; Prasher, S.O.; Patel, R.M.; Landry, J.A.; Bonnell, R.; Viau, A.A. Classification of hyperspectral data by decision trees and artificial neural networks to identify weed stress and nitrogen status of corn. Comput. Electron. Agric. 2003, 39, 67–93. [Google Scholar] [CrossRef]
  103. Uno, Y.; Prasher, S.; Lacroix, R.; Goel, P.; Karimi, Y.; Viau, A.; Patel, R. Artificial neural networks to predict corn yield from Compact Airborne Spectrographic Imager data. Comput. Electron. Agric. 2005, 47, 149–161. [Google Scholar] [CrossRef]
  104. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147. [Google Scholar] [CrossRef]
  105. Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.J.; Pla, F. Deep pyramidal residual networks for spectral–spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 740–754. [Google Scholar] [CrossRef]
  106. Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.; Li, J.; Pla, F. Capsule networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2145–2160. [Google Scholar] [CrossRef]
  107. Verleysen, M.; François, D. The Curse of Dimensionality in Data Mining and Time Series Prediction; Springer: Berlin, Germany, 2005; pp. 758–770. [Google Scholar]
  108. Lunga, D.; Prasad, S.; Crawford, M.M.; Ersoy, O. Manifold-learning-based feature extraction for classification of hyperspectral data: A review of advances in manifold learning. IEEE Signal Process. Mag. 2013, 31, 55–66. [Google Scholar] [CrossRef]
  109. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Li, J.; Plaza, A. Active learning with convolutional neural networks for hyperspectral image classification using a new bayesian approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461. [Google Scholar] [CrossRef]
  110. Haut, J.M.; Bernabé, S.; Paoletti, M.E.; Fernandez-Beltran, R.; Plaza, A.; Plaza, J. Low–High-Power Consumption Architectures for Deep-Learning Models Applied to Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2018, 16, 776–780. [Google Scholar] [CrossRef]
  111. Camps-Valls, G.; Benediktsson, J.A.; Bruzzone, L.; Chanussot, J. Introduction to the issue on advances in remote sensing image processing. IEEE J. Sel. Top. Signal Process. 2011, 5, 365–369. [Google Scholar] [CrossRef]
  112. Khorram, S.; van der Wiele, C.F.; Koch, F.H.; Nelson, S.A.; Potts, M.D. Future trends in remote sensing. In Principles of Applied Remote Sensing; Springer: Berlin, Germany, 2016; pp. 277–285. [Google Scholar]
  113. Neeck, S.P.; Magner, T.J.; Paules, G.E. NASA’s small satellite missions for Earth observation. Acta Astronaut. 2005, 56, 187–192. [Google Scholar] [CrossRef]
  114. Sandau, R. Status and trends of small satellite missions for Earth observation. Acta Astronaut. 2010, 66, 1–12. [Google Scholar] [CrossRef]
  115. Plaza, A.; Valencia, D.; Plaza, J.; Martinez, P. Commodity cluster-based parallel processing of hyperspectral imagery. J. Parallel Distrib. Comput. 2006, 66, 345–358. [Google Scholar] [CrossRef]
  116. Bernabé, S.; Plaza, A. Commodity cluster-based parallel implementation of an automatic target generation process for hyperspectral image analysis. In Proceedings of the 2011 IEEE 17th International Conference on Parallel and Distributed Systems, Tainan, Taiwan, 9 December 2011; pp. 1038–1043. [Google Scholar]
  117. Plaza, A.; Du, Q.; Chang, Y.L.; King, R.L. High performance computing for hyperspectral remote sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 528–544. [Google Scholar] [CrossRef]
  118. Joyce, K.E.; Belliss, S.E.; Samsonov, S.V.; McNeill, S.J.; Glassey, P.J. A review of the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters. Prog. Phys. Geogr. 2009, 33, 183–207. [Google Scholar] [CrossRef] [Green Version]
  119. Stellman, C.M.; Hazel, G.; Bucholtz, F.; Michalowicz, J.V.; Stocker, A.D.; Schaaf, W. Real-time hyperspectral detection and cuing. Opt. Eng. 2000, 39, 1928–1935. [Google Scholar] [CrossRef]
  120. Chang, C.I.; Ren, H.; Chiang, S.S. Real-time processing algorithms for target detection and classification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 760–768. [Google Scholar] [CrossRef]
  121. Du, Q. Unsupervised real-time constrained linear discriminant analysis to hyperspectral image classification. Pattern Recognit. 2007, 40, 1510–1519. [Google Scholar] [CrossRef]
  122. Zhao, C.; Wang, Y.; Qi, B.; Wang, J. Global and local real-time anomaly detectors for hyperspectral remote sensing imagery. Remote Sens. 2015, 7, 3966–3985. [Google Scholar] [CrossRef] [Green Version]
  123. Díaz, M.; Guerra, R.; Horstrand, P.; Martel, E.; López, S.; López, J.F.; Sarmiento, R. Real-time hyperspectral image compression onto embedded GPUs. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2792–2809. [Google Scholar] [CrossRef]
  124. Plaza, A.J.; Chang, C.I. High Performance Computing in Remote Sensing; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  125. Plaza, A.; Chang, C.I. Clusters versus FPGA for parallel processing of hyperspectral imagery. Int. J. High Perform. Comput. Appl. 2008, 22, 366–385. [Google Scholar] [CrossRef]
  126. Li, C.; Gao, L.; Plaza, A.; Zhang, B. FPGA implementation of a maximum simplex volume algorithm for endmember extraction from remotely sensed hyperspectral images. J. Real-Time Image Process. 2019, 16, 1681–1694. [Google Scholar] [CrossRef]
  127. Maurer, P.; Glumb, A.J. On-Board Processing of Hyperspectral Data. U.S. Patent 15/966,470, 2019. [Google Scholar]
  128. Tadono, T.; Shimada, M.; Murakami, H.; Takaku, J. Calibration of PRISM and AVNIR-2 onboard ALOS “Daichi”. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4042–4050. [Google Scholar] [CrossRef]
  129. Henriksen, M.B.; Garrett, J.; Prentice, E.F.; Stahl, A.; Johansen, T.; Sigernes, F. Real-Time Corrections for a Low-Cost Hyperspectral Instrument. In Proceedings of the 2019 10th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 26 September 2019; pp. 1–5. [Google Scholar]
  130. Rodriguez, A.; Santos, L.; Sarmiento, R.; De La Torre, E. Scalable hardware-based on-board processing for run-time adaptive lossless hyperspectral compression. IEEE Access 2019, 7, 10644–10652. [Google Scholar] [CrossRef]
  131. Liu, D.; Zhou, G.; Huang, J.; Zhang, R.; Shu, L.; Zhou, X.; Xin, C.S. On-Board Georeferencing Using FPGA-Based Optimized Second-Order Polynomial Equation. Remote Sens. 2019, 11, 124. [Google Scholar] [CrossRef] [Green Version]
  132. Du, Q.; Nekovei, R. Fast real-time onboard processing of hyperspectral imagery for detection and classification. J. Real-Time Image Process. 2009, 4, 273–286. [Google Scholar] [CrossRef]
  133. Qi, B.; Shi, H.; Zhuang, Y.; Chen, H.; Chen, L. On-board, real-time preprocessing system for optical remote-sensing imagery. Sensors 2018, 18, 1328. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  134. Bishop, C.M. Pattern Recognition and Machine Learning; Springer Science + Business Media: Berlin, Germany, 2006. [Google Scholar]
  135. Prabhakar, T.N.; Xavier, G.; Geetha, P.; Soman, K. Spatial preprocessing based multinomial logistic regression for hyperspectral image classification. Procedia Comput. Sci. 2015, 46, 1817–1826. [Google Scholar] [CrossRef] [Green Version]
  136. Scikit Learn. Generalized Linear Models. Logistic Regression. Available online: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression (accessed on 1 February 2019).
  137. Breiman, L.; Friedman, J.; Olshen, R.; Stone, C. Classification and Regression Trees; Chapman & Hall, Taylor & Francis Group: Abingdon, UK, 1984. [Google Scholar]
  138. Géron, A. Hands-on Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems; O’Reilly Media, Inc.: Boston, MA, USA, 2017. [Google Scholar]
  139. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  140. Scikit Learn. Ensemble Methods. Forests of Randomized Trees. Available online: https://scikit-learn.org/stable/modules/ensemble.html#forest (accessed on 1 February 2019).
  141. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 45, 1189–1232. [Google Scholar] [CrossRef]
  142. LightGBM. LightGBM Docs. LGBMClassifier. Available online: https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html (accessed on 1 February 2019).
  143. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  144. Scikit Learn. Support Vector Machines. Mathematical Formulation. Available online: https://scikit-learn.org/stable/modules/svm.html#svm-mathematical-formulation (accessed on 1 February 2019).
  145. PyTorch. PyTorch Docs. Neural Network. Available online: https://pytorch.org/docs/stable/nn.html#module-torch.nn (accessed on 1 February 2019).
  146. GIC. Hyperspectral Remote Sensing Scenes, Grupo de Inteligencia Computacional de la Universidad del País Vasco. Available online: http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 1 February 2019).
  147. Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; van Kasteren, T.; Liao, W.; Bellens, R.; Pižurica, A.; Gautama, S. Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418. [Google Scholar] [CrossRef]
  148. IEEE. IEEE GRSS Data Fusion Contest. 2013. Available online: http://www.grss-ieee.org/community/technical-committees/data-fusion/2013-ieee-grss-data-fusion-contest/ (accessed on 1 February 2019).
Figure 1. Decision tree example.
Figure 2. Gradient boosting decision trees (GBDT) results accumulation with one-vs-all approach.
Figure 3. Accuracy comparison for different training set sizes on (a) IP, (b) UP, (c) KSC and (d) SV.
Figure 4. Classification maps obtained for the considered data sets by the different models: (a1,a2) Ground truth, (b1,b2) MLR, (c1,c2) RF, (d1,d2) GBDT, (e1,e2) SVM, (f1,f2) MLP and (g1,g2) CNN1D.
Figure 5. Size of the trained models in bytes.
Figure 6. Number of operations performed during the inference stage.
Table 1. Summary of the size and computational requirements of the considered models.
Size DependenciesComputation Dependencies
Pre-ProcessingData SetFeaturesClassesData SetFeaturesClasses
MLRstandardization-●●●●-●●●●
RF---●●●--
GBDT---●●--●●
SVMstandardization●●●●●●●●●●●●●●
MLPstandardization-●●-●●
CNN1Dstandardization-●●●-●●●
Table 2. Number of samples of the Indian Pines (IP), University of Pavia (UP), Salinas Valley (SV), Kennedy Space Center (KSC) and University of Houston (UH) hyperspectral data sets.
Indian Pines (IP)
Land-cover type | Samples
Background | 10,776
Alfalfa | 46
Corn-notill | 1428
Corn-min | 830
Corn | 237
Grass/Pasture | 483
Grass/Trees | 730
Grass/pasture-mowed | 28
Hay-windrowed | 478
Oats | 20
Soybeans-notill | 972
Soybeans-min | 2455
Soybean-clean | 593
Wheat | 205
Woods | 1265
Bldg-Grass-Tree-Drives | 386
Stone-steel towers | 93
Total samples | 21,025

University of Pavia (UP)
Land-cover type | Samples
Background | 164,624
Asphalt | 6631
Meadows | 18,649
Gravel | 2099
Trees | 3064
Painted metal sheets | 1345
Bare Soil | 5029
Bitumen | 1330
Self-Blocking Bricks | 3682
Shadows | 947
Total samples | 207,400

Salinas Valley (SV)
Land-cover type | Samples
Background | 56,975
Brocoli-green-weeds-1 | 2009
Brocoli-green-weeds-2 | 3726
Fallow | 1976
Fallow-rough-plow | 1394
Fallow-smooth | 2678
Stubble | 3959
Celery | 3579
Grapes-untrained | 11,271
Soil-vinyard-develop | 6203
Corn-senesced-green-weeds | 3278
Lettuce-romaine-4wk | 1068
Lettuce-romaine-5wk | 1927
Lettuce-romaine-6wk | 916
Lettuce-romaine-7wk | 1070
Vinyard-untrained | 7268
Vinyard-vertical-trellis | 1807
Total samples | 111,104

Kennedy Space Center (KSC)
Land-cover type | Samples
Background | 309,157
Scrub | 761
Willow-swamp | 243
CP-hammock | 256
Slash-pine | 252
Oak/Broadleaf | 161
Hardwood | 229
Swap | 105
Graminoid-marsh | 431
Spartina-marsh | 520
Cattail-marsh | 404
Salt-marsh | 419
Mud-flats | 503
Water | 927
Total samples | 314,368

University of Houston (UH)
Land-cover type | Samples train | Samples test
Background | 649,816 | —
Grass-healthy | 198 | 1053
Grass-stressed | 190 | 1064
Grass-synthetic | 192 | 505
Tree | 188 | 1056
Soil | 186 | 1056
Water | 182 | 143
Residential | 196 | 1072
Commercial | 191 | 1053
Road | 193 | 1059
Highway | 191 | 1036
Railway | 181 | 1054
Parking-lot1 | 192 | 1041
Parking-lot2 | 184 | 285
Tennis-court | 181 | 247
Running-track | 187 | 473
Total samples | 2832 | 12,197
Table 3. Selected training parameters of the different tested models.
MLRRFGBDTSVMMLPCNN1D
CnmfdnmdC γ h.l.fqpf1f2
IP1200210102002020100 2 9 1432024510016
PU1020021010150303010 2 4 78202451009
KSC10020021010300305200 2 1 1272024510013
SV1020024060150802510 2 4 1462024510016
HU 1 e 5 200210401503035 1 e 5 2 6 1062024510015
Table 4. Final configurations of random forest (RF), gradient boosting decision trees (GBDT) and support vector machine (SVM) models.
RFGBDTSVM
ft.cl.TreesNodesLeavesDepthTreesNodesLeavesDepths.v.
IP2001620028,66328,8638.39320054,03657,2365.651538
PU103920033,15933,3598.63135036,41555,64684278
KSC1761320015,06715,2678.9390017,88375,8142.64782
SV2041620023,97924,1798.46240050,25352,6536.025413
HU1441520044,59744,78712.34225067,60169,8517.312832
Table 5. IP data set results.
Class | MLR | RF | GBDT | SVM | MLP | CNN1D
1 | 22.5 ± 6.71 | 18.0 ± 7.31 | 40.0 ± 5.0 | 40.5 ± 9.0 | 40.5 ± 20.27 | 33.5 ± 13.93
2 | 75.04 ± 1.17 | 62.73 ± 2.37 | 76.623 ± 1.967 | 80.3 ± 1.12 | 79.32 ± 2.6 | 81.52 ± 1.51
3 | 57.17 ± 1.76 | 50.14 ± 1.96 | 65.354 ± 1.759 | 70.06 ± 1.74 | 69.89 ± 3.22 | 68.07 ± 2.9
4 | 45.94 ± 3.64 | 30.5 ± 3.97 | 40.297 ± 2.356 | 67.82 ± 6.08 | 59.9 ± 6.03 | 60.99 ± 9.05
5 | 89.68 ± 2.32 | 86.18 ± 2.96 | 90.414 ± 1.338 | 93.19 ± 2.41 | 89.39 ± 1.79 | 90.27 ± 2.22
6 | 95.56 ± 1.5 | 94.78 ± 1.16 | 96.039 ± 1.011 | 95.97 ± 1.33 | 97.13 ± 1.41 | 97.39 ± 0.44
7 | 42.5 ± 16.7 | 58.33 ± 5.27 | 32.5 ± 23.482 | 71.67 ± 7.17 | 60.83 ± 10.07 | 53.33 ± 17.95
8 | 98.72 ± 0.42 | 97.74 ± 0.39 | 98.133 ± 0.481 | 97.3 ± 1.31 | 98.08 ± 0.57 | 99.16 ± 0.51
9 | 21.18 ± 7.98 | 0.0 ± 0.0 | 14.118 ± 8.804 | 47.06 ± 9.84 | 57.65 ± 15.96 | 50.59 ± 10.26
10 | 66.55 ± 2.76 | 66.31 ± 4.71 | 75.84 ± 4.367 | 75.62 ± 1.19 | 79.11 ± 0.44 | 75.38 ± 3.68
11 | 80.24 ± 1.27 | 89.08 ± 1.27 | 87.877 ± 1.18 | 84.99 ± 1.08 | 83.56 ± 1.32 | 85.05 ± 0.53
12 | 60.59 ± 3.36 | 47.96 ± 6.57 | 55.604 ± 1.551 | 76.83 ± 4.51 | 73.31 ± 1.97 | 83.25 ± 3.31
13 | 98.29 ± 1.02 | 92.8 ± 2.74 | 93.371 ± 1.933 | 98.86 ± 1.2 | 99.2 ± 0.28 | 99.2 ± 0.69
14 | 93.4 ± 0.64 | 95.61 ± 0.99 | 95.967 ± 0.607 | 94.07 ± 0.87 | 95.13 ± 0.3 | 94.89 ± 1.29
15 | 65.71 ± 2.06 | 40.91 ± 0.81 | 56.839 ± 2.016 | 64.8 ± 1.35 | 66.08 ± 2.68 | 69.06 ± 2.94
16 | 84.75 ± 3.1 | 82.5 ± 2.09 | 88.5 ± 3.102 | 87.75 ± 2.89 | 89.0 ± 4.77 | 89.0 ± 2.67
OA | 77.81 ± 0.42 | 75.32 ± 0.44 | 80.982 ± 0.783 | 83.46 ± 0.35 | 83.04 ± 0.44 | 83.93 ± 0.5
AA | 68.61 ± 1.51 | 60.22 ± 0.57 | 69.217 ± 1.627 | 77.92 ± 0.88 | 77.38 ± 2.45 | 76.92 ± 1.93
K(x100) | 74.54 ± 0.47 | 71.42 ± 0.53 | 78.16 ± 0.897 | 81.08 ± 0.41 | 80.62 ± 0.51 | 81.61 ± 0.59
Table 6. UP data set results.
Class | MLR | RF | GBDT | SVM | MLP | CNN1D
1 | 92.41 ± 0.86 | 91.35 ± 0.98 | 90.044 ± 0.627 | 93.82 ± 0.62 | 94.31 ± 1.09 | 95.37 ± 1.3
2 | 96.02 ± 0.21 | 98.25 ± 0.18 | 96.571 ± 0.425 | 98.41 ± 0.23 | 97.98 ± 0.39 | 98.16 ± 0.27
3 | 72.75 ± 1.13 | 61.51 ± 3.47 | 74.952 ± 1.422 | 78.8 ± 1.33 | 80.38 ± 1.12 | 80.55 ± 1.91
4 | 88.17 ± 0.74 | 87.2 ± 1.25 | 90.986 ± 1.113 | 93.06 ± 0.67 | 93.72 ± 1.02 | 95.43 ± 1.54
5 | 99.41 ± 0.3 | 98.43 ± 0.56 | 99.026 ± 0.403 | 98.86 ± 0.25 | 99.36 ± 0.48 | 99.8 ± 0.17
6 | 77.5 ± 0.72 | 45.2 ± 1.52 | 86 ± 0.837 | 87.97 ± 0.62 | 91.58 ± 1.0 | 92.26 ± 1.54
7 | 54.77 ± 4.38 | 75.27 ± 4.56 | 84.194 ± 1.245 | 84.58 ± 1.57 | 85.23 ± 2.51 | 89.29 ± 3.78
8 | 86.05 ± 0.7 | 88.2 ± 1.03 | 87.827 ± 0.805 | 89.67 ± 0.44 | 87.32 ± 1.5 | 88.07 ± 1.59
9 | 99.7 ± 0.06 | 99.41 ± 0.29 | 99.906 ± 0.088 | 99.53 ± 0.3 | 99.62 ± 0.17 | 99.79 ± 0.2
OA | 89.63 ± 0.12 | 86.8 ± 0.25 | 91.869 ± 0.181 | 93.98 ± 0.15 | 94.26 ± 0.18 | 94.92 ± 0.22
AA | 85.2 ± 0.54 | 82.76 ± 0.43 | 89.945 ± 0.211 | 91.63 ± 0.38 | 92.17 ± 0.16 | 93.19 ± 0.47
K(x100) | 86.13 ± 0.17 | 81.98 ± 0.35 | 89.195 ± 0.228 | 91.99 ± 0.2 | 92.37 ± 0.24 | 93.25 ± 0.3
Table 7. KSC data set results.
Class | MLR | RF | GBDT | SVM | MLP | CNN1D
1 | 95.92 ± 1.22 | 95.49 ± 1.21 | 95.425 ± 1.188 | 95.12 ± 0.77 | 96.23 ± 0.43 | 97.03 ± 1.19
2 | 92.27 ± 3.28 | 88.02 ± 2.35 | 86.377 ± 2.013 | 90.92 ± 3.56 | 88.89 ± 2.78 | 91.3 ± 4.26
3 | 87.25 ± 4.77 | 86.79 ± 2.89 | 86.697 ± 2.228 | 84.95 ± 3.46 | 90.92 ± 2.98 | 92.29 ± 1.89
4 | 68.09 ± 4.61 | 71.44 ± 4.77 | 64.372 ± 1.705 | 69.4 ± 6.88 | 72.74 ± 3.35 | 81.21 ± 8.79
5 | 75.18 ± 3.13 | 57.66 ± 5.4 | 55.036 ± 8.644 | 63.94 ± 6.95 | 62.04 ± 4.33 | 76.93 ± 5.09
6 | 74.97 ± 3.33 | 51.79 ± 3.84 | 61.641 ± 4.344 | 64.92 ± 7.66 | 66.87 ± 3.93 | 78.36 ± 5.54
7 | 80.67 ± 5.9 | 78.22 ± 4.13 | 82.444 ± 4.411 | 71.33 ± 6.06 | 83.33 ± 7.95 | 85.56 ± 8.46
8 | 91.77 ± 1.59 | 83.65 ± 2.9 | 86.975 ± 2.541 | 91.77 ± 2.62 | 92.7 ± 2.33 | 93.62 ± 3.49
9 | 97.01 ± 0.92 | 94.52 ± 2.25 | 93.394 ± 3.033 | 94.75 ± 1.59 | 97.6 ± 0.7 | 98.55 ± 0.99
10 | 95.99 ± 0.67 | 88.78 ± 0.79 | 93.198 ± 2.172 | 94.83 ± 1.5 | 97.44 ± 1.36 | 98.26 ± 0.58
11 | 98.1 ± 1.11 | 97.82 ± 1.17 | 94.678 ± 3.012 | 96.92 ± 1.59 | 98.26 ± 0.7 | 97.98 ± 0.88
12 | 95.09 ± 0.42 | 89.81 ± 1.57 | 93.505 ± 1.12 | 90.75 ± 2.81 | 93.83 ± 1.08 | 96.45 ± 1.58
13 | 100.0 ± 0.0 | 99.62 ± 0.24 | 99.67 ± 0.129 | 99.16 ± 0.46 | 100.0 ± 0.0 | 99.92 ± 0.06
OA | 92.69 ± 0.23 | 88.88 ± 0.43 | 89.506 ± 0.604 | 90.51 ± 0.56 | 92.42 ± 0.23 | 94.59 ± 0.32
AA | 88.64 ± 0.61 | 83.36 ± 0.83 | 84.109 ± 0.973 | 85.29 ± 1.22 | 87.76 ± 0.4 | 91.34 ± 0.59
K(x100) | 91.86 ± 0.26 | 87.61 ± 0.48 | 88.308 ± 0.674 | 89.43 ± 0.62 | 91.55 ± 0.26 | 93.97 ± 0.35
Table 8. SV data set results.
Class | MLR | RF | GBDT | SVM | MLP | CNN1D
1 | 99.19 ± 0.47 | 99.64 ± 0.15 | 99.514 ± 0.289 | 99.44 ± 0.36 | 99.64 ± 0.46 | 99.87 ± 0.17
2 | 99.93 ± 0.06 | 99.86 ± 0.09 | 99.827 ± 0.074 | 99.72 ± 0.15 | 99.83 ± 0.21 | 99.86 ± 0.22
3 | 98.85 ± 0.29 | 99.08 ± 0.53 | 98.775 ± 0.37 | 99.51 ± 0.12 | 99.47 ± 0.2 | 99.63 ± 0.25
4 | 99.39 ± 0.31 | 99.54 ± 0.27 | 99.554 ± 0.26 | 99.59 ± 0.11 | 99.62 ± 0.12 | 99.46 ± 0.06
5 | 99.19 ± 0.26 | 97.96 ± 0.49 | 98.109 ± 0.332 | 98.71 ± 0.51 | 99.11 ± 0.32 | 99.0 ± 0.36
6 | 99.94 ± 0.03 | 99.72 ± 0.11 | 99.624 ± 0.292 | 99.78 ± 0.12 | 99.84 ± 0.09 | 99.9 ± 0.07
7 | 99.74 ± 0.07 | 99.34 ± 0.18 | 99.559 ± 0.164 | 99.61 ± 0.16 | 99.71 ± 0.12 | 99.7 ± 0.1
8 | 88.07 ± 0.11 | 84.26 ± 0.42 | 85.507 ± 0.349 | 89.11 ± 0.34 | 88.84 ± 0.7 | 90.36 ± 1.01
9 | 99.79 ± 0.07 | 99.01 ± 0.24 | 99.219 ± 0.165 | 99.66 ± 0.21 | 99.88 ± 0.07 | 99.85 ± 0.13
10 | 96.34 ± 0.52 | 91.35 ± 0.61 | 93.473 ± 0.737 | 95.28 ± 0.87 | 96.32 ± 0.79 | 97.63 ± 0.57
11 | 96.9 ± 1.0 | 94.2 ± 1.23 | 94.782 ± 0.532 | 98.0 ± 0.71 | 97.75 ± 0.76 | 98.42 ± 0.85
12 | 99.79 ± 0.03 | 98.42 ± 0.67 | 99.239 ± 0.507 | 99.55 ± 0.34 | 99.8 ± 0.11 | 99.93 ± 0.11
13 | 99.05 ± 0.33 | 98.08 ± 0.67 | 97.818 ± 0.957 | 98.5 ± 0.52 | 98.55 ± 0.7 | 99.25 ± 0.56
14 | 95.89 ± 0.17 | 91.48 ± 1.29 | 95.202 ± 0.997 | 95.02 ± 0.81 | 98.17 ± 0.55 | 98.05 ± 0.85
15 | 66.85 ± 0.18 | 60.42 ± 1.32 | 74.91 ± 0.524 | 71.72 ± 0.69 | 74.61 ± 1.66 | 80.02 ± 2.35
16 | 98.45 ± 0.36 | 97.31 ± 0.19 | 97.406 ± 1.017 | 98.35 ± 0.17 | 98.7 ± 0.72 | 98.94 ± 0.53
OA | 92.45 ± 0.07 | 90.08 ± 0.17 | 92.544 ± 0.079 | 93.2 ± 0.17 | 93.75 ± 0.1 | 94.91 ± 0.16
AA | 96.09 ± 0.1 | 94.35 ± 0.16 | 95.782 ± 0.056 | 96.35 ± 0.15 | 96.86 ± 0.02 | 97.49 ± 0.15
K(x100) | 91.58 ± 0.07 | 88.93 ± 0.19 | 91.696 ± 0.087 | 92.42 ± 0.19 | 93.03 ± 0.11 | 94.33 ± 0.18
Table 9. UH data set results.
Class | MLR | RF | GBDT | SVM | MLP | CNN1D
1 | 82.24 ± 0.35 | 82.53 ± 0.06 | 82.336 ± 0.0 | 82.24 ± 0.0 | 81.29 ± 0.32 | 82.55 ± 0.54
2 | 81.75 ± 1.13 | 83.31 ± 0.26 | 83.177 ± 0.0 | 80.55 ± 0.0 | 82.12 ± 1.18 | 86.9 ± 3.87
3 | 99.49 ± 0.2 | 97.94 ± 0.1 | 97.228 ± 0.0 | 100.0 ± 0.0 | 99.6 ± 0.13 | 99.88 ± 0.16
4 | 90.81 ± 3.67 | 91.59 ± 0.15 | 94.981 ± 0.0 | 92.52 ± 0.0 | 88.92 ± 0.55 | 92.8 ± 3.34
5 | 96.88 ± 0.08 | 96.84 ± 0.13 | 93.277 ± 0.0 | 98.39 ± 0.0 | 97.35 ± 0.32 | 98.88 ± 0.3
6 | 94.27 ± 0.28 | 98.88 ± 0.34 | 90.21 ± 0.0 | 95.1 ± 0.0 | 94.55 ± 0.28 | 95.94 ± 1.68
7 | 71.88 ± 1.26 | 75.24 ± 0.15 | 73.414 ± 0.0 | 76.31 ± 0.0 | 76.03 ± 1.74 | 86.49 ± 1.29
8 | 61.8 ± 0.68 | 33.2 ± 0.15 | 35.138 ± 0.0 | 39.13 ± 0.0 | 64.27 ± 9.17 | 78.02 ± 6.6
9 | 64.82 ± 0.23 | 69.07 ± 0.4 | 68.839 ± 0.0 | 73.84 ± 0.0 | 75.09 ± 2.27 | 78.7 ± 4.31
10 | 46.18 ± 0.35 | 43.59 ± 0.31 | 41.699 ± 0.0 | 51.93 ± 0.0 | 47.28 ± 1.07 | 68.22 ± 10.41
11 | 73.51 ± 0.33 | 69.94 ± 0.16 | 72.391 ± 0.0 | 78.65 ± 0.0 | 76.11 ± 1.07 | 82.13 ± 1.62
12 | 67.74 ± 0.26 | 54.62 ± 0.8 | 69.164 ± 0.0 | 69.03 ± 0.05 | 72.93 ± 3.56 | 90.85 ± 2.57
13 | 69.75 ± 0.72 | 60.0 ± 0.59 | 67.018 ± 0.0 | 69.47 ± 0.0 | 72.28 ± 3.6 | 74.67 ± 3.49
14 | 99.35 ± 0.49 | 99.27 ± 0.47 | 99.595 ± 0.0 | 100.0 ± 0.0 | 99.35 ± 0.41 | 99.11 ± 0.3
15 | 94.38 ± 0.89 | 97.59 ± 0.32 | 95.137 ± 0.0 | 98.1 ± 0.0 | 98.1 ± 0.48 | 98.48 ± 0.16
OA | 76.35 ± 0.27 | 73.0 ± 0.07 | 74.182 ± 0.0 | 76.96 ± 0.0 | 78.61 ± 0.44 | 85.95 ± 0.94
AA | 79.66 ± 0.2 | 76.91 ± 0.06 | 77.573 ± 0.0 | 80.35 ± 0.0 | 81.68 ± 0.24 | 87.58 ± 0.8
K(x100) | 74.51 ± 0.28 | 70.99 ± 0.07 | 72.101 ± 0.0 | 75.21 ± 0.0 | 76.96 ± 0.47 | 84.77 ± 1.02
