Article

Hyperspectral Image Classification with Spatial Filtering and $l_{2,1}$ Norm

1 School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan 430023, China
2 School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(2), 314; https://doi.org/10.3390/s17020314
Submission received: 8 December 2016 / Revised: 24 January 2017 / Accepted: 4 February 2017 / Published: 8 February 2017
(This article belongs to the Section Remote Sensors)

Abstract: Recently, sparse representation based classification methods have received particular attention in the classification of hyperspectral imagery. However, current sparse representation based classification models do not consider all the test pixels simultaneously. In this paper, we propose a hyperspectral classification method with spatial filtering and $l_{2,1}$ norm (SFL) that can deal with all the test pixels simultaneously. The $l_{2,1}$ norm regularization is used to extract relevant training samples among the whole training data set with joint sparsity. In addition, the $l_{2,1}$ norm loss function is adopted to make the method robust to samples that deviate significantly from the rest of the samples. Moreover, to take the spatial information into consideration, a spatial filtering step is implemented in which all the training and test samples are spatially averaged with their nearest neighbors. Furthermore, a non-negative constraint is added to the sparse representation matrix, motivated by hyperspectral unmixing. Finally, the alternating direction method of multipliers is used to solve SFL. Experiments on real hyperspectral images demonstrate that the proposed SFL method can obtain better classification performance than some other popular classifiers.

1. Introduction

Over the past few decades, hyperspectral imagery has been widely used in different remote sensing applications owing to its high-resolution spectral information of the materials in the scene [1,2,3]. Various hyperspectral image classification techniques have been presented for a lot of real applications including material recognition, urban mapping and so on [4,5,6,7,8].
To date, many hyperspectral image classification methods have been presented. Among them, the most representative is the support vector machine (SVM) [9], which has shown desirable hyperspectral image classification performance. Recently, sparse representation based classification methods have received a lot of attention in the area of image analysis [10,11,12,13,14], particularly in the classification of hyperspectral images. Chen et al. introduced a dictionary-based sparse representation framework for hyperspectral classification [15]. To be specific, a test pixel is sparsely represented by a few labeled training samples, and the class is determined as the one with the minimal class-specific representation error. In addition, Chen et al. also proposed simultaneous orthogonal matching pursuit (SOMP) to utilize the spatial information of hyperspectral data [15]. To take additional structured sparsity priors into consideration, Sun et al. reviewed and compared several structured priors for sparse representation based hyperspectral image classification [16], which can exploit both the spatial dependences between neighboring pixels and the inherent structure of the dictionary. In [17], Chen et al. extended the joint sparse representation to a kernel version for hyperspectral image classification, which can provide higher classification accuracy than conventional linear sparse representation algorithms. In addition, Liu et al. proposed a class-specific sparse multiple kernel learning framework for hyperspectral image classification [18], which determines the associated weights of optimal base kernels for any two classes and leads to better classification performance. To take other spectral properties and higher-order context information into consideration, Wang et al. proposed the spatial-spectral derivative-aided kernel joint sparse representation for hyperspectral image classification [19]; the derivative-aided spectral information can complement traditional spectral features without inducing the curse of dimensionality or ignoring discriminating features. Moreover, Li et al. proposed the joint robust sparse representation classification (JRSRC) method to take the sparse representation residuals into consideration, which can deal with outliers in hyperspectral classification [20]. To integrate sophisticated prior knowledge about the spatial nature of the image, Roscher et al. proposed constructing a novel dictionary for sparse-representation-based classification [21], which combines characteristic spatial patterns and spectral information to improve the classification performance. In order to adaptively explore the spatial information for different types of spatial structures, Fu et al. proposed a new shape-adaptive joint sparse representation method for hyperspectral image classification [22], which constructs a shape-adaptive local smooth region for each test pixel. In order to capture the class-discriminative information, He et al. proposed a group-based sparse and low-rank representation to improve the dictionary for hyperspectral image classification [23]. To take different types of features into consideration, Zhang et al. proposed an alternative joint sparse representation via the multitask joint sparse representation model [24]. To overcome the high coherence of the training samples, Bian et al. proposed a novel multi-layer spatial-spectral sparse representation framework for hyperspectral image classification [25].
In addition, to take the class structure of hyperspectral image data into consideration, Shao et al. proposed a probabilistic class structure regularized sparse representation method to incorporate the class structure information into the sparse representation model [26].
It has been argued in [27] that collaborative representation classification can obtain very competitive classification performance, while its time consumption is much lower than that of sparse representation. Thus, various collaborative representation methods have been proposed for hyperspectral image classification. Li et al. proposed the nearest regularized subspace (NRS) classifier by using distance-weighted Tikhonov regularization [28]. Then, a Gabor-filtering-based nearest regularized subspace classifier was proposed to exploit the benefits of using spatial features [29]. Collaborative representation with Tikhonov regularization (CRT) has also been proposed for hyperspectral classification [30]. The main difference between NRS and CRT is that NRS only uses within-class training data for collaborative representation while the latter adopts all the training data simultaneously [30]. In [31], the kernel version of collaborative representation was proposed and denoted as the kernel collaborative representation classifier (KCRC). In addition, Li et al. proposed combining sparse representation and collaborative representation for hyperspectral image classification to strike a balance between the two in the residual domain [32]. Moreover, Sun et al. combined active learning and semi-supervised learning to improve the classification performance given a few initial labeled samples, and proposed the extended random walker algorithm [33] for the classification of hyperspectral images.
Very recently, some deep models have been proposed for hyperspectral image classification [34]. To the best of our knowledge, Chen et al. proposed a deep learning method named the stacked autoencoder for hyperspectral image classification in 2014 [35]. Recently, convolutional neural networks have become very popular in pattern recognition, computer vision and remote sensing. Convolutional neural networks usually contain a number of convolutional layers and a classification layer, which can learn deep features from the training data and exploit the spatial dependence among them. Krizhevsky et al. trained a large convolutional neural network to classify the 1.2 million high-resolution images in ImageNet, obtaining superior image classification accuracy [36]. Since then, convolutional neural networks have been applied to hyperspectral image classification [37,38] and have achieved desirable classification performance. To take the spatial information into consideration, a novel convolutional neural network framework for hyperspectral image classification using both spectral and spatial features was presented [39]. In addition, Aptoula et al. proposed a combined strategy of attribute profiles and convolutional neural networks for hyperspectral image classification [40]. To overcome the imbalance between dimensionality and the number of available training samples, Ghamisi et al. proposed a self-improving band-selection-based convolutional neural network method for hyperspectral image classification [41]. In addition, some patch-based convolutional neural network hyperspectral image classification methods have also been proposed, such as the methods in [42,43]. In order to achieve low computational cost and good generalization performance, Li et al. proposed combining convolutional neural networks with extreme learning machines for hyperspectral image classification [44]. Furthermore, Shi et al. proposed a 3D convolutional neural network (3D-CNN) method for hyperspectral image classification that can take both the spectral and spatial information into consideration [45].
However, all of the above-mentioned methods, whether based on sparse representation, collaborative representation or deep models, adopt a pixel-wise classification strategy, i.e., they do not consider all the pixels simultaneously. In [46], theoretical work has demonstrated that multichannel joint sparse recovery is superior to applying standard sparse reconstruction methods to each single channel individually, and the probability of recovery failure decays exponentially with the increase in the number of channels. In addition, the probability bounds still hold even for a small number of signals. For the classification of hyperspectral images, multichannel recovery means recovering multiple hyperspectral pixels simultaneously. Therefore, inspired by the theoretical work in [46], in this paper we propose a hyperspectral classification method with spatial filtering and $l_{2,1}$ norm (SFL) to deal with all the test samples simultaneously, which not only takes much less time but also obtains comparably good or better classification performance. First, the $l_{2,1}$ norm regularization is adopted to select correlated training samples among the whole training data set. Meanwhile, the $l_{2,1}$ norm loss function, which is robust to outliers, is also implemented. Second, we adopt the simple strategy in [47] to exploit the local continuity, and all the training and test samples are spatially averaged with their nearest neighbors to take the spatial information into consideration, which can be seen as spatial filtering. Third, a non-negative constraint is added to the sparse representation coefficient matrix, motivated by hyperspectral unmixing. Finally, to solve SFL, we use the alternating direction method of multipliers [48], a simple but powerful algorithm that is well suited to distributed convex optimization.
The main contribution of this work lies in proposing an SFL for hyperspectral classification that can deal with all the test pixels simultaneously. Experiments on real hyperspectral images demonstrate that the proposed SFL method can obtain better classification performance than some other popular classifiers.

2. Related Work

In this section, we briefly introduce the classical sparse representation approach to the classification of hyperspectral images, which can be found in [16]. It is assumed that pixels in the same class lie in the same low-dimensional subspace and that there are K different classes. Therefore, an unknown test sample $\mathbf{y} \in \mathbb{R}^B$, where B denotes the number of bands, is assumed to lie in the union of the K different subspaces, so that it can be seen as a sparse linear combination of all the training samples:
$$\mathbf{y} = \mathbf{A}_1 \mathbf{x}_1 + \mathbf{A}_2 \mathbf{x}_2 + \cdots + \mathbf{A}_K \mathbf{x}_K = [\,\mathbf{A}_1 \ \cdots \ \mathbf{A}_K\,] \begin{bmatrix} \mathbf{x}_1 \\ \vdots \\ \mathbf{x}_K \end{bmatrix} = \mathbf{A}\mathbf{x}. \quad (1)$$
Given the dictionary of training samples $\mathbf{A} \in \mathbb{R}^{B \times M}$, where M is the number of training samples, the sparse representation coefficient vector $\mathbf{x} \in \mathbb{R}^M$ for an unknown test sample $\mathbf{y}$ can be obtained by solving the following optimization problem:
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda \|\mathbf{x}\|_1, \quad (2)$$
where $\mathbf{A}$ consists of the class sub-dictionaries $\{\mathbf{A}_k\}_{k=1,\dots,K}$, and $\lambda$ is the regularization parameter. Equation (2) can be solved by the alternating direction method of multipliers in [49]. The class label of $\mathbf{y}$ is then determined as the one with the minimal class-specific reconstruction residual:
$$\mathrm{Class}(\mathbf{y}) = \arg\min_{k=1,\dots,K} \|\mathbf{y} - \mathbf{A}_k \hat{\mathbf{x}}_k\|_2^2. \quad (3)$$
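To make this pixel-wise model concrete, the sketch below solves Equation (2) with a plain iterative soft-thresholding (ISTA) loop and then applies the residual rule of Equation (3). The paper itself solves Equation (2) with the ADMM algorithm of [49]; this is only a minimal Python illustration, and the function and variable names are ours.

```python
import numpy as np

def soft(v, t):
    # Element-wise soft-thresholding (the proximal operator of the l1 norm).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def src_classify(y, A, class_idx, lam=1e-3, n_iter=500):
    """Sparse-representation classification of one test pixel y (Eqs. (2)-(3)).

    A         : B x M dictionary whose columns are training spectra.
    class_idx : list of K index arrays, one per class, into the columns of A.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):                       # ISTA iterations for Eq. (2)
        x = soft(x - step * (A.T @ (A @ x - y)), step * lam)
    # Eq. (3): the label is the class with the smallest class-specific residual.
    residuals = [np.linalg.norm(y - A[:, idx] @ x[idx]) for idx in class_idx]
    return int(np.argmin(residuals))
```

Any convex solver for Equation (2) could replace the inner loop; only the class-wise residual rule is specific to the classifier.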

3. Proposed Classifiers

In [46], it has been proved that, with the increase in the number of channels, the failure probability of sparse reconstruction decreases exponentially. Thus, multichannel sparse reconstruction is superior to single channel sparse reconstruction. In addition, the probability bounds are valid even for a small number of signals. Based on this theory, we deal with all the test samples simultaneously, and the proposed SFL classification method will be briefly described.
Let $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_N] \in \mathbb{R}^{B \times N}$, where $\{\mathbf{y}_n\}_{n=1,\dots,N}$ denotes the columns of $\mathbf{Y}$ and N denotes the number of test pixels. To deal with all the test pixels simultaneously, it is natural to obtain the sparse representation coefficient matrix $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N] \in \mathbb{R}^{M \times N}$ for all the test pixels by solving the following optimization problem:
$$\hat{\mathbf{X}} = \arg\min_{\mathbf{X}} \|\mathbf{Y} - \mathbf{A}\mathbf{X}\|_F^2 + \lambda \|\mathbf{X}\|_1, \quad (4)$$
which can also be solved by the alternating direction method of multipliers in [49]. Here $\|\cdot\|_F$ represents the matrix Frobenius norm, which is equal to the Euclidean norm of the vector of singular values, i.e.,
$$\|\mathbf{X}\|_F = \sqrt{\langle \mathbf{X}, \mathbf{X} \rangle} = \Big(\sum_{i=1}^{M}\sum_{j=1}^{N} X_{ij}^2\Big)^{1/2} = \Big(\sum_{i=1}^{r} \sigma_i^2\Big)^{1/2}, \quad (5)$$
where $\sigma_i$ ($i = 1, \dots, r$) denotes the i-th singular value of $\mathbf{X}$. After the optimized $\hat{\mathbf{X}}$ is obtained, the classes of all test pixels can be obtained by the minimum class reconstruction error:
$$\mathrm{Class}(\mathbf{y}_n) = \arg\min_{k=1,\dots,K} \|\mathbf{y}_n - \mathbf{A}_k \hat{\mathbf{x}}_n^k\|_2^2, \quad n = 1, \dots, N. \quad (6)$$
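Assuming $\hat{\mathbf{X}}$ has already been computed, the decision rule of Equation (6) vectorizes over all N test pixels at once; a minimal NumPy sketch (the names are ours) is:

```python
import numpy as np

def batch_classify(Y, A, X_hat, class_idx):
    """Eq. (6): minimum class reconstruction error for all N test pixels.

    Y: B x N test matrix; X_hat: M x N coefficient matrix;
    class_idx: list of K index arrays into the columns of A (rows of X_hat)."""
    # residuals[k, n] = ||y_n - A_k x_n^k||_2 for every class k and pixel n.
    residuals = np.stack([np.linalg.norm(Y - A[:, idx] @ X_hat[idx, :], axis=0)
                          for idx in class_idx])
    return residuals.argmin(axis=0)               # one label per test pixel
```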
However, Equation (4) performs pixel-wise independent regression, which ignores the correlation among the whole training data set. Recent research shows that high-dimensional data space is smooth and locally linear, which has been verified in image reconstruction and classification problems [50,51]. To jointly consider the classification of neighborhoods, in this paper we introduce the $l_{2,1}$ norm regularization and adapt it to extract correlated training samples among the whole training data set with joint sparsity. The $l_{2,1}$ norm is defined as follows:
$$\|\mathbf{X}\|_{2,1} = \sum_{i=1}^{M} \sqrt{\sum_{j=1}^{N} X_{ij}^2}. \quad (7)$$
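In code, Equation (7) is a one-liner; penalizing this quantity drives entire rows of $\mathbf{X}$ toward zero, so the same small set of training samples is selected jointly for all test pixels. A minimal NumPy sketch:

```python
import numpy as np

def l21_norm(X):
    # Eq. (7): sum over the rows of X of the row-wise Euclidean norms.
    return np.sqrt((X ** 2).sum(axis=1)).sum()
```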
The $l_{2,1}$ norm was first introduced by Ding et al. [52] to make traditional principal component analysis more robust to outliers. Outliers are defined as data points that deviate significantly from the rest of the data. Traditional principal component analysis optimizes the sum of squared errors, so the few data points with large squared errors dominate the sum; it is therefore sensitive to outliers. It has been shown that minimizing the $l_1$ norm is more robust and can resist a larger proportion of outliers than quadratic $l_2$ norms [53]. The $l_{2,1}$ norm is identical to a rotationally invariant $l_1$ norm, and the solution of $l_{2,1}$ norm based robust principal component analysis consists of the principal eigenvectors of a more robust re-weighted covariance matrix, which can alleviate the effects of outliers. In addition, the $l_{2,1}$ norm has the advantage of being rotation invariant compared with the $l_1$ norm [52,54,55], i.e., applying the same rotation to all points has no effect on its performance. Owing to these advantages, the $l_{2,1}$ norm has been applied in feature selection [56], multi-task learning [57], multiple kernel learning [58], and non-negative matrix factorization [59]. Nie et al. [56] introduced the $l_{2,1}$ norm to feature selection, using $l_{2,1}$ norm regularization to select features across all data points with joint sparsity; an $l_{2,1}$ norm based loss function is used to remove outliers, and the feature selection process is shown to be effective and efficient.
Similarly, we adopt the $l_{2,1}$ norm regularization to select correlated training samples among the whole training data set with joint sparsity for hyperspectral image classification. The corresponding optimization problem is as follows:
$$\hat{\mathbf{X}} = \arg\min_{\mathbf{X}} \|\mathbf{Y} - \mathbf{A}\mathbf{X}\|_F^2 + \lambda \|\mathbf{X}\|_{2,1}, \quad (8)$$
which can be solved by the alternating direction method of multipliers in [60]. This model can be seen as an instance of the methodology in [61], which imposes sparsity across the pixels both at the group and individual levels. In addition, to make the model more robust to outliers, the $l_{2,1}$ norm loss function is adopted, giving the optimization problem:
$$\hat{\mathbf{X}} = \arg\min_{\mathbf{X}} \|\mathbf{Y} - \mathbf{A}\mathbf{X}\|_{2,1} + \lambda \|\mathbf{X}\|_{2,1}. \quad (9)$$
Due to the limited resolution of hyperspectral image sensors and the complexity of ground materials, mixed pixels are easily found in hyperspectral images, and a hyperspectral unmixing step is therefore needed [62,63]. Hyperspectral unmixing is a process that identifies the pure constituent materials (endmembers) and estimates the proportion of each material (abundance) [64]. The linear mixture model has been prevalently used in hyperspectral unmixing, and the abundance is considered to be non-negative in a linear mixture model [65]. If we regard $\mathbf{A}$ as a spectral library consisting of endmembers, then $\mathbf{X}$ can be seen as the abundance matrix and is therefore also non-negative. Adding the non-negative constraint to the sparse representation matrix yields the optimization problems:
$$\hat{\mathbf{X}} = \arg\min_{\mathbf{X} \geq 0} \|\mathbf{Y} - \mathbf{A}\mathbf{X}\|_F^2 + \lambda \|\mathbf{X}\|_{2,1}, \quad (10)$$
$$\hat{\mathbf{X}} = \arg\min_{\mathbf{X} \geq 0} \|\mathbf{Y} - \mathbf{A}\mathbf{X}\|_{2,1} + \lambda \|\mathbf{X}\|_{2,1}. \quad (11)$$
In addition, since the spectral signatures of neighboring pixels are highly correlated, neighboring pixels are likely to belong to the same material. We thus adopt the simple strategy in [47] to exploit this local continuity: all the training and test samples are spatially averaged with their nearest neighbors to take the spatial information into consideration, which can be seen as spatial filtering. Moreover, when N = 1, it is easy to see that Equation (8) reduces to Equation (2), and Equation (9) reduces to the following optimization problem:
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_1 + \lambda \|\mathbf{x}\|_1. \quad (12)$$
To sum up, the detailed procedure of the proposed method is shown in Figure 1. It remains to solve the optimization problems in Equations (9)-(12). Equation (10) can be solved by the alternating direction method of multipliers in [60], and Equations (9) and (12) are special cases of Equation (11); thus, everything comes down to solving Equation (11). For simplicity, Equation (11) can be written as:
$$\min_{\mathbf{X}} \|\mathbf{A}\mathbf{X} - \mathbf{Y}\|_{2,1} + \lambda \|\mathbf{X}\|_{2,1} + l_{\mathbb{R}_+}(\mathbf{X}), \quad (13)$$
where $l_{\mathbb{R}_+}(\mathbf{X}) = \sum_{i=1}^{N} l_{\mathbb{R}_+}(\mathbf{x}_i)$ is the indicator function of the non-negative quadrant $\mathbb{R}_+$, and $\mathbf{x}_i$ is the i-th column of $\mathbf{X}$. If $\mathbf{x}_i$ belongs to the non-negative quadrant, then $l_{\mathbb{R}_+}(\mathbf{x}_i)$ is zero; otherwise, it is $+\infty$.
In order to solve Equation (11), the alternating direction method of multipliers [48] is employed. By introducing auxiliary variables $\mathbf{P}$, $\mathbf{Q}$ and $\mathbf{W}$, Equation (11) can be rewritten as:
$$\min_{\mathbf{X}} \|\mathbf{P}\|_{2,1} + \lambda \|\mathbf{W}\|_{2,1} + l_{\mathbb{R}_+}(\mathbf{X}), \quad \text{s.t.} \ \mathbf{A}\mathbf{Q} - \mathbf{Y} = \mathbf{P}, \ \mathbf{Q} = \mathbf{W}, \ \mathbf{Q} = \mathbf{X}. \quad (14)$$
A compact version of it is:
$$\min_{\mathbf{V}, \mathbf{Q}} g(\mathbf{V}) \quad \text{s.t.} \ \mathbf{G}\mathbf{Q} + \mathbf{B}\mathbf{V} = \mathbf{Z}, \quad (15)$$
where $g(\mathbf{V}) = \|\mathbf{P}\|_{2,1} + \lambda \|\mathbf{W}\|_{2,1} + l_{\mathbb{R}_+}(\mathbf{X})$, $\mathbf{G} = \begin{bmatrix} \mathbf{A} \\ \mathbf{I} \\ \mathbf{I} \end{bmatrix}$, $\mathbf{B} = \begin{bmatrix} -\mathbf{I} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathbf{I} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathbf{I} \end{bmatrix}$, $\mathbf{Z} = \begin{bmatrix} \mathbf{Y} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix}$, $\mathbf{V} \triangleq (\mathbf{P}, \mathbf{W}, \mathbf{X})$, and $\mathbf{I}$ is the identity matrix. Thus, the augmented Lagrangian function can be expressed as:
$$L(\mathbf{V}, \mathbf{Q}, \mathbf{\Lambda}) = g(\mathbf{V}) + \frac{\mu}{2}\|\mathbf{G}\mathbf{Q} + \mathbf{B}\mathbf{V} - \mathbf{Z} - \mathbf{\Lambda}\|_F^2, \quad (16)$$
where $\mu > 0$ and $\mathbf{\Lambda}/\mu$ stands for the Lagrange multipliers. In order to update $\mathbf{P}$, we solve
$$\mathbf{P}^{k+1} = \arg\min_{\mathbf{P}} \|\mathbf{P}\|_{2,1} + \frac{\mu}{2}\|\mathbf{A}\mathbf{Q}^k - \mathbf{Y} - \mathbf{P} - \mathbf{\Lambda}_1^k\|_F^2, \quad (17)$$
whose solution is the well-known vector soft-threshold operator [10], which updates each row independently:
$$\mathbf{P}^{k+1}(r,:) = \text{vect-soft}\Big(\zeta(r,:), \frac{1}{\mu}\Big), \quad (18)$$
where $\zeta = \mathbf{A}\mathbf{Q}^k - \mathbf{Y} - \mathbf{\Lambda}_1^k$, and the vect-soft threshold function is $g(\mathbf{b}, \tau) = \mathbf{b}\,\frac{\max\{\|\mathbf{b}\|_2 - \tau, 0\}}{\max\{\|\mathbf{b}\|_2 - \tau, 0\} + \tau}$. To update $\mathbf{W}$, we solve
$$\mathbf{W}^{k+1} = \arg\min_{\mathbf{W}} \lambda\|\mathbf{W}\|_{2,1} + \frac{\mu}{2}\|\mathbf{Q}^k - \mathbf{W} - \mathbf{\Lambda}_2^k\|_F^2, \quad (19)$$
whose solution is also the vector soft-threshold operator [10]:
$$\mathbf{W}^{k+1}(r,:) = \text{vect-soft}\Big(\gamma(r,:), \frac{\lambda}{\mu}\Big), \quad (20)$$
where $\gamma = \mathbf{Q}^k - \mathbf{\Lambda}_2^k$.
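A compact NumPy version of the row-wise vector soft-threshold operator used in Equations (18) and (20) might read as follows (our naming; rows whose norm does not exceed the threshold are set to zero):

```python
import numpy as np

def vect_soft(B, tau):
    """Apply the vect-soft operator g(b, tau) to every row of B at once."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    shrink = np.maximum(norms - tau, 0.0)
    # For a row b: b * max{||b||2 - tau, 0} / (max{||b||2 - tau, 0} + tau).
    return B * (shrink / (shrink + tau))
```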
To update $\mathbf{X}$, we solve
$$\mathbf{X}^{k+1} = \arg\min_{\mathbf{X}} l_{\mathbb{R}_+}(\mathbf{X}) + \frac{\mu}{2}\|\mathbf{Q}^k - \mathbf{X} - \mathbf{\Lambda}_3^k\|_F^2 = \max(\mathbf{Q}^k - \mathbf{\Lambda}_3^k, \mathbf{0}). \quad (21)$$
To update $\mathbf{Q}$, we solve
$$\begin{aligned} \mathbf{Q}^{k+1} &= \arg\min_{\mathbf{Q}} \|\mathbf{A}\mathbf{Q} - \mathbf{Y} - \mathbf{P}^{k+1} - \mathbf{\Lambda}_1^k\|_F^2 + \|\mathbf{Q} - \mathbf{W}^{k+1} - \mathbf{\Lambda}_2^k\|_F^2 + \|\mathbf{Q} - \mathbf{X}^{k+1} - \mathbf{\Lambda}_3^k\|_F^2 \\ &= (\mathbf{A}^T\mathbf{A} + 2\mathbf{I})^{-1}\big[\mathbf{A}^T(\mathbf{Y} + \mathbf{P}^{k+1} + \mathbf{\Lambda}_1^k) + \mathbf{W}^{k+1} + \mathbf{\Lambda}_2^k + \mathbf{X}^{k+1} + \mathbf{\Lambda}_3^k\big]. \quad (22) \end{aligned}$$
The stopping criterion is $\|\mathbf{G}\mathbf{Q}^k + \mathbf{B}\mathbf{V}^k - \mathbf{Z}\|_F^2 < \varepsilon \cdot (J \times K)$, where $\varepsilon$ is the error threshold and J and K are the numbers of rows and columns of $\mathbf{Z}$. $\mu$ is updated in the same way as in [48], which keeps the ratio between the ADMM primal and dual residual norms within a given positive interval. Based on this, we obtain Proposition 1, whose proof of convergence is given in [48].
Proposition 1.
The function g in Equation (15) is closed, proper, and convex. If solutions $\mathbf{V}^*$ and $\mathbf{Q}^*$ exist, then the iterative sequences $\{\mathbf{V}^k\}$ and $\{\mathbf{Q}^k\}$ converge to $\mathbf{V}^*$ and $\mathbf{Q}^*$, respectively; otherwise, at least one of $\{\mathbf{V}^k\}$ and $\{\mathbf{Q}^k\}$ diverges [48].
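For concreteness, a minimal NumPy sketch of the whole ADMM iteration for Equation (11) is given below. It follows the updates of Equations (17)-(22) and the stated stopping criterion, but keeps μ fixed rather than adapting it as in [48]; the names are ours, and this is an illustration under those assumptions rather than the reference implementation.

```python
import numpy as np

def vect_soft(B, tau):
    # Row-wise vector soft-threshold operator (Eqs. (18) and (20)).
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    shrink = np.maximum(norms - tau, 0.0)
    return B * (shrink / (shrink + tau))

def sfl_admm(A, Y, lam=1e-3, mu=1e-2, eps=1e-6, max_iter=1000):
    """ADMM sketch for Eq. (11): min_{X >= 0} ||Y - AX||_{2,1} + lam ||X||_{2,1}."""
    B_dim, M = A.shape
    N = Y.shape[1]
    Q = np.zeros((M, N)); W = np.zeros((M, N)); X = np.zeros((M, N))
    P = np.zeros((B_dim, N))
    L1 = np.zeros((B_dim, N)); L2 = np.zeros((M, N)); L3 = np.zeros((M, N))
    # Invert (A^T A + 2I) once; it is reused in every Q-update (Eq. (22)).
    H = np.linalg.inv(A.T @ A + 2.0 * np.eye(M))
    JK = (B_dim + 2 * M) * N                      # number of entries of Z
    for _ in range(max_iter):
        P = vect_soft(A @ Q - Y - L1, 1.0 / mu)           # Eq. (18)
        W = vect_soft(Q - L2, lam / mu)                   # Eq. (20)
        X = np.maximum(Q - L3, 0.0)                       # Eq. (21)
        Q = H @ (A.T @ (Y + P + L1) + W + L2 + X + L3)    # Eq. (22)
        R1 = A @ Q - Y - P                                # blocks of GQ + BV - Z
        R2 = Q - W
        R3 = Q - X
        L1 -= R1; L2 -= R2; L3 -= R3                      # multiplier updates
        if (R1**2).sum() + (R2**2).sum() + (R3**2).sum() < eps * JK:
            break
    return X
```

The returned $\hat{\mathbf{X}}$ is then fed to the residual rule of Equation (6) to label all test pixels at once.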

4. Experiments

4.1. Experimental Data

Two data sets are used in the experiments. The first is Indian Pines, acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) in 1992. The image size is 145 × 145, with 220 bands covering the spectral range 0.4-2.5 μm. After removal of the water absorption bands (Nos. 104-108, 150-163, 220), 200 bands are used; the ground truth image is shown in Figure 2a. Indian Pines contains 16 material classes and 10,249 labeled samples. As shown in Table 1, 1027 samples (about 10%) are used as training data, and the rest are used for testing.
The second data set is Pavia University, acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) in 2001 over Pavia University, Pavia, Italy. The image size is 610 × 340 with a spatial resolution of 1.3 m, and the number of bands is 103; the ground truth image is shown in Figure 2b. There are nine classes and 42,776 labeled samples; 426 of them (about 1%) are chosen as training data and the others are used as test data, as shown in Table 2.

4.2. Parameter Setting

In the experiments, we mainly compare the classification performance when using the pixel-wise strategy and when dealing with all the test pixels simultaneously. We also make a step-by-step comparison by adding or removing the spatial filtering and/or the constraints to see which step contributes most. These methods involve five main parameters: the neighbor size T, the regularization parameter λ, the Lagrange multiplier regularization parameter μ, the error tolerance ε, and the maximum number of iterations. The neighbor size T and the regularization parameter λ play an important role in the proposed method; they control the size of the spatial filtering and the trade-off between fidelity to the data and sparsity of the solution, respectively. The Lagrange multiplier regularization parameter μ, the error tolerance ε and the maximum number of iterations have less impact on the efficiency of the corresponding algorithms and are set to fixed values, i.e., $\mu = 10^{-2}$, $\varepsilon = 10^{-6}$, and a maximum of 1000 iterations. For the neighbor size T, we use the same setting as in [16]. For the Indian Pines data set, a spatial window of 9 × 9 (T = 81) is adopted, because this image consists mostly of large homogeneous regions. For the University of Pavia data set, a spatial window of 5 × 5 (T = 25) is used, because many narrow regions are present in this image. The regularization parameter λ is chosen from the set $\{10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$.
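The spatial filtering step itself is just a per-band mean filter over the chosen window applied to the image cube before the spectra are extracted. A minimal sketch, assuming a rows × cols × bands array and replicated borders (a detail the paper does not specify), is:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_filter(cube, window):
    """Average every pixel with its window x window spatial neighbors, per band.

    window = 9 for Indian Pines (T = 81) and 5 for Pavia University (T = 25)
    in the setting used here; cube has shape (rows, cols, bands)."""
    return uniform_filter(cube, size=(window, window, 1), mode='nearest')
```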
Figure 3 shows the overall accuracy as a function of the regularization parameter λ for the Indian Pines and Pavia University images. For convenience, "Spatial Filtering" and "Non-negative Constraint" are abbreviated as "SF" and "NC", respectively. For example, in "l2,1+l2,1+SF+NC", the first "l2,1" denotes the loss function norm, the second "l2,1" denotes the regularization term norm, "SF" denotes using the spatial filtering, and "NC" denotes using the non-negative constraint; the abbreviations of the other compared methods follow the same pattern. It can be seen from Figure 3 that the overall accuracy remains stable when $\lambda < 10^{-2}$ and then decreases when $\lambda > 10^{-2}$. In addition, "l2,1+l2,1+SF+NC" and "l2,1+l2,1+SF" achieve much better overall accuracy than "l2,1+l2,1+NC" and "l2,1+l2,1", respectively, which demonstrates that taking the spatial filtering into consideration significantly improves the overall accuracy. Moreover, "l2,1+l2,1+SF+NC" and "l2,1+l2,1+NC" achieve better overall accuracy than "l2,1+l2,1+SF" and "l2,1+l2,1", respectively, which demonstrates that taking the non-negative constraint into consideration also helps. Furthermore, the gain in overall accuracy from the spatial filtering is much larger than that from the non-negative constraint, which suggests that the spatial filtering has a larger effect on the overall accuracy than the non-negative constraint.

4.3. Classification Performance

The experiments are performed on a desktop with a 3.5 GHz Intel Core CPU and 64 GB of memory, using MATLAB code. To evaluate the classification performance of the different methods, the overall accuracy, average accuracy and kappa statistic [16] are used. Table 3 and Table 4 show the classification performance on the Indian Pines data set when using the pixel-wise strategy and when dealing with all the test pixels simultaneously, respectively. It can be seen from Table 3 and Table 4 that methods using the spatial filtering generally obtain better overall accuracy, average accuracy and kappa statistics than those without spatial filtering. For example, "l2+l1+SF+NC" and "l2+l1+SF" achieve much better overall accuracy than "l2+l1+NC" and "l2+l1", respectively, which demonstrates that the spatial filtering helps a great deal. In addition, methods using the non-negative constraint generally obtain better overall accuracy than those without it; for example, "l1+l1+SF+NC" and "l1+l1+NC" achieve better overall accuracy than "l1+l1+SF" and "l1+l1", respectively. It can also be clearly seen that the spatial filtering has a larger effect on the classification performance than the non-negative constraint. Moreover, methods using the $l_{2,1}$ norm regularization term generally obtain better classification performance than methods using the $l_1$ norm regularization term; for example, "F+l2,1+SF+NC" and "F+l2,1" generally achieve better overall accuracy than "F+l1+SF+NC" and "F+l1", respectively, which demonstrates that selecting correlated training samples among the whole training data set is beneficial and imposes sparsity across the pixels both at the group and individual levels. Furthermore, methods using the $l_{2,1}$ norm loss function generally obtain better classification performance than methods using the F norm loss function; for example, "l2,1+l2,1+SF+NC" and "l2,1+l2,1" generally achieve better overall accuracy than "F+l2,1+SF+NC" and "F+l2,1", respectively, which demonstrates that the $l_{2,1}$ norm loss function is more robust to outliers than the F norm loss function. Table 5 and Table 6 show the corresponding classification performance on the Pavia University data set, from which the same conclusions can be drawn. In addition, from Table 3, Table 4, Table 5 and Table 6, it can be observed that the methods dealing with all the test pixels simultaneously obtain comparable or better overall accuracy than the regression-based pixel-wise sparse representation methods while being much faster, which demonstrates the significance of considering all the test pixels simultaneously. Figure 4 and Figure 5 show the classification maps for the Indian Pines and Pavia University data sets, respectively, which give a visual comparison between the different methods.
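For reference, the three reported scores can be computed from a K × K confusion matrix as follows; this is the standard formulation rather than code from the paper:

```python
import numpy as np

def accuracy_scores(conf):
    """Overall accuracy, average accuracy and kappa from a K x K confusion
    matrix whose entry (i, j) counts class-i samples predicted as class j."""
    n = conf.sum()
    oa = np.trace(conf) / n                             # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))      # mean per-class accuracy
    pe = (conf.sum(axis=0) @ conf.sum(axis=1)) / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```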
We also choose eight other methods for comparison, i.e., SVM [9,66], NRS [28,67], CRT [30,67], KCRC [31,68], OMP [15], SOMP [15], JRSRC [20] and 3D-CNN [45,69]. SVM is a very popular classifier, 3D-CNN is a deep neural network based classifier, and the other six compared methods are collaborative representation and sparse representation based classifiers. Table 7 and Table 8 show the classification performance of the proposed SFL and the eight compared methods on the Indian Pines and Pavia University data sets, respectively. In addition, Figure 6 and Figure 7 show the corresponding classification maps, which give a visual comparison between the different methods. From Table 7 and Table 8, it can be clearly seen that the proposed SFL obtains the best classification performance, which demonstrates that SFL is effective for hyperspectral image classification. In addition, SVM is the fastest; the reason is that it is implemented in the C language, which is much faster than MATLAB. NRS, CRT and KCRC are very fast because they are collaborative representation methods with closed-form solutions that require no iteration. OMP and SOMP are also very fast because they are greedy sparse representation methods, while JRSRC is very time-consuming because it is a regression-based sparse representation method. In addition, 3D-CNN is not fast because most of its time is consumed in training. Our proposed method is also a regression-based method, which takes more time than the collaborative representation methods and the greedy sparse representation methods. There are several possible ways to reduce the time consumed: one is to use the C language and a graphics processing unit for fast implementation; another is to use the first-order primal-dual algorithm in [70] to achieve faster convergence.

5. Conclusions

In this paper, we propose the SFL hyperspectral image classification method based on the multichannel joint sparse recovery theory in [46], which can deal with all the test pixels simultaneously. The proposed SFL not only obtains comparably good or better classification performance than the pixel-wise classification strategy but also takes much less time. In addition, spatial filtering and the non-negative constraint are both adopted to improve the classification performance, and the spatial filtering has a larger effect on the classification than the non-negative constraint. Moreover, methods using the $l_{2,1}$ norm regularization term generally obtain better classification performance than methods using an $l_1$ norm regularization term, which demonstrates that selecting correlated training samples among the whole training data set is beneficial and that the $l_{2,1}$ norm regularization term imposes sparsity across the pixels both at the group and individual levels. Furthermore, methods using the $l_{2,1}$ norm loss function generally obtain better classification performance than methods using the F norm loss function, which demonstrates that the $l_{2,1}$ norm loss function is more robust to outliers than the F norm loss function. Finally, experiments on two real hyperspectral image data sets demonstrate that the proposed SFL method outperforms some other popular classifiers. In future work, we plan to adopt the CNN framework to extract deep features of hyperspectral images, which can be integrated into our method to further improve the classification performance.

Acknowledgments

Financial support for this study was provided by the National Natural Science Foundation of China under Grants 61272278, 61275098 and 61503288; the Ph.D. Programs Foundation of Ministry of Education of China under Grant 20120142110088; the China Postdoctoral Science Foundation 2015M572194, 2015M570665; and the Hubei Province Natural Science Foundation 2014CFB270, 2015CFA061.

Author Contributions

All authors have made great contributions to the work. Hao Li and Cong Zhang designed the research and analyzed the results. Hao Li, Chang Li, Zhe Liu and Chengyin Liu performed the experiments and wrote the manuscript. Chang Li gave insightful suggestions for the work and revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef]
  2. Mei, X.; Ma, Y.; Fan, F.; Li, C.; Liu, C.; Huang, J.; Ma, J. Infrared ultraspectral signature classification based on a restricted Boltzmann machine with sparse and prior constraints. Int. J. Remote Sens. 2015, 36, 4724–4747. [Google Scholar] [CrossRef]
  3. Ma, J.; Zhou, H.; Zhao, J.; Gao, Y.; Jiang, J.; Tian, J. Robust feature matching for remote sensing image registration via locally linear transforming. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6469–6481. [Google Scholar] [CrossRef]
  4. Ma, L.; Zhang, X.; Yu, X.; Luo, D. Spatial Regularized Local Manifold Learning for Classification of Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 609–624. [Google Scholar] [CrossRef]
  5. Poona, N.; Van Niekerk, A.; Ismail, R. Investigating the utility of oblique tree-based ensembles for the classification of hyperspectral data. Sensors 2016, 16, 1918. [Google Scholar] [CrossRef] [PubMed]
  6. Mei, X.; Ma, Y.; Li, C.; Fan, F.; Huang, J.; Ma, J. A real-time infrared ultra-spectral signature classification method via spatial pyramid matching. Sensors 2015, 15, 15868–15887. [Google Scholar] [CrossRef] [PubMed]
  7. Yang, X.; Hong, H.; You, Z.; Cheng, F. Spectral and image integrated analysis of hyperspectral data for waxy corn seed variety classification. Sensors 2015, 15, 15578–15594. [Google Scholar] [CrossRef] [PubMed]
  8. Liu, S.; Jiao, L.; Yang, S. Hierarchical Sparse Learning with Spectral-Spatial Information for Hyperspectral Imagery Denoising. Sensors 2016, 16, 1718. [Google Scholar] [CrossRef] [PubMed]
  9. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  10. Wright, S.J.; Nowak, R.D.; Figueiredo, M.A. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009, 57, 2479–2493. [Google Scholar] [CrossRef]
  11. Jiang, J.; Ma, J.; Chen, C.; Jiang, X.; Wang, Z. Noise robust face image super-resolution through smooth sparse representation. IEEE Trans. Cybern. 2016. [Google Scholar] [CrossRef] [PubMed]
  12. Ma, J.; Zhao, J.; Ma, Y.; Tian, J. Non-rigid visible and infrared face registration via regularized Gaussian fields criterion. Pattern Recognit. 2015, 48, 772–784. [Google Scholar] [CrossRef]
  13. Ma, J.; Zhao, J.; Tian, J.; Yuille, A.L.; Tu, Z. Robust point matching via vector field consensus. IEEE Trans. Image Process. 2014, 23, 1706–1721. [Google Scholar]
  14. Ma, J.; Zhao, J.; Tian, J.; Bai, X.; Tu, Z. Regularized vector field learning with sparse approximation for mismatch removal. Pattern Recognit. 2013, 46, 3519–3532. [Google Scholar] [CrossRef]
  15. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985. [Google Scholar] [CrossRef]
  16. Sun, X.; Qu, Q.; Nasrabadi, N.M.; Tran, T.D. Structured priors for sparse-representation-based hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1235–1239. [Google Scholar]
  17. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 217–231. [Google Scholar] [CrossRef]
  18. Liu, T.; Gu, Y.; Jia, X.; Benediktsson, J.A.; Chanussot, J. Class-Specific Sparse Multiple Kernel Learning for Spectral–Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7351–7365. [Google Scholar] [CrossRef]
  19. Wang, J.; Jiao, L.; Liu, H.; Yang, S.; Liu, F. Hyperspectral Image Classification by Spatial–Spectral Derivative-Aided Kernel Joint Sparse Representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2485–2500. [Google Scholar] [CrossRef]
  20. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral Image Classification with Robust Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 641–645. [Google Scholar] [CrossRef]
  21. Roscher, R.; Waske, B. Shapelet-Based Sparse Representation for Landcover Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1623–1634. [Google Scholar] [CrossRef]
  22. Fu, W.; Li, S.; Fang, L.; Kang, X.; Benediktsson, J.A. Hyperspectral Image Classification Via Shape-Adaptive Joint Sparse Representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 556–567. [Google Scholar] [CrossRef]
  23. He, Z.; Liu, L.; Zhou, S.; Shen, Y. Learning group-based sparse and low-rank representation for hyperspectral image classification. Pattern Recognit. 2016, 60, 1041–1056. [Google Scholar] [CrossRef]
  24. Zhang, E.; Jiao, L.; Zhang, X.; Liu, H.; Wang, S. Class-Level Joint Sparse Representation for Multifeature-Based Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4160–4177. [Google Scholar] [CrossRef]
  25. Bian, X.; Chen, C.; Xu, Y.; Du, Q. Robust Hyperspectral Image Classification by Multi-Layer Spatial-Spectral Sparse Representations. Remote Sens. 2016, 8, 985. [Google Scholar] [CrossRef]
  26. Shao, Y.; Sang, N.; Gao, C.; Ma, L. Probabilistic class structure regularized sparse representation graph for semi-supervised hyperspectral image classification. Pattern Recognit. 2017, 63, 102–114. [Google Scholar] [CrossRef]
  27. Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478.
  28. Li, W.; Tramel, E.W.; Prasad, S.; Fowler, J.E. Nearest regularized subspace for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 477–489. [Google Scholar] [CrossRef]
  29. Li, W.; Du, Q. Gabor-filtering-based nearest regularized subspace for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1012–1022. [Google Scholar] [CrossRef]
  30. Li, W.; Du, Q.; Xiong, M. Kernel collaborative representation with Tikhonov regularization for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 48–52. [Google Scholar]
  31. Wang, D.; Lu, H.; Yang, M.H. Kernel collaborative face recognition. Pattern Recognit. 2015, 48, 3025–3037. [Google Scholar] [CrossRef]
  32. Li, W.; Du, Q.; Zhang, F.; Hu, W. Hyperspectral Image Classification by Fusing Collaborative and Sparse Representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4178–4187. [Google Scholar] [CrossRef]
  33. Sun, B.; Kang, X.; Li, S.; Benediktsson, J.A. Random-Walker-Based Collaborative Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 212–222. [Google Scholar] [CrossRef]
  34. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  35. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  36. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
  37. Slavkovikj, V.; Verstockt, S.; De Neve, W.; Van Hoecke, S.; Van de Walle, R. Hyperspectral image classification with convolutional neural networks. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 1159–1162.
  38. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  39. Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral–spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 2015, 6, 468–477. [Google Scholar] [CrossRef]
  40. Aptoula, E.; Ozdemir, M.C.; Yanikoglu, B. Deep Learning With Attribute Profiles for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1970–1974. [Google Scholar] [CrossRef]
  41. Ghamisi, P.; Chen, Y.; Zhu, X.X. A Self-Improving Convolution Neural Network for the Classification of Hyperspectral Data. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1537–1541. [Google Scholar] [CrossRef]
  42. Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98. [Google Scholar] [CrossRef]
  43. Liang, H.; Li, Q. Hyperspectral Imagery Classification Using Sparse Representations of Convolutional Neural Network Features. Remote Sens. 2016, 8, 99. [Google Scholar] [CrossRef]
  44. Li, Y.; Xie, W.; Li, H. Hyperspectral image reconstruction by deep convolutional neural network for classification. Pattern Recognit. 2017, 63, 371–383. [Google Scholar] [CrossRef]
  45. Shi, C.; Liu, F.; Jiao, L.; Bibi, I. 3-D Deep Convolutional Neural Networks for Hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2017, in press. [Google Scholar]
  46. Eldar, Y.C.; Rauhut, H. Average case analysis of multichannel sparse recovery using convex relaxation. IEEE Trans. Inf. Theory 2010, 56, 505–519. [Google Scholar] [CrossRef]
  47. Li, W.; Du, Q. Joint within-class collaborative representation for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2200–2208. [Google Scholar] [CrossRef]
  48. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  49. Bioucas-Dias, J.M.; Figueiredo, M.A. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the IEEE 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4.
  50. Jiang, J.; Hu, R.; Wang, Z.; Han, Z. Noise robust face hallucination via locality-constrained representation. IEEE Trans. Multimedia 2014, 16, 1268–1281. [Google Scholar] [CrossRef]
  51. Jiang, J.; Hu, R.; Wang, Z.; Han, Z.; Ma, J. Facial image hallucination through coupled-layer neighbor embedding. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 1674–1684. [Google Scholar] [CrossRef]
  52. Ding, C.; Zhou, D.; He, X.; Zha, H. R1-PCA: Rotational invariant L1-norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 281–288.
  53. Ma, J.; Qiu, W.; Zhao, J.; Ma, Y.; Yuille, A.L.; Tu, Z. Robust L2E estimation of transformation for non-rigid registration. IEEE Trans. Signal Process. 2015, 63, 1115–1129. [Google Scholar] [CrossRef]
  54. Xu, H.; Caramanis, C.; Sanghavi, S. Robust PCA via Outlier Pursuit. IEEE Trans. Inf. Theory 2012, 58, 3047–3064. [Google Scholar] [CrossRef]
  55. Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109. [Google Scholar] [CrossRef]
  56. Nie, F.; Huang, H.; Cai, X.; Ding, C.H. Efficient and robust feature selection via joint L2,1-norms minimization. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 6–11 December 2010; pp. 1813–1821.
  57. Evgeniou, A.; Pontil, M. Multi-task feature learning. Adv. Neural Inf. Process. Syst. 2007, 19, 41. [Google Scholar]
  58. Bach, F.R. Consistency of the group lasso and multiple kernel learning. J. Mach. Learn. Res. 2008, 9, 1179–1225. [Google Scholar]
  59. Kong, D.; Ding, C.; Huang, H. Robust nonnegative matrix factorization using l21-norm. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, Glasgow, UK, 24–28 October 2011; pp. 673–682.
  60. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Collaborative sparse regression for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 341–354. [Google Scholar] [CrossRef]
  61. Sprechmann, P.; Ramirez, I.; Sapiro, G.; Eldar, Y.C. C-HiLasso: A collaborative hierarchical sparse modeling framework. IEEE Trans. Signal Process. 2011, 59, 4183–4198. [Google Scholar] [CrossRef]
  62. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef]
  63. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral unmixing with robust collaborative sparse regression. Remote Sens. 2016, 8, 588. [Google Scholar] [CrossRef]
  64. Ma, Y.; Li, C.; Mei, X.; Liu, C.; Ma, J. Robust Sparse Hyperspectral Unmixing with $l_{2,1}$ Norm. IEEE Trans. Geosci. Remote Sens. 2017. [Google Scholar] [CrossRef]
  65. Li, C.; Ma, Y.; Huang, J.; Mei, X.; Liu, C.; Ma, J. GBM-Based Unmixing of Hyperspectral Data Using Bound Projected Optimal Gradient Method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 952–956. [Google Scholar] [CrossRef]
  66. Chang, C.-C.; Lin, C.J. LIBSVM—A Library for Support Vector Machines. Available online: https://www.csie.ntu.edu.tw/cjlin/libsvm/ (accessed on 8 January 2017).
  67. Li, W. Wei Li’s Homepage. Available online: http://research.cs.buct.edu.cn/liwei/ (accessed on 8 January 2017).
  68. Lu, H. Huchuan Lu’s Homepage. Available online: http://202.118.75.4/lu/publications.html (accessed on 8 January 2017).
  69. Liu, F. Fang Liu’s Homepage. Available online: http://web.xidian.edu.cn/fliu/en/paper.html (accessed on 8 January 2017).
  70. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the proposed method.
Figure 2. Ground truth image of (a) Indian Pines; (b) Pavia University.
Figure 3. Performance of overall accuracy as a function of the parameter λ using the hyperspectral image of (a) Indian Pines; (b) Pavia University.
Figure 4. Classification maps for the Indian Pines data set. (a) l2+l1; (b) l2+l1+NC; (c) l2+l1+SF; (d) l2+l1+SF+NC; (e) l1+l1; (f) l1+l1+NC; (g) l1+l1+SF; (h) l1+l1+SF+NC; (i) F+l1; (j) F+l1+NC; (k) F+l1+SF; (l) F+l1+SF+NC; (m) F+l2,1; (n) F+l2,1+NC; (o) F+l2,1+SF; (p) F+l2,1+SF+NC; (q) l2,1+l2,1; (r) l2,1+l2,1+NC; (s) l2,1+l2,1+SF; (t) l2,1+l2,1+SF+NC (SFL).
Figure 5. Classification maps for the Pavia University data set. (a) l2+l1; (b) l2+l1+NC; (c) l2+l1+SF; (d) l2+l1+SF+NC; (e) l1+l1; (f) l1+l1+NC; (g) l1+l1+SF; (h) l1+l1+SF+NC; (i) F+l1; (j) F+l1+NC; (k) F+l1+SF; (l) F+l1+SF+NC; (m) F+l2,1; (n) F+l2,1+NC; (o) F+l2,1+SF; (p) F+l2,1+SF+NC; (q) l2,1+l2,1; (r) l2,1+l2,1+NC; (s) l2,1+l2,1+SF; (t) l2,1+l2,1+SF+NC (SFL).
Figure 6. Classification maps for the Indian Pines data set using the compared methods and the proposed method. (a) SVM; (b) NRS; (c) CRT; (d) KCRC; (e) OMP; (f) SOMP; (g) JRSRC; (h) 3D-CNN; (i) SFL.
Figure 7. Classification maps for the Pavia University data set using the compared methods and the proposed method. (a) SVM; (b) NRS; (c) CRT; (d) KCRC; (e) OMP; (f) SOMP; (g) JRSRC; (h) 3D-CNN; (i) SFL.
Table 1. Sixteen ground-truth classes in AVIRIS Indian Pines and the training and test sets for each class.

No.  Name                           Train  Test
1    Alfalfa                        5      41
2    Corn-notill                    143    1285
3    Corn-min                       83     747
4    Corn                           24     213
5    Grass/Pasture                  48     435
6    Grass/Trees                    73     657
7    Grass/Pasture-mowed            3      25
8    Hay-windrowed                  48     430
9    Oats                           2      18
10   Soybeans-notill                97     875
11   Soybeans-min                   246    2209
12   Soybeans-clean                 59     534
13   Wheat                          21     184
14   Woods                          127    1138
15   Buildings-Grass-Trees-Drives   39     347
16   Stone-Steel Towers             9      84
Table 2. Nine classes in the University of Pavia and the training and test sets for each class.

No.  Name           Train  Test
1    Asphalt        66     6565
2    Meadows        186    18,463
3    Gravel         21     2078
4    Trees          31     3033
5    Metal sheets   13     1332
6    Bare soil      50     4679
7    Bitumen        13     1317
8    Bricks         37     3645
9    Shadows        9      938
Table 3. Overall Accuracy, Average Accuracy, Kappa Statistic and Time of the Indian Pines data set when using the pixel-wise strategy.

Norm                    |           l2+l1             |           l1+l1
Spatial Filtering       | No     No     Yes    Yes    | No     No     Yes    Yes
Non-Negative Constraint | No     Yes    No     Yes    | No     Yes    No     Yes
Class 1                 | 14.63  39.02  95.12  100.00 | 17.07  53.66  95.12  100.00
Class 2                 | 67.16  73.07  97.90  99.84  | 68.40  76.19  97.74  99.46
Class 3                 | 36.68  56.22  92.90  97.32  | 37.08  58.37  96.39  97.19
Class 4                 | 20.66  33.33  97.18  97.18  | 21.13  30.05  96.71  98.12
Class 5                 | 75.63  90.11  96.78  99.54  | 76.09  91.49  96.09  99.31
Class 6                 | 86.15  94.67  95.89  99.39  | 87.52  96.04  98.63  99.70
Class 7                 | 4.00   24.00  32.00  84.00  | 4.00   32.00  36.00  80.00
Class 8                 | 97.67  100.00 100.00 100.00 | 97.91  99.77  100.00 100.00
Class 9                 | 5.56   11.11  38.89  55.56  | 5.56   16.67  33.33  83.33
Class 10                | 38.86  29.14  95.09  95.43  | 38.29  32.80  96.46  96.34
Class 11                | 67.22  80.62  97.96  97.69  | 67.90  81.58  98.28  98.46
Class 12                | 49.81  34.46  97.19  97.38  | 49.06  39.89  97.94  97.75
Class 13                | 76.09  81.52  91.30  94.02  | 76.63  88.04  90.22  95.65
Class 14                | 96.84  98.59  99.12  100.00 | 97.01  98.95  99.38  100.00
Class 15                | 39.77  31.41  96.54  96.54  | 39.48  36.02  98.56  98.85
Class 16                | 91.67  92.86  85.71  94.05  | 91.67  92.86  91.67  92.86
Overall Accuracy (%)    | 65.63  71.32  96.64  98.06  | 66.07  73.34  97.44  98.47
Average Accuracy (%)    | 54.27  60.64  88.10  94.25  | 54.68  64.02  88.91  96.06
Kappa Statistic         | 0.603  0.667  0.962  0.978  | 0.608  0.690  0.971  0.983
Time (s)                | 5536   5613   4953   4832   | 30,168 31,843 32,643 32,998
Table 4. Overall Accuracy, Average Accuracy, Kappa Statistic and Time of the Indian Pines data set when dealing with all the test pixels simultaneously.

Norm                    |           F+l1              |           F+l2,1            |         l2,1+l2,1
Spatial Filtering       | No     No     Yes    Yes    | No     No     Yes    Yes    | No     No     Yes    Yes
Non-Negative Constraint | No     Yes    No     Yes    | No     Yes    No     Yes    | No     Yes    No     Yes
Class 1                 | 26.83  14.63  95.12  100.00 | 36.59  31.71  95.12  100.00 | 36.59  80.49  100.00 100.00
Class 2                 | 49.34  67.00  98.13  99.84  | 65.21  63.42  97.82  99.30  | 69.57  66.54  99.46  99.47
Class 3                 | 50.87  34.27  95.72  97.32  | 38.96  44.44  94.78  96.52  | 44.18  69.75  97.19  97.86
Class 4                 | 14.08  22.54  97.18  97.18  | 22.54  21.60  98.12  100.00 | 27.70  31.46  97.65  97.65
Class 5                 | 79.77  76.32  97.47  99.54  | 85.98  83.68  97.01  98.85  | 85.06  91.26  99.31  99.31
Class 6                 | 93.00  87.21  97.26  99.39  | 93.46  94.22  98.02  99.54  | 95.89  98.48  99.70  99.85
Class 7                 | 20.00  4.00   52.00  84.00  | 4.00   16.00  52.00  84.00  | 8.00   40.00  80.00  92.00
Class 8                 | 99.07  97.91  100.00 100.00 | 99.77  99.53  100.00 100.00 | 100.00 99.53  100.00 100.00
Class 9                 | 5.56   5.56   33.33  55.66  | 5.56   11.11  83.33  100.00 | 5.56   22.22  83.33  100.00
Class 10                | 28.91  38.74  95.89  95.43  | 25.14  26.06  96.69  96.91  | 31.54  49.49  96.34  97.71
Class 11                | 78.86  67.81  98.37  97.69  | 80.81  76.28  98.14  98.78  | 78.32  83.88  98.46  98.64
Class 12                | 31.27  50.19  97.75  97.38  | 32.21  34.08  97.75  98.69  | 41.01  63.86  97.75  98.50
Class 13                | 74.46  78.80  89.67  94.02  | 87.50  82.61  90.76  96.20  | 94.02  98.37  95.65  97.28
Class 14                | 98.33  96.49  99.21  100.00 | 98.24  97.98  98.95  99.91  | 98.15  98.51  100.00 100.00
Class 15                | 25.94  38.90  98.27  96.54  | 36.31  33.14  97.69  98.56  | 42.07  44.09  98.85  98.85
Class 16                | 92.86  88.10  84.52  94.05  | 88.10  94.05  84.52  88.10  | 91.67  96.43  92.86  92.86
Overall Accuracy (%)    | 65.42  65.67  97.31  98.49  | 67.96  71.49  97.41  98.59  | 70.15  77.35  98.48  98.88
Average Accuracy (%)    | 54.34  54.28  89.37  94.25  | 56.27  56.87  93.07  97.21  | 59.33  70.90  96.03  98.14
Kappa Statistic         | 0.597  0.603  0.970  0.978  | 0.625  0.616  0.969  0.984  | 0.653  0.737  0.982  0.987
Time (s)                | 168    287    188    366    | 52     253    683    374    | 88     527    539    551
Table 5. Overall Accuracy, Average Accuracy, Kappa Statistic and Time of the Pavia University data set when using the pixel-wise strategy.

Norm                    |           l2+l1             |           l1+l1
Spatial Filtering       | No     No     Yes    Yes    | No     No     Yes    Yes
Non-Negative Constraint | No     Yes    No     Yes    | No     Yes    No     Yes
Class 1                 | 58.42  95.31  75.83  90.92  | 61.42  82.07  84.52  90.94
Class 2                 | 90.78  95.26  99.93  99.83  | 92.62  93.45  99.92  99.73
Class 3                 | 24.21  56.30  71.94  85.13  | 26.90  59.24  69.44  82.82
Class 4                 | 83.12  87.27  94.33  92.71  | 83.98  82.10  93.37  93.31
Class 5                 | 99.77  99.77  100.00 100.00 | 99.77  99.70  100.00 100.00
Class 6                 | 37.26  49.43  61.92  86.64  | 36.96  62.00  65.74  86.06
Class 7                 | 10.78  4.48   80.64  97.72  | 9.95   42.90  56.87  95.44
Class 8                 | 52.18  38.79  59.92  67.85  | 53.83  39.62  73.85  73.83
Class 9                 | 62.58  72.49  42.32  83.90  | 60.98  89.02  49.47  85.39
Overall Accuracy (%)    | 69.50  79.22  84.63  92.50  | 71.01  79.39  86.85  92.80
Average Accuracy (%)    | 57.68  66.11  76.31  89.41  | 58.49  72.23  77.02  89.72
Kappa Statistic         | 0.587  0.716  0.792  0.900  | 0.606  0.723  0.822  0.904
Time (s)                | 2715   2788   2729   2756   | 18,218 18,256 18,273 18,286
Table 6. Overall Accuracy, Average Accuracy, Kappa Statistic and Time of the Pavia University data set when dealing with all the test pixels simultaneously.

Norm                    |           F+l1              |           F+l2,1            |         l2,1+l2,1
Spatial Filtering       | No     No     Yes    Yes    | No     No     Yes    Yes    | No     No     Yes    Yes
Non-Negative Constraint | No     Yes    No     Yes    | No     Yes    No     Yes    | No     Yes    No     Yes
Class 1                 | 62.13  82.00  81.78  90.25  | 67.78  81.78  82.38  90.27  | 67.27  95.31  86.60  89.61
Class 2                 | 93.65  93.47  99.64  99.71  | 94.82  93.53  99.86  99.72  | 96.14  95.26  99.96  99.67
Class 3                 | 24.11  59.10  68.33  85.61  | 21.17  59.19  70.12  85.42  | 39.51  56.30  79.50  88.64
Class 4                 | 84.70  82.20  93.64  92.68  | 85.00  82.23  92.91  92.75  | 86.58  87.27  94.56  93.60
Class 5                 | 99.77  99.70  100.00 100.00 | 99.70  99.70  100.00 100.00 | 99.77  99.77  100.00 100.00
Class 6                 | 37.76  61.98  74.05  88.97  | 35.87  61.78  71.92  89.05  | 36.77  49.43  81.34  89.68
Class 7                 | 3.19   42.52  40.39  97.11  | 2.43   42.52  43.43  96.74  | 18.38  0.38   95.52  98.03
Class 8                 | 57.53  39.78  73.39  70.89  | 60.52  40.30  74.10  70.86  | 58.93  38.79  78.74  72.87
Class 9                 | 43.92  88.70  51.81  83.80  | 43.50  88.91  55.54  84.54  | 57.89  72.49  65.03  87.53
Overall Accuracy (%)    | 71.72  79.50  86.75  92.88  | 72.57  79.62  87.09  92.90  | 76.10  81.23  92.29  93.34
Average Accuracy (%)    | 56.31  72.16  75.89  89.89  | 56.75  76.70  66.11  89.93  | 62.36  74.11  86.80  91.07
Kappa Statistic         | 0.608  0.722  0.821  0.905  | 0.624  0.723  0.824  0.905  | 0.656  0.716  0.887  0.911
Time (s)                | 147    452    169    437    | 66     433    108    477    | 611    637    621    648
Table 7. Overall Accuracy, Average Accuracy, Kappa Statistic and Time of the Indian Pines data set when using the compared methods and the proposed method.

Method               | SVM    NRS    CRT    KCRC   OMP    SOMP   JRSRC  3D-CNN SFL
Class 1              | 41.46  60.98  26.83  36.59  60.98  68.29  68.29  75.61  100.00
Class 2              | 80.39  51.98  87.24  72.84  67.32  94.63  90.97  84.98  99.77
Class 3              | 66.93  26.91  55.02  55.15  51.67  86.48  95.31  72.82  97.86
Class 4              | 68.54  30.05  27.70  24.41  38.50  89.20  66.67  66.67  97.65
Class 5              | 88.51  88.74  91.95  83.22  85.98  95.63  95.17  89.20  99.31
Class 6              | 94.67  94.82  99.24  98.17  93.15  99.24  99.39  96.96  99.85
Class 7              | 32.00  48.00  20.00  24.00  36.00  12.00  4.00   40.00  92.00
Class 8              | 99.53  100.00 100.00 100.00 99.53  100.00 100.00 99.30  100.00
Class 9              | 33.33  33.33  11.11  22.22  38.89  11.11  11.11  22.22  100.00
Class 10             | 74.86  24.91  65.83  56.11  51.54  80.69  71.09  80.23  97.71
Class 11             | 84.52  97.87  84.43  90.00  70.08  92.21  98.10  82.16  98.64
Class 12             | 84.83  32.58  67.42  44.19  46.07  88.01  96.25  80.90  98.50
Class 13             | 99.46  97.28  98.91  98.37  95.65  99.46  100.00 99.46  97.28
Class 14             | 97.10  98.33  97.36  98.51  95.08  100.00 100.00 95.08  100.00
Class 15             | 43.23  53.31  51.59  40.63  41.50  67.72  65.71  52.74  98.85
Class 16             | 94.05  85.71  91.67  90.48  86.90  98.81  100.00 92.86  92.86
Overall Accuracy (%) | 82.81  70.74  80.65  76.95  70.57  91.44  92.02  84.04  98.88
Average Accuracy (%) | 73.96  64.05  67.27  64.68  66.18  79.52  78.18  76.95  98.14
Kappa Statistic      | 0.803  0.652  0.777  0.732  0.662  0.902  0.908  0.818  0.987
Time (s)             | 3      401    47     33     101    454    2311   895    551
Table 8. Overall Accuracy, Average Accuracy, Kappa Statistic and Time of the Pavia University data set when using the compared methods and the proposed method.

Method               | SVM    NRS    CRT    KCRC   OMP    SOMP   JRSRC  3D-CNN SFL
Class 1              | 89.02  92.60  82.96  82.99  67.83  91.96  99.19  89.38  89.61
Class 2              | 99.00  98.94  99.26  97.87  93.48  100.00 98.94  99.88  99.67
Class 3              | 14.58  58.23  49.37  29.84  55.63  65.78  67.28  88.98  88.64
Class 4              | 85.46  85.23  86.38  80.42  79.49  85.00  93.34  93.44  93.60
Class 5              | 98.35  98.27  99.62  97.75  99.62  100.00 100.00 100.00 100.00
Class 6              | 47.96  52.16  55.11  34.61  57.96  63.04  76.64  86.93  89.68
Class 7              | 6.45   43.43  61.96  60.06  55.28  86.48  73.58  98.48  98.03
Class 8              | 96.84  87.96  87.22  91.91  68.23  71.08  86.26  68.86  72.87
Class 9              | 99.68  91.68  90.72  100.00 80.70  80.38  89.23  86.46  87.53
Overall Accuracy (%) | 83.27  86.58  85.80  81.89  79.02  88.31  92.34  92.72  93.34
Average Accuracy (%) | 70.82  78.71  79.18  75.05  73.14  82.64  87.16  90.27  91.07
Kappa Statistic      | 0.770  0.817  0.807  0.751  0.719  0.841  0.897  0.903  0.911
Time (s)             | 3      601    105    61     53     69     1086   663    648
