Article

Discriminative Sparse Representation for Hyperspectral Image Classification: A Semi-Supervised Perspective

1 School of Earth Sciences and Engineering, Hohai University, Nanjing 211100, China
2 Key Laboratory for Satellite Mapping Technology and Applications of National Administration of Surveying, Mapping and Geoinformation of China, Nanjing University, Nanjing 210023, China
3 Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210023, China
4 Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing University, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(4), 386; https://doi.org/10.3390/rs9040386
Submission received: 27 November 2016 / Revised: 13 April 2017 / Accepted: 16 April 2017 / Published: 19 April 2017

Abstract

This paper presents a novel semi-supervised joint dictionary learning (S²JDL) algorithm for hyperspectral image classification. The algorithm jointly minimizes the reconstruction and classification errors by optimizing a semi-supervised dictionary learning problem with a unified objective loss function. To this end, we construct a semi-supervised objective loss function which combines a reconstruction term from unlabeled samples with a reconstruction–discrimination term from labeled samples to leverage both the unsupervised and supervised information, and a soft-max loss is used to build the reconstruction–discrimination term. In the training phase, we randomly select unlabeled samples and loop through the labeled samples to form the training pairs, and first-order stochastic gradients are computed to simultaneously update the dictionary and the classifier by feeding the training pairs into the objective loss function. The experimental results with three popular hyperspectral datasets indicate that the proposed algorithm outperforms the related methods.

Graphical Abstract

1. Introduction

Hyperspectral remote sensing sensors acquire hundreds of contiguous spectral bands for the same area on the Earth's surface, providing a wealth of information that enables accurate discrimination of spectrally similar materials of interest [1]. The acquired hyperspectral images have been extensively exploited for classification tasks [2,3,4,5], which aim at assigning each pixel to one thematic class corresponding to an object in the scene.
Recently, sparse representation has emerged as an effective way to solve various computer vision tasks, such as face recognition, image super-resolution, motion tracking, image segmentation, image denoising and inpainting, background modeling, photometric stereo, and image classification [6], where the concept of sparsity often leads to state-of-the-art performance. Sparse representation has also been used for hyperspectral image classification [4,7], target detection [7,8], unmixing [9], pansharpening [10], image decomposition [11,12], and dimensionality reduction [13,14], where the high-dimensional pixel vectors can be sparsely represented by a few training samples (atoms) from a given dictionary and the encoded sparse vectors carry the class-label information. For sparse representation to yield superior performance, the desired dictionary should have good representation power [15].
Traditional methods for learning a representative and compact dictionary have been extensively exploited [16,17,18,19]. The method of optimal directions (MOD) [16] improves on matching pursuit with an iterative optimization strategy that iteratively calculates the optimal adjustment of the atoms. Singular value decomposition generalized from K-means (K-SVD) [17] is an iterative method that alternates between sparse coding of the instances based on the current dictionary and updating the dictionary to better represent the data. The majorization method (MM) [18] is an optimization strategy that substitutes the original objective function with a surrogate function updated in each optimization step. The recursive least squares dictionary learning algorithm (RLS-DLA) [19] adopts a continuous update approach as each atom is processed. These methods have shown good performance in various computer vision tasks. However, they are designed for reconstruction rather than classification, and they often suffer from high computational complexity, less representative power for specific classes, and lower discriminative power.
Advanced dictionary learning algorithms have been recently proposed to incorporate a discriminative term into the objective function in the dictionary learning problem [20,21,22,23,24,25,26,27,28,29,30], which has been regarded as discriminative sparse representation (DSR) in our previous work [31]. DSR allows for jointly learning reconstructive and discriminative parts instead of only the reconstructive one. Existing DSR models include the following categories:
(i)
The seminal studies that paved the way for considering discrimination in dictionary learning. Mairal [20] adopted MOD and K-SVD to update the dictionary by using a truncated Newton iteration method. However, this method is not strictly convex, and it did not explore the discrimination capability of the sparse coefficients. Later, Mairal [22] adopted a logistic loss function to build a binary classification problem; this work also illustrated the possibility of extending the proposed binary classification problem to multi-class classification problems using a soft-max loss function. Pham [21] designed a constrained optimization problem by adopting a linear classifier with a quadratic loss and $\ell_2$-norm regularization to jointly minimize the reconstruction and classification errors. This approach may suffer from local minima because it iteratively alternates between the reconstruction and classification terms.
(ii)
Exploiting different loss functions and discriminative criteria. Lian [23] adopted a hinge loss function inspired by support vector machines (SVMs) to design a unified objective loss function that links classification with dictionary learning. Such a framework is able to further increase the margins of a binary classifier, which consequently decreases the error bound of the classifier. Yang [26] presented a Fisher discrimination dictionary learning (FDDL) method by using a Fisher discrimination criterion to penalize the sparse coefficients, where a structured dictionary is used to minimize the reconstruction error. Henao [32] developed a new Bayesian formulation for the nonlinear SVM, based on a Gaussian process and with the hinge loss expressed as a scaled mixture of normals.
(iii)
Incorporating K-SVD into a DSR model. Following the work in [21], Zhang [25] proposed discriminative K-SVD (D-KSVD) by incorporating the classification error term into the K-SVD-based objective function. However, this approach does not guarantee discriminative ability when acting on a small training set. To overcome this issue, Jiang [29] presented a label consistent K-SVD (LC-KSVD) algorithm, where the class-label information is associated with the dictionary atoms to enforce the discriminative property, and the optimal solution is efficiently obtained by using the K-SVD algorithm.
(iv)
Exploiting hybrid supervised and unsupervised DSR models. Lian [24] presented a probabilistic model that combines an unsupervised model (i.e., a Gaussian mixture model) and a supervised model (i.e., logistic regression) for supervised dictionary learning. Mairal [33] presented an online dictionary learning (OnlineDL) algorithm. Following this work, Zhang [30] presented an online semi-supervised dictionary learning (OnlineSSDL) algorithm by optimizing the reconstruction error from the labeled and unlabeled data and the classification error from the labeled data. However, for the sake of simplicity, OnlineSSDL drops the weight decay for the classifier parameters and makes the problem strictly convex, resulting in a suboptimal solution.
(v)
Exploiting structured sparsity in the DSR model. Compared with traditional sparse representation methods, structured sparsity-based methods are generally more robust to noise due to the stability associated with the group structure [34]. In addition, structured sparsity-inducing dictionary learning methods require a smaller sample size to obtain the optimal solution [35,36,37,38]. Based on graph topology, Jiang [27] proposed a submodular dictionary learning (SDL) algorithm by optimizing an objective function that accounts for the entropy rate of a random walk on a graph and a discriminative term. This dictionary learning problem can be considered a graph partitioning problem, where the dictionary is updated by finding a graph topology that maximizes the objective function.
These works also stated that a good dictionary learning method should find a proper balance between reconstruction, discrimination, and compactness. In particular, some studies have exploited DSR for hyperspectral image processing. Charles [39] modified an existing unsupervised learning method to learn the dictionary for hyperspectral image classification. Later, Castrodad [40] exploited DSR, where block-structured dictionary learning and subpixel unsupervised abundance mapping were jointly considered. More recently, Wang [41] designed a hinge loss function inspired by learning vector quantization to address the discriminative dictionary learning problem. Wang [42] proposed a semi-supervised classification method by jointly learning the classifier and dictionary in a task-driven framework, where a logistic loss function is adopted to build the discriminative term. Our previous work presented a new DSR method that learns a reconstructive dictionary and a discriminative classifier in a sparse representation model regularized with total variation [31].
Despite the good performance of these dictionary learning methods, some shortcomings can be observed. On the one hand, most of these approaches deal with supervised dictionary learning problems, and the performance of the learnt dictionary for classification greatly depends on the number of labeled samples. Unfortunately, the collection of labeled training samples is generally difficult, expensive, and time-consuming, whereas unlabeled training samples can be generated in a much easier way, which has fostered the idea of exploiting semi-supervised learning (SSL) for hyperspectral image classification [43]. On the other hand, the loss function adopted in most of these approaches is the square loss, which treats classification as a regression problem. Moreover, the square loss suffers from a critical flaw: outliers are penalized too heavily when the errors are squared.
In this paper, we address the above issues by jointly learning a reconstructive and discriminative dictionary in a semi-supervised fashion. To this end, we first employ a soft-max loss function to build the multi-class discriminative term and thereby address the multi-class classification problem. Different from the square loss, the soft-max loss is overparameterized, which means that for any hypothesis to be fitted there are multiple parameter settings giving rise to exactly the same mapping from inputs to predictions. We then calculate first-order stochastic gradient descent (SGD) [44] updates to simultaneously refine the dictionary and classifier. The dictionary learning phase is performed iteratively in a semi-supervised fashion with the obtained labeled and unlabeled training pairs. The ultimate goal of this study is classification, while the dictionary is an implicit variable when applying the proposed DSR model to hyperspectral image classification. Note that the recent study [42] is related to our work; however, we adopt a soft-max loss to build the discriminative term, whereas [42] used a logistic loss.
Although our previous studies [11,14,45,46] have exploited sparse representation for hyperspectral image classification, the methodologies are quite different from this work. Xue [11] focused on hyperspectral image decomposition for spectral-spatial classification, Xue [14] addressed hyperspectral image dimensionality reduction using sparse graph embedding, and Xue [45,46] exploited sparse graph regularization for hyperspectral image classification with very few labeled samples.
In this context, the main contribution of our work is the proposed semi-supervised joint dictionary learning (S²JDL) algorithm, which leverages the information from both labeled and unlabeled samples, allowing for more accurate classification. The proposed algorithm is unique compared to previously proposed approaches in the hyperspectral image classification community. In addition, we adopt a soft-max loss function to build the DSR problem, which is beneficial for hyperspectral image classification since multi-class classification problems are very common in this community.

2. Background

Let $\mathbf{X} = [\mathbf{X}^l, \mathbf{X}^u] = [\mathbf{x}_1, \ldots, \mathbf{x}_N] \in \mathbb{R}^{n \times N}$ be a hyperspectral dataset with an $n$-dimensional signal for each pixel $\mathbf{x}_i = [x_1, \ldots, x_n]^T$, $i \in \{1, \ldots, N\}$. The superscripts $l$ and $u$ denote a labeled and an unlabeled sample or dataset, respectively, and the subscripts $s$ and $u$ denote the supervised and unsupervised objective loss functions (i.e., $\Gamma$ or $\mathcal{L}$). Let $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_N] \in \mathbb{R}^{m \times N}$ be the label matrix for the input data, where the index of the nonzero entry (i.e., 1) of $\mathbf{y}_i$ gives its label, and let $Y(\mathbf{x}_i) \in \{1, \ldots, m\}$ denote the label of $\mathbf{x}_i$. Let $\mathbf{D} = [\mathbf{d}_1, \ldots, \mathbf{d}_K] \in \mathbb{R}^{n \times K}$ be the dictionary, $\mathbf{W} = [\mathbf{w}_1^T, \ldots, \mathbf{w}_m^T]^T \in \mathbb{R}^{m \times K}$ the classifier, and $\mathbf{Z} = [\mathbf{z}_1, \ldots, \mathbf{z}_N] \in \mathbb{R}^{K \times N}$ the sparse coefficients of $\mathbf{X}$.

2.1. Sparse Representation

In the context of sparse representation, the sparse coefficients of $\mathbf{X}$ with respect to the dictionary $\mathbf{D}$ can be obtained by optimizing an $\ell_1$-norm regularized problem [6]
$\arg\min_{\mathbf{Z}} \frac{1}{2}\|\mathbf{X} - \mathbf{D}\mathbf{Z}\|_F^2 + \lambda\|\mathbf{z}_i\|_1$, (1)
where λ is a regularization parameter controlling the tradeoff between reconstruction error and sparsity.
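To make Equation (1) concrete, the following is a minimal Python/NumPy sketch that solves it column by column with the iterative shrinkage-thresholding algorithm (ISTA). This is only an illustrative solver under our own naming, not the SUnSAL or OMP routines used later in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_sparse_code(X, D, lam=1e-5, n_iter=200):
    """Approximately solve min_Z 0.5*||X - D Z||_F^2 + lam*||z_i||_1 (Eq. (1)).

    X : (n, N) data matrix, D : (n, K) dictionary. Returns Z : (K, N).
    """
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)               # gradient of the reconstruction term
        Z = soft_threshold(Z - grad / L, lam / L)
    return Z

# Tiny usage example with random data and unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 20))
D /= np.linalg.norm(D, axis=0)
X = D @ rng.standard_normal((20, 5))
Z = ista_sparse_code(X, D, lam=0.1)
```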

2.2. Dictionary Learning for Classification

For various dictionary learning algorithms, the construction of D can be achieved by minimizing the reconstruction error and satisfying the sparsity constraint as
$\arg\min_{\mathbf{D},\mathbf{Z}} \frac{1}{2}\|\mathbf{X} - \mathbf{D}\mathbf{Z}\|_F^2 + \lambda\|\mathbf{z}_i\|_1$. (2)
K-SVD [17], which iteratively alternates between sparse coding and dictionary updating to better fit the data, is an efficient algorithm generalized from the K-means clustering process to solve Equation (2). However, K-SVD is not explicitly designed for classification tasks, as it only focuses on minimizing the reconstruction error.
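For illustration only, the K-SVD alternation described above can be sketched as follows, reusing `ista_sparse_code` and `numpy` from the previous snippet for the coding step (K-SVD proper uses OMP); the helper names are ours.

```python
def ksvd_dictionary_update(X, D, Z):
    """One K-SVD sweep: refresh each atom (and its coefficients) from the
    rank-1 SVD of the residual that this atom should explain."""
    for k in range(D.shape[1]):
        users = np.nonzero(Z[k, :])[0]          # samples that currently use atom k
        if users.size == 0:
            continue
        Zk = Z[:, users].copy()
        Zk[k, :] = 0.0
        E = X[:, users] - D @ Zk                # residual without atom k
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                       # new unit-norm atom
        Z[k, users] = s[0] * Vt[0, :]           # matching coefficients
    return D, Z

def ksvd(X, K, n_outer=10, lam=0.1):
    """Alternate sparse coding and dictionary updating to solve Equation (2)."""
    rng = np.random.default_rng(0)
    D = X[:, rng.choice(X.shape[1], K, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        Z = ista_sparse_code(X, D, lam)         # coding step
        D, Z = ksvd_dictionary_update(X, D, Z)  # dictionary-update step
    return D, Z
```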
Separating dictionary learning from classification may result in a suboptimal D for classification. Therefore, it is generally preferred to jointly learn the dictionary and classifier by solving [21,22,25,28,29,30]
$\arg\min_{\mathbf{D},\mathbf{W},\mathbf{Z}} \frac{1}{2}\|\mathbf{X} - \mathbf{D}\mathbf{Z}\|_F^2 + \frac{1}{N}\sum_{i=1}^{N}\Gamma\left[\mathbf{y}_i, f(\mathbf{z}^*(\mathbf{x}_i, \mathbf{D}), \mathbf{W})\right] + \frac{\lambda_1}{2}\|\mathbf{W}\|_F^2 + \lambda\|\mathbf{z}_i\|_1$, (3)
where the classifier can be obtained by optimizing the model parameter $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_m]^T \in \mathbb{R}^{m \times K}$ as
$\mathbf{W} = \arg\min_{\mathbf{W}} \frac{1}{N}\sum_{i=1}^{N}\Gamma\left[\mathbf{y}_i, f(\mathbf{z}^*(\mathbf{x}_i, \mathbf{D}), \mathbf{W})\right] + \frac{\lambda_1}{2}\|\mathbf{W}\|_F^2$, (4)
where $\Gamma$ denotes the objective loss function, which can take the square, logistic, soft-max, or hinge loss form, $\mathbf{z}^*(\mathbf{x}_i, \mathbf{D})$ denotes the sparse code $\mathbf{z}_i$ obtained by solving Equation (1), and $\lambda_1$ is another regularization parameter that prevents overfitting.

2.3. Related Work

Recently proposed joint dictionary learning methods mainly focus on supervised dictionary learning, which take the form [22]
$\arg\min_{\mathbf{D},\mathbf{W},\mathbf{Z}} \frac{1}{2}\|\mathbf{X} - \mathbf{D}\mathbf{Z}\|_F^2 + \frac{1}{N}\sum_{i=1}^{N}\left\{\Gamma\left[\mathbf{y}_i, f(\mathbf{z}^*(\mathbf{x}_i, \mathbf{D}), \mathbf{W})\right] - \Gamma\left[-\mathbf{y}_i, f(\mathbf{z}^*(\mathbf{x}_i, \mathbf{D}), \mathbf{W})\right]\right\} + \frac{\lambda_1}{2}\|\mathbf{W}\|_F^2 + \lambda\|\mathbf{z}_i\|_1$. (5)
However, Equation (5) is designed for binary classification with $\mathbf{y}_i \in \{[1\ 0]^T, [0\ 1]^T\}$.
K-SVD can be extended to discriminative K-SVD (D-KSVD) [25] by reconstructing an augmented dictionary with augmented training data, which can be formulated as
$\arg\min_{\mathbf{D},\mathbf{W},\mathbf{Z}} \alpha\|\mathbf{X} - \mathbf{D}\mathbf{Z}\|_F^2 + \beta\|\mathbf{Y} - \mathbf{W}\mathbf{Z}\|_F^2 + \lambda\|\mathbf{z}_i\|_1 \;\Leftrightarrow\; \arg\min_{\tilde{\mathbf{D}},\mathbf{Z}} \|\tilde{\mathbf{X}} - \tilde{\mathbf{D}}\mathbf{Z}\|_F^2 + \lambda\|\mathbf{z}_i\|_1$, (6)
where $\tilde{\mathbf{X}} = [\sqrt{\alpha}\mathbf{X}^T, \sqrt{\beta}\mathbf{Y}^T]^T$, $\tilde{\mathbf{D}} = [\sqrt{\alpha}\mathbf{D}^T, \sqrt{\beta}\mathbf{W}^T]^T$, and $\alpha$ and $\beta$ are two scalars controlling the relative contributions of the corresponding terms.
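A minimal sketch of the augmentation behind Equation (6), under the square-root scaling written above (an assumption on our part); after learning, the augmented dictionary is split back into D and W and the atoms are renormalized.

```python
import numpy as np

def dksvd_stack(X, Y, alpha=1.0, beta=1.0):
    """Stack data and labels so a purely reconstructive solver also fits W."""
    return np.vstack([np.sqrt(alpha) * X, np.sqrt(beta) * Y])   # X_tilde

def dksvd_split(D_tilde, n, alpha=1.0, beta=1.0):
    """Recover D and W from D_tilde = [sqrt(a) D; sqrt(b) W] and renormalize."""
    D = D_tilde[:n, :] / np.sqrt(alpha)
    W = D_tilde[n:, :] / np.sqrt(beta)
    norms = np.linalg.norm(D, axis=0)
    return D / norms, W / norms
```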
Recently, label consistent K-SVD (LC-KSVD) [29] has emerged as an effective way to solve Equation (6) by jointly adding a classification term and a label consistent regularization term into the square loss objective function, which is of the form
$\arg\min_{\mathbf{D},\mathbf{W},\mathbf{G},\mathbf{Z}} \|\mathbf{X} - \mathbf{D}\mathbf{Z}\|_F^2 + \alpha\|\mathbf{Q} - \mathbf{G}\mathbf{Z}\|_F^2 + \beta\|\mathbf{Y} - \mathbf{W}\mathbf{Z}\|_F^2 + \lambda\|\mathbf{z}_i\|_1$, (7)
where the term $\|\mathbf{Q} - \mathbf{G}\mathbf{Z}\|_F^2$ signifies the discriminative sparse-code error, and $\mathbf{Q} = [\mathbf{q}_1, \ldots, \mathbf{q}_N] \in \mathbb{R}^{K \times N}$ refers to the discriminative sparse codes (0 or 1) corresponding to the input data. The nonzero values (i.e., 1) of $\mathbf{q}_i = [q_i^1, \ldots, q_i^K]^T \in \mathbb{R}^K$ occur at those indices where the input signal $\mathbf{z}_i$ and the dictionary atom $\mathbf{d}_k$ share the same label. $\mathbf{G} \in \mathbb{R}^{K \times K}$ is a linear transformation matrix, which transforms the original sparse codes into the most discriminative ones in the sparse feature space $\mathbb{R}^K$. Similar to D-KSVD, LC-KSVD is solved by using K-SVD with an augmented dictionary $\tilde{\mathbf{D}} = [\sqrt{\alpha}\mathbf{D}^T, \sqrt{\beta}\mathbf{W}^T, \sqrt{\gamma}\mathbf{G}^T]^T$ [29].
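The label-consistency matrix Q in Equation (7) depends only on the class of each atom and each training sample; a minimal construction (argument names are ours) is:

```python
import numpy as np

def build_label_consistency(atom_labels, sample_labels):
    """Q[k, i] = 1 when dictionary atom k and training sample i share a class.

    atom_labels : (K,) class index per atom; sample_labels : (N,) class index per sample.
    """
    atom_labels = np.asarray(atom_labels)
    sample_labels = np.asarray(sample_labels)
    return (atom_labels[:, None] == sample_labels[None, :]).astype(float)  # (K, N)
```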
More recently, based on LC-KSVD, Zhang [30] tried to solve Equation (7) in an online SSL fashion by using a block-coordinate gradient descent algorithm to update the dictionary; the resulting method, named OnlineSSDL, is formulated as
$\arg\min_{\mathbf{D},\mathbf{W},\mathbf{G},\mathbf{Z}} \|\mathbf{X}^u - \mathbf{D}\mathbf{Z}^u\|_F^2 + \alpha\|\mathbf{X}^l - \mathbf{D}\mathbf{Z}^l\|_F^2 + \beta\|\mathbf{Y} - \mathbf{W}\mathbf{Z}^l\|_F^2 + \gamma\|\mathbf{Q} - \mathbf{G}\mathbf{Z}^l\|_F^2 + \lambda\|\mathbf{z}_i\|_1$. (8)
Mairal [28] formulated semi-supervised dictionary learning as an extension of the task-driven dictionary learning problem, which takes the form
$\arg\min_{\mathbf{D},\mathbf{W}} (1-\mu)\,\mathbb{E}_{\mathbf{x}}\left[\Gamma_u\left(f(\mathbf{z}^*(\mathbf{x}^u, \mathbf{D}))\right)\right] + \mu\,\mathbb{E}_{\mathbf{y},\mathbf{x}}\left[\Gamma_s\left(\mathbf{y}, f(\mathbf{z}^*(\mathbf{x}^l, \mathbf{D}), \mathbf{W})\right)\right] + \frac{\lambda_1}{2}\|\mathbf{W}\|_F^2$, (9)
where the loss functions $\Gamma_s$ and $\Gamma_u$ correspond to the supervised and unsupervised learning fashions, respectively, and $\mu \in (0,1)$ is a new parameter controlling the tradeoff between them.
However, Equation (9) adopts the logistic loss with a one-versus-all strategy and addresses classification as a regression problem, resulting in scalability issues and a large memory burden.

3. Proposed Method

In the proposed method, we first define a semi-supervised joint dictionary learning problem. Then, the optimization phase includes initialization, sparse coding, dictionary updating, and classifier updating. Finally, the class labels of unknown data are predicted by using the learnt classifier and the sparse coefficients. Figure 1 graphically illustrates the main idea.

3.1. Model Assumption

An attractive and promising research line for jointly learning the dictionary and classifier is to incorporate SSL. Inspired by Equations (8) and (9), we now reformulate the semi-supervised joint dictionary learning problem into an improved form
$\arg\min_{\mathbf{D},\mathbf{W},\mathbf{Z}} (1-\mu)\left\{\frac{1}{N}\sum_{i=1}^{N}\Gamma_u\left[f(\mathbf{z}^*(\mathbf{x}_i^u, \mathbf{D}))\right] + \lambda\psi(\mathbf{z}_i^u)\right\} + \mu\left\{\frac{1}{2}\|\mathbf{X}^l - \mathbf{D}\mathbf{Z}^l\|_F^2 + \frac{1}{N}\sum_{i=1}^{N}\Gamma_s\left[\mathbf{y}_i, f(\mathbf{z}^*(\mathbf{x}_i^l, \mathbf{D}), \mathbf{W})\right] + \frac{\lambda_1}{2}\|\mathbf{W}\|_F^2 + \lambda\psi(\mathbf{z}_i^l)\right\}$, (10)
where ψ is a sparsity-inducing function.
We adopt an $\ell_1$-norm for $\psi$ and a soft-max loss to design the supervised objective loss function, which takes the form
$\Gamma_s\left[\mathbf{y}_i, f(\mathbf{z}^*(\mathbf{x}_i^l, \mathbf{D}), \mathbf{W})\right] \triangleq -\sum_{j=1}^{m} \mathbf{1}\{Y(\mathbf{x}_i^l)=j\}\log\frac{\exp(\mathbf{w}_j^T\mathbf{z}_i^l)}{\sum_{p=1}^{m}\exp(\mathbf{w}_p^T\mathbf{z}_i^l)}$, (11)
where $\mathbf{1}\{\cdot\}$ is the indicator function, such that $\mathbf{1}\{\text{a true statement}\} = 1$ and $\mathbf{1}\{\text{a false statement}\} = 0$, and $j \in \{1, 2, \ldots, m\}$ indexes the classes.
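A small NumPy sketch of the soft-max loss in Equation (11) for a single labeled sparse code; the max-shift is added only for numerical stability and the variable names are illustrative.

```python
import numpy as np

def softmax_loss(W, z, label):
    """Soft-max loss of Eq. (11) for one sparse code z (K,) with true class `label`."""
    scores = W @ z                        # w_j^T z for every class j
    scores = scores - scores.max()        # stability shift; the loss is unchanged
    log_prob = scores - np.log(np.sum(np.exp(scores)))
    return -log_prob[label]               # negative log-probability of the true class
```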
Finally, the designed semi-supervised joint dictionary learning problem can be defined as
$\arg\min_{\mathbf{D},\mathbf{W},\mathbf{Z}} (1-\mu)\left\{\frac{1}{2}\|\mathbf{X}^u - \mathbf{D}\mathbf{Z}^u\|_F^2 + \lambda\|\mathbf{z}_i^u\|_1\right\} + \mu\left\{\frac{1}{2}\|\mathbf{X}^l - \mathbf{D}\mathbf{Z}^l\|_F^2 - \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{m}\mathbf{1}\{Y(\mathbf{x}_i^l)=j\}\log\frac{\exp(\mathbf{w}_j^T\mathbf{z}_i^l)}{\sum_{p=1}^{m}\exp(\mathbf{w}_p^T\mathbf{z}_i^l)} + \frac{\lambda_1}{2}\|\mathbf{W}\|_F^2 + \lambda\|\mathbf{z}_i^l\|_1\right\}$. (12)

3.2. Optimization

3.2.1. Initialization

Let us assume that we have a small labeled dataset $\mathbf{X}^l$ spanning all classes and a large unlabeled dataset $\mathbf{X}^u$. Two variables need to be initialized, since $\mathbf{Y}$ can be seen as a prior. For $\mathbf{D}_0$, we initialize the dictionary so that its atoms are uniformly allocated to each class, with the number of atoms per class proportional to the dictionary size. Thus, we randomly select multiple class-specific sub-dictionaries of equal size from the training data. This initialization is completely supervised, and the class labels attached to the dictionary atoms remain fixed during the dictionary learning process. As for $\mathbf{W}_0$, we employ a multivariate ridge regression model [47] as
$\arg\min_{\mathbf{W}} \frac{1}{2}\|\mathbf{Y} - \mathbf{W}\mathbf{Z}\|_F^2 + \frac{\lambda_1}{2}\|\mathbf{W}\|_F^2$, (13)
which is equipped with a square loss and an $\ell_2$-norm regularizer, and yields the following solution
$\mathbf{W} = \mathbf{Y}\mathbf{Z}^T\left(\mathbf{Z}\mathbf{Z}^T + \frac{\lambda_1}{2}\mathbf{I}_K\right)^{-1}$, (14)
where $\mathbf{I}_K$ denotes the $K \times K$ identity matrix.
We employ the sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) algorithm [48] to obtain the sparse codes $\mathbf{Z}$ for the input data $\mathbf{X}$ with respect to the initialized dictionary $\mathbf{D}_0$. Then, the initial $\mathbf{W}_0$ can be computed by using Equation (14). Our previous studies have validated the good performance of SUnSAL for hyperspectral image processing [11,14,31,45,46].
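The initialization stage can be sketched as follows, assuming class-balanced random atom selection and the ridge solution of Equation (14); the SUnSAL coding step is replaced here by the ISTA routine from Section 2.1, and all names are illustrative.

```python
import numpy as np

def init_dictionary(X_l, labels, atoms_per_class, seed=0):
    """Draw an equal number of labeled samples per class as the initial atoms D0."""
    rng = np.random.default_rng(seed)
    atoms, atom_labels = [], []
    for c in np.unique(labels):
        idx = rng.choice(np.nonzero(labels == c)[0], atoms_per_class, replace=False)
        atoms.append(X_l[:, idx])
        atom_labels.extend([c] * atoms_per_class)
    D0 = np.hstack(atoms)
    D0 = D0 / np.linalg.norm(D0, axis=0)      # unit-norm atoms
    return D0, np.array(atom_labels)

def init_classifier(Y, Z, lam1=1e-5):
    """Ridge initialization of W0, Eq. (14): W = Y Z^T (Z Z^T + (lam1/2) I_K)^{-1}."""
    K = Z.shape[0]
    return Y @ Z.T @ np.linalg.inv(Z @ Z.T + 0.5 * lam1 * np.eye(K))
```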

3.2.2. Variables Updating

We resort to the SGD algorithm to optimize Equation (12), since the objective loss function in our problem is highly nonlinear. To achieve semi-supervised optimization, we first regard the optimization process as two independent ingredients (i.e., unsupervised learning and supervised learning) and calculate their gradients separately. We then combine the gradients with a weighted summation strategy to obtain the final update.
At iteration $t$, we first select the $t$-th labeled sample $\mathbf{x}_t^l$ from $\mathbf{X}^l$. Assume that the current dictionary $\mathbf{D}_t$ and the label vector $\mathbf{y}_t$ are given. We then randomly select an unlabeled sample $\mathbf{x}_t^u$ from $\mathbf{X}^u$. Next, we calculate the sparse codes $(\mathbf{z}_t^u, \mathbf{z}_t^l)$ for the training pair $(\mathbf{x}_t^u, \mathbf{x}_t^l)$ with respect to the current dictionary by adopting the SUnSAL algorithm. Finally, $\mathbf{D}_t$ and $\mathbf{W}_t$ are updated by feeding the training pair into the semi-supervised objective loss function. To this end, the gradients must first be formulated, which poses the critical challenge in optimizing the proposed dictionary learning problem.
For an unlabeled sample $\mathbf{x}_t^u$, let $\mathcal{L}_u\left[f(\mathbf{z}^*(\mathbf{x}_t^u, \mathbf{D}_t))\right] \triangleq \frac{1}{2}\|\mathbf{x}_t^u - \mathbf{D}_t\mathbf{z}_t^u\|_2^2 + \lambda\|\mathbf{z}_t^u\|_1$; we then compute the gradient of $\mathcal{L}_u$ for $\mathbf{x}_t^u$ with respect to $\mathbf{D}_t$ as
$\nabla_{\mathbf{D}_t}\mathcal{L}_u\left[f(\mathbf{z}^*(\mathbf{x}_t^u, \mathbf{D}_t))\right] = (1-\mu)(\mathbf{D}_t\mathbf{z}_t^u - \mathbf{x}_t^u)\mathbf{z}_t^{uT}$. (15)
For a labeled sample $\mathbf{x}_t^l$, we solve the following problem
$\arg\min_{\mathbf{D}_t,\mathbf{W}_t,\mathbf{z}_t^l} \frac{1}{2}\|\mathbf{x}_t^l - \mathbf{D}_t\mathbf{z}_t^l\|_2^2 + \lambda\|\mathbf{z}_t^l\|_1 - \sum_{j=1}^{m}\mathbf{1}\{Y(\mathbf{x}_t^l)=j\}\log\frac{\exp(\mathbf{w}_j^T\mathbf{z}_t^l)}{\sum_{p=1}^{m}\exp(\mathbf{w}_p^T\mathbf{z}_t^l)} + \frac{\lambda_1}{2}\|\mathbf{W}_t\|_F^2$. (16)
The skeleton of the optimization process of the presented semi-supervised joint dictionary learning (S²JDL) method is summarized in Algorithm 1. More details on the dictionary and classifier updates can be found in Appendix A.
Algorithm 1 Semi-Supervised Joint Dictionary Learning (S²JDL).
1: Input: $\mathbf{X}^l$, $\mathbf{X}^u$, $K$, $T$, $\lambda$, $\lambda_1$, $\lambda_2$, $\mu$, $\rho$.
2: Output: $\mathbf{D}$ and $\mathbf{W}$.
3: Initialization: Initialize $\mathbf{D}_0$ with $K$ atoms from $\mathbf{X}^l$ and obtain $\mathbf{W}_0$ by Equation (14).
4: for each $\mathbf{x}_t^l$ in $\mathbf{X}^l$ do
5:   Randomly select an unlabeled sample $\mathbf{x}_t^u$ from $\mathbf{X}^u$.
6:   Obtain $(\mathbf{z}_t^l, \mathbf{z}_t^u)$ for $(\mathbf{x}_t^l, \mathbf{x}_t^u)$ with the current dictionary $\mathbf{D}_t$ using SUnSAL.
7:   Find the support set $\Lambda_t$ of $\mathbf{z}_t^l$.
8:   Calculate the learning rate: $\rho_t \leftarrow \min(\rho, \rho t_0/t)$.
9:   Update $\mathbf{D}$ and $\mathbf{W}$ by Equations (A4) and (A5), respectively.
10:  Remove the selected unlabeled sample: $\mathbf{X}^u \leftarrow \mathbf{X}^u \setminus \mathbf{x}_t^u$.
11: end for
12: return $\mathbf{D}_{t+1}$ and $\mathbf{W}_{t+1}$.
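To make the flow of Algorithm 1 concrete, the sketch below mirrors its steps in NumPy, reusing `ista_sparse_code` (in place of SUnSAL) and following Equations (15), (A1), (A4), and (A5). It is an illustrative re-implementation under our own assumptions (e.g., atoms are renormalized after each update), not the authors' released code.

```python
import numpy as np

def _softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def s2jdl_train(X_l, y_l, X_u, D, W, lam=1e-5, lam1=1e-5, lam2=1e-5,
                mu=0.5, rho=1e-3, seed=0):
    """One S2JDL pass over the labeled set (Algorithm 1).

    X_l : (n, T) labeled pixels, y_l : (T,) class indices, X_u : (n, Nu) unlabeled
    pixels, D : (n, K) dictionary, W : (m, K) classifier.
    """
    rng = np.random.default_rng(seed)
    pool = list(range(X_u.shape[1]))
    T = X_l.shape[1]
    t0 = T / 10.0
    m = W.shape[0]
    for t in range(T):
        x_l = X_l[:, t]
        x_u = X_u[:, pool.pop(int(rng.integers(len(pool))))]     # steps 4-5
        z_l = ista_sparse_code(x_l[:, None], D, lam).ravel()     # step 6
        z_u = ista_sparse_code(x_u[:, None], D, lam).ravel()
        # Supervised part: chain rule on the support of z_l (Eq. (A1)).
        p = _softmax(W @ z_l)
        grad_z = W.T @ (p - np.eye(m)[y_l[t]])
        gamma = np.zeros_like(z_l)
        idx = np.nonzero(z_l)[0]                                 # step 7: support set
        if idx.size:
            DL = D[:, idx]
            gamma[idx] = np.linalg.solve(DL.T @ DL + lam2 * np.eye(idx.size),
                                         grad_z[idx])
        rho_t = min(rho, rho * t0 / (t + 1))                     # step 8: learning rate
        # Step 9a: dictionary update, Eq. (A4).
        grad_D = (1 - mu) * np.outer(D @ z_u - x_u, z_u) \
                 + mu * (-np.outer(D @ gamma, z_l) + np.outer(x_l - D @ z_l, gamma))
        D = D - rho_t * grad_D
        D = D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)     # keep atoms unit-norm
        # Step 9b: classifier update, Eqs. (A2)-(A3), (A5).
        grad_W = np.outer(p - np.eye(m)[y_l[t]], z_l) + lam1 * W
        W = W - rho_t * grad_W
    return D, W
```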

3.3. Classification

Once we have obtained the learnt dictionary $\hat{\mathbf{D}}$ and the classifier parameter $\hat{\mathbf{W}}$ from Algorithm 1, we can predict the class of a new incoming test sample $\mathbf{x}_{test}$. To this end, we first compute its sparse code $\mathbf{z}_{test}$ using SUnSAL and then assign its label according to the position of the largest (i.e., most probable) value in the score vector:
$Y(\mathbf{x}_{test}) = H(\mathbf{x}_{test}, \hat{\mathbf{W}}) \triangleq \arg\max_j \left(\hat{\mathbf{W}}\mathbf{z}_{test}\right)_j$. (17)
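A minimal prediction sketch following Equation (17): encode the test pixel and take the class with the largest classifier score, again with the ISTA routine standing in for SUnSAL.

```python
import numpy as np

def predict(x_test, D_hat, W_hat, lam=1e-5):
    """Predict the class of a single test pixel from its sparse code (Eq. (17))."""
    z_test = ista_sparse_code(x_test[:, None], D_hat, lam).ravel()
    scores = W_hat @ z_test              # one score per class
    return int(np.argmax(scores))        # index of the largest (most probable) score
```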
Since the proposed method produces class scores that can be interpreted probabilistically (cf. Equation (17)), we adopt graph cuts [49] to obtain smoother classification maps. Graph cuts are an energy minimization algorithm that can tackle combinatorial optimization problems involving unary and pairwise interaction terms, i.e., the maximum a posteriori (MAP) segmentation problem using multinomial logistic regression with a multilevel logistic prior. Graph cuts yield very good approximations to the MAP segmentation and are quite efficient from a computational point of view [50]. The overall accuracy (OA) and the Kappa value ($\kappa$) generated from the confusion matrix are used to evaluate the classification performance against the ground-truth [51]. In addition, the Kappa [52] and McNemar z-score statistical tests [53] are also adopted for accuracy assessment.

4. Experimental Results and Discussion

4.1. Experimental Settings

For performance comparison, several closely related dictionary learning algorithms are considered. The unsupervised methods are MOD, K-SVD, D-KSVD, and OnlineDL; the supervised category includes LC-KSVD and SDL. We also implemented a semi-supervised variant of the proposed method, semi-supervised joint dictionary learning with a logistic loss function (S²JDL-Log, where "Log" stands for the logistic loss). We then denote the proposed method by S²JDL-Sof ("Sof" for the soft-max loss function) to differentiate the two. It is worth noting that all the methods employ a multivariate ridge regression model for classifier learning (see Equation (14)) and adopt orthogonal matching pursuit (OMP) [54] for sparse coding.
It is worth noting that MOD, K-SVD, D-KSVD, LC-KSVD, OnlineDL, and SDL are implemented based on the original source codes released by their authors, and they share the common parameters of sparsity level ($T = 5$), reconstruction error ($E = 1 \times 10^{-4}$), and $\ell_1$-norm penalty ($\lambda = 1 \times 10^{-5}$). In addition, three other parameters in SDL have been set to their recommended values. We also note that these parameters have been selected by careful cross-validation. In this context, the parameter settings always ensure a fair comparison even if they are suboptimal. For the dictionary and classifier updates, the initial learning rate is set to $\rho = 1 \times 10^{-3}$, and the tradeoff between the unsupervised and supervised learning ingredients is set to 0.5.
We randomly select labeled training samples from the ground-truth to initialize the dictionary and classifier. The $\ell_2$-norm penalty parameter is set to $\lambda_1 = 1 \times 10^{-5}$ when initializing the classifier using Equation (14).
For classification, we set the maximum number of iterations to $k_{max} = 10$, which means that the reported overall accuracies (OAs), average accuracies (AAs), kappa statistics ($\kappa$), and class-individual accuracies are derived by averaging the results after conducting ten independent Monte Carlo runs with respect to the initialized labeled training set. In addition, the smoothness parameter ($\tau$) in graph cuts is set to 3, and we adopt graph cuts for all the methods.
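For completeness, the accuracy measures used throughout the experiments (OA, AA, and κ) can be computed from a confusion matrix as sketched below; the function name is ours.

```python
import numpy as np

def classification_metrics(conf):
    """Overall accuracy, average accuracy, and kappa from a confusion matrix.

    conf[i, j] = number of samples of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                                   # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))                # average accuracy
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / total**2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```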
Finally, it is worth noting that all the implementations were carried out using Matlab R2015b on a desktop PC equipped with an Intel Xeon E3 CPU (at 3.4 GHz) and 32 GB of RAM. It should be emphasized that all the results are derived from our own implementations and the involved parameters for these methods have been carefully optimized in order to ensure a fair comparison.

4.2. Hyperspectral Datasets

Three real hyperspectral datasets [55] collected by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) and the Reflective Optics Spectrographic Imaging System (ROSIS) are used in our experiments. The first hyperspectral image was acquired by the AVIRIS sensor over the Indian Pines region in northwestern Indiana in 1992. The image size in pixels is 145 × 145, with a moderate spatial resolution of 20 m. The number of data channels in the acquired image is 220 (with a spectral range from 0.4 to 2.5 μm). A total of 200 radiance channels are used in the experiments after removing several noisy and water-absorbed bands. A three-band false color composite image and the ground-truth map are shown in Figure 2. A total of 10,366 samples containing 16 classes are available.
The second hyperspectral image was acquired by the ROSIS sensor over the urban area of the University of Pavia, Italy. The image size in pixels is 610 × 340, with a very high spatial resolution of 1.3 m. The number of data channels in the acquired image is 103 (with a spectral range from 0.43 to 0.86 μm). A three-band false color composite image and the ground-truth map are shown in Figure 3. A total of 42,776 samples containing nine classes are available.
The third hyperspectral image was acquired by the AVIRIS sensor over Salinas Valley in southern California, USA. The image size in pixels is 512 × 217, with a spatial resolution of 3.7 m. The number of data channels in the acquired image is 224 (with a spectral range from 0.4 to 2.5 μm). A total of 204 radiance channels are used in the experiments after removing the noisy and water-absorbed bands. A three-band false color composite image and the ground-truth map are shown in Figure 4. A total of 54,129 samples containing 16 classes are available.

4.3. Experiments with AVIRIS Indian Pines Dataset

Experiment 1: We first analyze the sensitivity of the proposed method to its parameters. Figure 5 illustrates the sensitivity of the proposed method to the parameters $\lambda$, $\lambda_1$, and $\mu$ under the condition that 10% of the labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary. As we can see from Figure 5a, the OA is insensitive to $\lambda$ and $\lambda_1$. Therefore, we roughly set $\lambda = 1 \times 10^{-5}$ and $\lambda_1 = 1 \times 10^{-5}$. This observation is reasonable since $\lambda$ controls the uniqueness of the sparse coding and $\lambda_1$ determines the initial performance of the classifier parameter. The impacts of the two parameters are reduced in an iterative learning scheme, since sparse coding and classifier updating are conducted alternately. According to Figure 5b, we found that the OA is sensitive to $\mu$, and the optimal value is set to $\mu = 3$. This observation is in accordance with that made in [50].
Experiment 2: In this test, Gaussian random noise with a pre-defined signal-to-noise ratio (SNR), defined as $\mathrm{SNR} \triangleq 10\log_{10}\left(E[\|\mathbf{x}\|_2^2]/E[\|\mathbf{n}\|_2^2]\right)$, is generated and added to the original imagery. Figure 6 illustrates the evolution of the OA with the SNR for the different classifiers. As shown in the figure, the proposed method outperforms the others in most cases (SNR = 5, 10, etc.). The gap between the curve of the proposed method and those of the others visually indicates the significance of the improvement, which confirms the robustness of the proposed method to data noise.
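A sketch of how zero-mean Gaussian noise at a prescribed SNR (as defined above) can be added to the pixel matrix; the scaling follows directly from the SNR definition, and the function name is ours.

```python
import numpy as np

def add_noise_at_snr(X, snr_db, seed=0):
    """Add white Gaussian noise to X (bands x pixels) so that
    10*log10(E[||x||^2] / E[||n||^2]) equals snr_db."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(np.sum(X**2, axis=0))         # E[||x||_2^2] per pixel
    noise_power = signal_power / 10.0**(snr_db / 10.0)   # target E[||n||_2^2]
    sigma = np.sqrt(noise_power / X.shape[0])            # per-band standard deviation
    return X + sigma * rng.standard_normal(X.shape)
```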
Experiment 3: We then analyze the impact of training data size on classification accuracy. To this end, we randomly choose 10% of labeled samples per class (a total of 1027 samples) to initialize the training data and evaluate the impact of the number of atoms on the classification performance. Figure 7 shows the OAs as a function of the number of atoms per class obtained by different methods. As we can see from the figure, the proposed method obtains the highest accuracies in all cases. Another observation is that, as the number of atoms increases, the OAs also increase. When the number of atoms per class is equal to 15, the proposed method reaches a stable level, with an OA higher than 80%. It is interesting to note that D-KSVD and LC-KSVD obtain very similar results.
Figure 8 shows the OAs as a function of the ratio of labeled samples per class for the different methods. As we can see from the figure, the proposed method obtains higher accuracies when the ratio is larger than 5%. It is worth noting that the proposed method stably obtains improved classification performance with additional labeled samples, whereas the other methods cannot produce higher OAs as the ratio ranges from 3% to 10%. This can be explained as follows: as the number of labeled samples increases, the proposed method can exploit more information from both the labeled and unlabeled training samples, whereas the supervised dictionary learning methods rely only on the labeled information, so their performance cannot be improved significantly. Another observation is that S²JDL-Log yields a very competitive classification performance, which is due to the fact that S²JDL-Log also follows the semi-supervised learning fashion.
Experiment 4: Table 1 reports the OA, AA, individual classification accuracies, and $\kappa$ statistic. The results of the proposed algorithm are listed in the last column of the table, and we have marked in bold typeface the best result for each class and the best OA, AA, and $\kappa$ statistic. Our method achieves the best results compared to the other supervised dictionary learning methods, with improvements in classification accuracy of around 10%–40%. In particular, our method performs very well when classifying the class Wheat. Although our method may not always obtain the best accuracy for some specific classes, the OA, AA, and kappa are more convincing metrics of overall classification performance. In addition, the time costs of the different methods are listed in the table, where we can see that the proposed method is more efficient than K-SVD, D-KSVD, and LC-KSVD, whereas MOD, OnlineDL, and SDL take less time. The time cost of the proposed method mainly comes from the optimization step, which could be reduced by exploiting a more efficient optimization strategy.
Table 2 reports the between-classifier statistical tests in terms of the Kappa z-score and the McNemar z-score. The critical value of the z-score is 1.96 at a confidence level of 0.95, and all the tests are significant with 95% confidence, which indicates that the proposed method significantly outperforms the other methods. Another observation is that a lower z-score indicates closer classification results, e.g., the Kappa z-score for S²JDL-Log/S²JDL-Sof is 4.4. Similar observations can be made for the tests using the McNemar z-score.
For illustrative purposes, Figure 9 shows the obtained classification maps (corresponding to the best one after ten Monte Carlo runs). The advantages obtained by adopting the semi-supervised dictionary learning approach with regard to the corresponding supervised and unsupervised cases can be visually appreciated in the classification maps displayed in Figure 9, where the classification OAs obtained for each method are reported in parentheses. Compared to the ground-truth map, the proposed method obtains a more accurate and smoother classification map. Significant differences can be observed when classifying the class Wheat in this scene, which is in accordance with the former results. However, the classification maps obtained by D-KSVD and LC-KSVD are much less homogeneous than those of the other methods. This can be explained by the fact that graph cuts are adopted as a post-processing strategy, which relies largely on the initial classification probabilities obtained by the classifiers: if the initial classification results are poor, the improvement may not be satisfactory. That is the case for D-KSVD and LC-KSVD, with an initial OA of 60.07% for the former and 60.52% for the latter.
Experiment 5: In the last experiment, we analyze the mechanism of the proposed method. First, we plot the stem distributions of the sparse coefficients obtained by the different methods. As we can see from Figure 10, the distributions differ significantly between classifiers. Precisely, the atoms belonging to the same land cover type contribute more than the others, thus making the associated coefficients concentrate sparsely on those atoms. For example, the atoms indexed by 146–160 in the dictionary belong to the class Wheat, and the sparse coefficients will mainly be located at these atoms if this class is well represented. Obviously, the proposed method produces more accurate sparse coefficients, since the stem distributions are mainly located at the corresponding atoms (see Figure 10g). As for the other methods, the associated sparse atoms cannot be accurately found; for instance, the stem distributions obtained by OnlineDL are partially located at the class Woods. Figure 11 spatially exhibits the sparse coefficients relative to the class Wheat for the different methods. As shown in Figure 11, the proposed method yields more accurate sparse coefficients relative to the class Wheat. Therefore, the aggregation characteristics of the sparse coefficients naturally enlarge the discrimination between different land cover types, and the spatial variations of the sparse coefficients explain the accuracy of the proposed method for sparse representation. The above observations demonstrate the good performance of the proposed method in dictionary learning, and the discrimination capability of our method has been validated in the former experiments.
Second, we analyze the denoising power of the proposed method by plotting the original spectrum, the reconstructed spectrum, and the noise for the class Wheat. From the results shown in Figure 12, we can see that the original spectrum can be accurately reconstructed with a very small root-mean-square error (RMSE); the RMSE between two observations $\mathbf{x}_i$ and $\mathbf{x}_j$ is defined as $\mathrm{RMSE} = \sqrt{\sum_{b=1}^{B}(x_{i,b} - x_{j,b})^2 / B}$, where $B$ is the number of bands, and here it measures the difference between the original spectrum ($\mathbf{x}$) and the reconstructed spectrum ($\mathbf{D}\mathbf{z}$). It is worth noting that the proposed method obtains a very small RMSE value. In this context, the proposed method can accurately reconstruct the original spectrum with high fidelity by removing noise, which explains the robustness to noise of the proposed method observed in the former experiment. We then evaluate the global reconstruction performance of the proposed method by considering all classes. As reported in Table 3, the proposed method obtains the smallest RMSE value. This experiment also hints at the good performance of the proposed method in dictionary learning.
Finally, we analyze the dictionary structure by visually illustrating the matrix $\mathbf{D}$ learnt by the different methods. As shown in Figure 13, S²JDL-Sof and S²JDL-Log yield similar dictionary structures, in which the atoms belonging to the same class are more similar to each other, while the atoms belonging to different classes are more distinct from each other. Similar observations can be made for D-KSVD, LC-KSVD, and SDL. However, the dictionaries learnt by OnlineDL exhibit very little such structure, as shown in the figure. Note that we cannot currently explain the factors inducing these differences in dictionary structure.

4.4. Experiments with ROSIS University of Pavia Dataset

Experiment 1: We first analyze the impact of the training data size on the classification accuracy. We randomly choose 5% of the labeled samples per class (a total of 2138 samples) to initialize the training data and evaluate the impact of the number of atoms on the classification performance achieved by the proposed method for the ROSIS University of Pavia dataset. Figure 14 shows the OAs as a function of the number of atoms per class obtained by the different methods. Again, the proposed method obtains the highest accuracies in all cases. Another observation is that, for most of the methods, the OAs increase as the number of atoms increases. Different from the former experiments, when the number of atoms per class is larger than 10, the OAs obtained by D-KSVD and LC-KSVD become lower. In this scene, MOD does not perform very well in any case.
Figure 15 depicts the OAs as a function of the ratio of labeled samples per class for the different methods. As we can see from the figure, the proposed method obtains the highest accuracies in all cases. It is interesting to note that the proposed method cannot stably obtain improved classification performance with additional labeled samples. This may be due to the fact that the homogeneity in this scene is not as significant, so graph cuts reduce the effect of the learning phase and the classification accuracy cannot be significantly improved by using additional labeled samples. Similar observations can be made for the other methods. In this scene, D-KSVD does not perform very well in the different cases. Again, S²JDL-Log provides competitive performance.
Experiment 2: Table 4 lists the OA, AA, individual classification accuracies, and $\kappa$ statistic. As reported in the table, the proposed method achieves the best results compared to the other supervised dictionary learning methods, with improvements in classification accuracy of around 3%–30%. When classifying the class Bare soil, our method obtains the highest accuracy, 23.10%. Although this accuracy is not very high, it demonstrates the merit of the proposed method, since Bare soil is very difficult to classify accurately. In addition, the time costs of the different methods are listed in the table, where we can see that the proposed method is more efficient than K-SVD, D-KSVD, and LC-KSVD. Again, MOD, OnlineDL, and SDL take less time.
Table 5 also reports the between-classifier statistical tests in terms of the Kappa z-score and the McNemar z-score. Again, the results indicate that the proposed method significantly outperforms the other methods. In this scene, we observe that OnlineDL is the closest to our method, i.e., the Kappa z-score for OnlineDL/S²JDL is 14.2, which is in accordance with the results reported in Table 4.
Figure 16 visually depicts the obtained classification maps. The advantages obtained by adopting the semi-supervised dictionary learning approach with regard to the corresponding supervised case can be visually appreciated in the classification maps displayed in Figure 16, which also reports the classification OAs obtained for each method in the parentheses. As shown in the figure, the homogeneity is very clear for this scene, and the proposed method depicts a more accurate and smoother classification map. As expected, D-KSVD and LC-KSVD obtain poor classification maps for this scene.

4.5. Experiments with AVIRIS Salinas Dataset

Experiment 1: Similarly, we first analyze the impact of the training data size on the classification accuracy. A total of 5% of the labeled samples per class (2706 samples in total) are randomly selected to initialize the training data. We evaluate the impact of the number of atoms on the classification performance achieved by the proposed method for the AVIRIS Salinas dataset. Figure 17 shows the OAs as a function of the number of atoms per class obtained by the different methods. Similar to the experiments with the AVIRIS Indian Pines dataset, the OAs become stable when 15 atoms per class are used to build the dictionary. Another observation is that, when the number of atoms per class is larger than 10, our method stably outperforms the other methods. It is interesting to note that D-KSVD and LC-KSVD obtain similar results even though the latter incorporates the class-label information. For most of the cases, the OAs increase as the number of atoms increases.
Figure 18 plots the OAs as a function of the ratio of labeled samples per class for different methods. As we can see from the figure, the proposed method obtains the highest accuracies in all cases. Different from the former experiments, the proposed method can stably obtain improved classification performance with the additional labeled samples in this scene. However, the additional labeled samples deteriorate the classification performance for SDL.
Experiment 2: Table 6 gives the OA, AA, individual classification accuracies, and κ statistic. As reported in the table, the proposed method achieves the best results compared to the other supervised dictionary learning methods. The improvements of classification accuracy are around 10%–20% when compared to the other methods. As for the specific classification accuracy, the proposed method obtains higher accuracy when classifying the class Lettuce_romaine_6wk. In addition, the time costs of different methods are listed in the table, where we can see that the proposed method is more efficient than D-KSVD and LC-KSVD. Again, MOD, OnlineDL, and SDL take less time.
Similarly, we conduct the between-classifier statistical tests in terms of the Kappa z-score and the McNemar z-score for this scene. According to the results reported in Table 7, we observe that the proposed method significantly outperforms the other methods, since all the tests are significant with 95% confidence. Another observation is that K-SVD is the method closest to the proposed one, with a Kappa z-score of 22.9.
The classification maps are given in Figure 19, where the OAs obtained for each method are reported in the parentheses. As shown in the figure, we can see clear differences between different methods. For example, when classifying the class Lettuce_romaine_6wk, the proposed method is more accurate compared to the other methods. In addition, the homogeneity is very clear for this scene, which is similar to the AVIRIS Indian Pines dataset. Also, our method produces a more accurate and smoother classification map compared to the other methods.

5. Conclusions

In this paper, we have developed a novel semi-supervised algorithm for jointly learning a reconstructive and discriminative dictionary for hyperspectral image classification. Specifically, we design a unified semi-supervised objective loss function which integrates a reconstruction term with a reconstruction–discrimination term built with a soft-max loss to leverage the unsupervised and supervised information from the training samples. In the iterative semi-supervised learning phase, we simultaneously update the dictionary and classifier by feeding the obtained training pairs into the unified objective function via an SGD algorithm. The experimental results obtained using three real hyperspectral images indicate that the proposed algorithm leads to better classification performance than the other related methods. We should note that, although dictionary learning is an important part of DSR, previous studies on the DSR problem have mainly focused on the classification performance. Our experiments also mainly focus on classification, since it is the ultimate goal of this work. On the basis of the comprehensive experiments, we draw the following conclusions:
(i)
The proposed method is insensitive to λ and λ 1 , but it is sensitive to μ .
(ii)
The proposed method outperforms other related algorithms in terms of classification accuracy, which demonstrates the superiority of soft-max loss.
(iii)
Although the proposed method exhibits slightly higher computational complexity compared with MOD, OnlineDL, and SDL, its computational time is bearable.
Further experiments with additional scenes and comparison methods should be conducted in the future. Furthermore, we also envisage two future perspectives for the development of the presented work:
(i)
Given the fact that the dictionary and classifier are updated by randomly selected unlabeled samples, our future work will consider exploiting active learning [43] to select the most informative samples during the learning phase in the DSR model.
(ii)
Since the computational complexity of our algorithm is somewhat high, in our future work we will revisit the objective function to speed up the optimization process by incorporating incremental learning [56].
(iii)
Inspired by the experimental results, our future work will explore the theoretical reasons for when and why more labeled samples help in semi-supervised joint dictionary learning tasks, which is a crucial and interesting problem. Possible factors include the training data distribution, the variance across the training and test data domains, the feature dimensionality, the sample size, and the number of labeled samples per class [57].
(iv)
Since the spatial property is important in the proposed method, our future work will also focus on exploiting the dictionary structure in the DSR problem [37], and we will try to reveal the relation between dictionary structure and classification accuracy in DSR.

Acknowledgments

This work was mainly supported by the National Natural Science Foundation of China (NSFC) (41601347), the Natural Science Foundation of Jiangsu Province (BK20160860), the Fundamental Research Funds for the Central Universities (2015B29214), and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), and was also partially supported by NSFC funds (41571325, 41271420). The authors would like to thank D. Landgrebe for making the Airborne Visible/Infrared Imaging Spectrometer Indian Pines hyperspectral dataset available to the community and P. Gamba for providing the Reflective Optics Spectrographic Imaging System data over Pavia, Italy, along with the training and test datasets. The authors would also like to thank the Associate Editor who handled this paper and the anonymous reviewers for providing truly outstanding comments and suggestions that significantly helped improve the technical quality and presentation of this paper.

Author Contributions

Zhaohui Xue conceived and designed the methodology, Peijun Du performed the experiments, Hongjun Su analyzed the results, Shaoguang Zhou drew the conclusions, and all authors jointly wrote the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Let $\mathcal{L}_s[\mathbf{y}_t, f(\mathbf{z}^*(\mathbf{x}_t^l, \mathbf{D}_t), \mathbf{W}_t)]$ denote the unified objective loss function in Equation (16). At this point, we have to compute the gradients of $\mathcal{L}_s$ for $\mathbf{x}_t^l$ with respect to $\mathbf{D}_t$ and $\mathbf{W}_t$. Because $\mathbf{D}_t$ is implicit in $\mathbf{z}^*(\mathbf{x}_t^l, \mathbf{D}_t)$ and there is no explicit analytical link between $\mathbf{z}_t^l$ and $\mathbf{D}_t$ [28], we have to compute $\nabla_{\mathbf{D}_t}\mathcal{L}_s$ using the chain rule through $\mathbf{z}^*(\mathbf{x}_t^l, \mathbf{D}_t)$ (hereinafter, we write $\mathcal{L}_s$ instead of $\mathcal{L}_s[\mathbf{y}_t, f(\mathbf{z}^*(\mathbf{x}_t^l, \mathbf{D}_t), \mathbf{W}_t)]$ for simplicity in the differential formulations), which leads to the main difficulty in solving the objective loss function in Equation (12). Following Proposition 1 of [28], we can obtain
$\nabla_{\mathbf{D}_t}\mathcal{L}_s = \mu\left(-\mathbf{D}_t\boldsymbol{\gamma}\mathbf{z}_t^{lT} + (\mathbf{x}_t^l - \mathbf{D}_t\mathbf{z}_t^l)\boldsymbol{\gamma}^T\right), \quad \boldsymbol{\gamma}_\Lambda = \left(\mathbf{D}_{t,\Lambda}^T\mathbf{D}_{t,\Lambda} + \lambda_2\mathbf{I}_{K,\Lambda}\right)^{-1}\nabla_{\mathbf{z}_{t,\Lambda}^l}\mathcal{L}_s^W, \quad \boldsymbol{\gamma}_{\bar{\Lambda}} = \mathbf{0}, \quad \nabla_{\mathbf{z}_{t,\Lambda}^l}\mathcal{L}_s^W = \sum_{j=1}^{m}\left[\frac{\sum_{p=1}^{m}\mathbf{w}_{p,\Lambda}\exp(\mathbf{w}_{p,\Lambda}^T\mathbf{z}_{t,\Lambda}^l)}{\sum_{p=1}^{m}\exp(\mathbf{w}_{p,\Lambda}^T\mathbf{z}_{t,\Lambda}^l)} - \mathbf{1}\{Y(\mathbf{x}_t^l)=j\}\mathbf{w}_{j,\Lambda}\right] + \lambda\mathbf{I}_{K,\Lambda}$, (A1)
where $\boldsymbol{\gamma}$ is an auxiliary vector, $\Lambda$ denotes the support set of the sparse code $\mathbf{z}_t$ (the support set contains the indices of the nonzero coefficients of a sparse vector), $\bar{\Lambda}$ refers to the zero indices, and $\mathcal{L}_s^W$ denotes $\Gamma_s[\mathbf{y}_t, f(\mathbf{z}^*(\mathbf{x}_t, \mathbf{D}_t), \mathbf{W}_t)] + \frac{\lambda_1}{2}\|\mathbf{W}_t\|_F^2$. Note that we adopt an optimization scheme similar to that of [28] for the designed objective loss function, since both adopt the SGD algorithm for optimization; however, we have derived $\nabla_{\mathbf{z}_{t,\Lambda}^l}\mathcal{L}_s^W$ ourselves because we adopt a different loss function.
On the other hand, the gradient of $\mathcal{L}_s^W$ with respect to $\mathbf{w}_j$ can be easily obtained as
$\nabla_{\mathbf{w}_j}\mathcal{L}_s^W = \mathbf{z}_t^{lT}\left[\frac{e^{\mathbf{w}_j^T\mathbf{z}_t^l}}{\sum_{p=1}^{m}e^{\mathbf{w}_p^T\mathbf{z}_t^l}} - \mathbf{1}\{Y(\mathbf{x}_t^l)=j\}\right] + \lambda_1\mathbf{w}_j$. (A2)
Then, the gradient of $\mathcal{L}_s$ with respect to $\mathbf{W}_t$ can be written as
$\nabla_{\mathbf{W}_t}\mathcal{L}_s = \left[\nabla_{\mathbf{w}_1}\mathcal{L}_s^W;\ \nabla_{\mathbf{w}_2}\mathcal{L}_s^W;\ \ldots;\ \nabla_{\mathbf{w}_m}\mathcal{L}_s^W\right]$. (A3)
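As a sanity check on Equations (A2) and (A3), the analytic gradient of $\mathcal{L}_s^W$ with respect to $\mathbf{W}$ can be compared against a finite difference; the small NumPy sketch below uses our own names and random data.

```python
import numpy as np

def loss_W(W, z, label, lam1):
    """Soft-max loss of Eq. (11) plus the ridge term (lam1/2)*||W||_F^2."""
    s = W @ z
    s = s - s.max()                                  # stability shift (loss unchanged)
    return -(s[label] - np.log(np.exp(s).sum())) + 0.5 * lam1 * np.sum(W**2)

def grad_W(W, z, label, lam1):
    """Analytic gradient, Eqs. (A2)-(A3): (softmax(Wz) - onehot) z^T + lam1*W."""
    s = W @ z
    p = np.exp(s - s.max())
    p /= p.sum()
    p[label] -= 1.0
    return np.outer(p, z) + lam1 * W

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 10))
z = rng.standard_normal(10)
G = grad_W(W, z, 2, 1e-5)
E = np.zeros_like(W); E[0, 0] = 1.0; eps = 1e-6
num = (loss_W(W + eps * E, z, 2, 1e-5) - loss_W(W, z, 2, 1e-5)) / eps
assert abs(num - G[0, 0]) < 1e-4                     # finite difference matches
```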
Since the gradients from both the unsupervised and supervised dictionary learning phases have been obtained, we can now obtain the final update of the dictionary by the following expression
$\mathbf{D}_{t+1} \leftarrow \mathbf{D}_t - \rho_t\left[(1-\mu)(\mathbf{D}_t\mathbf{z}_t^u - \mathbf{x}_t^u)\mathbf{z}_t^{uT} + \mu\left(-\mathbf{D}_t\boldsymbol{\gamma}\mathbf{z}_t^{lT} + (\mathbf{x}_t^l - \mathbf{D}_t\mathbf{z}_t^l)\boldsymbol{\gamma}^T\right)\right]$, (A4)
where ρ t refers to the learning rate.
Similar to the procedure adopted for updating the dictionary $\mathbf{D}$, the classification parameter $\mathbf{W}_t$ can be updated by
$\mathbf{W}_{t+1} \leftarrow \mathbf{W}_t - \rho_t\nabla_{\mathbf{W}_t}\mathcal{L}_s$. (A5)
So far, we have given the details of optimizing Equation (12). In addition, the learning rate for updating the dictionary is usually chosen according to a heuristic rule. Here, we follow the studies [28,29] by setting the learning rate to $\min(\rho, \rho t_0/t)$, where $\rho$ is a constant that ensures the convergence of SGD, $t_0 = T/10$, and $T$ is the total number of iterations. Before applying another iteration, we remove $\mathbf{x}^u$ from the candidate pool $\mathbf{X}^u$. The process of sampling, sparse coding, and updating is repeated as we loop through all the labeled samples $\mathbf{x}^l$ in $\mathbf{X}^l$.

References

  1. Toth, C.; Jozkow, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36. [Google Scholar] [CrossRef]
  2. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  3. Bioucas-Dias, J.; Plaza, A.; Camps-Valls, G.; Paul, S.; Nasrabadi, N.M.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  4. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in Hyperspectral Image Classification. IEEE Signal Process. Mag. 2014, 31, 45–54. [Google Scholar] [CrossRef]
  5. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced Spectral Classifiers for Hyperspectral Images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32. [Google Scholar] [CrossRef]
  6. Wright, J.; Ma, Y.; Mairal, J.; Sapiro, G.; Huang, T.S.; Yan, S.C. Sparse Representation for Computer Vision and Pattern Recognition. Proc. IEEE 2010, 98, 1031–1044. [Google Scholar] [CrossRef]
  7. Li, W.; Du, Q. A survey on representation-based classification and detection in hyperspectral remote sensing imagery. Pattern Recognit. Lett. 2016, 83, 115–123. [Google Scholar] [CrossRef]
  8. Nasrabadi, N.M. Hyperspectral Target Detection. IEEE Signal Process. Mag. 2014, 31, 34–44. [Google Scholar] [CrossRef]
  9. Ma, W.K.; Bioucas-Dias, J.M.; Chan, T.H.; Gillis, N.; Gader, P.; Plaza, A.J.; Ambikapathi, A.; Chi, C.Y. A Signal Processing Perspective on Hyperspectral Unmixing. IEEE Signal Process. Mag. 2014, 31, 67–81. [Google Scholar] [CrossRef]
  10. Loncan, L.; Almeida, L.B.D.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M.; et al. Hyperspectral Pansharpening: A Review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46. [Google Scholar] [CrossRef]
  11. Xue, Z.; Li, J.; Cheng, L.; Du, P. Spectral-Spatial Classification of Hyperspectral Data via Morphological Component Analysis-Based Image Separation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 70–84. [Google Scholar]
  12. Xu, X.; Li, J.; Huang, X.; Mura, M.D.; Plaza, A. Multiple Morphological Component Analysis Based Decomposition for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3083–3102. [Google Scholar] [CrossRef]
  13. Ly, N.H.; Du, Q.; Fowler, J.E. Sparse Graph-Based Discriminant Analysis for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3872–3884. [Google Scholar]
  14. Xue, Z.; Du, P.; Li, J.; Su, H. Simultaneous Sparse Graph Embedding for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6114–6133. [Google Scholar] [CrossRef]
  15. Rubinstein, R.; Bruckstein, A.M.; Elad, M. Dictionaries for Sparse Representation Modeling. Proc. IEEE 2010, 98, 1045–1057. [Google Scholar] [CrossRef]
  16. Engan, K.; Aase, S.O.; Husoy, J.H. Frame based signal compression using method of optimal directions (MOD). In Proceedings of the 1999 IEEE International Symposium on Circuits and Systems, Orlando, FL, USA, 30 May–2 June 1999; Volume 4, pp. 1–4. [Google Scholar]
  17. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  18. Yaghoobi, M.; Blumensath, T.; Davies, M. Dictionary learning for sparse approximation with majorization method. IEEE Trans. Signal Process. 2009, 57, 2178–2191. [Google Scholar] [CrossRef]
  19. Skretting, K.; Engan, K. Recursive Least Squares Dictionary Learning Algorithm. IEEE Trans. Signal Process. 2010, 58, 2121–2130. [Google Scholar] [CrossRef]
  20. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Discriminative learned dictionaries for local image analysis. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; Volume 1–12, pp. 2415–2422. [Google Scholar]
  21. Pham, D.S.; Venkatesh, S. Joint learning and dictionary construction for pattern recognition. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; Volume 1–12, pp. 517–524. [Google Scholar]
  22. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Supervised dictionary learning. arXiv, 2009; arXiv:0809.3083. [Google Scholar]
  23. Lian, X.C.; Li, Z.W.; Lu, B.L.; Zhang, L. Max-Margin Dictionary Learning for Multiclass Image Categorization. In Proceedings of the 2010 European Conference on Computer Vision ECCV, Pt IV, Hersonissos, Greece, 5–11 September 2010; Volume 6314, pp. 157–170. [Google Scholar]
24. Lian, X.C.; Li, Z.W.; Wang, C.H.; Lu, B.L.; Zhang, L. Probabilistic Models for Supervised Dictionary Learning. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 2305–2312. [Google Scholar]
  25. Zhang, Q.A.; Li, B.X. Discriminative K-SVD for Dictionary Learning in Face Recognition. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 2691–2698. [Google Scholar]
  26. Yang, M.; Zhang, L.; Feng, X.C.; Zhang, D. Fisher Discrimination Dictionary Learning for Sparse Representation. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 543–550. [Google Scholar]
  27. Jiang, Z.L.; Zhang, G.X.; Davis, L.S. Submodular Dictionary Learning for Sparse Coding. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 3418–3425. [Google Scholar]
  28. Mairal, J.; Bach, F.; Ponce, J. Task-Driven Dictionary Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 791–804. [Google Scholar] [CrossRef] [PubMed]
  29. Jiang, Z.L.; Lin, Z.; Davis, L.S. Label Consistent K-SVD: Learning a Discriminative Dictionary for Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2651–2664. [Google Scholar] [CrossRef] [PubMed]
  30. Zhang, G.X.; Jiang, Z.L.; Davis, L.S. Online Semi-Supervised Discriminative Dictionary Learning for Sparse Representation. In Proceedings of the 11th Asian Conference on Computer Vision (ACCV), Daejeon, Korea, 5–9 November 2012; Volume 7724, pp. 259–273. [Google Scholar]
  31. Du, P.; Xue, Z.; Li, J.; Plaza, A. Learning Discriminative Sparse Representations for Hyperspectral Image Classification. IEEE J. Sel. Top. Signal Process. 2015, 9, 1089–1104. [Google Scholar] [CrossRef]
  32. Henao, R.; Yuan, X.; Carin, L. Bayesian nonlinear support vector machines and discriminative factor modeling. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 1754–1762. [Google Scholar]
33. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online dictionary learning for sparse coding. In Proceedings of the 26th International Conference on Machine Learning (ICML), Montreal, QC, Canada, 14–18 June 2009. [Google Scholar]
  34. Jenatton, R.; Audibert, J.Y.; Bach, F. Structured Variable Selection with Sparsity-Inducing Norms. J. Mach. Learn. Res. 2011, 12, 2777–2824. [Google Scholar]
35. Jenatton, R.; Mairal, J.; Obozinski, G.; Bach, F. Proximal methods for sparse hierarchical dictionary learning. In Proceedings of the International Conference on Machine Learning (ICML), Haifa, Israel, 21–24 June 2010; pp. 487–494. [Google Scholar]
  36. Huang, J.; Zhang, T.; Metaxas, D. Learning with structured sparsity. In Proceedings of the 26th International Conference on Machine Learning (ICML), Montreal, QC, Canada, 14–18 June 2009. [Google Scholar]
37. Mairal, J.; Jenatton, R.; Obozinski, G. Learning Hierarchical and Topographic Dictionaries with Structured Sparsity. In Proceedings of SPIE Wavelets and Sparsity XIV, San Diego, CA, USA, 21 August 2011; p. 81381P. [Google Scholar]
  38. Lian, W.; Rai, P.; Salazar, E.; Carin, L. Integrating Features and Similarities: Flexible Models for Heterogeneous Multiview Data. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 2757–2763. [Google Scholar]
  39. Charles, A.S.; Olshausen, B.A.; Rozell, C.J. Learning Sparse Codes for Hyperspectral Imagery. IEEE J. Sel. Top. Signal Process. 2011, 5, 963–978. [Google Scholar] [CrossRef]
  40. Castrodad, A.; Xing, Z.M.; Greer, J.B.; Bosch, E.; Carin, L.; Sapiro, G. Learning Discriminative Sparse Representations for Modeling, Source Separation, and Mapping of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4263–4281. [Google Scholar] [CrossRef]
  41. Wang, Z.W.; Nasrabadi, N.M.; Huang, T.S. Spatial-Spectral Classification of Hyperspectral Images Using Discriminative Dictionary Designed by Learning Vector Quantization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4808–4822. [Google Scholar] [CrossRef]
  42. Wang, Z.Y.; Nasrabadi, N.M.; Huang, T.S. Semisupervised Hyperspectral Classification Using Task-Driven Dictionary Learning With Laplacian Regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1161–1173. [Google Scholar] [CrossRef]
  43. Persello, C.; Bruzzone, L. Active and Semisupervised Learning for the Classification of Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6937–6956. [Google Scholar] [CrossRef]
  44. Kushner, H.J.; Yin, G. Stochastic Approximation and Recursive Algorithms and Applications; Springer: New York, NY, USA, 2003. [Google Scholar]
  45. Xue, Z.; Du, P.; Li, J.; Su, H. Sparse graph regularization for robust crop mapping using hyperspectral remotely sensed imagery with very few in situ data. ISPRS J. Photogramm. Remote Sens. 2017, 124, 1–15. [Google Scholar] [CrossRef]
  46. Xue, Z.; Du, P.; Li, J.; Su, H. Sparse Graph Regularization for Hyperspectral Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2351–2366. [Google Scholar] [CrossRef]
  47. Golub, G.H.; Hansen, P.C.; O’Leary, D.P. Tikhonov regularization and total least squares. SIAM J. Matrix Anal. Appl. 1999, 21, 185–194. [Google Scholar] [CrossRef]
  48. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  49. Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239. [Google Scholar] [CrossRef]
  50. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Hyperspectral Image Segmentation Using a New Bayesian Approach With Active Learning. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3947–3960. [Google Scholar] [CrossRef]
  51. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2005. [Google Scholar]
  52. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  53. Foody, G.M. Thematic map comparison: Evaluating the statistical significance of differences in classification accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–633. [Google Scholar] [CrossRef]
  54. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal Matching Pursuit-Recursive Function Approximation with Applications to Wavelet Decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; pp. 40–44. [Google Scholar]
  55. Hyperspectral Remote Sensing Scenes. Available online: http://www.ehu.es/ccwintco/index.php/HyperspectralRemoteSensingScenes (accessed on 25 November 2016).
  56. Roux, N.L.; Schmidt, M.; Bach, F. A stochastic gradient method with an exponential convergence rate for finite training sets. arXiv, 2012; arXiv:1202.6258. [Google Scholar]
  57. Chapelle, O.; Schölkopf, B.; Zien, A. Semi-Supervised Learning; MIT Press: Cambridge, MA, USA, 2006. [Google Scholar]
Figure 1. Graphical illustration of the proposed method.
Figure 2. (a) False color composition of the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) Indian Pines scene (R: 57, G: 27, B: 17); (b) Ground truth map containing 16 mutually exclusive land-cover classes.
Figure 3. (a) False color composition of the Reflective Optics Spectrographic Imaging System (ROSIS) University of Pavia scene (R: 102, G: 56, B: 31); (b) Ground truth map containing nine mutually exclusive land-cover classes.
Figure 4. (a) False color composition of the AVIRIS Salinas scene (R: 57, G: 27, B: 17); (b) Ground truth map containing 16 mutually exclusive land-cover classes.
Figure 5. Parameter sensitivity analysis of the proposed method for the AVIRIS Indian Pines dataset (10% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary). (a) Overall accuracy (OA) as a function of λ and λ₁; (b) OA as a function of τ.
Figure 6. The evolution of overall accuracy with SNR for different classifiers (10% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).
Figure 7. Overall accuracy (OA) as a function of the number of atoms per class for the AVIRIS Indian Pines dataset (10% of labeled samples per class are used for training). Error bars indicate the standard deviations obtained by the proposed method.
Figure 8. Overall accuracy (OA) as a function of the ratio of labeled samples per class for the AVIRIS Indian Pines dataset (15 labeled samples per class are used to build the dictionary). Error bars indicate the standard deviations obtained by the proposed method.
Figure 9. Classification maps obtained by different methods for the AVIRIS Indian Pines dataset. The OA in each case is reported in parentheses. (a) MOD (48.48%); (b) K-SVD (75.79%); (c) D-KSVD (62.11%); (d) LC-KSVD (62.65%); (e) OnlineDL (55.02%); (f) SDL (71.77%); (g) S²JDL-Log (80.46%); (h) S²JDL-Sof (82.25%).
Figure 10. Stem distributions of sparse coefficients relative to the class Wheat obtained by different methods for the AVIRIS Indian Pines dataset. The circles terminating the stems represent the sparse coefficients of the associated atoms, which are color-coded by class. (a) MOD; (b) K-SVD; (c) D-KSVD; (d) LC-KSVD; (e) OnlineDL; (f) SDL; (g) S²JDL-Log; (h) S²JDL-Sof.
Figure 11. Graphical illustration of sparse coefficients relative to the class Wheat obtained by different methods for the AVIRIS Indian Pines dataset. (a) MOD; (b) K-SVD; (c) D-KSVD; (d) LC-KSVD; (e) OnlineDL; (f) SDL; (g) S²JDL-Log; (h) S²JDL-Sof; (i) Ground-truth.
Figure 12. Reconstruction and denoising power of sparse representation for different methods, taking the class Wheat as an example. The original spectrum (top), reconstructed spectrum with RMSE value (middle), and noise (bottom) are given for each case. (a) MOD; (b) K-SVD; (c) D-KSVD; (d) LC-KSVD; (e) OnlineDL; (f) SDL; (g) S²JDL-Log; (h) S²JDL-Sof; (i) Original.
Figure 13. Graphical illustration of the dictionary structure learnt by different methods. The vertical dashed lines in each panel separate atoms belonging to different classes. (a) MOD; (b) K-SVD; (c) D-KSVD; (d) LC-KSVD; (e) OnlineDL; (f) SDL; (g) S²JDL-Log; (h) S²JDL-Sof.
Figure 14. Overall accuracy (OA) as a function of the number of atoms per class for the ROSIS University of Pavia dataset (5% of labeled samples per class are used for training). Error bars indicate the standard deviations obtained by the proposed method.
Figure 15. Overall accuracy (OA) as a function of the ratio of labeled samples per class for the ROSIS University of Pavia dataset (15 labeled samples per class are used to build the dictionary). Error bars indicate the standard deviations obtained by the proposed method.
Figure 16. Classification maps obtained by different methods for the ROSIS University of Pavia dataset. The OA in each case is reported in parentheses. (a) MOD (59.82%); (b) K-SVD (70.25%); (c) D-KSVD (46.96%); (d) LC-KSVD (48.34%); (e) OnlineDL (75.21%); (f) SDL (72.37%); (g) S²JDL-Log (76.29%); (h) S²JDL-Sof (78.79%).
Figure 17. Overall accuracy (OA) as a function of the number of atoms per class for the AVIRIS Salinas dataset (5% of labeled samples per class are used for training). Error bars indicate the standard deviations obtained by the proposed method.
Figure 18. Overall accuracy (OA) as a function of the ratio of labeled samples per class for the AVIRIS Salinas dataset (15 labeled samples per class are used to build the dictionary). Error bars indicate the standard deviations obtained by the proposed method.
Figure 19. Classification maps obtained by different methods for the AVIRIS Salinas dataset. The OA in each case is reported in parentheses. (a) MOD (75.35%); (b) K-SVD (87.89%); (c) D-KSVD (82.90%); (d) LC-KSVD (83.56%); (e) OnlineDL (86.88%); (f) SDL (82.98%); (g) S²JDL-Log (89.59%); (h) S²JDL-Sof (92.50%).
Table 1. Overall (OA), average (AA), and individual class accuracies (%), kappa statistic (κ), and standard deviations over ten Monte Carlo runs for different classification methods on the AVIRIS Indian Pines dataset with a balanced training set (10% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).
Class | #Train | #Test | MOD | K-SVD | D-KSVD | LC-KSVD | OnlineDL | SDL | S²JDL-Log | S²JDL-Sof
Alfalfa | 5 | 41 | 3.66 ± 11.57 | 0.00 ± 0.00 | 17.80 ± 17.36 | 33.90 ± 30.60 | 10.24 ± 31.55 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
Corn-notill | 143 | 1285 | 14.44 ± 8.94 | 58.44 ± 11.20 | 40.60 ± 6.87 | 33.70 ± 6.59 | 36.90 ± 15.71 | 59.28 ± 7.61 | 83.03 ± 5.67 | 82.40 ± 3.89
Corn-mintill | 83 | 747 | 0.00 ± 0.00 | 48.63 ± 9.39 | 26.85 ± 11.35 | 29.84 ± 11.72 | 19.38 ± 26.51 | 53.13 ± 5.31 | 50.96 ± 12.25 | 55.85 ± 5.85
Corn | 24 | 213 | 0.05 ± 0.15 | 2.35 ± 5.52 | 13.85 ± 8.73 | 15.02 ± 15.53 | 0.00 ± 0.00 | 15.35 ± 29.16 | 1.08 ± 3.41 | 5.77 ± 5.11
Grass-pasture | 48 | 435 | 0.02 ± 0.07 | 71.91 ± 19.54 | 70.46 ± 12.43 | 64.85 ± 17.43 | 11.03 ± 10.64 | 24.34 ± 1.78 | 85.03 ± 3.84 | 88.41 ± 4.44
Grass-trees | 73 | 657 | 83.84 ± 20.12 | 99.83 ± 0.24 | 88.57 ± 10.82 | 87.15 ± 12.77 | 89.82 ± 31.56 | 99.73 ± 0.51 | 98.83 ± 0.95 | 99.44 ± 0.46
Grass-pasture-mowed | 3 | 25 | 0.80 ± 1.69 | 26.80 ± 43.17 | 38.00 ± 44.51 | 24.80 ± 39.05 | 2.40 ± 7.59 | 26.00 ± 42.09 | 0.00 ± 0.00 | 0.00 ± 0.00
Hay-windrowed | 48 | 430 | 75.19 ± 32.08 | 100.00 ± 0.00 | 97.88 ± 2.22 | 96.95 ± 3.59 | 89.81 ± 31.56 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
Oats | 2 | 18 | 0.00 ± 0.00 | 0.00 ± 0.00 | 8.89 ± 18.56 | 7.78 ± 10.54 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
Soybeans-notill | 97 | 875 | 0.17 ± 0.54 | 73.65 ± 1.82 | 49.38 ± 7.25 | 49.20 ± 14.91 | 20.11 ± 27.13 | 74.26 ± 1.51 | 55.66 ± 8.59 | 53.69 ± 5.87
Soybeans-mintill | 246 | 2209 | 99.84 ± 0.36 | 99.49 ± 0.67 | 79.00 ± 10.66 | 81.21 ± 8.01 | 88.47 ± 31.25 | 98.25 ± 2.34 | 98.71 ± 2.11 | 97.15 ± 0.76
Soybean-clean | 59 | 534 | 0.00 ± 0.00 | 20.64 ± 13.06 | 26.70 ± 12.92 | 36.25 ± 13.04 | 2.68 ± 8.47 | 14.38 ± 12.53 | 55.21 ± 26.95 | 75.67 ± 8.67
Wheat | 21 | 184 | 9.84 ± 31.11 | 99.13 ± 0.46 | 75.49 ± 22.95 | 93.15 ± 5.13 | 87.88 ± 30.91 | 87.12 ± 30.79 | 99.29 ± 0.52 | 99.18 ± 0.46
Woods | 127 | 1138 | 99.85 ± 0.18 | 99.88 ± 0.22 | 92.50 ± 4.30 | 92.30 ± 4.69 | 89.53 ± 31.46 | 99.88 ± 0.11 | 99.85 ± 0.25 | 99.67 ± 0.32
Bldg-grass-tree-drives | 39 | 347 | 0.06 ± 0.12 | 31.73 ± 16.71 | 15.48 ± 9.09 | 23.95 ± 12.60 | 9.86 ± 17.51 | 2.05 ± 6.47 | 59.91 ± 15.95 | 60.35 ± 5.73
Stone-steel-towers | 9 | 84 | 56.79 ± 36.55 | 99.64 ± 0.80 | 99.05 ± 0.75 | 87.38 ± 30.90 | 78.69 ± 29.03 | 34.29 ± 44.78 | 37.98 ± 49.15 | 98.57 ± 2.68
Average accuracy | – | – | 27.78 ± 4.50 | 58.26 ± 4.07 | 52.53 ± 4.10 | 53.59 ± 4.82 | 39.80 ± 12.01 | 49.25 ± 3.78 | 57.85 ± 4.35 | 63.51 ± 0.66
Overall accuracy | – | – | 48.48 ± 3.35 | 75.79 ± 2.18 | 62.11 ± 2.83 | 62.65 ± 1.97 | 55.02 ± 19.54 | 71.77 ± 1.54 | 80.46 ± 2.52 | 82.25 ± 1.08
κ statistic | – | – | 0.365 ± 0.04 | 0.715 ± 0.03 | 0.561 ± 0.03 | 0.567 ± 0.02 | 0.478 ± 0.17 | 0.667 ± 0.02 | 0.771 ± 0.03 | 0.793 ± 0.01
Time (Seconds) | – | – | 2.64 ± 0.41 | 138.76 ± 8.02 | 148.11 ± 6.20 | 143.17 ± 9.03 | 1.96 ± 0.50 | 0.83 ± 0.13 | 53.97 ± 4.66 | 59.64 ± 4.67
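For reference, the OA, AA, and κ values reported in Tables 1, 4 and 6 follow the standard confusion-matrix definitions used in thematic accuracy assessment (see, e.g., [51,52]). The following Python sketch illustrates those definitions only; the function name and the toy confusion matrix are illustrative and are not taken from the authors' implementation.

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall accuracy (OA), average accuracy (AA), and the kappa statistic
    computed from a C x C confusion matrix (rows: reference, columns: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    oa = np.trace(confusion) / total
    aa = np.mean(np.diag(confusion) / confusion.sum(axis=1))  # mean of per-class accuracies
    # Chance agreement estimated from the row and column marginals
    pe = np.sum(confusion.sum(axis=1) * confusion.sum(axis=0)) / total ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Toy 3-class confusion matrix (values are illustrative, not from the paper)
cm = np.array([[50,  2,  3],
               [ 4, 40,  6],
               [ 1,  5, 44]])
oa, aa, kappa = accuracy_metrics(cm)
print(f"OA = {100 * oa:.2f}%, AA = {100 * aa:.2f}%, kappa = {kappa:.3f}")
```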
Table 2. Pairwise statistical test in terms of Kappa z-score and McNemar z-score for the AVIRIS Indian Pines dataset with a balanced training set (10% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).
Between-Classifier | κ (z-score) | McNemar (z-score)
MOD/S²JDL-Sof | 57.4 | 3.3 × 10³
K-SVD/S²JDL-Sof | 12.2 | 2.7 × 10²
D-KSVD/S²JDL-Sof | 31.7 | 1.3 × 10³
LC-KSVD/S²JDL-Sof | 32.3 | 1.3 × 10³
OnlineDL/S²JDL-Sof | 45.2 | 2.5 × 10³
SDL/S²JDL-Sof | 18.4 | 5.8 × 10²
S²JDL-Log/S²JDL-Sof | 4.4 | 5.5 × 10¹
The critical value of z-score is 1.96 at a confidence level of 0.95, and all the tests are significant with 95% confidence.
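The pairwise tests in Tables 2, 5 and 7 compare each baseline against S²JDL-Sof on the same test samples [52,53]. The sketch below illustrates the two standard test statistics in a generic form; the kappa variances are taken as inputs, the synthetic correctness patterns are purely illustrative, and the details may differ from the authors' exact implementation.

```python
import numpy as np

def mcnemar_z(correct_a, correct_b):
    """McNemar z-statistic for two classifiers evaluated on the same test set.
    correct_a, correct_b: boolean arrays, True where the corresponding
    classifier labels a test sample correctly."""
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    f_ab = np.sum(correct_a & ~correct_b)   # A correct, B wrong
    f_ba = np.sum(~correct_a & correct_b)   # A wrong, B correct
    return (f_ab - f_ba) / np.sqrt(f_ab + f_ba)

def kappa_z(kappa_a, var_a, kappa_b, var_b):
    """z-statistic for the difference between two kappa coefficients,
    given their estimated variances from the accuracy assessment."""
    return abs(kappa_a - kappa_b) / np.sqrt(var_a + var_b)

# Toy example with synthetic correctness patterns (illustrative only)
rng = np.random.default_rng(0)
n = 1000
correct_a = rng.random(n) < 0.82   # classifier A correct on ~82% of samples
correct_b = rng.random(n) < 0.62   # classifier B correct on ~62% of samples
print(f"|z| = {abs(mcnemar_z(correct_a, correct_b)):.1f} "
      f"(significant at 95% confidence if |z| > 1.96)")
```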
Table 3. Global reconstruction performance (RMSE) for different methods, considering all classes.
MOD | K-SVD | D-KSVD | LC-KSVD | OnlineDL | SDL | S²JDL-Log | S²JDL-Sof
RMSE | 2.60 × 10⁻³ | 1.72 × 10⁻⁴ | 1.53 × 10⁻⁴ | 1.25 × 10⁻⁴ | 3.69 × 10⁻⁴ | 3.05 × 10⁻⁴ | 3.45 × 10⁻⁵ | 3.42 × 10⁻⁵
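The RMSE in Table 3 (and in Figure 12) measures how well a learned dictionary reconstructs spectra from their sparse codes. The Python sketch below is a minimal illustration of such a measurement; scikit-learn's OMP-based sparse_encode is used as a generic stand-in for the coding step, and the random dictionary, sparsity level, and per-band averaging are assumptions made for illustration rather than the paper's exact settings.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def reconstruction_rmse(X, D, n_nonzero_coefs=5):
    """RMSE between spectra X and their sparse reconstructions over dictionary D.
    X: (n_pixels, n_bands) spectra; D: (n_atoms, n_bands) dictionary with unit-norm atoms."""
    A = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=n_nonzero_coefs)
    X_hat = A @ D                                 # sparse reconstruction of every spectrum
    return np.sqrt(np.mean((X - X_hat) ** 2))

# Toy example: a random 240-atom dictionary (e.g., 15 atoms x 16 classes) over 200 bands
rng = np.random.default_rng(1)
D = rng.normal(size=(240, 200))
D /= np.linalg.norm(D, axis=1, keepdims=True)     # unit-norm atoms, as is customary
X = rng.normal(size=(50, 200))                    # 50 synthetic test spectra
print(f"RMSE = {reconstruction_rmse(X, D):.3e}")
```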
Table 4. Overall (OA), average (AA), and individual class accuracies (%), kappa statistic (κ), and standard deviations over ten Monte Carlo runs for different classification methods on the ROSIS University of Pavia dataset with a balanced training set (5% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).
Class | #Train | #Test | MOD | K-SVD | D-KSVD | LC-KSVD | OnlineDL | SDL | S²JDL-Log | S²JDL-Sof
Asphalt | 332 | 6299 | 78.61 ± 21.49 | 93.67 ± 1.51 | 11.92 ± 4.72 | 11.50 ± 14.42 | 98.58 ± 0.95 | 99.32 ± 0.38 | 98.22 ± 0.52 | 98.38 ± 0.98
Meadows | 932 | 17,717 | 100.00 ± 0.00 | 100.00 ± 0.00 | 87.58 ± 8.56 | 81.89 ± 15.62 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.80 ± 0.38 | 100.00 ± 0.01
Gravel | 105 | 1994 | 0.00 ± 0.00 | 15.31 ± 21.13 | 9.35 ± 10.97 | 13.23 ± 17.28 | 14.06 ± 14.40 | 8.57 ± 14.87 | 23.61 ± 19.49 | 42.54 ± 9.39
Trees | 153 | 2911 | 2.70 ± 4.50 | 50.84 ± 4.37 | 16.54 ± 11.44 | 27.02 ± 19.63 | 36.81 ± 8.29 | 55.36 ± 3.77 | 73.48 ± 2.40 | 67.28 ± 6.93
Painted metal sheets | 67 | 1278 | 99.66 ± 0.40 | 97.68 ± 0.90 | 27.07 ± 32.75 | 93.32 ± 10.51 | 99.49 ± 0.34 | 99.05 ± 0.72 | 99.87 ± 0.13 | 99.86 ± 0.20
Bare soil | 251 | 4778 | 0.70 ± 1.56 | 22.12 ± 2.65 | 6.64 ± 7.37 | 15.23 ± 10.49 | 21.52 ± 4.09 | 18.28 ± 5.98 | 23.73 ± 2.57 | 23.10 ± 4.96
Bitumen | 67 | 1263 | 0.00 ± 0.00 | 0.00 ± 0.00 | 17.03 ± 21.09 | 32.71 ± 37.04 | 5.22 ± 16.50 | 6.67 ± 14.36 | 0.00 ± 0.00 | 13.93 ± 18.64
Self-Blocking Bricks | 184 | 3498 | 1.50 ± 2.96 | 24.08 ± 6.00 | 10.61 ± 12.66 | 3.69 ± 3.70 | 61.70 ± 14.99 | 40.54 ± 12.37 | 60.43 ± 6.44 | 59.83 ± 14.27
Shadows | 47 | 900 | 22.44 ± 20.08 | 0.00 ± 0.00 | 100.00 ± 0.00 | 99.89 ± 0.05 | 84.66 ± 7.02 | 1.63 ± 3.89 | 0.00 ± 0.00 | 72.27 ± 11.78
Average accuracy | – | – | 33.96 ± 3.77 | 44.85 ± 2.18 | 31.86 ± 6.67 | 42.05 ± 7.46 | 58.00 ± 2.69 | 47.71 ± 2.87 | 53.24 ± 2.53 | 64.13 ± 2.70
Overall accuracy | – | – | 59.82 ± 3.64 | 70.25 ± 1.09 | 46.96 ± 4.07 | 48.34 ± 9.47 | 75.21 ± 1.45 | 72.37 ± 1.50 | 76.29 ± 1.24 | 78.79 ± 1.10
κ statistic | – | – | 0.382 ± 0.08 | 0.572 ± 0.02 | 0.337 ± 0.04 | 0.365 ± 0.09 | 0.645 ± 0.02 | 0.603 ± 0.02 | 0.665 ± 0.02 | 0.701 ± 0.02
Time (Seconds) | – | – | 8.30 ± 2.68 | 341.32 ± 42.90 | 93.09 ± 8.54 | 100.12 ± 9.40 | 6.17 ± 1.20 | 6.02 ± 0.90 | 521.07 ± 27.39 | 79.58 ± 8.17
Table 5. Pairwise statistical test in terms of Kappa z-score and McNemar z-score for the ROSIS University of Pavia dataset with a balanced training set (5% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).
Between-Classifier | κ (z-score) | McNemar (z-score)
MOD/S²JDL-Sof | 74.5 | 7.6 × 10³
K-SVD/S²JDL-Sof | 33.9 | 2.8 × 10³
D-KSVD/S²JDL-Sof | 98.8 | 1.2 × 10⁴
LC-KSVD/S²JDL-Sof | 93.4 | 1.2 × 10⁴
OnlineDL/S²JDL-Sof | 14.2 | 5.6 × 10²
SDL/S²JDL-Sof | 24.5 | 2.5 × 10³
S²JDL-Log/S²JDL-Sof | 8.9 | 2.0 × 10²
The critical value of z-score is 1.96 at a confidence level of 0.95, and all the tests are significant with 95% confidence.
Table 6. Overall (OA), average (AA), and individual class accuracies (%), kappa statistic (κ), and standard deviations over ten Monte Carlo runs for different classification methods on the AVIRIS Salinas dataset with a balanced training set (5% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).
Class | #Train | #Test | MOD | K-SVD | D-KSVD | LC-KSVD | OnlineDL | SDL | S²JDL-Log | S²JDL-Sof
Brocoli_green_weeds_1 | 100 | 1909 | 99.25 ± 1.02 | 99.13 ± 0.91 | 97.52 ± 1.33 | 97.77 ± 1.15 | 99.66 ± 0.75 | 99.98 ± 0.07 | 99.18 ± 0.22 | 99.70 ± 0.38
Brocoli_green_weeds_2 | 186 | 3540 | 99.41 ± 0.87 | 90.19 ± 31.02 | 96.98 ± 4.93 | 98.16 ± 0.48 | 99.90 ± 0.16 | 98.63 ± 0.84 | 100.00 ± 0.00 | 99.25 ± 0.27
Fallow | 99 | 1877 | 3.31 ± 7.36 | 98.00 ± 5.98 | 84.61 ± 14.29 | 92.10 ± 6.24 | 31.13 ± 10.68 | 80.61 ± 6.51 | 100.00 ± 0.00 | 90.51 ± 5.46
Fallow_rough_plow | 70 | 1324 | 52.39 ± 39.19 | 4.46 ± 5.79 | 99.27 ± 0.55 | 99.19 ± 1.09 | 86.62 ± 23.04 | 99.32 ± 0.28 | 83.05 ± 3.93 | 99.49 ± 0.33
Fallow_smooth | 134 | 2544 | 74.28 ± 41.09 | 99.94 ± 0.07 | 93.06 ± 6.20 | 90.39 ± 5.85 | 99.49 ± 0.37 | 99.24 ± 0.44 | 99.33 ± 0.11 | 98.79 ± 0.53
Stubble | 198 | 3761 | 99.60 ± 0.24 | 99.90 ± 0.03 | 99.84 ± 0.13 | 99.84 ± 0.11 | 99.37 ± 0.23 | 99.99 ± 0.01 | 99.91 ± 0.02 | 99.97 ± 0.04
Celery | 179 | 3400 | 93.70 ± 15.59 | 99.94 ± 0.03 | 97.24 ± 2.62 | 98.96 ± 0.68 | 99.92 ± 0.08 | 99.31 ± 0.21 | 99.90 ± 0.02 | 99.48 ± 0.26
Grapes_untrained | 564 | 10,707 | 99.35 ± 0.34 | 95.35 ± 0.80 | 66.13 ± 7.08 | 67.38 ± 4.39 | 99.17 ± 1.09 | 99.46 ± 0.27 | 94.01 ± 0.85 | 98.21 ± 2.19
Soil_vinyard_develop | 310 | 5893 | 99.92 ± 0.11 | 100.00 ± 0.00 | 97.39 ± 0.74 | 97.36 ± 0.86 | 99.89 ± 0.14 | 99.06 ± 0.90 | 100.00 ± 0.00 | 99.57 ± 0.39
Corn_senesced_green_weeds | 164 | 3114 | 73.17 ± 26.15 | 95.28 ± 0.85 | 76.27 ± 7.63 | 76.61 ± 9.85 | 83.24 ± 3.73 | 83.16 ± 7.18 | 95.17 ± 0.49 | 94.24 ± 3.21
Lettuce_romaine_4wk | 53 | 1015 | 42.24 ± 47.68 | 94.56 ± 0.89 | 91.28 ± 5.15 | 92.71 ± 3.77 | 94.09 ± 3.76 | 30.38 ± 41.43 | 94.47 ± 0.41 | 95.79 ± 3.66
Lettuce_romaine_5wk | 96 | 1831 | 95.09 ± 6.40 | 99.95 ± 0.02 | 97.13 ± 3.94 | 96.35 ± 10.33 | 99.49 ± 0.72 | 98.96 ± 1.23 | 93.01 ± 8.53 | 99.89 ± 0.28
Lettuce_romaine_6wk | 46 | 870 | 78.51 ± 31.42 | 0.00 ± 0.00 | 76.45 ± 40.62 | 90.97 ± 20.62 | 95.38 ± 2.64 | 96.93 ± 3.85 | 4.71 ± 7.70 | 97.83 ± 0.37
Lettuce_romaine_7wk | 54 | 1016 | 68.47 ± 34.98 | 97.92 ± 0.44 | 82.44 ± 29.82 | 83.97 ± 10.14 | 94.29 ± 3.34 | 85.03 ± 22.89 | 97.08 ± 0.31 | 96.41 ± 1.82
Vinyard_untrained | 363 | 6905 | 0.01 ± 0.02 | 57.27 ± 2.35 | 58.95 ± 4.15 | 56.54 ± 6.23 | 38.54 ± 7.07 | 3.77 ± 11.28 | 53.14 ± 2.29 | 55.73 ± 10.69
Vinyard_vertical_trellis | 90 | 1717 | 81.68 ± 10.05 | 99.46 ± 0.10 | 89.17 ± 12.00 | 91.77 ± 8.42 | 88.53 ± 9.35 | 94.27 ± 4.22 | 98.78 ± 0.25 | 97.92 ± 0.64
Average accuracy | – | – | 72.52 ± 6.55 | 83.21 ± 2.38 | 87.73 ± 2.64 | 89.38 ± 2.17 | 88.04 ± 1.81 | 85.51 ± 2.80 | 88.24 ± 0.93 | 95.17 ± 0.82
Overall accuracy | – | – | 75.35 ± 2.96 | 87.89 ± 2.37 | 82.90 ± 1.47 | 83.56 ± 1.41 | 86.88 ± 1.48 | 82.98 ± 1.81 | 89.59 ± 0.53 | 92.50 ± 1.63
κ statistic | – | – | 0.720 ± 0.03 | 0.865 ± 0.03 | 0.810 ± 0.02 | 0.817 ± 0.02 | 0.853 ± 0.02 | 0.808 ± 0.02 | 0.884 ± 0.01 | 0.916 ± 0.02
Time (Seconds) | – | – | 11.93 ± 3.42 | 448.47 ± 25.98 | 617.16 ± 23.52 | 630.04 ± 31.32 | 5.72 ± 0.64 | 5.38 ± 0.92 | 593.68 ± 38.96 | 500.24 ± 16.38
Table 7. Pairwise statistical test in terms of Kappa z-score and McNemar z-score for the AVIRIS Salinas dataset with a balanced training set (5% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).
Between-Classifier | κ (z-score) | McNemar (z-score)
MOD/S²JDL-Sof | 77.8 | 8.7 × 10³
K-SVD/S²JDL-Sof | 22.9 | 9.3 × 10²
D-KSVD/S²JDL-Sof | 46.7 | 3.0 × 10³
LC-KSVD/S²JDL-Sof | 42.3 | 2.3 × 10³
OnlineDL/S²JDL-Sof | 29.5 | 2.3 × 10³
SDL/S²JDL-Sof | 44.7 | 4.1 × 10³
S²JDL-Log/S²JDL-Sof | 14.6 | 4.6 × 10²
The critical value of z-score is 1.96 at a confidence level of 0.95, and all the tests are significant with 95% confidence.
