Article

A Hyperspectral Image Classification Framework with Spatial Pixel Pair Features

1 School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an 710072, China
2 Highly Automated Driving Team, HERE Technologies Automotive Division, Chicago, IL 60606, USA
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(10), 2421; https://doi.org/10.3390/s17102421
Submission received: 8 August 2017 / Revised: 7 October 2017 / Accepted: 13 October 2017 / Published: 23 October 2017
(This article belongs to the Special Issue Analysis of Multispectral and Hyperspectral Data)

Abstract

During recent years, convolutional neural network (CNN)-based methods have been widely applied to hyperspectral image (HSI) classification, mostly by mining the spectral variabilities. However, the spatial consistency in HSI is rarely discussed except as an extra convolutional channel. Very recently, the development of pixel pair features (PPF) for HSI classification has offered a new way of incorporating spatial information. In this paper, we first propose an improved PPF-style feature, the spatial pixel pair feature (SPPF), that better exploits both the spatial/contextual information and the spectral information. On top of the new SPPF, we further propose a flexible multi-stream CNN-based classification framework that is compatible with multiple in-stream sub-network designs. The proposed SPPF differs from the original PPF in its pairing pixel selection strategy: only pixels immediately adjacent to the central one are eligible, thereby imposing stronger spatial regularization. Additionally, with off-the-shelf classification sub-network designs, the proposed multi-stream, late-fusion CNN-based framework outperforms competing ones without requiring extensive network configuration tuning. Experimental results on three publicly available datasets demonstrate the performance of the proposed SPPF-based HSI classification framework.

1. Introduction

Hyperspectral image (HSI) classification deals with the problem of pixel-wise labeling of the hyperspectral data cube, which has historically been a heavily studied, but not yet perfectly solved problem in remote sensing. With the recent development of hyperspectral remote sensing sensors, HSIs normally contain millions of pixels with hundreds of spectral wavelengths (channels). HSI classification is intrinsically challenging. While more and more high-dimensional HSIs accumulate and are made publicly available, ground truth labels remain scarce, due to the immense manual effort required to collect them. In addition, the generalization ability of neural networks is unsatisfactory if they are trained with insufficient labeled data, due to the curse of dimensionality [1].
In the early days, conventional feature extraction and classifier design were popular among HSI classification practitioners. Fauvel et al. [2] provide a detailed review of recent advances in this area. Many varieties of conventional features have been applied, including raw spectral pixels, spectral pixel patches and their dimension-reduced versions obtained by methods such as principal component analysis [3], manifold learning [4], sparse coding [5] and latent space methods [6,7,8,9]. Likewise, various conventional classifiers have been applied, such as support vector machines [10], Markov random fields [11], decision trees [12], etc. In particular, some early approaches already tried to incorporate spatial information by extracting large homogeneous regions using majority voting [13], watershed transforms [14] or hierarchical segmentation [15]. These two-stage (Stage 1: feature extraction; Stage 2: classification) or multi-stage (in addition to the feature extraction and classification stages, there could be additional pre-/post-processing stages) methods suffer from some limitations. Firstly, it is highly time consuming to choose the optimal variant of conventional features and the optimal parameter values for different HSI datasets, due to their wildly different physical properties (such as the number of channels/wavelengths) and visual appearances. Secondly, conventional classifiers have recently been outperformed by deep neural networks, particularly convolutional neural network (CNN)-based ones.
In the past few years, deep learning methods have become popular for image classification and labeling problems [16,17,18], as an end-to-end solution that simultaneously extracts features and classifies. Unlike image capturing with regular cameras with only red, green and blue channels, HSIs are generated by the accumulation of many spectrum bands [19,20,21], with each pixel typically containing hundreds of narrow bands/channels. Many deep learning-based methods have been adapted to address the HSI classification problem, such as CNN variants [22,23,24,25], autoencoders [26,27,28,29] and deep belief networks (DBN) [30,31]. Despite their promising performance, the scarcity of training labels remains a great challenge.
Recently, Li et al. [32] proposed the pixel pair features (PPF) to mitigate the problem of training label shortage. Unlike conventional spectral pixel-based or patch-based methods, the PPF is purely based on combined pairs of pixels from the entire training set. This pixel-pairing process is used in both the training and testing (label prediction for the target dataset) stages. During the training process, two pixels are randomly chosen from the entire labeled training set, and the label of each PPF pair is deduced by a simple rule based on the existing labels of both pixels. For pairs with pixels of the same labeled class, the pair label is trivially assigned as the shared pixel label. The interesting case arises when the two pixels in a pair are of different labeled classes (which is especially likely for multi-class problems; the likelihood increases as the number of classes increases): the pair is labeled as “extra”, an artificially introduced auxiliary class. This random combination of pixels significantly increases the number of labeled training instances (with a total of $N_c$ labeled pixels of class $c$ in the training set, there could be $\binom{N_c}{2} = \frac{1}{2}N_c^2 - \frac{1}{2}N_c$ randomly sampled PPF pairs for class $c$), alleviating the training sample shortage.
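As a rough sketch of this pairing and labeling rule (illustrative only; the function and variable names are not from [32], and the number of sampled pairs is a placeholder):

```python
import numpy as np

def make_ppf_pairs(pixels, labels, extra_label, rng, n_pairs=10000):
    """Randomly pair labeled training pixels, PPF-style.

    pixels      : (N, D) array of spectral vectors
    labels      : (N,) array of class labels in {1, ..., K}
    extra_label : label assigned when the two pixels disagree (K + 1 in this paper)
    """
    n = len(labels)
    idx = rng.integers(0, n, size=(n_pairs, 2))                            # random index pairs
    pair_feats = np.stack([pixels[idx[:, 0]], pixels[idx[:, 1]]], axis=1)  # (n_pairs, 2, D)
    same = labels[idx[:, 0]] == labels[idx[:, 1]]
    pair_labels = np.where(same, labels[idx[:, 0]], extra_label)           # "extra" class on disagreement
    return pair_feats, pair_labels

# e.g. rng = np.random.default_rng(0); feats, labs = make_ppf_pairs(X, y, K + 1, rng)
```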
Despite its success, the PPF implementation [32] uses randomly sampled pairs across the entire training set, which incurs very large intra-class variances and changes the overall statistical distribution of the training set. Due to the intrinsic properties of HSI imaging, pixels with the same ground truth class label can appear different and be distributed differently across channels/wavelengths, especially for pixels geographically far apart from each other. Additionally, the PPF implementation [32] uses a pair label prediction rule inconsistent with its training pair labeling rule. At the testing (label prediction) stage, the “extra” class is deliberately removed, forcing the classifier (a Softmax layer) to pick an essentially wrong label for such PPF pairs.
Inspired by [32], a revamped version of the HSI classification feature is proposed in this paper, termed the “spatial pixel pair feature” (SPPF). The core differences between the proposed SPPF and the prior PPF lie in the geographically co-located pixel selection rule and the pair label assignment rule. For each location of interest, SPPF always selects the central pixel (at the location of interest) and one pixel from its immediate eight-neighborhood, at both the training and testing stages. In addition, an SPPF pixel pair is always labeled with the central pixel’s label, regardless of the other neighboring pixel.
Although the smaller neighborhood size yields a smaller increase in training samples (the proposed SPPF offers up to an eight-fold increase), the small neighborhood substantially decreases the intra-class variances. Statistically speaking, geographically co-located SPPF pixels are much more likely to share similar channel/wavelength measurement distributions than their PPF counterparts. Furthermore, the simple SPPF pair label assignment rule eliminates the complicated and possibly erroneous handling of the “extra” class label.
Alongside the shortage of training samples, the structural design of the deep neural network classifier is another challenge, especially for high-dimensional HSI data. In this paper, we further propose a flexible multi-stream CNN-based classification framework that is compatible with multiple in-stream sub-networks. This framework is composed of a series of sub-networks/streams, each of which independently processes one SPPF pixel pair. The outputs from these sub-networks are fused (in the “late fusion” fashion) with one average pooling layer that leverages the class variance and a few fully-connected (FC) layers for final predictions.
The remainder of this paper is structured as follows. Section 2 provides a brief overview of related work on HSI classification with deep neural networks. Section 3 presents the new SPPF feature and the new classification framework. Subsequently, Section 4 provides detailed experimental settings and results, with some discussions on potential future work. Finally, conclusions are drawn in Section 5.

2. Related Work

HSI classification has long been a popular research topic in the field of remote sensing, and the primary challenge is the highly limited amount of training labels. Recent breakthroughs were primarily achieved by deep learning methods, especially CNN-based ones [22,32].
Hu et al. [22] propose a CNN that directly processes each single HSI pixel (i.e., the raw HSI pixel as the feature) and exploits the spectral information by embedding spectrum bands in lower dimensions. For notational simplicity, this method is abbreviated as “Pixel-CNN$_1$” in this paper (no official abbreviation is provided in [22]), as it is based on individual HSI pixels, and the CNN inputs are all the channels/wavelengths from one pixel (hence the subscript 1). The Pixel-CNN$_1$ method outperforms many previous conventional methods without resorting to any prior knowledge or feature engineering. However, noise remains an open issue and heavily affects the prediction quality. Largely due to the lack of spatial information, discontinuity artifacts (especially the ‘salt-and-pepper’ noise patterns) are widespread.
More recent efforts [33,34,35] have been made to incorporate spatial consistency in HSI classification. Qian et al. [36] even claim that spatial information is often more critical than spectral information in the HSI classification task. Slavkovikj et al. [37] incorporate both spatial and spectral information by proposing a CNN that processes an HSI pixel patch, which is abbreviated as Patch-CNN$_9$ in this paper (in [37], a patch typically covers 3 × 3 pixels, i.e., the CNN inputs cover nine pixels, hence the subscript 9). The Patch-CNN$_9$ method learns structured spatial-spectral information directly from the HSI data, while the CNN extracts a hierarchy of increasingly spatial features [38]. Later, Yu et al. [39] proposed a similar approach with carefully crafted network structures to process three-dimensional training data. Meanwhile, Shi et al. [40] further enhanced the spatial consistency by adopting a 3D recurrent neural network (RNN) after the CNN processing. Recently, Zhang et al. [41] proposed a dual-stream CNN, with one CNN stream extracting the spectral feature (similar to Pixel-CNN$_1$ [22]) and the other extracting the spatial-spectral feature (similar to Patch-CNN$_9$ [37]). Both features are then flattened and concatenated before being fed into a prediction module.
Despite the success of the aforementioned approaches, the shortage of available training samples remains a vital challenge, especially as neural networks go deeper and wider [42] and larger-scale network models contain more parameters. Many efforts have been made to alleviate this problem. The first natural approach is better data augmentation. Slavkovikj et al. [37] introduce additional artificial noise based on the HSI class-specific spectral distributions. Although there is a risk of distorting the original spectral distribution, a reasonable amount of additive noise can lead to better generalization performance. Another way of augmenting training samples is proposed in [32], where randomly sampled pixel pairs (i.e., the PPF feature) from the entire training dataset are used. The PPF feature exploits the similarity between pixels of the same labeled class and ensures enough labeled PPF training pairs for its classifier.
Another group of works addresses the limited training label challenge by exploiting the statistical characteristics of HSI channels/wavelengths. Methods such as multi-scale feature extraction [23] extract multi-scale features using autoencoders, followed by classifier training. However, these methods operate on a lower-dimensional spectral subspace, which may omit valuable information. Alternatively, some efforts have been made to combine HSI classification with auxiliary tasks such as super-pixel segmentation [43]. The super-pixel segmentation provides strong local consistency for labels and can also serve as a post-processing procedure. Romero et al. [44] present a greedy layer-wise unsupervised pre-training for CNNs, which leads to both performance gains and improved computational efficiency.
Additionally, the vast variations in the statistical distributions of channels/wavelengths have also drawn much research attention. Slavkovikj et al. [45] propose an unsupervised sub-feature learning method in the spectral domain. This dictionary learning-based method greatly enhances the hyperspectral feature representations. Zabalza et al. [46] extract features from a segmented spectral space with autoencoders. The slicing of the original features greatly reduces the complexity of the network design and improves the efficiency of data abstraction. Very recently, Ran et al. [47] proposed the band-sensitive network (BsNet) for feature extraction from correlated band groups, with each band group producing a respective classification confidence. The BsNet label prediction is based on all available band group classification confidences.

3. SPPF and Proposed Classification Framework

In this section, the classification problem of HSI data is first formulated mathematically, followed by the introductions of early HSI features (raw spectral pixel feature and spectral pixel patch feature). Subsequently, the new SPPF feature is proposed and compared against the prior PPF feature. Finally, a new multi-stream CNN-based classification framework is introduced.
Let $\mathbf{X} \in \mathbb{R}^{W \times H \times D}$ be an HSI dataset, with $W$, $H$, $D$ being the width (i.e., the longitude resolution), height (i.e., the latitude resolution) and dimension (i.e., the number of spectral channels/wavelengths), respectively. Suppose that among the total of $W \times H$ pixels, $N$ of them are labeled ones in the training set $\mathcal{T}$:
$\mathcal{T} = \left\{ (x_i, y_i) \right\}_{i=1}^{N}, \quad \text{where } x_i \in \mathbb{R}^D, \ y_i \in \mathcal{K}, \ i = 1, \ldots, N. \quad (1)$
$x_i$ is a $D$-dimensional real-valued spectral pixel measurement; $y_i$ is its corresponding integer-valued label; $\mathcal{K} = \{1, \ldots, K\}$ is the set of labeled classes, with $K$ being the total number of classes.
The goal of HSI classification is to find a prediction function $f: x_j \mapsto \mathcal{K}$ for unlabeled pixels $x_j \notin \mathcal{T}$, given the training set $\mathcal{T} = \{(x_i, y_i)\}_{i=1}^{N}$.
For methods that incorporate spatial information (e.g., [37]), the prediction function $f$ takes more inputs than $x_j$ itself: a set of neighboring pixels $\mathcal{N}(x_j)$ also forms part of $f$'s inputs. Therefore, the predicted label $\hat{y}_j$ is,
$\hat{y}_j = f\left( x_j, \mathcal{N}(x_j) \right), \quad x_j \notin \mathcal{T}. \quad (2)$
For notational simplicity, define $S_i$ as a set containing both the query pixel $x_i$ and its neighboring pixels $\mathcal{N}(x_i)$,
$S_i = \{ x_i, \mathcal{N}(x_i) \}, \quad (3)$
and let $L_i$ denote the label of the set $S_i$. With different choices of the neighborhood $\mathcal{N}(x_i)$, both $S_i$ and $L_i$ change accordingly. In the following sections, a bracketed number in the superscript of $S_i$ and $L_i$ is used to distinguish such variations.

3.1. Raw Spectral Pixel Feature

As shown in Figure 1a, a spectral pixel is a basic component of HSI. The raw spectral pixel feature is used in early CNN-based work, such as Pixel-CNN$_1$ [22]. With such a feature, the label prediction task is:
$\hat{y}_j^{(1)} = f^{(1)}\left( S_j^{(1)} \right), \quad \text{where } S_j^{(1)} = \{ x_j \}, \ x_j \notin \mathcal{T}. \quad (4)$
During $f^{(1)}$'s training process, the labeling of $S_i^{(1)}$ is straightforward,
$L_i^{(1)} = y_i, \quad x_i \in \mathcal{T}, \quad (5)$
where $y_i$ is the label associated with $x_i$.
Despite its simplicity, the lack of spatial consistency information often leads to erroneous predicted labels, especially within class boundaries.

3.2. Spectral Pixel Patch Feature

Proposed in [37], the spectral pixel patch feature uses the entire neighboring patch as the basis for classification, as shown in Figure 1b,
$\hat{y}_j^{(2)} = f^{(2)}\left( S_j^{(2)} \right), \quad \text{where } S_j^{(2)} = \{ x_j, \mathcal{N}_8(x_j) \}, \ x_j \notin \mathcal{T}, \quad (6)$
where $\mathcal{N}_8(x_j)$ denotes all eight pixels in $x_j$'s eight-neighborhood.
During $f^{(2)}$'s training process, the labeling of $S_i^{(2)}$ is based completely on the label $y_i$ of the central pixel $x_i$,
$L_i^{(2)} = y_i, \quad x_i \in \mathcal{T}. \quad (7)$
In contrast to the raw spectral pixel feature, the spectral pixel patch feature provides spatial information and promotes local consistency in labeling, leading to smoother class regions. The introduction of spatial information via $\mathcal{N}_8(x_j)$ significantly improves the prediction accuracy and contextual consistency.
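For reference, a minimal sketch of assembling such 3 × 3 patch samples and their central-pixel labels (the array layout and the simple border cropping are illustrative assumptions, not the exact preprocessing of [37]):

```python
import numpy as np

def extract_patches(cube, label_map, patch=3):
    """Collect patch x patch spectral patches, each labeled with its central pixel's label.

    cube      : (W, H, D) hyperspectral image
    label_map : (W, H) integer labels, with 0 marking unlabeled pixels
    """
    r = patch // 2
    samples, labels = [], []
    W, H, _ = cube.shape
    for i in range(r, W - r):
        for j in range(r, H - r):
            if label_map[i, j] == 0:          # skip unlabeled centers
                continue
            samples.append(cube[i - r:i + r + 1, j - r:j + r + 1, :])
            labels.append(label_map[i, j])
    return np.asarray(samples), np.asarray(labels)
```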

3.3. PPF

The PPF feature [32] is illustrated in Figure 1c. Pairs of labeled pixels are randomly chosen across the entire training set $\mathcal{T}$, and the label prediction is carried out by:
$\hat{y}_j^{(3)} = f^{(3)}\left( S_j^{(3)} \right), \quad \text{where } S_j^{(3)} = \{ x_j, x_q \}; \ x_j, x_q \notin \mathcal{T}, \ \hat{y}_j^{(3)} \in \mathcal{K}. \quad (8)$
During $f^{(3)}$'s training process, the labeling of $S_i^{(3)}$ is based on the following rule,
$L_i^{(3)} = \begin{cases} y_i & \text{if } y_i = y_p \\ \Psi & \text{if } y_i \neq y_p \end{cases}, \quad \text{where } x_i, x_p \in \mathcal{T}, \quad (9)$
where $y_i$ and $y_p$ are the labels associated with $x_i$ and $x_p$ from the pixel pair $\{x_i, x_p\}$, respectively. $\Psi$ is a newly introduced class label, denoting an “auxiliary” class not in the original $\mathcal{K}$. In this paper, we set $\Psi = K + 1$.
Comparing Equation (8) and Equation (9), it is evident that the ranges of $L_i^{(3)}$ and $\hat{y}_j^{(3)}$ are different. The element $\Psi$ in Equation (9) is outside the range of $f^{(3)}$. Therefore, there is a mismatch between the training labels and the prediction labels, which could lead to suboptimal classification performance.
Additionally, there are no spatial constraints imposed on the choice of $x_i$ and $x_p$ when generating the training samples $\left\{ S_i^{(3)}, L_i^{(3)} \right\}_{x_i, x_p \in \mathcal{T}}$ according to Equation (9). Therefore, $x_i$ and $x_p$ could be geographically far apart from each other. As Qian et al. argued in [36], spatial information could be more critical than its spectral counterpart, so such a pair generation scheme in Equation (8) could lead to high intra-class variances in the training data $S_i^{(3)}$ and possibly confuse the classifier $f^{(3)}$.
In practice, during the label prediction process in Equation (8), multiple $x_q$'s are normally chosen from the reference pixel $x_j$'s neighborhood $\mathcal{N}(x_j)$,
$M(j) = \mathrm{Card}\left\{ x_p \mid x_p \in \mathcal{N}(x_j), \ x_p \notin \mathcal{T} \right\}, \quad \text{where } x_j \notin \mathcal{T}, \quad (10)$
where $M(j)$ denotes the total number of testing pixel pairs assembled for $x_j$, and “$\mathrm{Card}$” in Equation (10) represents the cardinality of a set. A series of $S_j^{(3)}$'s are constructed as:
$S_j^{(3)}(q_1) = \{ x_j, x_{q_1} \}, \ S_j^{(3)}(q_2) = \{ x_j, x_{q_2} \}, \ \ldots, \ S_j^{(3)}(q_{M(j)}) = \{ x_j, x_{q_{M(j)}} \}, \quad (11)$
and their respective predictions are:
$\hat{y}_j^{(3)}(1) = f^{(3)}\left( S_j^{(3)}(q_1) \right), \ \ldots, \ \hat{y}_j^{(3)}(M(j)) = f^{(3)}\left( S_j^{(3)}(q_{M(j)}) \right). \quad (12)$
The final predicted label is determined by a majority voting,
$\hat{y}_j^{(3)} = \mathrm{mode}\left\{ \hat{y}_j^{(3)}(1), \ldots, \hat{y}_j^{(3)}(M(j)) \right\}, \quad (13)$
where “mode” in Equation (13) denotes statistical mode (i.e., the value that appears most often).
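A compact sketch of this voting procedure (Equations (10)–(13)), assuming a trained pair classifier `f3` that maps a 2 × D pixel pair to a class label; the helper names are illustrative, not the authors' code:

```python
from collections import Counter
import numpy as np

def ppf_predict(center, neighbors, f3):
    """Majority-vote PPF prediction for one test pixel.

    center    : (D,) spectral vector of the query pixel x_j
    neighbors : (M, D) spectral vectors chosen from N(x_j)
    f3        : callable mapping a (2, D) pixel pair to a class label
    """
    votes = [f3(np.stack([center, nb])) for nb in neighbors]
    return Counter(votes).most_common(1)[0][0]   # statistical mode of the M predictions
```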

3.4. Proposed SPPF

To address the lack of spatial constraints in selecting $\{x_i, x_p\}$ and to eliminate the extra $\Psi$ label in Equation (9), the new SPPF feature is proposed and illustrated in Figure 1d. The label prediction is carried out with $\hat{y}_j^{(4)} \in \mathcal{K}$,
$\hat{y}_j^{(4)} = f^{(4)}\left( \left\{ S_j^{(4)}(m) \right\}_{m=1}^{8} \right), \quad \text{where } S_j^{(4)}(m) = \{ x_j, x_{q_m} \}, \ x_j \notin \mathcal{T}, \ \{ x_{q_m} \}_{m=1}^{8} = \mathcal{N}_8(x_j). \quad (14)$
The SPPF prediction function $f^{(4)}$ always processes exactly eight SPPF pairs $\left\{ S_j^{(4)}(m) \right\}_{m=1}^{8}$, and these pairs holistically contribute to the prediction $\hat{y}_j^{(4)}$ without resorting to majority voting.
During $f^{(4)}$'s training process, only pixels from the reference pixel $x_i$'s eight-neighborhood are used for constructing $S_i^{(4)}(m)$. In addition, the labeling of $\left\{ S_i^{(4)}(m) \right\}_{m=1}^{8}$ is based purely on the central reference pixel $x_i$'s label $y_i$, eliminating any auxiliary class labels:
$L_i^{(4)} = y_i, \quad x_i \in \mathcal{T}. \quad (15)$
The SPPF training set with pixel pairs and labels is:
$\left\{ \left\{ S_i^{(4)}(m) \right\}_{m=1}^{8}, \ L_i^{(4)} \right\}_{i=1}^{N}, \quad (16)$
where $L_i^{(4)} = y_i$, $S_i^{(4)}(m) = \{ x_i, x_{p_m} \}$, $\{ x_{p_m} \}_{m=1}^{8} = \mathcal{N}_8(x_i)$, $x_i \in \mathcal{T}$.
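As an illustration of Equation (16), the eight SPPF pairs for one labeled pixel can be assembled as follows (a sketch assuming a (W, H, D) image array and ignoring border handling; the SPPF label of every pair is simply the label of the central pixel):

```python
import numpy as np

EIGHT_NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1),
                   ( 0, -1),          ( 0, 1),
                   ( 1, -1), ( 1, 0), ( 1, 1)]

def sppf_pairs(cube, i, j):
    """Return the eight SPPF pairs {x_ij, x_q_m} for the pixel at (i, j).

    cube : (W, H, D) hyperspectral image; (i, j) must not lie on the border.
    """
    center = cube[i, j]
    return np.stack([np.stack([center, cube[i + di, j + dj]])
                     for di, dj in EIGHT_NEIGHBORS])      # shape (8, 2, D)
```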

3.5. Proposed Classification Framework with SPPF

On top of the proposed SPPF feature, a multi-stream CNN architecture is proposed for classification, as shown in Figure 2. Overall, there are three major components: first, the multi-stream feature embedding layers; second, an aggregation layer; and finally, the classification layers.
The multi-stream feature embedding layers are designed to extract discriminative features from SPPFs. These layers are grouped into eight streams/sub-networks, each of which processes a single $S_i^{(4)}(m)$, $m = 1, \ldots, 8$. A major advantage of this design is the flexibility of incorporating various networks as streams/sub-networks. We have adopted both classical CNN implementations and alternatives such as [47]. All sub-networks have their input layers slightly adjusted to fit the pixel pair inputs, and their last scoring layers (i.e., SoftMax layers) are removed. After passing through the multi-stream feature embedding layers, the HSI data are transformed into eight streams of $K$-dimensional feature vectors, ready to be fed to the aggregation layer.
Empirically, the overall classification performance is insensitive to different choices of sub-networks, given a reasonable sub-network scale. In fact, due to the limited amount of training data, slightly shallower/simpler networks enjoy a marginal performance advantage.
The aggregation layer is one average pooling layer, which additively combines the information from all streams, providing robustness against noise and invariance to local rotations. The outputs of the multi-stream embedding layers, $F_{K \times 1}^{(m)}$, $m = 1, \ldots, 8$, are first concatenated to form $F^{(c)}$:
$F_{K \times 8}^{(c)} = \left[ F_{K \times 1}^{(1)\,T}, \ldots, F_{K \times 1}^{(8)\,T} \right]^{T}. \quad (17)$
Subsequently, an average pooling layer processes $F^{(c)}$ and outputs a vector $F_{K \times 1}^{(a)}$. Even if some streams/sub-networks are contaminated by noise, $F_{K \times 1}^{(a)}$ is less susceptible to it, thanks to the averaging effect. Lastly, two fully-connected (FC) layers and a SoftMax layer serve as the classification layers, providing confidence scores and prediction labels.
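The authors' implementation is in Torch7; purely as an illustration of the late-fusion head described above (stack the eight per-stream $K$-dimensional score vectors, average-pool across streams, then two FC layers and a SoftMax), a minimal PyTorch-style sketch is given below. The hidden-layer size is a placeholder, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Average-pool eight per-stream score vectors, then classify (illustrative sizes)."""
    def __init__(self, num_classes, hidden=100):
        super().__init__()
        self.fc1 = nn.Linear(num_classes, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, stream_outputs):
        # stream_outputs: list of 8 tensors, each (batch, K), one per SPPF sub-network
        x = torch.stack(stream_outputs, dim=1).mean(dim=1)   # average pooling across streams
        x = torch.relu(self.fc1(x))
        return torch.log_softmax(self.fc2(x), dim=1)         # class confidences
```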

4. Experimental Results and Discussion

In this section, we describe the datasets we use and the detailed configurations of the models we build for analyzing the new SPPF and the proposed classification framework. For HSI classification measurement, we choose the overall accuracy (OA) and the average class accuracy (AA) as the evaluation metrics (the overall accuracy is defined as the ratio of correctly labeled samples to all test samples, and the average accuracy is calculated by simply averaging the per-class accuracies).
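These two metrics can be computed directly from the predicted and true labels; a short numpy sketch consistent with the definitions above (function name is illustrative):

```python
import numpy as np

def oa_aa(y_true, y_pred):
    """Overall accuracy (OA) and average per-class accuracy (AA)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    oa = np.mean(y_true == y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    aa = np.mean(per_class)
    return oa, aa
```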

4.1. Dataset Description

We test the proposed framework and competing state-of-the-art methods on three publicly available HSI datasets. All datasets are openly accessible online (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes).
Indian Pines dataset: The NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Indian Pines image was captured over the agricultural Indian Pines test site in northwestern Indiana. The image has a spatial size of 145 × 145 pixels and 220 spectral bands covering 0.38–2.5 μm. Prior to the experiments, the water absorption bands were removed; hence, we are dealing with a 200-band spectrum. Moreover, the labeled classes are highly unbalanced. We choose the nine classes (out of 16) that each contain more than 400 samples, leaving 9234 samples in total for further analysis.
University of Pavia dataset: The University of Pavia image (PaviaU) was acquired by the ROSIS-03 sensor over the University of Pavia, Italy. The image measures 610 × 340 pixels with a spatial resolution of 1.3 m per pixel. There are 115 channels covering 0.43–0.86 μm, and 12 absorption bands were discarded due to noise concerns. There are nine different classes in the PaviaU reference map and 42,776 labeled samples used in this paper.
Salinas scene dataset: This scene was also collected by the 224-band AVIRIS sensor, over Salinas Valley, California, and is characterized by high spatial resolution (512 × 217 pixels at 3.7 m per pixel). As with the Indian Pines scene, we discarded the 20 water absorption bands, in this case bands [108–112], [154–167] and 224. This image was available only as at-sensor radiance data. It includes vegetables, bare soils and vineyard fields. The Salinas ground truth contains 16 classes and 54,129 samples.

4.2. Model Setup and Training

To demonstrate the effectiveness of the proposed method, we first include some classical shallow classification methods in the comparison experiments, namely SVM [10] and ELM [48]. We use the LIBSVM [49] and ELM (http://www.ntu.edu.sg/home/egbhuang/elm_codes.html) toolkits to build these two models, respectively. For the SVM model, we use the polynomial kernel function with γ equal to one and the rest of the hyper-parameters set to their defaults. For the ELM model, we set the number of hidden neurons to 7000 and choose the sine function as the activation function. Both models produce predictions from raw input spectral pixels.
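The experiments use the LIBSVM toolkit; an approximately equivalent setup in scikit-learn would look like the sketch below, where the synthetic arrays merely stand in for the raw spectral training and test pixels:

```python
import numpy as np
from sklearn.svm import SVC

# Approximate equivalent of the LIBSVM configuration described above:
# polynomial kernel, gamma = 1, all other hyper-parameters at their defaults.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(400, 200)), rng.integers(0, 9, size=400)
X_test = rng.normal(size=(100, 200))

svm = SVC(kernel="poly", gamma=1.0)
svm.fit(X_train, y_train)
y_pred = svm.predict(X_test)
```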
As for CNN-based deep models, we have reproduced three works: the CNN from [37], BsNet [47] and PPF [32]. To distinguish between configurations, we append subscripts to the model names. For example, CNN$_1$ and CNN$_9$ denote the model in [37] working on a spectral pixel (one pixel) and a spatial patch (nine pixels), respectively, while CNN$_2$ denotes the model in [37] embedded into the proposed framework and working on a pixel pair (two pixels). We further simplified the CNN$_2$ sub-network, which we refer to as CNN$_2$-lite in this paper, with the number of neurons in the fully-connected layers set to 400 and 200, respectively. In this way, the total number of parameters drops from 4,650,697 to 2,100,297 for a single sub-network, and the chance of overfitting is greatly decreased. Table 1 lists the different configurations for clarity.
For the PPF framework [32], we adjusted the voting neighborhood size during the testing stage to three (the default value is five), which matches the neighborhood size of the other spatial-spectral method (CNN$_9$) in our paper.
When building the proposed framework in Figure 2, we choose CNN$_2$, CNN$_2$-lite and BsNet as the sub-networks, which we refer to as the SPPF framework (CNN$_2$), SPPF framework (CNN$_2$-lite) and SPPF framework (BsNet), respectively. The sub-networks are initialized with default configurations and further fine-tuned for better performance during the training of the whole framework. The remaining parameters are kept at their defaults.
The proposed and comparative CNN models are all developed using the Torch7 deep learning package [50]. The training procedures for all models are run on an Intel Core-i7 3.4-GHz PC with an Nvidia Titan X GPU. For the proposed framework, we use the tricks in efficient backprop [51] for initialization and the adaptive subgradient online learning (Adagrad) strategy [52] for optimization, which provides strong regret guarantees. Further details and analysis of the performance of the network configurations are given in the following subsections on the experimental classification results.
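The experiments were implemented in Torch7; the PyTorch-style skeleton below only illustrates the Adagrad mini-batch setup described above, with toy stand-ins for the model and the data loader and an assumed learning rate (early stopping is omitted):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Toy stand-ins: a flattened linear classifier over 2 x 200 pixel-pair inputs and
# random data; only the Adagrad optimizer and batch size of 10 mirror the text.
model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 200, 9))
data = TensorDataset(torch.randn(100, 2, 200), torch.randint(0, 9, (100,)))
loader = DataLoader(data, batch_size=10, shuffle=True)

optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)   # lr is an illustrative guess
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for pairs, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(pairs), labels)
        loss.backward()
        optimizer.step()
```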

4.3. Configuration Tests

To establish a stable architecture, we have tested several configurations on the different datasets. The major configurations are kept the same, except for the parameters under verification.
Effect of training samples: Figure 3 illustrates the classification performance with various numbers of training samples on the different datasets. Generally, when the training size increases, the performance of all methods improves noticeably. In the following experiments, we choose 200 samples from each class, which provides a relatively large number of training samples to counter the overfitting problem. The remaining instances of each class form the testing set. The detailed dataset class descriptions and configurations are presented in Table 2. All samples are normalized to zero mean and unit variance before being fed into the different models.
Effect of batch size: The size of the sample batches during the training stage can also have some impact on model performance, especially for optimization methods like stochastic gradient descent. We have tested batch size values ranging from 1–50 on the three datasets. The results are shown in Figure 4. We can see that a larger batch size does not always yield better performance. Although mini-batch training helps maintain faster convergence, a larger batch size with limited samples occasionally disturbs the convergence stability. In the following experiments, we choose a batch of 10 samples for training, which gives the best classification accuracy in our framework.
Effect of the average pooling layer: The average pooling layer is designed to provide the framework with stability against noise and variance, playing a role similar to the voting stage in the PPF framework. As can be seen from Table 3, the accuracy improves by about 0.8% for the Pines and Salinas datasets. For the PaviaU dataset, the influence is less obvious; as the spectrum in PaviaU is much shorter (103 bands in fact), noise may be less of an issue than in the remaining two datasets with over 200 bands. Since it does no harm to the final accuracy, we keep the averaging layer in the following experiments for all datasets.
Effect of FC layers: There is no doubt that the capacity of the fully-connected layers can directly influence the final classification accuracy. While larger FC layers increase the classifier capacity, models with smaller ones are easier to train and better suit problems with limited training samples. During the framework design, we tried using two and three fully-connected layers alternatively. As shown in Table 4, the model with CNN$_2$-lite as the sub-networks performs differently on the different datasets. Take the Pines dataset as an example: when we change from two FC layers to three FC layers, the accuracy decreases by about 2%. Meanwhile, we observe a slight drop on the PaviaU dataset and about a 0.5% improvement on the Salinas dataset. One reason might be that a higher-capacity fully-connected classification module is needed when the dataset has a larger number of classes. In the remaining experiments, to keep one unified framework, we choose two FC layers as the prediction module.

4.4. Classification Results

In this subsection, we give the detailed results of the proposed model, as well as the comparative ones, on the three datasets. Table 8, Table 9 and Table 10 list the detailed OA and AA for the Pines, PaviaU and Salinas datasets, respectively.
One obvious observation is that the CNN-based deep models (Columns 4–10) show substantially better performance than shallow models like SVM (Column 2) and ELM (Column 3). This reflects the rapid development of deep learning methods in recent years and their competitive accuracy.
Spatial consistency provides a strong constraint for HSI classification. For the spatial-spectral CNN model (Column 5), there is about a 4% improvement compared to the spectral-only one (Column 4). The introduction of neighboring pixels not only gives the model more information during prediction, but also maintains contextual consistency.
For the SPPF-based models, it is verified that the performance of conventional deep learning solutions for HSI classification can be further boosted by the proposed SPPF-based framework. Comparing Column 5 with Column 7 and Column 6 with Column 10, we can observe that both the CNN [37] and BsNet obtain better performance when embedded in the proposed framework. This confirms that the proposed framework works efficiently without any increase in training samples. Besides, the SPPF framework with CNN$_2$-lite (Column 9) shows even better results than that with CNN$_2$ (Column 8), even though the latter contains many more parameters. Hence, the framework also shows an advantage in controlling the model size on the same task; when the problem has limited samples, that many parameters are not needed to maintain good performance. In short, by simply changing from conventional CNN models to the SPPF framework, we can greatly shrink the model size and at the same time reduce the chance of over-fitting.
Figure 5, Figure 6 and Figure 7 show the thematic maps accompanying the results in Table 8, Table 9 and Table 10, respectively. In Figure 5a, we give one pseudo-color image of the Indian Pines dataset for a better illustration of the natural scene and to show why spatial consistency holds. Figure 5b–g shows the classification results from SVM, ELM, Pixel-CNN$_1$, Patch-CNN$_9$, BsNet and the PPF framework; Figure 5h shows the result from the SPPF framework (CNN$_2$-lite), the best of the SPPF models; and Figure 5i gives the ground truth map. Similar results are shown in Figure 6 and Figure 7 for the remaining two datasets. It is straightforward to see that the results from SPPF are superior to those of the previous methods, with fewer noisy labels and improved spatial consistency.
The statistical differences of SPPF over the competing models under the standardized McNemar test are given in Table 5. According to the definition in [53], the McNemar statistic Z indicates that one method is statistically significantly different from another when its absolute value is larger than 2.58, and a larger value of Z indicates a greater accuracy improvement. In the table, all the values show that the proposed SPPF outperforms the previous CNN-based solutions.
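For reference, the standardized McNemar statistic is commonly computed as $Z = (f_{12} - f_{21})/\sqrt{f_{12} + f_{21}}$, where $f_{12}$ counts test samples correctly classified by the first method but not the second, and vice versa for $f_{21}$. A short sketch (function name illustrative, not the authors' code):

```python
import numpy as np

def mcnemar_z(y_true, pred_a, pred_b):
    """Standardized McNemar statistic comparing two classifiers on the same test set.

    Z > 0 means classifier A is more accurate; |Z| > 2.58 is the significance
    threshold used in the text above.
    """
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    f_ab = np.sum((pred_a == y_true) & (pred_b != y_true))  # A right, B wrong
    f_ba = np.sum((pred_a != y_true) & (pred_b == y_true))  # A wrong, B right
    return (f_ab - f_ba) / np.sqrt(f_ab + f_ba)
```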
Table 6 gives the total training time of each model and the average time cost of testing one sample. All the experiments are carried out with the Torch7 package on the same PC described in Section 4.2. It is worth mentioning that, since we adopt an early-stopping criterion (training stops when the training error stops dropping or the validation accuracy starts decreasing), the training time does not necessarily reflect each model’s complexity. The testing time, on the other hand, is a better reflection of model complexity. It is reasonable for the PPF model to take a much longer training time, as it deals with many more samples. Besides, the relatively high testing cost of PPF stems from the fact that the model must be run eight times before the voting stage, and the voting stage itself, which is not part of the model, also requires extra time. When conventional CNN models are embedded into the SPPF framework, their time consumption increases accordingly. However, the testing time difference of BsNet when embedded is less obvious, because most of its time consumption comes from the band grouping stage, which is unchanged in those cases; in addition, the number of input channels is decreased, which also saves time.

4.5. Discussion

Based on the comparison tests in Section 4.3 and the classification results in Section 4.4, a brief performance analysis is included in this section.
The primary comparative advantage of the proposed SPPF framework is the incorporation of contextual information, which significantly boosts spatial consistency. Unlike conventional CNN-based HSI classification methods (e.g., CNN$_1$, CNN$_9$), the discontinuity artifact (mislabeled pixels distributed randomly and sparsely, forming a salt-and-pepper noise pattern in Figure 5, Figure 6 and Figure 7) is greatly attenuated. In addition, the proposed spatial-information-preserving pixel pair selection scheme in Section 3.4 is also shown to be superior to the original PPF in Table 8, Table 9 and Table 10.
The advantages of the sub-network-based multi-stream framework are also verified. Firstly, this design offers a scalable network structure template without requiring a formidable manual selection process to determine a suitable sub-network configuration. According to the last three columns in Table 8, Table 9 and Table 10, this multi-stream framework is robust to different selections of sub-networks: both CNN$_2$ and BsNet provide decent performance, only marginally inferior to the optimal CNN$_2$-lite. Incidentally, from the comparison between CNN$_2$ and CNN$_2$-lite in the SPPF framework, it appears that more parameters (in CNN$_2$) do not necessarily guarantee improved classification accuracy.
For future work, the incorporation of pixel-group features with three or more pixels shows great potential. As a natural extension to the pixel pair feature, the triplet pixel feature is introduced and briefly tested below, with its initial performance already better than that of the original pixel pair feature. Further analysis of the choice of adjacent pixels and the label assignment strategy will be included in our future work.
Effect of spatial triplet pixel features: Since we have designed the framework to adopt multiple pixel pairs as input rather than the usual patch, it is interesting to explore whether features with more pixels can work better. In this subsection, we use triplet pixels as the input feature for the sub-networks. Compared with pixel pairs, triplet pixels provide the sub-networks with more information and thus greater potential classification capacity. To conduct this experiment, the first convolutional layer in each sub-network is changed to three input channels, slightly different from the two input channels in the SPPF framework; we denote this variant as CNN$_3$-lite. The remainder of the framework is kept the same. For sample formatting, we have instances $S_i^{(5)}(t) = \{ x_i, x_p, x_l \}$, with $x_i$ being the query central pixel and $x_p, x_l \in \mathcal{N}_8(x_i)$. The label prediction for one triplet pixel is carried out with $\hat{y}_j^{(5)} \in \mathcal{K}$,
$\hat{y}_j^{(5)} = f^{(5)}\left( \left\{ S_j^{(5)}(t) \right\}_{t=1}^{T} \right), \quad \text{where } S_j^{(5)}(t) = \{ x_j, x_{q_t}, x_{r_t} \}, \ x_j \notin \mathcal{T}, \ x_{q_t}, x_{r_t} \in \mathcal{N}_8(x_j), \ t = 1, \ldots, T, \quad (18)$
where $T = \binom{8}{3}$. The training label for these spatial triplet pixel features is defined as:
$L_i^{(5)} = y_i, \quad x_i \in \mathcal{T}. \quad (19)$
Since there are many possible triplet combinations within an eight-neighborhood, in practice we randomly choose eight triplets to fit into the framework presented previously (a sketch of this sampling follows below). The final prediction results are shown in Table 7. The accuracy shows a noticeable increase on all datasets. This observation supports our proposal that exploiting spatial consistency can greatly improve the model’s capacity.
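A minimal sketch of such random triplet sampling (illustrative only; the exact sampling procedure and random seed used in the experiments are not specified in the text):

```python
import itertools
import random
import numpy as np

def sample_triplets(cube, i, j, n_triplets=8, seed=0):
    """Randomly draw triplets {x_ij, x_q, x_r}, with both extra pixels from the 8-neighborhood.

    cube : (W, H, D) hyperspectral image; (i, j) must not lie on the border.
    """
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    rng = random.Random(seed)
    chosen = rng.sample(list(itertools.combinations(offsets, 2)), n_triplets)
    center = cube[i, j]
    return np.stack([np.stack([center,
                               cube[i + a[0], j + a[1]],
                               cube[i + b[0], j + b[1]]])
                     for a, b in chosen])                 # shape (n_triplets, 3, D)
```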

5. Conclusions

In this paper, a new spatial pixel pair-based, multi-stream convolutional neural network is proposed for HSI classification. Unlike conventional neural networks that regard the spatial information purely as an extra channel, the proposed framework captures additional spatial information via a series of sub-networks (streams) with flexible sub-network configurations and achieves superior classification accuracy on three publicly available datasets. Further work on grouping more pixels for additional improvement remains open. Source codes are available on the project page: https://hijeffery.github.io/HSI-SPPF/.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 61671385, No. 61231016, No. 61672429, No. 61301192, No. 61272288, No. 61303123), the National High Technology Research and Development Program of China (863 Program) (No. 2015AA016402) and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2017JM6021).

Author Contributions

Lingyan Ran, Wei Wei and Qilin Zhang conceived of and designed the experiments. Lingyan Ran performed the experiments. Yanning Zhang and Wei Wei analyzed the data. Lingyan Ran and Qilin Zhang wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
HSI: Hyperspectral image
CNN: Convolutional neural network
PPF: Pixel pair feature
SPPF: Spatial pixel pair feature

References

  1. Hughes, G.P. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef]
  2. Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675. [Google Scholar] [CrossRef]
  3. Ablin, R.; Sulochana, C.H. A survey of hyperspectral image classification in remote sensing. Int. J. Adv. Res. Comput. Commun. Eng. 2013, 2, 2986–3000. [Google Scholar]
  4. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [PubMed]
  5. Fan, J.; Chen, T.; Lu, S. Superpixel Guided Deep-Sparse-Representation Learning For Hyperspectral Image Classification. IEEE Trans. Circuits Syst. Video Technol. 2017. [Google Scholar] [CrossRef]
  6. Zhang, Q.; Hua, G.; Liu, W.; Liu, Z.; Zhang, Z. Can Visual Recognition Benefit from Auxiliary Information in Training? In Computer Vision—ACCV 2014; Lecture Notes in Computer Science; Springer International Publishing: New York, NY, USA, 2015; Volume 9003, pp. 65–80. [Google Scholar]
  7. Zhang, Q.; Hua, G.; Liu, W.; Liu, Z.; Zhang, Z. Auxiliary Training Information Assisted Visual Recognition. IPSJ Trans. Comput. Vision Appl. 2015, 7, 138–150. [Google Scholar] [CrossRef]
  8. Thompson, B. Canonical correlation analysis. In Encyclopedia of Statistics in Behavioral Science; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  9. Zhang, Q.; Hua, G. Multi-View Visual Recognition of Imperfect Testing Data. In Proceedings of the 23rd Annual ACM Conference on Multimedia Conference, Brisbane, Australia, 26–30 October 2015; pp. 561–570. [Google Scholar]
  10. Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814. [Google Scholar] [CrossRef]
  11. Jia, X.; Richards, J.A. Managing the spectral-spatial mix in context classification using Markov random fields. IEEE Geosci. Remote Sens. Lett. 2008, 5, 311–314. [Google Scholar] [CrossRef]
  12. Bakos, K.L.; Gamba, P. Hierarchical hybrid decision tree fusion of multiple hyperspectral data processing chains. IEEE Trans. Geosci. Remote Sens. 2011, 49, 388–394. [Google Scholar] [CrossRef]
  13. Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J. Spectral–spatial classification of hyperspectral imagery based on partitional clustering techniques. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2973–2987. [Google Scholar] [CrossRef]
  14. Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognit. 2010, 43, 2367–2379. [Google Scholar] [CrossRef]
  15. Tilton, J.C.; Tarabalka, Y.; Montesano, P.M.; Gofman, E. Best merge region-growing segmentation with integrated nonadjacent region object aggregation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4454–4467. [Google Scholar] [CrossRef]
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  17. Ran, L.; Zhang, Y.; Hua, G. CANNET: Context aware nonlocal convolutional networks for semantic image segmentation. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec, QC, Canada, 27–30 September 2015; pp. 4669–4673. [Google Scholar]
  18. Ran, L.; Zhang, Y.; Zhang, Q.; Yang, T. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images. Sensors 2017, 17, 1341. [Google Scholar] [CrossRef] [PubMed]
  19. Abeida, H.; Zhang, Q.; Li, J.; Merabtine, N. Iterative sparse asymptotic minimum variance based approaches for array processing. IEEE Trans. Signal Process. 2013, 61, 933–944. [Google Scholar] [CrossRef]
  20. Zhang, Q.; Abeida, H.; Xue, M.; Rowe, W.; Li, J. Fast implementation of sparse iterative covariance-based estimation for source localization. J. Acoust. Soc. Am. 2012, 131, 1249–1259. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, Q.; Abeida, H.; Xue, M.; Rowe, W.; Li, J. Fast implementation of sparse iterative covariance-based estimation for array processing. In Proceedings of the 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 6–9 November 2011; pp. 2031–2035. [Google Scholar]
  22. Hu, W.; Huang, Y.Y.; Wei, L.; Zhang, F.; Li, H.C. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  23. Zhao, W.Z.; Guo, Z.; Yue, J.; Zhang, X.Y.; Luo, L.Q. On combining multiscale deep learning features for the classification of hyperspectral remote sensing imagery. Int. J. Remote Sens. 2015, 36, 3368–3379. [Google Scholar] [CrossRef]
  24. Liang, H.M.; Li, Q. Hyperspectral Imagery Classification Using Sparse Representations of Convolutional Neural Network Features. Remote Sens. 2016, 8, 16. [Google Scholar] [CrossRef]
  25. Turra, G.; Arrigoni, S.; Signoroni, A. CNN-based Identification of Hyperspectral Bacterial Signatures for Digital Microbiology. In Proceedings of the International Conference on Image Analysis and Processing, Catania, Italy, 11–15 September 2017; pp. 500–510. [Google Scholar]
  26. Chen, Y.S.; Lin, Z.H.; Zhao, X.; Wang, G.; Gu, Y.F. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  27. Xing, C.; Ma, L.; Yang, X.Q. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images. J. Sens. 2016, 2016, 3632943. [Google Scholar] [CrossRef]
  28. Ma, X.R.; Geng, J.; Wang, H.Y. Hyperspectral image classification via contextual deep learning. Eurasip J. Image Video Process. 2015. [Google Scholar] [CrossRef]
  29. Ma, X.; Wang, H.; Geng, J.; Wang, J. Hyperspectral image classification with small training set by deep network and relative distance prior. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 3282–3285. [Google Scholar]
  30. Li, T.; Zhang, J.P.; Zhang, Y. Classification of Hyperspectral Image Based on Deep Belief Networks. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 5132–5136. [Google Scholar]
  31. Chen, Y.S.; Zhao, X.; Jia, X.P. Spectral-Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
  32. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2017, 55, 844–853. [Google Scholar] [CrossRef]
  33. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the International conference on Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  34. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Loupos, K. Deep learning-based man-made object detection from hyperspectral data. In International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2015; pp. 717–727. [Google Scholar]
  35. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  36. Qian, Y.; Ye, M.; Zhou, J. Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2276–2291. [Google Scholar] [CrossRef]
  37. Slavkovikj, V.; Verstockt, S.; De Neve, W.; Van Hoecke, S.; Van de Walle, R. Hyperspectral Image Classification with Convolutional Neural Networks. In Proceedings of the 23rd Annual ACM Conference on Multimedia Conference, Brisbane, Australia, 26–30 October 2015; pp. 1159–1162. [Google Scholar]
  38. Li, Y.; Xie, W.; Li, H. Hyperspectral image reconstruction by deep convolutional neural network for classification. Pattern Recognit. 2017, 63, 371–383. [Google Scholar] [CrossRef]
  39. Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98. [Google Scholar] [CrossRef]
  40. Shi, C.; Pun, C.M. Superpixel-based 3D Deep Neural Networks for Hyperspectral Image Classification. Pattern Recognit. 2017. [Google Scholar] [CrossRef]
  41. Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network. Remote Sens. Lett. 2017, 8, 438–447. [Google Scholar] [CrossRef]
  42. Lee, H.; Kwon, H. Going Deeper With Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef] [PubMed]
  43. Liu, Y.; Cao, G.; Sun, Q.; Siegel, M. Hyperspectral classification via deep networks and superpixel segmentation. Int. J. Remote Sens. 2015, 36, 3459–3482. [Google Scholar] [CrossRef]
  44. Romero, A.; Gatta, C.; Camps-Valls, G. Unsupervised deep feature extraction for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1349–1362. [Google Scholar] [CrossRef]
  45. Slavkovikj, V.; Verstockt, S.; De Neve, W.; Van Hoecke, S.; Van de Walle, R. Unsupervised spectral sub-feature learning for hyperspectral image classification. Int. J. Remote Sens. 2016, 37, 309–326. [Google Scholar] [CrossRef] [Green Version]
  46. Zabalza, J.; Ren, J.C.; Zheng, J.B.; Zhao, H.M.; Qing, C.M.; Yang, Z.J.; Du, P.J.; Marshall, S. Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing 2016, 185, 1–10. [Google Scholar] [CrossRef] [Green Version]
  47. Ran, L.; Zhang, Y.; Wei, W.; Yang, T. Bands Sensitive Convolutional Network for Hyperspectral Image Classification. In Proceedings of the International Conference on Internet Multimedia Computing and Service, Xi’an, China, 19–21 August 2016; pp. 268–272. [Google Scholar]
  48. Li, W.; Chen, C.; Su, H.; Du, Q. Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
  49. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  50. Collobert, R.; Kavukcuoglu, K.; Farabet, C. Torch7: A matlab-like environment for machine learning. In Proceedings of the BigLearn, NIPS Workshop, Granada, Spain, 12–17 December 2011. [Google Scholar]
  51. LeCun, Y.A.; Bottou, L.; Orr, G.B.; Müller, K.R. Efficient backprop. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; pp. 9–48. [Google Scholar]
  52. Duchi, J.; Hazan, E.; Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159. [Google Scholar]
  53. Villa, A.; Benediktsson, J.A.; Chanussot, J.; Jutten, C. Hyperspectral image classification with independent component discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876. [Google Scholar] [CrossRef]
Figure 1. Illustrations of popular features used in HSI classification. Early CNN-based HSI classification methods are based on either the raw spectral pixel feature [22] in (a) or the spectral pixel patch feature [37] in (b). As shown in (c), [32] adopts a random sampling scheme across the entire training set to construct a large number of labeled pixel pair feature (PPF) pairs. The proposed spatial pixel pair feature (SPPF) chooses a tight eight-neighborhood as $\mathcal{N}(x_0)$, from which SPPF pairs are built, such as $\{x_0, x_1\}$, $\{x_0, x_2\}$, etc. (a) Raw spectral pixel feature; (b) spectral pixel patch feature; (c) PPF feature; (d) proposed SPPF feature.
Figure 2. Proposed classification framework for HSI classification based on SPPF features.
Figure 3. Stability comparison results on three datasets with increasing number of training samples. Generally, more training samples result in models having better performance.
Figure 4. Batch size influence on training the SPPF framework (CNN$_2$-lite) with different datasets.
Figure 5. Results on the Indian Pines dataset. (a) Pseudo-color image of the scene. (b–h) Results from competing methods. (i) The ground truth. Our algorithm achieves the best classification accuracy among the competing methods. (Best viewed in color.)
Figure 6. Results on the University of Pavia dataset. (a) Pseudo color image of the scene. (b–h) Results from competing methods. (i) The ground truth. Our algorithm achieves the best classification accuracy among the competing methods. (Best viewed in color.)
Figure 7. Results on the Salinas dataset. (a) Pseudo color image of the scene. (b–h) Results from competing methods. (i) The ground truth. Our algorithm achieves the best classification accuracy among the competing methods. (Best viewed in color.)
Table 1. Configuration list of selected models working on the Indian Pines dataset for the illustration of input and output modifications. FC, fully-connected.
        | CNN_1        | CNN_9        | CNN_2        | CNN_2-Lite
Input   | 1 × 200 × 1  | 9 × 200 × 1  | 2 × 200 × 1  | 2 × 200 × 1
Conv1   | (1 × 16) #32 | (9 × 16) #32 | (2 × 16) #32 | (2 × 16) #32
Conv2   | (1 × 16) #32 | (1 × 16) #32 | (1 × 16) #32 | (2 × 16) #32
Conv3   | (1 × 16) #32 | (1 × 16) #32 | (1 × 16) #32 | (2 × 16) #32
FC1     | 4960–800     | 4960–800     | 4960–800     | 4960–400
FC2     | 800–800      | 800–800      | 800–800      | 400–200
FC3     | 800–K        | 800–K        | 800–K        | 200–K
Output  | SoftMax      | SoftMax      | –            | –
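The CNN_2 column of Table 1 can be sketched in PyTorch as follows; the ReLU nonlinearities, unit strides and unpadded convolutions are assumptions, since the table only lists layer shapes, but with 200 input bands they reproduce the 4960-dimensional flattened feature in the FC1 row.

```python
import torch
import torch.nn as nn

class CNN2(nn.Module):
    """Sketch of the CNN_2 sub-network column of Table 1 (Indian Pines, 200 bands).

    Assumes unpadded convolutions with unit stride, so the flattened size
    is 32 * 155 = 4960, matching the FC1 row of the table.
    """
    def __init__(self, num_bands=200, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(2, 16)), nn.ReLU(),   # (2, 200) -> (1, 185)
            nn.Conv2d(32, 32, kernel_size=(1, 16)), nn.ReLU(),  # (1, 185) -> (1, 170)
            nn.Conv2d(32, 32, kernel_size=(1, 16)), nn.ReLU(),  # (1, 170) -> (1, 155)
        )
        self.classifier = nn.Sequential(
            nn.Linear(32 * (num_bands - 45), 800), nn.ReLU(),
            nn.Linear(800, 800), nn.ReLU(),
            nn.Linear(800, num_classes),                        # raw scores; fused later
        )

    def forward(self, x):                   # x: (batch, 1, 2, num_bands)
        x = self.features(x)
        return self.classifier(x.flatten(1))
```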
Table 2. Detailed configuration of the three datasets (Pines, University of Pavia (PaviaU) and Salinas) used in this paper. For each class, 200 samples are randomly selected as the training set and the rest as the testing set. For Pines, we use only the top 9 classes with the largest number of samples.
No. | Pines: Class Name | Train | Test | PaviaU: Class Name | Train | Test   | Salinas: Class Name | Train | Test
1   | Corn-notill       | 200   | 1228 | Asphalt            | 200   | 6431   | Brocoli_1           | 200   | 1809
2   | Corn-mintill      | 200   | 630  | Meadows            | 200   | 18,449 | Brocoli_2           | 200   | 3526
3   | Grass-pasture     | 200   | 283  | Gravel             | 200   | 1899   | Fallow              | 200   | 1776
4   | Grass-trees       | 200   | 530  | Trees              | 200   | 2864   | Fallow_plow         | 200   | 1194
5   | Hay-win.          | 200   | 278  | Sheets             | 200   | 1145   | Fallow_smooth       | 200   | 2478
6   | Soy.-notill       | 200   | 772  | Bare Soil          | 200   | 4829   | Stubble             | 200   | 3759
7   | Soy.-mintill      | 200   | 2255 | Bitumen            | 200   | 1130   | Celery              | 200   | 3379
8   | Soy.-clean        | 200   | 393  | Bricks             | 200   | 3482   | Grapes              | 200   | 11,071
9   | Woods             | 200   | 1065 | Shadows            | 200   | 747    | Soil_vinyard        | 200   | 6003
10  | –                 | –     | –    | –                  | –     | –      | Corn_weeds          | 200   | 3078
11  | –                 | –     | –    | –                  | –     | –      | Lettuce_4wk         | 200   | 868
12  | –                 | –     | –    | –                  | –     | –      | Lettuce_5wk         | 200   | 1727
13  | –                 | –     | –    | –                  | –     | –      | Lettuce_6wk         | 200   | 716
14  | –                 | –     | –    | –                  | –     | –      | Lettuce_7wk         | 200   | 870
15  | –                 | –     | –    | –                  | –     | –      | Vinyard_un.         | 200   | 7068
16  | –                 | –     | –    | –                  | –     | –      | Vinyard_ve.         | 200   | 1607
Sum |                   | 1800  | 7434 |                    | 1800  | 40,976 |                     | 3200  | 50,929
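For reference, here is a minimal sketch of the per-class split described in the caption of Table 2 (200 randomly chosen training samples per class, the remainder for testing); the function name and the fixed random seed are illustrative choices, not the authors' code.

```python
import numpy as np

def split_per_class(labels, n_train=200, seed=0):
    """Randomly pick n_train samples of every class for training; the rest for testing.

    labels is a 1-D array of class indices for all labeled pixels.
    Returns index arrays into that labeled set.
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```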
Table 3. Classification performance of the SPPF framework using CNN_2-lite as the subnetwork with/without the average pooling layer (OA shown in percentage).
                   | Pines | PaviaU | Salinas
with avg layer     | 96.02 | 92.73  | 95.16
without avg layer  | 95.33 | 92.70  | 94.84
Table 4. Classification performance of the SPPF framework using CNN_2-lite as subnetworks with different numbers of FC layers (OA shown in percentage).
            | Pines | PaviaU | Salinas
2 FC layers | 93.89 | 91.02  | 94.54
3 FC layers | 91.79 | 90.73  | 95.13
Table 5. Statistical difference of SPPF over competitive models with the standardized McNemar test. BsNet, band-sensitive network.
   | SPPF vs. CNN_1 | SPPF vs. CNN_9 | SPPF vs. BsNet | SPPF vs. PPF
Z  | 21.0           | 19.04          | 20.02          | 20.00
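The Z values above come from the standardized McNemar test; below is a minimal sketch of the statistic, assuming the common formulation based on the disagreement counts of two classifiers on the same test pixels.

```python
import numpy as np

def mcnemar_z(correct_a, correct_b):
    """Standardized McNemar statistic between two classifiers.

    correct_a / correct_b are boolean arrays over the same test pixels,
    True where the respective classifier is correct. |Z| > 1.96 indicates
    a statistically significant difference at the 5% level.
    """
    correct_a = np.asarray(correct_a, bool)
    correct_b = np.asarray(correct_b, bool)
    f_ab = np.sum(correct_a & ~correct_b)   # A right, B wrong
    f_ba = np.sum(~correct_a & correct_b)   # B right, A wrong
    return (f_ab - f_ba) / np.sqrt(f_ab + f_ba)
```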
Table 6. Time consumption in training and testing deep models on the Pines dataset.
              | CNN_1 | CNN_9 | BsNet | PPF  | SPPF (CNN_2) | SPPF (CNN_2-Lite) | SPPF (BsNet)
Training (h)  | 0.80  | 0.50  | 0.70  | 46.1 | 1.48         | 1.42              | 10.15
Testing (ms)  | 0.72  | 0.81  | 32.0  | 16.4 | 5.64         | 5.00              | 37.50
Table 7. Classification performance of the extended framework using CNN_2-lite as subnetworks with triplet pixel features (OA shown in percentage).
                        | Pines | PaviaU | Salinas
Pair pixel features     | 95.47 | 94.51  | 94.69
Triplet pixel features  | 95.91 | 94.79  | 95.98
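A triplet feature can be formed analogously to the SPPF pairs, e.g., by combining the central spectrum with two of its neighbors; the exhaustive enumeration of neighbor combinations in the sketch below is an assumption for illustration.

```python
from itertools import combinations
import numpy as np

def build_triplets(x0, neighbors):
    """Form triplet features <x0, xi, xj> from the central spectrum x0 and
    its neighboring spectra; each triplet is a 3 x bands matrix."""
    return [np.stack([x0, xi, xj]) for xi, xj in combinations(neighbors, 2)]
```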
Table 8. Classification results on the Indian Pines dataset (accuracy shown in percentage).
Class  | SVM   | ELM   | Pixel-CNN_1 | Patch-CNN_9 | Patch-BsNet | PPF-Framework | SPPF (CNN_2) | SPPF (CNN_2-Lite) | SPPF (BsNet)
1      | 78.26 | 79.40 | 81.42       | 84.69       | 85.83       | 93.65         | 92.59        | 94.22             | 93.65
2      | 81.27 | 85.08 | 89.81       | 95.08       | 87.78       | 85.08         | 96.03        | 97.94             | 92.54
3      | 98.59 | 96.47 | 95.73       | 99.29       | 97.88       | 96.47         | 100.0        | 100.0             | 100.0
4      | 98.68 | 99.06 | 97.71       | 99.25       | 99.06       | 100.0         | 99.62        | 99.43             | 99.06
5      | 100.0 | 100.0 | 100.0       | 100.0       | 100.0       | 100.0         | 100.0        | 100.0             | 98.92
6      | 76.94 | 86.66 | 88.76       | 92.88       | 86.01       | 89.90         | 95.34        | 95.85             | 95.47
7      | 65.10 | 69.84 | 75.42       | 81.42       | 79.82       | 74.06         | 90.82        | 92.20             | 88.51
8      | 84.99 | 89.31 | 94.12       | 97.46       | 94.66       | 97.20         | 98.73        | 98.47             | 97.46
9      | 98.78 | 98.40 | 97.55       | 98.78       | 99.06       | 99.34         | 99.81        | 99.81             | 99.91
AA (%) | 86.96 | 89.36 | 91.17       | 94.32       | 92.23       | 92.85         | 96.99        | 97.55             | 96.17
OA (%) | 80.72 | 83.80 | 86.45       | 90.29       | 88.49       | 88.39         | 95.05        | 95.92             | 94.96
Table 9. Classification results on the Pavia University dataset (accuracy shown in percentage).
Class  | SVM   | ELM   | Pixel-CNN_1 | Patch-CNN_9 | Patch-BsNet | PPF-Framework | SPPF (CNN_2) | SPPF (CNN_2-Lite) | SPPF (BsNet)
1      | 82.69 | 82.71 | 86.85       | 94.44       | 89.50       | 87.62         | 92.38        | 93.89             | 93.81
2      | 87.65 | 91.23 | 84.19       | 86.41       | 90.19       | 90.16         | 90.90        | 91.71             | 92.19
3      | 79.36 | 79.20 | 80.75       | 94.95       | 88.47       | 97.53         | 84.89        | 83.46             | 83.10
4      | 94.24 | 93.02 | 93.57       | 94.34       | 96.61       | 99.25         | 97.14        | 97.07             | 97.42
5      | 99.83 | 99.30 | 100.0       | 99.91       | 99.83       | 99.64         | 99.83        | 100.0             | 100.0
6      | 89.36 | 91.51 | 84.54       | 88.48       | 94.22       | 85.62         | 96.15        | 95.92             | 87.49
7      | 89.38 | 92.92 | 92.47       | 97.26       | 92.92       | 73.61         | 97.35        | 97.35             | 97.96
8      | 81.68 | 86.79 | 85.06       | 90.95       | 82.25       | 95.93         | 86.24        | 87.71             | 86.07
9      | 99.87 | 99.87 | 99.73       | 100.0       | 99.87       | 97.93         | 99.20        | 99.73             | 100.0
AA (%) | 89.34 | 90.73 | 89.68       | 94.08       | 92.65       | 91.92         | 93.78        | 94.09             | 93.12
OA (%) | 87.25 | 89.55 | 86.17       | 90.17       | 90.77       | 86.95         | 92.09        | 92.73             | 91.94
Table 10. Classification results on the Salinas dataset (accuracy shown in percentage).
Class  | SVM   | ELM   | Pixel-CNN_1 | Patch-CNN_9 | Patch-BsNet | PPF-Framework | SPPF (CNN_2) | SPPF (CNN_2-Lite) | SPPF (BsNet)
1      | 99.00 | 99.72 | 99.39       | 99.72       | 99.78       | 99.56         | 100.0        | 99.83             | 99.67
2      | 99.66 | 99.55 | 98.04       | 99.26       | 99.86       | 99.63         | 99.91        | 99.72             | 99.94
3      | 99.94 | 100.0 | 99.55       | 99.49       | 99.61       | 99.72         | 99.94        | 100.0             | 99.55
4      | 98.74 | 99.08 | 99.08       | 99.41       | 99.66       | 99.50         | 99.66        | 99.75             | 99.50
5      | 98.99 | 99.27 | 98.34       | 97.54       | 98.75       | 99.35         | 99.39        | 99.64             | 99.48
6      | 99.76 | 99.79 | 99.55       | 99.87       | 99.39       | 99.57         | 100.0        | 100.0             | 100.0
7      | 99.50 | 99.53 | 99.41       | 99.67       | 99.59       | 99.53         | 99.94        | 100.0             | 99.79
8      | 70.07 | 80.87 | 79.60       | 82.20       | 76.73       | 87.22         | 87.00        | 87.55             | 89.03
9      | 99.30 | 99.63 | 97.70       | 98.87       | 99.07       | 98.52         | 99.62        | 99.40             | 99.47
10     | 97.66 | 95.81 | 91.29       | 94.54       | 92.85       | 96.82         | 97.95        | 98.05             | 98.57
11     | 99.77 | 99.42 | 98.39       | 98.16       | 97.24       | 99.54         | 99.88        | 99.88             | 99.31
12     | 99.94 | 100.0 | 99.36       | 100.0       | 99.94       | 99.88         | 99.88        | 100.0             | 99.88
13     | 99.58 | 98.74 | 98.60       | 99.86       | 97.49       | 99.86         | 99.44        | 99.44             | 99.02
14     | 98.85 | 98.28 | 93.91       | 97.47       | 97.82       | 97.01         | 99.77        | 99.66             | 99.31
15     | 70.05 | 76.06 | 68.81       | 79.84       | 75.17       | 74.65         | 85.22        | 86.62             | 74.31
16     | 99.63 | 99.07 | 98.51       | 98.13       | 98.88       | 99.69         | 99.56        | 99.25             | 99.19
AA (%) | 95.65 | 96.55 | 94.97       | 96.50       | 95.74       | 96.88         | 97.95        | 98.05             | 97.25
OA (%) | 88.88 | 91.99 | 89.87       | 92.49       | 90.62       | 93.10         | 94.87        | 95.16             | 93.99
