Article

One-Shot Dense Network with Polarized Attention for Hyperspectral Image Classification

1 College of Computer and Control Engineering, Qiqihar University, Qiqihar 161000, China
2 College of Information and Communication Engineering, Dalian Nationalities University, Dalian 116000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(9), 2265; https://doi.org/10.3390/rs14092265
Submission received: 28 March 2022 / Revised: 3 May 2022 / Accepted: 6 May 2022 / Published: 8 May 2022
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)

Abstract

In recent years, hyperspectral image (HSI) classification has become a hot research direction in remote sensing image processing. Benefiting from the development of deep learning, convolutional neural networks (CNNs) have achieved remarkable results in HSI classification, and numerous methods combining CNNs and attention mechanisms (AMs) have been proposed. However, to fully mine the features of HSI, some previous methods apply dense connections to enhance feature transfer between convolution layers. Although dense connections allow these methods to extract features adequately from a few training samples, they decrease model efficiency and increase computational cost. Furthermore, to balance model performance against complexity, the AMs in these methods heavily compress the channel or spatial resolution during training, which discards a large amount of useful information. To tackle these issues, in this article, a novel one-shot dense network with polarized attention, namely, OSDN, is proposed for HSI classification. More precisely, since HSI contains rich spectral and spatial information, the OSDN has two independent branches to extract spectral and spatial features, respectively. Similarly, the polarized AMs contain two components: channel-only AMs and spatial-only AMs. Both polarized AMs use a specially designed filtering method to reduce the complexity of the model while maintaining high internal resolution in both the channel and spatial dimensions. To verify the effectiveness and lightweight design of OSDN, extensive experiments were carried out on five benchmark HSI datasets, namely, Pavia University (PU), Kennedy Space Center (KSC), Botswana (BS), Houston 2013 (HS), and Salinas Valley (SV). Experimental results consistently show that the OSDN greatly reduces computational cost and parameters while maintaining high accuracy with only a few training samples.

1. Introduction

Benefiting from the increased spectral resolution of remote sensing sensors, the hyperspectral imaging technique shows great potential for obtaining high-quality land-cover information. A hyperspectral image (HSI) contains rich spectral and spatial information, and each pixel comprises hundreds of continuous, narrow spectral bands ranging from the visible to the near-infrared. Therefore, HSI has been widely used in many fields, such as urban planning [1], precision agriculture [2], and mineral exploration [3]. Among these applications, HSI classification is an important technique that aims to assign a unique class to each pixel [4]. However, due to insufficient labeled samples and highly redundant information, HSI classification remains a challenging task [5].
In the last decade, various methods have been proposed for HSI classification. These methods can be divided into two main categories: traditional machine-learning-based (ML-based) and modern deep-learning-based (DL-based) methods [6]. Generally, in ML-based methods, researchers first perform feature extraction on the raw HSI and then use classifiers to classify the extracted features. According to the types of features, these methods can be further divided into spectral-based and spatial–spectral-based methods. Spectral-based methods directly classify the spectral vector of each pixel, using classifiers such as random forest [7], k-nearest neighbors [8], and support vector machine (SVM) [9]. Moreover, many methods focus on reducing redundant spectral dimensions by mapping the high-dimensional spectral vector into a low-dimensional space, such as principal component analysis (PCA) [10], linear discriminant analysis [11], and independent component analysis [12]. However, it is difficult to identify land-cover types using spectral features alone, and the classification results are often filled with salt-and-pepper noise. Alternatively, many researchers have found that spatial features can provide additional useful information for classification tasks. On the basis of this consideration, a series of spatial–spectral-based methods has been proposed for HSI classification, such as Gabor wavelet transform [13], local binary patterns [14], and morphological profiles [15]. Although the above methods can improve classification accuracy, their feature extraction processes rely on a priori knowledge and appropriate parameter settings. These limitations may affect the robustness and discriminative power of the extracted features, making it difficult to achieve satisfactory results in complex scenarios [16].
In recent years, with the continuous improvement of computing power, the development of deep learning techniques has been greatly promoted. Deep neural networks can automatically extract highly robust and discriminative features from raw data and have made significant breakthroughs in many computer vision tasks, including image classification [17], semantic segmentation [18], and remote sensing image processing [19]. Naturally, in the field of HSI classification, research methods are gradually converging to state-of-the-art deep learning techniques, and many effective classification models based on deep learning have been proposed. Chen et al. [20] proposed a stacked autoencoder deep neural network for spatial–spectral classification, which was the first application of a DL-based method to HSI classification. After that, many DL-based classification methods were proposed, among which convolutional neural networks have attracted particular attention.
A convolutional neural network (CNN) with multiple hidden layers has a powerful feature learning capability and can provide more discriminative, fine-grained features for HSI classification. Hu et al. [21] first used a one-dimensional (1-D) CNN to extract deep spectral features from each pixel for HSI classification. In addition, Yu et al. [22] proposed an improved 1-D CNN framework that embeds pre-extracted hashing features in the network. To fully utilize spatial context information, two-dimensional (2-D) CNNs have been applied to HSI classification and achieved desirable performance. Chen et al. [23] extracted the first principal component from the HSI data by PCA along the spectral dimension and then fed it into a 2-D CNN model to extract deep spatial features. Yu et al. [24] applied multiple 2-D CNN layers with 1 × 1 convolutional kernels to extract deep spatial features for HSI classification. However, the high spectral dimension of HSI may increase the number of learnable parameters of a 2-D CNN model, and the correlation of local spectra may be neglected. Compared with the 2-D CNN model, the three-dimensional (3-D) CNN model can extract joint spatial–spectral features simultaneously. Mei et al. [25] proposed an unsupervised 3-D convolutional autoencoder to extract joint spatial–spectral features. Roy et al. [26] proposed a hybrid 3-D and 2-D CNN model for HSI classification (HYSN). This model first uses a 3-D CNN to extract shallow joint spatial–spectral features and then uses a 2-D CNN to extract more abstract spatial texture features. Moreover, to reduce the computational cost of 3-D CNNs, Zhang et al. [27] proposed a 3-D depth-wise separable CNN for HSI classification. Recently, inspired by the residual network [28], Zhong et al. [29] proposed a spectral–spatial residual network (SSRN), which uses spectral and spatial 3-D residual blocks to learn deep-level features of HSI. Subsequently, inspired by SSRN and DenseNet [30], Wang et al. [31] proposed an end-to-end fast densely connected spectral–spatial classification framework (FDSS), which can reuse features more effectively with few training samples. Although these CNN-based classification models can extract rich spatial and spectral features from HSI, the convolution kernel is local, so the receptive field must be expanded by stacking convolution layers, which may cause a large number of useless features to propagate to the deeper convolutional layers. These useless features affect the learning efficiency of the model and eventually lead to a decrease in classification accuracy. Thus, finding and focusing on the discriminative features of HSI is an important problem.
Inspired by the human visual system, many researchers have introduced the attention mechanism to computer vision tasks, such as object detection [32], image captioning [33], and image enhancement [34]. Since the attention mechanism can focus on valuable features or regions in the feature map, some researchers have successfully introduced it to HSI classification. Fang et al. [35] proposed a densely connected spectral-wise attention network, in which the squeeze-and-excitation (SE) attention module [36] is applied to recalibrate the contribution of each spectral band. Later, many similar spectral attention modules were introduced for HSI classification to highlight valuable spectral bands and suppress useless ones. For example, Li et al. [37] proposed a spectral band attention module trained through adversarial learning, which can explore the contribution of each band and avoid spectral distortion. Roy et al. [38] proposed a fused SE attention module, in which two different squeezing operations, global pooling and max pooling, are used to generate the excitation weights. To make the network simultaneously boost and suppress features in both the spectral and spatial dimensions, many networks based on spectral–spatial attention modules have been proposed for HSI classification. Inspired by SSRN and the convolutional block attention module (CBAM) [39], Ma et al. [40] proposed a double-branch multi-attention network (DBMA), in which the spectral and spatial branches are equipped with spectral-wise and spatial-wise attention, respectively. Subsequently, Li et al. [41] constructed a double-branch dual-attention (DBDA) network for HSI classification, in which the dual attention network (DANet) [42] is inserted separately into the two branches. Compared with CBAM, DANet can adaptively integrate local features and global dependencies. In addition, to capture long-distance spatial and spectral features, Shi et al. [43] proposed a 3-D coordination attention mechanism network, whose 3-D attention module is better adapted to the 3-D structure of HSI. Li et al. [44] proposed a spectral–spatial global context attention [45] network (SSGC) with less time cost to capture more discriminative features. Moreover, in [46], Shi et al. proposed a pyramidal convolution and iterative attention network (PCIA), in which each branch extracts hierarchical features. Although these attention-based methods can achieve good classification results, they heavily compress the spatial or spectral resolution when computing the attention feature maps. Meanwhile, their feature extraction process requires a high computational cost due to their straightforward use of densely connected modules.
To solve the above problems, and inspired by recent advances and previous works, we propose a one-shot dense network with polarized attention for HSI classification. Instead of following the 3-D dense connection scheme used in previous works to extract features from HSI, we propose a one-shot dense connection block that maintains good classification accuracy at a lower computational cost. Meanwhile, we add residual connections to this block, enhancing feature transfer and mitigating the gradient disappearance problem. In addition, the recently proposed polarized attention mechanism (PAM) [47] is introduced into the network to mine finer and higher-quality features. Compared with other attention mechanisms [36,39,42,45], it can maintain a relatively high resolution in the spectral and spatial dimensions and thus reduce the loss of features. Furthermore, the proposed network is composed of two branches that perform feature extraction in the spectral and spatial domains, respectively. Channel-only and spatial-only attention mechanisms are inserted into the respective branches to recalibrate the feature maps. After extracting the enhanced features from the two branches, we fuse them with a concatenation operation to obtain the spectral–spatial features. Finally, the fused features are fed into the fully connected layer to obtain the classification results. The main contributions of this paper are summarized as follows:
(1)
We propose a novel spectral–spatial network based on one-shot dense block and polarized attention for HSI classification. The proposed network has two independent feature extraction branches: the spectral branch with channel-only polarized attention applied to obtain spectral features, and the spatial branch with spatial-only polarized attention used to capture spatial features.
(2)
With the one-shot dense block, the number of parameters and the computational complexity of the network are greatly reduced. Meanwhile, a residual connection is added to the block, which can alleviate the performance saturation and gradient disappearance problems.
(3)
We apply both channel-only and spatial-only polarized attention in the proposed network. The channel-only polarized attention emphasizes valuable channel features and suppresses useless ones, while the spatial-only attention focuses on areas with more discriminative features. In addition, the attention mechanism preserves more resolution in both the channel and spatial dimensions and consumes less computational cost.
(4)
Some advanced technologies, including cosine annealing learning rate, Mish activation function [48], Dropout, and early stopping, are employed in the proposed network. For reproducibility, the code of the proposed network is available at https://github.com/HaiZhu-Pan/OSDN (accessed on 5 May 2022).
To show the effectiveness of the proposed network, a large number of experiments were carried out on five real-world HSI datasets, namely, PU, KSC, BS, HS, and SV. The experimental results consistently demonstrate that the proposed network achieves better accuracy than several widely used ML- and DL-based methods with few training samples and limited computational resources.
The remainder of this article is structured as follows: Some close backgrounds are reviewed in Section 2. In Section 3, our proposed network is presented with three parts in detail. In Section 4 and Section 5, comparative experiments and ablation analyses are performed to demonstrate the effectiveness of the proposed network. Finally, Section 6 provides some concluding remarks and suggestions for future work.

2. Background

In this section, we briefly introduce some important background techniques involved in the proposed HSI classification model, including the 3-D convolution operation, ResNet and DenseNet, and the attention mechanism.

2.1. 3-D Convolution Operation

Generally, convolution operations are the core of CNNs. At present, there are three types of convolution operations in CNN-based HSI classification models: 1-D, 2-D, and 3-D convolutions. Using a 1-D or 2-D CNN alone has some drawbacks, such as the lack of spatial relationship features or an overly complex network [26]. The main reason is that HSI is a 3-D data cube enriched with a large amount of spatial and spectral information. A 1-D CNN alone cannot extract discriminative features from the spatial dimension. Similarly, a deep 2-D CNN is more computationally complex and may miss spectral information between adjacent bands. This motivates the use of the 3-D convolution operation, which can make up for the shortcomings of the first two. The process of the 3-D convolution operation is shown in Figure 1.
As shown in Figure 1, the input of the 3-D convolution operation is a 4-D tensor $h_x \in \mathbb{R}^{h_n \times h_n \times s_n \times k_n}$, where $h_n \times h_n \times s_n$ is the size of the input data and $k_n$ is the number of channels (feature maps). The 3-D convolution operation contains $k_{n+1}$ convolutional kernels of size $\alpha_{n+1} \times \alpha_{n+1} \times d_{n+1}$, and the subsampling stride is $(s, s, s_1)$. The output of the 3-D convolution operation is also a 4-D tensor $h_{x+1} \in \mathbb{R}^{h_{n+1} \times h_{n+1} \times s_{n+1} \times k_{n+1}}$. More specifically, the spatial size of the output data is $h_{n+1} = 1 + (h_n - \alpha_{n+1})/s$, and the depth is $s_{n+1} = 1 + (s_n - d_{n+1})/s_1$. The 3-D convolution operation is defined as follows:
$$h_{l,i}^{x,y,z} = M\left(\sum_{m}\sum_{h=0}^{H_l-1}\sum_{w=0}^{W_l-1}\sum_{d=0}^{D_l-1} k_{l,i,m}^{h,w,d} \times h_{l-1,m}^{x+h,\,y+w,\,z+d} + b_{l,i}\right)$$
where $M(\cdot)$ is the Mish activation function, and the height, width, and depth of the convolution kernel are denoted by $H_l$, $W_l$, and $D_l$, respectively. Furthermore, $k_{l,i,m}^{h,w,d}$ denotes the weight of the $i$th convolution kernel at position $(h, w, d)$ on the $m$th feature map in the $l$th convolution layer, $h_{l-1,m}^{x+h,\,y+w,\,z+d}$ denotes the neuron value at position $(x+h, y+w, z+d)$ on the $m$th feature map in the $(l-1)$th layer, and $b_{l,i}$ is the corresponding bias.
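To make the shape arithmetic above concrete, the following minimal PyTorch sketch (our own illustration, not code from the paper) applies a 3-D convolution to a dummy patch whose dimensions mirror the first spectral convolution described later in Section 3.3.1: a 7 × 7 patch with 103 bands, 24 kernels of spectral length 7, and a spectral stride of 2.

```python
import torch
import torch.nn as nn

# Dummy input: (batch, channels k_n, depth s_n, height h_n, width h_n).
# PyTorch's Conv3d treats the spectral dimension as the "depth" axis.
x = torch.randn(1, 1, 103, 7, 7)  # one 7x7 patch with 103 bands, 1 feature map

# 24 kernels of size 7x1x1 (spectral x spatial x spatial), stride (2, 1, 1), no padding.
conv3d = nn.Conv3d(in_channels=1, out_channels=24,
                   kernel_size=(7, 1, 1), stride=(2, 1, 1), padding=0)

y = conv3d(x)
# Depth: 1 + (103 - 7) // 2 = 49, spatial: 1 + (7 - 1) // 1 = 7
print(y.shape)  # torch.Size([1, 24, 49, 7, 7])
```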

2.2. ResNet and DenseNet

Commonly, a trained deep neural network can extract features layer by layer to complete the classification task. However, as the number of convolutional layers increases, two main problems arise: gradient dispersion/explosion and network degradation. Numerous studies have shown that ResNet [28] and DenseNet [30] can alleviate the above problems and achieve feature reuse.
As illustrated in Figure 2, a shortcut connection is added to the base CNN structure in the residual block. The shortcut connection, also known as identity mapping, enables input features to be passed from a lower level to a higher level in a summative way. The output features of the $l$th residual block are defined as follows:
$$x_l = f_l(x_{l-1}) + x_{l-1}$$
where $f_l(\cdot)$ denotes the hidden layers, including convolution, batch normalization (BN), and Mish activation layers.
To further promote the flow of features in the network, Huang et al. [30] proposed a densely connected network, in which shortcut connections are used to concatenate the input and output features at each layer. This structure is shown in Figure 3. The output features of the $l$th dense block are computed as follows:
$$x_l = D_l\left(\left[x_0, x_1, x_2, \ldots, x_{l-1}\right]\right)$$
where $D_l(\cdot)$ includes BN, the Mish activation function, and a convolution operation, and $[\cdot]$ denotes the concatenation operation. In particular, a DenseNet with $l$ layers has $l(l+1)/2$ connections, whereas a plain CNN with the same number of layers has only $l$ connections.
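For illustration, the sketch below contrasts the two connection styles in PyTorch: a residual block that adds features and a dense block that concatenates them. The layer widths and 3-D kernel sizes are arbitrary placeholders, and the BN/Mish layers are omitted for brevity.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """x_l = f_l(x_{l-1}) + x_{l-1}: features are added (identity mapping)."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.f(x) + x  # summation keeps the channel count fixed

class DenseBlock3D(nn.Module):
    """x_l = D_l([x_0, x_1, ..., x_{l-1}]): every layer sees all earlier maps."""
    def __init__(self, in_channels, growth, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv3d(in_channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))  # concatenation, not addition
        return torch.cat(feats, dim=1)

x = torch.randn(1, 24, 49, 7, 7)
print(ResidualBlock3D(24)(x).shape)   # torch.Size([1, 24, 49, 7, 7])
print(DenseBlock3D(24, 12)(x).shape)  # torch.Size([1, 60, 49, 7, 7])
```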

2.3. Attention Mechanism

The attention mechanism is a common data processing method in deep learning. It helps the model assign different weights to different parts of the feature maps so as to extract more critical and discriminative features, thereby enabling the model to make more accurate judgments without imposing much additional computation and storage overhead. Existing attention mechanisms can be roughly divided into two types, i.e., soft attention and hard attention. The former attends to the channel or spatial information of the image, while the latter attends to the information at specific positions in the image. Most importantly, the soft attention mechanism is differentiable, so its weight parameters can be updated by backpropagation during training. Therefore, soft attention is widely used in the field of computer vision. For example, the SE attention module [36] can recalibrate each channel's contribution to the network. GCNet [45] not only extracts global contextual information but is also as lightweight as SENet. In addition, CBAM [39] and DANet [42] can extract attention maps in both the channel and spatial dimensions. However, these attention models have a low internal attention resolution, which discards a large amount of channel or spatial information, and they are computationally intensive when attending to both the channel and spatial dimensions. To alleviate these problems, the PAM [47] employs a distinctive filtering method to reduce the complexity of the model while maintaining high internal attention resolution in both the channel and spatial dimensions. The detailed implementations of the channel-only PAM and the spatial-only PAM are described in Section 3.1 and Section 3.2.

3. Methodology

3.1. Channel-Only Polarized Attention Mechanism

As shown in Figure 4, the channel-only PAM is constructed using the channel relations of the feature map. We assume an input feature map $A_c \in \mathbb{R}^{h \times w \times c}$, where $h$, $w$, and $c$ denote the height, width, and number of channels, respectively. First, $A_c$ is fed into a 2-D convolution layer with a kernel size of 1 × 1, generating a new feature map $B_c \in \mathbb{R}^{h \times w \times c/2}$, which is then reshaped to $D_c \in \mathbb{R}^{n \times c/2}$, where $n = h \times w$. Simultaneously, $A_c$ is also fed into another 2-D convolution layer with a kernel size of 1 × 1, generating a feature map $C_c \in \mathbb{R}^{h \times w \times 1}$. Then, $C_c$ is reshaped to $E_c \in \mathbb{R}^{1 \times 1 \times n}$, and the SoftMax function is applied to enhance the attention scope. Subsequently, a matrix multiplication is performed on $D_c$ and $E_c$, generating the feature map $F_c \in \mathbb{R}^{1 \times 1 \times c/2}$. After that, $F_c$ is fed into a bottleneck feature transform layer, which consists of two 1 × 1 convolution layers, a layer normalization operation, and a ReLU activation function, to obtain the dependency of each channel and raise the channel dimension from $c/2$ to $c$. Next, the Sigmoid function is used to keep the channel weights $G_c \in \mathbb{R}^{1 \times 1 \times c}$ between 0 and 1. Finally, a channel-wise multiplication is performed between $G_c$ and $A_c$ to generate the final channel-only polarized attention map $H_c \in \mathbb{R}^{h \times w \times c}$. The overall channel-only PAM can be defined as follows:
$$G_c = F_{SG}\left[W_3\left(\zeta_1\left(W_1(A_c)\right) \times F_{SM}\left(\zeta_2\left(W_2(A_c)\right)\right)\right)\right]$$
where $W_1$, $W_2$, and $W_3$ are 1 × 1 convolution layers; $\zeta_1$ and $\zeta_2$ are two tensor reshaping operations; $F_{SG}(\cdot)$ is the Sigmoid activation function; and $F_{SM}(\cdot)$ is the SoftMax activation function. The internal channel resolution between $W_1|W_2$ and $W_3$ is $c/2$. The final output of the channel-only PAM is formulated as
$$H_c = G_c \odot^{c} A_c \in \mathbb{R}^{h \times w \times c}$$
where $\odot^{c}$ denotes the channel-wise multiplication operation.
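A minimal PyTorch sketch of the channel-only PAM, written directly from the equation above, is given below. It is our own simplified reading: the bottleneck transform is collapsed into a single 1 × 1 convolution $W_3$ with layer normalization, so the layer ordering may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelOnlyPAM(nn.Module):
    """Simplified channel-only polarized attention following the equation above."""
    def __init__(self, c):
        super().__init__()
        self.w1 = nn.Conv2d(c, c // 2, kernel_size=1)   # B_c: h x w x c/2
        self.w2 = nn.Conv2d(c, 1, kernel_size=1)        # C_c: h x w x 1
        self.w3 = nn.Conv2d(c // 2, c, kernel_size=1)   # raise c/2 back to c
        self.ln = nn.LayerNorm(c)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, a):                               # a: (b, c, h, w)
        b, c, h, w = a.shape
        d = self.w1(a).reshape(b, c // 2, h * w)          # D_c: (b, c/2, n)
        e = self.softmax(self.w2(a).reshape(b, 1, h * w)) # E_c: (b, 1, n)
        f = torch.matmul(d, e.transpose(1, 2))            # F_c: (b, c/2, 1)
        g = self.w3(f.unsqueeze(-1))                      # (b, c, 1, 1)
        g = self.ln(g.reshape(b, c)).reshape(b, c, 1, 1)
        g = torch.sigmoid(g)                              # channel weights G_c
        return a * g                                      # H_c: channel-wise reweighting of A_c

x = torch.randn(2, 24, 7, 7)
print(ChannelOnlyPAM(24)(x).shape)  # torch.Size([2, 24, 7, 7])
```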

3.2. Spatial-Only Polarized Attention Mechanism

As shown in Figure 5, the spatial-only PAM is constructed from the spatial contextual position relationships of the feature map, given an input tensor $A_s \in \mathbb{R}^{h \times w \times c}$. First, $A_s$ is fed into two 1 × 1 convolution layers to generate the feature maps $B_s \in \mathbb{R}^{h \times w \times c/2}$ and $C_s \in \mathbb{R}^{h \times w \times c/2}$, respectively. Next, $B_s$ is reshaped to $D_s \in \mathbb{R}^{c/2 \times n}$, where $n = h \times w$. Second, a global average pooling operation is applied to $C_s$ to compress the global spatial features into a feature vector $E_s \in \mathbb{R}^{1 \times 1 \times c/2}$; meanwhile, since the spatial features of $C_s$ are compressed, we use the SoftMax function to perform feature enhancement on $E_s$. After that, a matrix multiplication is conducted on the attention maps $D_s$ and $E_s$, generating the feature map $F_s \in \mathbb{R}^{1 \times 1 \times n}$. Through reshape and Sigmoid operations, the spatial attention weight $G_s \in \mathbb{R}^{h \times w \times 1}$ is generated. The overall spatial-only PAM can be defined as follows:
$$G_s = F_{SG}\left[\zeta_3\left(F_{SM}\left(\zeta_1\left(F_{GP}\left(W_2(A_s)\right)\right)\right) \times \zeta_2\left(W_1(A_s)\right)\right)\right]$$
where $W_1$ and $W_2$ are two standard 1 × 1 convolution layers; $\zeta_1$, $\zeta_2$, and $\zeta_3$ are tensor reshaping operations; $F_{GP}(\cdot)$ is the global average pooling operation; $F_{SM}(\cdot)$ is the SoftMax operation; and $F_{SG}(\cdot)$ is the Sigmoid operation. The final output of the spatial-only PAM is formulated as
$$H_s = G_s \odot^{s} A_s \in \mathbb{R}^{h \times w \times c}$$
where $\odot^{s}$ denotes the spatial-wise multiplication operation.
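Analogously, a simplified PyTorch sketch of the spatial-only PAM, again written from the equation above rather than from the authors' code, is shown below.

```python
import torch
import torch.nn as nn

class SpatialOnlyPAM(nn.Module):
    """Simplified spatial-only polarized attention following the equation above."""
    def __init__(self, c):
        super().__init__()
        self.w1 = nn.Conv2d(c, c // 2, kernel_size=1)  # B_s: h x w x c/2
        self.w2 = nn.Conv2d(c, c // 2, kernel_size=1)  # C_s: h x w x c/2
        self.gap = nn.AdaptiveAvgPool2d(1)             # F_GP: global average pooling
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, a):                              # a: (b, c, h, w)
        b, c, h, w = a.shape
        d = self.w1(a).reshape(b, c // 2, h * w)       # D_s: (b, c/2, n)
        e = self.softmax(self.gap(self.w2(a)).reshape(b, 1, c // 2))  # E_s: (b, 1, c/2)
        f = torch.matmul(e, d)                         # F_s: (b, 1, n)
        g = torch.sigmoid(f.reshape(b, 1, h, w))       # spatial weights G_s
        return a * g                                   # H_s: spatial-wise reweighting of A_s

x = torch.randn(2, 24, 7, 7)
print(SpatialOnlyPAM(24)(x).shape)  # torch.Size([2, 24, 7, 7])
```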

3.3. One-Shot Dense Network with Polarized Attention

In this subsection, we describe in detail the proposed network, which consists of spectral feature extraction, spatial feature extraction, and spectral–spatial feature fusion. The structure of the proposed network is shown in Figure 6. In the following, we use the PU dataset as an example to illustrate the three components of the proposed network in detail.

3.3.1. Spectral and Spatial Feature Extraction of One-Shot Dense Block

As shown in Figure 6, this part contains two independent feature extraction processes: a spectral feature extraction process and a spatial feature extraction process. For feature extraction, inspired by ResNet and DenseNet, we propose a one-shot dense block. Unlike the dense block, the feature maps produced by each convolution (Conv) layer are concatenated only once, and each Conv layer has an equal number of input and output feature maps. Furthermore, we also insert a skip connection in the one-shot dense block, which enables this block to extract deeper features of HSI. Instead of an individual pixel vector, we first randomly select a 3-D patch cube of size 7 × 7 × 103 from the PU dataset as the network's input. In this way, the network can consider both the spatial context and the spectral information around the central pixel of the 3-D patch cube during classification. Before the spectral one-shot dense block, we first use a 3-D Conv layer with BN and Mish to reduce the spectral dimension of the input data; its kernel size is (1 × 1 × 7), it has 24 filters, its stride is (1, 1, 2), and no padding is applied. The generated feature maps have a size of (7 × 7 × 49, 24). Next, they are fed into the spectral one-shot dense block, which consists of a one-shot connected part and a residual connected part. The kernel size, number of filters, stride, and padding of all 3-D Conv layers in the one-shot connected part are (1 × 1 × 7), 12, (1, 1, 1), and (0, 0, 3), respectively. Then, we concatenate the generated feature maps along the channel dimension, producing feature maps of size (7 × 7 × 49, 60). Meanwhile, to implement the residual connected part, we use a 1 × 1 × 1 3-D Conv layer to reduce the channel dimension from 60 to 24 and then add the result to the last feature maps of the one-shot connected part. Finally, after the last 3-D Conv layer with a kernel size of (1 × 1 × 49), a (7 × 7 × 1, 24) feature map is generated.
Similar to the spectral feature extraction process, the spatial feature extraction process focuses only on the spatial features of the input data. The input size of the spatial one-shot dense block is (7 × 7 × 1, 24). All hyperparameters are the same as those of the spectral one-shot dense block, except that the kernel size of the spatial one-shot dense block is (3 × 3 × 1). The detailed spectral and spatial feature extraction processes are listed in Table 1 and Table 2, and a code sketch of the spectral block is given below.
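The following PyTorch sketch illustrates one possible reading of the spectral one-shot dense block: a chain of (1 × 1 × 7) convolutions whose outputs are concatenated once, a 1 × 1 × 1 reduction, a residual addition, and a final (1 × 1 × 49) convolution. The exact wiring of the residual addition is our assumption and may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralOneShotDenseBlock(nn.Module):
    """Sketch of the spectral one-shot dense block: every Conv output is
    concatenated only once at the end, plus a residual path (one possible
    reading of Section 3.3.1; the residual wiring is an assumption)."""
    def __init__(self, in_ch=24, growth=12, layers=3):
        super().__init__()
        chans = [in_ch] + [growth] * (layers - 1)
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(c, growth, kernel_size=(7, 1, 1), padding=(3, 0, 0)),
                nn.BatchNorm3d(growth), nn.Mish())
            for c in chans)
        concat_ch = in_ch + layers * growth                 # 24 + 3 * 12 = 60
        self.reduce = nn.Conv3d(concat_ch, in_ch, kernel_size=1)
        self.last = nn.Conv3d(in_ch, in_ch, kernel_size=(49, 1, 1))

    def forward(self, x):                                   # x: (b, 24, 49, 7, 7)
        feats, h = [x], x
        for conv in self.convs:
            h = conv(h)                                     # chain, no re-concatenation
            feats.append(h)
        out = self.reduce(torch.cat(feats, dim=1)) + x      # one-shot concat + residual
        return self.last(out)                               # (b, 24, 1, 7, 7)

x = torch.randn(1, 24, 49, 7, 7)
print(SpectralOneShotDenseBlock()(x).shape)  # torch.Size([1, 24, 1, 7, 7])
```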

3.3.2. Spectral and Spatial Feature Enhancement of Polarized Attention Mechanism

After the spectral and spatial feature extraction process, the feature maps are enriched with a large amount of spectral and spatial information. However, different channels and positions in these feature maps may make different contributions to the classification results. Therefore, as shown in Figure 6, to enhance valuable features and suppress non-valuable features, the feature maps are fed into the channel-only polarized attention (COPA) block and spatial-only polarized attention (SOPA) block. The input size of both the COPA block and the SOPA block is ( 7   ×   7   ×   24 ). A detailed description of these two attention mechanisms is given in Section 3.1 and Section 3.2. In addition, the detailed implementation of the feature enhancement process is listed in Table 3 and Table 4.

3.3.3. Spectral and Spatial Feature Fusion and Classification

After the spectral and spatial feature enhancement process, the resulting feature maps are separately fed into an adaptive average pooling (AdaptiveAvgPool) layer with BN and Mish. Compared with a fully connected layer, the AdaptiveAvgPool layer reduces the computational cost. The output size of this layer is (1 × 24). Finally, we fuse the two feature maps along the channel dimension and feed the fused features into a linear layer to obtain the classification results. Since we use PyTorch's cross-entropy loss as the loss function of the network, which already applies SoftMax internally, we do not add a separate SoftMax layer to obtain the final classification results. The detailed implementation of the feature fusion and classification process is listed in Table 5.
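A compact sketch of this fusion-and-classification head is given below. The exact placement of BN and Mish around the pooling layer is our assumption, and the class count of 9 corresponds to the PU example.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Pools each enhanced branch output, concatenates along the channel axis,
    and classifies with a single linear layer (sketch; BN/Mish placement assumed)."""
    def __init__(self, branch_ch=24, num_classes=9):       # 9 classes for the PU example
        super().__init__()
        def head():
            return nn.Sequential(nn.BatchNorm2d(branch_ch), nn.Mish(),
                                 nn.AdaptiveAvgPool2d(1))
        self.pool_spe, self.pool_spa = head(), head()
        self.fc = nn.Linear(2 * branch_ch, num_classes)

    def forward(self, spectral_feat, spatial_feat):         # each: (b, 24, 7, 7)
        a = self.pool_spe(spectral_feat).flatten(1)         # (b, 24)
        b = self.pool_spa(spatial_feat).flatten(1)          # (b, 24)
        return self.fc(torch.cat([a, b], dim=1))            # raw logits, no SoftMax layer

clf = FusionClassifier()
logits = clf(torch.randn(4, 24, 7, 7), torch.randn(4, 24, 7, 7))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 9, (4,)))  # SoftMax applied inside the loss
print(logits.shape, float(loss))
```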

4. Experiment

4.1. Hyperspectral Dataset Description

In this paper, we employed five well-known HSI datasets, namely, PU, KSC, BS, HS, and SV, to validate the generality and effectiveness of our proposed method. A detailed description of the above five datasets is presented as follows:
PU: The PU dataset was photographed by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over the University of Pavia. Its spatial dimensions and geometric resolutions are 610 × 340 and 1.3 m, respectively. Every pixel includes 115 spectral bands ranging from 430 nm to 860 nm. After dropping 12 noise-contaminated spectral bands, the number of spectral bands used for the experiment was 103. The ground truth consists of nine urban land-cover types with 42,776 labeled samples.
KSC: The KSC dataset was taken by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Kennedy Space Center, Florida, on 23 March 1996. The spatial dimensions and resolutions are 512 × 614 and 18 m, respectively. Each pixel includes 176 spectral bands ranging from 400 to 2500 nm. In addition, this dataset includes 13 land-cover types with 5211 labeled pixels.
BS: The BS dataset was acquired by the NASA EO-1 satellite over the Okavango Delta, Botswana, on 31 May 2001. The spatial size of this dataset is 1476 × 256, and the spatial resolution is 30 m. Furthermore, the dataset contains 145 spectral bands ranging from 400 to 2500 nm. The dataset contains 3248 labeled pixels, which are divided into 14 classes.
HS: The HS dataset was captured over the University of Houston campus and the neighboring urban area on 23 June 2012, through the NSF-funded Center for Airborne Laser Mapping (NCALM). Its height and width are 349 and 1905, respectively, and its spatial resolution is up to 2.5 m. This dataset consists of 144 spectral bands in the 380 to 1050 nm region. This dataset has 664,845 pixels with 15,029 labeled samples, divided into 15 land-cover types.
SV: The SV dataset was also gathered by the AVIRIS sensor, but it was collected in the Salinas Valley region of California. Its spatial dimensions and resolutions are 512 × 217 and 3.7 m, respectively. The raw SV dataset has 224 spectral bands ranging from 400 to 2500 nm. Twenty water absorption bands are abandoned. Therefore, this article uses 204 bands for the experimental dataset. This dataset contains 16 land-cover types with 54,129 labeled samples.

4.2. Experimental Evaluation Indicators

In this work, three evaluation indicators, namely, the overall accuracy (OA), average accuracy (AA), and Kappa coefficient (Kappa), are used to assess the classification performance of the proposed method [49]. OA is the percentage of correctly classified labeled samples among all labeled samples. AA is the average of the per-class accuracies, which assigns the same importance to each category. Kappa measures the consistency between the classification results and the ground truth; its value ranges from −1 to 1, but it usually falls between 0 and 1. Overall, the closer the above three indicators are to 1, the better the classification model.
To explain the above three evaluation indicators more intuitively, we first define the confusion matrix. In the confusion matrix, each row represents the actual label and each column represents the predicted label. The confusion matrix $A \in \mathbb{R}^{n \times n}$ is defined as follows:
$$A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix}$$
where the element $a_{ij}$ indicates the number of samples of class $i$ classified as class $j$, so that $\sum_{j=1}^{n} a_{ij}$ and $\sum_{i=1}^{n} a_{ij}$ give the row and column sums, respectively. Then, OA, AA, and Kappa can be defined as follows:
$$OA = \frac{\sum_{i=1}^{n} a_{ii}}{\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}}$$
$$AA = \frac{1}{n} \times \sum_{i=1}^{n} \frac{a_{ii}}{\sum_{j=1}^{n} a_{ij}}$$
$$Kappa = \frac{OA - p_e}{1 - p_e}, \quad p_e = \frac{\sum_{i=1}^{n}\left(\sum_{j=1}^{n} a_{ij} \times \sum_{j=1}^{n} a_{ji}\right)}{\left(\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\right)^{2}}$$
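For reference, the three indicators can be computed from a confusion matrix as in the following NumPy sketch; the matrix values are purely illustrative.

```python
import numpy as np

def classification_metrics(conf):
    """OA, AA, and Kappa from a confusion matrix A (rows: actual, cols: predicted)."""
    conf = np.asarray(conf, dtype=np.float64)
    total = conf.sum()
    oa = np.trace(conf) / total                           # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)          # per-class accuracy
    aa = per_class.mean()                                 # average accuracy
    pe = (conf.sum(axis=1) * conf.sum(axis=0)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Toy 3-class confusion matrix (illustrative values only)
A = np.array([[50, 2, 3],
              [4, 45, 1],
              [2, 3, 40]])
print(classification_metrics(A))
```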

4.3. Experimental Setting

The experiments were implemented on a deep learning workstation with two Intel Xeon E5-2680 v4 processors (35 MB L3 cache, 2.4 GHz clock speed, and 14 physical cores/28 threads each). Furthermore, it is equipped with 128 GB of DDR4 RAM and eight NVIDIA GeForce RTX 2080 Ti graphics processing units (GPUs) with 11 GB of memory each. The software environment is CUDA 11.2, PyTorch 1.1.0, and Python 3.7.
To validate the effectiveness of our proposed method, we selected eight representative methods for comparison: one traditional ML-based method and seven state-of-the-art DL-based methods. All comparison methods are briefly described as follows:
(1)
SVM: The SVM with a radial basis function (RBF) kernel is employed as a representative traditional method for HSI classification. It is implemented with scikit-learn [50]. Each labeled sample in the HSI is a continuous spectral vector, which is fed directly into the SVM without feature extraction or dimensionality reduction. The penalty parameter C and the RBF kernel width σ are selected by GridSearchCV, both in the range of $(10^{-2}, 10^{2})$.
(2)
HYSN [26]: The HYSN model has three 3-D convolution layers, one 2-D convolution layer, and two fully connected layers. The sizes of the convolution kernels of the 3-D convolution layers are 3 × 3 × 7, 3 × 3 × 5, and 3 × 3 × 3, respectively. The size of the convolution kernel of the 2-D convolution layer is 3 × 3.
(3)
SSRN [29]: The SSRN model consists of two residual convolutional blocks with convolution kernel sizes of 1 × 1 × 7 and 3 × 3 × 1, respectively. They are connected sequentially to extract deep-level spectral and spatial features, with BN and ReLU added after each convolutional layer.
(4)
FDSS [31]: The network structure of FDSS consists of three convolutional parts: a densely connected spectral feature extraction part, a dimension reduction part, and a densely connected spatial feature extraction part. The shapes of the convolution kernels of the three parts are 1 × 1 × 7, 1 × 1 × b (where b is the spectral depth of the generated feature map), and 3 × 3 × 1, respectively. Moreover, BN and ReLU are added before each convolutional layer.
(5)
DBMA [40]: The DBMA model is designed with a two-branch network structure. Each branch has a dense block and an attention block. Its dense block is the same as in FDSS. Moreover, the attention block is inspired by CBAM [39].
(6)
DBDA [41]: The DBDA model uses DANet [42] as the attention mechanism, and the rest of the network structures are the same as DBMA. In particular, it adopts the Mish as the activation function.
(7)
PCIA [46]: The PCIA model uses an iterative approach to construct an attention mechanism. This network structure also consists of two branches, but each branch uses a pyramid convolution module to perform feature extraction.
(8)
SSGC [44]: The GCNet [45] attention mechanism is introduced to the SSGC. The rest of the network architecture is the same as DBMA.
To ensure the impartiality of the comparison experiments, we used the same hyperparameters for all methods. For training the proposed method, we applied the Adam optimizer [51] to update the parameters for 100 training epochs, with an initial learning rate of 0.0005 for all datasets. The learning rate is dynamically adjusted every 25 epochs by cosine annealing [52]. Furthermore, if the loss on the validation set does not improve within 10 epochs, the network moves to the test phase. To balance efficiency and effectiveness, the spatial size of the HSI patch cube was set to 7 × 7, and the batch size was set to 32, as sketched in the training configuration below. Table 6, Table 7, Table 8, Table 9 and Table 10 provide the detailed distribution of the training, validation, and testing samples of the PU, KSC, BS, HS, and SV datasets. For reproducibility, the code of the proposed network is publicly available at https://github.com/HaiZhu-Pan/OSDN (accessed on 5 May 2022).
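The optimization settings above can be summarized in the following sketch; the tiny stand-in model and random tensors only serve to make the snippet runnable, while the optimizer, learning rate, cosine-annealing period, epoch budget, batch size, and early-stopping patience follow the description in this section.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and random PU-like data (103 bands, 7x7 patches, 9 classes);
# only the optimizer, schedule, batch size, epoch budget, and patience mirror the text.
model = nn.Sequential(nn.Flatten(), nn.Linear(103 * 7 * 7, 9))
train_loader = DataLoader(TensorDataset(torch.randn(64, 103, 7, 7),
                                        torch.randint(0, 9, (64,))), batch_size=32)
val_loader = DataLoader(TensorDataset(torch.randn(32, 103, 7, 7),
                                      torch.randint(0, 9, (32,))), batch_size=32)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0005)                  # initial learning rate
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=25)  # adjusted every 25 epochs

best_val, patience, wait = float("inf"), 10, 0
for epoch in range(100):                                               # 100 training epochs
    model.train()
    for patches, labels in train_loader:
        optimizer.zero_grad()
        criterion(model(patches), labels).backward()
        optimizer.step()
    scheduler.step()

    model.eval()                                                       # validation pass
    with torch.no_grad():
        val_loss = sum(criterion(model(p), y).item() for p, y in val_loader)
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:                                           # early stopping
            break
```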

4.4. Experimental Results

Table 11, Table 12, Table 13, Table 14 and Table 15 report the per-class classification accuracy, OA, AA, and Kappa on the five datasets. It is clear that the proposed OSDN produces the best OA, AA, and Kappa and provides a significant improvement over the other methods on all datasets. For example, when 1% of the samples are randomly chosen for training on the PU dataset (Table 11), the improvements in OA compared with the SVM, HYSN, SSRN, FDSS, DBMA, DBDA, PCIA, and SSGC methods are 9.96%, 5.87%, 3.60%, 1.72%, 2.16%, 2.06%, 2.99%, and 1.45%, respectively. Specifically, since SVM only uses spectral information to perform classification, its accuracy on all datasets is much lower than that of the other methods. Conversely, the eight DL-based methods (i.e., HYSN, SSRN, FDSS, DBMA, DBDA, SSGC, PCIA, and OSDN) all achieved good classification results on the five datasets because they can automatically extract deep, high-level, and discriminative spatial–spectral information from the 3-D patch cube. Furthermore, compared with SSRN and HYSN, the OA of FDSS improved by approximately 1–8% on all datasets, which indicates that the densely connected structure can extract features more adequately from a few training samples. In addition, the network structures of DBMA, DBDA, PCIA, and SSGC are very similar; their classification models are based on two main ideas: a dual-branch 3-D dense convolution block and a dual-branch attention mechanism. Among these dual-branch attention models, SSGC achieved the best classification results on most datasets owing to its ability to focus on global contextual information. In addition, the classification accuracy obtained by OSDN was higher than that of FDSS and SSGC because the PAM module in OSDN not only retains a large amount of spectral and spatial resolution but also dynamically enhances the feature maps. Finally, compared with the best comparison methods on the five datasets, the OA of OSDN improved by 1.45%, 1.86%, 1.46%, 1.62%, and 0.82%, respectively. At the same time, AA and Kappa improved to different degrees on the five datasets.
Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the ground truth, false-color image, and classification maps of all methods on the five datasets. Generally, the outline of each category is smoother and clearer in the classification maps of the proposed OSDN on all datasets. Because the SVM method cannot effectively extract spatial features, its classification maps contain a large amount of salt-and-pepper noise on the five datasets (Figure 7b, Figure 8b, Figure 9b, Figure 10b and Figure 11b). In addition, benefiting from the PAM module, our proposed OSDN is significantly better than the other methods at predicting unlabeled areas. Taking the PU dataset as an example, a careful look at Figure 7k suggests that there may be several trees (C4) in the area below the bare soil (C6). However, none of the comparison methods can recover the trees in this area. On the contrary, it is clear from Figure 7j that our proposed OSDN can predict eight trees in this area. Similarly, in the area to the left of these eight trees, the proposed OSDN delineates the region more completely than the other methods. All these observations validate that our proposed OSDN can accurately predict labeled categories and reasonably predict unlabeled categories on all datasets. Moreover, the above results further verify that the proposed one-shot dense connection can extract sufficient features from a few training samples, while the PAM module focuses on extracting finer features for classification.

5. Discussion

5.1. Comparison of Different Spatial Patch Size

In this subsection, we explore the relationship between the spatial patch size and the classification accuracy of the proposed network. In general, if the spatial patch size is too small, it will not contain enough spatial features, and the classification performance may decrease. Conversely, if the spatial patch size is too large, it will contain more mixed pixels and increase the computational cost. Therefore, an appropriate spatial patch size should be determined by both classification accuracy and computational cost. Figure 12 depicts the OA for spatial patch sizes ranging from 3 to 13 with an interval of 2 pixels. According to Figure 12, as the spatial patch size increased, the classification accuracy on the five datasets gradually increased, and the best OA was obtained when the patch size was 7 × 7. This indicates that more and more spatial features were included in the data cubes as the spatial size increased, so the classification results improved to some extent. However, as the patch size increased further, the OA on most datasets showed a decreasing trend. In conclusion, to balance OA and computational cost, we used 7 × 7 as the spatial patch size on the five datasets.

5.2. Comparison of Different Training Sample Proportions

It is well known that deep learning is a data-driven approach. In this subsection, we randomly chose 1%, 1.5%, 2%, 3%, 5%, 7%, 9%, and 60% of the samples from each dataset for training to explore the classification performance of the different models under different training sample proportions. As shown in Figure 13, when the training samples were sufficient, these models maintained classification results above 99% on all five datasets. However, obtaining enough training samples is a time-consuming and labor-intensive task. Therefore, one of the motivations of our proposed OSDN was to obtain good classification results with few training samples. Compared with the other methods, our proposed OSDN consistently achieved the highest OA across the various proportions of training samples, especially when the training samples were insufficient.

5.3. Comparison of Computational Cost and Complexity

One purpose of this article is to reduce the computational cost and complexity of the proposed network. Therefore, Table 16 compares the number of parameters, floating-point operations (FLOPs), training time, and testing time of the different methods on the five datasets. FLOPs is an indicator of model complexity, used to measure the computational cost of the model. All methods were evaluated at their best accuracy and trained with the same samples. Generally, from Table 16, it can be seen that the proposed OSDN achieved good results on all four metrics. Specifically, HYSN had the largest number of parameters and the highest FLOPs among the DL-based methods, owing to its deeper network structure. Compared with SSRN, although FDSS achieved good classification results (see Figure 14), it had more parameters and FLOPs due to its dense connections. In addition, it is worth noting that DBMA, DBDA, PCIA, and SSGC share a similar feature extraction backbone. Among these four methods, DBMA had the largest number of parameters and the highest FLOPs since its attention module contains fully connected operations. Furthermore, DBDA, PCIA, and SSGC had roughly the same number of parameters; however, PCIA required fewer FLOPs because of its multiscale pyramidal feature extraction block and iterative attention module. Our proposed OSDN required the fewest parameters and the lowest FLOPs on the five datasets due to its lightweight one-shot dense block and efficient PAM module. In addition, since SVM contains few parameters, it did not consume much time for training and testing. As for the time efficiency of OSDN, it is very competitive with the other similar comparative models (i.e., DBMA, DBDA, PCIA, and SSGC). Finally, combining Table 16 and Figure 14 with the above analysis, we conclude that the proposed OSDN delivers satisfactory classification accuracy with less computational cost and complexity.

5.4. Comparison of Different Dense Connections

To verify the effectiveness and lightness of the proposed one-shot dense block (OSDB), we compared it with two other classical dense blocks, namely, the dense block (DB) and the weak dense block (WDB) [53], as shown in Figure 15. Note that the overall structure of the proposed OSDN remained unchanged; only the feature extraction blocks of OSDN were replaced by DB and WDB, respectively. The number of parameters, FLOPs, and OA of the three dense blocks are listed in Table 17, Table 18, Table 19, Table 20 and Table 21. The experimental results show that OSDB had fewer parameters and FLOPs while achieving acceptable OA on the five datasets. From these five tables, although DB achieved the highest OA, it had a large number of parameters and FLOPs, which increased the complexity of the model. In addition, since WDB only retains the skip connection between two Conv layers of DB, its parameters and FLOPs were reduced to some extent, but its OA also decreased. Lastly, our proposed OSDB not only concatenates the subsequent feature maps only once but also incorporates residual connections, which enables it to maintain accuracy while reducing the amount of computation. In conclusion, although our proposed OSDB did not achieve the best values for all indicators, it is acceptable and reasonable given the motivation of reducing computational cost and complexity.

5.5. Ablation Analysis toward the Attention Module

This subsection describes an ablation analysis of the attention module on the five datasets. For a fair comparison, all networks were trained with the same hyperparameters and samples, as described in Section 4.3. As shown in Figure 16, "Model 0" indicates that no PAM was used in the OSDN; "Model 1" and "Model 2" indicate that only the spatial-only PAM or only the channel-only PAM was used in the proposed network, respectively; and "Model 3" indicates that both the spatial-only PAM and the channel-only PAM were used in the OSDN. According to the results, both Model 1 and Model 2 effectively improved the OA over Model 0 on the five datasets. It is worth noting that even though the OA of Model 0 was already very high on the SV dataset, Model 1, Model 2, and Model 3 improved the OA by 0.38%, 0.57%, and 0.76%, respectively. The experimental results consistently show that, compared with using Model 1 or Model 2 alone, Model 3 achieved the best OA on all datasets. Furthermore, we analyzed the impact of the attention module (Model 3) on the computational cost of the OSDN. After extensive experiments, we observed that the computation times before and after incorporating Model 3 into the OSDN were ≈0.0051 and ≈0.0064 s, respectively. In addition, the FLOPs and parameters of Model 3 were 0.04 M and 0.001 M, respectively. Therefore, the introduced attention mechanism barely increases the computational cost and complexity of the OSDN, while it selects the important channel and spatial features and improves the classification performance of the OSDN.

6. Conclusions

In this article, we constructed an OSDN to address the current problems of high complexity and inadequate feature extraction in CNN-based HSI classification models under small training-sample conditions. By incorporating the one-shot dense block, the number of parameters and the computational cost of the network are significantly reduced while an excellent feature extraction ability is preserved. Moreover, to fully extract refined and discriminative features, polarized AMs are introduced in the proposed OSDN. Compared with other AMs previously used in HSI classification models, the polarized AMs maintain high channel and spatial resolution during the training process. In addition, some advanced techniques, including the BN layer, the Mish activation function, the cosine annealing learning rate, the dropout layer, and early stopping, are used in the OSDN to prevent overfitting and accelerate network convergence.
The experiments demonstrated the effectiveness of the two crucial parts of the OSDN, namely, the one-shot dense block and the polarized AMs. Moreover, several state-of-the-art models, such as HYSN, SSRN, FDSS, DBMA, DBDA, PCIA, and SSGC, were used for comparison on five HSI datasets. With few training samples, the classification results consistently demonstrated that the OSDN not only accurately predicts the labeled samples but also reasonably predicts the unlabeled ones. At the same time, compared with the comparison models, the proposed OSDN is an efficient, lightweight model that achieves good classification performance with less computational cost even under limited training samples. In our future work, we will investigate more effective and lightweight models to extract discriminative features for HSI classification. Finally, the code developed for OSDN is available at https://github.com/HaiZhu-Pan/OSDN (accessed on 5 May 2022).

Author Contributions

Conceptualization, H.P. and M.L.; data curation, M.L.; formal analysis, H.P., M.L. and H.G.; methodology, H.P. and M.L.; software, M.L.; validation, M.L. and H.P.; writing—original draft, M.L.; writing—review and editing, H.P., H.G. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62071084, and the Fundamental Research Funds in Heilongjiang Provincial Universities, grant number 135509116.

Data Availability Statement

The Pavia University, Kennedy Space Center, Botswana, and Salinas Valley datasets are available online at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 5 May 2022). The Houston 2013 dataset is available at https://www.grss-ieee.org/resources/tutorials/data-fusion-tutorial-in-spanish/ (accessed on 5 May 2022).

Acknowledgments

The authors would like to thank the handing editor and anonymous reviewers for their insights and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhong, Y.; Cao, Q.; Zhao, J.; Ma, A.; Zhao, B.; Zhang, L. Optimal decision fusion for urban land-use/land-cover classification based on adaptive differential evolution using hyperspectral and LiDAR data. Remote Sens. 2017, 9, 868.
2. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020, 12, 2659.
3. Lorenz, S.; Salehi, S.; Kirsch, M.; Zimmermann, R.; Unger, G.; Vest Sørensen, E.; Gloaguen, R. Radiometric correction and 3D integration of long-range ground-based hyperspectral imagery for mineral exploration of vertical outcrops. Remote Sens. 2018, 10, 176.
4. Audebert, N.; Le Saux, B.; Lefèvre, S. Deep learning for classification of hyperspectral data: A comparative review. IEEE Geosci. Remote Sens. 2019, 7, 159–173.
5. Shahshahani, B.M.; Landgrebe, D.A. The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1087–1095.
6. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
7. Zhang, Y.; Cao, G.; Li, X.; Wang, B.; Fu, P. Active semi-supervised random forest for hyperspectral image classification. Remote Sens. 2019, 11, 2974.
8. Ma, L.; Crawford, M.M.; Tian, J. Local manifold learning-based k-nearest-neighbor for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4099–4109.
9. Kuo, B.-C.; Ho, H.-H.; Li, C.-H.; Hung, C.-C.; Taur, J.-S. A kernel-based feature selection method for SVM with RBF kernel for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 317–326.
10. Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-based edge-preserving features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151.
11. Bruce, L.M.; Koger, C.H.; Li, J. Dimensionality reduction of hyperspectral data using discrete wavelet transform feature extraction. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2331–2338.
12. Falco, N.; Benediktsson, J.A.; Bruzzone, L. A study on the effectiveness of different independent component analysis algorithms for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2183–2199.
13. Jia, S.; Wu, K.; Zhu, J.; Jia, X. Spectral–spatial Gabor surface feature fusion approach for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1142–1154.
14. Jia, S.; Hu, J.; Zhu, J.; Jia, X.; Li, Q. Three-dimensional local binary patterns for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2399–2413.
15. Gu, Y.; Liu, T.; Jia, X.; Benediktsson, J.A.; Chanussot, J. Nonlinear multiple kernel learning with multiple-structure-element extended morphological profiles for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3235–3247.
16. Cao, X.; Zhou, F.; Xu, L.; Meng, D.; Xu, Z.; Paisley, J. Hyperspectral image classification with Markov random fields and a convolutional neural network. IEEE Trans. Image Process. 2018, 27, 2354–2367.
17. Wang, P.; Fan, E.; Wang, P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognit. Lett. 2021, 141, 61–67.
18. Lateef, F.; Ruichek, Y. Survey on semantic segmentation using deep learning techniques. Neurocomputing 2019, 338, 321–348.
19. Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. 2016, 4, 22–40.
20. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
21. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619.
22. Yu, C.; Zhao, M.; Song, M.; Wang, Y.; Li, F.; Han, R.; Chang, C.-I. Hyperspectral image classification method based on CNN architecture embedding with hashing semantic feature. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1866–1881.
23. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
24. Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98.
25. Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820.
26. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281.
27. Zhang, H.; Li, Y.; Jiang, Y.; Wang, P.; Shen, Q.; Shen, C. Hyperspectral classification based on lightweight 3-D-CNN with transfer learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5813–5828.
28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
29. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
30. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
31. Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A fast dense spectral–spatial convolution network framework for hyperspectral images classification. Remote Sens. 2018, 10, 1068.
32. Tian, Z.; Zhan, R.; Hu, J.; Wang, W.; He, Z.; Zhuang, Z. Generating anchor boxes based on attention mechanism for object detection in remote sensing images. Remote Sens. 2020, 12, 2416.
33. You, Q.; Jin, H.; Wang, Z.; Fang, C.; Luo, J. Image captioning with semantic attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4651–4659.
34. Atoum, Y.; Ye, M.; Ren, L.; Tai, Y.; Liu, X. Color-wise attention network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 506–507.
35. Fang, B.; Li, Y.; Zhang, H.; Chan, J.C.-W. Hyperspectral images classification based on dense convolutional networks with spectral-wise attention mechanism. Remote Sens. 2019, 11, 159.
36. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
37. Li, J.; Cui, R.; Li, B.; Song, R.; Li, Y.; Dai, Y.; Du, Q. Hyperspectral image super-resolution by band attention through adversarial learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4304–4318.
38. Roy, S.K.; Dubey, S.R.; Chatterjee, S.; Chaudhuri, B.B. FuSENet: Fused squeeze-and-excitation network for spectral-spatial hyperspectral image classification. IET Image Process. 2020, 14, 1653–1661.
  39. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  40. Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-branch multi-attention mechanism network for hyperspectral image classification. Remote Sens. 2019, 11, 1307. [Google Scholar] [CrossRef] [Green Version]
  41. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of hyperspectral image based on double-branch dual-attention mechanism network. Remote Sens. 2020, 12, 582. [Google Scholar] [CrossRef] [Green Version]
  42. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154. [Google Scholar]
  43. Shi, C.; Liao, D.; Zhang, T.; Wang, L. Hyperspectral Image Classification Based on 3D Coordination Attention Mechanism Network. Remote Sens. 2022, 14, 608. [Google Scholar] [CrossRef]
  44. Li, Z.; Cui, X.; Wang, L.; Zhang, H.; Zhu, X.; Zhang, Y. Spectral and spatial global context attention for hyperspectral image classification. Remote Sens. 2021, 13, 771. [Google Scholar] [CrossRef]
  45. Cao, Y.; Xu, J.; Lin, S.; Wei, F.; Hu, H. GCNet: Non-local networks meet squeeze-excitation networks and beyond. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 1971–1980. [Google Scholar]
  46. Shi, H.; Cao, G.; Ge, Z.; Zhang, Y.; Fu, P. Double-Branch Network with Pyramidal Convolution and Iterative Attention for Hyperspectral Image Classification. Remote Sens. 2021, 13, 1403. [Google Scholar] [CrossRef]
  47. Liu, H.; Liu, F.; Fan, X.; Huang, D. Polarized self-attention: Towards high-quality pixel-wise regression. arXiv 2021, arXiv:2107.00782. [Google Scholar]
  48. Misra, D. Mish: A self regularized non-monotonic activation function. arXiv 2019, arXiv:1908.08681. [Google Scholar]
  49. Liu, D.; Han, G.; Liu, P.; Yang, H.; Sun, X.; Li, Q.; Wu, J. A Novel 2D-3D CNN with Spectral-Spatial Multi-Scale Feature Fusion for Hyperspectral Image Classification. Remote Sens. 2021, 13, 4621. [Google Scholar] [CrossRef]
  50. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  51. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  52. Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
  53. Ge, Z.; Cao, G.; Shi, H.; Zhang, Y.; Li, X.; Fu, P. Compound Multiscale Weak Dense Network with Hybrid Attention for Hyperspectral Image Classification. Remote Sens. 2021, 13, 3305. [Google Scholar] [CrossRef]
Figure 1. Illustration of the 3-D convolution operation.
Figure 2. Illustration of the residual block in the ResNet.
Figure 3. Illustration of the dense block in the DenseNet.
Figure 4. Details of the channel-only polarized attention mechanism in our network.
Figure 5. Details of the spatial-only polarized attention mechanism in our network.
Figure 6. The structure of the proposed network.
Figure 7. Full-factor classification maps for the PU dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 8. Full-factor classification maps for the KSC dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 9. Full-factor classification maps for the BS dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 10. Full-factor classification maps for the HS dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 11. Full-factor classification maps for the SV dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 12. Comparison of OA using different spatial window sizes for the five datasets.
Figure 13. Comparison of OA using different training sample proportions for the five datasets: (a) PU, (b) KSC, (c) BS, (d) HS, and (e) SV.
Figure 14. Classification results of different methods on the five datasets. (a) OA. (b) AA. (c) Kappa.
Figure 15. Different dense blocks. (a) Dense block. (b) Weak dense block. (c) One-shot dense block.
Figure 16. OA (%) of OSDN with different attention models on five datasets.
Table 1. Detailed steps of the spectral one-shot dense block.
Input Size | Layer Operations | Kernel Size | Filters | Output Size
(7 × 7 × 103, 1) | BN-Mish-Conv3D | (1 × 1 × 7) | 24 | (7 × 7 × 49, 24)
(7 × 7 × 49, 24) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12)
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12)
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12)
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12)
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12)
(7 × 7 × 49, 12) × 5 | Concatenate | / | / | (7 × 7 × 49, 60)
(7 × 7 × 49, 60) | BN-Mish-Conv3D | (1 × 1 × 1) | 24 | (7 × 7 × 49, 24)
(7 × 7 × 49, 24) / (7 × 7 × 49, 24) | Element-wise Sum | / | / | (7 × 7 × 49, 24)
(7 × 7 × 49, 24) | BN-Mish-Conv3D | (1 × 1 × 49) | 24 | (7 × 7 × 1, 24)
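To make the layer-by-layer sizes in Table 1 easier to follow, the sketch below re-expresses the spectral one-shot dense block in PyTorch for a 7 × 7 × 103 PU patch laid out as (N, 1, 103, 7, 7). It is a minimal illustration rather than the authors' released code: the class and argument names, the stride of 2 with zero padding in the first convolution, and the padding of 3 in the 1 × 1 × 7 units are our assumptions, chosen only to reproduce the tabulated output sizes.

```python
# A minimal sketch of the spectral one-shot dense block in Table 1 (assumed hyper-parameters).
# nn.Mish requires PyTorch >= 1.9.
import torch
import torch.nn as nn


def bn_mish_conv3d(in_ch, out_ch, kernel, stride=(1, 1, 1), padding=(0, 0, 0)):
    # Pre-activation unit used throughout the block: BN -> Mish -> Conv3D.
    return nn.Sequential(
        nn.BatchNorm3d(in_ch),
        nn.Mish(),
        nn.Conv3d(in_ch, out_ch, kernel_size=kernel, stride=stride, padding=padding),
    )


class SpectralOneShotDenseBlock(nn.Module):
    def __init__(self, bands=103, growth=12, width=24):
        super().__init__()
        reduced = (bands - 7) // 2 + 1                       # 103 spectral slices -> 49
        self.stem = bn_mish_conv3d(1, width, (7, 1, 1), stride=(2, 1, 1))
        self.convs = nn.ModuleList(
            [bn_mish_conv3d(width if i == 0 else growth, growth, (7, 1, 1), padding=(3, 0, 0))
             for i in range(5)]
        )
        self.transition = bn_mish_conv3d(5 * growth, width, (1, 1, 1))
        self.squeeze = bn_mish_conv3d(width, width, (reduced, 1, 1))  # collapse the spectral dim

    def forward(self, x):                      # x: (N, 1, 103, 7, 7)
        x0 = self.stem(x)                      # (N, 24, 49, 7, 7)
        feats, h = [], x0
        for conv in self.convs:                # one-shot aggregation: concatenate once at the end
            h = conv(h)
            feats.append(h)
        y = self.transition(torch.cat(feats, dim=1))   # (N, 24, 49, 7, 7)
        y = y + x0                             # element-wise sum in Table 1 (residual path)
        return self.squeeze(y)                 # (N, 24, 1, 7, 7)
```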
Table 2. Detailed steps of the spatial one-shot dense block.
Input Size | Layer Operations | Kernel Size | Filters | Output Size
(7 × 7 × 103, 1) | BN-Mish-Conv3D | (1 × 1 × 103) | 24 | (7 × 7 × 1, 24)
(7 × 7 × 1, 24) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12)
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12)
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12)
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12)
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12)
(7 × 7 × 1, 12) × 5 | Concatenate | / | / | (7 × 7 × 1, 60)
(7 × 7 × 1, 60) | BN-Mish-Conv3D | (1 × 1 × 1) | 24 | (7 × 7 × 1, 24)
(7 × 7 × 1, 24) / (7 × 7 × 1, 24) | Element-wise Sum | / | / | (7 × 7 × 1, 24)
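A matching sketch of the spatial one-shot dense block in Table 2 follows, under the same assumptions as the spectral branch; in particular, the padding of the 3 × 3 × 1 convolutions is assumed to preserve the 7 × 7 window so that the five intermediate maps can be concatenated and summed with the stem output.

```python
# A minimal sketch of the spatial one-shot dense block in Table 2 (assumed hyper-parameters).
import torch
import torch.nn as nn


class SpatialOneShotDenseBlock(nn.Module):
    def __init__(self, bands=103, growth=12, width=24):
        super().__init__()
        self.stem = nn.Sequential(              # BN-Mish-Conv3D (1 x 1 x bands): collapse the spectral dim
            nn.BatchNorm3d(1), nn.Mish(),
            nn.Conv3d(1, width, kernel_size=(bands, 1, 1)),
        )

        def unit(in_ch):                        # BN-Mish-Conv3D (3 x 3 x 1), padding keeps 7 x 7
            return nn.Sequential(
                nn.BatchNorm3d(in_ch), nn.Mish(),
                nn.Conv3d(in_ch, growth, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            )

        self.convs = nn.ModuleList([unit(width if i == 0 else growth) for i in range(5)])
        self.transition = nn.Sequential(
            nn.BatchNorm3d(5 * growth), nn.Mish(),
            nn.Conv3d(5 * growth, width, kernel_size=1),
        )

    def forward(self, x):                       # x: (N, 1, 103, 7, 7)
        x0 = self.stem(x)                       # (N, 24, 1, 7, 7)
        feats, h = [], x0
        for conv in self.convs:
            h = conv(h)
            feats.append(h)                     # five (N, 12, 1, 7, 7) maps
        y = self.transition(torch.cat(feats, dim=1))
        return y + x0                           # element-wise sum -> (N, 24, 1, 7, 7)
```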
Table 3. Detailed steps of the channel-only polarized attention block.
Input Size | Layer Operations | Kernel Size | Filters | Output Size
(7 × 7 × 24) | Conv2D | (1 × 1) | 12 | (7 × 7 × 12)
(7 × 7 × 12) | Reshape | / | / | (49 × 12)
(7 × 7 × 24) | Conv2D | (1 × 1) | 1 | (7 × 7 × 1)
(7 × 7 × 1) | Reshape | / | / | (1 × 1 × 49)
(1 × 1 × 49) | SoftMax | / | / | (1 × 1 × 49)
(1 × 1 × 49) / (49 × 12) | Matrix Multiplication | / | / | (1 × 1 × 12)
(1 × 1 × 12) | Conv2D | (1 × 1) | 12/r | (1 × 1 × 12/r)
(1 × 1 × 12/r) | LayerNorm and ReLU | / | / | (1 × 1 × 12/r)
(1 × 1 × 12/r) | Conv2D | (1 × 1) | 24 | (1 × 1 × 24)
(1 × 1 × 24) | Sigmoid | / | / | (1 × 1 × 24)
(1 × 1 × 24) / (7 × 7 × 24) | Dot Multiplication | / | / | (7 × 7 × 24)
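The channel-only polarized attention steps in Table 3 can be sketched as follows for a (N, 24, 7, 7) feature map. This is an illustrative reading of the table, not the authors' implementation: the reduction ratio r is left as a hyper-parameter, and GroupNorm with a single group stands in for the table's LayerNorm on the (N, C, 1, 1) channel descriptor.

```python
# A hedged sketch of the channel-only polarized attention block in Table 3.
import torch
import torch.nn as nn


class ChannelOnlyPolarizedAttention(nn.Module):
    def __init__(self, channels=24, r=4):
        super().__init__()
        mid = channels // 2                          # 24 -> 12, the "polarized" halving
        self.wv = nn.Conv2d(channels, mid, 1)        # value path (Conv2D, 12 filters)
        self.wq = nn.Conv2d(channels, 1, 1)          # query path, squeezed to a single map
        self.proj = nn.Sequential(
            nn.Conv2d(mid, mid // r, 1),
            nn.GroupNorm(1, mid // r),               # acts like LayerNorm on a (N, C, 1, 1) tensor
            nn.ReLU(inplace=True),
            nn.Conv2d(mid // r, channels, 1),
        )

    def forward(self, x):                            # x: (N, 24, 7, 7)
        v = self.wv(x).flatten(2)                    # (N, 12, 49)
        q = torch.softmax(self.wq(x).flatten(2), dim=-1)   # (N, 1, 49), softmax over 49 positions
        z = torch.matmul(v, q.transpose(1, 2))       # (N, 12, 1) channel descriptor
        z = z.unsqueeze(-1)                          # (N, 12, 1, 1)
        attn = torch.sigmoid(self.proj(z))           # (N, 24, 1, 1) channel weights
        return x * attn                              # dot multiplication with the input
```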
Table 4. Detailed steps of the spatial-only polarized attention block.
Input Size | Layer Operations | Kernel Size | Filters | Output Size
(7 × 7 × 24) | Conv2D | (1 × 1) | 12 | (7 × 7 × 12)
(7 × 7 × 12) | Reshape | / | / | (12 × 49)
(7 × 7 × 24) | Conv2D | (1 × 1) | 12 | (7 × 7 × 12)
(7 × 7 × 12) | AvgPooling | / | / | (1 × 1 × 12)
(1 × 1 × 12) | SoftMax | / | / | (1 × 1 × 12)
(1 × 1 × 12) / (12 × 49) | Matrix Multiplication | / | / | (1 × 1 × 49)
(1 × 1 × 49) | Reshape | / | / | (7 × 7 × 1)
(7 × 7 × 1) | Sigmoid | / | / | (7 × 7 × 1)
(7 × 7 × 1) / (7 × 7 × 24) | Dot Multiplication | / | / | (7 × 7 × 24)
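Table 4 admits a similarly compact sketch: the globally pooled 12-dimensional query is normalized by a softmax and used to reweight the 7 × 7 value map. As before, the class and argument names are illustrative assumptions rather than the released code.

```python
# A hedged sketch of the spatial-only polarized attention block in Table 4.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialOnlyPolarizedAttention(nn.Module):
    def __init__(self, channels=24):
        super().__init__()
        mid = channels // 2                          # 24 -> 12
        self.wv = nn.Conv2d(channels, mid, 1)        # value path
        self.wq = nn.Conv2d(channels, mid, 1)        # query path

    def forward(self, x):                            # x: (N, 24, 7, 7)
        n, c, h, w = x.shape
        v = self.wv(x).flatten(2)                    # (N, 12, 49)
        q = F.adaptive_avg_pool2d(self.wq(x), 1)     # global average pooling -> (N, 12, 1, 1)
        q = torch.softmax(q.flatten(2).transpose(1, 2), dim=-1)  # (N, 1, 12)
        attn = torch.matmul(q, v)                    # (N, 1, 49) spatial weights
        attn = torch.sigmoid(attn.view(n, 1, h, w))  # (N, 1, 7, 7)
        return x * attn                              # dot multiplication with the input
```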
Table 5. Detailed steps of the feature fusion and classification process.
Input Size | Layer Operations | Output Size
(7 × 7 × 24) | AdaptiveAvgPool-BN-Mish and Squeeze | (1 × 24)
(7 × 7 × 24) | AdaptiveAvgPool-BN-Mish and Squeeze | (1 × 24)
(1 × 24) / (1 × 24) | Concatenate | (1 × 48)
(1 × 48) | Dropout-Linear | (1 × 9)
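A possible reading of Table 5 in code is given below, assuming the spectral and spatial branch outputs have already been squeezed to (N, 24, 7, 7); the dropout rate and the nine output classes (PU) are assumptions for illustration only.

```python
# A minimal sketch of the fusion and classification head in Table 5 (assumed defaults).
import torch
import torch.nn as nn


class FusionHead(nn.Module):
    def __init__(self, width=24, num_classes=9, p_drop=0.5):
        super().__init__()

        def pool_branch():                     # AdaptiveAvgPool-BN-Mish and squeeze
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),       # (N, 24, 1, 1)
                nn.BatchNorm2d(width),
                nn.Mish(),
                nn.Flatten(),                  # squeeze to (N, 24)
            )

        self.spectral_pool = pool_branch()
        self.spatial_pool = pool_branch()
        self.classifier = nn.Sequential(nn.Dropout(p_drop), nn.Linear(2 * width, num_classes))

    def forward(self, spectral_feat, spatial_feat):
        f = torch.cat([self.spectral_pool(spectral_feat),
                       self.spatial_pool(spatial_feat)], dim=1)   # (N, 48)
        return self.classifier(f)              # (N, 9) class logits
```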
Table 6. The number of total samples, training samples, validation samples, and testing samples for each category of the PU dataset.
Number | Land Cover Type | Total | Train | Val. | Test
C1 | Asphalt | 6631 | 66 | 66 | 6499
C2 | Meadows | 18,649 | 186 | 186 | 18,277
C3 | Gravel | 2099 | 21 | 21 | 2057
C4 | Trees | 3064 | 31 | 31 | 3002
C5 | Painted metal sheets | 1345 | 13 | 13 | 1319
C6 | Bare soil | 5029 | 50 | 50 | 4929
C7 | Bitumen | 1330 | 13 | 13 | 1304
C8 | Self-blocking bricks | 3682 | 37 | 37 | 3608
C9 | Shadows | 947 | 9 | 9 | 929
Total | | 42,776 | 428 | 428 | 41,920
Table 7. The number of total samples, training samples, validation samples, and testing samples for each category of the KSC dataset.
Number | Land Cover Type | Total | Train | Val. | Test
C1 | Scrub | 761 | 15 | 15 | 731
C2 | Willow swamp | 243 | 5 | 5 | 233
C3 | CP hammock | 256 | 5 | 5 | 246
C4 | Slash pine | 252 | 5 | 5 | 242
C5 | Oak/broadleaf | 161 | 3 | 3 | 155
C6 | Hardwood | 229 | 5 | 5 | 219
C7 | Swamp | 105 | 2 | 2 | 101
C8 | Graminoid marsh | 431 | 9 | 9 | 413
C9 | Spartina marsh | 520 | 10 | 10 | 500
C10 | Cattail marsh | 404 | 8 | 8 | 388
C11 | Salt marsh | 419 | 8 | 8 | 403
C12 | Mud flats | 503 | 10 | 10 | 483
C13 | Water | 927 | 19 | 19 | 889
Total | | 5211 | 104 | 104 | 5003
Table 8. The number of total samples, training samples, validation samples, and testing samples for each category of the BS dataset.
Number | Land Cover Type | Total | Train | Val. | Test
C1 | Water | 270 | 5 | 5 | 260
C2 | Hippo grass | 101 | 2 | 2 | 97
C3 | Floodplain grasses 1 | 251 | 5 | 5 | 241
C4 | Floodplain grasses 2 | 215 | 4 | 4 | 207
C5 | Reeds 1 | 269 | 5 | 5 | 259
C6 | Riparian | 269 | 5 | 5 | 259
C7 | Firescar 2 | 259 | 5 | 5 | 249
C8 | Island interior | 203 | 4 | 4 | 195
C9 | Acacia woodlands | 314 | 6 | 6 | 302
C10 | Acacia shrublands | 248 | 5 | 5 | 238
C11 | Acacia grasslands | 305 | 6 | 6 | 293
C12 | Short mopane | 181 | 4 | 4 | 173
C13 | Mixed mopane | 268 | 5 | 5 | 258
C14 | Exposed soils | 95 | 2 | 2 | 91
Total | | 3248 | 65 | 65 | 3118
Table 9. The number of total samples, training samples, validation samples, and testing samples for each category of the HS dataset.
Number | Land Cover Type | Total | Train | Val. | Test
C1 | Healthy grass | 1251 | 25 | 25 | 1201
C2 | Stressed grass | 1254 | 25 | 25 | 1204
C3 | Synthetic grass | 697 | 14 | 14 | 669
C4 | Trees | 1244 | 25 | 25 | 1194
C5 | Soil | 1242 | 25 | 25 | 1192
C6 | Water | 325 | 7 | 7 | 311
C7 | Residential | 1268 | 25 | 25 | 1218
C8 | Commercial | 1244 | 25 | 25 | 1194
C9 | Road | 1252 | 25 | 25 | 1202
C10 | Highway | 1227 | 25 | 25 | 1177
C11 | Railway | 1235 | 25 | 25 | 1185
C12 | Parking lot 1 | 1233 | 25 | 25 | 1183
C13 | Parking lot 2 | 469 | 9 | 9 | 451
C14 | Tennis court | 428 | 9 | 9 | 410
C15 | Running track | 660 | 13 | 13 | 634
Total | | 15,029 | 301 | 301 | 14,427
Table 10. The number of total samples, training samples, validation samples, and testing samples for each category of the SV dataset.
Number | Land Cover Type | Total | Train | Val. | Test
C1 | Brocoli-green-weeds_1 | 2009 | 40 | 40 | 1929
C2 | Brocoli-green-weeds_2 | 3726 | 75 | 75 | 3576
C3 | Fallow | 1976 | 40 | 40 | 1896
C4 | Fallow-rough-plow | 1394 | 28 | 28 | 1338
C5 | Fallow-smooth | 2678 | 54 | 54 | 2570
C6 | Stubble | 3959 | 79 | 79 | 3801
C7 | Celery | 3579 | 72 | 72 | 3435
C8 | Grapes-untrained | 11,271 | 225 | 225 | 10,821
C9 | Soil-vinyard-develop | 6203 | 124 | 124 | 5955
C10 | Corn-senesced-green-weeds | 3278 | 66 | 66 | 3146
C11 | Lettuce-romaine-4wk | 1068 | 21 | 21 | 1026
C12 | Lettuce-romaine-5wk | 1927 | 39 | 39 | 1849
C13 | Lettuce-romaine-6wk | 916 | 18 | 18 | 880
C14 | Lettuce-romaine-7wk | 1070 | 21 | 21 | 1028
C15 | Vinyard-untrained | 7268 | 145 | 145 | 6978
C16 | Vinyard-vertical-trellis | 1807 | 36 | 36 | 1735
Total | | 54,129 | 1083 | 1083 | 51,963
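Tables 6-10 use fixed per-class proportions (roughly 1% of the labeled pixels of PU and 2% of the other datasets for training, with an equally sized validation set). A hedged sketch of how such stratified splits can be drawn with scikit-learn is shown below; the `labels` array and its file name are hypothetical, and the rounding applied to very small classes may differ slightly from the counts reported above.

```python
# A hedged sketch of stratified per-class sampling such as in Tables 6-10; not the authors' code.
import numpy as np
from sklearn.model_selection import train_test_split

labels = np.load("groundtruth_labels.npy")      # hypothetical file of per-pixel class ids
indices = np.arange(len(labels))

# 2% of each class for training, another 2% for validation, the rest for testing.
train_idx, rest_idx = train_test_split(
    indices, train_size=0.02, stratify=labels, random_state=42)
val_idx, test_idx = train_test_split(
    rest_idx, train_size=0.02 / 0.98, stratify=labels[rest_idx], random_state=42)
print(len(train_idx), len(val_idx), len(test_idx))
```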
Table 11. Classification results of the PU dataset based on 1% training samples.
Number | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN
C1 | 87.65 | 97.57 | 97.03 | 99.61 | 97.35 | 95.13 | 93.55 | 98.30 | 98.65
C2 | 91.78 | 95.66 | 98.21 | 97.85 | 97.90 | 98.83 | 98.56 | 98.68 | 99.63
C3 | 76.13 | 94.02 | 71.88 | 93.58 | 93.78 | 90.38 | 78.62 | 98.99 | 98.07
C4 | 93.81 | 93.27 | 98.56 | 100.0 | 98.81 | 97.89 | 99.64 | 99.26 | 99.36
C5 | 98.14 | 98.94 | 99.70 | 99.92 | 100.0 | 99.55 | 99.92 | 99.92 | 99.47
C6 | 85.81 | 84.69 | 96.28 | 99.49 | 99.10 | 95.21 | 98.12 | 99.46 | 99.98
C7 | 68.39 | 89.65 | 99.91 | 81.94 | 93.89 | 100.0 | 99.79 | 100.0 | 100.0
C8 | 84.88 | 81.57 | 82.71 | 91.33 | 86.12 | 91.52 | 86.73 | 84.48 | 92.86
C9 | 99.89 | 99.56 | 99.44 | 99.04 | 99.01 | 99.78 | 97.37 | 97.27 | 100.0
OA (%) | 88.87 | 92.96 | 95.23 | 97.11 | 96.67 | 96.77 | 95.84 | 97.38 | 98.83
AA (%) | 87.39 | 92.77 | 93.75 | 95.86 | 96.22 | 96.48 | 94.70 | 97.37 | 98.67
Kappa × 100 | 85.11 | 90.69 | 93.67 | 96.16 | 95.57 | 95.71 | 94.47 | 96.52 | 98.44
Table 12. Classification results of the KSC dataset based on 2% training samples.
Number | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN
C1 | 86.49 | 99.86 | 87.58 | 88.83 | 90.72 | 97.22 | 96.58 | 87.68 | 97.96
C2 | 76.43 | 86.76 | 67.41 | 66.57 | 88.18 | 84.55 | 91.16 | 93.42 | 98.60
C3 | 64.14 | 86.18 | 53.19 | 56.54 | 80.88 | 77.32 | 91.62 | 80.48 | 82.09
C4 | 45.75 | 55.00 | 61.54 | 83.78 | 61.47 | 54.86 | 64.57 | 71.90 | 92.35
C5 | 31.76 | 26.80 | 100.0 | 97.96 | 69.23 | 33.33 | 82.85 | 70.07 | 95.60
C6 | 50.58 | 96.70 | 100.0 | 75.81 | 72.86 | 95.81 | 81.06 | 94.17 | 80.78
C7 | 48.20 | 67.61 | 100.0 | 100.0 | 84.40 | 76.80 | 94.35 | 83.67 | 96.12
C8 | 68.95 | 83.82 | 90.89 | 97.61 | 86.22 | 89.74 | 94.38 | 99.00 | 98.06
C9 | 72.31 | 82.33 | 98.24 | 99.79 | 86.89 | 98.81 | 97.59 | 100.0 | 100.0
C10 | 94.00 | 98.54 | 64.09 | 98.97 | 100.0 | 100.0 | 99.94 | 100.0 | 99.47
C11 | 86.35 | 87.82 | 98.53 | 99.75 | 100.0 | 100.0 | 100.0 | 99.48 | 100.0
C12 | 81.41 | 81.51 | 91.08 | 98.72 | 98.91 | 94.97 | 99.29 | 99.35 | 92.26
C13 | 100.0 | 97.41 | 100.0 | 98.54 | 100.0 | 100.0 | 100.0 | 100.0 | 99.78
OA (%) | 77.25 | 82.72 | 84.99 | 90.24 | 90.62 | 91.85 | 94.23 | 93.81 | 96.09
AA (%) | 69.72 | 80.80 | 85.58 | 89.45 | 86.14 | 84.88 | 91.80 | 90.70 | 94.85
Kappa × 100 | 74.68 | 80.79 | 83.22 | 89.10 | 89.53 | 90.92 | 93.57 | 93.10 | 95.64
Table 13. Classification results of the BS dataset based on 2% training samples.
Number | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN
C1 | 100.0 | 78.71 | 98.86 | 96.98 | 97.01 | 95.57 | 98.31 | 98.48 | 100.0
C2 | 86.76 | 97.73 | 100.0 | 100.0 | 100.0 | 98.00 | 86.27 | 100.0 | 100.0
C3 | 86.70 | 88.03 | 100.0 | 87.78 | 100.0 | 99.58 | 88.70 | 100.0 | 99.17
C4 | 94.19 | 87.06 | 90.95 | 99.03 | 95.41 | 91.96 | 97.90 | 91.59 | 91.86
C5 | 77.05 | 90.37 | 90.28 | 77.41 | 87.31 | 91.96 | 97.69 | 92.06 | 86.75
C6 | 59.86 | 57.51 | 80.08 | 96.61 | 83.69 | 96.07 | 97.04 | 91.27 | 89.71
C7 | 100.0 | 88.46 | 96.48 | 99.2 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
C8 | 86.49 | 91.46 | 96.26 | 89.23 | 98.00 | 98.43 | 96.11 | 97.91 | 100.0
C9 | 64.10 | 69.02 | 94.89 | 81.18 | 96.69 | 96.74 | 81.57 | 90.96 | 98.67
C10 | 85.05 | 92.06 | 81.60 | 100.0 | 99.58 | 85.14 | 72.83 | 91.44 | 99.57
C11 | 44.00 | 89.51 | 93.31 | 91.3 | 100.0 | 100.0 | 100.0 | 94.83 | 100.0
C12 | 91.35 | 90.12 | 98.05 | 99.42 | 100.0 | 84.91 | 100.0 | 100.0 | 98.08
C13 | 76.79 | 98.13 | 79.50 | 100.0 | 83.01 | 91.05 | 96.35 | 92.28 | 92.13
C14 | 100.0 | 95.56 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
OA (%) | 73.40 | 84.32 | 91.45 | 92.54 | 94.76 | 94.63 | 92.82 | 94.95 | 96.41
AA (%) | 82.31 | 86.70 | 92.88 | 94.15 | 95.76 | 94.96 | 93.77 | 95.77 | 96.85
Kappa × 100 | 71.07 | 82.99 | 90.73 | 91.91 | 94.32 | 94.18 | 92.22 | 94.53 | 96.11
Table 14. Classification results of the HS dataset based on 2% training samples.
Number | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN
C1 | 96.69 | 91.27 | 97.81 | 97.46 | 91.18 | 96.47 | 99.80 | 96.83 | 99.91
C2 | 98.21 | 86.10 | 99.92 | 99.92 | 94.69 | 99.00 | 92.26 | 98.15 | 97.86
C3 | 98.81 | 94.63 | 100.0 | 99.55 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
C4 | 91.78 | 89.32 | 85.14 | 95.11 | 99.48 | 97.21 | 95.64 | 97.62 | 95.34
C5 | 89.80 | 92.09 | 92.39 | 93.90 | 93.04 | 98.66 | 96.70 | 94.61 | 99.66
C6 | 95.85 | 91.50 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 96.91 | 96.53
C7 | 70.96 | 81.38 | 94.68 | 85.11 | 95.00 | 92.80 | 80.88 | 95.52 | 94.64
C8 | 69.36 | 87.98 | 99.52 | 96.71 | 97.15 | 92.33 | 88.54 | 92.46 | 99.63
C9 | 71.47 | 80.53 | 81.45 | 82.37 | 95.81 | 95.48 | 88.15 | 96.17 | 91.96
C10 | 76.44 | 86.94 | 67.22 | 93.09 | 82.49 | 80.42 | 89.38 | 93.61 | 88.59
C11 | 80.71 | 86.33 | 94.26 | 89.55 | 92.77 | 97.58 | 89.07 | 92.95 | 97.92
C12 | 71.96 | 83.42 | 91.84 | 89.60 | 92.09 | 85.77 | 91.91 | 88.09 | 92.84
C13 | 29.25 | 94.00 | 95.51 | 87.12 | 71.40 | 85.81 | 97.62 | 77.99 | 96.16
C14 | 92.73 | 90.79 | 95.77 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
C15 | 99.53 | 90.75 | 98.15 | 98.76 | 94.17 | 99.68 | 94.43 | 99.21 | 100.0
OA (%) | 82.82 | 87.52 | 90.30 | 92.81 | 92.85 | 93.99 | 92.18 | 94.66 | 96.28
AA (%) | 82.24 | 88.47 | 92.91 | 93.88 | 93.29 | 94.75 | 93.62 | 94.67 | 96.74
Kappa × 100 | 81.41 | 86.51 | 89.51 | 92.23 | 92.27 | 93.50 | 91.55 | 94.23 | 95.98
Table 15. Classification results of the SV dataset based on 2% training samples.
Number | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN
C1 | 99.90 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
C2 | 98.62 | 99.94 | 100.0 | 100.0 | 99.97 | 100.0 | 100.0 | 100.0 | 99.92
C3 | 91.69 | 96.38 | 100.0 | 97.57 | 100.0 | 100.0 | 99.36 | 100.0 | 100.0
C4 | 97.32 | 99.15 | 99.93 | 99.78 | 99.33 | 99.32 | 100.0 | 95.61 | 97.39
C5 | 97.27 | 96.55 | 99.92 | 99.81 | 99.65 | 99.42 | 91.18 | 100.0 | 100.0
C6 | 99.97 | 99.95 | 100.0 | 100.0 | 99.92 | 100.0 | 100.0 | 99.97 | 100.0
C7 | 98.92 | 99.47 | 99.88 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 99.83
C8 | 76.15 | 85.39 | 88.47 | 96.93 | 95.44 | 96.55 | 95.14 | 90.45 | 98.78
C9 | 98.88 | 98.35 | 99.75 | 99.97 | 99.45 | 99.80 | 100.0 | 99.82 | 99.52
C10 | 92.24 | 95.68 | 98.27 | 99.48 | 96.41 | 99.20 | 99.77 | 99.46 | 98.39
C11 | 96.16 | 97.93 | 100.0 | 95.75 | 100.0 | 100.0 | 95.32 | 100.0 | 99.90
C12 | 95.97 | 96.39 | 99.84 | 99.04 | 99.89 | 99.51 | 99.30 | 99.68 | 99.84
C13 | 93.38 | 92.67 | 100.0 | 100.0 | 97.34 | 100.0 | 99.88 | 100.0 | 99.77
C14 | 97.03 | 99.49 | 99.70 | 99.22 | 93.29 | 98.56 | 99.41 | 99.03 | 98.55
C15 | 77.47 | 82.63 | 97.12 | 91.68 | 95.78 | 88.56 | 94.61 | 98.00 | 95.12
C16 | 99.19 | 100.0 | 99.89 | 99.94 | 100.0 | 98.86 | 100.0 | 100.0 | 100.0
OA (%) | 90.23 | 93.53 | 96.86 | 97.94 | 97.98 | 97.46 | 97.62 | 97.39 | 98.80
AA (%) | 94.39 | 96.25 | 98.92 | 98.70 | 98.53 | 98.74 | 98.37 | 98.88 | 99.19
Kappa × 100 | 89.09 | 92.79 | 96.49 | 97.70 | 97.75 | 97.18 | 97.34 | 97.09 | 98.66
Table 16. The number of parameters, FLOPs, training time, and testing time of different methods for the five datasets.
Dataset | Metric | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN
PU | Parameters (M) | / | 1.37 | 0.21 | 0.34 | 0.32 | 0.20 | 0.23 | 0.19 | 0.05
PU | FLOPs (M) | / | 71.41 | 48.47 | 39.55 | 74.43 | 32.72 | 26.50 | 32.51 | 21.18
PU | Training time (s) | 5.19 | 15.14 | 40.47 | 43.62 | 39.41 | 37.00 | 35.25 | 37.21 | 30.19
PU | Testing time (s) | 0.96 | 5.38 | 8.79 | 12.97 | 10.41 | 11.56 | 11.89 | 10.97 | 7.53
KSC | Parameters (M) | / | 2.03 | 0.31 | 0.93 | 0.52 | 0.33 | 0.35 | 0.32 | 0.07
KSC | FLOPs (M) | / | 123.55 | 83.27 | 86.47 | 128.67 | 56.15 | 44.35 | 55.95 | 36.36
KSC | Training time (s) | 0.67 | 12.97 | 21.49 | 29.03 | 22.76 | 17.10 | 27.70 | 17.02 | 15.80
KSC | Testing time (s) | 0.06 | 1.05 | 1.52 | 1.79 | 1.53 | 1.39 | 1.25 | 1.64 | 1.08
BS | Parameters (M) | / | 1.76 | 0.27 | 0.65 | 0.44 | 0.28 | 0.30 | 0.27 | 0.06
BS | FLOPs (M) | / | 101.82 | 68.77 | 64.92 | 106.07 | 46.39 | 36.91 | 46.18 | 30.04
BS | Training time (s) | 0.51 | 4.50 | 9.32 | 15.01 | 11.72 | 10.06 | 13.22 | 9.53 | 5.64
BS | Testing time (s) | 0.04 | 1.01 | 1.33 | 1.64 | 1.99 | 1.78 | 1.81 | 1.66 | 1.15
HS | Parameters (M) | / | 1.74 | 0.27 | 0.63 | 0.43 | 0.27 | 0.29 | 0.26 | 0.06
HS | FLOPs (M) | / | 100.38 | 67.81 | 63.80 | 104.56 | 45.74 | 36.42 | 45.53 | 29.62
HS | Training time (s) | 3.21 | 9.67 | 20.91 | 21.85 | 24.63 | 22.18 | 22.17 | 23.37 | 13.39
HS | Testing time (s) | 0.57 | 1.89 | 2.03 | 2.52 | 2.78 | 2.85 | 3.75 | 2.83 | 1.23
SV | Parameters (M) | / | 2.47 | 0.39 | 1.53 | 0.66 | 0.42 | 0.44 | 0.41 | 0.08
SV | FLOPs (M) | / | 158.31 | 106.47 | 126.12 | 164.82 | 71.77 | 56.25 | 71.57 | 42.26
SV | Training time (s) | 41.46 | 59.77 | 222.32 | 424.52 | 298.42 | 272.88 | 326.39 | 264.63 | 120.11
SV | Testing time (s) | 6.36 | 10.77 | 12.47 | 16.34 | 20.15 | 28.21 | 27.61 | 26.23 | 15.81
Table 17. The number of parameters, FLOPs, and OA of different dense blocks on the PU dataset.
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%)
PU | DB | 0.29 | 49.49 | 98.99
PU | WDB | 0.05 | 25.13 | 98.02
PU | OSDB | 0.05 | 21.18 | 98.83
Table 18. The number of parameters, FLOPs, and OA of different dense blocks on the KSC dataset.
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%)
KSC | DB | 0.47 | 84.88 | 96.74
KSC | WDB | 0.07 | 43.12 | 95.93
KSC | OSDB | 0.07 | 36.36 | 96.09
Table 19. The number of parameters, FLOPs, and OA of different dense blocks on the BS dataset.
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%)
BS | DB | 0.41 | 70.14 | 96.89
BS | WDB | 0.07 | 35.63 | 96.28
BS | OSDB | 0.06 | 30.04 | 96.41
Table 20. The number of parameters, FLOPs, and OA of different dense blocks on the HS dataset.
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%)
HS | DB | 0.40 | 69.15 | 96.93
HS | WDB | 0.07 | 35.13 | 96.11
HS | OSDB | 0.06 | 29.62 | 96.28
Table 21. The number of parameters, FLOPs, and OA of different dense blocks on the SV dataset.
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%)
SV | DB | 0.60 | 108.48 | 99.01
SV | WDB | 0.09 | 55.12 | 98.75
SV | OSDB | 0.08 | 42.26 | 98.80
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
