Article

An Effective Hyperspectral Image Classification Network Based on Multi-Head Self-Attention and Spectral-Coordinate Attention

College of Information Technology, Shanghai Ocean University, Shanghai 201306, China
* Author to whom correspondence should be addressed.
J. Imaging 2023, 9(7), 141; https://doi.org/10.3390/jimaging9070141
Submission received: 16 June 2023 / Revised: 4 July 2023 / Accepted: 5 July 2023 / Published: 10 July 2023

Abstract

In hyperspectral image (HSI) classification, convolutional neural networks (CNNs) have been widely employed and achieved promising performance. However, CNN-based methods face difficulties in achieving both accurate and efficient HSI classification due to their limited receptive fields and deep architectures. To alleviate these limitations, we propose an effective HSI classification network based on multi-head self-attention and spectral-coordinate attention (MSSCA). Specifically, we first reduce the redundant spectral information of HSI by using a point-wise convolution network (PCN) to enhance discriminability and robustness of the network. Then, we capture long-range dependencies among HSI pixels by introducing a modified multi-head self-attention (M-MHSA) model, which applies a down-sampling operation to alleviate the computing burden caused by the dot-product operation of MHSA. Furthermore, to enhance the performance of the proposed method, we introduce a lightweight spectral-coordinate attention fusion module. This module combines spectral attention (SA) and coordinate attention (CA) to enable the network to better weight the importance of useful bands and more accurately localize target objects. Importantly, our method achieves these improvements without increasing the complexity or computational cost of the network. To demonstrate the effectiveness of our proposed method, experiments were conducted on three classic HSI datasets: Indian Pines (IP), Pavia University (PU), and Salinas. The results show that our proposed method is highly competitive in terms of both efficiency and accuracy when compared to existing methods.

1. Introduction

Hyperspectral image (HSI) classification is a hot topic in the field of remote sensing. HSIs, captured by imaging spectrometers such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), provide rich spectral and spatial information that is highly valuable for the fine segmentation and identification of ground objects. Therefore, HSIs have been widely applied in various fields such as geological exploration, military investigation, environmental monitoring, and precision agriculture [1,2,3,4].
Over the past decades, traditional feature extraction methods for HSI classification, such as k-nearest neighbor [5], random forest [6], Markov random fields [7], and support vector machines (SVM) [8], have been widely used. However, these methods require manual labeling and expert experience, which makes them expensive and limits their ability to extract high-level features. Additionally, the redundant information contained in HSIs poses further challenges for classifiers.
Deep learning methods have received significant attention for their ability to automatically learn robust features from training samples. These methods have been successfully applied to HSI classification, including the stacked autoencoder (SAE) [9], recurrent neural network (RNN) [10], deep belief network (DBN) [11], CNNs [12,13,14], and others, and have achieved remarkable results compared to traditional methods. Chen et al. [15] first introduced deep learning into hyperspectral data and used an SAE to extract spectral and spatial features for classification. Later, Hu et al. [16] used five 1×1 convolution layers to capture spectral information for HSI classification but ignored the importance of spatial information. Li et al. [17] proposed a 3-D convolutional neural network to extract spectral-spatial features directly from the 3-D hyperspectral cube and achieve accurate HSI classification. Yang et al. [18] proposed a two-channel deep convolutional neural network model (TCCNN) to extract joint spectral-spatial features of HSIs, in which two branches were used to extract spectral and spatial features, respectively. Chen et al. [19] added 3-D convolution to extract spectral-spatial features of HSI based on TCCNN, and the results show that the method is effective. However, 3-D CNNs introduce excessive training parameters and high computational costs. As neural networks become deeper, the extracted features become more abstract and robust; however, the limited number of training samples can lead to overfitting. To address this problem, Zhong et al. [20] and Wang et al. [21] used residual connections [22] and dense connections [23], respectively, to enhance the robustness of the network and avoid overfitting. Owing to their ability to perform convolutions on arbitrary graph structures, graph convolutional networks (GCNs) have also been applied to HSI classification. Qin et al. [24] proposed a semi-supervised GCN method, which can flexibly encode irregular non-Euclidean data and effectively express the relationships between nodes. However, constructing an adaptive graph structure requires a large computational cost.
Attention mechanisms [25] have gained attention in the field of vision for their ability to focus on important information and disregard redundant information. The transformer model uses a multi-head self-attention (MHSA) module to capture long-range dependencies in input sequences. Song et al. [26] proposed a hierarchical transformer network that uses MHSA to better extract spectral-spatial information. However, the computational cost of MHSA is high due to the excessive dot-product operations involved. Valsalan et al. [27] combined the Squeeze-and-Excitation (SE) network, known for its effectiveness in channel attention, with a CNN for HSI classification, effectively utilizing spectral information. Similarly, Sun et al. [28] proposed a spectral-spatial attention mechanism, adding spectral and spatial attention to each traditional convolution, enabling a higher focus on useful information and improving the classification accuracy. Li et al. [29] proposed a double-branch dual-attention network (DBDA) to capture spectral and spatial features separately, achieving refinement and optimization of the extracted features. The coordinate attention (CA) network [30] was proposed to address the high computational cost and complexity of the attention mechanism; it retains spatial coordinate position information and captures global information of image pixels.
Although the above methods have already achieved promising results, they still face several problems. (1) The classification performance of CNN-based methods for HSI classification is limited by the size of the convolutional kernels, making it difficult to capture long-range dependencies between pixels in an HSI. (2) An HSI typically contains hundreds of continuous spectral bands, but not all bands contribute equally to classification accuracy; the invalid bands not only increase the computational cost but also degrade classification performance. (3) Existing methods for HSI classification have complex network architectures, which can lead to inefficient classification.
Inspired by the attention mechanism, this paper proposes an effective HSI classification network based on MHSA and spectral-coordinate attention. The proposed method first uses a point-wise convolution network (PCN) to remove redundant spectral band information and provide more discriminative features. Then, an M-MHSA module is introduced, which down-samples the K and V projections into a low-dimensional embedding to alleviate the computing burden caused by the dot-product operations in MHSA. The method also assigns weights based on pixel correlation to capture long-range dependencies among HSI pixels, addressing the limited receptive field of CNNs. Furthermore, a lightweight spectral-coordinate attention fusion network is proposed. On the one hand, spectral attention is used to model the importance of each spectral feature and suppress invalid channels. On the other hand, the coordinate attention network is used to aggregate features along two spatial directions, which addresses the limitation of MHSA ignoring inherent position information and strengthens the connection between channels. Finally, we conducted experiments on three classical datasets: Indian Pines (IP), Pavia University (PU), and Salinas. The experimental results demonstrate that our proposed method is highly competitive among existing HSI classification methods.
The rest of this paper is organized as follows: the proposed method is described in Section 2. The experiments and analysis are presented in Section 3. The conclusion is drawn in Section 4.

2. Proposed Methods

The goal of HSI classification is to assign a specific label to each pixel in order to represent a particular category. In this paper, we propose an effective network based on multi-head self-attention and spectral-coordinate attention (MSSCA). The overall architecture of the proposed network is depicted in Figure 1.

2.1. Point-Wise Convolution Network (PCN)

HSIs often contain redundant bands, which not only increase computational complexity but also negatively impact classification accuracy. To reduce the redundant information and provide more discriminative features for subsequent networks, we propose the PCN to process the band information of the HSI. Specifically, let $X \in \mathbb{R}^{H \times W \times B}$ be the HSI input; the PCN is composed of two 1×1 convolutional layers. Using this network, the output feature map can be expressed as:
$$X_j^l = f\left(W_j^l \cdot \tilde{X}^{l-1} + b_j^l\right)$$
where $X^l$ represents the output feature map of the $l$-th spectral convolution layer, $X_j^l$ represents the value of the $j$-th output feature channel in the $l$-th layer, $\tilde{X}^{l-1} = \mathrm{BN}(X^{l-1})$ denotes the input feature map of the $(l-1)$-th convolution layer after batch normalization, $W_j^l$ and $b_j^l$ represent the $j$-th convolutional kernel of size 1 × 1 and the bias in the $l$-th layer, respectively, and $f(\cdot)$ is the activation function. The resulting PCN output is then fed as input to the subsequent networks, providing robust and discriminative initial spectral characteristics for these networks.
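As a concrete illustration, the following PyTorch sketch implements a point-wise convolution block of this kind. It is a minimal interpretation of the description above: the choice of 128 output channels and the Leaky ReLU activation follow Section 3.1, while the exact ordering of convolution, batch normalization, and activation is an assumption about the authors' implementation.

```python
import torch
import torch.nn as nn

class PCN(nn.Module):
    """Point-wise convolution network: two 1x1 convolution layers that
    recalibrate the spectral bands before the attention modules."""

    def __init__(self, in_bands: int, out_channels: int = 128):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_bands, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, B, H, W) hyperspectral cube with B spectral bands
        return self.block(x)

# Example: a 200-band Indian Pines scene mapped to 128 spectral features
x = torch.randn(1, 200, 145, 145)
features = PCN(in_bands=200)(x)   # -> (1, 128, 145, 145)
```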

2.2. Modified Multi-Head Self-Attention (M-MHSA)

The transformer has gained significant attention in computer vision due to its successful applications. Specifically, the self-attention mechanism, which is a key component of the transformer, is capable of capturing long-range dependencies, making it an attractive technique. In this paper, an M-MHSA network is introduced, in which K and V are projected to a low-dimensional embedding using a lightweight down-sampling operation. This reduces the computing burden caused by performing attention calculations on all pixels, while simultaneously enriching the diversity of the feature subspaces through independent attention heads. Moreover, it assigns weights based on the inter-pixel correlations, allowing for the extraction of global feature dependencies and overcoming the limitation of the small receptive field of a traditional CNN. The network architecture of M-MHSA is shown in Figure 2.
Hyperspectral pixels can be viewed as a sequence of vectors $X \in \mathbb{R}^{(H \times W) \times B}$. Each vector is multiplied by three weight matrices to obtain the Query (Q), Key (K), and Value (V). The linear transformation for this process can be expressed as follows:
$$Q = W_q X, \quad K = W_k X, \quad V = W_v X$$
where $W_q$, $W_k$, and $W_v$ represent the transformation matrices of Q, K, and V, respectively.
The attention weight calculation can be expressed as:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$$
where $d_k$ represents the dimension of Q and K.
To focus on different parts of the feature representation and extract richer long-range dependencies, Q, K, and V are divided into $h$ submatrices as follows:
$$Q = [Q_1, Q_2, \ldots, Q_i, \ldots, Q_h], \quad K = [K_1, K_2, \ldots, K_i, \ldots, K_h], \quad V = [V_1, V_2, \ldots, V_i, \ldots, V_h]$$
where h represents the number of heads.
The i-th head can be expressed as:
$$\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i)$$
where $Q_i, K_i, V_i \in \mathbb{R}^{(H \times W) \times \frac{B}{h}}$.
The outputs of the multiple independent heads are concatenated to form the MHSA, which can be expressed as:
$$\mathrm{MHSA}(Q, K, V) = \mathrm{concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^{O}$$
where $W^{O}$ indicates the output projection matrix.
To reduce the computational burden caused by the dot product of Q and K, we propose to perform down-sampling on K and V after obtaining them, while preserving important information. Specifically, we reduce the spatial dimension of K and V from (H × W) to (16 × 16), so that $K_i, V_i \in \mathbb{R}^{(16 \times 16) \times \frac{B}{h}}$ in each head. This not only reduces the computational cost but also enables the network to capture long-range dependencies among the input image pixels. The modified MHSA can be expressed as:
$$\mathrm{M\text{-}MHSA} = \mathrm{MHSA}(Q, \mathrm{DSA}(K, V))$$
where $\mathrm{DSA}(\cdot)$ represents the down-sampling operation.
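The following PyTorch sketch shows one way to realize M-MHSA as described above. It is not the authors' code: the use of adaptive average pooling as the down-sampling operator DSA, and of linear layers over flattened pixel tokens for the Q/K/V projections, are assumptions; only the overall structure (full-resolution queries, 16 × 16 keys and values, h independent heads) follows the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MMHSA(nn.Module):
    """Multi-head self-attention whose keys and values are spatially
    down-sampled to a 16 x 16 grid before the dot-product attention."""

    def __init__(self, channels: int, heads: int = 4, kv_size: int = 16):
        super().__init__()
        assert channels % heads == 0
        self.heads, self.kv_size = heads, kv_size
        self.q = nn.Linear(channels, channels, bias=False)
        self.k = nn.Linear(channels, channels, bias=False)
        self.v = nn.Linear(channels, channels, bias=False)
        self.out = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map produced by the PCN
        n, c, h, w = x.shape
        queries = x.flatten(2).transpose(1, 2)              # (N, H*W, C)
        # DSA step: down-sample the K/V inputs to kv_size x kv_size positions
        kv = F.adaptive_avg_pool2d(x, self.kv_size)         # (N, C, 16, 16)
        kv = kv.flatten(2).transpose(1, 2)                  # (N, 256, C)

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (N, L, C) -> (N, heads, L, C // heads)
            return t.reshape(n, -1, self.heads, c // self.heads).transpose(1, 2)

        q = split_heads(self.q(queries))
        k = split_heads(self.k(kv))
        v = split_heads(self.v(kv))
        attn = torch.softmax(q @ k.transpose(-2, -1) / (c // self.heads) ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(n, h * w, c)
        return self.out(out).transpose(1, 2).reshape(n, c, h, w)

# Example: attention over a 128-channel feature map
y = MMHSA(channels=128, heads=4)(torch.randn(1, 128, 64, 64))  # (1, 128, 64, 64)
```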

2.3. Spectral-Coordinate Attention Fusion Network (SCA)

HSIs typically contain hundreds of bands, but many of them contribute little to the HSI classification and thus lead to poor classification performance. In this work, we perform spectral attention and coordinate attention for better utilization of the discriminative spectral and spatial features present in HSIs. Finally, we perform feature fusion to further enhance the HSI classification performance.

2.3.1. Spectral Attention

As shown in Figure 3, we incorporate the SE-Net architecture to recalibrate the spectral features in the HSI to strengthen the connections between spectral bands. This helps the network focus on valuable spectral channel information while suppressing irrelevant or invalid characteristic channel information.
Let $X = [x_1, x_2, \ldots, x_B] \in \mathbb{R}^{H \times W \times B}$ represent the input of the SE network and $x_b \in \mathbb{R}^{H \times W}$ the $b$-th channel of the feature map. By using a squeeze operation $F_{sq}$, the input feature map can be compressed along the spatial dimensions, reducing each two-dimensional feature map to a one-dimensional descriptor. This is achieved through global average pooling. The descriptor $z \in \mathbb{R}^{B}$ generated by the squeeze operation has its $b$-th element expressed as follows:
$$z_b = F_{sq}(x_b) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_b(i, j)$$
This operation summarizes the global value distribution of each of the B feature maps; $x_b(i, j)$ represents the element value of the $b$-th feature map at position $(i, j)$.
Two fully connected layers are then utilized to automatically learn the interdependencies between different channels, with the importance of each channel determined by the learned weight coefficients $W_E$. This enables the excitation operation to capture the dependency relationships between channels, which can be expressed as follows:
$$s = F_{ex}(z, W_E) = \sigma(g(z, W_E)) = \sigma\left(W_2 \, \delta(W_1 z)\right)$$
where $s$ represents the weight of each feature map, $\sigma$ is the sigmoid function, $\delta$ is the ReLU activation function, $W_1 \in \mathbb{R}^{\frac{B}{r} \times B}$, $W_2 \in \mathbb{R}^{B \times \frac{B}{r}}$, and $r$ represents the dimension-reduction ratio.
Finally, the output of the SE block, obtained by rescaling $X$ with the activations $s$, can be expressed as:
$$\tilde{x}_b = F_{scale}(x_b, s_b) = s_b \cdot x_b$$
where $\tilde{X} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_B]$ and $F_{scale}(x_b, s_b)$ denotes the channel-wise multiplication between the scalar $s_b$ and the feature map $x_b$.
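A minimal PyTorch sketch of this spectral attention block is given below. The reduction ratio r = 16 is the common SE-Net default and is an assumption here, since the paper does not state the value it uses.

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """SE-style spectral attention: squeeze each band by global average
    pooling, excite with two fully connected layers, then rescale."""

    def __init__(self, bands: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(bands, bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(bands // reduction, bands),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, B, H, W)
        z = x.mean(dim=(2, 3))            # F_sq: one scalar z_b per band
        s = self.fc(z)                    # F_ex: band weights in (0, 1)
        return x * s[:, :, None, None]    # F_scale: re-weight every band

# Example: re-weight a 128-channel feature map
out = SpectralAttention(bands=128)(torch.randn(2, 128, 32, 32))
```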

2.3.2. Coordinate Attention

The SE module uses 2-D global pooling to weigh channels and capture dependencies between them, providing significant performance gains at a relatively low computational cost. However, the SE module only considers information encoding between channels and ignores the importance of positional information, which is actually crucial for obtaining target information. Therefore, we propose incorporating coordinate attention (CA) into the network, which not only captures cross-channel information but also provides direction- and position-aware information, enabling the model to locate and identify the target of interest more accurately. Moreover, the CA module is flexible and lightweight, making it easy to integrate into classic modules. Like the SE module, the CA module encodes channel relationships and long-range dependencies, but it does so through precise location information. It consists of two steps: coordinate information embedding and coordinate attention generation. By incorporating the CA module, we can improve the accuracy of the model in identifying targets while still maintaining computational efficiency. The structure of CA is shown in Figure 4.
First, the input $X = [x_1, x_2, \ldots, x_B] \in \mathbb{R}^{H \times W \times B}$ is processed by the CA module, which converts it into two separate direction-aware vectors by factorizing the 2-D global pooling. This operation encodes each channel along the two spatial directions using average pooling kernels of size (H, 1) and (1, W), respectively.
The output of the $b$-th channel at height $h$ can be expressed as:
$$z_b^h(h) = \frac{1}{W} \sum_{0 \le i < W} x_b(h, i)$$
Similarly, the output of the $b$-th channel at width $w$ can be expressed as:
$$z_b^w(w) = \frac{1}{H} \sum_{0 \le j < H} x_b(j, w)$$
After the two transforms are generated, feature aggregation is carried out along the two spatial directions. The two transformed vectors are concatenated and passed through the 1 × 1 convolution transformation function $F_1$ to generate an intermediate feature map $f \in \mathbb{R}^{\frac{B}{r} \times (H + W)}$, which captures the spatial information of the horizontal and vertical directions. The parameter $r$ represents the reduction ratio, and $f$ can be expressed as:
$$f = \delta\left(F_1\left(\left[z^h, z^w\right]\right)\right)$$
Next, we divide $f$ into two separate tensors $f^h \in \mathbb{R}^{\frac{B}{r} \times H}$ and $f^w \in \mathbb{R}^{\frac{B}{r} \times W}$ along the two spatial directions. The resulting feature maps are then transformed using two 1 × 1 2-D convolution operations, $F_h$ and $F_w$, restoring them to the same number of channels as the original input $X$; the formulas are as follows:
$$o^h = \sigma\left(F_h\left(f^h\right)\right), \quad o^w = \sigma\left(F_w\left(f^w\right)\right)$$
where $\sigma$ is the sigmoid function. $o^h$ and $o^w$ are then expanded and used as the attention weights along the H and W directions, respectively. The final output of the coordinate attention module can be defined as:
$$y_b(i, j) = x_b(i, j) \times o_b^h(i) \times o_b^w(j)$$
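The coordinate attention block can be sketched in PyTorch as follows. The structure mirrors the equations above; the batch normalization after $F_1$ and the lower bound of eight intermediate channels follow the original coordinate attention design [30] and are assumptions with respect to this paper's exact implementation.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: pool along H and W separately, mix the two
    descriptors with a shared 1x1 convolution, then re-weight the input
    with direction-aware attention maps."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)   # F_1
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)                       # delta
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)  # F_h
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)  # F_w

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        z_h = x.mean(dim=3, keepdim=True)                      # (N, C, H, 1)
        z_w = x.mean(dim=2, keepdim=True).transpose(2, 3)      # (N, C, W, 1)
        f = self.act(self.bn(self.conv1(torch.cat([z_h, z_w], dim=2))))
        f_h, f_w = torch.split(f, [h, w], dim=2)
        o_h = torch.sigmoid(self.conv_h(f_h))                  # (N, C, H, 1)
        o_w = torch.sigmoid(self.conv_w(f_w.transpose(2, 3)))  # (N, C, 1, W)
        return x * o_h * o_w                                   # y_b(i, j)

# Example: apply coordinate attention to a 128-channel feature map
out = CoordinateAttention(channels=128)(torch.randn(2, 128, 32, 32))
```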

3. Experiments

In this section, we conduct experiments on three classical public datasets, the Indian Pines, Pavia University, and Salinas datasets, to evaluate the performance of our proposed method. We compare our method with several existing methods, including SVM [8], FDSSC [21], SSRN [20], HybridSN [31], CGCNN [32], DBMA [33], and DBDA [29]. We evaluate the effectiveness of our proposed method using the overall accuracy (OA), average accuracy (AA), and Kappa statistics (KPP). OA is defined as the proportion of correctly classified samples in the entire test set. AA is the average per-class accuracy, which accounts for the accuracy of the model on each class. The Kappa index measures the agreement between the predicted and true class labels while accounting for the agreement that could occur by chance; it can be calculated from the confusion matrix and is widely used in multi-class classification problems to evaluate classifier performance.
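These three metrics can all be derived from the confusion matrix. The NumPy sketch below shows the standard computation; it is an illustration of the metric definitions, not code from the paper.

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    """Compute OA, AA, and the Kappa coefficient from predicted labels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                   # confusion matrix
    total = cm.sum()
    oa = np.trace(cm) / total                           # overall accuracy
    per_class = np.diag(cm) / cm.sum(axis=1)            # assumes every class occurs in y_true
    aa = per_class.mean()                               # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Example with dummy labels for a 3-class problem
oa, aa, kappa = classification_metrics(np.array([0, 1, 2, 2]), np.array([0, 1, 2, 1]), 3)
```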

3.1. Configuration for Parameters

The proposed MSSCA method comprises four modules: PCN, M-MHSA, SA, and CA. Specifically, the PCN module uses two network layers with 128 1×1 convolution kernels, and the activation functions used in the PCN are leaky rectified linear units (Leaky ReLUs). In the M-MHSA, the number of heads is set to four, and we reduce the spatial dimensions of K and V from (H×W) to (16×16). We adopt a learning rate of 0.005 for iterative updating, and the maximum number of iterations is set to 600. Finally, we conduct the experiments on a computer with an NVIDIA GeForce RTX 3090 GPU and 16 GB of RAM. The experiments were carried out on a Windows 10 Home Edition platform, and the code was implemented using Python 3.7.13 and PyTorch 1.11.0.
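For reference, the reported hyper-parameters translate into a training loop of the following shape. The optimizer (Adam), the cross-entropy loss, and the toy stand-in model and data used here are assumptions for illustration only; the paper specifies only the learning rate, the iteration count, and the module settings above.

```python
import torch
import torch.nn as nn

# Stand-in model and data: 200 input bands, 16 classes (e.g., Indian Pines)
model = nn.Sequential(nn.Conv2d(200, 128, kernel_size=1), nn.LeakyReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)   # learning rate 0.005
criterion = nn.CrossEntropyLoss()

patches = torch.randn(32, 200, 9, 9)          # dummy batch of spectral patches
labels = torch.randint(0, 16, (32,))          # dummy class labels

for step in range(600):                        # maximum of 600 iterations
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()
```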

3.2. HSI Datasets

(1) Indian Pines dataset: The first dataset is the Indian Pines dataset, acquired by the AVIRIS imaging spectrometer over northwest Indiana, USA. The HSI of this scene consists of 145 × 145 pixels, with 220 bands and a spatial resolution of 20 m/pixel. After removing interference bands, the dataset includes 200 available bands. The dataset comprises 16 different categories of ground objects, with 10,249 reference samples. For training, validation, and testing purposes, 10%, 1%, and 89% of each category were randomly selected, respectively. Figure 5 displays the false-color image and the ground-truth map, while Table 1 provides detailed category information for this HSI dataset.
(2) Pavia University dataset: The second dataset is the Pavia University dataset, acquired over the University of Pavia by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor. The HSI of this scene comprises 610 × 340 pixels, with 115 bands and a spatial resolution of 1.3 m/pixel. After removing the interference bands, the dataset includes 103 available bands. The dataset contains nine different categories of ground objects, with 42,776 reference samples. For training, validation, and testing purposes, 1%, 1%, and 98% of each category's samples were randomly selected, respectively. Figure 6 displays the false-color image and the ground-truth map, while Table 2 provides detailed class information for this HSI dataset.
(3) Salinas dataset: The third dataset is the Salinas dataset, acquired by the AVIRIS imaging spectrometer over the Salinas Valley. The HSI of this scene comprises 512 × 217 pixels, with 224 bands and a spatial resolution of 3.7 m/pixel. After discarding 20 interference bands, the dataset includes 204 available bands. The dataset contains 16 different categories of ground objects, with 54,129 samples available for the experiment. For training, validation, and testing purposes, 1%, 1%, and 98% of each category's samples were randomly selected, respectively. Figure 7 displays the false-color image and the ground-truth map, while Table 3 provides detailed class information for this HSI dataset.
Figure 5. Indian Pines images: (a) false-color image; (b) ground truth.
Table 1. Category information of Indian Pines Dataset.
No. | Class | Train. | Val. | Test.
1 | Alfalfa | 5 | 1 | 48
2 | Corn-notill | 143 | 14 | 1277
3 | Corn-mintill | 83 | 8 | 743
4 | Corn | 23 | 2 | 209
5 | Grass-pasture | 49 | 4 | 444
6 | Grass-trees | 74 | 7 | 666
7 | Grass-pasture-mowed | 2 | 1 | 23
8 | Hay-windrowed | 48 | 4 | 437
9 | Oats | 2 | 1 | 17
10 | Soybean-notill | 96 | 9 | 863
11 | Soybean-mintill | 246 | 24 | 2198
12 | Soybean-clean | 61 | 6 | 547
13 | Wheat | 21 | 2 | 189
14 | Woods | 129 | 12 | 1153
15 | Buildings-Grass-Trees-Drives | 38 | 3 | 339
16 | Stone-Steel-Towers | 9 | 1 | 85
Total |  | 1029 | 99 | 9238
Figure 6. Pavia University images: (a) false-color image; (b) ground truth.
Table 2. Category information of Pavia University dataset.
No. | Class | Train. | Val. | Test.
1 | Asphalt | 67 | 67 | 6497
2 | Meadows | 187 | 187 | 18,275
3 | Gravel | 21 | 21 | 2057
4 | Trees | 31 | 31 | 3002
5 | Painted metal sheets | 14 | 14 | 1317
6 | Bare Soil | 51 | 51 | 4927
7 | Bitumen | 14 | 14 | 1302
8 | Self-Blocking Bricks | 37 | 37 | 3608
9 | Shadows | 10 | 10 | 927
Total |  | 432 | 432 | 41,912
Figure 7. Salinas images: (a) false-color image; (b) ground truth.
Table 3. Category information of Salinas dataset.
No. | Class | Train. | Val. | Test.
1 | Brocoli_green_weeds_1 | 21 | 21 | 1967
2 | Brocoli_green_weeds_2 | 38 | 38 | 3650
3 | Fallow | 20 | 20 | 1936
4 | Fallow_rough_plow | 14 | 14 | 1366
5 | Fallow_smooth | 27 | 27 | 2624
6 | Stubble | 40 | 40 | 3879
7 | Celery | 36 | 36 | 3507
8 | Grapes_untrained | 113 | 113 | 11,045
9 | Soil_vinyard_develop | 63 | 63 | 6077
10 | Corn_senesced_green_weeds | 33 | 33 | 3212
11 | Lettuce_romaine_4 wk | 11 | 11 | 1046
12 | Lettuce_romaine_5 wk | 20 | 20 | 1887
13 | Lettuce_romaine_6 wk | 10 | 10 | 896
14 | Lettuce_romaine_7 wk | 11 | 11 | 1048
15 | Vinyard_untrained | 73 | 73 | 7122
16 | Vinyard_vertical_trellis | 19 | 19 | 1769
Total |  | 549 | 549 | 53,031

3.3. Comparison of Classification Results

In this section, we evaluate the performance of our proposed method and compare it with several deep learning-based networks on three datasets. We conducted 10 repeated experiments and report the experimental results as mean ± standard deviation. The classification accuracy of different classification methods on each dataset is presented in Table 4, Table 5 and Table 6. Additionally, we display the classification maps obtained by these methods in Figure 8, Figure 9 and Figure 10.
Experiments on the Indian Pines dataset demonstrate that our proposed method achieves the highest classification accuracy among the compared methods. The SSRN network extracts spectral and spatial features through consecutive spectral and spatial residual blocks, respectively, effectively alleviating the vanishing-gradient problem, and shows significant improvement over traditional methods. Our proposed method further improves the accuracy by incorporating attention mechanisms, which proves more effective than SSRN. As shown in Table 4, the proposed method improves the overall accuracy by 25.84% and 15.10% compared to DBMA and DBDA, respectively. Moreover, it also surpasses the advanced CNN network CGCNN.
As shown in Figure 8, our proposed method has fewer misclassification points, which is more consistent with the ground truth. In contrast, the traditional SVM method produces a lot of salt and pepper noise, resulting in many misclassifications. By combining spectral and coordinate attention, our network focuses on effective information, resulting in a significant reduction in the error rate and smoother classification maps.
Similar to the results on the Indian Pines dataset, our proposed method achieves the best classification results on the Pavia University dataset compared to other methods, demonstrating the stability of our network. As shown in Table 5, our proposed method outperforms current state-of-the-art methods, such as CGCNN, DBMA, and DBDA, improving the OA by 1.05%, 15.81%, and 7.16%, respectively. Moreover, our proposed MSSCA method achieves an accuracy above 95% in every category, indicating its effectiveness.
Figure 9 shows that our proposed MSSCA method has fewer misclassification points on the Pavia University dataset, which is more consistent with the ground truth compared to CGCNN, which has shown good performance on this dataset.
Table 6 presents the classification results on the Salinas dataset, where our proposed MSSCA method achieves the best overall accuracy (OA), average accuracy (AA), and Kappa statistics (KPP), with an OA of 99.41%. Moreover, our proposed method achieves almost the best classification results in every category.
The classification results of different methods on the Salinas dataset are shown in Figure 10, where our proposed MSSCA method outperforms other methods on categories that are easily misclassified, such as Lettuce_romaine_7 wk and Vinyard_untrained. The classification map generated by our method is more consistent with the ground truth, and the class boundaries are clearer.

3.4. Ablation Study

To evaluate the effectiveness of each module in the MSSCA architecture, we conducted a set of ablation experiments by splitting and combining different network modules. Table 7 presents the classification accuracy of different modules. As can be seen from the table, using only the SE or CA module results in lower OA compared to when both modules are combined. This indicates that the addition of both SE and CA modules improves the classification accuracy. The SE module focuses on the importance of channels, while the CA module focuses on the importance of spatial locations. By paying attention to both channel and coordinate information, the model can more effectively utilize relevant information, resulting in improved classification results. Moreover, incorporating the PCN module improves classification accuracy by providing more discriminative input and optimizing network feature modules.

3.5. Training Sample Ratio

As is well known, deep learning algorithms heavily depend on large amounts of high-quality labeled data, and network performance improves as the quantity of labeled data increases. In this section, we analyze the comparative results of different training ratios. Figure 11 presents the experimental results. For the Indian Pines dataset, we use 0.5%, 1%, 3%, 5%, and 10% of the samples as the training sets. For the Pavia University and Salinas datasets, we use 0.1%, 0.5%, 1%, 5%, and 10%, respectively.
As shown in Figure 11a–c, the classification accuracy of all three datasets increases as the training ratio increases. With sufficient training samples, almost perfect classification results can be achieved. Moreover, as the training ratio increases, the difference in classification accuracy between different methods becomes smaller. Notably, even with a small training ratio, our proposed MSSCA method outperforms other comparison methods. The performance of our proposed method exhibits a steady growth trend across all three datasets, indicating its effectiveness and stability.

3.6. Running Time

This section presents the training and testing times of different methods on different datasets, as shown in Table 8, Table 9 and Table 10. Since the goal of HSI classification is to assign a specific label to each pixel, we consider the time taken to classify all pixels as the test time. From the tables, we can see that SVM has a short training time, but it can only extract shallow features and has poor classification performance. Existing deep learning methods such as DBMA and DBDA perform well but have long testing times. In contrast, our proposed MSSCA method not only achieves outstanding classification performance, but also has a short testing time and low computational cost. This is because we use a lightweight attention mechanism, which reduces the computational cost while improving performance.

4. Conclusions

In this paper, we propose an effective deep learning method called MSSCA for HSI classification. In MSSCA, to reduce the computational burden caused by the dot-product operation, a down-sampling operation is introduced into MHSA, and the novel M-MHSA is proposed to depict the long-range dependencies of HSI pixels. On this basis, we integrate SE and CA networks to effectively leverage spectral and spatial coordinate information, which enhances network performance and classification results without compromising network complexity or computational costs. Three classical datasets, including Indian Pines, Pavia University, and Salinas, are used to evaluate the proposed method. The proposed method's performance was validated by comparison with classical methods such as SSRN, HybridSN, and DBDA. The proposed MSSCA method achieved an overall accuracy of 99.23% on the Indian Pines dataset, 99.26% on the Pavia University dataset, and 99.41% on the Salinas dataset, outperforming most existing HSI classification methods and highlighting the effectiveness and efficiency of our proposed method. In the future, we will continue to explore more lightweight and effective classification frameworks for HSI classification under complex conditions.

Author Contributions

Methodology, M.Z. and Y.D.; validation, Y.D.; writing—original draft preparation, Y.D.; writing—review and editing, M.Z., Y.D., W.S. and H.M.; supervision, M.Z., W.S. and H.M.; funding acquisition, W.S. and Q.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (61972240), and the Shanghai Science and Technology Commission part of the local university capacity building projects (20050501900).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used in this paper are available at https://ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes, accessed on 10 April 2022.

Acknowledgments

We thank the reviewers and editors for their professional suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Govender, M.; Chetty, K.; Naiken, V.; Bulcock, H. A comparison of satellite hyperspectral and multispectral remote sensing imagery for improved classification and mapping of vegetation. Water Sa 2019, 34, 147–154. [Google Scholar] [CrossRef] [Green Version]
  2. Govender, M.; Chetty, K.; Bulcock, H. A review of hyperspectral remote sensing and its application in vegetation and water resource studies. Water Sa 2009, 33, 145–151. [Google Scholar] [CrossRef] [Green Version]
  3. Khan, M.J.; Khan, H.S.; Yousaf, A.; Khurshid, K.; Abbas, A. Modern Trends in Hyperspectral Image Analysis: A Review. IEEE Access 2018, 6, 14118–14129. [Google Scholar] [CrossRef]
  4. Huang, H.; Liu, L.; Ngadi, M. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety. Sensors 2014, 14, 7248–7276. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Blanzieri, E.; Melgani, F. Nearest Neighbor Classification of Remote Sensing Images with the Maximal Margin Principle. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1804–1811. [Google Scholar] [CrossRef]
  6. Amini, A.; Homayouni, S.; Safari, A. Semi-supervised classification of hyperspectral image using random forest algorithm. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 2866–2869. [Google Scholar] [CrossRef]
  7. Li, S.; Jia, X.; Zhang, B. Superpixel-based Markov random field for classification of hyperspectral images. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium—IGARSS, Melbourne, Australia, 21–26 July 2013; pp. 3491–3494. [Google Scholar] [CrossRef]
  8. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  9. Qi, Y.; Wang, Y.; Zheng, X.; Wu, Z. Robust feature learning by stacked autoencoder with maximum correntropy criterion. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 6716–6720. [Google Scholar] [CrossRef]
  10. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
  11. Mughees, A.; Tao, L. Multiple deep-belief-network-based spectral-spatial classification of hyperspectral images. Tsinghua Sci. Technol. 2019, 24, 183–194. [Google Scholar] [CrossRef]
  12. Zheng, Z.; Zhang, Y.; Li, L.; Zhu, M.; He, Y.; Li, M.; Guo, Z.; Yang, X.; Liu, X. Classification based on deep convolutional neural networks with hyperspectral image. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1828–1831. [Google Scholar] [CrossRef]
  13. Ma, C.; Guo, M.Y. Hyperspectral Image Classification Based on Convolutional Neural Network. In Proceedings of the 2018 5th International Conference on Information, Cybernetics, and Computational Social Systems (ICCSS), Hangzhou, China, 16–19 August 2018; pp. 117–121. [Google Scholar] [CrossRef]
  14. Dai, X.; Xue, W. Hyperspectral Remote Sensing Image Classification Based on Convolutional Neural Network. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 10373–10377. [Google Scholar] [CrossRef]
  15. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  16. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef] [Green Version]
  17. Li, Y.; Zhang, H.; Shen, Q. Spectral—Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef] [Green Version]
  18. Yang, J.; Zhao, Y.Q.; Chan, W.C. Learning and Transferring Deep Joint Spectral—Spatial Features for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
  19. Chen, C.; Zhang, J.J.; Zheng, C.H.; Yan, Q.; Xun, L.N. Classification of Hyperspectral Data Using a Multi-Channel Convolutional Neural Network. In Proceedings of the Intelligent Computing Methodologies; Huang, D.S., Gromiha, M.M., Han, K., Hussain, A., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 81–92. [Google Scholar] [CrossRef]
  20. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  21. Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral—Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068. [Google Scholar] [CrossRef] [Green Version]
  22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  23. Huang, G.; Liu, Z.; Maaten, L.V.D.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
  24. Qin, A.; Shang, Z.; Tian, J.; Wang, Y.; Zhang, T.; Tang, Y.Y. Spectral-Spatial Graph Convolutional Networks for Semi-Supervised Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 241–245. [Google Scholar] [CrossRef]
  25. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. Adv. Neural Inf. Process. Syst. 2017, 30, 2–8. Available online: http://arxiv.org/abs/1706.03762v5 (accessed on 15 June 2023).
  26. Song, C.; Mei, S.; Ma, M.; Xu, F.; Zhang, Y.; Du, Q. Hyperspectral Image Classification Using Hierarchical Spatial-Spectral Transformer. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 3584–3587. [Google Scholar] [CrossRef]
  27. Valsalan, P.; Latha, G.C.P. Hyperspectral image classification model using squeeze and excitation network with deep learning. Comput. Intell. Neurosci. 2022, 2022, 9430779. [Google Scholar] [CrossRef]
  28. Sun, H.; Zheng, X.; Lu, X.; Wu, S. Spectral—Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3232–3245. [Google Scholar] [CrossRef]
  29. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582. [Google Scholar] [CrossRef] [Green Version]
  30. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar] [CrossRef]
  31. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef] [Green Version]
  32. Liu, Q.; Xiao, L.; Yang, J.; Chan, J.C. Content-Guided Convolutional Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6124–6137. [Google Scholar] [CrossRef]
  33. Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification. Remote Sens. 2019, 11, 1307. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The overall network architecture of the proposed MSSCA.
Figure 2. The architecture of the modified multi-head self-attention.
Figure 3. The architecture of spectral attention module.
Figure 4. The architecture of the coordinate attention.
Figure 8. Classification maps of different methods for the Indian Pines dataset.
Figure 9. Classification maps of different methods for the Pavia University dataset.
Figure 10. Classification maps of different methods for the Salinas dataset.
Figure 11. The OA of different methods with varying ratios of training samples. (a) Indian Pines. (b) Pavia University. (c) Salinas.
Table 4. Classification performance of different methods on the Indian Pines dataset.
Class | SVM | FDSSC | SSRN | HybridSN | CGCNN | DBMA | DBDA | MSSCA
1 | 18.88 ± 7.76 | 45.20 ± 30.35 | 75.95 ± 29.99 | 97.16 ± 2.27 | 97.04 ± 2.46 | 39.23 ± 19.48 | 73.37 ± 25.28 | 97.54 ± 1.58
2 | 46.05 ± 6.40 | 78.02 ± 12.10 | 85.75 ± 4.76 | 97.15 ± 0.56 | 98.62 ± 0.59 | 70.87 ± 10.59 | 79.10 ± 8.95 | 99.01 ± 0.67
3 | 45.88 ± 15.31 | 75.69 ± 14.23 | 83.49 ± 12.87 | 98.25 ± 0.65 | 98.61 ± 1.13 | 67.69 ± 14.74 | 79.24 ± 12.45 | 99.35 ± 0.62
4 | 30.05 ± 7.38 | 74.05 ± 30.61 | 77.95 ± 27.61 | 97.90 ± 2.75 | 97.91 ± 1.32 | 64.24 ± 20.66 | 82.49 ± 16.99 | 98.86 ± 1.06
5 | 71.42 ± 21.58 | 96.71 ± 3.60 | 96.07 ± 7.65 | 98.49 ± 0.96 | 98.64 ± 1.31 | 89.66 ± 6.55 | 96.89 ± 3.97 | 99.44 ± 0.78
6 | 74.53 ± 4.01 | 90.74 ± 12.06 | 94.83 ± 4.15 | 98.92 ± 0.38 | 99.75 ± 0.16 | 85.52 ± 4.97 | 95.36 ± 5.63 | 99.75 ± 0.18
7 | 25.70 ± 15.00 | 36.11 ± 28.85 | 70.04 ± 29.66 | 100.00 ± 0.00 | 99.20 ± 0.16 | 32.95 ± 26.11 | 30.56 ± 14.81 | 100.00 ± 0.00
8 | 87.20 ± 3.06 | 97.55 ± 5.29 | 97.82 ± 3.43 | 99.67 ± 0.31 | 99.63 ± 0.48 | 99.03 ± 2.20 | 100.00 ± 0.00 | 99.95 ± 0.10
9 | 18.28 ± 9.91 | 43.79 ± 29.36 | 79.96 ± 28.87 | 92.38 ± 5.25 | 100.00 ± 0.00 | 12.05 ± 5.46 | 45.58 ± 17.69 | 86.67 ± 15.15
10 | 50.16 ± 8.78 | 80.41 ± 12.34 | 87.96 ± 7.00 | 98.74 ± 0.81 | 97.29 ± 1.48 | 70.97 ± 13.57 | 84.06 ± 8.54 | 97.89 ± 1.26
11 | 52.03 ± 4.62 | 80.90 ± 8.31 | 86.75 ± 6.39 | 99.16 ± 0.26 | 99.22 ± 0.46 | 73.04 ± 5.70 | 83.82 ± 9.71 | 99.49 ± 0.49
12 | 34.82 ± 10.77 | 74.71 ± 27.55 | 85.83 ± 5.92 | 97.47 ± 1.17 | 97.95 ± 0.91 | 63.04 ± 15.42 | 81.08 ± 12.23 | 98.94 ± 0.93
13 | 76.72 ± 5.64 | 92.72 ± 12.56 | 99.50 ± 1.49 | 98.02 ± 1.64 | 99.45 ± 0.01 | 92.24 ± 7.71 | 93.20 ± 8.57 | 99.78 ± 0.44
14 | 79.21 ± 5.25 | 91.99 ± 4.60 | 93.93 ± 3.81 | 99.32 ± 0.44 | 99.80 ± 0.13 | 93.12 ± 4.56 | 93.90 ± 4.10 | 99.96 ± 0.04
15 | 48.80 ± 20.15 | 69.33 ± 35.62 | 93.06 ± 4.79 | 97.64 ± 1.58 | 98.01 ± 0.02 | 67.51 ± 12.90 | 89.00 ± 13.52 | 98.64 ± 1.87
16 | 98.50 ± 2.57 | 80.07 ± 7.38 | 95.68 ± 3.51 | 91.02 ± 4.07 | 99.04 ± 0.12 | 81.27 ± 12.08 | 83.83 ± 6.76 | 97.10 ± 2.36
OA (%) | 55.98 ± 2.75 | 81.89 ± 6.27 | 88.63 ± 3.98 | 98.44 ± 0.16 | 98.85 ± 0.18 | 73.39 ± 2.88 | 84.13 ± 1.19 | 99.23 ± 0.19
AA (%) | 53.64 ± 3.45 | 76.00 ± 11.45 | 87.79 ± 8.56 | 97.58 ± 0.82 | 98.76 ± 0.18 | 68.91 ± 4.26 | 80.72 ± 4.33 | 98.27 ± 1.03
KPP (×100) | 48.72 ± 3.32 | 79.19 ± 7.37 | 86.97 ± 4.63 | 98.23 ± 0.18 | 98.69 ± 0.21 | 69.53 ± 3.27 | 81.85 ± 1.39 | 99.12 ± 0.21
Table 5. Classification performance of different methods on the Pavia University dataset.
Class | SVM | FDSSC | SSRN | HybridSN | CGCNN | DBMA | DBDA | MSSCA
1 | 85.83 ± 7.24 | 91.37 ± 5.38 | 91.80 ± 7.91 | 95.13 ± 1.81 | 98.49 ± 0.81 | 91.41 ± 2.88 | 93.76 ± 2.83 | 99.10 ± 0.60
2 | 73.97 ± 4.12 | 94.37 ± 4.01 | 86.40 ± 4.03 | 99.16 ± 0.49 | 98.92 ± 0.40 | 89.02 ± 5.77 | 96.20 ± 2.11 | 99.96 ± 0.03
3 | 31.14 ± 8.49 | 59.20 ± 18.76 | 59.59 ± 20.11 | 88.73 ± 4.90 | 87.73 ± 5.44 | 65.63 ± 23.57 | 81.67 ± 9.26 | 95.76 ± 2.94
4 | 70.16 ± 25.62 | 97.69 ± 1.72 | 98.38 ± 3.37 | 98.18 ± 0.77 | 97.11 ± 1.26 | 94.38 ± 4.47 | 98.22 ± 1.39 | 97.23 ± 1.62
5 | 97.27 ± 2.47 | 99.40 ± 0.75 | 98.76 ± 1.58 | 98.98 ± 0.93 | 100.00 ± 0.00 | 99.44 ± 0.94 | 98.17 ± 2.50 | 99.95 ± 0.09
6 | 45.73 ± 22.56 | 86.06 ± 9.10 | 77.77 ± 8.00 | 98.66 ± 0.96 | 99.55 ± 0.69 | 74.11 ± 11.22 | 90.50 ± 8.77 | 99.95 ± 0.07
7 | 43.20 ± 7.05 | 90.68 ± 7.72 | 65.61 ± 25.35 | 96.64 ± 2.37 | 99.11 ± 0.64 | 66.72 ± 14.16 | 85.39 ± 15.01 | 99.85 ± 0.27
8 | 64.45 ± 9.43 | 68.03 ± 20.33 | 74.50 ± 14.25 | 90.69 ± 2.72 | 97.77 ± 1.93 | 66.76 ± 16.60 | 79.61 ± 9.58 | 98.13 ± 1.98
9 | 99.90 ± 0.11 | 96.91 ± 1.91 | 98.28 ± 1.61 | 97.21 ± 1.86 | 99.98 ± 0.04 | 90.27 ± 11.33 | 94.35 ± 3.64 | 99.74 ± 0.16
OA (%) | 69.86 ± 2.21 | 88.46 ± 4.24 | 82.10 ± 3.01 | 97.01 ± 0.69 | 98.21 ± 0.13 | 83.45 ± 3.58 | 92.10 ± 1.12 | 99.26 ± 0.18
AA (%) | 67.96 ± 5.27 | 87.08 ± 4.55 | 83.45 ± 2.55 | 95.93 ± 0.87 | 97.63 ± 0.26 | 81.97 ± 4.60 | 90.88 ± 1.41 | 98.85 ± 0.30
KPP (×100) | 58.26 ± 3.78 | 84.61 ± 5.83 | 75.88 ± 4.07 | 96.02 ± 0.92 | 97.63 ± 0.18 | 77.85 ± 4.93 | 89.52 ± 1.45 | 99.02 ± 0.24
Table 6. Classification performance of different methods on the Salinas dataset.
Class | SVM | FDSSC | SSRN | HybridSN | CGCNN | DBMA | DBDA | MSSCA
1 | 92.70 ± 7.26 | 96.81 ± 9.56 | 96.51 ± 6.29 | 99.79 ± 0.24 | 99.97 ± 0.04 | 97.16 ± 8.39 | 95.67 ± 8.61 | 99.98 ± 0.02
2 | 98.61 ± 1.01 | 99.89 ± 0.29 | 92.61 ± 12.10 | 99.97 ± 0.02 | 99.12 ± 0.82 | 98.48 ± 2.16 | 99.99 ± 0.02 | 100.00 ± 0.00
3 | 75.17 ± 7.04 | 93.61 ± 4.44 | 92.84 ± 7.88 | 99.96 ± 0.04 | 66.86 ± 3.93 | 95.16 ± 2.74 | 97.65 ± 1.25 | 100.00 ± 0.00
4 | 96.79 ± 0.99 | 95.65 ± 3.38 | 95.55 ± 3.39 | 98.35 ± 1.05 | 99.79 ± 0.18 | 85.27 ± 4.98 | 90.16 ± 3.58 | 99.88 ± 0.12
5 | 91.04 ± 5.51 | 95.99 ± 6.43 | 89.26 ± 8.50 | 99.93 ± 0.07 | 95.67 ± 4.80 | 94.43 ± 6.80 | 92.90 ± 6.88 | 98.28 ± 1.81
6 | 99.87 ± 0.28 | 99.99 ± 1.62 | 99.91 ± 0.15 | 99.93 ± 0.10 | 99.76 ± 0.30 | 99.23 ± 1.14 | 99.88 ± 0.23 | 99.96 ± 0.07
7 | 94.30 ± 2.30 | 99.27 ± 0.84 | 98.84 ± 2.19 | 100.00 ± 0.00 | 99.91 ± 0.05 | 95.84 ± 5.05 | 99.71 ± 0.21 | 99.98 ± 0.03
8 | 65.65 ± 3.55 | 84.04 ± 6.33 | 76.61 ± 8.99 | 98.81 ± 0.88 | 91.43 ± 3.59 | 81.52 ± 8.59 | 81.60 ± 9.69 | 99.12 ± 0.81
9 | 95.03 ± 6.16 | 98.88 ± 0.76 | 98.73 ± 1.33 | 99.96 ± 0.02 | 99.48 ± 0.31 | 98.53 ± 1.52 | 97.80 ± 1.98 | 100.00 ± 0.00
10 | 80.87 ± 11.45 | 95.96 ± 2.68 | 94.79 ± 3.45 | 98.96 ± 1.06 | 93.76 ± 2.90 | 92.15 ± 5.03 | 94.29 ± 2.99 | 97.03 ± 2.46
11 | 58.82 ± 27.59 | 100.00 ± 0.00 | 93.23 ± 4.40 | 99.21 ± 1.05 | 97.50 ± 2.17 | 80.78 ± 17.94 | 93.45 ± 4.67 | 99.87 ± 0.18
12 | 86.41 ± 10.28 | 99.00 ± 1.35 | 94.51 ± 7.94 | 99.81 ± 0.33 | 99.82 ± 0.31 | 97.75 ± 2.10 | 98.56 ± 1.31 | 100.00 ± 0.00
13 | 81.66 ± 11.84 | 98.24 ± 2.60 | 92.66 ± 7.85 | 98.77 ± 2.28 | 98.28 ± 1.79 | 86.76 ± 16.37 | 99.53 ± 0.24 | 99.93 ± 0.09
14 | 80.08 ± 14.32 | 94.24 ± 4.79 | 97.01 ± 1.50 | 99.60 ± 0.45 | 98.10 ± 2.05 | 89.69 ± 7.44 | 95.76 ± 1.86 | 98.70 ± 0.83
15 | 48.14 ± 24.59 | 77.43 ± 9.62 | 69.53 ± 10.99 | 97.88 ± 2.67 | 75.90 ± 12.45 | 75.30 ± 10.42 | 80.91 ± 5.96 | 99.33 ± 0.41
16 | 88.65 ± 15.52 | 99.66 ± 0.69 | 99.02 ± 1.39 | 100.00 ± 0.00 | 96.12 ± 0.64 | 96.39 ± 5.84 | 99.11 ± 1.73 | 99.55 ± 0.55
OA (%) | 80.50 ± 2.68 | 91.23 ± 1.94 | 86.85 ± 1.98 | 99.27 ± 0.29 | 92.78 ± 1.20 | 88.29 ± 2.03 | 91.41 ± 2.86 | 99.41 ± 0.32
AA (%) | 83.36 ± 5.23 | 94.77 ± 1.43 | 92.60 ± 1.20 | 99.43 ± 0.19 | 94.47 ± 0.56 | 91.53 ± 2.09 | 94.81 ± 0.91 | 99.48 ± 0.25
KPP (×100) | 78.21 ± 3.06 | 90.23 ± 2.18 | 85.33 ± 2.20 | 99.19 ± 0.32 | 91.94 ± 1.36 | 86.96 ± 2.29 | 90.41 ± 3.22 | 99.34 ± 0.35
Table 7. Ablation study on attention modules (OA%).
Dataset | CA | SE | CA + SE | PCN + SE + CA
Indian Pines | 98.19 ± 0.17 | 98.18 ± 0.84 | 99.15 ± 0.13 | 99.23 ± 0.19
Pavia University | 98.09 ± 0.40 | 98.17 ± 0.24 | 98.52 ± 0.18 | 99.26 ± 0.18
Salinas | 97.79 ± 0.26 | 98.20 ± 0.73 | 98.60 ± 0.47 | 99.41 ± 0.32
Table 8. Running time (s) of different methods on the Indian Pines dataset.
Dataset | Methods | Train Time | Test Time
Indian Pines | SVM | 44.50 | 15.43
Indian Pines | FDSSC | 129.39 | 205.27
Indian Pines | SSRN | 77.23 | 204.72
Indian Pines | HybridSN | 239.82 | 6.89
Indian Pines | CGCNN | 108.23 | 1.72
Indian Pines | DBMA | 107.87 | 59.60
Indian Pines | DBDA | 100.10 | 728.35
Indian Pines | MSSCA | 52.93 | 0.52
Table 9. Running time (s) of different methods on the Pavia University dataset.
Dataset | Methods | Train Time | Test Time
Pavia University | SVM | 16.42 | 53.05
Pavia University | FDSSC | 81.14 | 171.25
Pavia University | SSRN | 132.98 | 10.11
Pavia University | HybridSN | 97.12 | 52.60
Pavia University | CGCNN | 679.44 | 6.69
Pavia University | DBMA | 146.28 | 201.41
Pavia University | DBDA | 58.81 | 115.80
Pavia University | MSSCA | 99.65 | 8.93
Table 10. Running time (s) of different methods on the Salinas dataset.
Dataset | Methods | Train Time | Test Time
Salinas | SVM | 9.85 | 3.82
Salinas | FDSSC | 129.39 | 205.27
Salinas | SSRN | 77.23 | 204.72
Salinas | HybridSN | 375.15 | 46.48
Salinas | CGCNN | 340.81 | 4.69
Salinas | DBMA | 84.12 | 323.03
Salinas | DBDA | 62.29 | 161.02
Salinas | MSSCA | 69.49 | 3.41
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
