Article

SS-TMNet: Spatial–Spectral Transformer Network with Multi-Scale Convolution for Hyperspectral Image Classification

1 School of Information Engineering, East China Jiaotong University, Nanchang 330013, China
2 Department of Computer and Information Science, University of Macau, Macau 519000, China
3 School of Computer Science, Shenyang Aerospace University, Shenyang 110136, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(5), 1206; https://doi.org/10.3390/rs15051206
Submission received: 18 January 2023 / Revised: 17 February 2023 / Accepted: 20 February 2023 / Published: 22 February 2023
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Classification II)

Abstract

Hyperspectral image (HSI) classification is a significant foundation for remote sensing image analysis, widely used in biology, aerospace, and other applications. Convolutional neural networks (CNNs) and attention mechanisms have shown outstanding ability in HSI classification and have been widely studied in recent years. However, existing CNN-based and attention-based methods cannot fully exploit spatial–spectral information, which limits further improvement of HSI classification accuracy. This paper proposes a new spatial–spectral Transformer network with multi-scale convolution (SS-TMNet), which can effectively extract local and global spatial–spectral information. SS-TMNet includes two key modules, i.e., a multi-scale 3D convolution projection module (MSCP) and a spatial–spectral attention module (SSAM). The MSCP uses multi-scale 3D convolutions with different depths to extract fused spatial–spectral features. The SSAM includes three branches: height spatial attention, width spatial attention, and spectral attention, which extract the fused information of spatial and spectral features. The proposed SS-TMNet was tested on three widely used HSI datasets: Pavia University, Indian Pines, and Houston2013. The experimental results show that the proposed SS-TMNet is superior to the existing methods.

1. Introduction

Hyperspectral image classification is a significant application of remote sensing technology. Hyperspectral remote sensing images contain many spectral bands, which provide rich information for more precise classification of scene objects. In a hyperspectral image, each pixel is a high-dimensional vector with hundreds of bands, whose values represent the spectral reflectance at the corresponding wavelengths [1]. HSI classification is the pixel-by-pixel classification of remote sensing scenes, which is extensively used in agriculture, aerospace, biology, and other fields [2,3].
In the past two decades, hyperspectral image classification has received significant attention as an essential application of remote sensing technology. Some traditional machine learning methods [4,5,6] were proposed for HSI classification tasks in the early years. For instance, the support vector machine (SVM) [4] and K-nearest neighbor (KNN) [5] were used to capture abundant spectral information in HSI classification. Li et al. [6] presented a multinomial logistic regression method to classify HSIs using semi-supervised learning of a posterior distribution. An extended morphological profiles (EMPs) method [7] was proposed to handle the spatial information in HSIs through multiple morphological operations. Although the above HSI classification methods have proven effective in some cases, their classification performance is unsatisfactory when the environment is very complex.
With the development of deep learning, CNNs have made significant breakthroughs in many image-related fields, such as image classification [8,9,10], object detection [11], and instance segmentation [12]. Owing to the numerous bands of hyperspectral images, the performance of ordinary classifiers degrades as the dimensionality increases, and so does the accuracy. Therefore, traditional CNN-based classifiers designed for RGB images cannot be directly used for HSI classification tasks. Researchers have conducted much work and proposed a series of methods. For instance, an HSI classification method based on 2D-CNN proposed by Song et al. [13] used multi-layer feature fusion and residual connections to build the network. Chen et al. [14] used a 3D-CNN-based method for HSI classification and proposed a method combining 3D-CNN and regularization to extract fused global characteristics. Due to the strong ability of CNNs to extract local spatial features, these methods have shown promising results. However, CNN-based methods cannot pay sufficient attention to the representation of spectral features, resulting in low utilization of global spectral information and hindering further improvement of model performance. Chen et al. [15] proposed a method based on stacked autoencoders (SAE) to classify HSIs through layer-by-layer training. Mou et al. [16] presented a new recurrent neural network (RNN) method for HSI classification, which treats image pixels as sequence data for analysis and processing. However, this method cannot capture the long-range relationships between spectra, resulting in unsatisfactory classification results.
Recently, a method based on the self-attention mechanism, named Transformer [17], was presented and shows excellent performance in natural language processing tasks. Thereafter, many researchers [18,19] have been committed to introducing Transformer into the field of computer vision. Dosovitskiy et al. [18] used Transformer for image recognition and proposed a method named Vision Transformer (ViT), which divides the image into fixed-size patches, adds position coding to obtain tokens, and finally feeds them into a Transformer encoder for training. Due to the excellent performance of Transformer and its powerful ability to process sequence information, many researchers have also applied Transformer to the hyperspectral field. He et al. [19] presented a bidirectional encoder representation Transformer for HSI classification (HSI-BERT) to capture the correlation between spectra using bidirectional Transformer encoder representations. However, these networks do not effectively employ the local spatial features of HSIs.
In general, all of the above methods for HSI classification have shortcomings, which are summarized as follows. CNN-based methods [20,21,22,23] pay too much attention to local spatial correlation and therefore cannot capture long-range spectral correlation, which limits the use of the high-dimensional bands of HSIs. Even in adjacent spectral domains, it is hard for CNN-based methods to capture the subtle discrepancies between different spectra. RNN-based methods [16], due to the problems of gradient vanishing and gradient explosion, cannot learn the long-term dependence of spectral data well. Transformer-based methods [18,24], although advantageous for establishing long-range dependencies, cannot effectively extract important spatial context information and fused spatial–spectral features. Some improved Transformer-based methods [25,26,27,28,29] use well-designed CNNs for spatial feature extraction before Transformer processing, but still cannot effectively capture fused spatial–spectral information. Some HSI classification methods based on graph convolutional neural networks [30,31,32,33,34] produce unsatisfactory results due to their large number of parameters and overfitting problems.
In order to solve the above problems, this work presents a spatial–spectral Transformer network with multi-scale convolution (SS-TMNet) for HSI classification, which can more effectively utilize local and global spatial–spectral information. SS-TMNet includes two key modules: a multi-scale 3D convolution projection module (MSCP) and a spatial–spectral attention module (SSAM). Specifically, we utilize the MSCP module for initial feature mapping to capture the fused spatial–spectral features, and employ the SSAM module to encode the height, width, and spectral dimension features, respectively, to capture the local and global dependencies of each dimension. The main contributions of this work are as follows.
  • We design a new Transformer-based HSI classification method (SS-TMNet), which uses multi-scale convolution and spatial–spectral attention to extract local and global information efficiently.
  • We design an MSCP module to extract the fused spatial–spectral features as the initial feature projection. This module uses multi-scale 3D convolutions and feature fusion to extract fused spatial–spectral features from multiple scales efficiently.
  • We propose an SSAM module to encode the input features from the height, width, and spectral dimensions. We use multi-dimensional convolution and self-attention to extract more effective local and global spatial–spectral features.
  • We have conducted extensive experiments based on three benchmark datasets. The experimental results show that the proposed SS-TMNet outperforms the state-of-the-art CNN-based and Transformer-based hyperspectral image classifiers.
The structure of the work is as follows. Section 2 introduces the related work. Section 3 introduces the proposed SS-TMNet architecture, and then introduces the proposed MSCP module and SSAM module in detail. Section 4 reports and analyzes the experimental results. Section 5 summarizes this work.

2. Related Work

Hyperspectral image classification technology is one of the essential technologies in the field of remote sensing. After years of research, researchers have presented many methods for HSI classification tasks [35,36,37,38,39]. This section mainly summarizes related work in three parts: traditional classification methods, CNN-based methods, and Transformer-based methods.

2.1. Traditional Classification Methods

Some kernel-based methods were proposed in the early stage of HSI classification research. For instance, Melgani et al. [4] applied the SVM method to achieve HSI classification. Unlike SVM, the multiple kernel learning (MKL) method proposed by Rakotomamonjy et al. [40] aims to learn the kernel and the related predictors simultaneously in a supervised learning setting. However, both methods focus only on the feature information of the spectral dimension and overlook the spatial dimension. Benediktsson et al. [7] proposed extended morphological profiles (EMPs) to study the spatial feature information of HSIs. Extended attribute profiles and extended multi-attribute profiles (EMAP) were presented in [41] for capturing spatial information. In order to make better use of the spatial features in HSIs, Li et al. [42] presented a generalized composite kernel (GCK) method to model spatial information from the extended multi-attribute profiles. In addition, due to the high-dimensional characteristics of HSIs, many works specifically explore how to reduce dimensionality and extract features more effectively. For instance, Bandos et al. [43] presented a linear discriminant analysis (LDA) method, which can be utilized to solve related ill-posed problems for HSIs. Villa et al. [44] applied the independent component analysis (ICA) method to HSI classification and presented the independent component discriminant analysis (ICDA) method, which calculates the density function of each independent component using a nonparametric kernel density estimator. Furthermore, Licciardi et al. [45] compared linear and nonlinear PCA (NLPCA) for HSI classification. There are other methods in the literature, such as DSML-FS based on multimodal learning, which was presented by Zhang et al. [46]. This method utilizes joint structured sparse regularization to explore the relationship between the intrinsic structure of the data and its different characteristics. Jouni et al. [47] proposed an HSI classification method based on tensor decomposition and mathematical morphology by modeling the data as a higher-order tensor. Additionally, Luo et al. [48] introduced a new dimensionality reduction method for HSI classification, known as local geometric structure Fisher analysis (LGSFA), which uses neighboring points and corresponding intra-class reconstruction points to enhance intra-class compactness and inter-class separability. However, these methods are based on shallow feature representations, which can lead to unsatisfactory classification results in complex scenes.

2.2. CNN-Based Methods

With the development of deep learning, CNNs have performed excellently in extracting local spatial features. Therefore, numerous CNN-based methods have been presented for the HSI classification task. Hu et al. [49] introduced CNNs into the HSI classification task and proposed a five-layer 1D-CNN-based method, which improved on the traditional classification methods. Hao et al. [20] presented a 2D-CNN-based method to classify ground plants. In addition, Fang et al. [22] presented a 3D asymmetric inception network to extract spatial–spectral features and overcome the overfitting problem. Chang et al. [23] presented a novel 3D-CNN-based method to capture the joint spatial–spectral information by stacking 3D-CNN and 2D-CNN layers. In order to capture fused spatial–spectral information more effectively, He et al. [21] used multi-scale 3D-CNN for HSI classification and presented a multi-scale 3D deep convolutional neural network (M3D-DCNN). Although CNN-based methods perform well in HSI classification, capturing the long-range dependence between spectra remains challenging. Furthermore, the excessive dependence of CNNs on local spatial information makes it difficult to improve classification accuracy further.

2.3. Transformer-Based Methods

Recently, due to the excellent performance of Transformer in the NLP field, many researchers have applied it to the image classification field. Dosovitskiy et al. [18] presented the ViT method based on Transformer for image classification. However, in deep ViT models the top-level feature maps become similar, which prevents the self-attention mechanism from learning deeper feature representations. Zhou et al. [24] presented a ViT-based method that can effectively use deep architectures, called DeepViT, which generates a new set of attention maps by dynamically aggregating multiple attention maps. Although spectral dependence is considered in these methods, the effect of spatial features is omitted. Considering the superior performance of CNNs in extracting local spatial features, many researchers applied convolution within Transformer to obtain better performance. Graham et al. [50] re-examined CNNs, applied them to ViT, and proposed a hybrid CNN–ViT network for image classification, called LeViT. In order to extract multi-scale features from ViT, Chen et al. [51] presented a multi-scale Transformer using cross attention, called CrossViT, which uses multiple multi-scale encoders with two branches for feature extraction. Many researchers have also introduced Transformer-based methods into the HSI classification field. For example, He et al. [25] presented an HSI classification method called the spatial–spectral transformer (SST), which uses VGGNet [52] to capture basic spatial information and then feeds it into a Transformer to capture spectral information. Yang et al. [53] presented a novel Transformer-based method called HiT for HSI classification, which uses double-branch 3D convolution as the feature mapping, embeds convolution in the encoder of the Transformer architecture, and extracts feature information from different dimensions using convolution. However, these methods do not effectively exploit the advantages of convolution within the attention mechanism, making it difficult to improve the classification performance further. In this work, we propose a novel Transformer-based method called SS-TMNet, which can effectively employ the advantages of convolution and attention mechanisms to extract global and local spatial–spectral features. In SS-TMNet, two modules, MSCP and SSAM, are proposed to extract multi-scale fused spatial–spectral information and construct cross-dimensional interactions between different dimensions, respectively.

3. The Proposed SS-TMNet Method

In this section, we introduce our SS-TMNet method in three aspects: the overall architecture of SS-TMNet, the MSCP module, and the encoder sequence module.

3.1. The Framework of the Proposed SS-TMNet

This work presents a novel HSI classification method called SS-TMNet based on Transformer. SS-TMNet consists of two key modules: the MSCP module and the SSAM module. MSCP is used for feature projection of the initial HSI image, where multi-scale 3D convolution is utilized to capture the fused multi-scale spatial–spectral information. SSAM is used to capture local and global spatial–spectral dependencies from different spatial and spectral dimensions. The encoder sequence includes four stages where a downsampling layer is added to reduce the dimensions after the second stage. Moreover, a global residual connection connects the input and the final output. Figure 1 shows the overall architecture of our SS-TMNet method.

3.2. MSCP Module

3.2.1. Multi-Scale 3D Convolution

Hyperspectral images differ from ordinary RGB images. Because of the high-dimensional characteristics of HSIs, ordinary 2D convolution cannot effectively capture the fused spatial–spectral information, since it ignores the dependence between spectra. Meanwhile, 3D convolution processes the features along three dimensions and can therefore extract features more effectively. In general, HSI data can be represented by a tensor of size $C \times S \times H \times W$, where $C$ represents the number of channels, $S$ denotes the spectral domain, and $H$ and $W$ are the height and width in the spatial domain. Based on this, we can apply 3D convolution to the initial HSI data to extract a more effective feature representation for subsequent network learning. More specifically, the formula for 3D convolution is as follows:
$$v_{ij}^{xyz} = F\Big(b_{ij} + \sum_{m}\sum_{p=0}^{P_i-1}\sum_{q=0}^{Q_i-1}\sum_{r=0}^{R_i-1} w_{ijm}^{pqr}\, v_{(i-1)m}^{(x+p)(y+q)(z+r)}\Big),$$
where $m$ indexes the feature maps in the $(i-1)$th layer connected to the $j$th feature map; $P_i$ and $Q_i$ are the height and width of the spatial convolution kernel; $R_i$ is the size of the 3D kernel along the spectral dimension; $w_{ijm}^{pqr}$ is the value at the $(p,q,r)$th position of the kernel connected to the $m$th feature map of the preceding layer; $b_{ij}$ is the bias of the $j$th feature map in the $i$th layer; and $F$ represents the activation function.
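To make this operation concrete, the following minimal PyTorch sketch applies a single 3D convolution with a ReLU activation to an HSI patch of size $C \times S \times H \times W$; the channel count is illustrative, while the (11, 3, 3) kernel matches the first MSCP stage described in Section 3.2.2.

```python
import torch
import torch.nn as nn

# Hypothetical HSI patch: batch of 2, C = 1 input channel, S = 103 bands, 15 x 15 spatial window.
x = torch.randn(2, 1, 103, 15, 15)

# Kernel (11, 3, 3): 11 taps along the spectral axis, 3 x 3 in the spatial plane.
conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=(11, 3, 3), padding=(5, 1, 1))

# v = F(b + sum over m, p, q, r of w * v_prev), evaluated at every output position.
v = torch.relu(conv3d(x))
print(v.shape)  # torch.Size([2, 8, 103, 15, 15])
```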
We studied the data characteristics of HSIs and found that multi-scale 3D convolution can perform feature mapping more effectively than ordinary 3D convolution. As shown in Figure 2, we developed a multi-scale 3D convolution to build the data mapping module and propose a new feature mapping module called MSCP. The multi-scale convolution layer uses 3D convolutions of different sizes to extract feature maps. From a global perspective, we extract features from the feature information of interest in the image to obtain new feature maps at different scales and then fuse them to obtain the spatial–spectral feature map. The feature map obtained through the MSCP module contains rich fused spatial–spectral information, which enhances the efficiency of feature extraction in subsequent networks.

3.2.2. Module Composition

Figure 2 shows that the MSCP module comprises multiple multi-scale 3D convolution layers and feature fusion modules. MSCP processes the input HSI data in three phases. Suppose $X \in \mathbb{R}^{C \times S \times H \times W}$ is a patch of the input data (in this paper, the input image is divided into several patches of spatial size $H \times W$ for processing, and both $H$ and $W$ are set to 15 in the experiments). In the first phase $P_1$, the input data $X$ are fed into a 3D convolution layer with a ReLU operation to extract the spatial–spectral characteristics $X_1$, where the convolution kernel size is set to (11, 3, 3). Then, $X_1$ is fed into a multi-scale 3D convolution layer $M_1$ with four different convolution kernel sizes, mainly used to extract spectral characteristics at different scales, and the resulting multi-scale features are fused with an addition operation. To prevent overfitting, we use a residual connection to link the fused multi-scale features to the output $X_1$ of the first 3D convolution layer. BatchNorm and ReLU operations are then applied to produce the first-stage output $X_{P_1}$. The feature mapping in the first stage is formulated as follows:
$$X_1 = \mathrm{ReLU}(\mathrm{Conv3D}(X)), \qquad X_{P_1} = M_1(X_1) = \mathrm{ReLU}\Big(\mathrm{BN}\Big(X_1 \oplus \sum_{i=1}^{4}\mathrm{Conv3D}_i(X_1)\Big)\Big),$$
where $\mathrm{ReLU}$ represents the activation function, $\mathrm{BN}$ represents the BatchNorm operation, $\oplus$ represents the residual connection, and $i$ indexes the 3D convolutions of different scales.
In the second stage $P_2$, we first feed the first-stage output $X_{P_1}$ into a 3D convolution layer with a ReLU operation, whose convolution kernel size is (9, 3, 3), to further extract the spatial–spectral characteristics. The output features are passed through two successive multi-scale 3D convolution layers $M_2$ and $M_3$ with feature fusion and residual connection operations to extract deeper spectral features. Then, we perform BatchNorm and ReLU operations to obtain the output $X_{P_2}$. In the third stage $P_3$, an activation function and a 3D pointwise convolution are used to further process the second-stage output $X_{P_2}$. Finally, the MSCP module outputs the final representation $X_{P_3} \in \mathbb{R}^{H \times W \times D}$ as the extracted features. The second and third stages are formulated as follows:
$$X_{P_2} = M_3\big(M_2(\mathrm{ReLU}(\mathrm{Conv3D}(X_{P_1})))\big), \qquad X_{P_3} = F_{\mathrm{rescale}}\big(\mathrm{GELU}(\mathrm{Conv3D}(X_{P_2}))\big).$$
Overall, the proposed approach employs multiple multi-scale 3D convolution layers to extract fused spatial–spectral feature information at multiple scales, as well as shallow local spatial–spectral dependencies. To mitigate the issue of gradient vanishing, residual connections are used in multiple locations. The extracted fused spatial–spectral information provides an excellent feature representation for the subsequent encoder sequence.
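For illustration, the following condensed PyTorch sketch follows the MSCP pattern described above. The (11, 3, 3) and (9, 3, 3) stage-entry kernels come from the text; the four branch kernel sizes, the channel width, and the final rescaling of the spectral axis to an $H \times W \times D$ embedding (implemented here with a linear layer) are assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScale3D(nn.Module):
    """Four parallel 3D convolutions at different spectral scales (the M_i layers),
    fused by addition, followed by a residual connection, BatchNorm, and ReLU."""
    def __init__(self, channels, spectral_kernels=(3, 5, 7, 9)):  # branch sizes are illustrative
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=(k, 3, 3), padding=(k // 2, 1, 1))
            for k in spectral_kernels
        ])
        self.bn = nn.BatchNorm3d(channels)

    def forward(self, x):
        fused = sum(branch(x) for branch in self.branches)  # multi-scale fusion by addition
        return torch.relu(self.bn(x + fused))               # residual connection, BN, ReLU


class MSCPSketch(nn.Module):
    def __init__(self, channels=8, bands=103, embed_dim=64):
        super().__init__()
        self.p1_conv = nn.Conv3d(1, channels, (11, 3, 3), padding=(5, 1, 1))        # stage P1 entry
        self.m1 = MultiScale3D(channels)
        self.p2_conv = nn.Conv3d(channels, channels, (9, 3, 3), padding=(4, 1, 1))  # stage P2 entry
        self.m2, self.m3 = MultiScale3D(channels), MultiScale3D(channels)
        self.p3_conv = nn.Conv3d(channels, 1, kernel_size=1)   # stage P3: pointwise 3D convolution
        self.rescale = nn.Linear(bands, embed_dim)             # assumed realization of F_rescale

    def forward(self, x):                                      # x: (B, 1, S, H, W)
        x = self.m1(torch.relu(self.p1_conv(x)))               # stage P1
        x = self.m3(self.m2(torch.relu(self.p2_conv(x))))      # stage P2
        x = F.gelu(self.p3_conv(x)).squeeze(1)                 # stage P3 -> (B, S, H, W)
        return self.rescale(x.permute(0, 2, 3, 1))             # (B, H, W, D) token map X_P3


print(MSCPSketch()(torch.randn(2, 1, 103, 15, 15)).shape)      # torch.Size([2, 15, 15, 64])
```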

3.3. Encoder Sequence

3.3.1. Encoder

As shown in Figure 1, the encoder consists of two modules: the SSAM and FFN modules. SSAM encodes features along the height, width, and spectral dimensions to extract local and global spatial–spectral features. The FFN consists of linear layers with the GELU activation function and is used to transform the features and extract deeper features. The encoder adds LayerNorm and residual connection operations to alleviate overfitting and gradient vanishing and to cooperate more effectively with the two modules above for feature extraction. Given an input embedding $X_{P_3} \in \mathbb{R}^{H \times W \times D}$, the encoding process is formulated as follows:
$$Y = X_{P_3} \oplus \mathrm{SSAM}(\mathrm{LayerNorm}(X_{P_3})), \qquad Z = Y \oplus \mathrm{FFN}(\mathrm{LayerNorm}(Y)),$$
where $\oplus$ represents the residual connection, $Y$ represents the residual sum of $X_{P_3}$ and the output of SSAM, and $Z$ represents the FFN module’s output. In general, SS-TMNet has four stages, and each stage consists of an encoder sequence composed of a different number of encoders. The implementation details of SSAM are introduced in the next section.
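As a structural illustration, the sketch below implements one such pre-norm encoder block; the FFN is assumed to be a two-layer MLP with GELU, and the SSAM module is passed in (an nn.Identity placeholder is used here only to check tensor shapes).

```python
import torch
import torch.nn as nn


class EncoderBlock(nn.Module):
    def __init__(self, dim, ssam: nn.Module, mlp_ratio=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ssam = ssam
        self.ffn = nn.Sequential(                      # assumed two-layer FFN with GELU
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(), nn.Linear(dim * mlp_ratio, dim)
        )

    def forward(self, x):                              # x: (B, H, W, D) token map
        y = x + self.ssam(self.norm1(x))               # Y = X ⊕ SSAM(LayerNorm(X))
        return y + self.ffn(self.norm2(y))             # Z = Y ⊕ FFN(LayerNorm(Y))


block = EncoderBlock(dim=64, ssam=nn.Identity())       # placeholder SSAM for shape checking
print(block(torch.randn(2, 15, 15, 64)).shape)         # torch.Size([2, 15, 15, 64])
```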

3.3.2. SSAM Module

Figure 3 shows the structure of the SSAM, which encodes the input features along the height, width, and spectral dimensions to extract local and global spatial–spectral features more effectively. We feed $X_{in}$ ($X_{P_3}$ after the layer normalization operation) into three branches for height-spatial coding, width-spatial coding, and spectral coding.
In the height branch $L_H$, we apply a depthwise convolution layer with a kernel size of (1, 3) to $X_{in}$ to obtain local height spatial features, which are fed into the height spatial attention (HSA) module to compute spatial self-attention and obtain the globally dependent $X_H$. In the width branch $L_W$, we employ a depthwise convolution layer with a kernel size of (3, 1) to process $X_{in}$ and obtain local width spatial characteristics. The width spatial attention (WSA) module is then used to derive the globally dependent $X_W$ from the local width spatial characteristics. In the spectral branch $L_S$, local spectral information is captured from $X_{in}$ using a pointwise convolution layer with a kernel size of (1, 1). Then, the spectral attention (SA) module is used to obtain the globally dependent $X_S$.
Then, three local residual connections linking the input $X_{in}$ with the outputs of the height spatial attention, width spatial attention, and spectral attention are added to alleviate gradient vanishing. It is worth noting that three learnable parameters $\gamma_h$, $\gamma_w$, and $\gamma_s$ are used to adjust the contribution of each branch. Finally, we fuse the feature information of the three branches with an addition operation and a linear projection, and add a global residual connection to $X_{in}$ to obtain the final output $X_{out} \in \mathbb{R}^{H \times W \times D}$. The calculation of SSAM is formulated as follows:
$$\begin{aligned} X_H &= L_H(X_{in}) = \gamma_h \times \mathrm{HSA}(\mathrm{DepthConv}(X_{in})) \oplus X_{in},\\ X_W &= L_W(X_{in}) = \gamma_w \times \mathrm{WSA}(\mathrm{DepthConv}(X_{in})) \oplus X_{in},\\ X_S &= L_S(X_{in}) = \gamma_s \times \mathrm{SA}(\mathrm{PointConv}(X_{in})) \oplus X_{in},\\ X_{out} &= F(X_H + X_W + X_S) \oplus X_{in}, \end{aligned}$$
where $\gamma_h$, $\gamma_w$, and $\gamma_s$ represent the learnable parameters, $\mathrm{DepthConv}$ represents a depthwise convolution layer, $\mathrm{PointConv}$ represents a pointwise convolution layer, $\oplus$ represents the residual connection, and $F$ denotes the linear projection. Next, we detail the spatial and spectral attention modules.
As shown in Figure 4, we introduce spatial attention for feature extraction to establish rich spatial feature dependencies. We first reshape $X_{in} \in \mathbb{R}^{H \times W \times D}$ to $X_{re} \in \mathbb{R}^{(H \times W) \times D}$, and then send it to three parallel linear layers for feature mapping to obtain the outputs $\{Q, K, V\} \in \mathbb{R}^{N \times D}$, where $N = H \times W$. The concrete procedure of spatial attention can be formulated as follows:
$$X_{out} = \mathrm{Linear}\left(\mathrm{Transpose}\left(\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right) \otimes V\right)\right),$$
where $\otimes$ denotes matrix multiplication and $d$ is the scale factor. Finally, a linear layer maps the features, and the result is reshaped to obtain the final output $X_{out} \in \mathbb{R}^{H \times W \times D}$. Our spectral attention is similar to the spatial attention; to simplify the calculation, the spectral attention discards the initial linear projection layer and uses the input features directly to calculate self-attention.
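The sketch below illustrates this spatial attention with a single head; the exact head count and layer widths are not specified in the text and are assumed here.

```python
import torch
import torch.nn as nn


class SpatialAttentionSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)          # final linear mapping
        self.scale = dim ** -0.5                # scale factor 1 / sqrt(d)

    def forward(self, x):                       # x: (B, H, W, D)
        b, h, w, d = x.shape
        t = x.reshape(b, h * w, d)              # N = H * W spatial tokens
        q, k, v = self.q(t), self.k(t), self.v(t)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # softmax(QK^T / sqrt(d))
        return self.out(attn @ v).reshape(b, h, w, d)


print(SpatialAttentionSketch(64)(torch.randn(2, 15, 15, 64)).shape)  # torch.Size([2, 15, 15, 64])
```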
In summary, the SSAM module uses depthwise convolution and pointwise convolution to map features along the height, width, and spectral dimensions, respectively, and further extracts features using spatial and spectral attention. To extract long-range dependencies of both spatial and spectral features, we utilize spatial and spectral self-attention mechanisms. Specifically, our SSAM module integrates convolution with self-attention mechanisms, extracting features from three dimensions and fusing them to obtain feature representations with both global and local dependencies.
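The following sketch shows how the three branches could be wired together under the kernel sizes stated above. Treating the feature dimension $D$ as the channel dimension for the depthwise and pointwise convolutions is an assumption, and the HSA/WSA/SA modules are injected (the spatial attention sketch above could serve as HSA or WSA).

```python
import torch
import torch.nn as nn


class SSAMSketch(nn.Module):
    def __init__(self, dim, hsa=None, wsa=None, sa=None):
        super().__init__()
        self.h_conv = nn.Conv2d(dim, dim, (1, 3), padding=(0, 1), groups=dim)  # depthwise, kernel (1, 3)
        self.w_conv = nn.Conv2d(dim, dim, (3, 1), padding=(1, 0), groups=dim)  # depthwise, kernel (3, 1)
        self.s_conv = nn.Conv2d(dim, dim, 1)                                   # pointwise, kernel (1, 1)
        self.hsa, self.wsa, self.sa = hsa or nn.Identity(), wsa or nn.Identity(), sa or nn.Identity()
        self.gamma = nn.Parameter(torch.ones(3))     # learnable gamma_h, gamma_w, gamma_s
        self.proj = nn.Linear(dim, dim)              # final linear projection F

    def forward(self, x):                            # x: (B, H, W, D)
        c = x.permute(0, 3, 1, 2)                    # (B, D, H, W) for the convolutions
        lh = self.h_conv(c).permute(0, 2, 3, 1)      # local height features, back to (B, H, W, D)
        lw = self.w_conv(c).permute(0, 2, 3, 1)      # local width features
        ls = self.s_conv(c).permute(0, 2, 3, 1)      # local spectral features
        xh = self.gamma[0] * self.hsa(lh) + x        # branch outputs with local residuals
        xw = self.gamma[1] * self.wsa(lw) + x
        xs = self.gamma[2] * self.sa(ls) + x
        return self.proj(xh + xw + xs) + x           # fuse branches, add the global residual


print(SSAMSketch(64)(torch.randn(2, 15, 15, 64)).shape)  # torch.Size([2, 15, 15, 64])
```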

4. Experiments

This section introduces the HSI datasets used in the experiments, including the Pavia University, Indian Pines, and Houston2013 datasets. In addition, we introduce the parameter settings, evaluation metrics, and comparison models in experimental settings. Then, we show and analyze the results. Finally, the ablation experiment and model performance analysis are introduced.

4.1. Datasets

4.1.1. Pavia University Dataset

This dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over the University of Pavia, Italy. The spatial size of the hyperspectral image is 610 × 340 pixels, and the spectral bands range from 0.43 to 0.86 µm, with a total of 103 bands after excluding 12 water absorption bands. The dataset has 9 classification categories and is shown in Figure 5.

4.1.2. Indian Pines Dataset

This dataset was collected in 1992 by the AVIRIS sensor over the Indian Pines test site in northwestern Indiana, USA. The spatial size of the hyperspectral image is 145 × 145 pixels, and the spectral bands range from 0.4 µm to 2.5 µm. The total number of spectral bands is 200 after excluding 20 water absorption bands. The available ground truth comprises 16 classes. The dataset is shown in Figure 6.

4.1.3. Houston2013 Dataset

This dataset was captured by the CASI-1500 sensor over the University of Houston and its surroundings in Texas, USA. The spatial size of the image in the dataset is 949 × 1905 pixels, and the spectral dimension includes 144 bands. The dataset has 15 classification categories. The dataset is shown in Figure 7.

4.2. Experimental Setup

4.2.1. Parameters Setting

For each of the three datasets, 10% of the samples were used for training and the rest for testing. It is noteworthy that the training and testing samples were selected at random. To ensure the fairness of the comparative trials, we ran all the comparison models ten times and recorded the results as mean ± standard deviation to compare the performance of the different models. The proposed SS-TMNet and the compared methods were implemented on an NVIDIA RTX 3080Ti GPU machine with the PyTorch [54] platform. We used the Adam optimizer for gradient descent and set the initial learning rate to 1 × 10−4. The mini-batch size was set to 32, and the number of epochs on these three benchmark datasets was set to 200.
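For reference, the sketch below reproduces this training configuration in PyTorch; the model and dataset objects are assumed to be supplied by the caller and are not specified in the text.

```python
import torch
from torch.utils.data import DataLoader, Dataset


def train(model: torch.nn.Module, train_set: Dataset, epochs: int = 200):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # Adam, initial learning rate 1e-4
    criterion = torch.nn.CrossEntropyLoss()
    loader = DataLoader(train_set, batch_size=32, shuffle=True)  # mini-batch size 32
    for _ in range(epochs):                                      # 200 epochs per dataset
        for patches, labels in loader:                           # 15 x 15 HSI patches, centre-pixel labels
            patches, labels = patches.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(patches), labels)
            loss.backward()
            optimizer.step()
    return model
```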

4.2.2. Evaluation Metrics

The overall accuracy (OA) and the Kappa coefficient (K) were chosen to evaluate the results produced by the different models in the experiments. The OA is the average accuracy over the categories, and the Kappa coefficient measures whether the classification results are consistent with the actual underlying categories. The evaluation criteria are calculated as follows:
$$\mathrm{OA} = \frac{1}{n}\sum_{k}\left(\frac{TP + TN}{TP + TN + FN + FP}\right)_{k}, \qquad K = \frac{N\sum_{i=1}^{n} x_{ii} - \sum_{i=1}^{n} x_{i+} \times x_{+i}}{N^{2} - \sum_{i=1}^{n} x_{i+} \times x_{+i}},$$
where $TP$ represents the true positive count, $TN$ the true negative count, $FP$ the false positive count, and $FN$ the false negative count; $n$ is the number of categories and $N$ is the total number of data samples; $x_{ii}$ denotes the values on the diagonal of the confusion matrix, and $x_{i+}$ and $x_{+i}$ denote the totals of row $i$ and column $i$ of the confusion matrix, respectively.
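As a reference implementation of these formulas, the sketch below computes both metrics from a confusion matrix; the example matrix is purely illustrative.

```python
import numpy as np


def oa_kappa(conf):
    """conf[i, j]: number of samples of class i predicted as class j."""
    n_classes, total = conf.shape[0], conf.sum()
    # OA as given above: the mean over classes of (TP + TN) / (TP + TN + FN + FP).
    per_class = []
    for k in range(n_classes):
        tp = conf[k, k]
        fn = conf[k, :].sum() - tp
        fp = conf[:, k].sum() - tp
        tn = total - tp - fn - fp
        per_class.append((tp + tn) / total)
    oa = float(np.mean(per_class))
    # Kappa: (N * sum(x_ii) - sum(x_i+ * x_+i)) / (N^2 - sum(x_i+ * x_+i)).
    chance = (conf.sum(axis=1) * conf.sum(axis=0)).sum()
    kappa = float((total * np.trace(conf) - chance) / (total ** 2 - chance))
    return oa, kappa


conf = np.array([[50, 2, 1], [3, 45, 2], [0, 4, 43]])  # illustrative 3-class confusion matrix
print(oa_kappa(conf))
```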

4.2.3. Baselines

To validate the proposed SS-TMNet method, several representative baselines and the most advanced backbone methods are chosen for comparison, including RNN-based methods (such as Mou [16]), CNN-based methods (such as He [21], 3D-CNN [55], and HybridSN [56]), and Transformer-based methods (such as ViT [18], CrossViT [51], LeViT [50], RvT [57], and HiT [53]). A more detailed description is as follows:
  • Mou [16]: An RNN-based method, which uses a recurrent layer containing multiple gated recurrent units. In addition, a fully connected layer and a softmax layer are used to construct the network.
  • He [21]: A 3D-CNN-based method is composed of 3D convolution layers and multi-scale 3D convolution layers. Each multi-scale 3D convolution layer consists of four sublayers.
  • 3D-CNN [55]: Another 3D-CNN method, which includes three convolution blocks and two fully connected layers. Each convolution block includes a 3D convolution layer, a BatchNorm layer, and an average pooling layer.
  • HybridSN [56]: A method integrating 2D and 3D convolution, including three 3D convolution layers, one 2D convolution layer, and two fully connected layers.
  • ViT [18]: A classic Transformer-based method, which first splits the input image into 16 × 16 patches and then feeds them into the Transformer encoder to learn the image representation.
  • CrossViT [51]: A method based on dual-branch ViT architecture, where each branch contains a linear projection layer and a different number of Transformer encoders for processing different sized image patches.
  • LeViT [50]: Another Transformer-based method, which includes four convolution layers and three encoding stages, each containing four attention layers. We reimplemented this architecture for HSI classification.
  • RvT [57]: Based on ViT, the RvT method uses a pooling layer to downsample the feature maps and reduce their size. We follow this architecture to design the network for the HSI classification task.
  • HiT [53]: A method of embedding convolution into Transformer, which uses two proposed SACP layers based on 3D convolution to process the input image. Feature extraction is performed using a three-branch convolution layer based on transformer architecture.

4.3. Results and Analysis

This section will elaborate the experimental results and analysis, including results comparison and visualization of three datasets: Pavia University, Indian Pines, and Houston2013.

4.3.1. Experimental Analysis on Pavia University Dataset

Table 1 shows the experimental results produced by the different comparison models with respect to the OA and Kappa metrics on the Pavia University dataset. The table shows that our proposed SS-TMNet is superior to all comparison methods, with OA and Kappa reaching 91.74% and 89.44%. Its OA was 0.6%, 0.3%, and 0.16% higher than the RNN-based method Mou [16], the CNN-based method HybridSN, and the Transformer-based method LeViT, respectively. The possible reason is that SS-TMNet can more effectively capture local and global dependencies. Among all the mentioned methods, the original ViT method performed the worst, with 88.92% OA and 85.81% Kappa, which indicates that it is difficult for the original ViT network to perform the hyperspectral classification task; the reason may be that ViT lacks effective modeling capabilities for spatial characteristics. The methods using only 3D convolution, such as He [21] and 3D-CNN, which obtained 89.97% and 90.72% OA, respectively, did not perform well, since these methods focus only on spatial characteristics and spectral correlation is not fully considered. The Transformer-based methods LeViT and HiT achieved OA values of 91.58% and 91.28%, respectively. They are developed on the basis of ViT and perform better than the ViT-only and 3D-convolution-only methods, which demonstrates that combining convolution and Transformer networks can improve classification results.
The visualization results produced by the comparison methods are shown in Figure 8. As shown in the red rectangle in the figure, most methods produce much noise in the classification maps compared to our SS-TMNet method. It is worth noting that although the classification maps of the HybridSN and LeViT methods are similar to ours, they still contain a small amount of noise, and the evaluation metrics in Table 1 show that our method still performs better. The possible reason is that, compared with HybridSN and LeViT, SS-TMNet learns fused local spatial–spectral features through the proposed MSCP module and more effective local and global feature representations through the SSAM module. The visualization proves that our proposed method can produce better results than most existing methods.

4.3.2. Experimental Analysis on Indian Pines Dataset

Table 2 shows the evaluation results of our proposed and the compared models on the Indian Pines dataset. Our proposed SS-TMNet shows the best results, with OA and Kappa reaching 84.67% and 82.66%. The OA metric of our proposed method is 9.40% higher than the RNN-based method (i.e., Mou), 12.08% higher than the CNN-based method (i.e., 3D-CNN), and 1.04% higher than the Transformer-based method (i.e., LeViT). One possible reason is that we have improved the encoding of feature projections and spatial–spectral features to enable more efficient feature encoding.
Our method differs from the existing methods (i.e., CrossViT, LeViT, RvT, and HiT). We use MSCP to capture the spatial–spectral dependence of the fused multi-scale features, while the SSAM captures the local and global spatial–spectral information of the multidimensional data. Thus, our proposed model can more effectively model HSIs in terms of spatial–spectral dependence and local–global features. Figure 9 shows the visualization results on this dataset, where our proposed model produces the classification map with the least noise and achieves satisfactory results. For example, as shown in the red rectangle in the figure, compared with the other methods, the SS-TMNet method generates the least noise in the classification map. The reason HiT does not perform as well as our proposed method may be its ineffective integration of convolution into Transformer, leading to a lack of effective modeling of global feature dependencies. In terms of the overall effect, our proposed method produces a classification map closer to the ground truth image than other methods, which proves the validity of our proposed method.

4.3.3. Experimental Analysis on Houston2013 Dataset

The experimental results of our proposed SS-TMNet and the compared models on the Houston2013 dataset are shown in Table 3. We can see that our model works best, with OA and Kappa reaching 96.22% and 96.22%, respectively. In addition, the standard deviation of our model is the smallest, only 0.12, indicating the stability of our model. It is worth noting that LeViT performs much worse on this dataset than on the other two datasets with respect to OA and Kappa, only 87.36% and 86.34%, which indicates that the generalization capability of the LeViT model is relatively weak. Our model performs well on all three datasets, possibly because our SSAM models both local and global features effectively from three dimensions.
Figure 10 shows the visualization results of the experiment. To make the differences at the pixel level clearer, we crop local details of the classification maps. As shown in the red rectangle in the figure, the classification map generated by our method is less noisy than those of the comparison methods and closer to the ground truth image, which shows the superiority of our method. Other methods, such as HybridSN, may not perform well because they only combine 3D convolution and 2D convolution; although this models local spatial characteristics well, it cannot capture long-range spectral dependencies. As for the CrossViT method, it only uses Transformer to build the network without considering the effect of convolution on the classification results, which may result in unsatisfactory performance.

4.3.4. Student’s t-Test

We conducted a Student's t-test between our proposed method and the compared methods using ten randomized initializations. We collected the OA results produced by ten randomized experiments on the Pavia University, Indian Pines, and Houston2013 datasets using SS-TMNet and the other comparative methods. The Student's t-test was employed to compute the p-value between our proposed method and the existing methods. When the p-value is greater than 0.05, there is no significant difference between the two models; when the p-value is less than 0.05, the results of the two models are significantly different.
To make it easier to observe data differences, our experimental data are represented by scientific notation. As shown in Table 4, the p-value between the SS-TMNet and all the compared methods is less than 0.05 on the three datasets, which shows that our SS-TMNet method has significant advantages over other methods. For instance, on the Pavia University dataset, the p-values between the SS-TMNet method and HybridSN and HiT methods are 1.40 × 10−2 and 2.26 × 10−5, respectively, which are less than 0.05, showing significant differences between methods.
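A sketch of this significance test is shown below: ten OA scores per method compared with a two-sample Student's t-test via SciPy. The score arrays are synthetic placeholders, not the reported results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
oa_ours = 91.7 + 0.1 * rng.standard_normal(10)       # placeholder: 10 runs of the proposed method
oa_baseline = 91.5 + 0.2 * rng.standard_normal(10)   # placeholder: 10 runs of a compared method

t_stat, p_value = stats.ttest_ind(oa_ours, oa_baseline)
print(f"p-value = {p_value:.2e}")                    # p < 0.05 indicates a significant difference
```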

4.4. Ablation Studies

We performed ablation experiments on the main components of the SS-TMNet model. The results and analysis of the ablation experiments for the proposed MSCP and SSAM modules are described in the following two sections. The results in Table 5 and Table 6 are the averages of ten experiments.

4.4.1. The Effectiveness of the MSCP Module

In order to verify the effectiveness of our proposed MSCP module, we used different projection methods (such as Linear, Conv2D, SACP [53], and MSCP) to project the image features without changing the subsequent module and network structure. Furthermore, SACP is the feature projection module in the HiT method. As shown in Table 5, we chose ViT as the baseline method. The experimental results show that our MSCP+SSAM showed the best performance (91.74% in OA and 89.44% in Kappa). The Mean and Std columns in the table represent the mean and standard deviation differences between our proposed SS-TMNet (MSCP+SSAM) and the comparison method. We can see that our presented method had the highest mean and lowest standard deviation, which shows that the MSCP module is more effective than the other feature extraction modules.
Table 5. Ablation study of the proposed MSCP on the Pavia University dataset (Bold numbers represent the best results).
Methods              OA (%)          Mean (−)   Std (+)   Kappa (%)       Mean (−)   Std (+)
ViT                  88.92 ± 0.31    −2.82%     +0.19%    85.81 ± 0.40    −3.63%     +0.24%
Linear + SSAM        90.58 ± 0.24    −1.16%     +0.12%    87.95 ± 0.30    −1.49%     +0.14%
Conv2D + SSAM        91.53 ± 0.12    −0.21%     +0.00%    89.18 ± 0.16    −0.26%     +0.00%
SACP [53] + SSAM     91.59 ± 0.19    −0.15%     +0.03%    89.25 ± 0.24    −0.19%     +0.08%
MSCP + SSAM          91.74 ± 0.12    0%         0%        89.44 ± 0.16    0%         0%

4.4.2. The Effectiveness of the SSAM Module

In order to verify the effectiveness of the SSAM module, we took the ViT method as the baseline and set up four groups of comparison experiments. In these experiments, the SSAM module in our proposed method is replaced by a single linear layer (Linear), the convolution permutator module (ConvPermute) of the HiT method, and the ViP [58] method (ViP), respectively. Table 6 shows that our MSCP+SSAM had the best performance (91.74 ± 0.12 in OA and 89.44 ± 0.16 in Kappa). Compared with the substituted ConvPermute and ViP modules, our proposed method was 0.45% and 0.37% higher in the OA metric, respectively, which shows the effectiveness of our SSAM module for improving network performance.
Table 6. Ablation study of the proposed SSAM on the Pavia University dataset (Bold numbers represent the best results).
Methods                    OA (%)          Mean (−)   Std (+)   Kappa (%)       Mean (−)   Std (+)
ViT                        88.92 ± 0.31    −2.82%     +0.19%    85.81 ± 0.40    −3.63%     +0.24%
MSCP + Linear              90.15 ± 0.30    −1.59%     +0.18%    87.40 ± 0.38    −2.04%     +0.22%
MSCP + ConvPermute [53]    91.29 ± 0.17    −0.45%     +0.05%    88.86 ± 0.22    −0.58%     +0.06%
MSCP + ViP [58]            91.37 ± 0.40    −0.37%     +0.28%    88.97 ± 0.52    −0.47%     +0.36%
MSCP + SSAM                91.74 ± 0.12    0%         0%        89.44 ± 0.16    0%         0%

4.5. Scalability

Due to the scarcity of hyperspectral image data, it is meaningful to study the influence of the number of training samples on the classification method. We varied the training samples from 10% to 50% on the Houston2013 dataset to study scalability. Each model was run ten times, and the average value was taken as the final result. Table 7 reports the average OA of the proposed SS-TMNet and the compared models. We can see that as the training samples increase from 10% to 50%, the performance gradually improves, and our model always shows excellent results and high stability. It is worth noting that the experimental results of LeViT are slightly higher than those of our proposed model when the training samples are 40% and 50%. However, LeViT performs poorly when the training samples are few, indicating its instability.
Moreover, to study how the results of our SS-TMNet method vary with the number of training samples across datasets, we tested SS-TMNet on all three datasets. This experiment also adopted the average of ten results as the final result. The visualization of the OA metric is shown in Figure 11. With the increase of training samples, the OA gradually increases and eventually stabilizes, which effectively proves the proposed method's stability.

5. Conclusions

This work presents a novel Transformer-based HSI classification method (SS-TMNet) to improve HSI classification, which can fully use the spatial–spectral information in HSI data. SS-TMNet includes two key modules: the MSCP module and the SSAM module. The MSCP module uses multi-scale 3D convolution to extract the fused spatial–spectral features. The SSAM module extracts features along the height, width, and spectral dimensions, which can more effectively obtain local and global feature information. We compared our proposed method with the most advanced Transformer-based and CNN-based methods on three benchmark HSI datasets. Experimental results show that our SS-TMNet method achieves the best overall accuracy on all three datasets.
In future work, we plan to study more efficient Transformer-based HSI classification methods by embedding convolutional neural networks into Transformer more effectively. To address the scarcity of labeled HSI data, we plan to study transfer learning and self-supervised learning based on SS-TMNet to improve classification performance with limited training samples.

Author Contributions

Conceptualization, review and editing, X.H.; original draft preparation and revision, Y.Z.; methodology and revision, X.Y.; data curation and revision, X.Z. and K.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 62062033.

Data Availability Statement

The Pavia University and Indian Pines datasets used in our work are publicly available. The Pavia University dataset is available at: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Pavia_University_scene, accessed on 15 September 2022. The Indian Pines dataset is available at: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Indian_Pines, accessed on 15 September 2022. The Houston2013 dataset is provided by the IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest: https://hyperspectral.ee.uh.edu/?page_id=459, accessed on 15 September 2022, and can be obtained through this contest.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef] [Green Version]
  2. Zhan, T.; Song, B.; Sun, L.; Jia, X.; Wan, M.; Yang, G.; Wu, Z. TDSSC: A three-directions spectral–spatial convolution neural network for hyperspectral image change detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 377–388. [Google Scholar] [CrossRef]
  3. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral image classification—Traditional to deep models: A survey for future prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 15, 968–999. [Google Scholar] [CrossRef]
  4. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  5. Samaniego, L.; Bárdossy, A.; Schulz, K. Supervised classification of remotely sensed imagery using a modified k-NN technique. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2112–2125. [Google Scholar] [CrossRef]
  6. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4085–4098. [Google Scholar] [CrossRef] [Green Version]
  7. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
  8. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  9. Algan, G.; Ulusoy, I. Image classification with deep learning in the presence of noisy labels: A survey. Knowl.-Based Syst. 2021, 215, 106771. [Google Scholar] [CrossRef]
  10. Touvron, H.; Bojanowski, P.; Caron, M.; Cord, M.; El-Nouby, A.; Grave, E.; Izacard, G.; Joulin, A.; Synnaeve, G.; Verbeek, J.; et al. Resmlp: Feedforward networks for image classification with data-efficient training. IEEE Trans. Pattern Anal. Mach. Intell. 2022. [Google Scholar] [CrossRef]
  11. Zhao, Z.Q.; Zheng, P.; Xu, S.t.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [Green Version]
  12. Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
  13. Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral image classification with deep feature fusion network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184. [Google Scholar] [CrossRef]
  14. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  15. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  16. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
  17. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008. [Google Scholar]
  18. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. In Proceedings of the 9th International Conference on Learning Representations, Virtual, 3–7 May 2021. [Google Scholar]
  19. He, J.; Zhao, L.; Yang, H.; Zhang, M.; Li, W. HSI-BERT: Hyperspectral image classification using the bidirectional encoder representation from transformers. IEEE Trans. Geosci. Remote Sens. 2019, 58, 165–178. [Google Scholar] [CrossRef]
  20. Hao, J.; Dong, F.; Li, Y.; Wang, S.; Cui, J.; Zhang, Z.; Wu, K. Investigation of the data fusion of spectral and textural data from hyperspectral imaging for the near geographical origin discrimination of wolfberries using 2D-CNN algorithms. Infrared Phys. Technol. 2022, 125, 104286. [Google Scholar] [CrossRef]
  21. He, M.; Li, B.; Chen, H. Multi-Scale 3D Deep Convolutional Neural Network for Hyperspectral Image Classification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3904–3908. [Google Scholar]
  22. Fang, B.; Liu, Y.; Zhang, H.; He, J. Hyperspectral Image Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning. Remote Sens. 2022, 14, 1711. [Google Scholar] [CrossRef]
  23. Chang, Y.L.; Tan, T.H.; Lee, W.H.; Chang, L.; Chen, Y.N.; Fan, K.C.; Alkhaleefah, M. Consolidated Convolutional Neural Network for Hyperspectral Image Classification. Remote Sens. 2022, 14, 1571. [Google Scholar] [CrossRef]
24. Zhou, D.; Kang, B.; Jin, X.; Yang, L.; Lian, X.; Jiang, Z.; Hou, Q.; Feng, J. DeepViT: Towards Deeper Vision Transformer. arXiv 2021, arXiv:2103.11886.
25. He, X.; Chen, Y.; Lin, Z. Spatial-Spectral Transformer for Hyperspectral Image Classification. Remote Sens. 2021, 13, 498.
26. Yu, D.; Li, Q.; Wang, X.; Zhang, Z.; Qian, Y.; Xu, C. DSTrans: Dual-Stream Transformer for Hyperspectral Image Restoration. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 3739–3749.
27. Li, J.; Xing, H.; Ao, Z.; Wang, H.; Liu, W.; Zhang, A. Convolution-Transformer Adaptive Fusion Network for Hyperspectral Image Classification. Appl. Sci. 2023, 13, 492.
28. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral-Spatial Feature Tokenization Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214.
29. Wang, Y.; Jiang, S.; Xu, M.; Zhang, S.; Jia, S. A Center-Masked Convolutional Transformer for Hyperspectral Image Classification. In Proceedings of the 31st International Joint Conference on Artificial Intelligence, Vienna, Austria, 23–29 July 2022; Volume 3207, pp. 1–6.
30. Zhang, Y.; Wang, X.; Jiang, X.; Zhou, Y. Marginalized graph self-representation for unsupervised hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5516712.
31. Ding, Y.; Zhang, Z.; Zhao, X.; Hong, D.; Cai, W.; Yu, C.; Yang, N.; Cai, W. Multi-feature fusion: Graph neural network and CNN combining for hyperspectral image classification. Neurocomputing 2022, 501, 246–257.
32. Zhang, Z.; Ding, Y.; Zhao, X.; Siye, L.; Yang, N.; Cai, Y.; Zhan, Y. Multireceptive field: An adaptive path aggregation graph neural framework for hyperspectral image classification. Expert Syst. Appl. 2023, 217, 119508.
33. Zhang, Y.; Wang, Y.; Chen, X.; Jiang, X.; Zhou, Y. Spectral–Spatial Feature Extraction With Dual Graph Autoencoder for Hyperspectral Image Clustering. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 8500–8511.
34. Ding, Y.; Zhang, Z.; Zhao, X.; Hong, D.; Li, W.; Cai, W.; Zhan, Y. AF2GNN: Graph convolution with adaptive filters and aggregator fusion for hyperspectral image classification. Inf. Sci. 2022, 602, 201–219.
35. Ding, Y.; Zhang, Z.; Zhao, X.; Cai, W.; Yang, N.; Hu, H.; Huang, X.; Cao, Y.; Cai, W. Unsupervised self-correlated learning smoothy enhanced locality preserving graph convolution embedding clustering for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5536716.
36. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking hyperspectral image classification with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518615.
37. He, X.; Chen, Y.; Li, Q. Two-Branch Pure Transformer for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6015005.
38. Feng, J.; Luo, X.; Li, S.; Wang, Q.; Yin, J. Spectral Transformer with Dynamic Spatial Sampling and Gaussian Positional Embedding for Hyperspectral Image Classification. In Proceedings of the International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 3556–3559.
39. Ding, Y.; Zhang, Z.; Zhao, X.; Cai, Y.; Li, S.; Deng, B.; Cai, W. Self-supervised locality preserving low-pass graph convolutional embedding for large-scale hyperspectral image clustering. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5536016.
40. Rakotomamonjy, A.; Bach, F.; Canu, S.; Grandvalet, Y. SimpleMKL. J. Mach. Learn. Res. 2008, 9, 2491–2521.
41. Dalla Mura, M.; Atli Benediktsson, J.; Waske, B.; Bruzzone, L. Extended profiles with morphological attribute filters for the analysis of hyperspectral data. Int. J. Remote Sens. 2010, 31, 5975–5991.
42. Li, J.; Marpu, P.R.; Plaza, A.; Bioucas-Dias, J.M.; Benediktsson, J.A. Generalized Composite Kernel Framework for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4816–4829.
43. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images With Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873.
44. Villa, A.; Benediktsson, J.A.; Chanussot, J.; Jutten, C. Hyperspectral Image Classification With Independent Component Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876.
45. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear Versus Nonlinear PCA for the Classification of Hyperspectral Data Based on the Extended Morphological Profiles. IEEE Geosci. Remote Sens. Lett. 2011, 9, 447–451.
46. Zhang, Q.; Tian, Y.; Yang, Y.; Pan, C. Automatic spatial–spectral feature selection for hyperspectral image via discriminative sparse multimodal learning. IEEE Trans. Geosci. Remote Sens. 2014, 53, 261–279.
47. Jouni, M.; Dalla Mura, M.; Comon, P. Hyperspectral image classification based on mathematical morphology and tensor decomposition. Math. Morphol. Theory Appl. 2020, 4, 1–30.
48. Luo, F.; Huang, H.; Duan, Y.; Liu, J.; Liao, Y. Local geometric structure feature for dimensionality reduction of hyperspectral imagery. Remote Sens. 2017, 9, 790.
49. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619.
50. Graham, B.; El-Nouby, A.; Touvron, H.; Stock, P.; Joulin, A.; Jégou, H.; Douze, M. LeViT: A Vision Transformer in ConvNet’s Clothing for Faster Inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 12259–12269.
51. Chen, C.F.R.; Fan, Q.; Panda, R. CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 357–366.
52. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–14.
53. Yang, X.; Cao, W.; Lu, Y.; Zhou, Y. Hyperspectral Image Transformer Classification Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5528715.
54. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, Vancouver, BC, Canada, 8–14 December 2019; pp. 8024–8035.
55. Sharma, V.; Diba, A.; Tuytelaars, T.; Van Gool, L. Hyperspectral CNN for Image Classification & Band Selection, with Application to Face Recognition; Technical Report KUL/ESAT/PSI/1604, KU Leuven; ESAT: Leuven, Belgium, 2016.
56. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281.
57. Heo, B.; Yun, S.; Han, D.; Chun, S.; Choe, J.; Oh, S.J. Rethinking spatial dimensions of vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 11936–11945.
58. Hou, Q.; Jiang, Z.; Yuan, L.; Cheng, M.M.; Yan, S.; Feng, J. Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1328–1334.
Figure 1. The overall architecture of the proposed SS-TMNet. The MSCP is a multi-scale 3D convolution projection module that extracts fused multi-scale spatial–spectral information. The extracted features are fed into a four-stage encoder sequence, and a fully connected layer finally predicts the category.
Figure 2. The overall architecture of the proposed MSCP module.
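To make the idea behind MSCP concrete, the following PyTorch sketch runs an HSI patch through several 3D convolutions whose kernels differ only in spectral depth and concatenates the resulting feature maps. The channel width, kernel depths, and activation used here are illustrative assumptions, not the exact configuration of SS-TMNet.

import torch
import torch.nn as nn

class MultiScale3DConvSketch(nn.Module):
    """Illustrative multi-scale 3D convolution block (kernel depths are assumed)."""
    def __init__(self, out_channels=8, depths=(3, 5, 7)):
        super().__init__()
        # One 3D convolution per spectral kernel depth; padding preserves the input size.
        self.branches = nn.ModuleList([
            nn.Conv3d(1, out_channels, kernel_size=(d, 3, 3), padding=(d // 2, 1, 1))
            for d in depths
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, 1, bands, height, width) -- an HSI patch with a dummy channel axis.
        feats = [self.act(branch(x)) for branch in self.branches]
        return torch.cat(feats, dim=1)  # fuse the branches along the channel axis

patch = torch.randn(2, 1, 103, 9, 9)              # e.g., Pavia University has 103 bands
print(MultiScale3DConvSketch()(patch).shape)      # torch.Size([2, 24, 103, 9, 9])

Concatenating the branch outputs is only one way to fuse them; summation or a further 1 × 1 × 1 convolution would serve the same illustrative purpose.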
Figure 3. The architecture of the SSAM module.
Figure 4. The structure of the height and width spatial attention modules. ⊗ denotes the operation of matrix multiplication.
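As a rough illustration of the height branch in Figure 4 (the width branch is obtained by swapping the two spatial axes), the sketch below computes self-attention along the height axis only; the two matrix multiplications correspond to the ⊗ operations in the figure. The linear projections and scaling factor are assumptions and may differ from the exact SS-TMNet implementation.

import torch
import torch.nn as nn

class HeightAttentionSketch(nn.Module):
    """Minimal sketch: self-attention restricted to the height axis."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Linear(channels, channels)
        self.k = nn.Linear(channels, channels)
        self.v = nn.Linear(channels, channels)
        self.scale = channels ** -0.5

    def forward(self, x):
        # x: (batch, height, width, channels)
        b, h, w, c = x.shape
        # Treat each image column independently and attend across its height positions.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)                    # (b*w, h, c)
        q, k, v = self.q(cols), self.k(cols), self.v(cols)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)   # (b*w, h, h)
        out = attn @ v                                                       # attention-weighted values
        return out.reshape(b, w, h, c).permute(0, 2, 1, 3)                   # back to (b, h, w, c)

x = torch.randn(2, 9, 9, 64)
print(HeightAttentionSketch(64)(x).shape)  # torch.Size([2, 9, 9, 64])

A spectral-attention branch would follow the same pattern with the attention computed across the channel (band) axis instead of a spatial axis.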
Figure 5. The Pavia University dataset. (a) False-color composite image; (b) ground truth map; (c) label color bar.
Figure 6. The Indian Pines dataset. (a) False-color composite image; (b) ground truth map; (c) label color bar.
Figure 7. The Houston2013 dataset. (a) False-color composite image; (b) ground truth map; (c) label color bar.
Figure 8. Visualization of the experimental results on Pavia University dataset. (a) Original image, (b) Ground truth, (c) Mou, (d) He, (e) 3D-CNN, (f) HybridSN, (g) ViT, (h) CrossViT, (i) LeViT, (j) RvT, (k) HiT, (l) SS-TMNet(Ours).
Figure 9. Visualization of the experimental results based on Indian Pines dataset. (a) Original image, (b) Ground truth, (c) Mou, (d) He, (e) 3D-CNN, (f) HybridSN, (g) ViT, (h) CrossViT, (i) LeViT, (j) RvT, (k) HiT, (l) SS-TMNet(Ours).
Figure 10. Visualization of the experimental results based on Houston2013 dataset. (a) Original image, (b) Ground truth, (c) Mou, (d) He, (e) 3D-CNN, (f) HybridSN, (g) ViT, (h) CrossViT, (i) LeViT, (j) RvT, (k) HiT, (l) SS-TMNet(Ours).
Figure 11. The OA results of the proposed SS-TMNet on three datasets with a varying number of training samples.
Table 1. The comparative experimental results on the Pavia University dataset (bold numbers represent the best results for the corresponding category).
Class | Mou | He | 3D-CNN | HybridSN | ViT | CrossViT | LeViT | RvT | HiT | SS-TMNet
1 | 90.32 ± 0.41 | 93.34 ± 0.80 | 93.97 ± 0.74 | 95.30 ± 0.57 | 92.92 ± 0.58 | 94.67 ± 0.22 | 95.70 ± 0.37 | 94.63 ± 0.43 | 95.14 ± 0.28 | 96.11 ± 0.24
2 | 95.77 ± 0.14 | 92.11 ± 0.20 | 92.56 ± 0.10 | 92.65 ± 0.06 | 91.13 ± 0.21 | 92.13 ± 0.13 | 92.61 ± 0.10 | 91.95 ± 0.19 | 92.53 ± 0.08 | 92.67 ± 0.08
3 | 75.34 ± 0.69 | 84.99 ± 2.20 | 88.73 ± 1.39 | 90.68 ± 1.44 | 82.35 ± 1.41 | 87.63 ± 0.87 | 91.48 ± 1.04 | 87.41 ± 1.00 | 89.91 ± 1.29 | 92.35 ± 0.66
4 | 94.63 ± 0.46 | 97.08 ± 0.29 | 96.31 ± 0.40 | 97.30 ± 0.24 | 95.80 ± 0.47 | 96.86 ± 0.31 | 96.80 ± 0.26 | 96.92 ± 0.39 | 97.15 ± 0.17 | 96.46 ± 0.49
5 | 99.80 ± 0.15 | 99.77 ± 0.11 | 99.79 ± 0.16 | 99.93 ± 0.08 | 99.69 ± 0.21 | 99.89 ± 0.07 | 99.51 ± 0.77 | 99.94 ± 0.06 | 99.91 ± 0.07 | 99.66 ± 0.16
6 | 85.96 ± 0.53 | 97.66 ± 0.79 | 99.52 ± 0.29 | 99.77 ± 0.18 | 94.74 ± 0.86 | 98.26 ± 0.39 | 99.54 ± 0.14 | 97.61 ± 0.61 | 99.38 ± 0.23 | 99.91 ± 0.09
7 | 71.43 ± 2.52 | 91.48 ± 1.65 | 92.22 ± 1.68 | 96.51 ± 1.55 | 90.72 ± 1.34 | 95.05 ± 0.95 | 97.90 ± 1.08 | 95.92 ± 0.85 | 95.79 ± 1.50 | 99.05 ± 0.57
8 | 82.87 ± 0.67 | 94.39 ± 1.05 | 95.95 ± 1.26 | 97.25 ± 1.22 | 94.43 ± 0.58 | 96.69 ± 0.38 | 98.84 ± 0.29 | 96.44 ± 0.53 | 97.39 ± 0.57 | 98.31 ± 0.38
9 | 99.44 ± 0.20 | 98.97 ± 1.00 | 97.50 ± 1.63 | 99.61 ± 0.37 | 97.79 ± 0.96 | 99.74 ± 0.19 | 97.83 ± 2.12 | 99.83 ± 0.19 | 99.47 ± 0.25 | 98.02 ± 0.76
OA (%) | 91.14 ± 0.21 | 89.97 ± 0.36 | 90.72 ± 0.37 | 91.44 ± 0.28 | 88.92 ± 0.31 | 90.70 ± 0.13 | 91.58 ± 0.17 | 90.55 ± 0.20 | 91.28 ± 0.21 | 91.74 ± 0.12
K (%) | 88.19 ± 0.27 | 87.17 ± 0.47 | 88.13 ± 0.47 | 89.06 ± 0.35 | 85.81 ± 0.40 | 88.11 ± 0.17 | 89.24 ± 0.22 | 87.92 ± 0.25 | 88.85 ± 0.27 | 89.44 ± 0.16
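Tables 1–3 report per-class accuracy together with two summary metrics, overall accuracy (OA) and the Kappa coefficient (K). For reference, a minimal sketch of how OA and Kappa are typically computed from predicted and true test labels is shown below; the use of scikit-learn and the toy labels are assumed for illustration and are not necessarily how the reported numbers were obtained.

import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Toy labels standing in for per-pixel predictions on a test set.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])

oa = accuracy_score(y_true, y_pred) * 100        # overall accuracy, in percent
kappa = cohen_kappa_score(y_true, y_pred) * 100  # Cohen's kappa, in percent

print(f"OA = {oa:.2f}%, K = {kappa:.2f}%")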
Table 2. The comparative experimental results on the Indian Pines dataset (bold numbers represent the best results for the corresponding category).
Class | Mou | He | 3D-CNN | HybridSN | ViT | CrossViT | LeViT | RvT | HiT | SS-TMNet
1 | 31.28 ± 11.84 | 70.82 ± 13.59 | 49.33 ± 19.93 | 34.68 ± 26.05 | 50.11 ± 10.28 | 62.57 ± 11.25 | 65.77 ± 11.61 | 51.24 ± 14.29 | 80.64 ± 8.22 | 87.48 ± 8.15
2 | 72.76 ± 1.68 | 62.45 ± 10.18 | 68.47 ± 4.06 | 67.37 ± 20.83 | 65.46 ± 2.57 | 62.14 ± 8.14 | 88.59 ± 4.60 | 76.48 ± 3.17 | 86.18 ± 4.40 | 88.56 ± 2.34
3 | 55.39 ± 2.30 | 48.96 ± 12.23 | 51.89 ± 7.60 | 44.89 ± 22.79 | 52.57 ± 2.58 | 41.82 ± 9.88 | 72.43 ± 3.22 | 65.42 ± 5.50 | 69.94 ± 4.99 | 76.50 ± 3.23
4 | 47.20 ± 6.37 | 48.86 ± 14.14 | 40.95 ± 10.33 | 34.47 ± 25.45 | 57.92 ± 7.50 | 65.34 ± 11.69 | 77.73 ± 3.51 | 77.96 ± 5.31 | 75.63 ± 5.39 | 82.19 ± 3.92
5 | 85.59 ± 2.77 | 62.62 ± 19.22 | 75.29 ± 4.92 | 55.75 ± 24.99 | 52.76 ± 5.08 | 55.28 ± 5.24 | 79.78 ± 2.11 | 50.33 ± 3.87 | 75.26 ± 2.71 | 81.71 ± 3.49
6 | 93.19 ± 0.92 | 91.35 ± 4.93 | 93.40 ± 3.30 | 81.17 ± 21.38 | 79.49 ± 2.52 | 88.64 ± 1.35 | 95.69 ± 1.53 | 86.43 ± 2.16 | 94.79 ± 1.60 | 97.76 ± 0.92
7 | 50.16 ± 17.71 | 46.70 ± 15.68 | 22.49 ± 17.72 | 13.61 ± 16.56 | 43.72 ± 16.49 | 38.62 ± 33.31 | 21.61 ± 25.55 | 62.93 ± 21.35 | 73.03 ± 18.03 | 72.09 ± 16.30
8 | 93.37 ± 0.81 | 92.52 ± 2.77 | 91.76 ± 1.40 | 75.92 ± 26.44 | 89.41 ± 2.32 | 89.45 ± 2.41 | 91.48 ± 1.89 | 92.02 ± 1.13 | 93.09 ± 0.78 | 94.39 ± 0.47
9 | 33.62 ± 14.20 | 63.57 ± 18.28 | 35.74 ± 23.68 | 32.78 ± 29.91 | 31.35 ± 13.48 | 13.99 ± 19.46 | 23.43 ± 26.35 | 47.50 ± 13.45 | 59.99 ± 16.58 | 68.66 ± 17.26
10 | 66.05 ± 1.98 | 62.03 ± 17.77 | 72.40 ± 4.64 | 52.51 ± 32.21 | 61.48 ± 2.95 | 59.01 ± 6.94 | 83.09 ± 3.11 | 73.70 ± 4.17 | 85.34 ± 3.43 | 87.19 ± 1.98
11 | 72.82 ± 1.14 | 75.50 ± 6.83 | 79.24 ± 2.16 | 80.16 ± 9.91 | 72.26 ± 1.33 | 70.54 ± 4.43 | 92.85 ± 1.22 | 79.74 ± 3.35 | 89.73 ± 2.47 | 90.70 ± 1.63
12 | 60.66 ± 2.77 | 50.03 ± 15.02 | 58.43 ± 8.37 | 49.05 ± 25.87 | 51.64 ± 2.99 | 40.73 ± 14.18 | 83.30 ± 5.45 | 66.83 ± 6.57 | 76.38 ± 7.70 | 81.85 ± 3.97
13 | 94.23 ± 2.25 | 92.96 ± 4.79 | 96.80 ± 2.83 | 70.88 ± 25.35 | 86.61 ± 3.46 | 87.18 ± 3.73 | 92.75 ± 5.96 | 88.69 ± 5.38 | 95.57 ± 1.90 | 97.18 ± 3.02
14 | 92.56 ± 0.75 | 92.44 ± 2.79 | 93.60 ± 1.34 | 89.57 ± 10.91 | 88.50 ± 1.37 | 90.59 ± 0.57 | 97.09 ± 0.66 | 89.64 ± 1.10 | 94.53 ± 1.25 | 96.21 ± 0.89
15 | 61.43 ± 3.35 | 48.79 ± 5.36 | 44.69 ± 9.52 | 29.38 ± 13.56 | 44.94 ± 3.84 | 47.55 ± 4.32 | 58.74 ± 6.96 | 48.06 ± 7.94 | 58.84 ± 7.65 | 63.92 ± 3.78
16 | 84.57 ± 2.65 | 55.79 ± 16.94 | 55.15 ± 15.24 | 30.91 ± 30.87 | 48.29 ± 12.06 | 27.14 ± 28.65 | 87.47 ± 10.02 | 94.67 ± 3.59 | 86.10 ± 6.24 | 87.73 ± 3.15
OA (%) | 75.27 ± 0.77 | 69.25 ± 6.60 | 72.59 ± 2.80 | 67.26 ± 13.98 | 66.21 ± 0.89 | 65.71 ± 4.03 | 83.63 ± 1.13 | 73.98 ± 2.35 | 82.13 ± 2.65 | 84.67 ± 1.25
K (%) | 71.57 ± 0.87 | 64.72 ± 8.00 | 68.69 ± 3.27 | 62.21 ± 16.96 | 61.65 ± 0.98 | 60.79 ± 4.76 | 81.55 ± 1.27 | 70.55 ± 2.65 | 79.77 ± 3.02 | 82.66 ± 1.41
Table 3. The comparative experimental results on the Houston2013 dataset (bold numbers represent the best results for the corresponding category).
Class | Mou | He | 3D-CNN | HybridSN | ViT | CrossViT | LeViT | RvT | HiT | SS-TMNet
1 | 95.49 ± 0.92 | 95.45 ± 1.45 | 96.31 ± 1.91 | 97.74 ± 0.72 | 95.82 ± 0.84 | 94.36 ± 2.16 | 94.66 ± 1.51 | 97.50 ± 0.54 | 96.99 ± 0.87 | 97.60 ± 0.64
2 | 96.28 ± 0.68 | 97.04 ± 1.46 | 96.40 ± 1.51 | 97.54 ± 0.91 | 96.03 ± 0.98 | 94.91 ± 2.60 | 95.37 ± 2.58 | 98.42 ± 0.24 | 97.69 ± 0.52 | 98.44 ± 0.56
3 | 99.97 ± 0.05 | 99.03 ± 0.31 | 98.91 ± 0.94 | 99.21 ± 1.00 | 98.15 ± 0.65 | 98.59 ± 1.07 | 92.46 ± 8.68 | 99.70 ± 0.29 | 99.28 ± 0.69 | 99.50 ± 0.23
4 | 96.50 ± 0.97 | 95.59 ± 1.18 | 96.54 ± 1.52 | 98.35 ± 0.84 | 95.25 ± 0.79 | 97.10 ± 0.37 | 94.50 ± 1.50 | 98.32 ± 0.54 | 97.45 ± 0.64 | 97.26 ± 0.96
5 | 97.76 ± 0.71 | 95.07 ± 1.55 | 96.38 ± 0.63 | 96.72 ± 1.11 | 96.10 ± 0.87 | 96.74 ± 0.64 | 96.56 ± 1.46 | 97.63 ± 0.47 | 97.49 ± 0.61 | 98.19 ± 0.33
6 | 97.19 ± 2.86 | 74.68 ± 5.34 | 83.14 ± 5.14 | 93.85 ± 2.61 | 73.81 ± 5.75 | 88.76 ± 2.57 | 87.45 ± 3.28 | 93.30 ± 2.21 | 88.74 ± 3.62 | 93.67 ± 2.33
7 | 83.06 ± 0.99 | 90.96 ± 1.45 | 91.60 ± 1.81 | 93.78 ± 1.73 | 91.16 ± 1.34 | 94.86 ± 0.80 | 91.19 ± 4.61 | 95.99 ± 1.41 | 93.05 ± 1.08 | 94.54 ± 1.03
8 | 67.91 ± 1.94 | 82.25 ± 3.29 | 86.07 ± 2.16 | 90.60 ± 2.20 | 88.58 ± 1.38 | 89.61 ± 1.99 | 81.54 ± 8.63 | 94.48 ± 1.71 | 91.24 ± 1.97 | 95.74 ± 1.35
9 | 78.28 ± 1.83 | 83.92 ± 2.64 | 89.48 ± 1.81 | 89.02 ± 4.05 | 88.71 ± 1.77 | 92.63 ± 0.82 | 83.87 ± 8.50 | 92.34 ± 1.55 | 90.64 ± 1.84 | 94.29 ± 1.32
10 | 72.09 ± 2.47 | 86.58 ± 2.87 | 90.36 ± 1.50 | 92.31 ± 3.74 | 90.39 ± 1.23 | 89.33 ± 2.54 | 76.11 ± 12.47 | 94.44 ± 1.44 | 92.39 ± 1.92 | 96.91 ± 0.81
11 | 76.74 ± 1.04 | 85.83 ± 2.80 | 90.29 ± 2.17 | 91.84 ± 3.23 | 91.15 ± 1.53 | 91.29 ± 2.27 | 78.69 ± 12.96 | 93.75 ± 1.06 | 93.28 ± 1.55 | 94.94 ± 0.72
12 | 71.20 ± 2.04 | 82.39 ± 4.06 | 89.76 ± 2.29 | 91.47 ± 3.03 | 87.13 ± 1.52 | 88.22 ± 3.34 | 84.79 ± 7.95 | 93.37 ± 1.77 | 90.72 ± 2.25 | 96.50 ± 1.00
13 | 54.00 ± 5.40 | 83.31 ± 3.68 | 90.21 ± 4.11 | 92.38 ± 1.39 | 74.81 ± 4.09 | 80.82 ± 2.66 | 57.02 ± 31.62 | 85.68 ± 4.58 | 88.52 ± 2.33 | 93.42 ± 1.61
14 | 95.64 ± 1.02 | 95.41 ± 1.85 | 96.94 ± 2.92 | 96.06 ± 2.52 | 95.13 ± 1.43 | 95.03 ± 1.53 | 90.17 ± 7.81 | 99.12 ± 0.43 | 97.13 ± 1.62 | 99.88 ± 0.19
15 | 98.25 ± 0.40 | 96.28 ± 1.59 | 98.13 ± 0.83 | 96.02 ± 2.05 | 94.69 ± 2.24 | 97.65 ± 1.18 | 94.10 ± 4.05 | 98.20 ± 0.70 | 98.40 ± 1.06 | 98.98 ± 0.58
OA (%) | 84.91 ± 0.51 | 89.61 ± 1.82 | 92.40 ± 1.30 | 93.90 ± 1.70 | 91.28 ± 0.69 | 92.61 ± 1.01 | 87.36 ± 5.97 | 95.28 ± 0.72 | 93.94 ± 1.02 | 96.22 ± 0.35
K (%) | 83.68 ± 0.55 | 88.77 ± 1.97 | 91.79 ± 1.40 | 93.41 ± 1.84 | 90.58 ± 0.74 | 92.02 ± 1.09 | 86.34 ± 6.47 | 94.91 ± 0.78 | 93.45 ± 1.10 | 95.92 ± 0.38
Table 4. Student’s t-test results between SS-TMNet and the compared methods.
Dataset | Mou | He | 3D-CNN | HybridSN | ViT | CrossViT | LeViT | RvT | HiT
Pavia University | 7.12 × 10^−7 | 5.20 × 10^−11 | 7.45 × 10^−6 | 1.40 × 10^−2 | 1.02 × 10^−11 | 1.22 × 10^−12 | 3.68 × 10^−2 | 1.08 × 10^−11 | 2.26 × 10^−5
Indian Pines | 1.77 × 10^−13 | 1.94 × 10^−6 | 4.05 × 10^−8 | 5.01 × 10^−3 | 2.80 × 10^−18 | 4.59 × 10^−8 | 4.48 × 10^−2 | 4.76 × 10^−10 | 1.78 × 10^−2
Houston2013 | 1.78 × 10^−21 | 1.16 × 10^−6 | 5.61 × 10^−6 | 2.57 × 10^−3 | 1.86 × 10^−13 | 7.50 × 10^−9 | 1.59 × 10^−3 | 2.50 × 10^−3 | 5.08 × 10^−5
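Table 4 reports p-values from Student's t-tests between SS-TMNet and each compared method; p-values below a chosen significance level (commonly 0.05) indicate a statistically significant difference. A minimal sketch of such a comparison is given below, assuming a two-sample t-test over overall-accuracy scores from repeated training runs; the scores and the exact test variant are illustrative assumptions, not the values behind Table 4.

import numpy as np
from scipy import stats

# Hypothetical OA scores (%) from repeated runs of two methods.
oa_ss_tmnet = np.array([91.74, 91.68, 91.81, 91.59, 91.88])
oa_baseline = np.array([91.28, 91.05, 91.44, 91.12, 91.50])

# Two-sample Student's t-test (equal variances assumed by default).
t_stat, p_value = stats.ttest_ind(oa_ss_tmnet, oa_baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.3e}")  # p < 0.05 suggests a significant difference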
Table 7. The OA results of SS-TMNet and the comparison methods with different proportions of training samples on the Houston2013 dataset (bold numbers represent the best results for the corresponding setting).
Training Sample | Mou | He | 3D-CNN | HybridSN | ViT | CrossViT | LeViT | RvT | HiT | SS-TMNet
10% | 84.91 ± 0.51 | 89.61 ± 1.82 | 92.40 ± 1.30 | 93.90 ± 1.70 | 91.28 ± 0.69 | 92.61 ± 1.01 | 87.36 ± 5.97 | 95.28 ± 0.72 | 93.94 ± 1.02 | 96.22 ± 0.35
20% | 87.77 ± 0.36 | 94.71 ± 1.05 | 95.84 ± 0.64 | 97.82 ± 0.28 | 95.59 ± 0.37 | 97.19 ± 0.15 | 97.70 ± 0.33 | 97.55 ± 0.22 | 96.96 ± 0.96 | 97.98 ± 0.19
30% | 89.42 ± 0.40 | 96.38 ± 0.93 | 97.32 ± 0.28 | 97.92 ± 0.67 | 97.15 ± 0.25 | 98.19 ± 0.13 | 98.46 ± 0.17 | 98.27 ± 0.22 | 98.04 ± 0.26 | 98.49 ± 0.15
40% | 90.53 ± 0.42 | 96.88 ± 0.90 | 97.88 ± 0.23 | 98.65 ± 0.41 | 97.78 ± 0.25 | 98.61 ± 0.16 | 98.85 ± 0.11 | 98.63 ± 0.11 | 98.43 ± 0.30 | 98.79 ± 0.11
50% | 91.48 ± 0.35 | 97.59 ± 0.38 | 98.40 ± 0.15 | 98.76 ± 0.18 | 98.24 ± 0.26 | 98.84 ± 0.13 | 98.98 ± 0.07 | 98.82 ± 0.11 | 98.54 ± 0.29 | 98.88 ± 0.13