Article

A Hyperspectral Image Classification Method Based on Adaptive Spectral Spatial Kernel Combined with Improved Vision Transformer

1 Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
2 Communication Construction Operation and Maintenance Center, State Grid Heilongjiang Electric Power Co., Ltd. Information and Communication Company, Harbin 150010, China
3 Department of Computer Science, Chubu University, Aichi 487-8501, Japan
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(15), 3705; https://doi.org/10.3390/rs14153705
Submission received: 20 June 2022 / Revised: 27 July 2022 / Accepted: 30 July 2022 / Published: 2 August 2022
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)

Abstract

In recent years, methods based on deep convolutional neural networks (CNNs) have dominated the classification of hyperspectral images (HSIs). Although CNN-based HSI classification methods have the advantage of spatial feature extraction, HSI data are characterized by approximately continuous spectral information, usually containing hundreds of spectral bands. CNNs cannot mine and represent the sequential properties of spectral features well, whereas attention-based transformer models have proven their advantages in processing sequence data. This study proposes a new spectral–spatial kernel combined with an improved Vision Transformer (ViT) to jointly extract spectral–spatial features for the classification task. First, the hyperspectral data are dimensionally reduced by PCA; then, shallow features are extracted with a spectral–spatial kernel, and the extracted features are input into the improved ViT model. The improved ViT introduces a re-attention mechanism and a locality mechanism on top of the original ViT. The re-attention mechanism increases the diversity of attention maps at different levels, while the locality mechanism allows ViT to make full use of both the local and global information of the data to improve classification accuracy. Finally, a multi-layer perceptron is used to obtain the classification result. The Focal Loss function is used to increase the loss weight of small-class and difficult-to-classify samples in the HSI data and to reduce the loss weight of easy-to-classify samples, so that the network can learn more useful hyperspectral image information. In addition, the Apollo optimizer is used to train the HSI classification model, better updating and computing the network parameters that affect model training and output, thereby minimizing the loss function. We evaluated the classification performance of the proposed method on four different datasets and achieved good results on urban land cover classification, crop classification and mineral classification. Compared with state-of-the-art backbone networks, the proposed method achieves a significant improvement in classification accuracy.

Graphical Abstract

1. Introduction

Hyperspectral imagery (HSI) is acquired by a hyperspectral imager and is very rich in both spatial and spectral information. Compared with ordinary images, hyperspectral remote sensing images have many more bands and extremely high spectral resolution. Hyperspectral remote sensing is widely used in Earth observation, in fields such as precision agriculture [1], land cover analysis [2], marine hydrology detection [3] and geological exploration [4].
HSI classification is an important task in hyperspectral image processing and application. In early research, many traditional machine learning methods were applied to hyperspectral image classification, such as the K-nearest neighbor method [5], support vector machines [6], random forests [7], naive Bayes [8] and decision trees [9]. Although these traditional methods achieved good performance, they all learn the classification from shallow features and rely on manually designed classification features, which makes it difficult to learn the more complex information in hyperspectral images [10].
Hyperspectral image classification algorithms based on deep learning can automatically obtain high-level features of the image, so that the classification model can better express the characteristics of the remote sensing image and improve classification accuracy. Chen [11] applied deep learning theory to hyperspectral image classification for the first time, using stacked autoencoders to extract spatial–spectral features from hyperspectral images and achieving good results. Yu [12] applied convolutional neural networks (CNNs) to hyperspectral image classification, but only used spectral information, without taking into account the relationships between adjacent cells. Chen et al. [13] proposed a three-dimensional convolutional neural network (3D-CNN) feature extraction model to directly extract spectral–spatial features end-to-end from hyperspectral images and achieve better classification results, with higher inter-class distinguishability compared to two-dimensional convolutional neural networks (2D-CNNs). Roy [14] proposed the HybridSN framework, a spectral–spatial 3D-CNN followed by a spatial 2D-CNN that further learns a more abstract spatial representation. Zhong et al. [15] proposed the SSRN network, in which a spectral residual block and a spatial residual block sequentially learn discriminative features from the rich spectral features and spatial context in hyperspectral images. The selection of informative spectral–spatial kernel features is challenging due to the presence of noise and band correlations, and it is usually addressed with a convolutional neural network with a fixed-size receptive field (RF). Roy et al. [16] proposed an attention-based adaptive spectral–spatial kernel modified residual network (A2S2K-ResNet) with spectral attention to capture discriminative spectral–spatial features in an end-to-end training manner, using improved 3D ResBlocks to jointly extract spectral–spatial features for HSI classification. T. Alipour-Fard et al. [17] proposed a multi-branch selective kernel network (MSKNet), which convolves the input image with different receptive field sizes to generate multiple branches and adjusts each branch according to the input contrast through a channel attention mechanism. Automatically adjusting the size of the neuron receptive field and enhancing the cross-channel relationships between features alleviates the limitation of the fixed-size receptive field used in convolutional neural networks, which restricts the weights the model can learn. Although CNN-based methods have the advantage of spatial feature extraction, they have difficulty handling continuous data and are not good at modeling long-range dependencies.
Recently, the application of transformers to vision tasks has become a hot topic. The spectrum of an HSI is a kind of sequence data, usually containing hundreds of spectral bands. Attention-based transformer models have demonstrated their advantages in handling sequential data, and the transformer framework can represent high-level semantic features well. Although CNNs have good local perception ability, due to the limitations of their inherent network backbone they cannot mine and represent the sequential attributes of spectral features well, while the attention-based transformer model can be trained in parallel and has access to global information. CNN methods have limited ability to acquire deep semantic features: as the depth increases, a traditional CNN increases the channel dimension and reduces the spatial dimension, and the computational cost increases significantly. The transformer does not have this problem, since the channel and spatial dimensions do not change across layers; such a strategy of reducing the spatial dimension and increasing the channel dimension can also benefit the performance of transformer structures.
He et al. [18] used a CNN to extract spatial features and a transformer to capture spectral sequence relationships. Hong et al. [19] proposed the SpectralFormer network architecture to learn spectral local sequence information from adjacent bands of HSI images and generate group-wise spectral embeddings. He et al. [20] proposed HSI-BERT, based on bidirectional encoder representations from transformers (BERT), which has a global receptive field and can directly capture the global dependencies between pixels without considering their spatial distances. Han et al. [21] proposed the Transformer iN Transformer (TNT) block, which uses an outer transformer block to model the relationships between patches and an inner transformer block to model the relationships between pixels. The model not only retains information extraction at the patch level but also achieves information extraction at the pixel level, which can significantly improve the model's ability to model local structures. Touvron et al. [22] used distillation to enable a transformer-based model to learn some inductive biases from a CNN model, thereby improving its image-processing capability. Although the global interaction between token embeddings can be well modeled by the transformer's self-attention mechanism, a locality mechanism for information exchange within local regions is lacking. Li et al. [23] introduced locality into the transformer by adding depthwise convolutions to the feedforward network.
In order to capture the long-range spectral relationships of HSI sequences, obtain deep semantic features and make full use of the local and global information of the data, this paper proposes a new classification framework: an attention-based adaptive spectral–spatial kernel combined with an improved ViT. The contributions of this article are summarized as follows:
1. This study proposes a novel HSI classification architecture, an attention-based adaptive spectral–spatial kernel combined with an improved ViT, which systematically combines bands from shallow to deep, enables neurons to adaptively adjust the receptive field size and successfully handles the long-range dependence of the spectrum, making full use of the spectral–spatial information and local–global information in HSI to improve classification performance.
2. This study proposes an improved ViT model that introduces a re-attention mechanism and a locality mechanism. The re-attention mechanism is used to increase the diversity of attention maps at different levels. The locality mechanism is introduced into ViT so that the attention mechanism for global relation modeling and the locality mechanism for local information aggregation are combined, making full use of the local and global information of the data and improving classification accuracy.
3. In order to train the model better, the Focal Loss function is used to increase the loss weight of small-class and hard-to-classify samples in the HSI data and to reduce the loss weight of easy-to-classify samples, so that the network can learn more useful hyperspectral image information. In addition, the Apollo optimizer is used to train the HSI classification model, better updating and computing the network parameters that affect model training and output and thereby minimizing the loss function; the smaller the loss, the better the model, thus improving the classification performance.
4. The effectiveness of the method is verified on four challenging public HSI datasets from different application domains: urban land cover classification is performed on the Pavia University dataset, mineral classification on the Xuzhou dataset, and crop classification on the Indian Pines and WHU-Hi-LongKou datasets. Compared with other representative methods, the classification accuracy of the proposed method is improved.
The remainder of this paper is organized as follows. Section 2 describes the proposed classification method in detail. Section 3 describes the experimental datasets, experimental results and related analyses. Section 4 discusses the results, and Section 5 gives conclusions and suggestions for future work.

2. Proposed Method

The framework proposed in this paper for HSI classification is shown in Figure 1. First, the principal component analysis (PCA) method is used to remove redundant spectral bands and reduce the time and space complexity of image processing. To effectively adjust the receptive field size of neurons and the cross-channel dependencies, we propose an attention-based adaptive spectral–spatial residual method. Since CNNs are good at capturing local information but have difficulty processing the continuous spectral data of HSI, the extracted features are sent to the improved Vision Transformer model. The original ViT model is improved by combining it with a Re-Attention mechanism to increase the diversity of the attention maps at different levels. Then, a locality mechanism is introduced into the ViT by adding a depthwise convolution to the feedforward network, and the transformed features are fed into the transformer encoder modules for feature representation and learning. The remainder of this section is divided into four parts: the attention-based adaptive spectral–spatial residual module, the improved ViT, the Apollo optimizer and the Focal Loss function.

2.1. Spectral–Spatial Feature Extraction

Let the hyperspectral data cube be $I \in \mathbb{R}^{M \times N \times D}$, where $I$ is the original input, $M$ is the width, $N$ is the height, and $D$ is the number of spectral bands. Every HSI pixel in $I$ contains $D$ spectral measures and forms a one-hot label vector $Y = (Y_1, Y_2, \ldots, Y_C) \in \mathbb{R}^{1 \times 1 \times C}$, where $C$ is the number of land cover categories. To remove spectral redundancy, principal component analysis is first performed on the raw input HSI data to reduce the number of spectral bands from $D$ to $B$ while maintaining the same spatial dimensions. Let $X \in \mathbb{R}^{M \times N \times B}$ be the data cube after PCA processing, where $B$ is the number of spectral bands after PCA [24]. Thus, the spectral bands are reduced while the spectral information is preserved. Using the combined spectral and spatial information, a region of size $S \times S$ centered on the pixel $(i, j)$ is extracted from $X$ and defined as a spectral–spatial vector $X_{i,j} = [X_{i,j,1}, \ldots, X_{i,j,B}] \in \mathbb{R}^{S \times S \times B}$. Taking the HSI data cube $X \in \mathbb{R}^{S \times S \times B}$ as input, the adaptive spectral–spatial kernel feature map $V \in \mathbb{R}^{S \times S \times B}$ is generated as output [16]:
$V = F_{\mathrm{ASSK}}\left(X;\ \theta_{a}\right)$
where $\theta_{a}$ denotes the trainable parameters of the ASSK module. By automatically adjusting the receptive field size, neurons can jointly learn spectral–spatial features and amplify the multi-scale information of the neurons in the next layer.
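As an illustration of this preprocessing step, the following sketch (not the authors' released code) reduces the band dimension with PCA and extracts an S × S patch centered on a given pixel; the component count and patch size are assumed values chosen only for illustration.

```python
# Illustrative sketch of the PCA reduction and patch extraction described above.
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(hsi_cube: np.ndarray, n_components: int = 30) -> np.ndarray:
    """Reduce an (M, N, D) cube to (M, N, B) with PCA over the pixel spectra."""
    m, n, d = hsi_cube.shape
    flat = hsi_cube.reshape(-1, d)                      # (M*N, D) pixel spectra
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(m, n, n_components)

def extract_patch(cube: np.ndarray, i: int, j: int, patch_size: int = 7) -> np.ndarray:
    """Return the S x S x B spectral-spatial patch centered on pixel (i, j)."""
    half = patch_size // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[i:i + patch_size, j:j + patch_size, :]
```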
In order to enable neurons to adaptively adjust the size of the receptive field, we use selective kernel convolution, learning the selection of the spectral–spatial kernel attention feature maps between different receptive fields through $F_{\mathrm{ASSK}}$, as shown in Figure 2. The selective kernel convolutions of the multiple kernels have different kernel sizes. The basic idea is to use gates to control how the information from two branches, carrying information of different scales, flows into the neurons of the next layer. To achieve this, the gate needs to integrate information from all branches: the multiple branches with different kernel sizes are fused using softmax attention guided by the information in these branches. Different attention on these branches produces effective receptive fields of different sizes for the neurons of the fusion layer. $\hat{F}_{spectral}^{(l+1)}: X^{l} \rightarrow \hat{U}^{(l+1)} \in \mathbb{R}^{S \times S \times B}$ and $\tilde{F}_{spatial}^{(l+1)}: X^{l} \rightarrow \tilde{U}^{(l+1)} \in \mathbb{R}^{S \times S \times B}$ are the transformations of the $(l+1)$th layer, where $X^{l}$ is the input to the $(l+1)$th layer's spectral and spatial kernel selection transformation. The output feature maps $\hat{U}^{(l+1)}$ and $\tilde{U}^{(l+1)}$ are defined as:
$\hat{U}^{(l+1)} = \hat{F}_{spectral}^{(l+1)}\left(X^{l}\right) = X^{l} * W_{(1\times1\times7)}^{(l+1)} + b^{(l+1)}$
$\tilde{U}^{(l+1)} = \tilde{F}_{spatial}^{(l+1)}\left(X^{l}\right) = X^{l} * W_{(3\times3\times7)}^{(l+1)} + b^{(l+1)}$
Here, $*$ denotes the three-dimensional convolution operation, $W^{(l+1)}$ is the weight of the $(l+1)$th convolution layer, $b^{(l+1)}$ is the bias, and two three-dimensional convolution kernels with receptive field sizes (1 × 1 × 7) and (3 × 3 × 7) are used to extract the spectral and spatial feature maps, respectively. $\hat{F}_{spectral}$ extracts spectral features, and $\tilde{F}_{spatial}$ extracts spatial features.
By automatically adjusting the size of the receptive field of neurons, the neurons jointly learn the spectral–spatial features and amplify the multi-scale information flow of the neurons in the next layer. Firstly, element-level summation is used to fuse the results of the two branches:
$U^{(l+1)} = \tilde{U}^{(l+1)} + \hat{U}^{(l+1)}$
Secondly, global information is embedded by using global average pooling (GAP) to generate feature response vectors (FRVs) containing the channel statistics of the data. Specifically, the spatial dimension of $U^{(l+1)} \in \mathbb{R}^{S \times S \times B}$ is reduced to $s_{b}^{(l+1)} \in \mathbb{R}^{1 \times 1 \times B}$ along the $b$th feature map direction by averaging the $S \times S$ spatial elements of each channel:
$s_{b}^{(l+1)} = \frac{1}{S \times S} \sum_{i=1}^{S} \sum_{j=1}^{S} u_{b}^{(l+1)}(i, j)$
Furthermore, to obtain the neural activations of the different channel features and enable adaptive kernel selection, a compact feature $z^{(l+1)} \in \mathbb{R}^{d \times 1}$ is created to guide precise and adaptive selection. This is achieved by a simple fully connected layer, which reduces the dimensionality to improve efficiency, and the feature weight vector is defined as:
$z^{(l+1)} = \mathrm{ReLU}\left(\mathrm{BN}\left(W^{(l+1)} s^{(l+1)}\right)\right)$
ReLU is the activation function, and BN denotes batch normalization. The dimension $d$ affects model convergence, and the compression ratio $r$ is used to control the compressed dimension of $z^{(l+1)}$:

$d = \max\left(C / r,\ L\right)$

where $L$ is the minimum value of $d$ ($L = 32$ in our experiments).
Guided by the channel descriptor $z^{(l+1)}$, a discriminative spectral–spatial kernel feature map is automatically selected. Specifically, $z^{(l+1)}$ is passed through the softmax function:

$a_{spectral}^{(l+1)} = \frac{e^{A_{b}^{(l+1)} z^{(l+1)}}}{e^{A_{b}^{(l+1)} z^{(l+1)}} + e^{B_{b}^{(l+1)} z^{(l+1)}}}$

$b_{spatial}^{(l+1)} = \frac{e^{B_{b}^{(l+1)} z^{(l+1)}}}{e^{A_{b}^{(l+1)} z^{(l+1)}} + e^{B_{b}^{(l+1)} z^{(l+1)}}}$

Here, $a_{spectral}^{(l+1)}$ and $b_{spatial}^{(l+1)}$ denote the soft attention vectors for $\hat{U}^{(l+1)}$ and $\tilde{U}^{(l+1)}$, respectively. $A_{b}^{(l+1)} \in \mathbb{R}^{1 \times d}$ and $B_{b}^{(l+1)} \in \mathbb{R}^{1 \times d}$ are the $b$th rows of $A^{(l+1)} \in \mathbb{R}^{B \times d}$ and $B^{(l+1)} \in \mathbb{R}^{B \times d}$. The final feature map $V$ is obtained through the attention weights on each kernel:

$V = a_{spectral}^{(l+1)} \times \hat{U}^{(l+1)} + b_{spatial}^{(l+1)} \times \tilde{U}^{(l+1)}$

where $a_{spectral}^{(l+1)} + b_{spatial}^{(l+1)} = 1$, $V = [V_1, V_2, \ldots, V_B]$ and $V_i \in \mathbb{R}^{S \times S}$, $i = 1, \ldots, B$.
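The following PyTorch sketch illustrates the two-branch selective-kernel fusion described above; it is a simplified reading of the equations, and the channel count, kernel sizes and reduction settings are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveKernelConv3D(nn.Module):
    """Sketch of the two-branch selective-kernel fusion (spectral + spatial branches,
    GAP descriptor, softmax attention over branches). Sizes are assumptions."""
    def __init__(self, in_ch: int = 1, out_ch: int = 24, reduction: int = 2, l_min: int = 32):
        super().__init__()
        # Spectral branch (1 x 1 x 7 receptive field) and spatial branch (3 x 3 x 7),
        # padded so both branches keep the input's spectral/spatial size.
        self.spectral = nn.Conv3d(in_ch, out_ch, (7, 1, 1), padding=(3, 0, 0))
        self.spatial = nn.Conv3d(in_ch, out_ch, (7, 3, 3), padding=(3, 1, 1))
        d = max(out_ch // reduction, l_min)
        self.fc_z = nn.Sequential(nn.Linear(out_ch, d), nn.BatchNorm1d(d), nn.ReLU())
        self.attn = nn.Linear(d, 2 * out_ch)             # logits for the two branches

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, in_ch, B, S, S)
        u_hat, u_tilde = self.spectral(x), self.spatial(x)
        u = u_hat + u_tilde                               # element-wise fusion
        s = u.mean(dim=(2, 3, 4))                         # GAP channel statistics
        z = self.fc_z(s)                                  # compact feature descriptor
        logits = self.attn(z).view(-1, 2, u.size(1))      # (batch, 2 branches, out_ch)
        weights = F.softmax(logits, dim=1)                # soft attention over branches
        a = weights[:, 0].view(-1, u.size(1), 1, 1, 1)
        b = weights[:, 1].view(-1, u.size(1), 1, 1, 1)
        return a * u_hat + b * u_tilde                    # final feature map V
```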
In order to extract more robust and discriminative spectral–spatial characteristics, the kernel feature maps are processed by four ResBlocks. Each ResBlock is made up of 24 kernels, and the learning of spectral and spatial features is separated by the distinct kernel shapes: the first two ResBlocks extract spectral-oriented features, whereas the latter two extract spatial-oriented features. As a result, combining spectral and spatial information increases the model's discrimination capability. A GAP layer is used after the ResBlocks to transform the 3D feature maps of size 7 × 7 × 24 into feature vectors of size 1 × 1 × 24.
The efficient feature recalibration (EFR) module recalibrates features through residual connections and spectral–spatial channel recalibration. $F_{EFR}(\cdot)$ takes the transformed feature map of the $l$th layer, $X^{l} \in \mathbb{R}^{S \times S \times B}$, as input and generates the channel-recalibrated feature map $\hat{X}^{l+1} \in \mathbb{R}^{S \times S \times B}$ as output, that is:

$\hat{X}^{l+1} = F_{EFR}\left(X^{l};\ \theta_{b}\right)$

where $\theta_{b}$ denotes the trainable parameters of the EFR module.

2.2. Improved Vision Transformer

Transformer networks were originally developed to model long-range relationships between sequence elements in machine translation. Although the transformer's self-attention mechanism can model the global interaction between token embeddings, it lacks a locality mechanism for information exchange within local regions. Because locality is critical for HSI images, we provide a locality mechanism for ViT by incorporating depthwise convolution. The improved ViT combines the attention mechanism for global relation modeling with the locality mechanism for local information aggregation. Locality is added to ViT by introducing depthwise convolutions in the feedforward network, and the Re-Attention mechanism, built on the original ViT, is used to increase the diversity of attention maps at different levels.
Compared with standard convolution, depthwise convolution performs the calculation per channel; that is, each input feature map is convolved separately to obtain one channel of the output feature map. As a result, depthwise convolution is both parameter- and computation-efficient. The patches are fed into the embedding layer (the Linear Projection of Flattened Patches in Figure 1) to obtain a set of vectors called tokens. A class token is then prepended to the token sequence, and positional encoding is added to the patch embeddings to retain position information; tokens that are located closer together tend to be encoded more similarly. The position information corresponds to indices $0 \ldots n$. The sequence is then input into the transformer encoder, whose block is stacked $N$ times. The output of the transformer is classified by the MLP Head, which consists of LayerNorm and two fully connected layers with the GELU activation function, to obtain the final classification result [24].
Figure 3 depicts the transformer encoder, which consists of $N$ stacked identical layers. Each layer consists of the re-attention mechanism and a position-wise fully connected feedforward network. Around each of these two sublayers, we use a residual connection [25] and a normalization layer [26]; that is, the output of each sublayer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sublayer.
1. Re-Attention
Re-Attention effectively overcomes the problem of attention collapse and allows deeper ViT training; it collects complementary information from multiple attention heads through interactions to promote the diversity of attention maps. Specifically, dynamic aggregation is used to create a new set of attention maps from the heads' attention maps. A learnable transition matrix $\Theta \in \mathbb{R}^{H \times H}$ is defined and used to combine the multi-head attention maps into a new, regenerated map before multiplying by $V$. Re-Attention is computed as follows [27]:
$\mathrm{Re\text{-}Attention}(Q, K, V) = \mathrm{Norm}\left(\Theta^{T}\left(\mathrm{softmax}\left(\frac{Q K^{T}}{\sqrt{d}}\right)\right)\right) V$
The transformation matrix $\Theta$ is multiplied along the head dimension of the self-attention maps. Norm is the normalization function used to reduce hierarchical variance. The softmax function is applied to the rows of the similarity matrix, and $\sqrt{d}$ is used to normalize the result. The three learnable weight matrices are the query ($Q$), key ($K$) and value ($V$). Relationships between tokens are modeled by projecting the similarity between query–key pairs, resulting in attention scores.
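A minimal PyTorch sketch of this Re-Attention operation is given below, assuming a standard multi-head layout; the use of BatchNorm as the Norm function follows common DeepViT-style implementations and is an assumption here, as are the dimension settings.

```python
import torch
import torch.nn as nn

class ReAttention(nn.Module):
    """Sketch of Re-Attention: multi-head self-attention whose per-head attention maps
    are remixed by a learnable H x H transition matrix Theta before being applied to V."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.theta = nn.Parameter(torch.eye(num_heads))   # learnable transition matrix
        self.norm = nn.BatchNorm2d(num_heads)             # Norm over the regenerated maps
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, tokens, dim)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))                     # (batch, heads, tokens, head_dim)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        attn = torch.einsum("hg,bgnm->bhnm", self.theta, attn)  # mix maps across heads
        attn = self.norm(attn)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.proj(out)
```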
2. Feedforward
After the Re-Attention layer, a feedforward network is attached. The token sequence is first reshaped into a feature map on a 2D lattice. Two 1 × 1 convolutions and one depthwise convolution are then applied to the feature map. Finally, the feature map is reshaped back into a sequence of tokens, which is used by the self-attention in the next transformer layer of the network. The specific description is as follows.
The feedforward network consists of two 1 × 1 convolutions and transforms features along the embedding dimension. The hidden dimension between the two convolutional layers is expanded to learn richer feature representations. Since the feedforward network is applied position-wise to the token sequence $Z \in \mathbb{R}^{N \times d}$, the reshaped features of the token sequence are represented as:
$Z^{r} = \mathrm{Seq2Img}(Z), \quad Z^{r} \in \mathbb{R}^{h \times w \times d}$
The sequence is converted into a 2D feature map using Seq2Img. To re-establish token closeness, each token is placed at the pixel position of the feature map, offering a chance to reinstate locality into the network.
Since the feature map only undergoes 1 × 1 convolutions, there is no information exchange between neighboring pixels. Furthermore, the transformer's attention part only captures the global interdependence between all tokens. A depthwise convolution is therefore inserted, as in the inverted residual block. Depthwise convolution assigns a k × k (k > 1) convolution kernel to each channel, and the features within each k × k window are combined to compute a new feature. Therefore, depthwise convolution is a good way to bring locality into the network. The depthwise convolution is introduced into the transformer feedforward network, and the calculation is [23]:
$Y^{r} = f\left(f\left(Z^{r} W_{1}^{r}\right) W_{d}\right) W_{2}^{r}$
$Y = \mathrm{Img2Seq}\left(Y^{r}\right)$
$f(\cdot)$ is the nonlinear activation function. The bias terms have been omitted for clarity. In most cases, the dimensional expansion ratio $\gamma$ is set to 4. $W_{1}^{r} \in \mathbb{R}^{d \times \gamma d \times 1 \times 1}$ is reshaped from $W_{1}$ and represents the first convolution kernel, and $W_{d} \in \mathbb{R}^{\gamma d \times 1 \times k \times k}$ is the kernel of the depthwise convolution. The Img2Seq function flattens the image feature map back into a sequence of tokens, which is used in the following self-attention layer.
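A compact sketch of this locality feedforward block is shown below; the class-token handling and the exact activation placement are simplifications, and the kernel size k = 3 is an assumption.

```python
import torch
import torch.nn as nn

class LocalityFeedForward(nn.Module):
    """Sketch of the locality feed-forward block: tokens are reshaped to a 2-D map,
    passed through 1x1 conv -> depthwise k x k conv -> 1x1 conv, and flattened back."""
    def __init__(self, dim: int, expansion: int = 4, kernel_size: int = 3):
        super().__init__()
        hidden = dim * expansion
        self.conv1 = nn.Conv2d(dim, hidden, 1)                            # W1: expand channels
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size,
                                padding=kernel_size // 2, groups=hidden)  # Wd: depthwise, adds locality
        self.conv2 = nn.Conv2d(hidden, dim, 1)                            # W2: project back
        self.act = nn.GELU()

    def forward(self, z: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # z: (batch, h*w, dim) token sequence
        zr = z.transpose(1, 2).reshape(z.size(0), -1, h, w)               # Seq2Img
        yr = self.conv2(self.act(self.dwconv(self.act(self.conv1(zr)))))
        return yr.flatten(2).transpose(1, 2)                              # Img2Seq
```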

2.3. Apollo Optimizer

The optimizer is used to update and compute the network parameters that influence model training and output in order to approach or attain the optimal values, minimizing (or maximizing) the loss function. This work employs Apollo, a quasi-Newton stochastic optimization method for non-convex problems that is both simple and computationally efficient. The approach is useful for large-scale optimization problems involving big datasets or high-dimensional parameter spaces, such as deep neural networks, and using the Apollo optimizer improves HSI classification accuracy. The method approximates the Hessian with a diagonal matrix and dynamically incorporates the curvature of the loss function; updating and storing the Hessian diagonal is as efficient as adaptive first-order optimization methods with linear complexity. The Hessian is replaced with its rectified absolute value to handle non-convexity and ensure that it is positive definite.
The Apollo optimizer formula is as follows [28]:
$\theta_{t+1} = \theta_{t} - H_{t}^{-1} g_{t}$
where $g_{t} = \nabla f(\theta_{t})$ is the gradient at $\theta_{t}$, and $H_{t} = \nabla^{2} f(\theta_{t})$ is the Hessian matrix. Replacing the Hessian with an approximation and introducing a step size gives:
$\theta_{t+1} = \theta_{t} - \eta_{t} B_{t}^{-1} g_{t}$
where $\eta_{t}$ is the step size, and $B_{t}$ is the approximation of the Hessian matrix at each parameter update. An exponential moving average (EMA) with bias correction is applied to $g_{t}$:
$m_{t+1} = \frac{\beta\left(1 - \beta^{t}\right)}{1 - \beta^{t+1}} m_{t} + \frac{1 - \beta}{1 - \beta^{t+1}} g_{t+1}$
where $0 < \beta < 1$ is the decay rate of the EMA. The update formula for $B_{t}$ is as follows:
$\Lambda \triangleq B_{t+1} - B_{t} = \frac{s_{t}^{T} y_{t} - s_{t}^{T} B_{t} s_{t}}{\left\| s_{t} \right\|_{4}^{4}}\,\mathrm{Diag}\left(s_{t}^{2}\right)$
where $y_{t} = g_{t+1} - g_{t}$, $s_{t} = \theta_{t+1} - \theta_{t}$, $s_{t}^{2}$ is the element-wise square of $s_{t}$, $\mathrm{Diag}(s_{t}^{2})$ is the diagonal matrix whose diagonal elements are the entries of $s_{t}^{2}$, and $\|\cdot\|_{4}$ denotes the 4-norm of a vector.
To remove the step-size bias, the stochastic gradient $g_{t}$ is replaced with the rescaled gradient $g'_{t} = \eta_{t} g_{t}$. Combined with the corresponding corrected difference $y'_{t} = g'_{t+1} - g'_{t} = \eta_{t} y_{t}$, the update term $\Lambda$ above is modified by replacing $y_{t}$ with $y'_{t}$:
$\Lambda = \frac{s_{t}^{T} y'_{t} - s_{t}^{T} B_{t} s_{t}}{\left\| s_{t} \right\|_{4}^{4}}\,\mathrm{Diag}\left(s_{t}^{2}\right) = -\frac{d_{t}^{T} y'_{t} + d_{t}^{T} B_{t} d_{t}}{\left\| d_{t} \right\|_{4}^{4}}\,\mathrm{Diag}\left(d_{t}^{2}\right)$
where $d_{t} = -s_{t} / \eta_{t} = B_{t}^{-1} g_{t}$ is the corrected update direction.
When calculating the update direction with $B_{t}$ as the preconditioner, its absolute value is used:
$\left| B_{t} \right| = \sqrt{B_{t}^{T} B_{t}}$
where $\sqrt{\cdot}$ denotes the positive definite square root of a matrix. Apollo uses a diagonal matrix to represent $B_{t}$. In order to deal with the non-convexity of the objective function, the absolute value of $B_{t}$ is rectified with the convexity hyperparameter $\sigma$:
$D_{t} = \mathrm{rectify}\left(B_{t}, \sigma\right) = \max\left(\left| B_{t} \right|, \sigma\right)$
where the $\mathrm{rectify}(\cdot, \sigma)$ function is analogous to the rectified linear unit (ReLU), with the threshold set to $\sigma$.
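To make the final step concrete, the sketch below applies only the rectification and the preconditioned parameter update implied by the formulas above; it deliberately omits the quasi-Newton update of $B_t$ and the other details of the Apollo reference implementation, and the default values shown are assumptions.

```python
import torch

def rectified_apollo_step(param: torch.Tensor, m: torch.Tensor, B: torch.Tensor,
                          lr: float = 4e-4, sigma: float = 1.0) -> torch.Tensor:
    """Simplified sketch of Apollo's final step: rectify the diagonal Hessian
    approximation B to D = max(|B|, sigma) and use it to precondition the
    bias-corrected gradient estimate m."""
    D = B.abs().clamp_min(sigma)          # rectify(B, sigma): positive definite diagonal
    param.data.add_(m / D, alpha=-lr)     # theta <- theta - eta * D^{-1} * m
    return param
```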

2.4. Focal Loss

Cross entropy assigns the same loss weight to every sample when measuring the prediction; however, in real HSI classification tasks, the number of samples in different categories varies substantially, as does the classification difficulty within the same category. The classification difficulty of different samples varies according to the differences between them. If the same weight is used to optimize the prediction for every instance, the predictions for difficult-to-classify data will be relatively poor. Furthermore, the classification results of certain instances are suboptimal due to the effect of mixed pixels. To enhance classification performance while paying attention to small-class samples and difficult-to-classify samples, the model must adaptively adjust the proportion of each instance in the loss according to its classification difficulty, allocating more "optimization resources" to challenging samples [29]. Focal Loss [30], an improved variant of the cross-entropy loss, is used in this work. The standard cross-entropy loss is defined as:
$\mathrm{CE}(p, y) = -\frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} y_{i,k} \log p_{i,k}$
Assuming there are $K$ label values, $y$ is the true label, $p_{i,k}$ is the predicted probability of the $k$th label for the $i$th sample, and $N$ is the number of samples. A common way to address class imbalance is to introduce a weighting factor $\alpha \in \mathbb{R}^{1 \times N}$ whose entries lie in $[0, 1]$:
$\mathrm{CE}(p, y) = -\frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} \alpha_{i}\, y_{i,k} \log p_{i,k}$
A further refinement is to add a modulating factor $\left(1 - p_{t}\right)^{\gamma}$ to the cross-entropy loss function, with a tunable focusing parameter $\gamma \geq 0$:
$\mathrm{CE}(p, y) = -\frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} \left(1 - p_{i,k}\right)^{\gamma} y_{i,k} \log p_{i,k}$
Combining the above two formulas, the Focal Loss is obtained:
$\mathrm{FL}(p, y) = -\frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} \alpha_{i} \left(1 - p_{i,k}\right)^{\gamma} y_{i,k} \log p_{i,k}$
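A minimal PyTorch sketch of this focal loss for multi-class HSI classification is shown below; the default $\gamma = 2$ is a common choice and an assumption here, since the exact hyperparameters are not restated at this point.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Sketch of the focal loss above. `alpha` may be a scalar or a per-class
    weight tensor; gamma = 2 is an assumed default."""
    def __init__(self, alpha=1.0, gamma: float = 2.0):
        super().__init__()
        self.alpha, self.gamma = alpha, gamma

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (N, K) class scores; target: (N,) integer class indices
        log_p = F.log_softmax(logits, dim=-1)
        log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)   # log p of the true class
        pt = log_pt.exp()
        alpha = self.alpha[target] if torch.is_tensor(self.alpha) else self.alpha
        loss = -alpha * (1.0 - pt) ** self.gamma * log_pt          # focal modulation
        return loss.mean()
```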

3. Experimental Results

The experiments were carried out on the Windows 10 operating system, and the classification methods were implemented using the Python language and the PyTorch library. The experimental environment is an Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz, 16 GB of memory and a GeForce GTX 1650 Ti graphics card. In order to minimize experimental error and chance, all the experimental data in this paper are the average results of 10 runs. In order to adapt to the hardware resources and reduce the amount of computation per batch during network training, the size of the input data is set to 32 × 32. All experimental networks reach a stable convergence state within 100 training epochs. In order to ensure that all methods can achieve their best classification results, this paper sets the maximum number of training epochs to 200 and adopts early stopping to avoid overfitting. We use the Apollo optimizer to learn the model parameters, with the learning rate set to 0.0004. Three indicators, overall accuracy (OA), average accuracy (AA) and the Kappa coefficient (K), are used to quantitatively evaluate the experimental results.
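For reference, the three indicators can be computed from the predicted and true labels as in the following sketch (using scikit-learn; this is not the authors' evaluation script):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def oa_aa_kappa(y_true, y_pred):
    """Overall accuracy (OA), average accuracy (AA) and Kappa coefficient (K)."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()
    per_class_acc = np.diag(cm) / cm.sum(axis=1)   # per-class recall
    aa = per_class_acc.mean()
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa, aa, kappa
```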

3.1. Hyperspectral Datasets Description

In this study, we conduct experiments on four different HSI datasets: the Indian Pines, Pavia University, Xuzhou and WHU-Hi-LongKou datasets. The datasets are described in detail below. The number of samples per class, a false-color map and a ground-truth map of each dataset are shown in Table 1, Table 2, Table 3 and Table 4.
1. Data in the Indian Pines dataset were acquired by the AVIRIS sensor over the Indian Pines agricultural test site in northwestern Indiana, USA. The original data have a total of 224 bands; 4 zero bands and 20 water absorption bands (104–108, 150–163 and 220) are removed, and the remaining 200 bands, covering 0.4 to 2.5 μm, are used for the experimental study. The spatial size is 145 × 145 pixels, with 16 different classes.
Table 1. Indian Pines Dataset Labeled Sample Counts.

No. | Class | Sample Numbers
1 | Alfalfa | 46
2 | Corn-notill | 1428
3 | Corn-mintill | 830
4 | Corn | 237
5 | Grass-pasture | 483
6 | Grass-trees | 730
7 | Grass-pasture-mowed | 28
8 | Hay-windrowed | 478
9 | Oats | 20
10 | Soybean-notill | 972
11 | Soybean-mintill | 2455
12 | Soybean-clean | 593
13 | Wheat | 205
14 | Woods | 1265
15 | Buildings-Grass-Trees-Drives | 386
16 | Stone-Steel-Towers | 93
Total |  | 10,249
2. Data in the Pavia University dataset were obtained by ROSIS-03 sensors over the University of Pavia, Pavia, Italy. The size of the dataset is 610 × 340 pixels, and the spatial resolution is 1.3 m. The original data have 115 bands with spectral coverage ranging from 0.43 to 0.86 μm. Twelve noise bands are removed, and the remaining 103 bands are available for experiments with 9 categories.
Table 2. Pavia University Dataset Labeled Sample Counts.

No. | Class | Sample Numbers
1 | Asphalt | 6631
2 | Meadows | 18,649
3 | Gravel | 2099
4 | Trees | 3064
5 | Painted metal sheets | 1345
6 | Bare Soil | 5029
7 | Bitumen | 1330
8 | Self-Blocking Bricks | 3682
9 | Shadows | 947
Total |  | 42,776
3. Data in the Xuzhou dataset [31,32] were acquired by HySpex SWIR-384 and HySpex VNIR-1600 imaging spectrometers in Xuzhou, Jiangsu Province, China, in November 2014, and the experimental area is located near a coal mining area. The size of the dataset is 500 × 260 pixels; after removing noise bands, 436 bands covering 415 to 2508 nm remain for experiments, with 9 categories.
Table 3. Xuzhou Dataset Labeled Sample Counts.

No. | Class | Sample Numbers
1 | Bareland-1 | 26,396
2 | Lakes | 4027
3 | Coals | 2783
4 | Cement | 5214
5 | Crops-1 | 13,184
6 | Trees | 2436
7 | Bareland-2 | 6990
8 | Crops-2 | 4777
9 | Red-tiles | 3070
Total |  | 68,877
4. The WHU-Hi-LongKou dataset [33,34] was acquired with an 8 mm focal length Headwall Nano-Hyperspec imaging sensor mounted on a DJI Matrice 600 Pro (DJI M600 Pro) UAV platform in Longkou Town, Hubei Province, China, in July 2018. The study area is a simple agricultural scene containing six crops: corn, cotton, sesame, broad-leaf soybean, narrow-leaf soybean and rice, with a total of nine categories. The image size is 550 × 400 pixels with 270 bands between 0.4 and 1 μm, and the spatial resolution of the UAV-borne hyperspectral imagery is about 0.463 m.
Table 4. WHU-Hi-LongKou Dataset Labeled Sample Counts.

No. | Class | Sample Numbers
1 | Corn | 34,511
2 | Cotton | 8374
3 | Sesame | 3031
4 | Broad-leaf soybean | 63,212
5 | Narrow-leaf soybean | 4151
6 | Rice | 11,854
7 | Water | 67,056
8 | Roads and houses | 7124
9 | Mixed weed | 5229
Total |  | 204,542

3.2. Comparison of the Proposed Methods with the State-of-the-Art Methods

In this section, to evaluate the classification performance of our proposed method, it is validated through several comparative experiments, including the traditional method RBF-SVM [35] and the deep-learning-related methods CNN [36], HybridSN [14], PyResNet [37], SSRN [15], SSFTT [38] and A2S2KResNet [16]. For RBF-SVM, the radial basis function is used as the kernel, and a grid search over an exponentially growing sequence is used to select the parameters. In each dataset, the number of training samples is 10% of the total number of samples. The experimental results of the proposed method are shown in Table 5, Table 6, Table 7 and Table 8. It can be seen that the OA, AA and Kappa values achieved by the proposed method are the best, with OA reaching 98.81%, 99.76%, 99.80% and 99.89% on the Indian Pines, Pavia University, Xuzhou and WHU-Hi-LongKou datasets, respectively.
To show the classification results more clearly, we present the classification maps of the eight methods on the four hyperspectral datasets in Figure 4, Figure 5, Figure 6 and Figure 7. Our proposed method clearly produces more accurate classification results than the other methods. Compared with the deep-learning-based methods on the four datasets, there are more noise scatters in the classification maps of RBF-SVM and CNN, and the HybridSN, PyResNet, SSRN, SSFTT and A2S2KResNet methods still show some misclassifications. Compared with the ground truth, it can be seen that the proposed method obtains more accurate classification results, which further proves its effectiveness in the classification of hyperspectral data.

3.3. Ablation Experiments

We performed ablation experiments on the Indian Pines dataset to verify the effectiveness of the proposed method. The experimental results are shown in Table 9.
When only the A2S2KResNet model is used to classify the hyperspectral data, its OA on the Indian Pines dataset is only 98.51%. When A2S2KResNet is combined with ViT (A2S2KResNet + ViT), the OA is 98.61%, which shows that the ViT model can slightly improve the classification performance. When the A2S2KResNet + ViT model is combined with the Focal Loss function or the Apollo optimizer, the OAs are 98.63% and 98.75%, respectively, showing that the Focal Loss function and the Apollo optimizer are each slightly helpful to the A2S2KResNet + ViT model. When A2S2KResNet + ViT is combined with both the Focal Loss function and the Apollo optimizer, which is the HSI classification model proposed in this paper, it achieves the highest classification accuracy on the Indian Pines dataset, which further proves the effectiveness of our method in improving the classification performance of HSI.

4. Discussions

This paper designed an improved HSI classification method. The study proposes an improved ViT model that introduces a re-attention mechanism and a locality mechanism. The improved ViT model is then combined with the attention-based adaptive spectral–spatial kernel, which systematically combines bands from shallow to deep, enables neurons to adaptively adjust the receptive field size and successfully handles the long-range dependence of the spectrum, making full use of the spectral–spatial information and local–global information in HSI to improve the classification performance. The Focal Loss function is used to increase the loss weight of small-class and hard-to-classify samples in the HSI data. Furthermore, Apollo, a quasi-Newton method for nonconvex stochastic optimization, is introduced to dynamically incorporate the curvature of the loss function by approximating the Hessian via a diagonal matrix.
As can be seen from Table 5, Table 6, Table 7 and Table 8, the classical RBF-SVM method and several deep-learning-based methods, including CNN, HybridSN, PyResNet, SSRN, SSFTT and A2S2KResNet, are considered for comparison. All experimental results show that the proposed method achieves the best performance on all datasets and obtains superior classification accuracy on the four popular HSI datasets. Taking the Indian Pines dataset as an example, the OA, AA and K of the proposed method are improved by 18.8%, 19.48% and 21.56%, respectively, compared with RBF-SVM. Furthermore, compared with CNN, the OA of the proposed method is improved by 20.5%, 15.5%, 9.24% and 8.47% on the Indian Pines, Pavia University, Xuzhou and WHU-Hi-LongKou datasets, respectively. For the Pavia University dataset, the proposed method improves the OA by 8.05%, 5.96%, 0.06%, 0.14% and 0.22% compared with HybridSN, PyResNet, SSRN, SSFTT and A2S2KResNet, respectively.
Furthermore, to verify the effectiveness of the proposed method on the different HSI datasets, Figure 8, Figure 9, Figure 10 and Figure 11 show the classification results of the different methods for each class.
We can see that our method achieves the highest classification accuracy for almost every class on the four different datasets. For example, for the Oats category in the Indian Pines dataset, our method improves the accuracy by 3.17% over the best of the other methods. The effectiveness of the method is thus demonstrated on datasets from different application domains: urban land cover classification is realized on the Pavia University dataset, mineral classification on the Xuzhou dataset, and fine crop classification on the Indian Pines and WHU-Hi-LongKou datasets.

5. Conclusions

In this study, an attention-based adaptive spectral–spatial kernel combined with an improved ViT network architecture is proposed to classify HSI images. For the approximately continuous spectra of HSI images, the proposed method fully utilizes the local and global information of the data. Compared with classical methods and several deep-learning-based methods, the proposed method achieves excellent performance on four different datasets covering urban land cover classification, crop classification and mineral classification. In future research, we will study strategies to improve the transformer architecture to make it more suitable for HSI classification, and to build lightweight networks that reduce network complexity while maintaining performance.

Author Contributions

Conceptualization, A.W., S.X. and H.W.; methodology, S.X., H.W., A.W. and Y.I.; software, validation, S.X. and Y.Z.; writing—review and editing, H.W., A.W. and Y.I.; supervision, Y.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under Grant NSFC-61671190 and Reserved Leaders of Heilongjiang Provincial Leading Talent Echelon 2021.

Data Availability Statement

Acknowledgments

We thank Kaiyuan Jiang for his valuable comments and discussion. Iwahori's research is supported by JSPS Grant-in-Aid for Scientific Research (C) (20K11873) and a Chubu University Grant.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ibrahim, A.; Franz, B.; Ahmad, Z.; Healy, R.; Knobelspiesse, K. Atmospheric correction for hyperspectral ocean color retrieval with application to the Hyperspectral Imager for the Coastal Ocean (HICO). Remote Sens. Environ. Interdiscip. J. 2018, 204, 60–75.
2. Bhosle, K.; Musande, V. Evaluation of deep learning CNN model for land use land cover classification and crop identification using hyperspectral remote sensing images. J. Indian Soc. Remote Sens. 2019, 47, 1949–1958.
3. Wang, B.; Shao, Q.; Song, D. A Spectral-Spatial Features Integrated Network for Hyperspectral Detection of Marine Oil Spill. Remote Sens. 2021, 13, 1568.
4. Gao, A.F.; Rasmussen, B.; Kulits, P.; Scheller, E.L.; Greenberger, R.; Ehlmann, B.L. Generalized Unsupervised Clustering of Hyperspectral Images of Geological Targets in the Near Infrared. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 4289–4298.
5. Bo, C.J.; Lu, H.C.; Wang, D. Spectral-spatial K-Nearest Neighbor approach for hyperspectral image classification. Multimed. Tools Appl. 2018, 77, 10419–10436.
6. Samadzadegan, F.; Hasani, H.; Schenk, T. Simultaneous feature selection and SVM parameter determination in classification of hyperspectral imagery using Ant Colony Optimization. Remote Sens. 2012, 38, 139–156.
7. Li, M.; Zhang, N.; Pan, B.; Xie, S.; Wu, X.; Shi, Z. Hyperspectral Image Classification Based on Deep Forest and Spectral-Spatial Cooperative Feature. In Proceedings of the International Conference on Image and Graphics, Shanghai, China, 13–15 September 2017; Volume 10668, pp. 325–336.
8. Kayabol, K. Bayesian Gaussian mixture model for spatial-spectral classification of hyperspectral images. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 1805–1809.
9. Wang, M.; Gao, K.; Wang, L.; Miu, X. A Novel Hyperspectral Classification Method Based on C5.0 Decision Tree of Multiple Combined Classifiers. In Proceedings of the 2012 Fourth International Conference on Computational and Information Sciences, Chongqing, China, 17–19 August 2012; pp. 373–376.
10. Lu, Y.; Wang, L.; Shi, Y. Classification of hyperspectral image with small-sized samples based on spatial-spectral feature enhancement. J. Harbin Eng. Univ. 2022, 43, 436–443.
11. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
12. Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98.
13. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
14. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281.
15. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858.
16. Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral–Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7831–7843.
17. Alipour-Fard, T.; Paoletti, M.E.; Haut, J.M.; Arefi, H.; Plaza, J.; Plaza, A. Multibranch Selective Kernel Networks for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1089–1093.
18. He, X.; Chen, Y.; Lin, Z. Spatial-Spectral Transformer for Hyperspectral Image Classification. Remote Sens. 2021, 13, 498.
19. Hong, D. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
20. He, J.; Zhao, L.; Yang, H.; Zhang, M.; Li, W. HSI-BERT: Hyperspectral Image Classification Using the Bidirectional Encoder Representation from Transformers. IEEE Trans. Geosci. Remote Sens. 2020, 58, 165–178.
21. Han, K.; Xiao, A.; Wu, E.; Guo, J.; Xu, C.; Wang, Y. Transformer in Transformer. arXiv 2021, arXiv:2103.00112.
22. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jegou, H. Training data-efficient image transformers & distillation through attention. arXiv 2020, arXiv:2012.12877.
23. Li, Y.; Zhang, K.; Cao, J.; Timofte, R.; Van Gool, L. LocalViT: Bringing Locality to Vision Transformers. arXiv 2021, arXiv:2104.05707.
24. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
26. Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer normalization. arXiv 2016, arXiv:1607.06450.
27. Zhou, D.; Kang, B.; Jin, X.; Yang, L.; Lian, X.; Jiang, Z.; Hou, Q.; Feng, J. DeepViT: Towards Deeper Vision Transformer. arXiv 2021, arXiv:2103.11886.
28. Ma, X. Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization. arXiv 2020, arXiv:2009.13586.
29. Cui, Y.; Xia, J.; Wang, Z.; Gao, S.; Wang, L. Lightweight Spectral–Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
30. Liang, Y.; Zhao, Z.; Wang, H. Unbalanced Geologic Body Classification of Hyperspectral Data Based on Squeeze and Excitation Networks at Tianshan Area. In Proceedings of the IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 6981–6984.
31. Tan, K.; Wu, F.; Du, Q.; Du, P.; Chen, Y. A Parallel Gaussian-Bernoulli Restricted Boltzmann Machine for Mining Area Classification with Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 627–636.
32. Wang, X.; Tan, K.; Du, Q.; Chen, Y.; Du, P. Caps-TripleGAN: GAN-Assisted CapsNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7232–7245.
33. Zhong, Y.; Hu, X.; Luo, C.; Wang, X.; Zhao, J.; Zhang, L. WHU-Hi: UAV-borne hyperspectral with high spatial resolution (H2) benchmark datasets and classifier for precise crop identification based on deep convolutional neural network with CRF. Remote Sens. Environ. 2020, 250, 112012.
34. Zhong, Y.; Wang, X.; Xu, Y.; Wang, S.; Jia, T.; Hu, X.; Zhao, J.; Wei, L.; Zhang, L. Mini-UAV-borne hyperspectral remote sensing: From observation and processing to applications. IEEE Geosci. Remote Sens. Mag. 2018, 6, 46–62.
35. Benediktsson, J.A.; Palmason, J.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491.
36. Chen, Y.; Zhu, L.; Ghamisi, P.; Jia, X.; Li, G.; Tang, L. Hyperspectral Images Classification with Gabor Filtering and Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2355–2359.
37. Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R. Deep pyramidal residual networks for spectral-spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 740–754.
38. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral-Spatial Feature Tokenization Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214.
Figure 1. The proposed network structure for HSI classification.
Figure 2. The structure of selective kernel convolution.
Figure 3. Transformer encoder.
Figure 4. The classification results on the Indian Pines dataset. (a) Ground-truth map; (b) RBF-SVM; (c) CNN; (d) HybridSN; (e) PyResNet; (f) SSRN; (g) SSFTT; (h) A2S2KResNet; (i) Proposed.
Figure 5. The classification results on the Pavia University dataset. (a) Ground-truth map; (b) RBF-SVM; (c) CNN; (d) HybridSN; (e) PyResNet; (f) SSRN; (g) SSFTT; (h) A2S2KResNet; (i) Proposed.
Figure 6. The classification results on the Xuzhou dataset. (a) Ground-truth map; (b) RBF-SVM; (c) CNN; (d) HybridSN; (e) PyResNet; (f) SSRN; (g) SSFTT; (h) A2S2KResNet; (i) Proposed.
Figure 7. The classification results on the WHU-Hi-LongKou dataset. (a) Ground-truth map; (b) RBF-SVM; (c) CNN; (d) HybridSN; (e) PyResNet; (f) SSRN; (g) SSFTT; (h) A2S2KResNet; (i) Proposed.
Figure 8. Classification results comparison for each class on the Indian Pines dataset.
Figure 9. Classification results comparison for each class on the Pavia University dataset.
Figure 10. Classification results comparison for each class on the Xuzhou dataset.
Figure 11. Classification results comparison for each class on the WHU-Hi-LongKou dataset.
Table 5. Classification results of all methods on Indian Pines dataset.
Table 5. Classification results of all methods on Indian Pines dataset.
Class | RBF-SVM | CNN | HybridSN | PyResNet | SSRN | SSFTT | A2S2KResNet | Proposed
1 | 65.07 ± 9.65 | 81.93 ± 7.74 | 82.99 ± 25.02 | 91.31 ± 3.43 | 85.38 ± 2.10 | 99.76 ± 0.73 | 91.62 ± 2.05 | 98.70 ± 3.16
2 | 71.34 ± 2.01 | 74.70 ± 9.07 | 89.21 ± 4.71 | 87.74 ± 7.82 | 98.07 ± 1.64 | 94.85 ± 1.15 | 98.67 ± 1.05 | 98.57 ± 1.13
3 | 75.53 ± 2.58 | 59.56 ± 6.96 | 89.13 ± 3.73 | 80.27 ± 8.06 | 97.09 ± 1.29 | 99.24 ± 0.47 | 98.81 ± 0.65 | 98.96 ± 0.67
4 | 61.18 ± 6.84 | 45.37 ± 5.60 | 89.50 ± 6.70 | 82.60 ± 5.87 | 97.95 ± 1.97 | 99.30 ± 1.08 | 99.37 ± 0.87 | 98.87 ± 1.00
5 | 88.76 ± 2.97 | 89.37 ± 4.67 | 96.73 ± 4.55 | 96.82 ± 3.32 | 96.68 ± 1.29 | 98.78 ± 0.94 | 98.97 ± 0.80 | 98.77 ± 0.83
6 | 89.16 ± 1.77 | 94.88 ± 3.44 | 97.49 ± 2.12 | 92.57 ± 5.47 | 97.46 ± 4.53 | 99.37 ± 0.39 | 98.86 ± 1.01 | 99.35 ± 0.67
7 | 85.05 ± 9.27 | 81.91 ± 13.10 | 82.27 ± 9.19 | 93.94 ± 6.50 | 70.00 ± 5.83 | 98.40 ± 3.20 | 97.93 ± 6.21 | 97.02 ± 7.70
8 | 90.32 ± 1.50 | 97.83 ± 1.22 | 95.70 ± 3.99 | 92.27 ± 3.57 | 98.50 ± 1.97 | 99.79 ± 0.45 | 100.00 ± 0.00 | 99.92 ± 0.12
9 | 71.14 ± 13.65 | 57.26 ± 12.84 | 70.23 ± 9.42 | 92.85 ± 1.53 | 74.27 ± 10.90 | 67.22 ± 7.29 | 81.70 ± 10.54 | 96.02 ± 5.45
10 | 75.74 ± 2.60 | 68.54 ± 4.98 | 88.03 ± 5.51 | 85.21 ± 7.20 | 96.94 ± 1.54 | 97.54 ± 0.89 | 97.90 ± 1.24 | 97.51 ± 1.09
11 | 77.97 ± 1.29 | 88.46 ± 3.55 | 91.62 ± 2.26 | 89.28 ± 6.57 | 99.07 ± 0.61 | 99.22 ± 0.35 | 99.16 ± 0.42 | 99.18 ± 0.50
12 | 73.24 ± 3.75 | 64.95 ± 10.02 | 87.51 ± 5.96 | 86.97 ± 8.25 | 98.23 ± 1.62 | 95.96 ± 1.09 | 98.33 ± 1.25 | 98.59 ± 1.41
13 | 90.80 ± 4.36 | 98.47 ± 1.05 | 97.01 ± 3.26 | 98.36 ± 1.52 | 98.03 ± 2.21 | 98.86 ± 0.71 | 99.17 ± 1.15 | 99.46 ± 0.86
14 | 91.74 ± 0.88 | 98.20 ± 0.35 | 97.76 ± 0.91 | 94.17 ± 4.02 | 99.27 ± 0.65 | 99.38 ± 0.88 | 99.25 ± 0.51 | 99.15 ± 0.60
15 | 74.41 ± 6.25 | 53.50 ± 5.01 | 94.66 ± 3.60 | 91.22 ± 3.92 | 98.85 ± 1.10 | 98.01 ± 1.21 | 98.58 ± 1.06 | 98.65 ± 1.13
16 | 98.16 ± 2.27 | 93.67 ± 4.80 | 92.23 ± 7.19 | 95.80 ± 4.88 | 88.71 ± 14.42 | 91.69 ± 5.79 | 94.46 ± 4.95 | 94.33 ± 3.67
OA (%) | 80.01 ± 0.66 | 78.31 ± 2.82 | 92.08 ± 1.72 | 87.27 ± 4.82 | 97.97 ± 0.58 | 98.07 ± 0.39 | 98.51 ± 0.26 | 98.81 ± 0.32
AA (%) | 79.35 ± 2.40 | 78.04 ± 1.78 | 90.13 ± 5.38 | 90.71 ± 3.47 | 86.72 ± 7.45 | 96.08 ± 1.44 | 97.05 ± 1.35 | 98.83 ± 0.63
K × 100 | 77.09 ± 0.77 | 75.04 ± 3.11 | 90.96 ± 1.97 | 85.49 ± 5.37 | 97.69 ± 0.67 | 97.85 ± 0.50 | 98.58 ± 0.30 | 98.65 ± 0.36
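The OA, AA, and K × 100 entries in Tables 5–9 are the standard overall accuracy, average (per-class) accuracy, and Cohen's kappa coefficient scaled by 100. For readers who wish to reproduce these values, the snippet below is a minimal NumPy sketch of how such metrics are obtained from a confusion matrix; the function name and the toy matrix are illustrative and are not taken from the authors' code.

```python
import numpy as np

def accuracy_metrics(conf):
    """Compute OA, AA and Cohen's kappa (all x100) from a confusion matrix.

    conf[i, j] = number of samples whose true class is i and predicted class is j.
    """
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                          # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)     # per-class accuracy (recall)
    aa = per_class.mean()                            # average accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                   # Cohen's kappa
    return 100 * oa, 100 * aa, 100 * kappa

# Toy 3-class example (hypothetical counts, for illustration only)
cm = [[50, 2, 1],
      [3, 45, 2],
      [0, 4, 46]]
print(["%.2f" % v for v in accuracy_metrics(cm)])
```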
Table 6. Classification results of all methods on Pavia University dataset.
Class | RBF-SVM | CNN | HybridSN | PyResNet | SSRN | SSFTT | A2S2KResNet | Proposed
1 | 81.26 ± 5.08 | 96.14 ± 1.60 | 87.42 ± 10.30 | 94.44 ± 3.69 | 99.40 ± 1.31 | 99.72 ± 0.21 | 98.95 ± 1.52 | 99.72 ± 0.14
2 | 84.53 ± 3.81 | 96.67 ± 0.99 | 99.57 ± 0.28 | 96.49 ± 2.42 | 99.97 ± 0.03 | 99.98 ± 0.01 | 99.98 ± 0.03 | 99.98 ± 0.02
3 | 56.56 ± 16.17 | 73.84 ± 11.76 | 72.11 ± 16.92 | 88.58 ± 11.05 | 98.96 ± 2.16 | 98.97 ± 0.85 | 99.05 ± 1.40 | 99.34 ± 0.85
4 | 94.34 ± 3.50 | 75.32 ± 13.62 | 81.59 ± 11.63 | 98.86 ± 1.67 | 99.82 ± 0.26 | 98.70 ± 0.42 | 99.66 ± 0.72 | 99.71 ± 0.22
5 | 95.38 ± 3.40 | 99.75 ± 0.18 | 79.47 ± 8.35 | 98.77 ± 1.83 | 99.89 ± 0.13 | 99.85 ± 0.26 | 99.94 ± 0.11 | 99.76 ± 0.25
6 | 80.66 ± 7.54 | 79.30 ± 5.72 | 98.88 ± 0.75 | 92.77 ± 7.60 | 99.97 ± 0.04 | 99.99 ± 0.01 | 99.92 ± 0.09 | 99.85 ± 0.22
7 | 69.13 ± 11.04 | 68.22 ± 14.45 | 72.33 ± 15.73 | 95.61 ± 4.39 | 99.98 ± 0.06 | 99.55 ± 0.38 | 99.90 ± 0.31 | 99.53 ± 0.52
8 | 71.16 ± 6.24 | 80.58 ± 3.89 | 78.22 ± 9.17 | 89.20 ± 3.55 | 98.72 ± 0.87 | 99.02 ± 0.63 | 98.72 ± 0.83 | 99.13 ± 0.60
9 | 99.94 ± 0.07 | 97.25 ± 5.10 | 66.95 ± 16.17 | 99.12 ± 0.55 | 99.87 ± 0.18 | 96.63 ± 1.33 | 99.95 ± 0.09 | 99.31 ± 0.54
OA (%) | 82.06 ± 2.78 | 87.95 ± 3.47 | 91.71 ± 8.31 | 93.80 ± 5.35 | 99.70 ± 0.32 | 99.62 ± 0.07 | 99.62 ± 0.33 | 99.76 ± 0.06
AA (%) | 79.22 ± 5.87 | 85.23 ± 4.22 | 76.28 ± 2.29 | 94.87 ± 1.89 | 99.62 ± 0.35 | 99.15 ± 0.19 | 99.56 ± 0.34 | 99.60 ± 0.09
K × 100 | 75.44 ± 4.26 | 84.19 ± 4.28 | 88.83 ± 11.41 | 91.70 ± 7.23 | 99.61 ± 0.42 | 99.50 ± 0.10 | 99.50 ± 0.44 | 99.69 ± 0.08
Table 7. Classification results of all methods on Xuzhou dataset.
Class | RBF-SVM | CNN | HybridSN | PyResNet | SSRN | SSFTT | A2S2KResNet | Proposed
1 | 96.38 ± 0.32 | 97.16 ± 1.20 | 99.36 ± 0.25 | 94.92 ± 1.27 | 99.83 ± 0.09 | 99.59 ± 0.21 | 99.47 ± 0.51 | 99.84 ± 0.03
2 | 99.81 ± 0.15 | 99.06 ± 0.89 | 99.49 ± 0.46 | 99.99 ± 1.37 | 99.99 ± 0.01 | 99.98 ± 0.03 | 100.00 ± 0.00 | 99.98 ± 0.03
3 | 93.71 ± 0.69 | 87.15 ± 2.51 | 96.94 ± 1.41 | 95.55 ± 4.04 | 99.71 ± 0.16 | 99.54 ± 0.25 | 99.16 ± 0.58 | 99.49 ± 0.22
4 | 97.31 ± 0.47 | 85.76 ± 8.60 | 98.39 ± 0.61 | 93.85 ± 1.24 | 99.70 ± 0.54 | 99.92 ± 0.08 | 99.86 ± 0.02 | 99.94 ± 0.05
5 | 94.64 ± 0.49 | 94.03 ± 1.37 | 98.92 ± 0.35 | 97.34 ± 4.50 | 99.52 ± 0.36 | 99.56 ± 0.20 | 99.64 ± 0.29 | 99.75 ± 0.10
6 | 88.72 ± 1.11 | 62.30 ± 3.70 | 96.47 ± 0.99 | 84.85 ± 2.88 | 99.20 ± 0.49 | 99.64 ± 0.17 | 99.03 ± 0.37 | 99.46 ± 0.22
7 | 87.00 ± 0.82 | 73.31 ± 5.18 | 98.43 ± 0.56 | 88.63 ± 2.96 | 99.59 ± 0.42 | 99.90 ± 0.05 | 99.64 ± 0.13 | 99.77 ± 0.14
8 | 98.18 ± 0.27 | 93.97 ± 2.32 | 99.28 ± 0.44 | 98.36 ± 2.55 | 99.77 ± 0.15 | 99.94 ± 0.13 | 99.70 ± 0.13 | 99.77 ± 0.23
9 | 97.67 ± 0.61 | 98.63 ± 0.36 | 99.38 ± 0.35 | 98.58 ± 2.81 | 99.89 ± 0.12 | 99.62 ± 0.16 | 99.88 ± 0.17 | 99.88 ± 0.12
OA (%) | 95.16 ± 0.13 | 90.56 ± 1.10 | 98.91 ± 0.16 | 95.44 ± 0.09 | 99.71 ± 0.13 | 99.68 ± 0.06 | 99.58 ± 0.23 | 99.80 ± 0.02
AA (%) | 94.82 ± 0.20 | 87.93 ± 1.16 | 98.52 ± 0.29 | 94.67 ± 0.11 | 99.69 ± 0.16 | 99.72 ± 0.04 | 99.60 ± 0.18 | 99.77 ± 0.04
K × 100 | 93.84 ± 0.16 | 88.04 ± 1.36 | 98.61 ± 0.21 | 93.89 ± 1.13 | 99.64 ± 0.17 | 99.60 ± 0.09 | 99.47 ± 0.30 | 99.75 ± 0.02
Table 8. Classification results of all methods on WHU-Hi-LongKou dataset.
Class | RBF-SVM | CNN | HybridSN | PyResNet | SSRN | SSFTT | A2S2KResNet | Proposed
1 | 99.52 ± 0.68 | 96.74 ± 1.64 | 89.91 ± 9.97 | 99.97 ± 0.02 | 99.97 ± 0.04 | 99.96 ± 0.02 | 99.92 ± 0.12 | 99.96 ± 0.04
2 | 94.13 ± 3.99 | 71.28 ± 3.89 | 80.25 ± 10.79 | 99.70 ± 0.07 | 99.79 ± 0.15 | 99.92 ± 0.08 | 99.79 ± 0.24 | 99.82 ± 0.12
3 | 99.09 ± 0.29 | 42.95 ± 2.54 | 78.29 ± 15.58 | 99.81 ± 0.10 | 99.91 ± 0.15 | 99.93 ± 0.14 | 99.98 ± 0.03 | 99.99 ± 0.02
4 | 98.87 ± 0.09 | 98.19 ± 0.46 | 89.79 ± 9.93 | 99.84 ± 0.06 | 99.84 ± 0.06 | 99.89 ± 0.04 | 99.86 ± 0.22 | 99.95 ± 0.03
5 | 92.46 ± 0.09 | 54.82 ± 2.50 | 72.38 ± 11.83 | 99.21 ± 0.40 | 99.32 ± 0.53 | 99.56 ± 0.30 | 96.43 ± 9.03 | 99.42 ± 0.29
6 | 99.76 ± 0.89 | 92.92 ± 3.53 | 87.66 ± 11.52 | 99.84 ± 0.06 | 99.96 ± 0.04 | 99.86 ± 0.13 | 99.17 ± 2.36 | 99.92 ± 0.07
7 | 99.98 ± 0.01 | 99.89 ± 0.10 | 93.20 ± 0.14 | 99.98 ± 0.01 | 99.99 ± 0.01 | 99.97 ± 0.01 | 99.98 ± 0.01 | 99.98 ± 0.02
8 | 97.40 ± 0.34 | 83.06 ± 9.43 | 78.82 ± 10.47 | 95.11 ± 0.39 | 98.92 ± 0.06 | 98.41 ± 0.53 | 98.45 ± 1.21 | 98.92 ± 0.55
9 | 97.63 ± 0.50 | 66.60 ± 4.38 | 74.76 ± 6.03 | 98.91 ± 0.31 | 98.30 ± 0.08 | 98.07 ± 0.69 | 99.03 ± 0.48 | 98.32 ± 0.89
OA (%) | 98.95 ± 0.03 | 91.42 ± 0.94 | 90.88 ± 9.58 | 99.63 ± 0.57 | 99.83 ± 0.03 | 99.81 ± 0.02 | 99.69 ± 0.47 | 99.89 ± 0.03
AA (%) | 97.65 ± 0.01 | 77.16 ± 1.57 | 81.67 ± 7.95 | 99.15 ± 1.14 | 99.55 ± 0.11 | 99.50 ± 0.08 | 99.18 ± 1.28 | 99.67 ± 0.11
K × 100 | 98.68 ± 0.04 | 88.91 ± 1.18 | 86.85 ± 9.20 | 99.52 ± 0.75 | 99.78 ± 0.04 | 99.76 ± 0.03 | 99.60 ± 0.62 | 99.86 ± 0.04
Table 9. Comparison of ablation experiments on Indian Pines dataset.
Class | A2S2KResNet | A2S2KResNet + ViT | A2S2KResNet + ViT + Loss | A2S2KResNet + ViT + Apollo | Proposed
1 | 91.62 ± 2.05 | 98.21 ± 3.30 | 98.74 ± 2.20 | 97.66 ± 3.43 | 98.70 ± 3.16
2 | 98.67 ± 1.05 | 98.61 ± 1.02 | 98.60 ± 0.67 | 98.36 ± 7.82 | 98.57 ± 1.13
3 | 98.81 ± 0.65 | 98.52 ± 1.24 | 98.97 ± 0.86 | 98.74 ± 8.06 | 98.96 ± 0.67
4 | 99.37 ± 0.87 | 98.97 ± 1.23 | 97.62 ± 2.16 | 98.71 ± 5.87 | 98.87 ± 1.00
5 | 98.97 ± 0.80 | 98.75 ± 1.58 | 98.29 ± 1.37 | 99.00 ± 3.32 | 98.77 ± 0.83
6 | 98.86 ± 1.01 | 98.87 ± 0.71 | 99.04 ± 0.77 | 99.13 ± 5.47 | 99.35 ± 0.67
7 | 97.93 ± 6.21 | 94.65 ± 7.23 | 96.91 ± 5.59 | 93.24 ± 6.50 | 97.02 ± 7.70
8 | 100.00 ± 0.00 | 99.82 ± 0.26 | 99.14 ± 2.33 | 99.30 ± 3.57 | 99.92 ± 0.12
9 | 81.70 ± 10.54 | 83.00 ± 12.20 | 84.24 ± 16.59 | 92.18 ± 1.53 | 96.02 ± 5.45
10 | 97.90 ± 1.24 | 97.40 ± 1.21 | 97.60 ± 1.55 | 97.88 ± 7.20 | 97.51 ± 1.09
11 | 99.16 ± 0.42 | 99.17 ± 0.39 | 99.16 ± 0.35 | 98.93 ± 6.57 | 99.18 ± 0.50
12 | 98.33 ± 1.25 | 97.86 ± 1.29 | 98.11 ± 1.16 | 98.02 ± 8.25 | 98.59 ± 1.41
13 | 99.17 ± 1.15 | 99.34 ± 1.06 | 98.52 ± 1.36 | 99.16 ± 1.52 | 99.46 ± 0.86
14 | 99.25 ± 0.51 | 99.36 ± 0.73 | 99.00 ± 0.67 | 98.99 ± 4.02 | 99.15 ± 0.60
15 | 98.58 ± 1.06 | 98.23 ± 2.01 | 98.12 ± 0.82 | 98.54 ± 3.92 | 98.65 ± 1.13
16 | 94.46 ± 4.95 | 95.50 ± 4.17 | 96.16 ± 3.64 | 95.34 ± 4.88 | 94.33 ± 3.67
OA (%) | 98.51 ± 0.26 | 98.61 ± 0.40 | 98.63 ± 0.33 | 98.75 ± 0.33 | 98.81 ± 0.32
AA (%) | 97.05 ± 1.35 | 96.62 ± 1.37 | 97.39 ± 1.37 | 97.70 ± 1.03 | 98.83 ± 0.63
K × 100 | 98.58 ± 0.30 | 98.30 ± 0.46 | 98.42 ± 0.37 | 98.43 ± 0.38 | 98.65 ± 0.36
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
