Article

Attention-Embedded Triple-Fusion Branch CNN for Hyperspectral Image Classification

School of Information Engineering, Northwest A&F University, Xi’an 712100, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 2150; https://doi.org/10.3390/rs15082150
Submission received: 9 March 2023 / Revised: 3 April 2023 / Accepted: 14 April 2023 / Published: 19 April 2023

Abstract

Hyperspectral imaging (HSI) is widely used in various fields owing to its rich spectral information. Nonetheless, the high dimensionality of HSI and the limited number of labeled samples remain significant obstacles to HSI classification technology. To alleviate the above problems, we propose an attention-embedded triple-branch fusion convolutional neural network (AETF-Net) for HSI classification. The network consists of a spectral attention branch, a spatial attention branch, and a multi-attention fusion branch (MAFB). The spectral branch introduces cross-channel attention to alleviate the band redundancy problem in high dimensions, while the spatial branch preserves the location information of features and eliminates interfering image elements by a bi-directional spatial attention module. These pre-extracted spectral and spatial attention features are then embedded into a novel MAFB built with a large-kernel decomposition technique. The proposed AETF-Net achieves multi-attention feature reuse and extracts more representative and discriminative features. Experimental results on three well-known datasets demonstrate the superiority of the proposed AETF-Net.

1. Introduction

Hyperspectral remote sensing can obtain the intrinsic characteristics and change patterns of objects by recording the electromagnetic wave characteristics without direct contact, making it a cutting-edge remote sensing technology [1]. Hyperspectral imaging (HSI) can record the spatial information under each waveband and the spectral information under the same position. Therefore, it has excellent application prospects in many fields, such as agriculture and forestry [2,3,4,5], ocean [6], disaster [7], mineral exploration [8,9], and urban construction [10,11,12]. HSI classification assigns category labels to each pixel based on sample features, which is increasingly becoming a key technology in hyperspectral remote sensing.
In the first two decades of the evolution of HSI classification, many machine learning algorithms based on hand-crafted features were proposed from the perspective of learning spectral and spatial features, for instance, the spectral angle mapper [13], support vector machines [14], sparse representation [15], manifold learning [16], Markov Random Fields [17], Morphological Profiles [18], Random Forests [19], etc. However, due to the significant variability among different objects, classification algorithms based on manual feature extraction struggle to fit an optimal set of features for different objects and lack sufficient robustness and discriminability.
Recently, studies on HSI classification have focused heavily on deep learning (DL) technology, since it can adaptively extract features from the input data in a hierarchical manner [20,21,22]. This allows DL to learn data features in both spectral and spatial dimensions without requiring prior statistical knowledge of the input data. Chen et al. [23] first introduced DL to HSI classification by applying a deep Stacked Auto-Encoder (SAE). Similarly, in [24], the feasibility of using a deep belief network (DBN) for HSI classification was investigated. However, SAE and DBN can suffer from decreased performance, as they use complex structures to modify the input data [25]. Researchers later discovered that Convolutional Neural Networks (CNNs) [26] could effectively extract multi-level features from large samples, thus eliminating the need for complicated feature extraction techniques. Hu et al. [27] first applied a one-dimensional CNN (1D-CNN) to HSI classification and obtained higher classification accuracy than many conventional machine learning techniques. Nevertheless, a 1D-CNN has limited ability to capture spatial relationships between features in the input data. In contrast, a two-dimensional CNN (2D-CNN) [28] learns how pixels in an image are related, allowing it to capture complex spatial patterns that are important for accurate image classification. However, it may struggle to capture the spectral relationships between features in the input data, as it treats the different spectral bands only as separate channels of the image. To combine the advantages of the 1D-CNN and 2D-CNN, researchers have attempted various methods. Yu et al. [29] utilized a 1D-CNN to extract spectral features and a 2D-CNN to extract spatial-spectral features, resulting in highly accurate classification. Conversely, the three-dimensional CNN (3D-CNN) [30] was proposed to operate directly on 3D HSI data and is capable of learning both spatial and spectral relationships between features in the input data, compensating for the weaknesses of 2D-CNNs. Nowadays, CNNs have gained significant attention and popularity among scholars [31], as evidenced by recent studies. Zhong et al. [32] proposed a spectral-spatial residual network (SSRN) that combines 3D-CNNs for extracting discriminative features. Li et al. [33] developed a double-branch dual-attention network (DBDA) that integrates spectral and spatial attention mechanisms for refining extracted feature maps. Yan et al. [34] designed a dual-branch network structure to relieve the issue of insufficient samples in HSI classification by incorporating transfer learning. Through these novel network structures, both [33] and [34] investigated how multimodal features can be used to improve HSI task performance. Although CNNs are well adapted to the high-dimensional and complex features of HSIs, they incur high computational complexity, and their classification accuracy can suffer when annotated samples are insufficient. Furthermore, CNNs may require more refined feature extractors for specific tasks, and CNN models are prone to overfitting on small samples.
In supervised learning, sufficient labeled samples are required to provide a foundation for the classification algorithm [35]. However, labeling the samples pixel by pixel is time consuming and costly. Thus, the limited number of labeled samples and the high-dimensional data can lead to the Hughes phenomenon [36], a type of model overfitting caused by insufficient training data, which heavily affects classification accuracy. Zhang et al. [37] proposed a lightweight 3D network based on transfer learning to address the sample-limited problem. Sellami et al. [38] proposed a semi-supervised network with adaptive band selection to reduce the dimensional redundancy and alleviate the Hughes phenomenon. Although deeper networks can extract richer features to achieve high classification accuracy, a problem arises when the number of training samples is vastly smaller than the data dimensionality, leading to the explosive growth of parameters and vanishing gradients during the training process. Li et al. [39] designed a depth-wise separable Res-Net framework, which separates spectral and spatial information in HSI and reduces network size to avoid overfitting issues. CNNs have shown remarkable performance in HSI classification tasks. Researchers have proposed various techniques, including transfer learning, adaptive band selection, and depth-wise separable networks, to improve the classification accuracy and robustness of HSI small-sample classification models. However, convolution operations tend to assign equal weights to all pixels or bands in an image, despite the fact that some pixels and bands may be more beneficial for classification than others, or may even interfere with classification.
Currently, the introduction of attention mechanisms provides a solution to the aforementioned issue [40,41,42,43]. The attention mechanism draws inspiration from the visual focus region of the human brain, which aids the network in concentrating on significant regions while ignoring irrelevant ones and performing adaptive weight fitting on features. This enhances the efficiency of feature extraction in models and reduces the need for unnecessary computation and data preprocessing, thereby making it a promising approach for HSI classification. Yu et al. [44] proposed a spatial-spectral dense CNN framework based on a feedback attention mechanism to extract high-level semantic features. Roy et al. [45] proposed an end-to-end trained adaptive spectral-spatial kernel improved residual network (A2S2K) with an attention-based mechanism to capture discriminative features for HSI classification. Li et al. [46] proposed a multi-attention fusion network (MAFN) that employs spatial and spectral attention mechanisms, respectively, to mitigate the effects of band redundancy and interfering pixels. Xue et al. [47] proposed the attention-based second-order pooling network (A-SPN) for modeling distinct and representative features by training the model with adaptive attention weights and second-order statistics. The attention mechanism learns more effective feature information but can lead to overfitting when the sample size is limited. Additionally, high-dimensional hyperspectral data carry a large amount of redundant information, and a traditional single-attention mechanism struggles to locate adequate information quickly and accurately, resulting in the need for a deeper network.
We propose an attention-embedded triple-branch fusion convolutional neural network (AETF-Net) for HSI classification to address the aforementioned issues. As shown in Figure 1, the network comprises a spectral attention branch, a spatial attention branch, and a multi-attention fusion branch (MAFB). The spectral attention branch and spatial attention branch, respectively, address the issues of feature redundancy and the correlation between spectral and spatial dimensions. We design a global band attention module (GBAM) in the spectral branch with a novel SMLP to extract more discriminative band features. In the spatial branch, we adopt a bi-directional spatial attention module (BSAM) to extract spatial feature information in both horizontal and vertical directions. To incorporate the extracted spectral and spatial features and reduce the computational cost, we introduce the large-kernel decomposition technique in the MAFB, which replaces the large-kernel convolution with small-kernel depth-wise convolution and depth-wise dilated convolution. In the proposed AETF-Net, multiple kinds of attention are used and fused to indicate the relative importance of bands and pixels, providing the 3D convolution with different weight values. Consequently, the proposed AETF-Net ensures efficient feature extraction while avoiding the vanishing gradient and feature dissipation issues caused by deep neural networks. In conclusion, the main contributions of this paper are as follows.
  • A novel multi-attention-based module is introduced that incorporates spatial attention, spectral attention, and joint spatial-spectral attention. The proposed approach embeds spatial and spectral feature information into each level of the joint spatial-spectral feature extraction module via cascading to compensate for the feature loss issue of the deep neural network.
  • An improved spectral feature extraction mechanism is designed to generate more accurate band features and weighting information. Moreover, we introduce an innovative weight fusion strategy for feature enhancement to prevent data loss during feature fusion and preserve the relative size relationship between weights.
  • The proposed AETF-Net has been validated on three public datasets (i.e., IN, UP, and KSC) and achieves significantly better classification results. In particular, at small sample rates, our method outperforms both traditional and state-of-the-art methods, verifying its effectiveness.
The rest of this paper is arranged as follows. Section 2 elaborates on the proposed AETF-Net. Section 3 describes the detailed datasets and analyzes the experimental results. Section 4 provides a comprehensive discussion of the differences between the proposed method and the comparative algorithms. Section 5 summarizes the core of the whole paper and provides some suggestions for further research.

2. Materials and Methods

As shown in Figure 1, the proposed AETF-Net framework is composed of three primary submodules: (1) a spectral attention module, which uses a 1D-CNN to extract attention features and eliminate global band redundancy; (2) a spatial attention module, which uses a 2D-CNN to extract attention features from both the spatial horizontal and vertical directions to capture more discriminative and detailed edge features; and (3) a spectral-spatial fusion module, which fuses joint spatial-spectral features with the spectral and spatial attention weights to improve the efficiency of 3D convolution feature extraction.

2.1. Spectral Attention Module

HSI typically has a large number of spectral bands, while not all of them are useful for classification. Thus, significant spectral bands should be highlighted for feature extraction. Inspired by the channel attention mechanism [48], we regard spectral bands as channels and develop a new global band attention module (GBAM) for spectral feature learning. The structure of the proposed GBAM is shown in Figure 2.
Specifically, the original hyperspectral data cube is first processed by a 3D convolution operation to extract low-order spectral features from the HSI, and the output feature map is defined as:
$\mathbf{X}' = \mathbf{X} \ast \mathbf{w} + \mathbf{b}$
where $\mathbf{X} \in \mathbb{R}^{S \times S \times B}$ denotes the 3D input HSI patch, $S$ denotes the spatial size of the input patch and $B$ denotes the number of bands, $\mathbf{w}$ and $\mathbf{b}$ denote the weights and biases of the network, and $\ast$ denotes the 3D convolution operation. The output feature map $\mathbf{X}'$ is then squeezed in the spatial dimension with maximum pooling and average pooling:
$\mathbf{X}_{avg} = \frac{1}{S \times S} \sum_{i=1}^{S} \sum_{j=1}^{S} \mathbf{X}'_{i,j}, \qquad \mathbf{X}_{max} = \max_{i \le S,\, j \le S} \mathbf{X}'_{i,j}$
where $\mathbf{X}'_{i,j}$ is the element of the feature map $\mathbf{X}' \in \mathbb{R}^{S \times S \times B}$ at pixel $(i, j)$, $\mathbf{X}_{avg} \in \mathbb{R}^{1 \times 1 \times B}$ represents the output of the average pooling operation, and $\mathbf{X}_{max} \in \mathbb{R}^{1 \times 1 \times B}$ represents the output of the max pooling operation.
They are subsequently delivered into a new shared selective multilayer perceptron (SMLP). A typical MLP consists of an input layer, a simple hidden layer, and an output layer. The hidden layer is commonly designed to reduce the parameters by a squeezing operation, which can lead to the loss of band information in our spectral setting. Thus, we propose a new SMLP in which the hidden layer is refined to model the long-range dependencies of all bands by considering local neighborhoods of $k$ bands. Based on the best experimental results, we set the value of $k$ to 9. The SMLP output vector $\mathbf{L} \in \mathbb{R}^{B-k+1}$ is composed of elements $L_i$, $i = \lceil k/2 \rceil, \ldots, B - \lfloor k/2 \rfloor$, which are computed as:
$L_i = \sum_{j=1}^{k} y_i^j w_i, \quad y_i^j \in \Omega_i^k$
where $\lceil \cdot \rceil$ denotes the ceiling function, which rounds a given number up to the nearest integer, $\Omega_i^k$ denotes the set of $k$ spectral bands adjacent to the $i$th element of the average pooling vector or max pooling vector, and $w_i$ is the shared parameter applied to each $y_i^j$. Next, a deconvolution operation is applied to the feature vector $\mathbf{L}$ to generate a vector of the same size as the input, facilitating subsequent processing. To enhance the robustness and generalization ability of the deconvolution operation, the ReLU activation function and batch normalization are introduced.
After $\mathbf{X}_{max}$ and $\mathbf{X}_{avg}$ pass through the SMLP module, an element-wise addition operation and the sigmoid nonlinear activation function yield the band attention weight matrix $f(\mathbf{X}) \in \mathbb{R}^{S \times S \times B}$. The band attention module can be expressed as:
$f(\mathbf{X}) = \mathrm{sigmoid}\big(\mathbf{L}(\mathbf{X}_{max}) + \mathbf{L}(\mathbf{X}_{avg})\big)$
where sigmoid is the nonlinear activation function.
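To make the GBAM data flow concrete, the following PyTorch sketch wires the two pooled band descriptors, the shared SMLP (modeled here as a Conv1d over a window of $k = 9$ bands followed by a transposed convolution that restores the $B$-dimensional descriptor), and the sigmoid gate of Equation (4). It is a minimal illustration under our own assumptions: the class and variable names are hypothetical, the preceding 3D convolution is omitted, and a band-first (N, B, S, S) tensor layout is assumed; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn


class SMLP(nn.Module):
    """Shared selective MLP: a 1D convolution over a local window of k bands
    (shortening the descriptor to B - k + 1 values, cf. Eq. (3)) followed by a
    transposed convolution that restores the B-dimensional band descriptor."""

    def __init__(self, k: int = 9):
        super().__init__()
        self.local = nn.Conv1d(1, 1, kernel_size=k, bias=False)
        self.expand = nn.ConvTranspose1d(1, 1, kernel_size=k, bias=False)
        self.bn = nn.BatchNorm1d(1)
        self.act = nn.ReLU()

    def forward(self, v: torch.Tensor) -> torch.Tensor:  # v: (N, 1, B)
        return self.act(self.bn(self.expand(self.local(v))))


class GBAM(nn.Module):
    """Global band attention: spatial max/avg pooling, a shared SMLP, and a
    sigmoid gate producing the band attention weights f(X), cf. Eq. (4)."""

    def __init__(self, k: int = 9):
        super().__init__()
        self.smlp = SMLP(k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, B, S, S)
        n, b, _, _ = x.shape
        avg = x.mean(dim=(2, 3)).view(n, 1, b)   # average pooling over space
        mx = x.amax(dim=(2, 3)).view(n, 1, b)    # max pooling over space
        weights = torch.sigmoid(self.smlp(avg) + self.smlp(mx))  # (N, 1, B)
        return weights.view(n, b, 1, 1)          # broadcastable band weights
```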

2.2. Bi-Directional Spatial Attention Module

As is well known, spatial information is helpful for HSI classification because neighboring pixels are likely to belong to the same class. Furthermore, spatial features drawn from multiple neighboring pixels can suppress noise interference and redundant information. In this paper, we develop a bi-directional spatial attention module (BSAM) to obtain an abstract spatial representation for HSI classification. Instead of using a 2D pooling operation in spatial feature extraction, which may lead to a loss of location information [49], BSAM separates spatial attention into two parallel 1D feature encoding processes. Two separate attention feature maps are independently embedded with orientation-specific information. Each of them captures the long-range dependencies of the input feature map along one of the spatial directions, while preserving the location information in the other direction in the generated attention map. The structure of the improved BSAM is shown in Figure 3.
BSAM first performs two independent global average pooling operations with kernels $(H, 1)$ and $(1, V)$ along the two spatial dimensions (the horizontal and vertical axes) on each channel to encode attention maps. $H$ denotes the size of the pooling kernel in the horizontal direction, and $V$ denotes the size in the vertical direction. It is noteworthy that the values of $H$ and $V$ are equivalent to the size of the original image patch $S \times S$; different symbols are used in this section only to distinguish the two directions. The global average pooling operations are calculated as:
$Z_H^h = \frac{1}{V} \sum_{0 \le i < V} x(h, i), \quad h = 1, \ldots, H; \qquad Z_V^v = \frac{1}{H} \sum_{0 \le j < H} x(j, v), \quad v = 1, \ldots, V$
where $x(h, i)$ is the input feature pixel at height $h$ and width $i$, $x(j, v)$ is the input feature pixel at height $j$ and width $v$, and $Z_H \in \mathbb{R}^{H \times 1 \times B}$ and $Z_V \in \mathbb{R}^{1 \times V \times B}$ collect the horizontal and vertical average pooling results, respectively.
The global pooling operation in both directions generates a pair of direction-aware feature maps. These feature maps not only capture directional information over a wide spatial range but also preserve location information, helping the deep neural network locate target regions of interest. The obtained direction-aware feature map $Z_H$ and the reshaped feature map $\tilde{Z}_V \in \mathbb{R}^{V \times 1 \times B}$ are then used for feature fusion, along with channel compression and nonlinear feature restructuring, to yield the feature map $\mathbf{U} \in \mathbb{R}^{(H+V) \times 1 \times B}$:
$\mathbf{U} = \mathrm{sigmoid}\big(\mathrm{Conv}\big(\mathrm{Cat}[Z_H, \tilde{Z}_V]\big)\big)$
where $\mathrm{Conv}$ denotes a $1 \times 1$ convolution layer and $\mathrm{Cat}$ denotes the concatenation operation. The feature map is then separated again into two tensors, $U_H$ and $U_V$, along the spatial horizontal and vertical directions, where $U_V$ needs to be reshaped back to its original shape. Next, $U_H \in \mathbb{R}^{H \times 1 \times B}$ and $U_V \in \mathbb{R}^{1 \times V \times B}$ are delivered into a convolution layer with a $1 \times 1$ kernel and the sigmoid nonlinear activation function, respectively, making the shape of the output tensor the same as that of the input data patch $\mathbf{X}$.
Finally, feature fusion is performed by element-wise multiplication to acquire the spatial attention weight matrix $g(\mathbf{X}) \in \mathbb{R}^{S \times S \times B}$:
$g(\mathbf{X}) = \mathrm{sigmoid}\big(\mathrm{Conv}(U_H)\big) \otimes \mathrm{sigmoid}\big(\mathrm{Conv}(U_V)\big)$
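A coordinate-attention-style PyTorch sketch of the BSAM data flow is given below, again assuming a band-first (N, B, H, V) layout. The bottleneck width and all names are illustrative choices rather than the paper's exact configuration; the sketch only mirrors Equations (5)-(7): directional average pooling, a shared sigmoid-gated $1 \times 1$ squeeze, and two gated $1 \times 1$ convolutions whose broadcast product forms $g(\mathbf{X})$.

```python
import torch
import torch.nn as nn


class BSAM(nn.Module):
    """Bi-directional spatial attention: directional average pooling along the
    horizontal and vertical axes (Eq. (5)), a shared sigmoid-gated 1x1 squeeze
    (Eq. (6)), and two 1x1 convolutions whose broadcast product forms g(X)
    (Eq. (7))."""

    def __init__(self, bands: int, reduction: int = 8):
        super().__init__()
        mid = max(bands // reduction, 8)
        self.squeeze = nn.Sequential(
            nn.Conv2d(bands, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.Sigmoid(),
        )
        self.conv_h = nn.Conv2d(mid, bands, kernel_size=1)
        self.conv_v = nn.Conv2d(mid, bands, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, B, H, V)
        n, b, h, v = x.shape
        z_h = x.mean(dim=3, keepdim=True)                      # (N, B, H, 1)
        z_v = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (N, B, V, 1)
        u = self.squeeze(torch.cat([z_h, z_v], dim=2))         # (N, mid, H + V, 1)
        u_h, u_v = torch.split(u, [h, v], dim=2)
        u_v = u_v.permute(0, 1, 3, 2)                          # back to (N, mid, 1, V)
        a_h = torch.sigmoid(self.conv_h(u_h))                  # (N, B, H, 1)
        a_v = torch.sigmoid(self.conv_v(u_v))                  # (N, B, 1, V)
        return a_h * a_v                                       # g(X): (N, B, H, V)
```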

2.3. Multi-Attention Spectral-Spatial Feature Fusion Branch

The multi-attention fusion branch (MAFB) mainly follows the structure of a 3D-CNN. The attention weight matrices $f(\mathbf{X})$ and $g(\mathbf{X})$ learned from the 1D-GBAM and 2D-BSAM branches are fused into the MAFB's convolution operations. However, it differs from a typical 3D-CNN in several respects.
MAFB is developed based on a lightweight CNN (LCNN) [50], which consists of a combination of depth-wise convolution (DW-Conv), depth-wise dilated convolution (DW-D-Conv), and $1 \times 1$ convolution procedures with small kernels. In this structure, the improved MAFB not only captures relatively long-distance features over a large receptive field, but also avoids the large number of training samples and the massive computational resources that large-kernel convolution operations would otherwise require.
MAFB adopts a new multi-attention fusion strategy. The attention weight matrices $f(\mathbf{X})$ and $g(\mathbf{X})$ learned from the 1D-GBAM and 2D-BSAM branches help the network be more attentive to the channels and locations contributing to the classification task. However, fusing the two attention maps with the convolution operations by a simple multiplication strategy raises some issues. As shown in Figure 4a, the weight values of $f(\mathbf{X})$ and $g(\mathbf{X})$ lie in $[0, 1]$, which leads to even smaller values after they are multiplied. With the simple multiplication strategy, this may cause the feature intensity to decay and the most critical information to be lost. Thus, we introduce a softplus-post multiplication in MAFB, as shown in Figure 4b. Before multiplying the two weight matrices by the input map $\mathbf{X}$, the softplus activation function performs feature enhancement and linear activation on each weight matrix. By doing so, the weights of different features are scaled up to avoid feature dissipation while their relative order is preserved.
Compared with the traditional ReLU activation function, the softplus activation function is closer to the activation model of brain neurons and solves the Dead ReLU problem. The equation of the softplus activation function is shown below:
$G_{ij} = \log\big(1 + \exp(m_{ij})\big), \quad i, j = 1, \ldots, S$
where $m_{ij}$ denotes the element in the $i$th row and $j$th column of the $f(\mathbf{X})$ or $g(\mathbf{X})$ weight matrix, and $G_{ij}$ is the output of the softplus activation. The fused weight matrix $\mathbf{G}_M \in \mathbb{R}^{S \times S \times B}$ is acquired by element-wise multiplication of the two activated weight matrices:
$\mathbf{G}_M = \mathrm{softplus}\big(f(\mathbf{X})\big) \otimes \mathrm{softplus}\big(g(\mathbf{X})\big)$
The fused weight matrix $\mathbf{G}_M$ is multiplied by the original input feature map $\mathbf{X}$ to form the input of the depth-wise convolution DW-Conv. The output feature map of DW-Conv, multiplied by the attention weight matrix $\mathbf{G}_M$, is then used as the input of the next layer, the depth-wise dilated convolution DW-D-Conv. Similarly, the output feature map of DW-D-Conv, multiplied by $\mathbf{G}_M$, is used as the input of the $1 \times 1$ convolution. Finally, the output of the convolution operations is multiplied by the original input feature map $\mathbf{X}$ to obtain the final attention map of the joint spatial-spectral feature extraction module. The overall fusion equation can be expressed as:
$\mathbf{F} = \mathrm{Conv}\Big(\Big(C_{\mathrm{DW\text{-}D}}\big(\big(C_{\mathrm{DW}}(\mathbf{G}_M \otimes \mathbf{X})\big) \otimes \mathbf{G}_M\big)\Big) \otimes \mathbf{G}_M\Big)$
where $\mathbf{X}$ denotes the input feature map, $C_{\mathrm{DW}}$ denotes the depth-wise convolution operation, $C_{\mathrm{DW\text{-}D}}$ denotes the depth-wise dilated convolution operation, and $\mathbf{F} \in \mathbb{R}^{S \times S \times B}$ denotes the output feature map, which is finally fed into the classifier.
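The following PyTorch sketch shows how the softplus-post fusion and the decomposed large-kernel stages could be chained, re-injecting the fused weights before each convolution as described above. The kernel size, dilation rate, and all names are assumptions made for illustration, and the final multiplication by the input map follows the textual description; this is a sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MAFB(nn.Module):
    """Multi-attention fusion branch: softplus-enhanced fusion of the two
    attention weight matrices (Eq. (9)) re-injected before a depth-wise
    convolution, a depth-wise dilated convolution, and a 1x1 convolution
    (large-kernel decomposition), followed by multiplication with the input."""

    def __init__(self, bands: int, dw_kernel: int = 5, dilation: int = 3):
        super().__init__()
        self.dw = nn.Conv2d(bands, bands, dw_kernel,
                            padding=dw_kernel // 2, groups=bands)
        self.dw_d = nn.Conv2d(bands, bands, dw_kernel,
                              padding=(dw_kernel // 2) * dilation,
                              dilation=dilation, groups=bands)
        self.pw = nn.Conv2d(bands, bands, kernel_size=1)

    def forward(self, x, f_x, g_x):
        # Softplus-post multiplication keeps small weights from collapsing to zero
        gm = F.softplus(f_x) * F.softplus(g_x)   # G_M, Eq. (9)
        out = self.dw(gm * x)                    # DW-Conv stage
        out = self.dw_d(gm * out)                # DW-D-Conv stage
        out = self.pw(gm * out)                  # 1x1 convolution stage
        return out * x                           # final multiplication by the input map


# Assembling the three branches into an AETF-Net-style forward pass
# (classifier head omitted; GBAM and BSAM as sketched above):
#   f_x = gbam(x)             # band attention weights, broadcast over space
#   g_x = bsam(x)             # spatial attention weights
#   feats = mafb(x, f_x, g_x)
```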

3. Results

3.1. Dataset Description

The datasets used in this paper are Indian pines (IP), University of Pavia (UP), and Kennedy Space Center (KSC). The sample numbers and corresponding colors of the three datasets are in Table 1, Table 2 and Table 3.
The IP dataset is a widely used hyperspectral remote sensing image dataset, which contains a scene captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor at the Indian Pines test site in northwestern Indiana. The scene comprises two-thirds agricultural land and one-third forest or other natural perennial vegetation. Its data size is 145 × 145, with a spatial resolution of 20 meter/pixel (m/p) and a wavelength range from 0.4 to 2.5 μm containing 224 bands, of which 200 remain after removing the bands covering the water absorption region, and 16 land-cover classes.
The ROSIS sensor, a Reflective Optics System Imaging Spectrometer designed for urban areas, captured the UP dataset in 2003 at the University of Pavia, northern Italy. It possesses a spatial resolution of 1.3 m/p, an image size of 610 × 340, 103 bands within the wavelength range from 0.43 to 0.86 μm, and 9 classes. Compared to the IP dataset, the UP dataset has fewer bands while still having a high dimensionality and a complex classification task.
The KSC dataset is a hyperspectral remote sensing image dataset collected and released by the National Aeronautics and Space Administration (NASA); it was acquired over the Kennedy Space Center by an AVIRIS sensor in March 1996. It has a spatial resolution of 18 m/p, 512 × 614 pixels, and 224 bands from 0.4 to 2.5 μm, with 176 bands remaining after removing the water absorption and noisy bands, and covers 13 different ground cover types. The KSC dataset has the same number of original bands as the IP dataset, while its relatively coarse spatial resolution places higher demands on the classification algorithm.

3.2. Experimental Setup

To demonstrate the efficiency of the proposed method, we conducted a series of classification experiments on three well-known hyperspectral datasets. The compared CNN-based methods can be divided into two categories: traditional CNN-based methods (2D-CNN, 3D-CNN, Res-Net, and SSRN) and attention-based CNN methods (DBDA, A2S2K, MAFN, and A-SPN). All comparison methods use the same parameter settings as in their corresponding references. Classification performance is evaluated using three metrics: overall accuracy (OA), average accuracy (AA), and the kappa coefficient (Kappa). All methods were repeated ten times independently, after which the average value and standard deviation were taken to guarantee the generalizability of the experimental results.
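For reference, the three reported metrics can be computed from a confusion matrix as in the short NumPy sketch below; the function name and interface are ours and are not part of the paper.

```python
import numpy as np


def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy (OA), average accuracy (AA) and the kappa coefficient
    computed from a confusion matrix built over the test pixels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                                   # overall accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)     # per-class recall
    aa = per_class.mean()                                       # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```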
In our experiments, each of the three datasets was divided into a 1% training set, a 1% validation set, and a 98% test set. During the training phase, we continuously adjusted certain hyperparameters of the model, such as the size of the convolution kernel, the patch size, and the learning rate, based on the training results obtained through experimentation. The model was trained using the Adam optimizer and the cross-entropy loss function. In the validation phase, 1% of the samples were randomly selected as a validation set; performance metrics were calculated on this set to select the best-performing model as the final model. During the final testing phase, the remaining 98% test set was used to evaluate the best model and obtain the test results. For any class whose 1% share contained fewer than two samples, two training samples were taken.
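A per-class split respecting the 1%/1%/98% protocol and the two-sample floor for rare classes could look like the sketch below. Applying the same floor to the validation set and excluding background pixels beforehand are our own assumptions.

```python
import numpy as np


def split_indices(labels, train_ratio=0.01, val_ratio=0.01,
                  min_per_class=2, seed=0):
    """Per-class random split into train/val/test index arrays, taking at
    least `min_per_class` training samples for classes whose 1% share is
    smaller than two samples."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        n_train = max(int(round(train_ratio * idx.size)), min_per_class)
        n_val = max(int(round(val_ratio * idx.size)), min_per_class)
        train.extend(idx[:n_train])
        val.extend(idx[n_train:n_train + n_val])
        test.extend(idx[n_train + n_val:])
    return np.array(train), np.array(val), np.array(test)
```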
By employing an early stopping strategy during the training phase, we found that the training loss converges and the accuracy stabilizes at around 200 epochs. Thus, we ultimately trained the model for 200 epochs. The batch size was set to 64 and the Adam optimizer was used in the proposed method. The learning rate was initialized at 0.01 and then adjusted using the cosine annealing algorithm. The k value in the SMLP structure of GBAM was set to 9 based on the optimal experimental results. All experiments were run in PyTorch on a computer with an Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00 GHz, 64 GB RAM, and an NVIDIA GeForce RTX 3090 graphics card.
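These optimization settings map onto a standard PyTorch training loop such as the one sketched below; `model` and the data loaders are placeholders for the reader's own AETF-Net-style implementation, and the batch size of 64 is assumed to be set in the loaders.

```python
import torch


def train(model, train_loader, val_loader, epochs=200, device="cuda"):
    """Adam + cosine annealing training loop with validation-based model
    selection, matching the reported settings (lr 0.01, 200 epochs)."""
    model.to(device)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    best_acc, best_state = 0.0, None
    for _ in range(epochs):
        model.train()
        for patches, labels in train_loader:
            patches, labels = patches.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(patches), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
        # keep the checkpoint with the best validation accuracy
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for patches, labels in val_loader:
                preds = model(patches.to(device)).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.numel()
        if correct / total > best_acc:
            best_acc, best_state = correct / total, model.state_dict()
    return best_state
```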

3.2.1. The Effect of the Number of Training Samples

To further analyze the effect of the number of training samples on the proposed AETF-Net, we split the three datasets into a training set, a validation set, and a test set with varying proportions. The size of the validation set is always kept consistent with that of the training set, while the remaining portion constitutes the test set. The remaining hyperparameters were set to be consistent with those above. For the IP dataset, the proportion of training samples varies among 1%, 3%, 5%, 10%, and 20% of the dataset samples; for the UP and KSC datasets, it varies among 1%, 3%, 5%, 7%, and 10%, respectively.
Figure 5 shows the classification results of the proposed method with different numbers of training samples. The vertical axis represents OA, and the horizontal axis represents the training set ratio. For all three datasets, OA increases as the number of training samples increases until it stabilizes. For the IP dataset, OA plateaus when the training set size is between 3% and 5%, improves dramatically after 5%, and becomes stable when the training ratio reaches 10%. The data distribution of UP and KSC is not as heterogeneous as that of the IP dataset; therefore, OA becomes stable once the training set size reaches 4% and approaches 100% accuracy, and for the UP dataset, which has a sufficient number of samples, this already occurs beyond 1%.

3.2.2. Effectiveness of the k Value in SMLP Structure

A series of experiments were conducted to verify the effectiveness of the improved SMLP structure in the GBAM module by setting various values of hyperparameter k . The remaining hyperparameter settings of the experiments were consistent with those described above. Firstly, we conducted experiments on the original MLP structure, followed by experiments on the improved SMLP structure with different values of hyperparameter k (3, 5, 7, 9, 11, 13, 15). To ensure fairness, all the experiments were conducted independently 10 times, and the final average results were compared. As shown in Table 4, when using the original MLP in the channel attention module, the classification accuracy OA was 2.29% lower than that of the improved SMLP structure ( k = 9), indicating that the improved SMLP structure could utilize the inter-band correlation information during the sliding window step of the convolutional kernel to extract more useful features than the original MLP structure.
However, the performance decreased significantly when the k value was set to 3 or 5, even falling below that of the original MLP structure, because a small k value cannot capture all the local features, leading to feature loss. As the k value increases, the sliding window of the convolution can capture more inter-band correlation information and local features. However, when the k value exceeds 11, the classification accuracy starts to decline due to the high overlap of the windows, which leads to overfitting because the same local information is extracted multiple times. Therefore, based on the best experimental results, we set the k value to 9, increase the number of convolutional kernels to extract various features of the data, and set the stride small enough to retain more local features. Combined with the deconvolution, we can reduce the dimensionality while retaining the important features of the input data, eliminate the influence of the one-dimensional convolution layer, and restore the data to their original dimensionality.

3.2.3. The Effect of Patch Size

The patch size of the training network has an essential effect on classification performance. Typically, the larger the patch, the more spatial information it contains, leading to better classification performance. However, a larger patch introduces massive numbers of parameters and exacerbates the limited-sample learning issue.
In this section, we design several experiments based on the dataset partitioning method and parameter settings described above to analyze the effect of the patch size on the proposed method. Figure 6 shows the classification results of the proposed method with different patch sizes. For the IP dataset, the OA already reaches 80% when the patch size is 5 × 5. As the patch size increases, the OA gradually improves and then plateaus, approaching 90% when the sample block size exceeds 13 × 13. For the KSC dataset, the OA decreases when the patch size grows beyond a certain sample block size. The reason is that the objects in KSC are small and dispersedly distributed, so a large patch contains multiple classes, which provides negative information for classifying the center pixel of the patch. Thus, considering the computational cost and the HSI scenes, we set the patch size to 11 × 11 in our experiments.
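As a concrete illustration of the 11 × 11 setting, the sketch below cuts a patch around a labeled pixel with reflect padding at the image borders; the padding mode and function name are our assumptions, since the paper does not describe its border handling.

```python
import numpy as np


def extract_patch(cube, row, col, patch_size=11):
    """Cut an S x S x B neighborhood (here S = 11) centered on a labeled
    pixel, reflect-padding the hyperspectral cube at the image borders."""
    s = patch_size // 2
    padded = np.pad(cube, ((s, s), (s, s), (0, 0)), mode="reflect")
    return padded[row:row + patch_size, col:col + patch_size, :]
```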

3.3. Result and Analysis

Experimental results on the IP dataset: As shown in Table 5 and Figure 7, the proposed AETF-Net method obtains the highest accuracy among all the methods, with 89.58% OA, 87.91% AA, and 88.17% Kappa, and produces the most detailed and smooth classification maps. The 3D-CNN has better feature extraction capability than the 2D-CNN because it can incorporate both spectral and spatial information; however, with insufficient training samples, the overfitting caused by the conflict between the high dimensionality of the processed data and the small number of samples makes its OA 5.47% lower than that of the 2D-CNN. Res-Net has the worst classification result, with 36.06% OA, because it has many redundant network layers while its effective depth is inadequate. A2S2K adds the attention mechanism to the residual structure to weight the valuable features, achieving a 0.27% improvement in OA over SSRN, which illustrates the effectiveness of the attention mechanism.
MAFN, DBDA, and A-SPN also introduce attention mechanisms and obtain higher accuracy than the methods without them. Among them, DBDA captures many spatial and spectral features using a double-branch and densely connected network and obtains the highest accuracy among the compared methods, with 82.19% OA. However, DBDA does not use the attention mechanism to locate the regions of interest at the very beginning, so the evaluation indices of our method are 7.39%, 6.46%, and 8.63% higher than those of DBDA. The unbalanced distribution of samples in the IP dataset results in very few training samples for some classes after the 1% division. A-SPN performs better for small-sample classes, where the accuracies for classes 7 and 9 (i.e., Grass-pasture-mowed and Oats) were higher than those of the proposed method. However, its performance on classes 3, 4, 12, and 15 (i.e., Corn-mintill, Grass-pasture, Soy-bean-clean, and Buildings-Grass-Trees-Drives) is significantly lower than that of the proposed method, because these classes lie at the edges of the image and have a large number of neighboring classes, making it difficult to classify their blurred boundaries correctly. As a result, its overall OA, AA, and Kappa are 14.90%, 13.18%, and 17.87% lower, respectively, than those of the proposed method. To further evaluate the classification performance from a visual perspective, the ground-truth map and the classification results of the eight comparison methods are shown in Figure 7. 2D-CNN, 3D-CNN, and Res-Net produce considerable noise within classes and at class boundaries. The noise points within the classification maps of SSRN, A2S2K, MAFN, and A-SPN are fewer, while their misclassification rates are higher than that of DBDA. By comparison, the classification map of our proposed method has minor noise points and few misclassified pixels on the boundaries between classes and is closest to the ground-truth map.
Experimental results on the UP dataset: Table 6 and Figure 8 show the numerical and visual results of the comparison experiments on the UP dataset. The OA of the proposed AETF-Net method is improved over the attention-based methods A2S2K, MAFN, DBDA, and A-SPN by 1.76%, 0.82%, 0.86%, and 2.66%, respectively. Due to the relatively balanced distribution of classes in this dataset, 2D-CNN, 3D-CNN, and Res-Net obtained relatively high classification accuracy. Both MAFN and DBDA outperformed SSRN, A2S2K, and A-SPN. Compared with the similar multi-attention fusion method DBDA, our method has higher accuracy, with gains of 0.86% OA, 1.09% AA, and 0.45% Kappa. MAFN achieved the second-best classification results through a multi-scale multi-attention feature extraction framework; however, it lacks information interaction and feature transfer during the extraction of spectral and spatial attributes, resulting in one-sided features, which in turn demonstrates the effectiveness of our proposed feature fusion strategy. Our method can reduce the network depth while extracting sufficient feature information, which avoids the overfitting problem caused by limited samples and gives it a clear advantage. The classification map of our method also performed better on the UP dataset. In 2D-CNN, 3D-CNN, Res-Net, SSRN, and A-SPN, class 2 and class 6 show considerable noise, while the noise points are significantly reduced in the other methods because of the multi-attention structures used in MAFN, DBDA, and the proposed method. This demonstrates the effectiveness of the multi-attention strategy. Overall, the classification map produced by our method has more precise feature edges and is closest to the ground-truth map.
Experimental results on the KSC dataset: The KSC dataset has only 50 training samples under the 1% data division, as shown in Table 7, yet the proposed method still achieved the best classification accuracy, with 96.48% OA, 95.00% AA, and 96.08% Kappa, and the clearest classification results were obtained for some hard-to-distinguish categories such as classes 4, 6, 8, and 9. Regarding the per-class accuracy of the thirteen land-cover classes, our method achieves the highest accuracy in eight classes, and classes 10 and 13 achieve the best precision. Although the KSC dataset has the fewest training samples, it obtains better classification accuracy because the dataset is relatively balanced, the feature distribution is dispersed, and inter-class differences are less influential. However, due to the limited samples, the classification accuracy of 2D-CNN, 3D-CNN, and Res-Net still needs to be improved. Although the MAFN method performed well on the IN and UP datasets, it falls behind on the KSC dataset due to the minimal and balanced number of samples in each class, which also indicates that the MAFN method is unsuitable for small-sample classification. In addition, A2S2K has the best classification accuracy among all the compared methods, owing to the attention mechanism employed at the beginning of its framework to extract valuable characteristics. As shown in Figure 9, the proposed method produces a smoother visual result than the other methods, and its classification map is closest to the ground-truth map.
Furthermore, for the proposed method, the standard deviation over ten runs of almost every class accuracy, as well as of OA, AA, and Kappa, is lower than that of the other methods. This demonstrates that the proposed method produces less variation and more stable results with small samples on different datasets, implying that the method is more robust and can be adapted to a broader range of hyperspectral datasets.

3.4. Ablation Study

To further validate the contribution of the GBAM, BSAM, and MAFB in the proposed framework to the final classification results, ablation experiments were conducted while maintaining the original experimental setup.
The effectiveness of the three branches is examined with the following variants: (1) GBAM: only the GBAM for spectral feature extraction, followed by the classifier; (2) BSAM: only the BSAM for spatial feature extraction, followed by the classifier; (3) LCNN: the LCNN backbone of MAFB without fusing the GBAM and BSAM; (4) LCNN + GBAM: the LCNN backbone of MAFB fused with the GBAM but without the BSAM; (5) LCNN + BSAM: the LCNN backbone of MAFB fused with the BSAM but without the GBAM.
From the results in Table 8, we can see that the GBAM-only and BSAM-only variants perform worse than the other methods, because classification based on single spectral or spatial feature extraction is significantly inferior to methods based on spectral-spatial feature fusion. The LCNN outperforms GBAM and BSAM by about 3.86–10% in OA because it utilizes the spectral-spatial feature combination of the 3D convolutional operation.
Additionally, the OA of “LCNN + GBAM” and “LCNN + BSAM” increased by 0.38% to 1.72% compared with the OA of the “LCNN” method, which proves the effectiveness of the attention in GBAM and BSAM for classification. In particular, the BSAM clearly helps to improve the AA, by about 4.1%. This demonstrates that obtaining spatial features between feature mappings, or long-range dependencies, via the attention mechanism can significantly enhance the performance of the HSI classification model.
Lastly, the best classification results are obtained when the spatial context information and the band dependencies are added concurrently to each stage of the MAFB for spatial-spectral joint attention feature extraction. This demonstrates the effectiveness of the proposed multiple attention fusion mechanism.

3.5. Analysis of the Multi-Attention Fusion Strategy

The fusion strategy is essential for multi-attention fusion and significantly affects the performance of the classification method. In this section, we design six multi-attention fusion strategies following the AETF-Net framework and conduct experiments to analyze and discuss their effect on classification performance.
The six multi-attention fusion strategies are shown in Figure 10. They can be split into two groups: attention weight fusion (Figure 10a) and attention feature map fusion (Figure 10b). The outputs of each attention module in the attention weight fusion strategies are combinations of the weights, while the outputs of each attention module in the attention feature map fusion strategies are combinations of the weights and the input maps. Specifically, the six multi-attention fusion strategies are designed as follows:
(1) Figure 10a(1): the attention weight matrices produced by the GBAM and BSAM modules are element-wise multiplied and then multiplied with the original feature maps.
(2) Figure 10a(2): the attention weight matrices produced by the GBAM and BSAM modules are element-wise added and then multiplied with the original feature maps.
(3) Figure 10b(1): the feature maps produced by the GBAM and BSAM modules are element-wise added and then added to the original feature maps.
(4) Figure 10b(2): the feature maps produced by the GBAM and BSAM modules are element-wise added and then multiplied with the original feature maps.
(5) Figure 10b(3): the feature maps produced by the GBAM and BSAM modules are element-wise multiplied and then added to the original feature maps.
(6) Figure 10b(4): the feature maps produced by the GBAM and BSAM modules are element-wise multiplied and then multiplied with the original feature maps.
Table 9 shows the classification results of the proposed AETF-Net with the six different multi-attention fusion strategies. Comparing the two groups, the attention weight fusion strategies have a slight classification advantage over the attention feature map fusion strategies. The reason is that, although both groups utilize the effective characteristics of the attention mechanism for spectral-spatial feature learning, the attention feature map fusion strategies cost more computing resources to generate the feature maps and lead to information redundancy.
Among the attention weight fusion strategies, the multiplication strategy retains the relative size relationship between different feature mappings better than the addition strategy: it preserves the variability between features, and classification performance can be improved by strengthening the informative features after eliminating redundant ones. Thus, the proposed AETF-Net adopts the attention weight fusion strategy with multiplication (Figure 10a(1)).

3.6. Running Time Analysis

We computed the training and testing times of the different methods using randomly selected samples. As shown in Figure 11, the proposed approach significantly reduces training time compared to traditional DL methods. This is primarily because traditional DL methods stack multiple convolutional and pooling layers to extract feature information, which leads to a large number of parameters when processing high-dimensional hyperspectral data. However, our proposed method does not show the shortest training and testing times among the DL methods based on attention mechanisms, suggesting that further improvements are needed. According to our model framework, the increased computational cost is likely due to the multiple information fusion processes in the backbone network. Nevertheless, our proposed method can fully extract and fuse the spatial and spectral features of HSI, and the increase in computational cost is justifiable given the significant improvement in classification accuracy. The shortest processing time, achieved by A-SPN, may be attributed to its abandonment of the hierarchical structure composed of traditional convolution and pooling layers, which greatly reduces the computational cost. This is a direction worth exploring in our future work.

4. Discussion

The proposed AETF-Net method has shown remarkable performance in terms of accuracy and classification map quality on three publicly available datasets, surpassing existing state-of-the-art methods.
Firstly, one of the key factors affecting the accuracy of deep learning-based image classification is the number of training samples. The 3D-CNN outperforms the 2D-CNN in feature extraction capabilities. However, overfitting can be a challenge when the number of training samples is insufficient. Additionally, Res-Net’s redundant layers lead to worse classification results. Attention mechanisms, as seen in A2S2K, MAFN, DBDA, and A-SPN methods, have been demonstrated to improve accuracy, especially for small sample classes.
Additionally, despite having a limited number of training samples, the AETF-Net method achieved the best classification accuracy due to the dataset’s balanced feature distribution and dispersed inter-class differences. The study emphasizes the limitations of existing methods for small sample classification and highlights the importance of attention mechanisms in achieving high accuracy. Furthermore, the results demonstrate the potential of AETF-Net to improve image classification tasks and its robustness for a broader range of datasets.
Furthermore, the study’s findings also suggest that AETF-Net has the potential to overcome challenges associated with unbalanced sample distributions and misclassification at class boundaries by minimizing noise and improving classification accuracy. This has significant implications for the development of more reliable and accurate image classification in practical applications.
In conclusion, the results of this study have important implications for the development of deep learning-based image classification methods. The study emphasizes the importance of continued research in this area to improve accuracy and overcome the challenges associated with small sample classification, unbalanced sample distributions, and misclassification at class boundaries.

5. Conclusions

In this paper, we propose a novel HSI classification algorithm named AETF-Net to achieve high-accuracy classification under small sample rates. The model is divided into two sections: the spatial and spectral attention branches, and the spatial-spectral joint attention fusion branch. In the first section, the spatial attention module models long-distance pixel dependencies from two directions in space while preserving pixel position information, increasing the effectiveness and richness of the spatial information, and the band attention module establishes inter-band dependencies with adaptive convolution kernels to locate the bands of interest. In the second section, the spatial-spectral joint attention fusion branch extracts spatial-spectral joint features with a three-stage 3D convolution and embeds the spatial and spectral attention features extracted in the first section before each convolution stage, thus enhancing the expressiveness and discriminative power of the spatial-spectral joint features extracted by the 3D convolution. Through a series of comparison and ablation experiments, the proposed AETF-Net achieved outstanding performance on limited training samples from three well-known HSI datasets.
The effectiveness of the multiple attention mechanism in dealing with small sample scales has been initially verified; however, the overfitting problem still exists at tiny sample rates. Further work will combine the attention mechanism with multi-scale features to enhance the accuracy of HSI classification at tiny sample rates.

Author Contributions

E.Z.; Conceptualization, Writing—review and editing, Supervision. J.Z.; Methodology, Software, Formal analysis, Writing—original draft. J.B. (Jiaxin Bai); Conceptualization, Writing—review and editing. J.B. (Jiarong Bian); Investigation. S.F.; Resources. T.Z.; Supervision, Data curation. M.F.; Visualization, Project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 62006188, 62103311), the Natural Science Basic Research Program of Shaanxi Province under Grant (2021JQ-195), the Qin Chuangyuan high-level innovation and entrepreneurship talent program of Shaanxi (2021QCYRC4-50), and the Chinese Universities Scientific Fund (No. 2452022341).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral Image Classification Traditional to Deep Models: A Survey for Future Prospects. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 15, 968–999. [Google Scholar] [CrossRef]
  2. Kang, K.K.K.; Hoekstra, M.; Foroutan, M.; Chegoonian, A.M.; Zolfaghari, K.; Duguay, C.R. Operating Procedures and Calibration of a Hyperspectral Sensor Onboard a Remotely Piloted Aircraft System For Water and Agriculture Monitoring. In Proceedings of the IGARSS, Yokohama, Japan, 28 July–2 August 2019; pp. 9200–9203. [Google Scholar] [CrossRef]
  3. Lanthier, Y.; Bannari, A.; Haboudane, D.; Miller, J.R.; Tremblay, N. Hyperspectral Data Segmentation and Classification in Precision Agriculture: A Multi-Scale Analysis. In Proceedings of the IGARSS, Boston, MA, USA, 7–11 July 2008; Volume 2, pp. 585–588. [Google Scholar] [CrossRef]
  4. Ang, K.L.M.; Seng, J.K.P. Big Data and Machine Learning With Hyperspectral Information in Agriculture. IEEE Access 2021, 9, 36699–36718. [Google Scholar] [CrossRef]
  5. Fan, J.; Zhou, N.; Peng, J.; Gao, L. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification. IEEE Trans. Image Process. 2015, 24, 4172–4184. [Google Scholar] [CrossRef]
  6. Torrecilla, E.; Piera, J.; Aymerich, I.F.; Pons, S.; Ross, O.N.; Vilaseca, M. Hyperspectral Remote Sensing of Phytoplankton Assemblages in the Ocean: Effects of the Vertical Distribution. In Proceedings of the WHISPERS, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar] [CrossRef]
  7. Kruse, F.A.; Clasen, C.C.; Kim, A.M.; Carlisle, S.C. Effects of Spatial and Spectral Resolution on Remote Sensing for Disaster Response. In Proceedings of the IGARSS, Munich, Germany, 22–27 July 2012; pp. 7086–7089. [Google Scholar] [CrossRef]
  8. Contreras, C.; Khodadadzadeh, M.; Tusa, L.; Loidolt, C.; Tolosana-Delgado, R.; Gloaguen, R. Geochemical and Hyperspectral Data Fusion for Drill-Core Mineral Mapping. In Proceedings of the WHISPERS, Amsterdam, The Netherlands, 24–26 September 2019; pp. 1–4. [Google Scholar] [CrossRef]
  9. Murphy, R.J.; Schneider, S.; Monteiro, S.T. Consistency of Measurements of Wavelength Position From Hyperspectral Imagery: Use of the Ferric Iron Crystal Field Absorption at ∼900 nm as an Indicator of Mineralogy. IEEE Geosci. Remote Sens. 2014, 52, 2843–2857. [Google Scholar] [CrossRef]
  10. Ghandehari, M.; Aghamohamadnia, M.; Dobler, G.; Karpf, A.; Cavalcante, C.; Buckland, K.; Qian, J.; Koonin, S. Ground based Hyperspectral Imaging of Urban Emissions. In Proceedings of the WHISPERS, Los Angeles, CA, USA, 21–24 August 2016; pp. 1–3. [Google Scholar] [CrossRef]
  11. Hsieh, T.H.; Kiang, J.F. Comparison of CNN Algorithms on Hyperspectral Image Classification in Agricultural Lands. Sensors 2020, 20, 1734. [Google Scholar] [CrossRef]
  12. Ghamisi, P.; Dalla, M.M.; Benediktsson, J.A. A Survey on Spectral–Spatial Classification Techniques based on Attribute Profiles. IEEE Geosci. Remote Sens. 2015, 53, 2335–2353. [Google Scholar] [CrossRef]
  13. Rashmi, S.; Swapna, A.; Venkat, S. Spectral Angle Mapper Algorithm for Remote Sensing Image Classification. IJISET 2014, 1. [Google Scholar] [CrossRef]
  14. Tarabalka, Y.; Fauvel, M.; Chanussot, J.; Benediktsson, J.A. SVM- and MRF-Based Method for Accurate Classification of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 736–740. [Google Scholar] [CrossRef]
  15. Zhang, E.; Zhang, X.; Liu, H.; Jiao, L. Fast Multifeature Joint Sparse Representation for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1397–1401. [Google Scholar] [CrossRef]
  16. Huang, H.; Chen, M.L.; Duan, Y.L.; Shi, G.Y. Hyperspectral Image Classification using Spatial-Spectral Manifold Reconstruction. Opt. Precis. Eng. 2018, 26, 1827–1836. [Google Scholar] [CrossRef]
  17. Ghamisi, P.; Benediktsson, J.A.; Ulfarsson, M.O. The Spectral-Spatial Classification of Hyperspectral Images based on Hidden Markov Random Field and its Expectation-Maximization. In Proceedings of the IGARSS, Melbourne, VIC, Australia, 21–26 July 2013; pp. 1107–1110. [Google Scholar] [CrossRef]
  18. Kumar, B.; Dikshit, O. Hyperspectral Image Classification based on Morphological Profiles and Decision Fusion. Int. J. Remote Sens. 2017, 38, 5830–5854. [Google Scholar] [CrossRef]
  19. Ham, J.; Chen, Y.; Crawford, M.; Ghosh, J. Investigation of the Random Forest Framework for Classification of Hyperspectral Data. IEEE Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef]
  20. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
  21. Alipourfard, T.; Arefi, H.; Mahmoudi, S. A Novel Deep Learning Framework by Combination of Subspace-Based Feature Extraction and Convolutional Neural Networks for Hyperspectral Images Classification. In Proceedings of the IGARSS, Valencia, Spain, 22–27 July 2018; pp. 4780–4783. [Google Scholar] [CrossRef]
  22. Rissati, J.V.; Molina, P.C.; Anjos, C.S. Hyperspectral Image Classification Using Random Forest and Deep Learning Algorithms. In Proceedings of the IEEE LAGIRS, Santiago, Chile, 22–26 March 2020; p. 132. [Google Scholar] [CrossRef]
  23. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  24. Chen, Y.; Zhao, X.; Jia, X. Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
  25. Li, B.; Wang, Q.W.; Liang, J.H.; Zhu, E.Z.; Zhou, R.Q. SquconvNet: Deep Sequencer Convolutional Network for Hyperspectral Image Classification. Remote Sens. 2023, 15, 983. [Google Scholar] [CrossRef]
  26. Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-Spatial Classification of Hyperspectral Imagery using a Dual-channel Convolutional Neural Network. Remote Sens. Lett. 2017, 8, 438–447. [Google Scholar] [CrossRef]
  27. Wei, H.; Yangyu, H.; Li, W.; Fan, Z.; Hengchao, L. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  28. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep Supervised Learning for Hyperspectral Data Classification through Convolutional Neural Networks. In Proceedings of the IGARSS, Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar] [CrossRef]
  29. Zhang, Y.; Huynh, C.P.; Ngan, K.N. Feature Fusion with Predictive Weighting for Spectral Image Classification and Segmentation. IEEE Geosci. Remote Sens. 2019, 57, 6792–6807. [Google Scholar] [CrossRef]
  30. Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  31. Ge, H.; Wang, L.; Liu, M.; Zhu, Y.; Zhao, X.; Pan, H.; Liu, Y. Two-Branch Convolutional Neural Network with Polarized Full Attention for Hyperspectral Image Classification. Remote Sens. 2023, 15, 848. [Google Scholar] [CrossRef]
  32. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  33. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582. [Google Scholar] [CrossRef]
  34. Yan, H.; Zhang, E.; Wang, J.; Leng, C.; Peng, J. MTFFN: Multimodal Transfer Feature Fusion Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  35. Yue, J.; Fang, L.; Rahmani, H.; Ghamisi, P. Self-Supervised Learning with Adaptive Distillation for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  36. Hughes, G. On the Mean Accuracy of Statistical Pattern Recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef]
  37. Zhang, H.; Li, Y.; Jiang, Y.; Wang, P.; Shen, Q.; Shen, C. Hyperspectral Classification Based on Lightweight 3-D-CNN with Transfer Learning. IEEE Geosci. Remote Sens. 2019, 57, 5813–5828. [Google Scholar] [CrossRef]
  38. Sellami, A.; Farah, M.; Riadh Farah, I.; Solaiman, B. Hyperspectral Imagery Classification based on Semi-Supervised 3-D Deep Neural Network and Adaptive Band Selection. Expert Syst. Appl. 2019, 129, 246–259. [Google Scholar] [CrossRef]
  39. Li, T.; Zhang, X.; Zhang, S.; Wang, L. Self-Supervised Learning with a Dual-Branch ResNet for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  40. Yang, K.; Sun, H.; Zou, C.; Lu, X. Cross-Attention Spectral–Spatial Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  41. Xiang, J.; Wei, C.; Wang, M.; Teng, L. End-to-End Multilevel Hybrid Attention Framework for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  42. Huang, H.; Luo, L.; Pu, C. Self-Supervised Convolutional Neural Network via Spectral Attention Module for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  43. Tu, B.; He, W.; He, W.; Ou, X.; Plaza, A. Hyperspectral Classification via Global-Local Hierarchical Weighting Fusion Network. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 15, 184–200. [Google Scholar] [CrossRef]
  44. Yu, C.; Han, R.; Song, M.; Liu, C.; Chang, C.I. Feedback Attention-Based Dense CNN for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  45. Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral–Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. 2021, 59, 7831–7843. [Google Scholar] [CrossRef]
  46. Li, Z.; Zhao, X.; Xu, Y.; Li, W.; Zhai, L.; Fang, Z.; Shi, X. Hyperspectral Image Classification with Multiattention Fusion Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  47. Xue, Z.; Zhang, M.; Liu, Y.; Du, P. Attention-Based Second-Order Pooling Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. 2021, 59, 9600–9615. [Google Scholar] [CrossRef]
  48. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar] [CrossRef]
  49. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the CVPR, Kuala Lumpur, Malaysia, 18–20 December 2021; pp. 13713–13722. [Google Scholar] [CrossRef]
  50. Guo, M.H.; Lu, C.Z.; Liu, Z.N.; Cheng, M.M.; Hu, S.M. Visual Attention Network. arXiv 2022, arXiv:2202.09741. [Google Scholar]
Figure 1. The overall architecture of the proposed AETF-Net model.
Figure 2. The proposed GBAM structure.
Figure 3. The proposed BSAM structure.
Figure 4. Comparison of (a) direct multiplication and (b) softplus-post multiplication.
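Figure 4 contrasts two ways of applying the fused attention weights to the input features: multiplying the raw weights directly versus passing them through a softplus activation before multiplication. The snippet below is a minimal PyTorch sketch of the two variants; the tensor shapes, the sigmoid-normalized weights, and the element-wise fusion are illustrative assumptions rather than the exact implementation used in the paper.

```python
import torch
import torch.nn.functional as F

def fuse_direct(spec_w, spat_w, x):
    """Direct multiplication: weight the input by the raw product of the two attention maps."""
    return (spec_w * spat_w) * x

def fuse_softplus(spec_w, spat_w, x):
    """Softplus-post multiplication: pass the fused weights through softplus before weighting."""
    return F.softplus(spec_w * spat_w) * x  # softplus keeps the fused weights positive and smooth

# Toy example: a batch of 4 patches, 32 channels, 9x9 spatial size (shapes are illustrative).
x = torch.randn(4, 32, 9, 9)
spec_w = torch.sigmoid(torch.randn(4, 32, 1, 1))  # hypothetical spectral (channel) attention weights
spat_w = torch.sigmoid(torch.randn(4, 1, 9, 9))   # hypothetical spatial attention weights
print(fuse_direct(spec_w, spat_w, x).shape, fuse_softplus(spec_w, spat_w, x).shape)
```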
Figure 5. The effect of the number of training samples.
Figure 6. The classification results of the proposed AETF-Net with different patch sizes.
Figure 7. Classification maps of different methods on the IP dataset. (a) False-color; (b) Ground truth map; (c) 2D-CNN; (d) 3D-CNN; (e) Res-Net; (f) SSRN; (g) A2S2K; (h) MAFN; (i) DBDA; (j) A-SPN; (k) AETF-Net; (l) Color bar.
Figure 8. Classification maps of different methods on the UP dataset. (a) False-color; (b) Ground truth map; (c) 2D-CNN; (d) 3D-CNN; (e) Res-Net; (f) SSRN; (g) A2S2K; (h) MAFN; (i) DBDA; (j) A-SPN; (k) AETF-Net; (l) Color bar.
Figure 9. Classification maps of different methods on the KSC dataset. (a) False-color; (b) Ground truth map; (c) 2D-CNN; (d) 3D-CNN; (e) Res-Net; (f) SSRN; (g) A2S2K; (h) MAFN; (i) DBDA; (j) A-SPN; (k) AETF-Net; (l) Color bar.
Figure 10. Illustration of the multi-attention fusion strategies: (a) attention weight fusion strategies; (b) attention feature map fusion strategies.
Figure 11. Comparison of computation time and overall accuracy of different methods.
Table 1. The training and testing sample numbers of the IP dataset (class color swatches from the original are omitted).
No. | Class Name | Train/Validate | Test | Total
1 | Alfalfa | 2 | 42 | 46
2 | Corn-notill | 14 | 1400 | 1428
3 | Corn-mintill | 8 | 814 | 830
4 | Corn | 2 | 233 | 237
5 | Grass-pasture | 4 | 475 | 483
6 | Grass-trees | 7 | 716 | 730
7 | Grass-pasture-mowed | 2 | 24 | 28
8 | Hay-windrowed | 4 | 470 | 478
9 | Oats | 2 | 16 | 20
10 | Soybean-notill | 9 | 954 | 972
11 | Soybean-mintill | 24 | 2407 | 2455
12 | Soybean-clean | 5 | 583 | 593
13 | Wheat | 2 | 201 | 205
14 | Woods | 12 | 1241 | 1265
15 | Buildings-Grass-Trees-Drives | 3 | 380 | 386
16 | Stone-Steel-Towers | 2 | 89 | 93
– | Total | 102 | 10,045 | 10,249
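Tables 1–3 list per-class sample counts in which the Train/Validate column applies to both the training and the validation set (for example, 2 + 2 + 42 = 46 for Alfalfa), and the training fraction is roughly 1% of the labeled pixels per class with a small per-class floor. The sketch below shows one way such a stratified split could be drawn from a label vector; the function name, the 1% ratio, and the floor of two samples per class are assumptions inferred from the tables, not the authors' sampling code.

```python
import numpy as np

def stratified_split(labels, ratio=0.01, min_per_class=2, seed=0):
    """Per-class random split of labeled pixel indices into train/validate/test sets."""
    rng = np.random.default_rng(seed)
    train, validate, test = {}, {}, {}
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        n = max(min_per_class, int(ratio * idx.size))  # floor at two samples per class
        train[c] = idx[:n]          # n samples for training
        validate[c] = idx[n:2 * n]  # the same number again for validation
        test[c] = idx[2 * n:]       # the remainder for testing
    return train, validate, test

# Toy usage with a random 16-class label vector standing in for the flattened IP ground truth.
labels = np.random.default_rng(1).integers(1, 17, size=10_249)
tr, va, te = stratified_split(labels)
print(sum(len(v) for v in tr.values()), sum(len(v) for v in te.values()))
```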
Table 2. The training and testing sample numbers of the UP dataset (class color swatches from the original are omitted).
No. | Class Name | Train/Validate | Test | Total
1 | Asphalt | 66 | 6499 | 6631
2 | Meadows | 186 | 18,277 | 18,649
3 | Gravel | 20 | 2059 | 2099
4 | Trees | 30 | 3004 | 3064
5 | Painted metal sheets | 13 | 1319 | 1345
6 | Bare Soil | 50 | 4929 | 5029
7 | Bitumen | 13 | 1304 | 1330
8 | Self-Blocking Bricks | 36 | 3610 | 3682
9 | Shadows | 9 | 929 | 947
– | Total | 423 | 41,930 | 42,776
Table 3. The training and testing sample numbers of the KSC dataset (class color swatches from the original are omitted).
No. | Class Name | Train/Validate | Test | Total
1 | Scrub | 7 | 747 | 761
2 | Willow swamp | 2 | 239 | 243
3 | Cabbage palm hammock | 2 | 252 | 256
4 | Cabbage palm/oak hammock | 2 | 248 | 252
5 | Slash pine | 2 | 157 | 161
6 | Oak/broadleaf hammock | 2 | 225 | 229
7 | Hardwood swamp | 2 | 101 | 105
8 | Graminoid marsh | 4 | 423 | 431
9 | Spartina marsh | 5 | 510 | 520
10 | Cattail marsh | 4 | 396 | 404
11 | Salt marsh | 4 | 411 | 419
12 | Mud flats | 5 | 493 | 503
13 | Water | 9 | 909 | 927
– | Total | 50 | 5111 | 5211
Table 4. Performance of the SMLP structure with different k values on the IP dataset (1% training samples).
Metric | MLP | SMLP, k = 3 | SMLP, k = 5 | SMLP, k = 7 | SMLP, k = 9 | SMLP, k = 11 | SMLP, k = 13 | SMLP, k = 15
OA | 0.8729 | 0.8556 | 0.8738 | 0.8811 | 0.8958 | 0.8920 | 0.8778 | 0.8781
AA | 0.8379 | 0.8371 | 0.8739 | 0.8768 | 0.8791 | 0.8701 | 0.8461 | 0.8473
Kappa | 0.8452 | 0.8341 | 0.8555 | 0.8641 | 0.8817 | 0.8767 | 0.8601 | 0.8604
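Table 4 sweeps the hyperparameter k of the SMLP structure against a plain MLP baseline, with the best scores obtained at k = 9. The definition of SMLP is given in the main text and is not reproduced here, so the sketch below uses a generic ECA-style channel-attention block as a purely illustrative stand-in, where k is the kernel size of a 1-D convolution across the channel axis and therefore controls the cross-channel interaction range. Treat the class name and its internals as assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ChannelAttention1D(nn.Module):
    """Illustrative stand-in: 1-D convolutional channel attention with kernel size k."""
    def __init__(self, k=9):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                                 # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                            # global average pooling -> (B, C)
        w = self.conv(w.unsqueeze(1)).squeeze(1)          # 1-D convolution across channels
        w = torch.sigmoid(w).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1) channel weights
        return x * w                                      # reweight the input channels

# Sweeping k over the values reported in Table 4 (training/evaluation omitted).
for k in range(3, 17, 2):
    out = ChannelAttention1D(k)(torch.randn(2, 200, 9, 9))  # 200 bands, 9x9 patch (illustrative)
    print(k, out.shape)
```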
Table 5. The classification results (%) of all compared methods on the IP dataset.
Class | 2D-CNN | 3D-CNN | Res-Net | SSRN | A2S2K | MAFN | DBDA | A-SPN | AETF-Net
1 | 1.56 ± 1.02 | 18.44 ± 6.13 | 54.53 ± 36.11 | 50.19 ± 23.62 | 55.93 ± 22.66 | 77.81 ± 25.78 | 79.60 ± 27.86 | 90.00 ± 15.18 | 54.52 ± 9.96
2 | 40.77 ± 2.93 | 45.79 ± 7.34 | 40.04 ± 16.46 | 74.40 ± 8.40 | 76.92 ± 6.63 | 74.17 ± 8.95 | 79.33 ± 7.79 | 62.86 ± 6.08 | 86.90 ± 1.50
3 | 24.77 ± 2.97 | 25.49 ± 8.13 | 42.65 ± 23.13 | 77.60 ± 9.86 | 77.94 ± 6.60 | 55.09 ± 10.03 | 80.78 ± 11.38 | 49.05 ± 10.71 | 83.74 ± 3.14
4 | 0.26 ± 0.04 | 9.83 ± 4.75 | 62.66 ± 40.38 | 74.03 ± 12.81 | 80.60 ± 8.92 | 48.62 ± 18.67 | 82.94 ± 8.13 | 24.43 ± 8.10 | 97.36 ± 2.50
5 | 57.87 ± 3.62 | 46.03 ± 15.22 | 65.94 ± 31.15 | 87.63 ± 12.55 | 95.92 ± 5.81 | 87.47 ± 7.92 | 98.71 ± 1.95 | 78.08 ± 6.53 | 97.25 ± 1.49
6 | 97.60 ± 0.42 | 89.44 ± 6.98 | 46.11 ± 28.57 | 92.34 ± 4.58 | 91.85 ± 5.66 | 87.51 ± 13.69 | 90.69 ± 6.38 | 78.08 ± 3.18 | 87.81 ± 2.30
7 | 0.00 ± 0.00 | 0.00 ± 0.00 | 58.40 ± 30.86 | 59.04 ± 21.82 | 58.54 ± 19.17 | 52.58 ± 30.00 | 33.31 ± 16.24 | 100.00 ± 0.00 | 71.37 ± 10.42
8 | 99.87 ± 0.22 | 93.33 ± 3.65 | 80.61 ± 20.02 | 96.90 ± 4.39 | 99.57 ± 0.41 | 92.33 ± 10.09 | 99.29 ± 1.54 | 81.88 ± 10.61 | 99.96 ± 0.13
9 | 0.00 ± 0.00 | 0.00 ± 0.00 | 11.64 ± 23.12 | 31.75 ± 16.78 | 33.72 ± 8.84 | 48.42 ± 27.89 | 63.84 ± 18.03 | 85.00 ± 20.19 | 52.03 ± 7.19
10 | 35.06 ± 1.99 | 44.59 ± 11.17 | 52.04 ± 31.21 | 73.35 ± 13.11 | 84.78 ± 5.44 | 72.77 ± 14.16 | 78.25 ± 11.82 | 55.07 ± 6.02 | 84.60 ± 1.84
11 | 80.36 ± 2.73 | 60.67 ± 11.00 | 37.55 ± 5.85 | 78.34 ± 5.61 | 69.92 ± 5.61 | 83.71 ± 5.58 | 79.11 ± 8.29 | 92.80 ± 3.61 | 95.48 ± 1.57
12 | 20.87 ± 4.07 | 20.81 ± 6.84 | 42.60 ± 21.51 | 76.38 ± 8.98 | 82.96 ± 14.12 | 59.53 ± 13.41 | 81.33 ± 19.32 | 39.30 ± 7.79 | 94.33 ± 1.78
13 | 82.76 ± 5.03 | 64.78 ± 14.12 | 70.15 ± 30.02 | 90.85 ± 4.00 | 91.84 ± 4.22 | 78.31 ± 16.10 | 91.57 ± 7.15 | 98.72 ± 0.89 | 85.81 ± 2.89
14 | 99.38 ± 0.17 | 90.39 ± 5.99 | 67.80 ± 18.49 | 92.24 ± 4.65 | 90.24 ± 4.65 | 96.89 ± 3.08 | 92.97 ± 7.18 | 99.67 ± 0.49 | 91.84 ± 1.25
15 | 15.18 ± 2.97 | 13.39 ± 6.86 | 42.51 ± 35.54 | 76.51 ± 15.15 | 81.60 ± 10.79 | 66.10 ± 15.07 | 87.68 ± 12.96 | 36.74 ± 11.49 | 89.80 ± 1.98
16 | 9.78 ± 4.24 | 10.55 ± 8.26 | 77.51 ± 38.93 | 78.14 ± 5.96 | 76.33 ± 12.19 | 87.55 ± 15.26 | 75.54 ± 11.83 | 98.57 ± 1.71 | 69.40 ± 7.19
OA | 60.40 ± 5.01 | 54.66 ± 2.18 | 36.06 ± 6.01 | 78.93 ± 3.25 | 79.20 ± 1.94 | 75.80 ± 2.67 | 82.19 ± 4.27 | 74.68 ± 1.31 | 89.58 ± 0.36
AA | 41.63 ± 6.43 | 39.60 ± 2.24 | 53.30 ± 12.61 | 75.60 ± 4.03 | 78.04 ± 3.60 | 73.05 ± 5.59 | 80.93 ± 4.81 | 74.21 ± 1.60 | 87.39 ± 0.51
Kappa | 53.31 ± 5.66 | 47.93 ± 2.35 | 23.59 ± 6.49 | 75.89 ± 3.42 | 75.90 ± 2.30 | 72.44 ± 3.02 | 79.54 ± 4.99 | 70.30 ± 4.99 | 88.17 ± 0.49
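Tables 5–7 summarize each method by per-class accuracy together with overall accuracy (OA), average accuracy (AA), and the Kappa coefficient, reported as mean ± standard deviation over repeated runs. For reference, the snippet below computes the three summary metrics from a confusion matrix using their standard definitions; it is a generic sketch, not the authors' evaluation code.

```python
import numpy as np

def classification_metrics(conf):
    """OA, AA, and Cohen's Kappa from a square confusion matrix (rows = true, columns = predicted)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                                    # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))                 # mean of per-class accuracies
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / total ** 2  # expected chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Toy 3-class example.
conf = np.array([[50, 2, 1],
                 [4, 45, 3],
                 [2, 1, 40]])
print([round(float(m), 4) for m in classification_metrics(conf)])
```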
Table 6. The classification results (%) of all compared methods on the UP dataset.
Class | 2D-CNN | 3D-CNN | Res-Net | SSRN | A2S2K | MAFN | DBDA | A-SPN | AETF-Net
1 | 94.68 ± 1.14 | 88.15 ± 3.47 | 67.75 ± 10.04 | 89.58 ± 3.43 | 92.83 ± 3.33 | 96.99 ± 0.63 | 95.90 ± 6.15 | 97.15 ± 1.05 | 94.95 ± 2.11
2 | 97.41 ± 0.33 | 87.63 ± 10.02 | 83.75 ± 7.10 | 97.49 ± 1.31 | 97.60 ± 1.51 | 99.46 ± 0.47 | 99.17 ± 1.08 | 99.36 ± 0.44 | 99.62 ± 0.12
3 | 58.24 ± 2.61 | 76.79 ± 9.91 | 72.24 ± 12.16 | 80.21 ± 14.30 | 84.82 ± 6.10 | 95.72 ± 2.80 | 96.23 ± 3.07 | 74.21 ± 6.31 | 96.54 ± 1.46
4 | 85.19 ± 1.01 | 88.80 ± 4.98 | 98.16 ± 1.82 | 98.57 ± 1.76 | 99.40 ± 0.49 | 98.19 ± 0.72 | 97.02 ± 1.05 | 92.89 ± 1.51 | 98.49 ± 0.10
5 | 100.00 ± 0.00 | 98.05 ± 1.51 | 98.48 ± 2.17 | 99.25 ± 0.77 | 99.55 ± 0.64 | 99.33 ± 0.77 | 98.95 ± 1.81 | 100.00 ± 0.00 | 99.39 ± 0.38
6 | 70.90 ± 1.39 | 69.08 ± 16.21 | 93.17 ± 5.03 | 96.13 ± 2.96 | 98.43 ± 1.36 | 98.42 ± 0.32 | 98.74 ± 1.19 | 86.76 ± 5.43 | 98.78 ± 0.33
7 | 48.72 ± 4.60 | 69.68 ± 16.77 | 76.18 ± 16.83 | 94.05 ± 8.84 | 97.15 ± 2.45 | 94.05 ± 6.46 | 97.82 ± 4.32 | 85.53 ± 7.90 | 98.96 ± 1.00
8 | 74.00 ± 2.48 | 83.10 ± 17.54 | 74.55 ± 9.03 | 88.65 ± 3.79 | 86.84 ± 4.31 | 93.40 ± 3.72 | 90.12 ± 3.82 | 91.08 ± 5.33 | 86.94 ± 4.25
9 | 93.79 ± 2.57 | 96.72 ± 2.02 | 89.03 ± 15.01 | 96.62 ± 3.74 | 98.31 ± 0.92 | 95.39 ± 1.91 | 98.42 ± 1.51 | 94.85 ± 2.65 | 97.09 ± 1.91
OA | 87.54 ± 3.58 | 84.66 ± 4.01 | 79.73 ± 4.19 | 94.12 ± 1.81 | 95.51 ± 1.05 | 96.45 ± 0.64 | 96.41 ± 1.86 | 94.61 ± 0.83 | 97.27 ± 0.49
AA | 80.33 ± 8.51 | 84.22 ± 3.03 | 83.70 ± 3.23 | 93.39 ± 2.63 | 94.99 ± 0.96 | 95.66 ± 0.98 | 96.15 ± 1.11 | 91.31 ± 1.43 | 96.75 ± 0.87
Kappa | 83.21 ± 4.83 | 79.96 ± 4.71 | 71.84 ± 6.24 | 92.16 ± 2.43 | 94.02 ± 1.41 | 95.94 ± 0.85 | 95.56 ± 2.50 | 92.80 ± 1.31 | 96.39 ± 0.65
Table 7. The classification results (%) of all compared methods on the KSC dataset.
Class | 2D-CNN | 3D-CNN | Res-Net | SSRN | A2S2K | MAFN | DBDA | A-SPN | AETF-Net
1 | 87.58 ± 4.09 | 76.94 ± 18.98 | 74.17 ± 16.44 | 97.22 ± 4.19 | 95.69 ± 3.16 | 92.21 ± 5.79 | 98.11 ± 2.23 | 95.70 ± 3.91 | 99.94 ± 0.16
2 | 2.49 ± 2.73 | 34.02 ± 17.58 | 76.31 ± 21.21 | 88.00 ± 18.13 | 97.50 ± 4.20 | 78.03 ± 19.35 | 94.05 ± 8.15 | 70.62 ± 10.76 | 96.29 ± 4.91
3 | 36.48 ± 3.87 | 33.04 ± 15.09 | 50.57 ± 22.08 | 77.29 ± 12.88 | 78.61 ± 13.25 | 58.90 ± 10.74 | 75.81 ± 13.66 | 95.45 ± 5.49 | 77.33 ± 10.23
4 | 22.69 ± 1.82 | 25.24 ± 13.64 | 57.08 ± 26.11 | 83.62 ± 12.76 | 91.46 ± 7.17 | 57.87 ± 12.14 | 67.41 ± 25.84 | 45.28 ± 13.19 | 93.37 ± 9.02
5 | 20.25 ± 5.65 | 15.67 ± 10.91 | 37.33 ± 27.51 | 78.59 ± 14.04 | 87.31 ± 11.89 | 84.16 ± 9.91 | 63.63 ± 25.86 | 89.24 ± 9.57 | 87.42 ± 15.21
6 | 20.66 ± 4.71 | 11.78 ± 8.08 | 60.67 ± 32.81 | 84.86 ± 9.89 | 88.46 ± 9.04 | 79.63 ± 17.63 | 81.84 ± 11.88 | 86.46 ± 9.52 | 98.24 ± 2.10
7 | 37.30 ± 8.96 | 20.29 ± 13.40 | 82.37 ± 14.88 | 72.37 ± 18.47 | 74.63 ± 14.66 | 59.58 ± 18.63 | 56.97 ± 16.10 | 100.00 ± 0.00 | 88.81 ± 17.33
8 | 63.40 ± 10.01 | 24.09 ± 9.68 | 51.56 ± 19.49 | 88.67 ± 8.23 | 95.33 ± 5.81 | 71.99 ± 12.56 | 73.75 ± 28.61 | 91.71 ± 10.54 | 98.10 ± 1.27
9 | 73.98 ± 3.98 | 74.16 ± 14.03 | 58.88 ± 16.72 | 94.83 ± 9.10 | 99.04 ± 0.98 | 78.70 ± 10.00 | 79.48 ± 13.38 | 88.25 ± 6.73 | 99.63 ± 0.50
10 | 15.20 ± 6.42 | 25.91 ± 9.71 | 97.13 ± 2.92 | 99.22 ± 1.58 | 99.19 ± 1.36 | 76.07 ± 8.07 | 92.13 ± 9.84 | 99.93 ± 0.11 | 100.00 ± 0.00
11 | 93.86 ± 2.82 | 79.81 ± 6.88 | 96.26 ± 5.82 | 99.48 ± 0.53 | 99.69 ± 0.49 | 95.16 ± 6.55 | 96.14 ± 4.89 | 95.15 ± 2.71 | 99.69 ± 0.77
12 | 36.43 ± 9.61 | 48.15 ± 17.14 | 78.64 ± 20.55 | 96.12 ± 2.83 | 95.63 ± 5.35 | 88.36 ± 6.93 | 85.71 ± 13.67 | 99.20 ± 0.89 | 96.23 ± 3.41
13 | 81.79 ± 6.89 | 96.33 ± 4.19 | 67.87 ± 22.84 | 99.80 ± 0.32 | 96.60 ± 10.12 | 98.06 ± 4.91 | 99.94 ± 0.13 | 100.00 ± 0.00 | 100.00 ± 0.00
OA | 57.50 ± 1.39 | 56.69 ± 6.02 | 62.06 ± 6.73 | 91.98 ± 2.43 | 93.75 ± 3.50 | 81.43 ± 0.03 | 86.40 ± 6.88 | 91.86 ± 1.58 | 96.48 ± 1.10
AA | 46.70 ± 1.99 | 43.49 ± 5.77 | 68.37 ± 5.00 | 89.24 ± 3.09 | 92.24 ± 2.62 | 78.36 ± 0.04 | 81.92 ± 7.76 | 89.00 ± 1.95 | 95.00 ± 1.61
Kappa | 44.70 ± 1.34 | 51.45 ± 6.75 | 56.84 ± 7.88 | 91.07 ± 2.70 | 93.02 ± 3.91 | 79.28 ± 0.04 | 84.84 ± 7.67 | 90.95 ± 1.76 | 96.08 ± 1.23
Table 8. The best ablation study results on the IP dataset (1% training samples).
Method | OA | AA | Kappa
GBAM | 0.7682 | 0.7173 | 0.7344
BSAM | 0.8299 | 0.8096 | 0.8050
LCNN | 0.8685 | 0.8161 | 0.8497
LCNN + GBAM | 0.8723 | 0.8239 | 0.8534
LCNN + BSAM | 0.8857 | 0.8571 | 0.8686
AETF-Net | 0.8958 | 0.8791 | 0.8817
Table 9. The effect of the multi-attention fusion strategy on the IP dataset (1% training samples).
Fusion Level | Strategy | OA | AA | Kappa
Weights | (GBA × BSA) × Input | 0.8958 | 0.8791 | 0.8817
Weights | (GBA + BSA) × Input | 0.8971 | 0.8402 | 0.8782
Maps | (GBAM + BSAM) + Input | 0.8927 | 0.8241 | 0.8779
Maps | (GBAM + BSAM) × Input | 0.8924 | 0.8219 | 0.8777
Maps | (GBAM × BSAM) + Input | 0.8853 | 0.8204 | 0.8810
Maps | (GBAM × BSAM) × Input | 0.8812 | 0.7963 | 0.8647
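Table 9 compares different ways of combining the spectral and spatial attention outputs with the input features, either at the level of attention weights (GBA, BSA) or at the level of attention feature maps (the GBAM and BSAM outputs). The sketch below writes the listed combinations as element-wise operations on broadcastable tensors; the shapes and variable names are illustrative assumptions, not the paper's implementation.

```python
import torch

x = torch.randn(2, 32, 9, 9)                    # input feature map (illustrative shape)
gba = torch.sigmoid(torch.randn(2, 32, 1, 1))   # spectral attention weights (GBA)
bsa = torch.sigmoid(torch.randn(2, 1, 9, 9))    # spatial attention weights (BSA)
gbam_out, bsam_out = gba * x, bsa * x           # attention feature maps (GBAM/BSAM outputs)

fusions = {
    "(GBA x BSA) x Input":   (gba * bsa) * x,            # weight-level, multiplicative fusion
    "(GBA + BSA) x Input":   (gba + bsa) * x,            # weight-level, additive fusion
    "(GBAM + BSAM) + Input": (gbam_out + bsam_out) + x,  # map-level fusions
    "(GBAM + BSAM) x Input": (gbam_out + bsam_out) * x,
    "(GBAM x BSAM) + Input": (gbam_out * bsam_out) + x,
    "(GBAM x BSAM) x Input": (gbam_out * bsam_out) * x,
}
for name, out in fusions.items():
    print(name, tuple(out.shape))
```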