Article

A Hybrid-Scale Feature Enhancement Network for Hyperspectral Image Classification

1
National Key Laboratory of Optical Field Manipulation Science and Technology, Chinese Academy of Sciences, Chengdu 610209, China
2
Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China
3
Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(1), 22; https://doi.org/10.3390/rs16010022
Submission received: 27 October 2023 / Revised: 11 December 2023 / Accepted: 16 December 2023 / Published: 20 December 2023

Abstract

Due to their powerful feature extraction ability, convolutional neural network (CNN)-based approaches have achieved tremendous success in hyperspectral image (HSI) classification. However, previous works have been dedicated to constructing deeper or wider networks to obtain exceptional classification performance, and as the layers get deeper, the vanishing-gradient problem impedes the convergence stability of network models. Additionally, previous works usually focus on fixed-scale convolutional kernels or multiple receptive fields with varying scales to capture features, which leads to the underutilization of information and weakens feature learning. To remedy the above issues, we propose an innovative hybrid-scale feature enhancement network (HFENet) for HSI classification. Specifically, HFENet contains two key modules: a hybrid-scale feature extraction block (HFEB) and a shuffle attention enhancement block (SAEB). HFEB is designed to excavate spectral–spatial structure information of distinct scales, types, and branches, which augments the diversity of spectral–spatial features while modeling their global long-range dependencies. SAEB is devised to adaptively recalibrate spectral-wise and spatial-wise feature responses to generate purified spectral–spatial information, which effectively filters redundant information and noisy pixels and is conducive to enhancing classification performance. A series of experiments conducted on three public hyperspectral datasets showed that, compared with several sophisticated baselines, the OA, AA, and Kappa accuracies all exceed 99%, demonstrating that the presented HFENet achieves state-of-the-art performance.


1. Introduction

Hyperspectral imaging, a spectrum–image merging technology combining spectral detection and imaging techniques, utilizes diverse sensors to distinguish the electromagnetic waves reflected from objects and precisely describes their physical characteristics [1,2]. A hyperspectral image (HSI) possesses plentiful spectral and spatial information and has been widely adopted in extensive application areas, such as precision agriculture [3], environmental monitoring [4], mineral exploration [5], and urban planning [6]. HSI classification, which assigns a unique category label to each spatial pixel, has become a research hotspot in pattern recognition and image processing [7,8,9,10]. However, HSI classification remains challenging, particularly because of spatial variability and the curse of dimensionality. The former is induced by factors such as illumination angle [11] and atmospheric interference [12], which cause the same object to present different characteristics. The latter is caused by the imbalance between high-dimensional features and limited samples, which easily results in overfitting. Consequently, how to capture more representative and discriminative features from the original data is a critical problem in HSI classification.
A wealth of HSI classification techniques has been presented, typically focusing on two stages: feature engineering and classifier training. Feature engineering aims to reduce the spectral dimension of HSI data and capture informative features or bands, and generally comprises two categories of methods: feature selection and feature extraction. Feature selection retains important spectral bands for subsequent tasks and discards unnecessary ones; representative methods include the spectral angle mapper (SAM) [13], the Jeffries–Matusita distance [14], and the Bhattacharyya distance [15]. Feature extraction facilitates discriminating categories by converting HSI data from a high-dimensional space to a low-dimensional space; typical methods include principal component analysis (PCA) [16], independent component analysis (ICA) [17], and minimum noise fraction (MNF) [18]. Features generated by feature engineering are then fed to classifiers, such as the support vector machine (SVM) [19], manifold ranking (MR) [20], and random forests (RF) [21]. However, the classification methods mentioned above only utilize spectral information and do not fully consider spatial information in the target area. Many researchers have demonstrated that, compared with spectral-feature-based methods, making full use of both spatial and spectral information helps to strengthen the classification results. In general, these methods exploit multi-kernel learning (MKL) [22], morphological profiles (MP) [23], sparse representation (SR) [24], etc., to extract spatial features. Nevertheless, both spectral-feature-based and spectral–spatial-feature-based classification methods depend on hand-crafted features with poor generalization ability and limited representation ability, which severely degrades classification performance.
Recently, owing to their powerful representation learning ability, deep learning (DL)-based methods have achieved tremendous advancements in HSI classification. For example, Chen et al. applied a multilayer stacked autoencoder (SAE) to extract deep features for HSI classification [25]. To obtain spatial–spectral features, Li et al. utilized a multilayer deep belief network (DBN) and a single restricted Boltzmann machine (RBM) [26]. Hong et al. designed a supervised mini graph convolutional network (GCN) for HSI classification [27]. To provide new insight into HSI classification, Hang et al. devised a multitask generative adversarial network (GAN) [28]. Hang et al. constructed a cascaded recurrent neural network (RNN) to fully excavate spectral information for high-accuracy HSI classification [29]. Li et al. built a two-stream convolutional neural network (CNN) to simultaneously capture spectral and spatial features [30]. To model the global relationships of HSI, Zu et al. proposed a cascaded convolution-based transformer [31]. In the abovementioned network models, the CNN is a considerable and indispensable module [32,33,34,35,36]. Benefiting from weight sharing and local connectivity, Hu et al. built a 1D CNN model to explore spectral information [37]. Xu et al. designed a pixel-to-pixel, end-to-end spectral–spatial fully convolutional network for HSI classification [38]. To tackle information leakage during training, Zou et al. constructed a spectral–spatial 3D fully convolutional network, which exploits spectral–spatial joint features and semantic information [39]. Zhang et al. presented a CNN based on varying region inputs to effectively extract contextual interactional information [40]. A multiscale and cross-level attention learning network was devised by Xu et al., which can use multiscale information from local and global views [41]. Although CNN-based classification methods have demonstrated remarkable success, some drawbacks remain. More specifically, the squared region of the convolutional kernel severely limits the capacity of CNN-based methods to acquire long-range dependencies. Additionally, the informative features captured by CNNs commonly contain redundant features and noise, which are adverse to classification performance. Consequently, how to obtain significant features to enhance HSI classification remains an urgent problem.
Lately, many promising techniques have been integrated into CNNs, such as neural architecture search strategies [42], multiple receptive fields with varying scales [43], sample augmentation [44], residual learning [45], attention mechanisms [46], and dense connections [47]. From the perspective of the imaging procedure, Chen et al. built a virtual sample augmentation approach to create training data [48]. Cao et al. constructed a compressed CNN that effectively enhances the classification performance of the student network by using virtual samples to describe the teacher network's classification boundary [49]. To sufficiently exploit information from varying scales of HSI, Xie et al. built a multiscale densely connected convolutional network [50]. Wang et al. used a multiscale ghost module to capture more distinguishable information with simple operations [51]. Zhu et al. designed spectral and spatial attention blocks to adaptively emphasize important spectral bands and spatial pixels [52]. Roy et al. presented an improved spectral–spatial ResNet to obtain spectral–spatial joint information [53]. Zhang et al. devised cascaded parallel improved residual blocks to capture spectral–spatial features [54]. To reduce the computational cost and obtain better classification accuracy, Dong et al. combined dense connections with attention modules [55].
In this article, we present a hybrid-scale feature enhancement network (HFENet) for HSI classification. HFENet contains two important submodules: the hybrid-scale feature extraction block (HFEB) and the shuffle attention enhancement block (SAEB). HFEB is devised to extract spectral–spatial structure information of different types and scales, thereby modeling the global long-range dependencies of spectral–spatial features. HFEB consists of two parallel branches, and the core component of each branch is a heterogeneous feature refine block (HFRB), where the upper branch has two HFRBs, the lower branch has one HFRB, and the convolutional kernel size of each HFRB is different. HFRB is designed to capture the local dependencies of spectral–spatial features. SAEB is constructed to effectively dispel redundant information and noisy pixels, further strengthening the discrimination ability of spectral–spatial features for HSI classification. In this context, the main contributions of this work are as follows:
(1)
We construct a heterogeneous feature refine block (HFRB) to capture the internal correlations of different channels and the external interactions of all channels, which complement each other, thereby enhancing the local dependencies of spectral–spatial features.
(2)
Different from existing multiscale feature extraction strategies, our designed hybrid-scale feature extraction block (HFEB) exploits multiple HFRBs to obtain more discriminative and representative spectral–spatial structure information of distinct scales, types, and branches, which can not only augment the multiplicity of spectral–spatial features but also model the global long-range dependencies of spectral–spatial features.
(3)
To effectively fade out the redundant information and noisy pixels, we devise a shuffle attention enhancement block (SAEB) to adaptively recalibrate spectral-wise and spatial-wise feature responses to generate the purified spectral–spatial information, which is conducive to enhancing the classification performance.
The rest of this work is formulated as follows. Section 2 describes the proposed approach in detail. Section 3 provides the relevant experimental results and comparisons with several state-of-the-art methods. Section 4 concludes this work.

2. Methods

2.1. Framework of HFENet Model

Figure 1 graphically illustrates the framework of the presented HFENet, which is composed of an initial block, two HFEBs, a SAEB, and an output block. First, considering the classical curse-of-dimensionality issue of HSI, we apply the PCA algorithm to the raw HSI to reduce the number of spectral bands and alleviate the high correlation between them, retaining 40 spectral bands. Second, to reduce the training time and fully exploit the fact that HSI contains both spectral and spatial data, a 3D data cube $x \in \mathbb{R}^{7 \times 7 \times 40}$ consisting of the target pixel and its adjacent pixels is used as the input of HFENet, where 7, 7, and 40 are the height, width, and spectral dimensions, respectively. Third, the 3D data cube is fed to the initial block to obtain general spectral–spatial features. The initial block contains a 3D convolutional layer with 128 filters of size $1 \times 1 \times 40$, a 3D convolutional layer with 128 filters of size $3 \times 3 \times 1$, two BN layers, and two PReLU activation functions. Then, the initial spectral–spatial features are passed to two HFEBs to extract more discriminative and representative global long-range dependencies of spectral–spatial features. Furthermore, these features are passed to a SAEB to filter unnecessary information and dispel the interference of noise, thus achieving spectral–spatial feature purification. Finally, the output block is utilized to generate the category probabilities; it involves a 2D global average pooling (GAP) operation, two fully connected layers, two dropout layers, and a softmax layer. In addition, to avoid overfitting, L2 regularization is introduced into the proposed HFENet. Below, we describe the building blocks of HFENet: the heterogeneous feature refine block (HFRB), the hybrid-scale feature extraction block (HFEB), and the shuffle attention enhancement block (SAEB).
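To make the data flow concrete, the following is a minimal tf.keras sketch of the initial block under the settings described above (a 7 × 7 × 40 input cube and 128 filters per 3D convolution). The function name initial_block and the final reshape to a 2D feature map for the subsequent HFEBs are our own assumptions rather than details stated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def initial_block(patch_size=7, n_bands=40, n_filters=128):
    """Sketch of the initial block: two Conv3D+BN+PReLU stages on a 7x7x40 cube."""
    inp = layers.Input(shape=(patch_size, patch_size, n_bands, 1))
    # 1x1x40 kernels collapse the spectral axis into 128 feature maps
    x = layers.Conv3D(n_filters, kernel_size=(1, 1, n_bands))(inp)
    x = layers.BatchNormalization()(x)
    x = layers.PReLU()(x)
    # 3x3x1 kernels mix spatial context within each feature map
    x = layers.Conv3D(n_filters, kernel_size=(3, 3, 1), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.PReLU()(x)
    # Squeeze the depth-1 spectral axis so later blocks can use 2D convolutions
    x = layers.Reshape((patch_size, patch_size, n_filters))(x)
    return tf.keras.Model(inp, x, name="initial_block")
```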

2.2. Heterogeneous Feature Refine Block

In recent years, to meet the requirements of higher-quality computer vision tasks, numerous researchers have improved network performance by exploring features of different channels or layers. For example, Gao et al. constructed a triple-branch attention block to capture interactions across different spatial regions, spectral bands, and channels [56]. Wang et al. proposed an attention mechanism module to obtain the weight information of the channel, spectral, and spatial dimensions, respectively [57]. To improve single-image super-resolution (SISR) performance, Zhang et al. built a residual channel attention mechanism, which not only reinforces the interdependencies of varying channels but also adaptively discards plentiful low-frequency features [58]. Inspired by the above approaches, we construct an innovative heterogeneous feature refine block (HFRB) to strengthen the internal and external interactions of different channels and layers while enriching the local dependencies of spectral–spatial features. HFRB adopts a heterogeneous architecture with two parallel units: a symmetric residual unit (SRU) and a complementary residual unit (CRU). The architecture of the devised HFRB is provided in Figure 2. The input data of HFRB are denoted by $X \in \mathbb{R}^{H \times W \times C}$, where $H$, $W$, and $C$ are the height, width, and channel dimensions, respectively.
Symmetric Residual Unit: SRU adopts a pair of twin branches to boost the internal relations of different channels. More precisely, we split the input data $X$ into two sub-branches: $X_1 \in \mathbb{R}^{H \times W \times (C/2)}$ and $X_2 \in \mathbb{R}^{H \times W \times (C/2)}$. Each sub-branch involves two Conv+BN+ReLU layers and one Conv+BN layer, where the 2D convolutional operation with $C/2$ filters of size $n \times n$ is utilized to excavate the spectral–spatial features of the corresponding channels, BN is utilized to stabilize and strengthen the network, and ReLU introduces non-linearity into the feature maps. In addition, to enhance information propagation from shallow to deep layers and avoid information loss, we apply a skip connection to each sub-branch. The formulas of the upper sub-branch are as follows:
$U_{11} = F_1(X_1)$
$U_{12} = F_1(U_{11})$
$U_{13} = F_2(U_{12})$
$U = U_{13} + X_1$
where $X_1$ and $U$ are the input and output of the upper sub-branch, $F_1(\cdot)$ represents the composite Conv+BN+ReLU function, $F_2(\cdot)$ represents the composite Conv+BN function, and $+$ is the element-wise addition operation.
The formulas of the bottom sub-branch mirror those of the upper sub-branch:
$B_{21} = F_1(X_2)$
$B_{22} = F_1(B_{21})$
$B_{23} = F_2(B_{22})$
$B = B_{23} + X_2$
where $X_2$ and $B$ are the input and output of the bottom sub-branch. Finally, we exploit a plain concatenation operation to aggregate the output features of the two sub-branches and use ReLU to strengthen the non-linear ability of the network:
$O_1 = \sigma([U, B])$
where $O_1$ stands for the output data of SRU, $[\cdot]$ refers to the concatenation operation, and $\sigma$ is the ReLU activation function.
Complementary Residual Unit: CRU is devised to enhance the robustness of spectral–spatial features by learning the external correlations of all channels, which complements SRU. CRU contains two Conv+BN+ReLU layers and one Conv+BN layer, where, unlike SRU, the 2D convolutional operation uses $C$ filters of size $n \times n$ to extract spectral–spatial features over the entire set of channels. Similarly, a skip connection is also introduced into CRU to avert the loss of information. Finally, the ReLU activation function is exploited to boost the non-linear ability of the model. The formulas of CRU are as follows:
$C_{11} = F_1(X)$
$C_{12} = F_1(C_{11})$
$C_{13} = F_2(C_{12})$
$C_{14} = \sigma(C_{13})$
$O_2 = C_{14} + X$
SRU and CRU obtain the internal and external correlations of the split channels and all channels, respectively, and complement each other. Therefore, we apply element-wise addition to the outputs of SRU and CRU to generate richer, deeper, and wider local dependencies of spectral–spatial features. The formula of HFRB is as follows:
$O = O_1 + O_2$
where $O$ represents the output data of HFRB, and $+$ is the element-wise addition operation.
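As a reference, the HFRB described by the formulas above could be written in tf.keras as the following sketch, with the kernel size k selecting the 3 × 3, 5 × 5, or 7 × 7 variant used later by HFEB. The conv_bn helper, the function names, and the assumption that the channel count is even are ours.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn(x, filters, k, relu=True):
    """Conv2D + BN (+ optional ReLU) helper used by both SRU and CRU."""
    x = layers.Conv2D(filters, k, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x) if relu else x

def hfrb(x, k):
    """Sketch of the heterogeneous feature refine block with kernel size k."""
    c = x.shape[-1]
    # Symmetric residual unit: split channels into two twin residual sub-branches
    x1, x2 = tf.split(x, 2, axis=-1)
    def sub_branch(t):
        u = conv_bn(t, c // 2, k)
        u = conv_bn(u, c // 2, k)
        u = conv_bn(u, c // 2, k, relu=False)
        return u + t  # skip connection inside the sub-branch
    o1 = layers.ReLU()(tf.concat([sub_branch(x1), sub_branch(x2)], axis=-1))
    # Complementary residual unit: same pattern applied over all channels
    u = conv_bn(x, c, k)
    u = conv_bn(u, c, k)
    u = conv_bn(u, c, k, relu=False)
    o2 = layers.ReLU()(u) + x
    # Element-wise fusion of internal (SRU) and external (CRU) correlations
    return o1 + o2
```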

2.3. Hybrid-Scale Feature Extraction Block

With the increased demand of HSI classification tasks, many scholars have focused on utilizing multiple receptive fields to explore rich spectral–spatial information, thereby achieving remarkable performance. For example, Zhang et al. designed a multi-scale dense network, which can not only fully use the varying scale information of the network structure but also aggregate the scale information of the entire network for HSI classification [59]. Xie et al. built a multiscale densely connected convolutional network to effectively capture spectral–spatial features of multiple scales [50]. To tackle large intraclass variability, Safari et al. constructed a multiscale deep learning model combining diverse CNNs for HSI classification [43]. Compared with the fixed-scale extraction manner, utilizing multiple different receptive fields contributes to enhancing HSI classification performance. Inspired by the above approaches, we devise an innovative hybrid-scale feature extraction block (HFEB) that exploits spectral–spatial structure information of distinct types and scales to increase the diversity of spectral–spatial features while modeling their global long-range dependencies. The structure of the proposed HFEB is provided in Figure 3.
Different from prior works based on convolution operations with multiple different receptive fields, the presented HFEB utilizes the functional HFRBs to obtain spectral–spatial information of distinct types and scales. More specifically, HFEB is composed of two parallel branches: the upper branch contains two HFRBs whose 2D convolutional operations use 3 × 3 and 5 × 5 kernels, respectively, and the lower branch contains one HFRB whose 2D convolutional operations use a 7 × 7 kernel. The structure of HFRB is provided in Figure 2. Then, we utilize the element-wise addition operation to aggregate the output data of the two branches, thereby obtaining the global long-range dependencies of spectral–spatial features. Furthermore, to avert the loss of information, a skip connection is also applied to HFEB. The formulas of HFEB are as follows:
$y_1 = \mathrm{HFRB}_{3 \times 3}(X)$
$y_2 = \mathrm{HFRB}_{5 \times 5}(y_1)$
$y_3 = \mathrm{HFRB}_{7 \times 7}(X)$
$y = X + y_2 + y_3$
where $X$ and $y$ are the input and output data of HFEB, $\mathrm{HFRB}(\cdot)$ refers to the entire treatment process of HFRB, and the subscripts refer to the convolutional kernel size of each HFRB. $y_1$, $y_2$, and $y_3$ represent the output data of the respective HFRBs.
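Reusing the hfrb helper sketched in Section 2.2, the HFEB defined by the formulas above reduces to a few lines; this is an illustrative sketch rather than the authors' exact code.

```python
def hfeb(x):
    """Sketch of the hybrid-scale feature extraction block built from three HFRBs."""
    # Upper branch: two stacked HFRBs with 3x3 and 5x5 kernels
    y1 = hfrb(x, k=3)
    y2 = hfrb(y1, k=5)
    # Lower branch: one HFRB with a 7x7 kernel
    y3 = hfrb(x, k=7)
    # Aggregate both branches and keep a skip connection from the input
    return x + y2 + y3
```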

2.4. Shuffle Attention Enhancement Block

The attention mechanism, which mimics the human perception system, is one of the most distinguished ideas in the DL domain and is utilized to focus on the regions more relevant to computer vision tasks while filtering irrelevant ones. For example, Dong et al. constructed an attention module composed of spatial and spectral axes to emphasize the salient spatial–spectral information [60]. Guo et al. devised a spectral–spatial connected attention mechanism, which integrates a spatial attention module and a spectral attention module to enhance the distinguishing capacity of spatial pixels and spectral bands [55]. Zhu et al. built a spectral attention module to obtain useful spectral bands and a spatial attention module for the adaptive selection of spatial pixels [52]. Inspired by the above approaches, we design a shuffle attention enhancement block (SAEB) to adaptively recalibrate spectral-wise and spatial-wise feature responses, which effectively eliminates redundant information and noisy pixels, thereby heightening the discriminative ability of spectral–spatial features. The structure of the proposed SAEB is provided in Figure 4. As seen in Figure 4, the SAEB is composed of four prominent parts: feature grouping, a spectral enhancement branch, a spatial enhancement branch, and feature aggregation.
Feature Grouping: $X \in \mathbb{R}^{H \times W \times C}$ denotes the input data of SAEB, where $H$, $W$, and $C$ are the height, width, and channel dimensions, respectively. The input data $X$ are divided into $n$ subsets, and each subset is further split into two parts along the spectral dimension: $x_1 \in \mathbb{R}^{H \times W \times (C/2n)}$ and $x_2 \in \mathbb{R}^{H \times W \times (C/2n)}$. $x_1$ and $x_2$ are fed into the spectral enhancement branch and the spatial enhancement branch, respectively.
Spectral Enhancement Branch: This branch reassigns weights to spectral bands, emphasizing meaningful bands and fading out irrelevant ones. Concretely, first, 2D global average pooling (GAP) is used to squeeze $x_1 \in \mathbb{R}^{H \times W \times (C/2n)}$ into a descriptor of size $1 \times 1 \times (C/2n)$. Second, two fully connected layers, a ReLU activation function, and a sigmoid activation function are adopted to generate the spectral band weights $W_{spectral}$. Finally, $W_{spectral}$ is multiplied by $x_1$ to obtain the local spectral-wise feature responses $X_{spectral}$. The formulas of the spectral enhancement branch are as follows:
$W_{spectral} = \delta(FC_2(\sigma(FC_1(\mathrm{GAP}(x_1)))))$
$X_{spectral} = x_1 \otimes W_{spectral}$
where $\sigma$ and $\delta$ are the ReLU and sigmoid activation functions, respectively, and $\otimes$ is the element-wise multiplication operation.
Spatial Enhancement Branch: This branch reassigns weights to spatial pixels, strengthening pixels that are conducive to classification in the pixel-centered neighborhood, or those from the same class as the center pixel, and suppressing unimportant ones. Specifically, first, an average pooling operation and a max pooling operation are adopted to convert $x_2 \in \mathbb{R}^{H \times W \times (C/2n)}$ into $x_{21} \in \mathbb{R}^{H \times W \times 1}$ and $x_{22} \in \mathbb{R}^{H \times W \times 1}$, respectively. Second, the average-pooled feature and max-pooled feature are aggregated by a concatenation operation. Third, the aggregated features are passed to a 2D convolutional layer with a $3 \times 3$ kernel to generate the spatial pixel weights $W_{spatial}$, which are then sent to a BN layer to strengthen the classification performance. Finally, $W_{spatial}$ is multiplied by $x_2$ to obtain the local spatial-wise feature responses $X_{spatial}$, and a ReLU activation introduces non-linearity into the feature map. The formulas of the spatial enhancement branch are as follows:
$W_{spatial} = \mathrm{Conv}([\mathrm{AP}(x_2), \mathrm{MP}(x_2)])$
$X_{spatial} = \sigma(\mathrm{BN}(W_{spatial}) \otimes x_2)$
where $\mathrm{AP}(\cdot)$ and $\mathrm{MP}(\cdot)$ are the average pooling and max pooling operations, respectively, $\mathrm{Conv}(\cdot)$ is the 2D convolutional operation, $\mathrm{BN}(\cdot)$ is the BN layer, and $\sigma$ is the ReLU activation function. $[\cdot]$ and $\otimes$ denote the concatenation and element-wise multiplication operations, respectively.
Feature Aggregation: The concatenation operation integrates the local spectral-wise feature responses $X_{spectral}$ and the local spatial-wise feature responses $X_{spatial}$ into a new subset, thus obtaining the local spectral–spatial feature responses. To encourage cross-information flow between different subsets, we also introduce a shuffle unit into the proposed SAEB. Finally, we aggregate all local spectral–spatial feature responses to obtain the global spectral–spatial feature responses.
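The following is a minimal tf.keras sketch of SAEB under the description above, covering feature grouping, the two enhancement branches, and the channel shuffle. The group number and channel ratio are exposed as parameters (the defaults of 8 and 2 follow the parameter analysis in Section 3.3); the exact layer widths and ordering are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def saeb(x, groups=8, ratio=2):
    """Sketch of the shuffle attention enhancement block."""
    c = x.shape[-1]
    gc = c // groups                       # channels per group
    outs = []
    for g in tf.split(x, groups, axis=-1):
        x1, x2 = tf.split(g, 2, axis=-1)   # spectral half / spatial half
        # Spectral enhancement: GAP -> FC -> ReLU -> FC -> sigmoid -> reweight bands
        w = layers.GlobalAveragePooling2D()(x1)
        w = layers.Dense(max((gc // 2) // ratio, 1), activation="relu")(w)
        w = layers.Dense(gc // 2, activation="sigmoid")(w)
        x_spec = x1 * tf.reshape(w, (-1, 1, 1, gc // 2))
        # Spatial enhancement: avg/max maps -> concat -> 3x3 conv -> BN -> reweight pixels
        avg = tf.reduce_mean(x2, axis=-1, keepdims=True)
        mx = tf.reduce_max(x2, axis=-1, keepdims=True)
        w = layers.Conv2D(1, 3, padding="same")(tf.concat([avg, mx], axis=-1))
        x_spat = layers.ReLU()(layers.BatchNormalization()(w) * x2)
        outs.append(tf.concat([x_spec, x_spat], axis=-1))
    y = tf.concat(outs, axis=-1)
    # Channel shuffle so information flows across the groups
    h, w_ = y.shape[1], y.shape[2]
    y = tf.reshape(y, (-1, h, w_, groups, gc))
    y = tf.transpose(y, [0, 1, 2, 4, 3])
    return tf.reshape(y, (-1, h, w_, c))
```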

3. Experimental Results and Discussion

3.1. Hyperspectral Datasets and Setup

To estimate the classification performance of our developed HFENet, we adopted three publicly available datasets, i.e., Pavia University (UP), Indian Pines (IP), and Houston 2013 datasets.
The UP dataset was captured by the ROSIS-3 sensor over the city of Pavia, Italy. The image possesses 610 × 340 pixels with a geometric resolution of 1.3 m. It is composed of 9 categories and 115 spectral bands ranging from about 0.43 to 0.86 μm. The corrected image contains 103 spectral bands after removing 12 noisy bands.
The IP dataset was collected by the AVIRIS sensor over northwestern Indiana, USA. The image involves 145 × 145 pixels with a geometric resolution of 20 m. It comprises 16 categories and 224 spectral bands ranging from about 0.4 to 2.5 μm. The corrected image retains 200 spectral bands after removing 24 bands covering the water absorption region.
The Houston 2013 dataset was obtained by the ITRES CASI-1500 instrument over the University of Houston campus, USA. The image contains 349 × 1905 pixels with a geometric resolution of 2.5 m. It has 15 categories and 144 spectral bands ranging from about 0.38 to 1.05 μm.
Table 1, Table 2 and Table 3 provide the number of samples of each category used for training and testing. The experiments were performed in a TensorFlow 2.3, Keras 2.4.3, CUDA 10.1, and Python 3.6 environment utilizing an Intel(R) Core(TM) i7-9700F CPU (Intel Corporation) and an NVIDIA GeForce RTX 2060 SUPER 6 GB GPU (NVIDIA Corporation), both procured in Chengdu, China. The epoch and batch size influence the classification performance of the proposed method: if they are too small, the training process is unstable and easily disturbed by noisy data; if they are too large, the training time becomes excessive and the learning ability of the model is limited. Therefore, setting suitable values is vital for the proposed HFENet. For the UP, IP, and Houston2013 datasets, the training epochs were set to 100, 200, and 200, respectively, and the batch size was set to 16 for all three. Adam was chosen as the optimizer, and the learning rate was set to 0.0005. The overall accuracy (OA), average accuracy (AA), and Kappa coefficient (Kappa) were used as metrics to evaluate the classification performance.
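For reproducibility, the training configuration above could be expressed with the following tf.keras sketch. Here build_hfenet, x_train, y_train, x_val, and y_val are hypothetical placeholders, and the categorical cross-entropy loss is our assumption, as the paper does not name the loss function.

```python
import tensorflow as tf

# build_hfenet() is a hypothetical constructor assembling the blocks sketched in Section 2
model = build_hfenet(patch_size=7, n_bands=40, n_classes=16)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",   # assumed loss; not stated in the paper
    metrics=["accuracy"],
)
# UP: 100 epochs; IP and Houston2013: 200 epochs; batch size 16 for all three datasets
history = model.fit(x_train, y_train, epochs=200, batch_size=16,
                    validation_data=(x_val, y_val))
```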

3.2. Classification Comparison with State-of-the-Art Models

The proposed HFENet model is compared with eleven outstanding classification approaches to comprehensively demonstrate its superiority. The eleven methods are broadly divided into two groups: one containing SVM 1, RF 1, KNN 1, and GaussianNB 1 belongs to traditional ML; the other, involving HybridSN 2 [61], RSSAN 3 [52], MSRN [62], MAFN 4 [63], DCRN 5 [64], DMCN [65], and MSDAN [57], belongs to DL. Specifically, HybridSN combines a 2D CNN and a spectral–spatial 3D CNN to achieve the maximum possible accuracy. RSSAN exploits a spectral–spatial attention learning module to filter unimportant information and strengthen beneficial information while using a spectral–spatial feature learning module to refine the learned features. MSRN utilizes depthwise separable convolution with a mixed depthwise convolution layer to replace the standard convolutional layer in its residual blocks, which emphasizes the feature representation ability. MAFN constructs a spatial feature extraction module, a spectral feature extraction module, and a spectral–spatial feature extraction module to obtain more representative features. DCRN designs two parallel branches and a spatial–spectral fusion structure to extract joint features. DMCN involves coordinate attention, a grouped residual 2D CNN, and a dense 3D CNN to mine fusion information. MSDAN applies three different-scale modules with dense connections to achieve feature reuse while embedding spectral–spatial–channel attention to improve classification performance. For fairness, all experiments stochastically pick 20% of the labeled data as the training set for the three datasets. The obtained classification results are shown in Table 4, Table 5 and Table 6.
By comparing the devised HFENet model with diversiform approaches, we can draw the following conclusions:
(1)
According to Table 4, Table 5 and Table 6, it is obvious that ML-based methods obtain inferior classification results compared with DL-based methods. For example, for the UP dataset, GaussianNB has the worst OA, AA, and Kappa values, which are 33.37%, 26.32%, and 41.30% lower than those of HybridSN, respectively. For the IP dataset, SVM has the second-worst OA, AA, and Kappa values, which are 25.35%, 33.19%, and 29.87% lower than those of MSRN, respectively. This is because ML-based methods only utilize spectral information and ignore the rich spatial information. Meanwhile, they heavily rely on hand-crafted features with poor generalization ability and limited representation ability, which damages the classification accuracies. Owing to their hierarchical structure and powerful feature extraction ability, DL-based methods can adaptively capture features and obtain good classification results.
(2)
Table 4 provides the classification results for the UP dataset. This scene contains many small regions of each class and possesses rich spatial information, so most of the methods yield good classification results. Table 5 and Table 6 provide the classification results for the IP and Houston2013 datasets. The former was imaged in the early stages of crop growth, which induces strong spectral mixing; the latter has highly similar spectral characteristics between categories, which increases the classification difficulty. Nevertheless, the proposed HFENet still achieves impressive results on the three datasets. For example, for the Houston2013 dataset, HFENet obtains 99.73% OA, 99.62% AA, and 99.70% Kappa, which are 2.13%, 2.31%, and 2.29% higher than those of DMCN, respectively. For the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 1.96%, 2.81%, and 2.60% higher than those of MSDAN, respectively. For the IP dataset, HFENet obtains 99.51% OA, 99.70% AA, and 99.44% Kappa, which are 9.15%, 17.48%, and 10.47% higher than those of DCRN, respectively. These results sufficiently prove the superiority and stability of the proposed HFENet.
(3)
From the point of view of the attention mechanism, RSSAN devises a spectral–spatial attention learning module to refine the learned features. MAFN uses a spatial attention module and a band attention module to relieve the influence of noisy pixels and redundant bands. Our constructed SAEB adaptively recalibrates spectral-wise and spatial-wise feature responses to generate purified spectral–spatial information. In Table 4, Table 5 and Table 6, we can clearly see that the presented method obtains superb values for the three datasets. For example, for the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 0.43%, 0.71%, and 0.36% higher than those of RSSAN, and 0.98%, 0.75%, and 1.39% higher than those of MAFN, respectively. For the IP dataset, HFENet obtains 99.51% OA, 99.70% AA, and 99.44% Kappa, which are 0.44%, 3.17%, and 0.5% higher than those of RSSAN, and 0.44%, 1.1%, and 0.5% higher than those of MAFN, respectively. This is because SAEB can effectively dispel the interference of redundant bands and noise from both local and global views.
(4)
From the point of view of the multiscale strategy, MSRN devises a multiscale residual block with mixed depthwise convolution to achieve multiscale feature learning. MSDAN designs three different-scale modules to enhance feature reuse. The proposed HFEB exploits spectral–spatial structure information of distinct types and scales and is composed of two parallel branches: the upper branch contains two HFRBs whose 2D convolutions use 3 × 3 and 5 × 5 kernels, respectively, and the lower branch contains one HFRB whose 2D convolutions use a 7 × 7 kernel. In Table 4, Table 5 and Table 6, the classification results indicate that HFENet is advantageous in extracting multiscale features. For example, for the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 1.02%, 1.73%, and 1.36% higher than those of MSRN, and 1.96%, 2.81%, and 2.6% higher than those of MSDAN, respectively. This is because the proposed HFEB exploits multiple HFRBs to obtain more discriminative and representative spectral–spatial structure information instead of simply concatenating convolutional layers with different kernel sizes. HybridSN, RSSAN, MAFN, DCRN, and DMCN utilize fixed-scale convolutional kernels to extract spectral–spatial features. Although these methods obtain good classification performance, they lack an exploration of the diversity of spectral–spatial features. Compared with the aforementioned methods, the proposed HFENet uses multiple HFRBs with diverse kernel sizes to augment the diversity of spectral–spatial features and model the global long-range dependencies of spectral–spatial features. For example, for the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 2.41%, 5.11%, and 3.19% higher than those of DCRN, respectively. For the IP dataset, HFENet obtains 99.51% OA, 99.70% AA, and 99.44% Kappa, which are 1.65%, 6.23%, and 1.87% higher than those of DMCN, respectively.
(5)
Figure 5, Figure 6 and Figure 7 provide the ground-truth maps and the visual classification result maps of each comparison method for the three datasets. By comparison, the classification map of the proposed HFENet is the closest to the ground truth and the cleanest. The four ML-based methods tend to produce salt-and-pepper noise on their classification maps for the three datasets. The classification maps of the seven DL-based methods are relatively smooth but may misclassify pixels at edges. In particular, the proposed HFENet can effectively avoid over-smoothing of edges and achieve more precise classification with fine details and more realistic features.

3.3. Parameter Analysis

3.3.1. Varying Proportions of Training Samples

The classification performance of the proposed method is strongly affected by the proportion of training samples. We randomly pick labeled samples in the grid of {1%, 3%, 5%, 7%, 10%, 20%, 30%} as the training size to analyze the classification performance under varying proportions of training samples. The classification results of the three experimental datasets are provided in Figure 8. As seen in Figure 8, for the three datasets, the OA, AA, and Kappa values gradually increase as the training size increases. When the training size is 20%, the values of the criteria metrics are the most impressive. As the training size exceeds 20%, the values of the criteria metrics gradually decline. This is because, although a large number of training samples contributes to the training process of HFENet, it may introduce more background information and noisy pixels, which weaken the effect of the labeled pixels and are adverse to the classification performance. In addition, we can also find that the proportion of training samples has a great impact on the IP and Houston2013 datasets. This is because the IP dataset was imaged in the early stages of crop growth, which induces strong spectral mixing and increases the classification difficulty, and the Houston 2013 dataset has highly similar spectral characteristics between categories, where only a small number of categories are labeled and most of the samples are unlabeled. These two datasets require a relatively large number of labeled samples for training to achieve decent classification accuracy. In comparison, the UP dataset contains a large number of labeled samples and achieves good classification accuracies using a small proportion of them for training. To make the proposed method generalize well and obtain excellent classification results, we set the proportion of training samples to 20% for the three datasets.

3.3.2. Different Spatial Sizes of Input Image Patches

A too-small spatial size of the input image patch results in the loss of important information due to an insufficient receptive field, whereas a too-large spatial size introduces many noisy pixels and suffers from interclass interference. Therefore, we set the spatial size of the input image patch in the grid of {5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15} to analyze the classification performance under different spatial sizes. The classification results of the three experimental datasets are shown in Figure 9. As seen in Figure 9, for the UP dataset, the values of the criteria metrics are best when the spatial size is 15 × 15, while the OA, AA, and Kappa values are optimal when the spatial size is 7 × 7 for the IP and Houston2013 datasets. These results show that, when the spatial size is optimal, the input image patch contains less background information and fewer noisy pixels, and the labeled pixel can play an important role in the classification task. Hence, to achieve splendid classification results, we set the spatial size of the input image patch for the three datasets to 15 × 15, 7 × 7, and 7 × 7, respectively.

3.3.3. Diverse Numbers of Principal Components

HSI contains abundant spectral information in hundreds of narrow bands, but these bands are highly correlated with each other and easily trigger the Hughes phenomenon, which prejudices the classification performance. Therefore, before extracting general spectral–spatial information, PCA is performed on the raw HSI. We set the number of principal components in the grid of {5, 10, 20, 30, 40} to analyze the classification performance under diverse numbers of principal components. The classification results of the three experimental datasets are shown in Figure 10. As seen in Figure 10, for the UP dataset, the OA, AA, and Kappa accuracies rise as the number of principal components increases, and the proposed HFENet achieves impressive results when the number of principal components is 40. For the IP dataset, except when the number of principal components is 10, the OA, AA, and Kappa accuracies increase monotonically as the number of principal components increases. This is because, compared with the other settings, the retained components have a higher correlation with each other when the number of principal components is 10, which is adverse to the classification task. For the Houston2013 dataset, the OA, AA, and Kappa accuracies fluctuate significantly, and the proposed HFENet obtains competitive accuracies when the number of principal components is 30. These phenomena indicate that the number of principal components has a great impact on the classification performance for the Houston2013 dataset. Hence, to obtain the best classification performance, we set the number of principal components for the three datasets to 40, 40, and 30, respectively.
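For reference, the PCA band reduction described above can be sketched with scikit-learn as follows; whether the bands are standardized or whitened before projection is not specified in the paper, so the sketch uses the library defaults.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube, n_components=40):
    """Sketch of PCA band reduction applied to an HSI cube before patch extraction.
    cube: (H, W, B) hyperspectral image; returns an (H, W, n_components) array."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)   # one row per pixel
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)
```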
HFEB is built to capture the spectral–spatial structure information of distinct types, scales, and branches. When the number of HFEBs is too small, the obtained spectral–spatial information is inadequate; when the number of HFEBs is too large, the number of parameters and the model complexity increase sharply. Neither situation contributes to the classification task. Hence, setting a pertinent number of HFEBs is crucial for the developed method. The number of HFEBs is set in the grid of {2, 3, 4, 5, 6} to analyze the classification performance under varying numbers of HFEBs. The classification results of the three experimental datasets are shown in Figure 11. As seen in Figure 11, for the UP and IP datasets, when the number of HFEBs is 2, the proposed HFENet can sufficiently exploit spectral–spatial structure information of distinct types and scales and obtains excellent classification performance. For the Houston2013 dataset, the values of the criteria metrics are best when the number of HFEBs is 4. Hence, we set the HFEB numbers for the three datasets to 2, 2, and 4, respectively.

3.3.4. Different Numbers of Groups for SAEB

SAEB can adaptively recalibrate spatial-wise and spectral-wise responses to generate purified spectral–spatial information. The number of groups has a significant effect on the classification results of the proposed method. When the number of groups is too small, redundant information and interfering pixels are inadequately filtered; when the number of groups is too large, the number of parameters and the model complexity increase sharply. Neither situation is conducive to the classification task. Therefore, the number of groups is set in the grid of {2, 4, 8, 16, 32} to analyze the classification performance under varying numbers of groups. The classification accuracy of the three experimental datasets is provided in Figure 12. As seen in Figure 12, for the three datasets, the proposed HFENet achieves outstanding values of the criteria metrics when the number of groups for SAEB is 8. Hence, we set the number of groups for SAEB to 8 for all three datasets.
The spectral enhancement branch can emphasize the meaningful bands and fade out the irrelevant ones, modeling the interdependencies between features and enhancing the expressive ability of the model. The channel ratio r decides the number of neurons in the first fully connected layer, which is utilized to reduce computation. We set the channel ratio in the grid of {1, 2, 4, 8, 16, 32} to analyze the classification performance under diverse channel ratios. The classification results of the three experimental datasets are shown in Figure 13. As seen in Figure 13, for the UP dataset, the classification performance is the worst when r is 4, and the OA, AA, and Kappa accuracies are best when r is 2. For the IP dataset, the classification ability does not manifest well when r is 1 or 8, while the classification accuracies are excellent when r is 2. For the Houston2013 dataset, the classification performs well when r is 2 or 16, and the accuracies of the other settings are inferior. In addition, we can also find that the OA, AA, and Kappa accuracies do not increase monotonically as r increases but fluctuate significantly for the IP and Houston2013 datasets. A possible reason is that the spectral enhancement branch overfits the spectral-wise feature correlations. In contrast, for the UP dataset, the OA, AA, and Kappa accuracies degrade slightly as r increases, possibly because the spectral enhancement branch underfits the spectral-wise feature correlations. Therefore, to obtain outstanding classification results, the channel ratios are set to 2, 2, and 16 for the three datasets, respectively.

3.3.5. Varying L2 Regularization Parameters

L2 regularization effectively avoids the overfitting problem and is applied to the proposed method. We set the L2 regularization parameter in the grid of {0, 0.0005, 0.002, 0.01, 0.02, 0.03, 0.1, 1} to analyze the classification performance under varying L2 regularization parameters. The classification results of the three experimental datasets are shown in Figure 14. As seen in Figure 14, the most proper L2 regularization parameters are 0.002, 0.002, and 0.03 for the three datasets, respectively.

3.4. Ablation Study

3.4.1. Efficiency Analysis of HFRB

HFRB was constructed to strengthen internal and external interactions of different channels and layers while enriching the local dependencies of spectral–spatial features. HFRB is composed of SRU and CRU. The former is utilized to boost the internal relations of different channels; the latter is designed to enhance the robustness of spectral–spatial features by learning the external correlations of all the channels. To sufficiently verify the efficiency of HFRB, comparative experiments were performed under three conditions, i.e., case 1 (only using SRU), case 2 (only using CRU), and case 3 (namely, our presented method, using SRU and CRU). Figure 15 provides the classification results of the three experimental datasets.
According to Figure 15, it can be observed that case 3 obtains the most competitive values of the criteria metrics for the three datasets. Regarding the UP dataset, case 2 achieves the worst values, which are 9.19%, 22.26%, and 12.57% lower than those of case 3. For the Houston2013 dataset, the classification performance behaves similarly to that of the UP dataset. Regarding the IP dataset, case 1 obtains the worst values, which are 0.8%, 2.3%, and 0.91% lower than those of case 3. These results sufficiently demonstrate that SRU and CRU complement each other: only when they are used together can they fully strengthen the internal and external interactions of different channels and layers and achieve a synergistic 1 + 1 > 2 effect.

3.4.2. Efficiency Analysis of HFEB

HFEB utilizes multiple functional HFRBs to capture spectral–spatial structure information of distinct types and scales, where each HFRB exploits 2D convolutional operations of a different size. To sufficiently verify the efficiency of HFEB, comparative experiments were performed under three conditions, i.e., case 1 (only using the HFRB with 3 × 3 convolutions), case 2 (using the HFRBs with 3 × 3 and 5 × 5 convolutions), and case 3 (namely, the presented method, using the HFRBs with 3 × 3, 5 × 5, and 7 × 7 convolutions). Figure 16 provides the classification results of the three experimental datasets.
As shown in Figure 16, the values of the criteria metrics of case 1 are the lowest for the three datasets. Regarding the UP dataset, case 1 obtains 99.03% OA, 98.21% AA, and 98.71% Kappa, which are 0.96%, 1.73%, and 1.24% lower than those of case 2, respectively. Regarding the IP dataset, case 1 obtains 97.10% OA, 92.40% AA, and 96.69% Kappa, which are, respectively, 2.41%, 7.3%, and 2.75% lower than those of case 2. Regarding the Houston2013 dataset, case 1 achieves 49.21% OA, 39.69% AA, and 44.63% Kappa, which are, respectively, 50.52%, 59.93%, and 55.07% lower than those of case 2. These numerical values effectively show that adding the HFRB with 5 × 5 convolutions is important. By comparison, the values of the criteria metrics of case 3 are clearly better than those of the other two conditions. For example, for the Houston2013 dataset, case 3 has 99.73% OA, 99.62% AA, and 99.70% Kappa, which are 36.03%, 47.93%, and 39.26% higher than those of case 2, respectively. These results confirm that the constructed HFEB is successful and plays an important role.

3.4.3. Efficiency Analysis of HFENet Model

To analyze and demonstrate the impact of each component, comparative experiments were performed under three conditions, i.e., network 1 (only using HFEB), network 2 (only using SAEB), and network 3 (namely, our presented method, using HFEB and SAEB). Figure 17 provides the classification results of the three experimental datasets.
As shown in Figure 17, for the UP dataset, the classification performance of network 2 is the worst. For the IP and Houston2013 datasets, the classification performance behaves similarly to each other, and the values of the criteria metrics of network 1 are the worst. This is because, compared with network 2, network 1 needs more parameters during training. The UP dataset has a relatively large number of labeled samples, so although network 1 suffers from noisy pixels and redundant bands, it can extract spectral–spatial information of different types and scales and obtain good classification. The other two datasets contain a relatively small number of labeled samples; compared with network 1, the architecture of network 2 is relatively uncomplicated and obtains good classification, whereas network 1 may lead to overfitting. Among the three conditions, network 3 stands out and obtains excellent classification results. For example, for the UP dataset, network 3 obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 1.73%, 3%, and 2.29% higher than those of network 1, respectively. For the IP and Houston2013 datasets, the obtained results exhibit very similar behavior to those for the UP dataset. These results sufficiently prove that the designed SAEB is valid and effectively fades out redundant information and noisy pixels, further generating purified spectral–spatial information. Moreover, compared with network 2, network 3 achieves values of the criteria metrics that are 4.29%, 15.97%, and 5.71% higher for the UP dataset; the results for the other two datasets behave very similarly. These results verify that the devised HFEB is effective and can extract more discriminative and representative spectral–spatial structure information of distinct types, scales, and branches while modeling the global long-range dependencies of spectral–spatial features. In summary, the constructed HFEB and SAEB both contribute considerably to the classification performance.

4. Conclusions

To remedy gradient vanishing and fully exploit spectral–spatial information, this article presents an innovative hybrid-scale feature enhancement network (HFENet) for HSI classification. Different from classification methods that rely on fixed-scale convolutional kernels or multiple receptive fields with varying scales, HFENet uses a hybrid-scale feature extraction block (HFEB) to model the global long-range spectral–spatial dependencies of different scales, types, and branches, enriching the diversity of informative features. In addition, to generate purified spectral–spatial information, HFENet adopts a shuffle attention enhancement block (SAEB) to adaptively recalibrate spectral-wise and spatial-wise responses, which effectively filters redundant information and noisy pixels and is conducive to enhancing the classification performance. From an experimental point of view, the proposed HFENet is effective and superior, exhibiting state-of-the-art performance compared with several advanced methods. In the future, we will utilize a neural architecture search strategy to adaptively design the model architecture and apply unsupervised or semi-supervised training mechanisms to the proposed method. Meanwhile, we will try to apply the proposed classification method to other computer vision tasks, such as target recognition, medical diagnosis, and urban planning.

Author Contributions

Conceptualization, D.L.; investigation, D.L. and J.Z.; formal analysis, D.L.; validation, J.Z.; original draft preparation, D.L.; funding acquisition, M.L. and J.Z.; review and editing, D.L., T.S., G.Q., M.L. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 62101529.

Data Availability Statement

The data presented in this study are available in this article.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Overview of the hybrid-scale feature enhancement network (HFENet).
Figure 2. Schematic of the heterogeneous feature refine block (HFRB).
Figure 3. Schematic of the hybrid-scale feature extraction block (HFEB).
Figure 4. Schematic of the shuffle attention enhancement block (SAEB).
Figure 5. Classification maps on the UP dataset.
Figure 6. Classification maps on the IP dataset.
Figure 7. Classification maps on the Houston 2013 dataset.
Figure 8. Varying proportions of training samples.
Figure 9. Different spatial sizes of input image patches.
Figure 10. Diverse principal component numbers: 3, 3, and 4.
Figure 11. Varying numbers of HFEBs.
Figure 12. Different numbers of groups for SAEB: 3, 3, and 6.
Figure 13. Diverse channel ratios of the spectral enhancement branch.
Figure 14. Varying L2 regularization parameters.
Figure 15. Efficiency analysis of HFRB.
Figure 16. Efficiency analysis of HFEB.
Figure 17. Efficiency analysis of the HFENet model.
Table 1. Data description of the UP dataset.
No. | Class | Train | Test
1 | Asphalt | 1326 | 5305
2 | Meadows | 3729 | 14,920
3 | Gravel | 419 | 1680
4 | Trees | 612 | 2452
5 | Metal sheets | 269 | 1076
6 | Bare Soil | 1005 | 4024
7 | Bitumen | 266 | 1064
8 | Bricks | 736 | 2946
9 | Shadows | 189 | 758
Total |  | 8551 | 34,225
Table 2. Data description of the IP dataset.
No. | Class | Train | Test
1 | Alfalfa | 10 | 36
2 | Corn–notill | 286 | 1142
3 | Corn–mintill | 166 | 664
4 | Corn | 48 | 189
5 | Grass–pasture | 97 | 386
6 | Grass–trees | 146 | 584
7 | Grass–pasture–mowed | 6 | 22
8 | Hay–windrowed | 96 | 382
9 | Oats | 4 | 16
10 | Soybean–notill | 195 | 777
11 | Soybean–mintill | 491 | 1964
12 | Soybean–clean | 119 | 474
13 | Wheat | 41 | 164
14 | Woods | 253 | 1012
15 | Buildings–grass–tree | 78 | 308
16 | Stone–steel–towers | 19 | 74
Total |  | 2055 | 8194
Table 3. Data description of the Houston 2013 dataset.
No. | Class | Train | Test
1 | Healthy grass | 251 | 1000
2 | Stressed grass | 251 | 1003
3 | Synthetic grass | 140 | 557
4 | Trees | 249 | 995
5 | Soil | 249 | 993
6 | Water | 65 | 260
7 | Residential | 254 | 1014
8 | Commercial | 249 | 995
9 | Road | 251 | 1001
10 | Highway | 246 | 981
11 | Railway | 247 | 988
12 | Parking Lot 1 | 247 | 986
13 | Parking Lot 2 | 94 | 375
14 | Tennis Court | 86 | 342
15 | Running Track | 132 | 528
Total |  | 3011 | 12,018
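The Train/Test counts in Tables 1–3 are consistent with sampling roughly 20% of the labeled pixels of each class for training and reserving the rest for testing. As an illustration only (the exact sampling procedure is not reproduced here; the function and variable names are assumptions, not taken from the paper), a minimal per-class stratified split could look like this sketch:

import numpy as np

def stratified_split(labels, train_ratio=0.2, seed=0):
    """Per-class random split of labeled pixel indices (illustrative sketch only).

    labels: 1-D array of class IDs for all pixels; 0 is assumed to mean
    'unlabeled background' and is skipped, as is common for these datasets.
    Returns (train_idx, test_idx) index arrays.
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        if c == 0:  # skip unlabeled pixels (assumption)
            continue
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = int(round(train_ratio * idx.size))
        train_idx.append(idx[:n_train])
        test_idx.append(idx[n_train:])
    return np.concatenate(train_idx), np.concatenate(test_idx)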
Table 4. Quantitative comparison on the UP dataset.
No. | SVM | RF | KNN | GaussianNB | HybridSN | RSSAN | MSRN | MAFN | DCRN | DMCN | MSDAN | HFENet
1 | 76.52 | 93.17 | 91.34 | 96.01 | 99.53 | 99.79 | 99.83 | 100.00 | 99.46 | 97.86 | 99.89 | 99.94
2 | 85.94 | 89.70 | 88.74 | 80.20 | 99.99 | 99.91 | 99.99 | 100.00 | 99.95 | 99.94 | 99.76 | 99.98
3 | 83.78 | 85.32 | 71.81 | 28.09 | 99.58 | 99.13 | 100.00 | 100.00 | 97.99 | 93.11 | 73.50 | 99.82
4 | 95.94 | 94.83 | 96.61 | 50.01 | 100.00 | 99.88 | 96.61 | 92.81 | 84.55 | 86.88 | 99.71 | 100.00
5 | 99.81 | 99.27 | 99.33 | 80.49 | 99.72 | 99.91 | 98.63 | 100.00 | 86.46 | 65.72 | 100.00 | 100.00
6 | 95.86 | 91.48 | 82.30 | 37.77 | 100.00 | 99.33 | 99.62 | 98.72 | 99.88 | 86.54 | 99.63 | 100.00
7 | 0.00 | 86.90 | 74.87 | 40.61 | 98.88 | 99.53 | 99.91 | 100.00 | 96.89 | 59.21 | 100.00 | 99.91
8 | 67.39 | 83.01 | 80.68 | 69.98 | 99.86 | 97.33 | 93.49 | 97.18 | 96.08 | 98.95 | 99.32 | 100.00
9 | 99.87 | 99.87 | 100.00 | 100.00 | 99.45 | 99.21 | 95.61 | 97.04 | 96.84 | 92.60 | 99.06 | 99.74
OA (%) | 83.89 | 90.38 | 87.63 | 67.46 | 99.83 | 99.53 | 98.94 | 98.98 | 97.55 | 92.53 | 98.00 | 99.96
AA (%) | 70.90 | 87.71 | 85.14 | 73.03 | 99.35 | 99.23 | 98.21 | 99.19 | 94.83 | 90.08 | 97.13 | 99.94
Kappa × 100 | 77.82 | 87.05 | 83.34 | 58.48 | 99.78 | 99.38 | 98.59 | 98.66 | 96.76 | 90.17 | 97.35 | 99.95
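The OA (%), AA (%), and Kappa × 100 rows in Tables 4–6 are the standard summary metrics derived from the test-set confusion matrix: overall accuracy, average (per-class) accuracy, and Cohen's kappa coefficient. A minimal sketch of how these are conventionally computed is given below; it is illustrative only, and the assumption that AA averages per-class recalls follows common practice rather than text from the paper.

import numpy as np

def summary_metrics(C):
    """OA, AA, and Cohen's kappa from a confusion matrix C (rows = true, cols = predicted)."""
    C = np.asarray(C, dtype=float)
    total = C.sum()
    oa = np.trace(C) / total                               # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))               # mean of per-class accuracies (recalls)
    pe = np.sum(C.sum(axis=0) * C.sum(axis=1)) / total**2  # expected chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return 100 * oa, 100 * aa, 100 * kappa                 # reported as percentages / ×100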
Table 5. Quantitative comparison on the IP dataset.
No. | SVM | RF | KNN | GaussianNB | HybridSN | RSSAN | MSRN | MAFN | DCRN | DMCN | MSDAN | HFENet
1 | 0.00 | 86.67 | 36.36 | 31.07 | 97.06 | 97.30 | 90.32 | 92.31 | 100.00 | 100.00 | 100.00 | 100.00
2 | 61.51 | 82.02 | 50.38 | 45.54 | 98.86 | 98.00 | 97.45 | 96.20 | 77.30 | 97.46 | 98.95 | 99.21
3 | 84.04 | 78.66 | 61.95 | 35.92 | 97.04 | 99.54 | 98.74 | 100.00 | 86.25 | 93.50 | 99.54 | 100.00
4 | 46.43 | 72.87 | 53.26 | 15.31 | 98.86 | 99.46 | 99.39 | 96.89 | 95.94 | 96.81 | 98.85 | 98.42
5 | 88.82 | 90.16 | 84.71 | 3.57 | 98.47 | 98.22 | 92.54 | 100.00 | 96.00 | 98.69 | 98.70 | 99.23
6 | 76.72 | 82.61 | 78.08 | 67.87 | 100.00 | 99.83 | 99.65 | 99.49 | 97.27 | 100.00 | 99.49 | 100.00
7 | 0.00 | 83.33 | 68.42 | 100.00 | 100.00 | 100.00 | 100.00 | 95.65 | 100.00 | 100.00 | 86.96 | 100.00
8 | 83.49 | 87.16 | 88.55 | 83.78 | 96.46 | 99.48 | 80.08 | 100.00 | 100.00 | 98.70 | 99.74 | 100.00
9 | 0.00 | 100.00 | 40.00 | 11.02 | 76.19 | 100.00 | 0.00 | 100.00 | 62.50 | 100.00 | 100.00 | 100.00
10 | 70.89 | 83.61 | 69.40 | 27.07 | 99.74 | 99.48 | 88.93 | 100.00 | 84.39 | 99.87 | 98.46 | 99.48
11 | 58.51 | 75.16 | 69.49 | 60.60 | 98.77 | 99.19 | 97.57 | 99.90 | 92.64 | 99.69 | 99.74 | 99.49
12 | 59.38 | 66.74 | 62.13 | 23.95 | 98.34 | 98.13 | 91.52 | 97.90 | 92.46 | 92.74 | 91.30 | 98.34
13 | 82.23 | 92.53 | 86.70 | 84.38 | 100.00 | 99.39 | 94.58 | 100.00 | 90.30 | 96.91 | 97.02 | 100.00
14 | 87.39 | 89.78 | 91.76 | 75.08 | 99.90 | 99.80 | 100.00 | 99.90 | 100.00 | 99.40 | 99.90 | 99.80
15 | 86.30 | 72.00 | 64.127 | 53.17 | 94.12 | 98.72 | 100.00 | 99.68 | 97.60 | 92.92 | 95.00 | 100.00
16 | 98.36 | 100.00 | 100.00 | 98.44 | 98.67 | 97.33 | 94.37 | 94.87 | 100.00 | 91.14 | 98.53 | 98.67
OA (%) | 70.21 | 89.91 | 70.95 | 50.88 | 98.58 | 99.07 | 95.56 | 99.07 | 90.36 | 97.86 | 98.61 | 99.51
AA (%) | 53.06 | 66.77 | 62.39 | 52.65 | 96.87 | 96.53 | 86.25 | 98.60 | 82.22 | 93.47 | 94.85 | 99.70
Kappa × 100 | 65.07 | 78.01 | 66.63 | 44.07 | 98.39 | 98.94 | 94.94 | 98.94 | 88.97 | 97.57 | 98.41 | 99.44
Bold font highlights the best result in each row.
Table 6. Quantitative comparison on the Houston 2013 dataset.
No. | SVM | RF | KNN | GaussianNB | HybridSN | RSSAN | MSRN | MAFN | DCRN | DMCN | MSDAN | HFENet
1 | 81.98 | 98.49 | 97.74 | 93.97 | 95.88 | 98.52 | 99.80 | 99.70 | 98.88 | 99.78 | 99.00 | 99.40
2 | 98.85 | 98.40 | 98.44 | 98.31 | 98.57 | 99.80 | 100.00 | 99.11 | 97.84 | 94.80 | 99.90 | 99.50
3 | 96.68 | 99.81 | 98.37 | 91.35 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
4 | 98.53 | 98.00 | 99.40 | 98.81 | 99.69 | 99.60 | 99.50 | 99.80 | 95.51 | 98.12 | 99.50 | 99.90
5 | 88.51 | 95.07 | 92.42 | 71.57 | 100.00 | 100.00 | 100.00 | 99.80 | 99.80 | 99.90 | 100.00 | 100.00
6 | 100.00 | 99.17 | 100.00 | 35.89 | 100.00 | 100.00 | 100.00 | 100.00 | 97.65 | 85.81 | 100.00 | 100.00
7 | 68.66 | 88.87 | 89.39 | 54.52 | 96.41 | 97.37 | 99.70 | 99.10 | 94.51 | 97.92 | 98.82 | 99.51
8 | 84.53 | 92.23 | 88.06 | 79.41 | 97.64 | 98.70 | 100.00 | 97.85 | 98.90 | 96.47 | 98.03 | 99.70
9 | 59.65 | 80.72 | 79.22 | 43.03 | 96.23 | 96.52 | 97.46 | 99.19 | 96.44 | 99.68 | 99.19 | 100.00
10 | 58.38 | 85.59 | 84.59 | 0.00 | 99.09 | 97.48 | 99.90 | 97.98 | 100.00 | 93.25 | 99.90 | 99.29
11 | 59.17 | 80.72 | 83.40 | 35.92 | 100.00 | 97.23 | 100.00 | 98.40 | 100.00 | 98.31 | 99.20 | 99.60
12 | 63.07 | 76.86 | 79.79 | 24.31 | 99.49 | 97.47 | 99.39 | 96.54 | 99.39 | 98.00 | 99.29 | 99.90
13 | 100.00 | 87.74 | 93.62 | 17.54 | 100.00 | 99.70 | 86.78 | 98.88 | 100.00 | 100.00 | 98.93 | 99.72
14 | 78.57 | 96.50 | 98.56 | 68.72 | 100.00 | 99.13 | 100.00 | 100.00 | 100.00 | 100.00 | 98.84 | 100.00
15 | 99.08 | 99.62 | 99.43 | 99.79 | 99.81 | 100.00 | 98.32 | 97.24 | 95.83 | 100.00 | 100.00 | 100.00
OA (%) | 77.90 | 90.23 | 90.50 | 61.21 | 98.55 | 98.53 | 99.11 | 98.80 | 98.19 | 97.60 | 99.33 | 99.73
AA (%) | 77.15 | 89.17 | 88.87 | 63.67 | 98.39 | 98.38 | 99.08 | 98.58 | 97.95 | 97.31 | 99.23 | 99.62
Kappa × 100 | 76.07 | 89.86 | 89.72 | 58.1595 | 98.43 | 98.41 | 99.04 | 98.70 | 98.04 | 97.41 | 99.28 | 99.70
Bold font highlights the best result in each row.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

