A Hybrid-Scale Feature Enhancement Network for Hyperspectral Image Classification
Abstract
1. Introduction
- (1) We construct a heterogeneous feature refine block (HFRB) to capture the internal correlations of different channels and the external interactions of all channels, which complement each other, thereby enhancing the local dependencies of spectral–spatial features.
- (2) Different from existing multiscale feature extraction strategies, our designed hybrid-scale feature extraction block (HFEB) exploits multiple HFRBs to obtain more discriminative and representative spectral–spatial structure information of distinct scales, types, and branches, which can not only augment the multiplicity of spectral–spatial features but also model their global long-range dependencies.
- (3) To effectively fade out redundant information and noisy pixels, we devise a shuffle attention enhancement block (SAEB) to adaptively recalibrate spectral-wise and spatial-wise feature responses and generate purified spectral–spatial information, which is conducive to enhancing the classification performance.
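The channel-shuffle idea underlying the SAEB can be illustrated with a short, self-contained sketch. This is our own minimal NumPy illustration of the generic channel-shuffle operation (the function name `channel_shuffle` and the (C, H, W) layout are assumptions for illustration, not code from the paper):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle the channels of a (C, H, W) feature map across `groups`
    groups, so that subsequent group-wise operations exchange information
    between groups. A generic sketch; the paper's SAEB may differ."""
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # (groups, C/groups, H, W) -> swap the two group axes -> flatten back
    x = x.reshape(groups, c // groups, h, w)
    x = x.transpose(1, 0, 2, 3)
    return x.reshape(c, h, w)

# Example: 8 channels, 2 groups. Channel order 0..7 becomes 0,4,1,5,2,6,3,7.
x = np.arange(8).reshape(8, 1, 1) * np.ones((8, 2, 2))
shuffled = channel_shuffle(x, groups=2)
print(shuffled[:, 0, 0])  # -> [0. 4. 1. 5. 2. 6. 3. 7.]
```

Shuffling with `groups=g` is inverted by shuffling again with `groups=C/g`, so the operation is a pure permutation of channels and loses no information.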
2. Methods
2.1. Framework of HFENet Model
2.2. Heterogeneous Feature Refine Block
2.3. Hybrid-Scale Feature Extraction Block
2.4. Shuffle Attention Enhancement Block
3. Experimental Results and Discussion
3.1. Hyperspectral Datasets and Setup
3.2. Classification Comparison with State-of-the-Art Models
- (1) According to Table 4, Table 5 and Table 6, ML-based methods obtain inferior classification results compared with DL-based methods. For example, for the UP dataset, GaussianNB has the worst OA, AA, and Kappa values, which are 33.37%, 26.32%, and 41.30% lower than those of HybridSN, respectively. For the IP dataset, SVM has the second-worst OA, AA, and Kappa values, which are 25.35%, 33.19%, and 29.87% lower than those of MSRN, respectively. This is because ML-based methods only utilize the spectral information and ignore the rich spatial information. Moreover, they rely heavily on hand-crafted features with poor generalization and limited representation ability, which damages the classification accuracy. Owing to their hierarchical structure and powerful feature extraction ability, DL-based methods can adaptively capture features and achieve good classification results.
- (2) Table 4 provides the classification results for the UP dataset. This scene contains many small class regions and rich spatial information, so most of the methods yield good classification results. Table 5 and Table 6 provide the classification results for the IP and Houston2013 datasets. The former was imaged during the early stages of crop growth, which induces strong spectral mixing; the latter has highly similar spectral characteristics between categories, which increases the classification difficulty. Nevertheless, our proposed HFENet still achieves impressive results on all three datasets. For example, for the Houston2013 dataset, HFENet obtains 99.73% OA, 99.62% AA, and 99.70% Kappa, which are 2.13%, 2.31%, and 2.29% higher than those of DMCN, respectively. For the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 1.96%, 2.81%, and 2.60% higher than those of MSDAN, respectively. For the IP dataset, HFENet obtains 99.51% OA, 99.70% AA, and 99.44% Kappa, which are 9.15%, 17.48%, and 10.47% higher than those of DCRN, respectively. These results sufficiently prove the superiority and stability of our proposed HFENet.
- (3) From the point of view of the attention mechanism, RSSAN devises a spectral–spatial attention learning module to refine the learned features. MAFN uses a spatial attention module and a band attention module to relieve the influence of noisy pixels and redundant bands. Our constructed SAEB adaptively recalibrates spectral-wise and spatial-wise feature responses to generate purified spectral–spatial information. Table 4, Table 5 and Table 6 clearly show that our presented method obtains superior values on the three datasets. For example, for the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 0.43%, 0.71%, and 0.36% higher than those of RSSAN, and 0.98%, 0.75%, and 1.39% higher than those of MAFN, respectively. For the IP dataset, HFENet obtains 99.51% OA, 99.70% AA, and 99.44% Kappa, which are 0.44%, 3.17%, and 0.50% higher than those of RSSAN, and 0.44%, 1.10%, and 0.50% higher than those of MAFN, respectively. This is because SAEB can effectively dispel the interference of redundant bands and noise from local and global views.
- (4) From the point of view of the multiscale strategy, MSRN devises a multiscale residual block with mixed depthwise convolution to achieve multiscale feature learning. MSDAN designs three modules of different scales to enhance feature reuse. Our proposed HFEB exploits spectral–spatial structure information of distinct types and scales and is composed of two parallel branches: the upper branch contains two HFRBs with 2D convolutional operations of two different kernel sizes, and the lower branch contains one HFRB with 2D convolutional operations of a third kernel size. In Table 4, Table 5 and Table 6, the classification results indicate that HFENet is advantageous in extracting multiscale features. For example, for the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 1.02%, 1.73%, and 1.36% higher than those of MSRN, and 1.96%, 2.81%, and 2.60% higher than those of MSDAN, respectively. This is because our proposed HFEB exploits multiple HFRBs to obtain more discriminative and representative spectral–spatial structure information instead of simply concatenating convolutional layers with different kernel sizes. HybridSN, RSSAN, MAFN, DCRN, and DMCN utilize fixed-scale convolutional kernels to extract spectral–spatial features. Although these methods obtain good classification performance, they lack an exploration of the diversity of spectral–spatial features. Compared with the aforementioned methods, our proposed HFENet uses multiple HFRBs with diverse kernel sizes to augment the multiplicity of spectral–spatial features and model their global long-range dependencies. For example, for the UP dataset, HFENet obtains 99.96% OA, 99.94% AA, and 99.95% Kappa, which are 2.41%, 5.11%, and 3.19% higher than those of DCRN, respectively. For the IP dataset, HFENet obtains 99.51% OA, 99.70% AA, and 99.44% Kappa, which are 1.65%, 6.23%, and 1.87% higher than those of DMCN, respectively.
- (5) Figure 5, Figure 6 and Figure 7 provide the ground-truth map and the visual classification result map of each comparison method for the three datasets. By comparison, the visual classification result map of our proposed HFENet is the closest to the ground truth and the cleanest. The four ML-based methods tend to produce salt-and-pepper noise in their classification maps on all three datasets. The maps of the seven DL-based methods are relatively smooth but may misclassify pixels at the edges. In particular, our proposed HFENet can effectively avoid over-smoothing at the edges and achieves more precise classification with finer details and more realistic features.
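The OA, AA, and Kappa values compared throughout this section are standard metrics derived from a confusion matrix. As a minimal illustrative sketch (the helper `classification_scores` and the toy matrix are our own, not from the paper):

```python
import numpy as np

def classification_scores(conf):
    """Compute OA, AA, and Cohen's kappa from a confusion matrix
    (rows = true class, columns = predicted class)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                   # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)  # per-class recall
    aa = per_class.mean()                         # average accuracy
    # Expected agreement by chance, from the row/column marginals
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Tiny worked example with two classes
conf = [[45, 5],
        [10, 40]]
oa, aa, kappa = classification_scores(conf)
print(f"OA={oa:.2%}  AA={aa:.2%}  Kappa={kappa:.4f}")
# -> OA=85.00%  AA=85.00%  Kappa=0.7000
```

AA weights every class equally, which is why methods that fail on small classes (e.g. Oats in the IP dataset) show a much larger gap in AA than in OA.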
3.3. Parameter Analysis
3.3.1. Varying Proportions of Training Samples
3.3.2. Different Spatial Sizes of Input Image Patches
3.3.3. Diverse Numbers of Principal Components
3.3.4. Different Numbers of Groups for SAEB
3.3.5. Varying L2 Regularization Parameters
3.4. Ablation Study
3.4.1. Efficiency Analysis of HFRB
3.4.2. Efficiency Analysis of HFEB
3.4.3. Efficiency Analysis of HFENet Model
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Guo, Y.; Chanussot, J.; Jia, X.; Benediktsson, J.A. Multiple Kernel learning for hyperspectral image classification: A review. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6547–6565. [Google Scholar] [CrossRef]
- Wang, D.; Zhang, J.; Du, B.; Zhang, L.; Tao, D. DCN-T: Dual Context Network With Transformer for Hyperspectral Image Classification. IEEE Trans. Image Process. 2023, 32, 2536–2551. [Google Scholar] [CrossRef] [PubMed]
- Zhang, X.; Sun, Y.; Shang, K.; Zhang, L.; Wang, S. Crop classification based on feature band set construction and object-oriented approach using hyperspectral images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 4117–4128. [Google Scholar] [CrossRef]
- Yang, X.; Yu, Y. Estimating soil salinity under various moisture conditions: An experimental study. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2525–2533. [Google Scholar] [CrossRef]
- Kruse, F.A.; Boardman, J.W.; Huntington, J.F. Comparison of airborne hyperspectral data and EO-1 hyperion for mineral mapping. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1388–1400. [Google Scholar] [CrossRef]
- Lu, B.; He, Y.; Dao, P.D. Comparing the performance of multispectral and hyperspectral images for estimating vegetation properties. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 1784–1797. [Google Scholar] [CrossRef]
- Wang, J.; Hu, J.; Liu, Y.; Hua, Z.; Hao, S.; Yao, Y. EL-NAS: Efficient Lightweight Attention Cross-Domain Architecture Search for Hyperspectral Image Classification. Remote Sens. 2023, 15, 4688. [Google Scholar] [CrossRef]
- Yuan, D.; Yu, A.; Qian, Y.; Xu, Y.; Liu, Y. S2Former: Parallel Spectral-Spatial Transformer for Hyperspectral Image Classification. Remote Sens. 2023, 15, 3937. [Google Scholar] [CrossRef]
- Liu, B.; Jia, Z.; Guo, P.; Kong, W. Hyperspectral Image Classification Based on Transposed Convolutional Neural Network Transformer. Remote Sens. 2023, 15, 3979. [Google Scholar] [CrossRef]
- Fu, L.; Chen, X.; Pirasteh, S.; Xu, Y. The Classification of Hyperspectral Images: A Double-Branch Multi-Scale Residual Network. Remote Sens. 2023, 15, 4471. [Google Scholar] [CrossRef]
- Borsoi, R.; Imbiriba, T.; Bermudez, J.C.; Richard, C.; Jutten, C. Spectral variability in hyperspectral data unmixing: A comprehensive review. IEEE Geosci. Remote Sens. Mag. 2021, 9, 223–270. [Google Scholar] [CrossRef]
- Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A.; Li, J. Visual attention driven hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8065–8080. [Google Scholar] [CrossRef]
- Keshava, N. Distance metrics and band selection in hyperspectral processing with applications to material identification and spectral libraries. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1552–1565. [Google Scholar] [CrossRef]
- Bruzzone, L.; Roli, F.; Serpico, S.B. An extension of the Jeffreys-Matusita distance to multiclass cases for feature selection. IEEE Trans. Geosci. Remote Sens. 1995, 33, 1318–1321. [Google Scholar] [CrossRef]
- Kailath, T. The divergence and bhattacharyya distance measures in signal selection. IEEE Trans. Commun. 1967, 15, 52–60. [Google Scholar] [CrossRef]
- Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-based edge-preserving features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151. [Google Scholar] [CrossRef]
- Wang, J.; Chang, C.-I. Independent component analysis-based dimensionality reduction with applications in hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1586–1600. [Google Scholar] [CrossRef]
- Nielsen, A.A. Kernel maximum autocorrelation factor and minimum noise fraction transformations. IEEE Trans. Image Process. 2011, 20, 6612–6624. [Google Scholar] [CrossRef]
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
- Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef]
- Belgiu, M.; Dragut, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
- Camps-Valls, G.; Bruzzone, L. Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362. [Google Scholar] [CrossRef]
- Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
- Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985. [Google Scholar] [CrossRef]
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
- Li, T.; Zhang, J.; Zhang, Y. Classification of hyperspectral image based on deep belief networks. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 5132–5136. [Google Scholar]
- Hong, D.; Guo, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
- Hang, R.; Zhou, F.; Liu, Q.; Ghamisi, P. Classification of hyperspectral images via multitask generative adversarial networks. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1424–1436. [Google Scholar] [CrossRef]
- Hang, R.; Liu, Q.; Hong, D.; Ghamisi, P. Cascaded recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5384–5394. [Google Scholar] [CrossRef]
- Li, X.; Ding, M.; Pižurica, A. Deep Feature Fusion via Two-Stream Convolutional Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2615–2629. [Google Scholar] [CrossRef]
- Zu, B.; Li, Y.; Li, J.; He, Z.; Wang, H.; Wu, P. Cascaded Convolution-Based Transformer With Densely Connected Mechanism for Spectral–Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2615–2629. [Google Scholar] [CrossRef]
- Zhang, C.; Li, G.; Lei, R.; Du, S.; Zhang, X.; Zheng, H.; Wu, Z. Deep Feature Aggregation Network for Hyperspectral Remote Sensing Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5314–5325. [Google Scholar] [CrossRef]
- Xue, A.; Yu, X.; Liu, B.; Tan, X.; Wei, X. HResNetAM: Hierarchical Residual Network with Attention Mechanism for Hyper-spectral Image Classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2021, 14, 3566–3580. [Google Scholar] [CrossRef]
- Xu, Q.; Xiao, Y.; Wang, D.; Luo, B. CSA-MSO3DCNN: Multiscale Octave 3D CNN with Channel and Spatial Attention for Hyperspectral Image Classification. Remote Sens. 2020, 12, 188. [Google Scholar] [CrossRef]
- Gao, H.; Chen, Z.; Li, C. Sandwich Convolutional Neural Network for Hyperspectral Image Classification Using Spectral Feature Enhancement. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3006–3015. [Google Scholar] [CrossRef]
- Mei, S.; Ji, J.; Hou, J.; Li, X.; Du, Q. Learning Sensor-Specific Spatial-Spectral Features of Hyperspectral Images via Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4520–4533. [Google Scholar] [CrossRef]
- Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
- Xu, Y.; Du, B.; Zhang, L. Beyond the Patchwise Classification: Spectral-Spatial Fully Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Big Data 2020, 6, 492–506. [Google Scholar] [CrossRef]
- Zou, L.; Zhu, X.; Wu, C.; Liu, Y.; Qu, L. Spectral–Spatial Exploration for Hyperspectral Image Classification via the Fusion of Fully Convolutional Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 3, 659–674. [Google Scholar] [CrossRef]
- Zhang, M.; Li, W.; Du, Q. Diverse Region-Based CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef]
- Xu, F.; Zhang, G.; Song, C.; Wang, H.; Mei, S. Multiscale and Cross-Level Attention Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5501615. [Google Scholar] [CrossRef]
- Zhang, H.; Gong, C.; Bai, Y.; Bai, Z.; Li, Y. 3-D-ANAS: 3-D Asymmetric Neural Architecture Search for Fast Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–19. [Google Scholar] [CrossRef]
- Safari, K.; Prasad, S.; Labate, D. A Multiscale Deep Learning Approach for High-Resolution Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2021, 18, 167–171. [Google Scholar] [CrossRef]
- Zhang, X.; Wang, Y.; Zhang, N.; Xu, D.; Luo, H.; Chen, B.; Ben, G. Spectral–Spatial Fractal Residual Convolutional Neural Network With Data Balance Augmentation for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10473–10487. [Google Scholar] [CrossRef]
- Wu, P.; Cui, Z.; Gan, Z.; Liu, F. Residual Group Channel and Space Attention Network for Hyperspectral Image Classification. Remote Sens. 2020, 12, 2035. [Google Scholar] [CrossRef]
- Guo, H.; Liu, J.; Yang, J.; Xiao, Z.; Wu, Z. Deep Collaborative Attention Network for Hyperspectral Image Classification by Combining 2-D CNN and 3-D CNN. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4789–4802. [Google Scholar] [CrossRef]
- Liu, D.; Li, Q.; Li, M.; Zhang, J. A Decompressed Spectral-Spatial Multiscale Semantic Feature Network for Hyperspectral Image Classification. Remote Sens. 2023, 15, 4642. [Google Scholar] [CrossRef]
- Chen, Y.; Jiang, C.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
- Cao, X.; Ren, M.; Zhao, J.; Li, H.; Jiao, L. Hyperspectral Imagery Classification Based on Compressed Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1583–1587. [Google Scholar] [CrossRef]
- Xie, J.; He, N.; Fang, L.; Ghamisi, P. Multiscale Densely-Connected Fusion Networks for Hyperspectral Images Classification. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 246–259. [Google Scholar] [CrossRef]
- Wang, J.; Huang, R.; Guo, S.; Li, L.; Zhu, M.; Yang, S.; Jiao, L. NAS-Guided Lightweight Multiscale Attention Fusion Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8754–8767. [Google Scholar] [CrossRef]
- Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual Spectral–Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 449–462. [Google Scholar] [CrossRef]
- Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral–Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7831–7843. [Google Scholar] [CrossRef]
- Zhang, X.; Shang, S.; Tang, X.; Feng, J.; Jiao, L. Spectral Partitioning Residual Network with Spatial Attention Mechanism for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
- Dong, Z.; Cai, Y.; Cai, Z.; Liu, X.; Yang, Z.; Zhuge, M. Cooperative Spectral–Spatial Attention Dense Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2021, 18, 866–870. [Google Scholar] [CrossRef]
- Gao, H.; Miao, Y.; Cao, X.; Li, C. Densely Connected Multiscale Attention Network for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2563–2576. [Google Scholar] [CrossRef]
- Wang, X.; Fan, Y. Multiscale Densely Connected Attention Network for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1617–1628. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Zhang, C.; Li, G.; Du, S. Multi-Scale Dense Networks for Hyperspectral Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9201–9222. [Google Scholar] [CrossRef]
- Guo, W.; Ye, H.; Cao, F. Feature-Grouped Network with Spectral–Spatial Connected Attention for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5500413. [Google Scholar] [CrossRef]
- Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
- Gao, H.; Yang, Y.; Li, C.; Gao, L.; Zhang, B. Multiscale Residual Network with Mixed Depthwise Convolution for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3396–3408. [Google Scholar] [CrossRef]
- Li, Z.; Zhao, X.; Xu, Y.; Li, W.; Zhai, L.; Fang, Z.; Shi, X. Hyperspectral Image Classification with Multiattention Fusion Network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 5503305. [Google Scholar] [CrossRef]
- Xu, Y.; Li, Z.; Li, W.; Du, Q.; Liu, C.; Fang, Z.; Zhai, L. Dual-Channel Residual Network for Hyperspectral Image Classification with Noisy Labels. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5507916. [Google Scholar] [CrossRef]
- Xiang, J.; Wei, C.; Wang, M.; Teng, L. End-to-End Multilevel Hybrid Attention Framework for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5511305. [Google Scholar] [CrossRef]
No. | Class | Train | Test
---|---|---|---
1 | Asphalt | 1326 | 5305
2 | Meadows | 3729 | 14,920
3 | Gravel | 419 | 1680
4 | Trees | 612 | 2452
5 | Metal sheets | 269 | 1076
6 | Bare Soil | 1005 | 4024
7 | Bitumen | 266 | 1064
8 | Bricks | 736 | 2946
9 | Shadows | 189 | 758
Total | | 8551 | 34,225
No. | Class | Train | Test
---|---|---|---
1 | Alfalfa | 10 | 36
2 | Corn–notill | 286 | 1142
3 | Corn–mintill | 166 | 664
4 | Corn | 48 | 189
5 | Grass–pasture | 97 | 386
6 | Grass–trees | 146 | 584
7 | Grass–pasture–mowed | 6 | 22
8 | Hay–windrowed | 96 | 382
9 | Oats | 4 | 16
10 | Soybean–notill | 195 | 777
11 | Soybean–mintill | 491 | 1964
12 | Soybean–clean | 119 | 474
13 | Wheat | 41 | 164
14 | Woods | 253 | 1012
15 | Buildings–grass–tree | 78 | 308
16 | Stone–steel–towers | 19 | 74
Total | | 2055 | 8194
No. | Class | Train | Test
---|---|---|---
1 | Healthy grass | 251 | 1000
2 | Stressed grass | 251 | 1003
3 | Synthetic grass | 140 | 557
4 | Trees | 249 | 995
5 | Soil | 249 | 993
6 | Water | 65 | 260
7 | Residential | 254 | 1014
8 | Commercial | 249 | 995
9 | Road | 251 | 1001
10 | Highway | 246 | 981
11 | Railway | 247 | 988
12 | Parking Lot 1 | 247 | 986
13 | Parking Lot 2 | 94 | 375
14 | Tennis Court | 86 | 342
15 | Running Track | 132 | 528
Total | | 3011 | 12,018
No. | SVM | RF | KNN | GaussianNB | HybridSN | RSSAN | MSRN | MAFN | DCRN | DMCN | MSDAN | HFENet
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 76.52 | 93.17 | 91.34 | 96.01 | 99.53 | 99.79 | 99.83 | 100.00 | 99.46 | 97.86 | 99.89 | 99.94 |
2 | 85.94 | 89.70 | 88.74 | 80.20 | 99.99 | 99.91 | 99.99 | 100.00 | 99.95 | 99.94 | 99.76 | 99.98 |
3 | 83.78 | 85.32 | 71.81 | 28.09 | 99.58 | 99.13 | 100.00 | 100.00 | 97.99 | 93.11 | 73.50 | 99.82 |
4 | 95.94 | 94.83 | 96.61 | 50.01 | 100.00 | 99.88 | 96.61 | 92.81 | 84.55 | 86.88 | 99.71 | 100.00 |
5 | 99.81 | 99.27 | 99.33 | 80.49 | 99.72 | 99.91 | 98.63 | 100.00 | 86.46 | 65.72 | 100.00 | 100.00 |
6 | 95.86 | 91.48 | 82.30 | 37.77 | 100.00 | 99.33 | 99.62 | 98.72 | 99.88 | 86.54 | 99.63 | 100.00 |
7 | 0.00 | 86.90 | 74.87 | 40.61 | 98.88 | 99.53 | 99.91 | 100.00 | 96.89 | 59.21 | 100.00 | 99.91 |
8 | 67.39 | 83.01 | 80.68 | 69.98 | 99.86 | 97.33 | 93.49 | 97.18 | 96.08 | 98.95 | 99.32 | 100.00 |
9 | 99.87 | 99.87 | 100.00 | 100.00 | 99.45 | 99.21 | 95.61 | 97.04 | 96.84 | 92.60 | 99.06 | 99.74 |
OA (%) | 83.89 | 90.38 | 87.63 | 67.46 | 99.83 | 99.53 | 98.94 | 98.98 | 97.55 | 92.53 | 98.00 | 99.96 |
AA (%) | 70.90 | 87.71 | 85.14 | 73.03 | 99.35 | 99.23 | 98.21 | 99.19 | 94.83 | 90.08 | 97.13 | 99.94 |
Kappa × 100 | 77.82 | 87.05 | 83.34 | 58.48 | 99.78 | 99.38 | 98.59 | 98.66 | 96.76 | 90.17 | 97.35 | 99.95 |
No. | SVM | RF | KNN | GaussianNB | HybridSN | RSSAN | MSRN | MAFN | DCRN | DMCN | MSDAN | HFENet |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0.00 | 86.67 | 36.36 | 31.07 | 97.06 | 97.30 | 90.32 | 92.31 | 100.00 | 100.00 | 100.00 | 100.00 |
2 | 61.51 | 82.02 | 50.38 | 45.54 | 98.86 | 98.00 | 97.45 | 96.20 | 77.30 | 97.46 | 98.95 | 99.21 |
3 | 84.04 | 78.66 | 61.95 | 35.92 | 97.04 | 99.54 | 98.74 | 100.00 | 86.25 | 93.50 | 99.54 | 100.00 |
4 | 46.43 | 72.87 | 53.26 | 15.31 | 98.86 | 99.46 | 99.39 | 96.89 | 95.94 | 96.81 | 98.85 | 98.42 |
5 | 88.82 | 90.16 | 84.71 | 3.57 | 98.47 | 98.22 | 92.54 | 100.00 | 96.00 | 98.69 | 98.70 | 99.23 |
6 | 76.72 | 82.61 | 78.08 | 67.87 | 100.00 | 99.83 | 99.65 | 99.49 | 97.27 | 100.00 | 99.49 | 100.00 |
7 | 0.00 | 83.33 | 68.42 | 100.00 | 100.00 | 100.00 | 100.00 | 95.65 | 100.00 | 100.00 | 86.96 | 100.00 |
8 | 83.49 | 87.16 | 88.55 | 83.78 | 96.46 | 99.48 | 80.08 | 100.00 | 100.00 | 98.70 | 99.74 | 100.00 |
9 | 0.00 | 100.00 | 40.00 | 11.02 | 76.19 | 100.00 | 0.00 | 100.00 | 62.50 | 100.00 | 100.00 | 100.00 |
10 | 70.89 | 83.61 | 69.40 | 27.07 | 99.74 | 99.48 | 88.93 | 100.00 | 84.39 | 99.87 | 98.46 | 99.48 |
11 | 58.51 | 75.16 | 69.49 | 60.60 | 98.77 | 99.19 | 97.57 | 99.90 | 92.64 | 99.69 | 99.74 | 99.49 |
12 | 59.38 | 66.74 | 62.13 | 23.95 | 98.34 | 98.13 | 91.52 | 97.90 | 92.46 | 92.74 | 91.30 | 98.34 |
13 | 82.23 | 92.53 | 86.70 | 84.38 | 100.00 | 99.39 | 94.58 | 100.00 | 90.30 | 96.91 | 97.02 | 100.00 |
14 | 87.39 | 89.78 | 91.76 | 75.08 | 99.90 | 99.80 | 100.00 | 99.90 | 100.00 | 99.40 | 99.90 | 99.80 |
15 | 86.30 | 72.00 | 64.127 | 53.17 | 94.12 | 98.72 | 100.00 | 99.68 | 97.60 | 92.92 | 95.00 | 100.00 |
16 | 98.36 | 100.00 | 100.00 | 98.44 | 98.67 | 97.33 | 94.37 | 94.87 | 100.00 | 91.14 | 98.53 | 98.67 |
OA (%) | 70.21 | 89.91 | 70.95 | 50.88 | 98.58 | 99.07 | 95.56 | 99.07 | 90.36 | 97.86 | 98.61 | 99.51 |
AA (%) | 53.06 | 66.77 | 62.39 | 52.65 | 96.87 | 96.53 | 86.25 | 98.60 | 82.22 | 93.47 | 94.85 | 99.70 |
Kappa × 100 | 65.07 | 78.01 | 66.63 | 44.07 | 98.39 | 98.94 | 94.94 | 98.94 | 88.97 | 97.57 | 98.41 | 99.44 |
No. | SVM | RF | KNN | GaussianNB | HybridSN | RSSAN | MSRN | MAFN | DCRN | DMCN | MSDAN | HFENet |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 81.98 | 98.49 | 97.74 | 93.97 | 95.88 | 98.52 | 99.80 | 99.70 | 98.88 | 99.78 | 99.00 | 99.40 |
2 | 98.85 | 98.40 | 98.44 | 98.31 | 98.57 | 99.80 | 100.00 | 99.11 | 97.84 | 94.80 | 99.90 | 99.50 |
3 | 96.68 | 99.81 | 98.37 | 91.35 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
4 | 98.53 | 98.00 | 99.40 | 98.81 | 99.69 | 99.60 | 99.50 | 99.80 | 95.51 | 98.12 | 99.50 | 99.90 |
5 | 88.51 | 95.07 | 92.42 | 71.57 | 100.00 | 100.00 | 100.00 | 99.80 | 99.80 | 99.90 | 100.00 | 100.00 |
6 | 100.00 | 99.17 | 100.00 | 35.89 | 100.00 | 100.00 | 100.00 | 100.00 | 97.65 | 85.81 | 100.00 | 100.00 |
7 | 68.66 | 88.87 | 89.39 | 54.52 | 96.41 | 97.37 | 99.70 | 99.10 | 94.51 | 97.92 | 98.82 | 99.51 |
8 | 84.53 | 92.23 | 88.06 | 79.41 | 97.64 | 98.70 | 100.00 | 97.85 | 98.90 | 96.47 | 98.03 | 99.70 |
9 | 59.65 | 80.72 | 79.22 | 43.03 | 96.23 | 96.52 | 97.46 | 99.19 | 96.44 | 99.68 | 99.19 | 100.00 |
10 | 58.38 | 85.59 | 84.59 | 0.00 | 99.09 | 97.48 | 99.90 | 97.98 | 100.00 | 93.25 | 99.90 | 99.29 |
11 | 59.17 | 80.72 | 83.40 | 35.92 | 100.00 | 97.23 | 100.00 | 98.40 | 100.00 | 98.31 | 99.20 | 99.60 |
12 | 63.07 | 76.86 | 79.79 | 24.31 | 99.49 | 97.47 | 99.39 | 96.54 | 99.39 | 98.00 | 99.29 | 99.90 |
13 | 100.00 | 87.74 | 93.62 | 17.54 | 100.00 | 99.70 | 86.78 | 98.88 | 100.00 | 100.00 | 98.93 | 99.72 |
14 | 78.57 | 96.50 | 98.56 | 68.72 | 100.00 | 99.13 | 100.00 | 100.00 | 100.00 | 100.00 | 98.84 | 100.00 |
15 | 99.08 | 99.62 | 99.43 | 99.79 | 99.81 | 100.00 | 98.32 | 97.24 | 95.83 | 100.00 | 100.00 | 100.00 |
OA (%) | 77.90 | 90.23 | 90.50 | 61.21 | 98.55 | 98.53 | 99.11 | 98.80 | 98.19 | 97.60 | 99.33 | 99.73 |
AA (%) | 77.15 | 89.17 | 88.87 | 63.67 | 98.39 | 98.38 | 99.08 | 98.58 | 97.95 | 97.31 | 99.23 | 99.62 |
Kappa × 100 | 76.07 | 89.86 | 89.72 | 58.1595 | 98.43 | 98.41 | 99.04 | 98.70 | 98.04 | 97.41 | 99.28 | 99.70 |
Liu, D.; Shao, T.; Qi, G.; Li, M.; Zhang, J. A Hybrid-Scale Feature Enhancement Network for Hyperspectral Image Classification. Remote Sens. 2024, 16, 22. https://doi.org/10.3390/rs16010022