Hyperspectral Image Classification Method Based on Morphological Features and Hybrid Convolutional Neural Networks
Abstract
1. Introduction
- A new HRSI classification framework is proposed, combining principal component analysis (PCA), morphological profiles (MPs), CNNs, residual connections, and an attention mechanism.
- A new 3D–2D CNN model is designed that combines 3D convolutions, which extract joint spatial–spectral features, with 2D convolutions, which extract spatial features only. This combination effectively improves the classification accuracy of HRSIs.
- Residual connections and the attention mechanism are combined into a multiscale residual attention module that refines the feature maps.
- The multiscale convolutions in the 3D–2D CNN structure are built from depthwise separable convolutions (DSCs), which substantially reduce the number of parameters and the computational cost, and help prevent overfitting.
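To make the 3D-to-2D hand-off concrete, the following is a minimal PyTorch sketch of the hybrid idea: 3D convolutions over a 21 × 21 × 20 input patch, a reshape that folds the spectral axis into channels, then a 2D convolution over the resulting feature maps. The layer widths and the classifier head are illustrative assumptions, not the paper's exact configuration (which additionally uses multiscale DSC branches, residual connections, and attention).

```python
import torch
import torch.nn as nn

class Hybrid3D2D(nn.Module):
    """Minimal 3D-2D hybrid CNN sketch for a 21x21 spatial window with
    20 PCA bands. Widths are illustrative, not the paper's exact net."""
    def __init__(self, n_classes=16):
        super().__init__()
        # 3D convolutions extract joint spatial-spectral features
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, 3), nn.ReLU(),
            nn.Conv3d(8, 16, 3), nn.ReLU(),
        )
        # after folding the 16 remaining spectral slices into channels,
        # 2D convolution refines spatial features only
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * 16, 64, 3), nn.ReLU(),
        )
        self.head = nn.Linear(64 * 15 * 15, n_classes)

    def forward(self, x):                  # x: (B, 1, 21, 21, 20)
        f = self.conv3d(x)                 # (B, 16, 17, 17, 16)
        b, c, h, w, d = f.shape
        # move the spectral axis next to channels before flattening it
        f = f.permute(0, 1, 4, 2, 3).reshape(b, c * d, h, w)
        f = self.conv2d(f)                 # (B, 64, 15, 15)
        return self.head(f.flatten(1))     # (B, n_classes)

logits = Hybrid3D2D()(torch.randn(2, 1, 21, 21, 20))
print(logits.shape)  # torch.Size([2, 16])
```

The reshape step is where spectral information stops being treated as a separate axis; everything the 3D stage learned about band structure is carried forward as channel depth.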
2. Problem Formulation
2.1. PCA
2.2. Binarization Process
2.3. MP
2.4. CNNs
2.5. DSC
2.6. Residual Connections
2.7. Attention Mechanism
3. Experiments and Discussion
3.1. Dataset Description
3.2. Parameter Setting
3.3. Parameter Analysis
3.3.1. Effect of Patch Size on Accuracy
3.3.2. Effect of Dropout Probability Values on Accuracy
3.3.3. Effectiveness of MP
3.3.4. Effectiveness of Residual Connections
3.3.5. Effectiveness of Attention Mechanism
3.4. Classification Results and Analysis
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Layer (Type) | Kernel Size | Stride | Padding | Output Shape
---|---|---|---|---
Input layer | | | | Input: [(21,21,20,1)]
Conv3D_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_1: (19,19,18,8)
Conv3D_2 | (3,3,3) | (1,1,1) | (0,0,0) | Out_2: (17,17,16,16)
Separable_Conv3D_3_1_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_1_1: (15,15,14,32)
Conv3D_3_1_2 | (1,1,1) | (1,1,1) | (0,0,0) | Out_3_1_2: (15,15,14,16)
Separable_Conv3D_3_2_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_2_1: (15,15,14,32)
Conv3D_3_2_2 | (1,1,1) | (1,1,1) | (0,0,0) | Out_3_2_2: (15,15,14,32)
Separable_Conv3D_3_2_3 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_2_3: (13,13,12,32)
Conv3D_3_2_4 | (1,1,1) | (1,1,1) | (1,1,1) | Out_3_2_4: (15,15,14,16)
Concatenate_1 (Out_3_1_2, Out_3_2_4) | | | | Out_C_1: (15,15,14,32)
Residual Connection_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_R_1: (15,15,14,32)
Add (Out_C_1, Out_R_1) | | | | Out_A_1: (15,15,14,32)
Reshape (Out_A_1) | | | | Out_Re: (15,15,448)
Separable_Conv2D_4_1_1 | (3,3) | (1,1) | (0,0) | Out_4_1_1: (13,13,64)
Conv2D_4_1_2 | (1,1) | (1,1) | (0,0) | Out_4_1_2: (13,13,32)
Separable_Conv2D_4_2_1 | (3,3) | (1,1) | (0,0) | Out_4_2_1: (13,13,64)
Conv2D_4_2_2 | (1,1) | (1,1) | (0,0) | Out_4_2_2: (13,13,64)
Separable_Conv2D_4_2_3 | (3,3) | (1,1) | (0,0) | Out_4_2_3: (11,11,64)
Conv2D_4_2_4 | (1,1) | (1,1) | (1,1) | Out_4_2_4: (13,13,32)
Concatenate_2 (Out_4_1_2, Out_4_2_4) | | | | Out_C_2: (13,13,64)
Attention | | | | Out_SE: (13,13,64)
Residual Connection_2 | (3,3) | (1,1) | (0,0) | Out_R_2: (13,13,64)
Add (Out_C_2, Out_R_2) | | | | Out_A_2: (13,13,64)
Flatten | | | | Out_F: (10,816)
Linear_1 | | | | Out_L_1: (256)
Dropout_1 | | | | Out_D_1: (256)
Linear_2 | | | | Out_L_2: (128)
Dropout_2 | | | | Out_D_2: (128)
Linear_3 | | | | Out_L_3: (16)
Total Parameters: 3,270,656 | | | |
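A quick check of why the separable convolutions in the table keep the total parameter count modest: a 3D depthwise separable convolution replaces one dense 3 × 3 × 3 filter bank with a per-channel depthwise convolution followed by a 1 × 1 × 1 pointwise convolution. The PyTorch comparison below uses illustrative channel counts (16 → 32), not any specific row of the table.

```python
import torch.nn as nn

def n_params(m):
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

# Standard dense 3D convolution: 16 -> 32 channels, 3x3x3 kernel
standard = nn.Conv3d(16, 32, kernel_size=3)

# Depthwise separable variant: per-channel 3x3x3 (groups=16),
# then a 1x1x1 pointwise convolution to mix channels
separable = nn.Sequential(
    nn.Conv3d(16, 16, kernel_size=3, groups=16),
    nn.Conv3d(16, 32, kernel_size=1),
)

print(n_params(standard))   # 16*32*27 + 32 = 13856
print(n_params(separable))  # (27+1)*16 + (16*32+32) = 992
```

Roughly a 14x reduction for this layer, which is why stacking several multiscale branches stays affordable.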
Layer (Type) | Kernel Size | Stride | Padding | Output Shape
---|---|---|---|---
Input layer | | | | Input: [(21,21,20,1)]
Conv3D_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_1: (19,19,18,8)
Conv3D_2 | (3,3,3) | (1,1,1) | (0,0,0) | Out_2: (17,17,16,16)
Separable_Conv3D_3_1_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_1_1: (15,15,14,32)
Conv3D_3_1_2 | (1,1,1) | (1,1,1) | (0,0,0) | Out_3_1_2: (15,15,14,16)
Separable_Conv3D_3_2_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_2_1: (15,15,14,32)
Conv3D_3_2_2 | (1,1,1) | (1,1,1) | (0,0,0) | Out_3_2_2: (15,15,14,32)
Separable_Conv3D_3_2_3 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_2_3: (13,13,12,32)
Conv3D_3_2_4 | (1,1,1) | (1,1,1) | (1,1,1) | Out_3_2_4: (15,15,14,16)
Concatenate_1 (Out_3_1_2, Out_3_2_4) | | | | Out_C_1: (15,15,14,32)
Attention_1 | | | | Out_SE_1: (15,15,14,32)
Residual Connection_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_R_1: (15,15,14,32)
Add (Out_C_1, Out_R_1) | | | | Out_A_1: (15,15,14,32)
Reshape (Out_A_1) | | | | Out_Re: (15,15,448)
Separable_Conv2D_4_1_1 | (3,3) | (1,1) | (0,0) | Out_4_1_1: (13,13,64)
Conv2D_4_1_2 | (1,1) | (1,1) | (0,0) | Out_4_1_2: (13,13,32)
Separable_Conv2D_4_2_1 | (3,3) | (1,1) | (0,0) | Out_4_2_1: (13,13,64)
Conv2D_4_2_2 | (1,1) | (1,1) | (0,0) | Out_4_2_2: (13,13,64)
Separable_Conv2D_4_2_3 | (3,3) | (1,1) | (0,0) | Out_4_2_3: (11,11,64)
Conv2D_4_2_4 | (1,1) | (1,1) | (1,1) | Out_4_2_4: (13,13,32)
Concatenate_2 (Out_4_1_2, Out_4_2_4) | | | | Out_C_2: (13,13,64)
Attention_2 | | | | Out_SE_2: (13,13,64)
Residual Connection_2 | (3,3) | (1,1) | (0,0) | Out_R_2: (13,13,64)
Add (Out_C_2, Out_R_2) | | | | Out_A_2: (13,13,64)
Flatten | | | | Out_F: (10,816)
Linear_1 | | | | Out_L_1: (256)
Dropout_1 | | | | Out_D_1: (256)
Linear_2 | | | | Out_L_2: (128)
Dropout_2 | | | | Out_D_2: (128)
Linear_3 | | | | Out_L_3: (7)
Total Parameters: 3,269,495 | | | |
Layer (Type) | Kernel Size | Stride | Padding | Output Shape
---|---|---|---|---
Input layer | | | | Input: [(21,21,10,1)]
Conv3D_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_1: (19,19,8,8)
Conv3D_2 | (3,3,3) | (1,1,1) | (0,0,0) | Out_2: (17,17,6,16)
Separable_Conv3D_3_1_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_1_1: (15,15,4,32)
Conv3D_3_1_2 | (1,1,1) | (1,1,1) | (0,0,0) | Out_3_1_2: (15,15,4,16)
Separable_Conv3D_3_2_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_2_1: (15,15,4,32)
Conv3D_3_2_2 | (1,1,1) | (1,1,1) | (0,0,0) | Out_3_2_2: (15,15,4,32)
Separable_Conv3D_3_2_3 | (3,3,3) | (1,1,1) | (0,0,0) | Out_3_2_3: (13,13,2,32)
Conv3D_3_2_4 | (1,1,1) | (1,1,1) | (1,1,1) | Out_3_2_4: (15,15,4,16)
Concatenate_1 (Out_3_1_2, Out_3_2_4) | | | | Out_C_1: (15,15,4,32)
Residual Connection_1 | (3,3,3) | (1,1,1) | (0,0,0) | Out_R_1: (15,15,4,32)
Add (Out_C_1, Out_R_1) | | | | Out_A_1: (15,15,4,32)
Reshape (Out_A_1) | | | | Out_Re: (15,15,128)
Separable_Conv2D_4_1_1 | (3,3) | (1,1) | (0,0) | Out_4_1_1: (13,13,64)
Conv2D_4_1_2 | (1,1) | (1,1) | (0,0) | Out_4_1_2: (13,13,32)
Separable_Conv2D_4_2_1 | (3,3) | (1,1) | (0,0) | Out_4_2_1: (13,13,64)
Conv2D_4_2_2 | (1,1) | (1,1) | (0,0) | Out_4_2_2: (13,13,64)
Separable_Conv2D_4_2_3 | (3,3) | (1,1) | (0,0) | Out_4_2_3: (11,11,64)
Conv2D_4_2_4 | (1,1) | (1,1) | (1,1) | Out_4_2_4: (13,13,32)
Concatenate_2 (Out_4_1_2, Out_4_2_4) | | | | Out_C_2: (13,13,64)
Attention | | | | Out_SE: (13,13,64)
Residual Connection_2 | (3,3) | (1,1) | (0,0) | Out_R_2: (13,13,64)
Add (Out_C_2, Out_R_2) | | | | Out_A_2: (13,13,64)
Flatten | | | | Out_F: (10,816)
Linear_1 | | | | Out_L_1: (256)
Dropout_1 | | | | Out_D_1: (256)
Linear_2 | | | | Out_L_2: (128)
Dropout_2 | | | | Out_D_2: (128)
Linear_3 | | | | Out_L_3: (9)
Total Parameters: 3,059,193 | | | |
Dataset | Metric | 17 × 17 | 19 × 19 | 21 × 21 | 23 × 23 | 25 × 25
---|---|---|---|---|---|---
IP | OA | 0.13 | 0.16 | | |
IP | AA | 0.33 | 0.32 | | |
IP | K | 0.15 | 0.09 | | |
UP | OA | 0.02 | 0.03 | 0.02 | 0.04 |
UP | AA | 0.05 | 0.05 | 0.03 | 0.07 |
UP | K | 0.03 | 0.04 | 0.02 | |
My Dataset | OA | 0.04 | 0.03 | 0.05 | 0.03 |
My Dataset | AA | 0.08 | 0.07 | 0.06 | 0.05 |
My Dataset | K | 0.05 | 0.03 | 0.05 | 0.02 |
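Since the patch size sets the spatial context each sample carries, the extraction step itself is simple: pad the PCA-reduced cube and take an S × S neighborhood around every labeled pixel. The helper below is a hypothetical NumPy sketch (the paper does not publish its extraction code), shown with a toy cube.

```python
import numpy as np

def extract_patches(cube, size=21):
    """Extract a size x size spatial neighborhood around every pixel of
    a (H, W, B) PCA-reduced cube, zero-padding the borders. Hypothetical
    helper for illustration only."""
    m = size // 2
    padded = np.pad(cube, ((m, m), (m, m), (0, 0)))  # pad spatial dims only
    h, w, b = cube.shape
    patches = np.empty((h * w, size, size, b), dtype=cube.dtype)
    k = 0
    for i in range(h):
        for j in range(w):
            patches[k] = padded[i:i + size, j:j + size, :]
            k += 1
    return patches

cube = np.random.rand(10, 12, 20)   # toy cube: 10x12 pixels, 20 PCA bands
p = extract_patches(cube, size=21)
print(p.shape)                      # (120, 21, 21, 20)
```

Each patch is centered on its pixel, so `p[k, size // 2, size // 2]` is exactly the spectrum of that pixel; larger patches trade memory and compute for more spatial context.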
Dataset | Metric | 25% | 30% | 35% | 40% | 45%
---|---|---|---|---|---|---
IP | OA | 0.06 | 0.08 | 0.12 | 0.08 |
IP | AA | 0.43 | 0.32 | 0.5 | 0.36 |
IP | K | 0.07 | 0.07 | 0.14 | 0.08 |
UP | OA | 0.11 | 0.11 | 0.02 | 0.02 |
UP | AA | 0.23 | 0.15 | 0.06 | 0.05 |
UP | K | 0.15 | 0.14 | 0.03 | 0.02 |
My Dataset | OA | 0.02 | 0.04 | 0.03 | 0.08 |
My Dataset | AA | 0.05 | 0.07 | 0.05 | 0.17 |
My Dataset | K | 0.03 | 0.05 | 0.03 | 0.08 |
Dataset | Metric | Proposed Model | No MP | No Res | No Attention
---|---|---|---|---|---
IP | OA | 0.11 | 0.13 | 0.14 |
IP | AA | 0.46 | 0.43 | 0.5 |
IP | K | 0.13 | 0.14 | 0.15 |
UP | OA | 0.02 | 0.02 | 0.07 |
UP | AA | 0.06 | 0.06 | 0.06 |
UP | K | 0.03 | 0.03 | 0.09 |
My Dataset | OA | 0.03 | 0.04 | 0.11 |
My Dataset | AA | 0.06 | 0.13 | 0.08 |
My Dataset | K | 0.04 | 0.06 | 0.13 |
Class | SVM | 2D CNN | 3D CNN | SSRN | HybridSN | Proposed Model |
---|---|---|---|---|---|---|
1 | 83.21 | 76.3 | 100 | 100 | 100 | 100 |
2 | 74.83 | 82.5 | 77.92 | 98.79 | 99.30 | 99.50 |
3 | 81.05 | 86.9 | 91.25 | 100 | 99.83 | 99.83 |
4 | 78.72 | 63.54 | 91.84 | 98.96 | 100 | 98.81 |
5 | 74.65 | 89.63 | 98.92 | 99.20 | 99.11 | 100 |
6 | 92.5 | 99.03 | 97.99 | 99.31 | 99.80 | 100 |
7 | 95.31 | 77.41 | 100 | 100 | 100 | 100 |
8 | 84.7 | 100 | 96.97 | 100 | 100 | 100 |
9 | 96.83 | 65.31 | 100 | 100 | 100 | 100 |
10 | 72.04 | 81.93 | 80.6 | 99.36 | 99.85 | 99.85 |
11 | 77.51 | 90.65 | 86.44 | 99.80 | 99.59 | 99.88 |
12 | 85.6 | 84.25 | 90.74 | 98.54 | 98.33 | 99.04 |
13 | 84.61 | 99.36 | 97.62 | 94.25 | 98.59 | 100 |
14 | 97.54 | 98.68 | 97.64 | 99.12 | 99.66 | 99.89 |
15 | 93.61 | 88.29 | 94.44 | 97.77 | 99.26 | 98.18 |
16 | 73.4 | 99.63 | 100 | 100 | 95.31 | 100 |
OA | ||||||
AA | ||||||
κ |
Class | SVM | 2D CNN | 3D CNN | SSRN | HybridSN | Proposed Model |
---|---|---|---|---|---|---|
1 | 95.31 | 99.77 | 98.04 | 99.85 | 100 | 99.98 |
2 | 96.64 | 100 | 97.03 | 99.98 | 99.95 | 99.99 |
3 | 83.5 | 99.75 | 95.08 | 99.93 | 100 | 100 |
4 | 95.36 | 100 | 99.67 | 99.91 | 99.91 | 99.91 |
5 | 99.43 | 51.93 | 100 | 100 | 100 | 100 |
6 | 89.69 | 99.80 | 99.63 | 100 | 100 | 100 |
7 | 88.23 | 99.25 | 96.72 | 100 | 99.68 | 100 |
8 | 87.39 | 95.93 | 92.22 | 98.81 | 99.92 | 99.92 |
9 | 99.81 | 100 | 99.65 | 99.55 | 99.55 | 99.85 |
OA | ||||||
AA | ||||||
κ |
Class | SVM | 2D CNN | 3D CNN | SSRN | HybridSN | Proposed Model |
---|---|---|---|---|---|---|
1 | 79.38 | 100 | 95.48 | 100 | 100 | 100 |
2 | 97.41 | 68.23 | 99.26 | 97.65 | 98.54 | 99.37 |
3 | 95.43 | 99.89 | 97.38 | 99.59 | 99.42 | 99.45 |
4 | 99.10 | 100 | 99.84 | 99.79 | 100 | 100 |
5 | 95.59 | 100 | 99.92 | 100 | 99.98 | 100 |
6 | 95.36 | 100 | 99.24 | 100 | 100 | 100 |
7 | 99.5 | 100 | 100 | 100 | 100 | 100 |
OA | ||||||
AA | ||||||
κ |
Method | Time | IP | UP | My Dataset
---|---|---|---|---
SVM | Train (s) | 2.11 | 5.32 | 3.94
SVM | Test (s) | 1.03 | 2.31 | 1.84
2D CNN | Train (m) | 1.12 | 1.31 | 1.53
2D CNN | Test (s) | 0.9 | 1.52 | 1.61
3D CNN | Train (m) | 3.21 | 8.17 | 16.63
3D CNN | Test (s) | 8.51 | 13.04 | 30.26
SSRN | Train (m) | 4.73 | 6.93 | 14.11
SSRN | Test (s) | 5.89 | 11.36 | 25.13
HybridSN | Train (m) | 2.98 | 3.65 | 7.43
HybridSN | Test (s) | 1.98 | 2.06 | 4.92
Proposed Model | Train (m) | 2.64 | 3.99 | 6.56
Proposed Model | Test (s) | 1.23 | 1.63 | 2.91
Ran, T.; Shi, G.; Zhang, Z.; Pan, Y.; Zhu, H. Hyperspectral Image Classification Method Based on Morphological Features and Hybrid Convolutional Neural Networks. Appl. Sci. 2024, 14, 10577. https://doi.org/10.3390/app142210577