A Spectral Spatial Attention Fusion with Deformable Convolutional Residual Network for Hyperspectral Image Classification
Abstract
1. Introduction
- (1) This paper proposes an end-to-end sequential deep feature extraction and classification network, which differs from other multi-branch structures. It increases the depth of the network and achieves more effective feature extraction and fusion, thereby improving classification performance.
- (2) We propose a new way to extract the spectral–spatial features of HSIs: the spectral and low-level spatial features are extracted with a 3D CNN, and the high-level spatial features are extracted with a 2D CNN.
- (3) For the extracted spatial and spectral features, a residual-like fusion method is designed, which further improves the representation of the spatial–spectral features of HSIs and thus contributes to accurate classification.
- (4) To overcome the limitation of the traditional convolution kernel's fixed receptive field for feature extraction, we introduce deformable convolution and design the DCR module to further extract spatial features. This not only adjusts the receptive field but also further improves classification performance and enhances the generalization ability of the model.
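The deformable convolution in contribution (4) replaces the fixed 3 × 3 sampling grid with per-tap learned offsets, read out by bilinear interpolation. Below is a minimal single-channel NumPy sketch of that sampling step; the function name and layout are illustrative, not the authors' implementation (in the network the offsets come from a learned convolution):

```python
import numpy as np

def deform_sample(feat, offsets):
    """Sample a 3x3 neighborhood around the center pixel of `feat`,
    shifting each kernel tap by a (dy, dx) offset and reading the
    value with bilinear interpolation.
    feat: (H, W) single-channel feature map; offsets: (9, 2) array."""
    H, W = feat.shape
    cy, cx = H // 2, W // 2
    taps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # regular grid
    samples = []
    for (by, bx), (oy, ox) in zip(taps, offsets):
        # deformed sampling position, clamped to the image bounds
        y = float(np.clip(cy + by + oy, 0, H - 1))
        x = float(np.clip(cx + bx + ox, 0, W - 1))
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
        wy, wx = y - y0, x - x0
        # bilinear interpolation between the four surrounding pixels
        samples.append(feat[y0, x0] * (1 - wy) * (1 - wx)
                       + feat[y0, x1] * (1 - wy) * wx
                       + feat[y1, x0] * wy * (1 - wx)
                       + feat[y1, x1] * wy * wx)
    return np.array(samples)
```

With all offsets zero this reduces to an ordinary 3 × 3 convolution's sampling; nonzero offsets let the effective receptive field deform around object boundaries.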
2. Methodology
2.1. The Overall Structure of the Proposed Method
2.2. Dense Spectral and Spatial Blocks
2.3. Spectral–Spatial Self-Attention Block and Fusion Mechanisms
2.4. Strategy for High-Level Spatial Feature Extraction—DCR Block
2.5. Optimization Methods
- (1) PReLU Activation Function
- (2) Cosine-Annealing Learning Rate
- (3) Other Optimization Approaches
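The first two optimization choices have simple closed forms, sketched below in NumPy/Python. This is only an illustration of the formulas: in practice the PReLU slope `a` is learned per channel (0.25 is merely a common initial value), and the schedule parameters are assumptions:

```python
import math
import numpy as np

def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, slope `a` for negative ones."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, a * x)

def cosine_annealing_lr(t, T_max, lr_max, lr_min=0.0):
    """Cosine-annealing schedule: decays lr_max to lr_min over T_max steps."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T_max))
```

Unlike ReLU, PReLU keeps a small gradient for negative activations, and the cosine schedule lowers the learning rate smoothly instead of in abrupt steps.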
Algorithm 1 The SSAF-DCR model
Input: An HSI dataset and the corresponding label vectors.
Step 1: Extract cubes with a patch size of 9 × 9 × L from the dataset, where L is the number of spectral bands.
Step 2: Randomly divide the HSI dataset into training, validation, and testing sets, with the corresponding label vectors divided in the same way.
Step 3: Input the training and validation data, together with their labels, into the initial SSAF-DCR model.
Step 4: Calculate the dense blocks according to (2) to initially obtain the effective features.
Step 5: Selectively filter features according to (3)–(6).
Step 6: Further extract spatial features according to (8) and (9).
Step 7: Use Adam for iterative optimization.
Step 8: Input the testing data into the optimal model to predict the classification results.
Output: The classification results.
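Steps 1 and 2 of Algorithm 1 can be sketched as follows. This is a hedged NumPy illustration, not the authors' code; the mirror padding mode and the split fractions are assumptions:

```python
import numpy as np

def extract_patches(cube, patch=9):
    """Extract a patch x patch x L neighborhood around every pixel of an
    HSI cube of shape (H, W, L), mirror-padding the borders."""
    r = patch // 2
    H, W, L = cube.shape
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches = np.empty((H * W, patch, patch, L), dtype=cube.dtype)
    for i in range(H):
        for j in range(W):
            patches[i * W + j] = padded[i:i + patch, j:j + patch, :]
    return patches

def random_split(n, frac_train, frac_val, seed=0):
    """Randomly split n sample indices into train/val/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr, n_va = int(n * frac_train), int(n * frac_val)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
```

Each patch is centered on the pixel it labels, so the model sees the pixel's full spectrum plus its 9 × 9 spatial context.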
3. Experimental Results and Analysis
3.1. Dataset
3.2. Parameter Setting and Experimental Results
3.3. Efficiency of the Attention Fusion Strategy
3.4. Parameter Analysis
3.5. Ablation Experiments of Three Kinds of Blocks
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- [1] Chang, C.I. Hyperspectral Data Exploitation: Theory and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2007.
- [2] Patel, N.K.; Patnaik, C.; Dutta, S.; Shekh, A.M.; Dave, A.J. Study of crop growth parameters using airborne imaging spectrometer data. Int. J. Remote Sens. 2001, 22, 2401–2411.
- [3] Goetz, A.F.; Vane, G.; Solomon, J.E.; Rock, B.N. Imaging Spectrometry for Earth Remote Sensing. Science 1985, 228, 1147–1153.
- [4] Civco, D.L. Artificial neural networks for land-cover classification and mapping. Int. J. Geogr. Inf. Syst. 1993, 7, 173–186.
- [5] Ghamisi, P.; Benediktsson, J.A.; Ulfarsson, M.O. Spectral–Spatial Classification of Hyperspectral Images Based on Hidden Markov Random Fields. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2565–2574.
- [6] Farrugia, R.A.; Debono, C.J. A Robust Error Detection Mechanism for H.264/AVC Coded Video Sequences Based on Support Vector Machines. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1766–1770.
- [7] Zhong, P.; Wang, R. Jointly Learning the Hybrid CRF and MLR Model for Simultaneous Denoising and Classification of Hyperspectral Imagery. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1319–1334.
- [8] Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral-spatial classification of hyperspectral images with a superpixel-based discriminative sparse model. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4186–4201.
- [9] Fu, W.; Li, S.; Fang, L. Spectral-spatial hyperspectral image classification via superpixel merging and sparse representation. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4971–4974.
- [10] Fang, L.; Li, S.; Duan, W.; Ren, J.; Benediktsson, J.A. Classification of Hyperspectral Images by Exploiting Spectral–Spatial Information of Superpixel via Multiple Kernels. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6663–6674.
- [11] Zehtabian, A.; Ghassemian, H. An adaptive framework for spectral-spatial classification based on a combination of pixel-based and object-based scenarios. Earth Sci. Inform. 2017, 10, 357–368.
- [12] Addink, E.A.; De Jong, S.M.; Pebesma, E.J. The Importance of Scale in Object-based Mapping of Vegetation Parameters with Hyperspectral Imagery. Photogramm. Eng. Remote Sens. 2007, 73, 905–912.
- [13] Zeng, D.; Liu, K.; Chen, Y.; Zhao, J. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 1753–1762.
- [14] Gehring, J.; Auli, M.; Grangier, D.; Yarats, D.; Dauphin, Y.N. Convolutional Sequence to Sequence Learning. arXiv 2017, arXiv:1705.03122.
- [15] He, H.; Gimpel, K.; Lin, J. Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 1576–1586.
- [16] Li, G.; Li, L.; Zhu, H.; Liu, X.; Jiao, L. Adaptive Multiscale Deep Fusion Residual Network for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8506–8521.
- [17] Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015; pp. 234–241.
- [18] Wang, R.J.; Li, X.; Ling, C.X. Pelee: A Real-Time Object Detection System on Mobile Devices. arXiv 2018, arXiv:1804.06882.
- [19] Sainath, T.N.; Mohamed, A.-R.; Kingsbury, B.; Ramabhadran, B. Deep convolutional neural networks for LVCSR. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 8614–8618.
- [20] Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H.-C. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619.
- [21] Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral Image Classification Using Deep Pixel-Pair Features. IEEE Trans. Geosci. Remote Sens. 2016, 55, 844–853.
- [22] Fang, L.; Liu, Z.; Song, W. Deep Hashing Neural Networks for Hyperspectral Image Feature Extraction. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1412–1416.
- [23] He, N.; Paoletti, M.E.; Haut, J.M.; Fang, L.; Li, S.; Plaza, A.; Plaza, J. Feature Extraction With Multiscale Covariance Maps for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 755–769.
- [24] Chen, Y.; Li, C.; Ghamisi, P.; Jia, X.; Gu, Y. Deep Fusion of Remote Sensing Data for Accurate Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1253–1257.
- [25] Zhang, M.; Li, W.; Du, Q. Diverse Region-Based CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2018, 27, 2623–2634.
- [26] Zhu, J.; Fang, L.; Ghamisi, P. Deformable Convolutional Neural Networks for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1254–1258.
- [27] Cao, X.; Zhou, F.; Xu, L.; Meng, D.; Xu, Z.; Paisley, J. Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network. IEEE Trans. Image Process. 2018, 27, 2354–2367.
- [28] Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral Image Classification With Deep Feature Fusion Network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184.
- [29] Liu, B.; Yu, X.; Zhang, P.; Yu, A.; Fu, Q.; Wei, X. Supervised Deep Feature Extraction for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1909–1921.
- [30] Gao, H.; Yang, Y.; Li, C.; Gao, L.; Zhang, B. Multiscale Residual Network With Mixed Depthwise Convolution for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3396–3408.
- [31] Wan, S.; Gong, C.; Zhong, P.; Du, B.; Zhang, L.; Yang, J. Multiscale Dynamic Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3162–3177.
- [32] Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582.
- [33] Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- [34] Liu, B.; Yu, X.; Zhang, P.; Tan, X.; Wang, R.; Zhi, L. Spectral–spatial classification of hyperspectral image using three-dimensional convolution network. J. Appl. Remote Sens. 2018, 12, 016005.
- [35] Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67.
- [36] Feng, J.; Chen, J.; Liu, L.; Cao, X.; Zhang, X.; Jiao, L.; Yu, T. CNN-Based Multilayer Spatial–Spectral Feature Fusion and Sample Augmentation With Local and Nonlocal Constraints for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1299–1313.
- [37] Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W. Learning and Transferring Deep Joint Spectral–Spatial Features for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742.
- [38] Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
- [39] Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281.
- [40] Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
- [41] Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 3141–3149.
- [42] He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- [43] Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167.
- [44] He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
- [45] Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- [46] Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 807–814.
- [47] Misra, D. Mish: A Self Regularized Non-Monotonic Neural Activation Function. arXiv 2019, arXiv:1908.08681.
- [48] Blanzieri, E.; Melgani, F. Nearest Neighbor Classification of Remote Sensing Images With the Maximal Margin Principle. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1804–1811.
- [49] Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
- [50] Lee, H.; Kwon, H. Going Deeper With Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855.
- [51] Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068.
- [52] Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification. Remote Sens. 2019, 11, 1307.
- [53] Cui, B.; Dong, X.-M.; Zhan, Q.; Peng, J.; Sun, W. LiteDepthwiseNet: A Lightweight Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 1–15.
- [54] Bell, S.; Zitnick, C.L.; Bala, K.; Girshick, R. Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2874–2883.
- [55] Kong, T.; Yao, A.; Chen, Y.; Sun, F. HyperNet: Towards Accurate Region Proposal Generation and Joint Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 845–853.
- [56] Liu, C.; Wechsler, H. A shape- and texture-based enhanced Fisher classifier for face recognition. IEEE Trans. Image Process. 2001, 10, 598–608.
Layer | Kernel Size, Output Channels |
---|---|
conv2d | 3 × 3, 128 |
conv2d (offset) | 3 × 3, 18 |
deform conv2d | 3 × 3, 128 |
conv2d | 3 × 3, 260 |
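The 18 output channels of the offset branch are not arbitrary: a deformable 3 × 3 kernel needs one (Δy, Δx) pair per sampling tap, so 2 × 3 × 3 = 18. A small sanity check (the helper name and the `deform_groups` parameter are illustrative, following the common deformable-convolution convention):

```python
def offset_channels(kernel_h, kernel_w, deform_groups=1):
    """Channels the offset-predicting conv must output: one (dy, dx)
    pair per kernel tap, per deformable group."""
    return 2 * kernel_h * kernel_w * deform_groups

# The 3x3 deformable conv in the DCR block needs an 18-channel offset map.
assert offset_channels(3, 3) == 18
```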
No | Class Name | Training Samples | Test Samples |
---|---|---|---|
1 | Alfalfa | 3 | 43 |
2 | Corn–notill | 42 | 1386 |
3 | Corn–mintill | 24 | 806 |
4 | Corn | 7 | 230 |
5 | Grass–pasture | 14 | 469 |
6 | Grass–trees | 21 | 709 |
7 | Grass–pasture–mowed | 3 | 25 |
8 | Hay–windrowed | 14 | 464 |
9 | Oats | 3 | 17 |
10 | Soybean–notill | 29 | 943 |
11 | Soybean–mintill | 73 | 2382 |
12 | Soybean–clean | 17 | 576 |
13 | Wheat | 6 | 199 |
14 | Woods | 37 | 1228 |
15 | Buildings–grass–trees–drives | 11 | 375 |
16 | Stone–steel–towers | 3 | 90 |
Total | | 307 | 9942 |
No | Class Name | Training Samples | Test Samples |
---|---|---|---|
1 | Asphalt | 33 | 6598 |
2 | Meadows | 93 | 18,556 |
3 | Gravel | 10 | 2089 |
4 | Trees | 15 | 3049 |
5 | Painted metal sheets | 6 | 1339 |
6 | Bare Soil | 25 | 5004 |
7 | Bitumen | 6 | 1324 |
8 | Self-Blocking Bricks | 18 | 3664 |
9 | Shadows | 4 | 943 |
Total | | 210 | 42,566 |
No | Class Name | Training Samples | Test Samples |
---|---|---|---|
1 | Scrub | 38 | 723 |
2 | Willow swamp | 12 | 231 |
3 | CP hammock | 12 | 244 |
4 | Slash pine | 12 | 240 |
5 | Oak/broadleaf | 8 | 153 |
6 | Hardwood | 11 | 218 |
7 | Swamp | 5 | 100 |
8 | Graminoid marsh | 21 | 410 |
9 | Spartina marsh | 26 | 494 |
10 | Cattail marsh | 20 | 384 |
11 | Salt marsh | 20 | 399 |
12 | Mud flats | 25 | 478 |
13 | Water | 46 | 881 |
Total | | 256 | 4955 |
No | Class Name | Training Samples | Test Samples |
---|---|---|---|
1 | Brocoli–green–weeds–1 | 10 | 1999 |
2 | Brocoli–green–weeds–2 | 18 | 3708 |
3 | Fallow | 9 | 1967 |
4 | Fallow–rough–plow | 6 | 1388 |
5 | Fallow–smooth | 13 | 2665 |
6 | Stubble | 19 | 3940 |
7 | Celery | 17 | 3562 |
8 | Grapes–untrained | 56 | 11,215 |
9 | Soil–vinyard–develop | 31 | 6172 |
10 | Corn–senesced–green–weeds | 16 | 3262 |
11 | Lettuce–romaine–4wk | 5 | 1063 |
12 | Lettuce–romaine–5wk | 9 | 1833 |
13 | Lettuce–romaine–6wk | 4 | 912 |
14 | Lettuce–romaine–7wk | 5 | 1065 |
15 | Vinyard–untrained | 36 | 7232 |
16 | Vinyard–vertical–trellis | 9 | 1798 |
Total | | 263 | 53,886 |
Class | KNN [48] | SVM-RBF [49] | CDCNN [50] | SSRN [38] | FDSSC [51] | DHCNet [26] | DBMA [52] | HybridSN [39] | DBDA [32] | LiteDepthwiseNet [53] | Proposed |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 45.95 | 62.24 | 11.54 | 75.29 | 94.75 | 95.03 | 83.17 | 76.47 | 94.92 | 81.41 | 97.82 |
2 | 59.49 | 76.13 | 54.60 | 92.42 | 91.73 | 91.97 | 90.96 | 76.41 | 93.75 | 93.96 | 96.03 |
3 | 47.89 | 68.59 | 42.19 | 90.56 | 94.40 | 95.31 | 92.80 | 91.32 | 95.09 | 94.55 | 96.39 |
4 | 42.63 | 55.18 | 37.62 | 91.57 | 95.37 | 94.82 | 89.14 | 75.00 | 93.15 | 97.21 | 96.00 |
5 | 85.27 | 88.97 | 92.41 | 99.18 | 98.87 | 98.69 | 93.23 | 84.22 | 98.72 | 96.81 | 99.51 |
6 | 85.96 | 89.64 | 80.73 | 97.57 | 95.70 | 98.54 | 96.66 | 97.96 | 97.33 | 98.08 | 99.09 |
7 | 21.74 | 71.28 | 38.04 | 81.26 | 69.06 | 69.88 | 49.34 | 80.77 | 64.57 | 85.24 | 71.13 |
8 | 79.65 | 94.86 | 84.78 | 97.60 | 100.0 | 100.0 | 98.66 | 93.61 | 100.0 | 96.43 | 100.0 |
9 | 12.50 | 70.03 | 42.49 | 90.63 | 66.70 | 86.27 | 51.52 | 71.43 | 86.17 | 91.43 | 96.90 |
10 | 62.72 | 66.97 | 49.61 | 90.33 | 88.18 | 92.54 | 87.52 | 91.25 | 92.18 | 93.29 | 93.11 |
11 | 66.24 | 74.23 | 63.68 | 93.88 | 98.18 | 97.78 | 89.00 | 88.91 | 97.64 | 97.75 | 97.14 |
12 | 43.11 | 65.67 | 31.97 | 89.55 | 93.45 | 90.93 | 77.38 | 77.73 | 91.91 | 91.19 | 93.58 |
13 | 84.30 | 95.46 | 83.64 | 98.63 | 97.06 | 99.30 | 97.73 | 96.26 | 98.89 | 99.41 | 99.69 |
14 | 90.16 | 97.39 | 78.92 | 95.31 | 96.95 | 95.19 | 94.99 | 91.29 | 97.07 | 97.30 | 97.05 |
15 | 49.84 | 67.90 | 71.90 | 89.85 | 93.20 | 94.00 | 83.67 | 89.21 | 93.37 | 89.93 | 95.13 |
16 | 83.23 | 92.58 | 93.87 | 94.55 | 94.04 | 96.94 | 90.68 | 77.65 | 97.27 | 98.85 | 97.63 |
OA (%) | 55.61 | 77.58 | 62.20 | 93.32 | 94.79 | 95.19 | 89.78 | 87.46 | 95.32 | 95.59 | 96.36 |
AA (%) | 51.04 | 77.32 | 58.00 | 91.76 | 91.72 | 93.57 | 85.40 | 84.97 | 93.25 | 93.92 | 95.39 |
Kappa | 0.4977 | 0.7206 | 0.5612 | 0.9237 | 0.9407 | 0.9452 | 0.8833 | 0.8564 | 0.9461 | 0.9482 | 0.9585 |
Test time (s) | 6.4 | 4.7 | 6.5 | 31.7 | 59.2 | 40.6 | 32.0 | 25.8 | 43.6 | 75.0 | 47.1 |
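The OA, AA, and Kappa rows in these tables follow the standard definitions computed from a confusion matrix. A minimal NumPy sketch (not tied to the authors' evaluation code):

```python
import numpy as np

def oa_aa_kappa(cm):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa
    from a confusion matrix cm[true, pred]."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                          # fraction of correct pixels
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # mean per-class accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                   # agreement beyond chance
    return oa, aa, kappa
```

AA weights every class equally, which is why it drops sharply when small classes (e.g. Oats in Indian Pines) are misclassified even though OA stays high.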
Class | KNN [48] | SVM-RBF [49] | CDCNN [50] | SSRN [38] | FDSSC [51] | DHCNet [26] | DBMA [52] | HybridSN [39] | DBDA [32] | LiteDepthwiseNet [53] | Proposed |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 67.61 | 72.53 | 85.49 | 96.24 | 96.84 | 97.41 | 94.42 | 84.13 | 96.37 | 97.10 | 98.80 |
2 | 71.66 | 79.62 | 91.35 | 98.41 | 97.29 | 98.69 | 98.57 | 96.26 | 99.04 | 98.98 | 100.0 |
3 | 40.00 | 72.85 | 57.34 | 87.23 | 90.33 | 93.27 | 95.69 | 75.16 | 96.57 | 94.74 | 94.46 |
4 | 50.74 | 75.36 | 97.87 | 99.11 | 98.09 | 98.56 | 96.22 | 95.92 | 98.82 | 98.47 | 99.16 |
5 | 85.93 | 62.67 | 96.09 | 99.86 | 97.41 | 98.72 | 99.85 | 95.22 | 99.68 | 99.64 | 100.0 |
6 | 63.60 | 75.99 | 85.58 | 95.88 | 95.32 | 94.15 | 97.45 | 96.48 | 97.44 | 97.62 | 97.95 |
7 | 77.22 | 85.48 | 67.62 | 92.33 | 97.39 | 97.25 | 92.47 | 87.76 | 98.55 | 98.77 | 94.11 |
8 | 68.07 | 71.54 | 72.36 | 84.10 | 80.25 | 85.30 | 84.01 | 76.46 | 82.14 | 84.94 | 88.23 |
9 | 58.94 | 89.75 | 95.04 | 99.48 | 100.0 | 98.04 | 94.22 | 85.76 | 98.34 | 98.37 | 100.0 |
OA (%) | 68.21 | 81.14 | 86.89 | 95.66 | 94.72 | 96.29 | 95.72 | 92.83 | 96.47 | 96.60 | 97.43 |
AA (%) | 65.97 | 76.19 | 83.19 | 94.74 | 94.20 | 95.71 | 94.76 | 89.13 | 96.33 | 96.74 | 96.96 |
Kappa | 0.6197 | 0.7343 | 0.8236 | 0.9424 | 0.9268 | 0.9496 | 0.9431 | 0.8803 | 0.9531 | 0.9615 | 0.9659 |
Test time (s) | 41.2 | 24.3 | 28.3 | 57.7 | 210.2 | 53.6 | 86.3 | 55.4 | 89.7 | 170.0 | 156.1 |
Class | KNN [48] | SVM-RBF [49] | CDCNN [50] | SSRN [38] | FDSSC [51] | DHCNet [26] | DBMA [52] | HybridSN [39] | DBDA [32] | LiteDepthwiseNet [53] | Proposed |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 90.63 | 97.83 | 95.16 | 99.40 | 99.29 | 98.47 | 99.83 | 33.78 | 99.86 | 99.80 | 100.0 |
2 | 83.32 | 85.33 | 73.35 | 96.85 | 96.53 | 96.21 | 96.33 | 52.76 | 98.29 | 95.92 | 99.00 |
3 | 86.72 | 76.51 | 42.63 | 92.66 | 88.09 | 96.34 | 88.87 | 51.61 | 97.44 | 88.95 | 97.56 |
4 | 42.39 | 67.83 | 35.31 | 84.48 | 89.77 | 92.33 | 79.03 | 46.31 | 86.45 | 88.43 | 94.12 |
5 | 50.25 | 40.57 | 12.92 | 75.10 | 80.10 | 83.28 | 72.34 | 77.21 | 87.66 | 89.00 | 81.91 |
6 | 62.76 | 79.09 | 67.33 | 99.54 | 100.0 | 94.09 | 97.46 | 28.57 | 91.94 | 94.24 | 100.0 |
7 | 44.96 | 35.78 | 26.30 | 95.00 | 92.52 | 93.68 | 84.85 | 25.99 | 87.09 | 95.67 | 95.73 |
8 | 81.43 | 90.06 | 77.03 | 99.44 | 98.85 | 97.59 | 97.18 | 53.13 | 99.69 | 98.71 | 99.95 |
9 | 72.58 | 70.30 | 77.41 | 99.62 | 99.90 | 99.83 | 96.15 | 28.15 | 99.79 | 99.71 | 99.92 |
10 | 90.31 | 89.44 | 85.81 | 100.0 | 99.67 | 99.79 | 95.70 | 76.27 | 99.63 | 99.84 | 100.0 |
11 | 95.92 | 98.56 | 99.01 | 98.73 | 98.24 | 98.16 | 99.29 | 67.75 | 98.98 | 98.83 | 98.43 |
12 | 77.12 | 92.34 | 93.89 | 99.05 | 99.57 | 99.63 | 97.82 | 66.15 | 99.30 | 99.73 | 99.34 |
13 | 88.26 | 97.90 | 97.66 | 100.0 | 97.27 | 100.0 | 100.0 | 86.91 | 100.0 | 99.81 | 100.0 |
OA (%) | 79.98 | 84.97 | 80.91 | 96.06 | 96.58 | 97.41 | 95.07 | 63.72 | 97.59 | 97.41 | 98.41 |
AA (%) | 74.35 | 78.58 | 67.99 | 95.37 | 95.36 | 96.10 | 92.68 | 56.63 | 95.85 | 96.04 | 97.38 |
Kappa | 0.7739 | 0.8321 | 0.7871 | 0.9563 | 0.9631 | 0.9739 | 0.9451 | 0.5889 | 0.9732 | 0.9712 | 0.9823 |
Test time (s) | 2.4 | 1.1 | 3.1 | 9.4 | 11.8 | 10.2 | 13.9 | 12.8 | 13.7 | 30.3 | 21.2 |
Class | KNN [48] | SVM-RBF [49] | CDCNN [50] | SSRN [38] | FDSSC [51] | DHCNet [26] | DBMA [52] | HybridSN [39] | DBDA [32] | LiteDepthwiseNet [53] | Proposed |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 89.44 | 99.87 | 36.60 | 90.69 | 99.95 | 94.62 | 93.48 | 98.75 | 100.0 | 99.95 | 100.0 |
2 | 69.72 | 97.62 | 71.96 | 99.78 | 99.84 | 99.81 | 99.53 | 99.59 | 99.78 | 99.65 | 99.93 |
3 | 84.01 | 93.09 | 73.72 | 90.00 | 97.57 | 90.27 | 96.12 | 97.56 | 96.55 | 97.20 | 98.33 |
4 | 86.78 | 96.43 | 91.38 | 96.52 | 94.94 | 93.64 | 94.09 | 91.80 | 94.39 | 95.42 | 96.99 |
5 | 85.27 | 94.56 | 93.39 | 99.40 | 99.67 | 99.53 | 96.31 | 97.35 | 96.50 | 95.49 | 97.68 |
6 | 98.63 | 99.50 | 98.56 | 99.89 | 99.66 | 99.86 | 99.79 | 98.55 | 99.98 | 99.97 | 100.0 |
7 | 78.11 | 95.58 | 93.58 | 98.08 | 99.83 | 99.31 | 96.44 | 98.94 | 98.41 | 98.10 | 100.0 |
8 | 67.00 | 70.86 | 71.42 | 91.62 | 97.26 | 95.28 | 86.66 | 93.44 | 89.69 | 92.58 | 92.72 |
9 | 95.65 | 98.41 | 94.99 | 99.58 | 99.70 | 99.74 | 99.68 | 99.75 | 96.89 | 99.77 | 99.86 |
10 | 81.97 | 90.27 | 80.14 | 96.34 | 98.60 | 98.07 | 93.83 | 98.16 | 94.99 | 94.18 | 98.64 |
11 | 64.32 | 79.41 | 81.78 | 85.48 | 96.21 | 91.93 | 91.85 | 92.53 | 96.13 | 96.14 | 96.91 |
12 | 88.96 | 89.06 | 83.76 | 96.75 | 98.58 | 96.49 | 99.80 | 97.18 | 98.19 | 96.81 | 97.43 |
13 | 93.76 | 93.64 | 93.47 | 96.59 | 99.69 | 94.27 | 96.92 | 86.59 | 99.88 | 98.31 | 96.78 |
14 | 94.42 | 92.67 | 94.15 | 98.28 | 98.04 | 98.40 | 97.19 | 96.82 | 94.41 | 97.94 | 99.27 |
15 | 75.46 | 73.05 | 59.53 | 74.08 | 81.03 | 85.41 | 87.47 | 90.62 | 89.75 | 91.00 | 91.49 |
16 | 96.04 | 99.17 | 98.51 | 99.88 | 99.99 | 100.0 | 99.43 | 97.09 | 100.0 | 100.0 | 100.0 |
OA (%) | 82.17 | 86.45 | 80.51 | 90.11 | 94.60 | 94.45 | 92.62 | 95.05 | 95.81 | 96.22 | 96.53 |
AA (%) | 84.34 | 91.45 | 82.31 | 94.56 | 97.53 | 96.04 | 95.54 | 95.92 | 96.59 | 97.03 | 97.87 |
Kappa | 0.8069 | 0.8490 | 0.7815 | 0.8906 | 0.9403 | 0.9463 | 0.9177 | 0.9426 | 0.9521 | 0.9608 | 0.9614 |
Test time (s) | 47.75 | 55.5 | 35.4 | 120.9 | 207.5 | 168.3 | 181.1 | 140.8 | 243.2 | 440.0 | 197.5 |
Strategy | IN | UP | KSC | SV |
---|---|---|---|---|
Without fusion | 93.73 | 95.03 | 94.25 | 93.69 |
With fusion | 96.36 | 97.43 | 98.41 | 96.53 |
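The benefit of the attention-fusion strategy can be read off this table directly as overall-accuracy gains in percentage points (the "with fusion" values match the OA rows of the per-dataset tables above):

```python
# OA (%) without and with the attention-fusion strategy, from the table above
without = {"IN": 93.73, "UP": 95.03, "KSC": 94.25, "SV": 93.69}
with_fusion = {"IN": 96.36, "UP": 97.43, "KSC": 98.41, "SV": 96.53}

# Gain in percentage points per dataset
gains = {k: round(with_fusion[k] - without[k], 2) for k in without}
```

The largest gain is on KSC, the smallest on UP, i.e. the fusion mechanism helps most where the baseline is weakest.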
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, T.; Shi, C.; Liao, D.; Wang, L. A Spectral Spatial Attention Fusion with Deformable Convolutional Residual Network for Hyperspectral Image Classification. Remote Sens. 2021, 13, 3590. https://doi.org/10.3390/rs13183590