DCTransformer: A Channel Attention Combined Discrete Cosine Transform to Extract Spatial–Spectral Feature for Hyperspectral Image Classification
Abstract
1. Introduction
- We present a specialized two-branch network architecture named the DCTransformer. Within this network, spectral and spatial features are extracted flexibly and effectively by their respective branches. Owing to this advantage, the network facilitates a subsequent dynamic focus module (DFM) that adaptively learns spatial and spectral features with varying emphases to address different land cover characteristics. These characteristics may prioritize either spatial visual features or the categorical information conveyed by spectral features.
- We introduce the DFE module, which effectively extracts spatial features while minimally increasing the total number of parameters.
- We introduce the BFE module, which combines a channel attention mechanism with the DCT, to effectively extract base spectral features that include global frequency information.
- We propose a DFM that fuses the spectral and spatial features adaptively.
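As background for the DCT-based channel attention idea, the key link is that the zero-frequency coefficient of the type-II DCT equals the channel sum, so global average pooling is a special case of DCT-based pooling (the observation popularized by FcaNet). A minimal sketch in pure Python (the signal values are illustrative, not from the paper):

```python
import math

def dct_ii(x):
    """Type-II DCT of a 1-D signal (unnormalized)."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi / n * (i + 0.5) * k) for i in range(n))
            for k in range(n)]

# The k = 0 coefficient is the sum of the signal, i.e. global average
# pooling up to a constant factor; higher coefficients capture the
# frequency content that GAP alone discards.
signal = [0.2, 0.4, 0.6, 0.8]
coeffs = dct_ii(signal)
gap = sum(signal) / len(signal)
print(abs(coeffs[0] - gap * len(signal)) < 1e-12)  # True
```

A DCT-based channel attention can therefore weight channels using several frequency components instead of only the GAP statistic.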
2. Methods
2.1. Related Work
2.1.1. Backbone Based on CNNs
2.1.2. Discrete Cosine Transform
2.1.3. Attention Mechanism
2.2. Proposed Method
- The overall architecture;
- The DFE module;
- The BFE module;
- The DFM;
- The dataset and evaluation metrics.
2.2.1. Overall Network
2.2.2. Detail Feature Extractor
2.2.3. Base Feature Extractor
2.2.4. Dynamic Feature Fusion Mechanism
2.3. Dataset and Evaluation Metrics
2.3.1. Dataset Description
2.3.2. Evaluation Metrics
- Overall accuracy (OA): The proportion of all test samples classified correctly, giving a comprehensive assessment of the classification;
- Average accuracy (AA): The mean of the per-class classification accuracies, giving a balanced view of performance across all land cover categories;
- Kappa coefficient (κ): The agreement between the predicted classifications and the ground truth, corrected for the agreement that could occur by chance.
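All three metrics can be computed from a confusion matrix; a minimal pure-Python sketch (the 2-class matrix below is a made-up example, not from the experiments):

```python
def classification_metrics(cm):
    """OA, AA, and kappa from a confusion matrix cm, where cm[i][j]
    counts samples of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    oa = sum(cm[i][i] for i in range(n)) / total
    per_class = [cm[i][i] / sum(cm[i]) for i in range(n)]  # per-class recall
    aa = sum(per_class) / n
    # Chance agreement: sum over classes of (row marginal * column marginal).
    pe = sum(sum(cm[i]) * sum(cm[j][i] for j in range(n))
             for i in range(n)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

oa, aa, kappa = classification_metrics([[9, 1], [2, 8]])
print(oa, aa, kappa)  # approximately 0.85, 0.85, 0.70
```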
3. Results
- Experiment setting;
- Comparative experiments;
- Ablation experiments.
3.1. Experiment Setting
3.2. Comparative Experiments
Class No. | DeepHyperX | ViT | SpectralFormer | SSFTT | MorphFormer | HybridSN | DBMA | DCTransformer |
---|---|---|---|---|---|---|---|---|
1 | 88.98 | 83.00 | 82.15 | 82.53 | 81.67 | 80.15 | 81.86 | 82.43 |
2 | 94.27 | 98.50 | 98.78 | 100.00 | 100.00 | 100.00 | 95.77 | 100.00 |
3 | 76.43 | 85.74 | 94.65 | 96.24 | 97.82 | 94.85 | 99.20 | 99.00 |
4 | 99.91 | 99.53 | 91.73 | 99.72 | 96.69 | 96.68 | 98.29 | 95.92 |
5 | 80.69 | 66.57 | 87.32 | 80.88 | 95.87 | 99.24 | 99.71 | 90.78 |
6 | 99.34 | 95.93 | 96.59 | 100.00 | 97.63 | 100.00 | 100.00 | 99.72 |
7 | 95.80 | 72.03 | 90.91 | 92.31 | 95.80 | 93.56 | 95.42 | 95.80 |
8 | 88.07 | 61.05 | 68.07 | 91.23 | 81.75 | 88.03 | 79.58 | 91.23 |
9 | 87.41 | 90.21 | 90.30 | 95.90 | 93.28 | 90.65 | 88.66 | 95.06 |
10 | 70.30 | 83.11 | 79.89 | 95.92 | 97.53 | 88.52 | 89.67 | 98.58 |
11 | 77.30 | 63.72 | 73.69 | 74.45 | 81.95 | 90.03 | 96.29 | 91.93 |
12 | 83.38 | 68.56 | 76.02 | 84.51 | 89.71 | 92.02 | 92.12 | 90.75 |
13 | 79.34 | 89.38 | 51.54 | 80.79 | 93.63 | 82.80 | 88.77 | 94.40 |
14 | 83.81 | 93.12 | 93.52 | 100.00 | 99.60 | 95.54 | 98.78 | 100.00 |
15 | 73.57 | 85.62 | 77.17 | 82.24 | 87.53 | 100.00 | 99.36 | 97.67 |
OA | 85.35 | 83.17 | 83.78 | 89.80 | 92.74 | 92.27 | 92.20 | 94.40 |
AA | 85.24 | 82.74 | 84.01 | 90.45 | 92.71 | 92.80 | 93.16 | 94.89 |
κ | 84.09 | 81.74 | 82.39 | 88.93 | 92.11 | 91.43 | 91.53 | 93.92 |
F1 score | 85.37 | 83.21 | 83.72 | 89.69 | 92.64 | 92.08 | 92.19 | 94.41 |
precision | 85.86 | 84.49 | 84.89 | 90.25 | 92.97 | 92.44 | 92.76 | 94.83 |
Class No. | DeepHyperX | ViT | SpectralFormer | SSFTT | MorphFormer | HybridSN | DBMA | DCTransformer |
---|---|---|---|---|---|---|---|---|
1 | 67.27 | 52.10 | 47.18 | 91.26 | 92.56 | 94.07 | 94.71 | 94.72 |
2 | 78.44 | 34.82 | 70.66 | 99.62 | 98.31 | 98.08 | 98.08 | 98.60 |
3 | 83.15 | 75.54 | 77.17 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
4 | 88.59 | 90.16 | 80.76 | 97.54 | 96.86 | 98.43 | 97.09 | 95.75 |
5 | 90.10 | 72.45 | 76.47 | 99.57 | 99.43 | 99.28 | 99.56 | 99.43 |
6 | 97.95 | 94.99 | 92.94 | 99.21 | 99.54 | 100.00 | 99.31 | 99.54 |
7 | 66.99 | 65.14 | 69.39 | 94.01 | 87.91 | 92.81 | 90.30 | 94.44 |
8 | 61.29 | 57.07 | 76.26 | 97.48 | 94.09 | 96.73 | 96.15 | 97.48 |
9 | 57.45 | 36.88 | 66.49 | 96.10 | 93.97 | 90.60 | 91.31 | 86.17 |
10 | 98.77 | 92.59 | 98.15 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
11 | 92.60 | 78.94 | 91.40 | 97.03 | 98.39 | 98.63 | 97.58 | 98.64 |
12 | 94.24 | 74.24 | 78.79 | 100.00 | 100.00 | 99.39 | 100.00 | 100.00 |
13 | 93.33 | 97.78 | 93.33 | 97.28 | 100.00 | 100.00 | 99.13 | 100.00 |
14 | 89.74 | 61.54 | 92.31 | 100.00 | 89.74 | 94.87 | 96.43 | 100.00 |
15 | 99.98 | 100.00 | 100.00 | 97.11 | 100.00 | 100.00 | 100.00 | 100.00 |
16 | 100.00 | 100.00 | 100.00 | 89.16 | 100.00 | 100.00 | 99.97 | 100.00 |
OA | 75.38 | 63.13 | 74.00 | 96.69 | 95.37 | 95.61 | 96.15 | 96.73 |
AA | 85.00 | 74.02 | 81.96 | 95.10 | 96.93 | 96.68 | 97.12 | 97.80 |
κ | 72.02 | 58.46 | 70.21 | 93.36 | 94.69 | 95.10 | 95.63 | 96.25 |
F1 score | 75.51 | 63.82 | 73.98 | 96.76 | 96.06 | 95.51 | 95.75 | 96.76 |
precision | 76.71 | 66.73 | 75.23 | 96.85 | 96.25 | 95.58 | 95.74 | 96.97 |
Class No. | DeepHyperX | ViT | SpectralFormer | SSFTT | MorphFormer | HybridSN | DBMA | DCTransformer |
---|---|---|---|---|---|---|---|---|
1 | 95.62 | 82.94 | 92.37 | 95.72 | 98.23 | 97.02 | 95.05 | 99.44 |
2 | 72.28 | 95.43 | 89.96 | 86.25 | 88.12 | 89.52 | 90.35 | 92.76 |
3 | 66.31 | 59.36 | 42.25 | 70.05 | 86.36 | 97.86 | 95.98 | 92.78 |
4 | 99.40 | 83.39 | 94.25 | 99.89 | 98.37 | 97.25 | 98.71 | 99.13 |
5 | 99.33 | 96.77 | 97.38 | 99.56 | 99.91 | 99.91 | 99.20 | 99.94 |
6 | 81.42 | 54.23 | 57.80 | 89.09 | 89.02 | 85.02 | 81.55 | 86.37 |
OA | 94.02 | 85.83 | 90.25 | 96.43 | 96.80 | 96.17 | 95.80 | 97.45 |
AA | 85.72 | 78.69 | 79.00 | 90.09 | 93.33 | 94.43 | 93.47 | 95.07 |
κ | 91.97 | 81.17 | 86.98 | 95.20 | 95.72 | 94.87 | 94.37 | 96.58 |
F1 score | 93.96 | 85.91 | 90.19 | 96.39 | 96.50 | 96.19 | 95.84 | 97.35 |
precision | 93.97 | 88.07 | 91.16 | 96.40 | 96.51 | 96.27 | 96.02 | 97.38 |
Class No. | DeepHyperX | ViT | SpectralFormer | SSFTT | MorphFormer | HybridSN | DBMA | DCTransformer |
---|---|---|---|---|---|---|---|---|
1 | 96.27 | 94.98 | 97.06 | 97.24 | 97.96 | 97.45 | 97.46 | 97.91 |
2 | 78.53 | 72.26 | 71.45 | 93.64 | 89.97 | 89.57 | 85.87 | 92.33 |
3 | 82.55 | 77.79 | 78.33 | 89.98 | 90.72 | 90.65 | 90.34 | 91.85 |
4 | 84.11 | 73.49 | 82.77 | 96.20 | 92.16 | 91.00 | 94.52 | 94.70 |
5 | 91.36 | 89.53 | 91.00 | 95.14 | 95.40 | 93.93 | 92.08 | 94.19 |
6 | 72.23 | 36.12 | 65.91 | 83.07 | 84.65 | 83.29 | 86.45 | 90.52 |
7 | 97.03 | 68.32 | 83.40 | 88.21 | 90.43 | 86.61 | 91.51 | 92.88 |
8 | 91.84 | 79.47 | 91.50 | 97.50 | 96.66 | 97.16 | 96.86 | 97.12 |
9 | 55.24 | 42.86 | 62.84 | 48.86 | 61.93 | 67.32 | 63.37 | 64.23 |
10 | 20.69 | 0.00 | 0.00 | 5.17 | 20.69 | 13.79 | 24.13 | 24.71 |
11 | 65.62 | 13.67 | 42.58 | 73.05 | 73.44 | 72.65 | 76.95 | 73.05 |
OA | 89.50 | 84.06 | 76.35 | 93.57 | 93.98 | 93.50 | 93.22 | 94.46 |
AA | 75.05 | 58.95 | 76.89 | 78.91 | 81.27 | 80.31 | 81.78 | 82.86 |
κ | 86.09 | 78.82 | 91.61 | 91.50 | 92.03 | 91.39 | 91.01 | 92.65 |
F1 score | 89.35 | 83.60 | 88.07 | 93.32 | 93.87 | 93.43 | 93.16 | 94.36 |
precision | 89.35 | 83.66 | 87.93 | 93.31 | 93.81 | 93.39 | 93.17 | 94.31 |
Method | Params (MB) | GFLOPs |
---|---|---|
DeepHyperX | 0.20 | 2.65 |
ViT | 0.08 | 0.41 |
SpectralFormer | 0.03 | 0.51 |
SSFTT | 0.28 | 0.98 |
MorphFormer | 0.22 | 0.72 |
HybridSN | 0.16 | 1.53 |
DBMA | 0.23 | 0.97 |
DCTransformer | 0.17 | 0.81 |
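For reference, the Params (MB) column follows directly from the raw parameter count: a standard k × k 2-D convolution has (k·k·C_in + 1)·C_out parameters (with bias), and float32 storage is 4 bytes per parameter. A small sketch (the layer shape below is illustrative, not a DCTransformer layer):

```python
def conv2d_params(c_in, c_out, k, bias=True):
    """Parameter count of a k x k 2-D convolution layer."""
    return (k * k * c_in + int(bias)) * c_out

def params_to_mb(n_params, bytes_per_param=4):
    """Convert a float32 parameter count to megabytes."""
    return n_params * bytes_per_param / 2 ** 20

n = conv2d_params(3, 64, 3)  # 3-channel input, 64 filters, 3x3 kernel
print(n, round(params_to_mb(n), 4))  # 1792 parameters, ~0.0068 MB
```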
3.3. Ablation Experiments
Window Size | OA (%) | AA (%) | Kappa (%) | GFLOPs |
---|---|---|---|---|
11 | 96.73 | 97.80 | 96.25 | 2.38 |
13 | 96.87 | 98.18 | 96.41 | 3.25 |
15 | 95.77 | 97.31 | 95.15 | 4.27 |
17 | 95.81 | 97.76 | 95.20 | 5.44 |
19 | 96.00 | 97.63 | 95.41 | 6.75 |
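The GFLOPs column in the ablation table grows roughly with the square of the window side, as expected when the dominant cost is processing a w × w input patch. A quick check on the tabulated values:

```python
# Window-size ablation values from the table above.
sizes = [11, 13, 15, 17, 19]
gflops = [2.38, 3.25, 4.27, 5.44, 6.75]

# If cost is ~ c * w^2, the ratio gflops / w^2 should be nearly constant.
ratios = [g / (w * w) for w, g in zip(sizes, gflops)]
print([round(r, 4) for r in ratios])  # all close to 0.019
```

This quadratic scaling is why the larger windows cost substantially more compute without improving OA past the 13 × 13 setting.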
4. Experiments and Discussion
4.1. Impact of the BFE and DFE Modules
4.2. Impact of Different Window Sizes on Image Patches
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
HSIs | Hyperspectral Images |
CNN | Convolutional Neural Network |
CAM | Channel Attention Mechanism |
DCT | Discrete Cosine Transform |
SVMs | Support Vector Machines |
ViT | Vision Transformer |
SA | Self-Attention |
DC | Dilated Convolution |
DSC | Depthwise Separable Convolution |
CLS | Class Token |
GAP | Global Average Pooling |
PU | Pavia University |
CASI | Compact Airborne Spectrographic Imager |
MPP | Meters Per Pixel |
GSD | Ground Sampling Distance |
IP | Indian Pines |
UH | University of Houston |
ROSIS | Reflective Optics System Imaging Spectrometer |
OA | Overall Accuracy |
AA | Average Accuracy |
κ | Kappa Coefficient |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Healthy Grass | 198 | 1053 |
2 | Stressed Grass | 190 | 1064 |
3 | Synthetic Grass | 192 | 505 |
4 | Tree | 188 | 1056 |
5 | Soil | 186 | 1056 |
6 | Water | 182 | 143 |
7 | Residential | 196 | 1072 |
8 | Commercial | 191 | 1053 |
9 | Road | 193 | 1059 |
10 | Highway | 191 | 1036 |
11 | Railway | 181 | 1054 |
12 | Parking Lot 1 | 192 | 1041 |
13 | Parking Lot 2 | 184 | 285 |
14 | Tennis Court | 181 | 247 |
15 | Running Track | 184 | 473 |
Total |  | 2832 | 12,197 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Corn Notill | 50 | 1384 |
2 | Corn Mintill | 50 | 784 |
3 | Corn | 50 | 184 |
4 | Grass Pasture | 50 | 447 |
5 | Grass Trees | 50 | 697 |
6 | Hay Windrowed | 50 | 439 |
7 | Soybean Notill | 50 | 918 |
8 | Soybean Mintill | 50 | 2418 |
9 | Soybean Clean | 50 | 564 |
10 | Wheat | 50 | 162 |
11 | Woods | 50 | 1244 |
12 | Buildings, Grass, Trees, Drives | 50 | 330 |
13 | Stone and Steel Towers | 50 | 45 |
14 | Alfalfa | 15 | 39 |
15 | Grass, Pasture, Towers | 15 | 11 |
16 | Oats | 15 | 5 |
Total |  | 695 | 9671 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Trees | 1162 | 22,084 |
2 | Grass Groundsurface | 344 | 6538 |
3 | Road Materials | 334 | 6353 |
4 | Buildings’ Shadow | 112 | 2121 |
5 | Sidewalk | 69 | 1316 |
6 | Cloth Panels | 13 | 256 |
7 | Grass—Pure | 214 | 4056 |
8 | Dirt and Sand | 91 | 1735 |
9 | Water | 23 | 443 |
10 | Buildings | 312 | 5928 |
11 | Yellow Curb | 9 | 174 |
Total |  | 2683 | 28,920 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Buildings | 125 | 2778 |
2 | Woods | 154 | 8969 |
3 | Roads | 122 | 3052 |
4 | Apples | 129 | 3905 |
5 | Ground | 105 | 374 |
6 | Vineyard | 184 | 10,317 |
Total |  | 819 | 29,395 |
Cases | OA (%) | AA (%) | Kappa (%) | Params (MB) |
---|---|---|---|---|
baseline | 93.83 | 80.39 | 91.83 | 0.12 |
w/o DFE | 94.26 | 81.75 | 92.40 | 0.165 |
w/o BFE | 94.04 | 80.54 | 92.10 | 0.13 |
DCTransformer | 94.46 | 82.86 | 92.65 | 0.17 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Dang, Y.; Zhang, X.; Zhao, H.; Liu, B. DCTransformer: A Channel Attention Combined Discrete Cosine Transform to Extract Spatial–Spectral Feature for Hyperspectral Image Classification. Appl. Sci. 2024, 14, 1701. https://doi.org/10.3390/app14051701