A Two-Staged Feature Extraction Method Based on Total Variation for Hyperspectral Images
Abstract
1. Introduction
- This work proposes an efficient two-staged FE method based on total variation for HSI. Building on different solutions of the anisotropic and isotropic TV models, it successively extracts multi-scale structure information and then smooths and enhances details, improving the discriminability of different land covers.
- The design involves no complex framework or redundant loops, which greatly reduces computational overhead. Compared with many state-of-the-art algorithms, our method delivers significantly better classification accuracy and computing time, and achieves the best results in most classes.
- We provide a detailed parameter analysis, giving a reasonable value for each parameter and explaining how the results change with it. No parameter re-tuning is required across datasets, and the results show that our method is robust and stable, strengthening its advantages in practical hyperspectral applications.
2. Proposed Method
2.1. ATVM
Algorithm 1: The ATVM Numerical Solution to Extract Structures at Different Scales
1: Input: raw image; regularization parameter; scale size parameter
2: Initialize: …; set … and … sufficiently small; …
3: While …
4:  …
5:  for …
6:   …
7:  end
8:  …
9:  …
10: End
11: Output: structure image.
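The mathematical expressions inside the algorithm box did not survive extraction, so the block below is only a minimal sketch of one standard numerical scheme for an anisotropic TV (ROF-type) model: a dual projected-gradient iteration on a single band. The function names, the fixed iteration count, and the step-size choice are illustrative assumptions, not necessarily the authors' exact ATVM scheme.

```python
import numpy as np

def _grad(u):
    """Forward differences with zero gradient at the right/bottom boundary."""
    ux = np.zeros_like(u)
    uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def _div(px, py):
    """Discrete divergence, the negative adjoint of _grad."""
    return np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)

def anisotropic_tv(f, lam=8.0, n_iter=100):
    """Dual projected-gradient solver for the anisotropic ROF model
        min_u  |u_x|_1 + |u_y|_1 + (lam / 2) * ||u - f||_2^2.
    Smaller lam removes more texture and keeps a coarser structure layer,
    which is how a scale-like parameter is emulated in this sketch."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    sigma = lam / 8.0                      # safe dual step size (Lipschitz bound 8 / lam)
    for _ in range(n_iter):
        u = f + _div(px, py) / lam         # primal image implied by the current dual variable
        gx, gy = _grad(u)
        # dual ascent followed by projection onto the box |p| <= 1 (anisotropic case)
        px = np.clip(px + sigma * gx, -1.0, 1.0)
        py = np.clip(py + sigma * gy, -1.0, 1.0)
    return f + _div(px, py) / lam
```

For a band scaled to [0, 1], calling anisotropic_tv with a few different lam values yields progressively coarser structure images, which is the role the structure output plays in the first stage.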
2.2. ITVM
Algorithm 2: The ITVM Solution Based on the Split Bregman Algorithm for Smoothing
1: Input: image; fidelity parameter; regularization parameter
2: Initialize: …, …, and stopping tolerance …
3: While …
4:  …
5:  …
6:  …
7:  …
8:  …
9:  …
10: End
11: Output: smoothed image.
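Since the concrete updates were likewise lost in extraction, the following is a minimal sketch of the classical Split Bregman iteration for isotropic TV denoising, using periodic boundaries so that the u-subproblem is solved exactly with an FFT. The parameter values, boundary handling, and fixed iteration count are illustrative assumptions rather than the paper's exact ITVM configuration.

```python
import numpy as np

def _dx(u):   return np.roll(u, -1, axis=1) - u     # forward difference, periodic
def _dy(u):   return np.roll(u, -1, axis=0) - u
def _dxT(p):  return np.roll(p, 1, axis=1) - p      # adjoint of _dx
def _dyT(p):  return np.roll(p, 1, axis=0) - p

def split_bregman_itv(f, mu=10.0, lam=5.0, n_iter=30):
    """Split Bregman iteration for the isotropic ROF model
        min_u  ||grad u||_2,1 + (mu / 2) * ||u - f||_2^2,
    where mu is the fidelity weight and lam the Bregman penalty weight."""
    M, N = f.shape
    # DFT symbol of mu*I - lam*Laplacian under periodic boundary conditions
    ky = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(M) / M)
    kx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(N) / N)
    denom = mu + lam * (ky[:, None] + kx[None, :])

    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    u = f.copy()
    for _ in range(n_iter):
        # u-subproblem: (mu*I - lam*Laplacian) u = mu*f + lam*divT(d - b), solved by FFT
        rhs = mu * f + lam * (_dxT(dx - bx) + _dyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # d-subproblem: isotropic (vector) shrinkage of grad(u) + b
        gx, gy = _dx(u) + bx, _dy(u) + by
        mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
        scale = np.maximum(mag - 1.0 / lam, 0.0) / mag
        dx, dy = scale * gx, scale * gy
        # Bregman variable update
        bx, by = gx - dx, gy - dy
    return u
```

Applied to each structure image from the first stage, this returns the smoothed counterpart that enters the fusion step.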
2.3. Overall Design
Algorithm 3: The Proposed Two-Staged FE Method
1: Input: raw image; regularization parameter; fidelity parameter; scale size parameter; average fused number; SVD number
First Stage:
2: …
3: … = …
Second Stage:
4: …
5: for …
6:  …
7: end
8: …
9: Output: featured block.
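Because the per-step formulas in Algorithm 3 are missing from the extracted text, the block below only sketches how the two solvers above could be chained in the order the algorithm box indicates: averaging fusion of adjacent bands, multi-scale structure extraction with the anisotropic model (first stage), then isotropic smoothing of each structure image followed by an SVD-based reduction (second stage). The band-grouping strategy, the particular scale values, and the truncated-SVD fusion are our assumptions, not the authors' exact operations.

```python
import numpy as np
# assumes anisotropic_tv(...) and split_bregman_itv(...) from the sketches above

def two_stage_features(hsi, n_groups=20, scales=(2.0, 8.0),
                       mu=10.0, lam=5.0, n_components=20):
    """Hedged sketch of a two-staged TV feature extractor.
    hsi: (H, W, B) hyperspectral cube with values scaled to [0, 1]."""
    H, W, B = hsi.shape

    # averaging fusion: merge adjacent bands into n_groups mean images
    groups = np.array_split(np.arange(B), n_groups)
    fused = np.stack([hsi[:, :, g].mean(axis=2) for g in groups], axis=2)

    # first stage: multi-scale structure extraction with the anisotropic model
    structures = [anisotropic_tv(fused[:, :, b], lam=s)
                  for s in scales for b in range(fused.shape[2])]

    # second stage: isotropic smoothing of every structure image
    smoothed = np.stack([split_bregman_itv(img, mu=mu, lam=lam) for img in structures], axis=2)

    # fuse the smoothed stack with a truncated SVD down to n_components features
    X = smoothed.reshape(H * W, -1)
    X = X - X.mean(axis=0, keepdims=True)
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    k = min(n_components, X.shape[1])
    return (U[:, :k] * S[:k]).reshape(H, W, k)
```

The resulting (H, W, k) feature block can then be classified per pixel, for example with the SVM baseline used in the experiments below.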
3. Experiments
3.1. Datasets
3.2. Experimental Setup
3.2.1. Comparison Methods
- SVM [48]. The original dataset is fed directly into a support vector machine (SVM) classifier as a baseline; a minimal per-pixel sketch is given after this list.
- Local Covariance Matrix Representation (LCMR) [49]. This is an FE method that uses local covariance matrices to characterize spatial–spectral information.
- Random patches network (RPNet) [50]. This is an efficient deep learning-based method that directly uses random patches taken from the image as convolution kernels and combines shallow and deep convolutional features.
- Multi-Scale Total Variation (MSTV) [51]. This is a noise-robust method which extracts multiscale information.
- Generalized tensor regression (GTR) [52]. This is a strengthened tensorial version of the ridge regression for multivariate labels which exploits the discrimination information of different modes.
- Double-Branch Dual-Attention mechanism network (DBDA) [35]. This is a deep learning network with two branches that apply a channel attention block and a spatial attention block, respectively, to capture rich spectral and spatial features.
- Fusion of Dual Spatial Information (FDSI) [29]. This is a framework using the fusion of dual spatial information, which includes pre-processing FE and post-processing spatial optimization.
- l0-l1HTV [31]. This is a hybrid total variation model that fully accounts for the local spatial–spectral structure and yields sharper edge preservation.
- SpectralFormer [53]. This is a backbone network based on the transformer, which learns spectrally local sequence information from neighboring bands, yielding group-wise spectral embeddings.
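As a concrete illustration of the SVM baseline at the top of this list, and of how a feature block is evaluated with only a few labeled pixels per class, here is a minimal per-pixel sketch using scikit-learn. The RBF kernel, the C value, and the class-wise sampling routine are illustrative assumptions rather than the exact setup of [48].

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def svm_baseline(features, labels, n_train_per_class=10, random_state=0):
    """Per-pixel SVM classification of an (H, W, D) feature cube.
    labels: (H, W) ground truth map, 0 = unlabeled, classes numbered from 1."""
    H, W, D = features.shape
    X = features.reshape(-1, D)
    y = labels.reshape(-1)
    rng = np.random.default_rng(random_state)

    # draw a small training set per class, mirroring the experimental protocol
    train_idx = []
    for c in np.unique(y[y > 0]):
        pool = np.flatnonzero(y == c)
        train_idx.append(rng.choice(pool, size=min(n_train_per_class, pool.size), replace=False))
    train_idx = np.concatenate(train_idx)

    scaler = StandardScaler().fit(X[train_idx])
    clf = SVC(kernel="rbf", C=100.0, gamma="scale")
    clf.fit(scaler.transform(X[train_idx]), y[train_idx])
    return clf.predict(scaler.transform(X)).reshape(H, W)
```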
3.2.2. Experimental Parameters
3.2.3. Metrics and Device
3.3. Experimental Results and Discussion
3.3.1. Indian Pines
3.3.2. Salinas
3.3.3. Houston University 2018
4. Parameter Analysis and Discussion
4.1. First Stage
4.2. Second Stage
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Kuras, A.; Brell, M.; Rizzi, J.; Burud, I. Hyperspectral and Lidar Data Applied to the Urban Land Cover Machine Learning and Neural-Network-Based Classification: A Review. Remote Sens. 2021, 13, 3393.
2. Feng, L.; Zhang, Z.; Ma, Y.; Du, Q.; Williams, P.; Drewry, J.; Luck, B. Alfalfa Yield Prediction Using UAV-Based Hyperspectral Imagery and Ensemble Learning. Remote Sens. 2020, 12, 2028.
3. Hellwig, F.M.; Stelmaszczuk-Górska, M.A.; Dubois, C.; Wolsza, M.; Truckenbrodt, S.C.; Sagichewski, H.; Chmara, S.; Bannehr, L.; Lausch, A.; Schmullius, C. Mapping European Spruce Bark Beetle Infestation at Its Early Phase Using Gyrocopter-Mounted Hyperspectral Data and Field Measurements. Remote Sens. 2021, 13, 4659.
4. Pour, A.B.; Zoheir, B.; Pradhan, B.; Hashim, M. Editorial for the Special Issue: Multispectral and Hyperspectral Remote Sensing Data for Mineral Exploration and Environmental Monitoring of Mined Areas. Remote Sens. 2021, 13, 519.
5. Manninen, A.; Kääriäinen, T.; Parviainen, T.; Buchter, S.; Heiliö, M.; Laurila, T. Long Distance Active Hyperspectral Sensing Using High-Power near-Infrared Supercontinuum Light Source. Opt. Express 2014, 22, 7172–7177.
6. Ou, Y.; Zhang, B.; Yin, K.; Xu, Z.; Chen, S.; Hou, J. Hyperspectral Imaging for the Spectral Measurement of Far-Field Beam Divergence Angle and Beam Uniformity of a Supercontinuum Laser. Opt. Express 2018, 26, 9822–9828.
7. Qian, L.; Wu, D.; Liu, D.; Song, S.; Shi, S.; Gong, W.; Wang, L. Parameter Simulation and Design of an Airborne Hyperspectral Imaging LiDAR System. Remote Sens. 2021, 13, 5123.
8. Jing, Z.; Guan, H.; Zhao, P.; Li, D.; Yu, Y.; Zang, Y.; Wang, H.; Li, J. Multispectral LiDAR Point Cloud Classification Using SE-PointNet++. Remote Sens. 2021, 13, 2516.
9. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep: Overview and Toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88.
10. Prasad, S.; Bruce, L.M. Limitations of Principal Components Analysis for Hyperspectral Target Recognition. IEEE Geosci. Remote Sens. Lett. 2008, 5, 625–629.
11. Villa, A.; Benediktsson, J.A.; Chanussot, J.; Jutten, C. Hyperspectral Image Classification with Independent Component Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876.
12. Green, A.A.; Berman, M.; Switzer, P.; Craig, M.D. A Transformation for Ordering Multispectral Data in Terms of Image Quality with Implications for Noise Removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74.
13. Li, W.; Prasad, S.; Fowler, J.E.; Bruce, L.M. Locality-Preserving Dimensionality Reduction and Classification for Hyperspectral Image Analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1185–1198.
14. Zu, B.; Xia, K.; Du, W.; Li, Y.; Ali, A.; Chakraborty, S. Classification of Hyperspectral Images with Robust Regularized Block Low-Rank Discriminant Analysis. Remote Sens. 2018, 10, 817.
15. Li, X.; Zhang, L.; You, J. Locally Weighted Discriminant Analysis for Hyperspectral Image Classification. Remote Sens. 2019, 11, 109.
16. Jia, S.; Zhao, Q.; Zhuang, J.; Tang, D.; Long, Y.; Xu, M.; Zhou, J.; Li, Q. Flexible Gabor-Based Superpixel-Level Unsupervised LDA for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10394–10409.
17. Zhang, L.; Su, H.; Shen, J. Hyperspectral Dimensionality Reduction Based on Multiscale Superpixelwise Kernel Principal Component Analysis. Remote Sens. 2019, 11, 1219.
18. Zhang, L.; Luo, F. Review on Graph Learning for Dimensionality Reduction of Hyperspectral Image. Geo-Spat. Inf. Sci. 2020, 23, 98–106.
19. Li, W.; Zhang, L.; Zhang, L.; Du, B. GPU Parallel Implementation of Isometric Mapping for Hyperspectral Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1532–1536.
20. Shi, G.; Huang, H.; Liu, J.; Li, Z.; Wang, L. Spatial-Spectral Multiple Manifold Discriminant Analysis for Dimensionality Reduction of Hyperspectral Imagery. Remote Sens. 2019, 11, 2414.
21. Shi, G.; Luo, F.; Tang, Y.; Li, Y. Dimensionality Reduction of Hyperspectral Image Based on Local Constrained Manifold Structure Collaborative Preserving Embedding. Remote Sens. 2021, 13, 1363.
22. Martínez-Usó, A.; Pla, F.; Sotoca, J.M.; García-Sevilla, P. Clustering-Based Hyperspectral Band Selection Using Information Measures. IEEE Trans. Geosci. Remote Sens. 2007, 45, 4158–4171.
23. Kang, X.; Li, S.; Benediktsson, J.A. Feature Extraction of Hyperspectral Images with Image Fusion and Recursive Filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3742–3752.
24. Wang, Q.; Li, Q.; Li, X. A Fast Neighborhood Grouping Method for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5028–5039.
25. Tang, C.; Liu, X.; Zhu, E.; Wang, L.; Zomaya, A. Hyperspectral Band Selection via Spatial-Spectral Weighted Region-Wise Multiple Graph Fusion-Based Spectral Clustering. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), Montreal, QC, Canada, 19–27 August 2021; pp. 3038–3044.
26. Zhao, H.; Bruzzone, L.; Guan, R.; Zhou, F.; Yang, C. Spectral-Spatial Genetic Algorithm-Based Unsupervised Band Selection for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9616–9632.
27. Rasti, B.; Scheunders, P.; Ghamisi, P.; Licciardi, G.; Chanussot, J. Noise Reduction in Hyperspectral Imagery: Overview and Application. Remote Sens. 2018, 10, 482.
28. Rasti, B.; Ulfarsson, M.O.; Sveinsson, J.R. Hyperspectral Feature Extraction Using Total Variation Component Analysis. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6976–6985.
29. Duan, P.; Ghamisi, P.; Kang, X.; Rasti, B.; Li, S.; Gloaguen, R. Fusion of Dual Spatial Information for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1–13.
30. Wang, M.; Wang, Q.; Chanussot, J.; Li, D. Hyperspectral Image Mixed Noise Removal Based on Multidirectional Low-Rank Modeling and Spatial–Spectral Total Variation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 488–507.
31. Wang, M.; Wang, Q.; Chanussot, J.; Hong, D. L0-l1 Hybrid Total Variation Regularization and Its Applications on Hyperspectral Image Mixed Noise Removal and Compressed Sensing. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7695–7710.
32. Zhu, Z.; Luo, Y.; Qi, G.; Meng, J.; Li, Y.; Mazur, N. Remote Sensing Image Defogging Networks Based on Dual Self-Attention Boost Residual Octave Convolution. Remote Sens. 2021, 13, 3104.
33. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
34. Gong, H.; Li, Q.; Li, C.; Dai, H.; He, Z.; Wang, W.; Li, H.; Han, F.; Tuniyazi, A.; Mu, T. Multiscale Information Fusion for Hyperspectral Image Classification Based on Hybrid 2D-3D CNN. Remote Sens. 2021, 13, 2268.
35. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582.
36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
37. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
38. Zeng, Y.; Ritz, C.; Zhao, J.; Lan, J. Attention-Based Residual Network with Scattering Transform Features for Hyperspectral Unmixing with Limited Training Samples. Remote Sens. 2020, 12, 400.
39. Xie, J.; He, N.; Fang, L.; Ghamisi, P. Multiscale Densely-Connected Fusion Networks for Hyperspectral Images Classification. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 246–259.
40. Liu, J.; Yang, Z.; Liu, Y.; Mu, C. Hyperspectral Remote Sensing Images Deep Feature Extraction Based on Mixed Feature and Convolutional Neural Networks. Remote Sens. 2021, 13, 2599.
41. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear Total Variation Based Noise Removal Algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
42. Shi, Y.; Chang, Q. Efficient Algorithm for Isotropic and Anisotropic Total Variation Deblurring and Denoising. J. Appl. Math. 2013, 2013, 797239.
43. Goldstein, T.; Osher, S. The Split Bregman Method for L1-Regularized Problems. SIAM J. Imaging Sci. 2009, 2, 323–343.
44. Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure Extraction from Texture via Relative Total Variation. ACM Trans. Graph. 2012, 31, 1–10.
45. Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-Based Edge-Preserving Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151.
46. Xu, Y.; Du, B.; Zhang, L.; Cerra, D.; Pato, M.; Carmona, E.; Prasad, S.; Yokoya, N.; Hänsch, R.; Le Saux, B. Advanced Multi-Sensor Optical Remote Sensing for Urban Land Use and Land Cover Classification: Outcome of the 2018 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1709–1724.
47. 2018 IEEE GRSS Data Fusion Contest. Available online: http://www.grss-ieee.org/community/technical-committees/data-fusion (accessed on 1 December 2021).
48. Melgani, F.; Bruzzone, L. Classification of Hyperspectral Remote Sensing Images with Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
49. Fang, L.; He, N.; Li, S.; Plaza, A.J.; Plaza, J. A New Spatial–Spectral Feature Extraction Method for Hyperspectral Images Using Local Covariance Matrix Representation. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3534–3546.
50. Xu, Y.; Du, B.; Zhang, F.; Zhang, L. Hyperspectral Image Classification via a Random Patches Network. ISPRS J. Photogramm. Remote Sens. 2018, 142, 344–357.
51. Duan, P.; Kang, X.; Li, S.; Ghamisi, P. Noise-Robust Hyperspectral Image Classification via Multi-Scale Total Variation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1948–1962.
52. Liu, J.; Wu, Z.; Xiao, L.; Sun, J.; Yan, H. Generalized Tensor Regression for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1244–1258.
53. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. IEEE Trans. Geosci. Remote Sens. 2021, 1.
54. Foody, G.M. Classification Accuracy Comparison: Hypothesis Tests and the Use of Confidence Intervals in Evaluations of Difference, Equivalence and Non-Inferiority. Remote Sens. Environ. 2009, 113, 1658–1663.
55. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. ICLR 2021. Available online: https://openreview.net/forum?id=YicbFdNTTy (accessed on 1 December 2021).
56. Chen, C.; Xiong, Z.; Tian, X.; Zha, Z.-J.; Wu, F. Real-World Image Denoising with Deep Boosting. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 3071–3087.
57. Miller, S.; Zhang, C.; Hirakawa, K. Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography. IEEE Trans. Image Process. 2021, 30, 5724–5738.
Datasets | Indian Pines | Salinas | Houston University 2018 |
---|---|---|---|
Source | AVIRIS sensor | AVIRIS sensor | CASI 1500 |
Spectral Range | 0.4–2.5 μm | 0.4–2.5 μm | 0.38–1.05 μm |
Spatial Resolution | 20 m | 3.7 m | 1 m |
Classes | 16 | 16 | 20
Bands | 220 | 224 | 48
Spatial Size | 145 × 145 | 512 × 217 | 601 × 2384 |
Class | Training Set | Test Set | SVM | LCMR | RPNet | MSTV | GTR | DBDA | FDSI | l0-l1HTV | SpectralFormer | OURS |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 10 | 36 | 14.65 | 99.17 | 90.44 | 94.48 | 100.00 | 100.00 | 97.37 | 90.83 | 70.42 | 96.94 |
2 | 10 | 1418 | 42.66 | 74.92 | 68.78 | 77.80 | 66.69 | 86.36 | 81.51 | 71.27 | 48.64 | 81.73 |
3 | 10 | 820 | 37.52 | 73.01 | 56.83 | 79.84 | 74.32 | 80.07 | 95.08 | 76.66 | 70.50 | 79.48 |
4 | 10 | 227 | 15.83 | 96.65 | 41.57 | 56.73 | 93.74 | 100.00 | 88.55 | 82.38 | 82.13 | 98.59 |
5 | 10 | 473 | 50.74 | 88.69 | 80.53 | 92.00 | 83.51 | 99.52 | 95.46 | 81.99 | 68.87 | 88.27 |
6 | 10 | 720 | 78.07 | 88.24 | 94.30 | 98.69 | 94.58 | 87.80 | 99.57 | 81.79 | 89.71 | 97.15 |
7 | 10 | 19 | 18.34 | 100.00 | 51.59 | 71.04 | 100.00 | 65.71 | 61.14 | 95.56 | 100.00 | 100.00 |
8 | 10 | 468 | 90.07 | 99.59 | 91.78 | 99.98 | 96.20 | 100.00 | 100.00 | 98.21 | 98.21 | 100.00 |
9 | 10 | 10 | 6.86 | 100.00 | 41.04 | 65.80 | 100.00 | 93.33 | 79.05 | 100.00 | 100.00 | 100.00 |
10 | 10 | 962 | 36.07 | 74.31 | 68.81 | 74.93 | 69.58 | 85.82 | 78.28 | 78.87 | 77.92 | 86.14 |
11 | 10 | 2445 | 59.45 | 69.21 | 80.49 | 92.13 | 68.62 | 86.84 | 95.81 | 74.15 | 68.54 | 89.87 |
12 | 10 | 583 | 20.66 | 81.29 | 47.87 | 73.40 | 80.55 | 96.68 | 63.01 | 66.88 | 76.38 | 92.62 |
13 | 10 | 195 | 72.95 | 99.29 | 94.52 | 99.29 | 99.28 | 100.00 | 99.38 | 99.33 | 97.71 | 99.59 |
14 | 10 | 1255 | 80.94 | 96.37 | 95.80 | 99.18 | 88.37 | 91.29 | 97.28 | 96.36 | 90.69 | 99.86 |
15 | 10 | 376 | 35.22 | 95.29 | 59.97 | 98.53 | 94.95 | 95.73 | 94.66 | 97.13 | 46.63 | 98.88 |
16 | 10 | 83 | 87.57 | 97.83 | 98.10 | 95.38 | 99.28 | 93.62 | 78.26 | 92.65 | 100.00 | 93.13 |
AA | | | 43.73 | 89.62 | 72.65 | 85.58 | 88.10 | 91.42 | 87.74 | 86.50 | 80.40 | 93.89 |
OA | | | 46.35 | 81.17 | 70.99 | 85.75 | 80.36 | 89.08 | 88.34 | 80.75 | 73.18 | 90.64 |
KAPPA | | | 40.10 | 78.72 | 67.40 | 83.85 | 75.69 | 87.58 | 87.24 | 78.24 | 69.67 | 89.35 |
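For reference, the AA, OA, and Kappa values reported in this and the following tables can be computed from a per-class confusion matrix as in the short sketch below; the function and variable names are ours, and unlabeled pixels are assumed to carry label 0.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy (OA), average accuracy (AA), and Cohen's Kappa
    for per-pixel labels in 1..n_classes (0 = unlabeled, ignored)."""
    cm = np.zeros((n_classes, n_classes), dtype=np.float64)
    mask = y_true.ravel() > 0
    for t, p in zip(y_true.ravel()[mask], y_pred.ravel()[mask]):
        cm[t - 1, p - 1] += 1                            # accumulate the confusion matrix
    total = cm.sum()
    oa = np.trace(cm) / total                            # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))           # mean of the per-class accuracies
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / total ** 2  # expected chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```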
Methods | SVM | LCMR | RPNet | MSTV | GTR | DBDA | FDSI | l0-l1HTV | SpectralFormer | OURS |
---|---|---|---|---|---|---|---|---|---|---|
Time | 4.250 | 10.021 | 2.356 | 3.452 | 4.067 | 105.61 | 7.680 | 159.59 | 331.24 | 1.354 |
Class | Training Set | Test Set | SVM | LCMR | RPNet | MSTV | GTR | DBDA | FDSI | l0-l1HTV | SpectralFormer | OURS |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 5 | 2003 | 80.85 | 89.51 | 86.36 | 100.00 | 98.58 | 100.00 | 99.83 | 99.54 | 98.09 | 96.24 |
2 | 5 | 3721 | 95.16 | 97.41 | 97.99 | 99.67 | 92.18 | 97.25 | 100.00 | 98.97 | 94.47 | 99.81 |
3 | 5 | 1971 | 74.19 | 88.18 | 96.08 | 94.58 | 99.90 | 98.99 | 99.73 | 94.13 | 85.02 | 100.00 |
4 | 5 | 1389 | 97.16 | 95.36 | 97.83 | 93.74 | 96.68 | 93.91 | 90.04 | 99.57 | 97.82 | 98.54 |
5 | 5 | 2673 | 92.94 | 95.42 | 96.01 | 93.55 | 95.31 | 93.63 | 94.69 | 91.91 | 97.37 | 96.65 |
6 | 5 | 3954 | 100.00 | 98.48 | 99.88 | 99.16 | 99.82 | 99.85 | 100.00 | 95.19 | 99.04 | 99.67 |
7 | 5 | 3574 | 94.12 | 93.77 | 93.78 | 99.61 | 99.62 | 97.89 | 97.91 | 98.70 | 95.53 | 97.64 |
8 | 5 | 11,266 | 64.31 | 74.82 | 67.17 | 84.01 | 68.48 | 77.19 | 94.38 | 71.04 | 46.08 | 82.23 |
9 | 5 | 6198 | 95.45 | 99.08 | 99.08 | 97.29 | 99.99 | 97.46 | 99.37 | 99.74 | 97.04 | 100.00 |
10 | 5 | 3273 | 66.16 | 93.32 | 72.47 | 93.26 | 90.89 | 95.61 | 96.24 | 88.65 | 73.27 | 92.10 |
11 | 5 | 1063 | 61.36 | 94.40 | 85.38 | 96.87 | 98.52 | 88.00 | 73.41 | 94.37 | 88.45 | 99.61 |
12 | 5 | 1922 | 77.38 | 98.51 | 91.34 | 93.68 | 94.34 | 100.00 | 94.66 | 100.00 | 64.71 | 99.54 |
13 | 5 | 911 | 80.09 | 95.13 | 86.22 | 97.25 | 97.65 | 96.38 | 99.96 | 97.89 | 97.77 | 93.15 |
14 | 5 | 1065 | 67.62 | 96.50 | 88.22 | 90.18 | 90.93 | 97.44 | 94.0 | 94.72 | 89.90 | 99.19 |
15 | 5 | 7263 | 43.18 | 87.18 | 47.74 | 70.95 | 69.36 | 81.69 | 67.87 | 86.76 | 72.39 | 85.83 |
16 | 5 | 1802 | 94.68 | 91.18 | 76.56 | 98.67 | 99.13 | 100.00 | 100.00 | 85.37 | 86.23 | 99.91 |
AA | | | 80.85 | 93.58 | 86.38 | 93.90 | 93.21 | 94.70 | 93.88 | 93.53 | 86.45 | 96.26 |
OA | | | 74.58 | 88.65 | 80.50 | 89.50 | 87.33 | 90.88 | 90.39 | 89.75 | 79.33 | 93.28 |
KAPPA | | | 71.96 | 86.93 | 78.39 | 88.33 | 85.94 | 89.82 | 89.35 | 88.65 | 77.16 | 91.84 |
Methods | SVM | LCMR | RPNet | MSTV | GTR | DBDA | FDSI | l0-l1HTV | SpectralFormer | OURS |
---|---|---|---|---|---|---|---|---|---|---|
Time | 4.32 | 57.77 | 10.23 | 15.94 | 15.61 | 334.94 | 33.15 | 857.75 | 1686.43 | 6.45 |
Class | Training Set | Test Set | SVM | LCMR | RPNet | MSTV | GTR | DBDA | FDSI | l0-l1HTV | SpectralFormer | OURS |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 100 | 9699 | 80.30 | 87.28 | 68.30 | 61.53 | 91.65 | 83.86 | 52.36 | 83.89 | 97.46 | 84.02 |
2 | 100 | 32,402 | 85.35 | 84.12 | 85.51 | 88.23 | 54.90 | 85.26 | 83.90 | 73.30 | 84.93 | 79.16 |
3 | 100 | 584 | 95.60 | 100.00 | 23.74 | 95.71 | 100.00 | 100.00 | 100.00 | 99.77 | 100.00 | 100.00 |
4 | 100 | 13,488 | 76.84 | 96.95 | 80.43 | 79.30 | 98.11 | 92.88 | 74.79 | 90.31 | 95.25 | 95.39 |
5 | 100 | 4948 | 26.93 | 93.72 | 26.24 | 44.39 | 86.90 | 79.91 | 50.70 | 79.92 | 82.50 | 95.56 |
6 | 100 | 4416 | 42.54 | 99.68 | 61.35 | 93.82 | 100.00 | 98.16 | 60.33 | 99.44 | 95.81 | 100.00 |
7 | 100 | 166 | 51.37 | 100.00 | 66.94 | 95.54 | 100.00 | 100.00 | 97.24 | 98.39 | 99.40 | 99.04 |
8 | 100 | 39,662 | 51.62 | 84.73 | 65.14 | 75.01 | 70.51 | 79.38 | 85.76 | 88.70 | 65.15 | 92.76 |
9 | 100 | 223,584 | 96.24 | 71.56 | 95.72 | 97.79 | 56.52 | 92.51 | 92.04 | 70.50 | 62.30 | 89.39 |
10 | 100 | 45,710 | 48.33 | 50.96 | 49.63 | 66.27 | 15.29 | 62.43 | 72.83 | 42.56 | 32.20 | 58.33 |
11 | 100 | 33,902 | 42.62 | 48.63 | 40.70 | 48.93 | 6.40 | 63.06 | 54.39 | 29.07 | 29.52 | 49.31 |
12 | 100 | 1416 | 4.54 | 76.91 | 5.76 | 9.71 | 12.50 | 35.62 | 12.80 | 73.33 | 45.83 | 85.00 |
13 | 100 | 46,258 | 59.19 | 55.57 | 69.27 | 82.14 | 30.90 | 69.78 | 86.11 | 61.13 | 35.25 | 69.88 |
14 | 100 | 9749 | 42.37 | 94.26 | 72.05 | 82.81 | 97.81 | 83.91 | 77.01 | 96.38 | 84.76 | 98.99 |
15 | 100 | 6837 | 60.40 | 99.60 | 66.39 | 94.96 | 92.69 | 91.87 | 91.04 | 95.58 | 96.31 | 98.51 |
16 | 100 | 11,375 | 56.61 | 89.47 | 55.27 | 82.77 | 41.40 | 82.86 | 83.44 | 67.27 | 70.58 | 84.88 |
17 | 100 | 49 | 4.65 | 100.00 | 2.46 | 40.20 | 100.00 | 84.94 | 44.89 | 100.00 | 100.00 | 100.00 |
18 | 100 | 6478 | 23.36 | 90.57 | 33.63 | 66.66 | 85.88 | 86.01 | 67.40 | 69.79 | 75.75 | 81.62 |
19 | 100 | 5265 | 30.00 | 97.43 | 53.70 | 80.18 | 80.58 | 90.81 | 66.64 | 81.81 | 85.38 | 92.05 |
20 | 100 | 6724 | 56.75 | 99.90 | 52.86 | 75.35 | 99.52 | 94.55 | 89.55 | 99.01 | 95.17 | 99.85 |
AA | | | 51.78 | 86.07 | 53.76 | 73.07 | 71.08 | 82.89 | 72.16 | 80.01 | 76.58 | 87.69 |
OA | | | 62.88 | 72.15 | 67.37 | 80.90 | 52.61 | 83.49 | 80.37 | 68.37 | 59.67 | 82.16 |
KAPPA | | | 55.86 | 66.00 | 60.80 | 75.90 | 44.88 | 77.13 | 76.19 | 61.57 | 52.19 | 77.27 |
Methods | SVM | LCMR | RPNet | MSTV | GTR | DBDA | FDSI | l0-l1HTV | SpectralFormer | OURS |
---|---|---|---|---|---|---|---|---|---|---|
Time | 198.08 | 522.71 | 445.60 | 226.44 | 233.04 | 22,202.63 | 722.46 | 2743.76 | 6524.85 | 75.71 |