Advanced Hyperspectral Image Analysis: Superpixelwise Multiscale Adaptive T-HOSVD for 3D Feature Extraction
Abstract
1. Introduction
- (1) Introducing SmaT-HOSVD, a 3D feature extraction method utilizing superpixels and Multiscale Adaptive T-HOSVD, which fully exploits spectral and spatial information and enhances the comprehensiveness of the extracted features;
- (2) Employing adaptive rank estimation based on the energy distribution of the data, which improves noise separation and inter-class distinction. In addition, a low-rank fusion strategy is used to generate multilevel features, enhancing the comprehensiveness of the features and the robustness of the algorithm;
- (3) Demonstrating low time complexity and memory footprint, enhancing computational efficiency, and validating performance across multiple public datasets.
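To make the two core ingredients concrete, the following is a minimal NumPy sketch of truncated HOSVD with energy-based adaptive rank selection. It is an illustrative reimplementation under simplified assumptions, not the authors' code; the function names and the `energy` parameter (which stands in for the thresholds discussed later) are ours.

```python
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding: move axis `mode` to the front, then flatten the rest.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def adaptive_rank(X, mode, energy=0.9):
    # Smallest rank whose leading singular values capture `energy`
    # fraction of the mode-n spectral energy.
    s = np.linalg.svd(unfold(X, mode), compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

def t_hosvd(X, ranks):
    # Truncated HOSVD: per-mode factor matrices from the leading left
    # singular vectors of each unfolding, core via mode products.
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        Us.append(U[:, :r])
    core = X.copy()
    for mode, U in enumerate(Us):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, Us

def reconstruct(core, Us):
    X = core
    for mode, U in enumerate(Us):
        X = np.moveaxis(np.tensordot(U, np.moveaxis(X, mode, 0), axes=1), 0, mode)
    return X
```

When the truncation ranks equal the multilinear ranks of the tensor, the reconstruction is exact; for a noisy HSI cube, truncating to the energy-selected ranks discards the low-energy (noise-dominated) subspaces.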
2. Related Works
2.1. Notations and Definitions
2.2. T-HOSVD
3. Methods
3.1. Superpixel Segmentation
3.2. Superpixelwise Multiscale Adaptive T-HOSVD
3.3. Classification
Algorithm 1: Superpixelwise Multiscale Adaptive T-HOSVD (SmaT-HOSVD)
Input: HSI data, number of superpixels S, reduction dimensionality n, thresholds T1 and T2.
Output: Classification results
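The per-superpixel flow of Algorithm 1 can be sketched as follows in NumPy. This is a simplified illustration, not the paper's implementation: a regular grid stands in for the ERS superpixels, the multiscale fusion is reduced to averaging the low-rank reconstructions at two energy thresholds, and all names are ours.

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def thresholded_hosvd(X, T):
    """Low-rank approximation: per mode, project onto the leading left
    singular vectors capturing a fraction T[mode] of the energy."""
    Y = X.copy()
    for mode, t in enumerate(T):
        U, s, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), t) + 1)
        P = U[:, :r] @ U[:, :r].T  # projector onto the leading subspace
        Y = np.moveaxis(np.tensordot(P, np.moveaxis(Y, mode, 0), axes=1), 0, mode)
    return Y

def grid_superpixels(h, w, n_side):
    # Toy stand-in for ERS: an n_side x n_side grid of square "superpixels".
    rows = np.minimum(np.arange(h) * n_side // h, n_side - 1)
    cols = np.minimum(np.arange(w) * n_side // w, n_side - 1)
    return rows[:, None] * n_side + cols[None, :]

def smat_hosvd_features(hsi, labels, T1, T2):
    """Per-superpixel low-rank extraction, fused across two threshold levels."""
    out = np.zeros_like(hsi)
    for lbl in np.unique(labels):
        r, c = np.nonzero(labels == lbl)
        box = hsi[r.min():r.max() + 1, c.min():c.max() + 1, :]
        fused = 0.5 * (thresholded_hosvd(box, T1) + thresholded_hosvd(box, T2))
        out[r, c, :] = fused[r - r.min(), c - c.min(), :]
    return out
```

In the actual method, ERS provides irregular homogeneous regions, the rank per mode is estimated adaptively from the data's energy distribution, and the fused features are subsequently reduced to n dimensions and classified with an SVM.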
4. Results
4.1. Datasets
- (1) Indian Pines: This dataset was acquired by the AVIRIS imaging spectrometer over the Indian Pines test site in northwestern Indiana. It has a spatial size of 145 × 145 pixels and comprises 224 bands of reflectance data covering the spectral range from 400 nm to 2500 nm. Approximately two-thirds of the scene is agricultural, while one-third is predominantly forest or other vegetation. Bands covering the water absorption region were excluded, leaving 200 bands for the study.
- (2) Pavia University: Collected by the ROSIS sensor during a flight over Pavia, northern Italy, this dataset comprises 610 × 340 pixels in 115 consecutive bands within the wavelength range of 430 nm to 860 nm. Twelve bands are affected by noise, leaving 103 bands for the study.
- (3) Salinas: Captured by the AVIRIS imaging spectrometer, developed by NASA's Jet Propulsion Laboratory (JPL), this dataset covers the Salinas Valley in California, USA. The image size is 512 × 217 pixels with 224 bands. Bands covering the water absorption region were excluded, leaving 204 bands for the study. The scene includes vegetables, bare soil, and vineyards.
4.2. Experimental Setup
- (1) A support vector machine (SVM) is employed as the classifier, with a Gaussian radial basis function (RBF) as its kernel. Hyperparameters are determined via five-fold cross-validation.
- (2) T-HOSVD is used for feature extraction and compared with the proposed SmaT-HOSVD. The classifier is SVM, and the truncation rank is set to 60.
- (3) SpectralFormer [26] is a Transformer-based neural network, used here in its patch-wise version with a patch size of 1 × 3 and trained for 300 epochs.
- (4) SSTN is employed with a batch size of 32 and trained for 100 epochs, keeping other hyperparameters consistent with the specifications in [27].
- (5) SuperPCA relies on dimensionality reduction through superpixel segmentation, using the same parameters as specified in [22].
- (6) SpaSSA is based on feature extraction with superpixel segmentation; the optimal parameters specified in [23] are used.
- (7) TensorSSA extracts spectral-spatial features through a 3D approach; the optimal parameters specified in [20] are used.
- (8) For the proposed method, the optimal parameters are detailed in Section 5.1. SVM is employed for classification.
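The five-fold cross-validation protocol used for hyperparameter selection in (1) can be sketched as follows. To keep the example self-contained, a kernel nearest-centroid rule stands in for the actual SVM, so the sketch illustrates only the CV procedure for picking the RBF width gamma; all function names are ours.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    # Shuffle once, then split into k roughly equal folds.
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def rbf_kernel(A, B, gamma):
    # Gaussian RBF kernel matrix between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nearest_centroid_rbf(Xtr, ytr, Xte, gamma):
    # Stand-in for the SVM: assign each test point to the class
    # with the highest mean kernel similarity to its training points.
    classes = np.unique(ytr)
    scores = np.stack(
        [rbf_kernel(Xte, Xtr[ytr == c], gamma).mean(1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]

def cv_select_gamma(X, y, gammas, k=5):
    # Pick the gamma with the best mean accuracy over k held-out folds.
    best, best_acc = None, -1.0
    folds = kfold_indices(len(X), k)
    for g in gammas:
        accs = []
        for i in range(k):
            te = folds[i]
            tr = np.concatenate([folds[j] for j in range(k) if j != i])
            accs.append((nearest_centroid_rbf(X[tr], y[tr], X[te], g) == y[te]).mean())
        if np.mean(accs) > best_acc:
            best, best_acc = g, float(np.mean(accs))
    return best, best_acc
```

In practice the same loop structure applies with an SVM (e.g., jointly searching the regularization constant C and gamma); only the inner classifier changes.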
4.3. Comparisons with State-of-the-Art Models
- (1) Performance with Different Training Percentages: Figure 4 displays the overall accuracy (OA) achieved by the different methods at various training percentages for the three datasets. The randomly selected training-sample percentages are: 1%, 2%, 3%, 4%, and 5% for the Indian Pines (IP) dataset; 0.5%, 1%, 2%, 3%, and 5% for the Pavia University (PU) dataset; and 0.2%, 0.4%, 0.5%, 1%, and 2% for the Salinas (SD) dataset.
- (2) Quantitative Evaluation: To thoroughly evaluate the effectiveness of the proposed method, we provide a detailed comparison of producer accuracy (PA) for each category in Table 1, Table 2 and Table 3, along with the three evaluation metrics OA, AA, and Kappa. The proposed SmaT-HOSVD consistently achieves the highest accuracy in most classes and outperforms the other methods on the overall metrics. Taking the IP dataset in Table 1 as an example: in terms of OA, SmaT-HOSVD improves on SVM, T-HOSVD, SpectralFormer, SSTN, SuperPCA, SpaSSA, and TensorSSA by 27.83%, 27.78%, 33.08%, 4.24%, 1.81%, 14.35%, and 7.35%, respectively. SmaT-HOSVD also improves AA and Kappa to varying degrees. It is worth noting that the PA of SuperPCA is close to that of SmaT-HOSVD, and SuperPCA's processing time is shorter. This is because, during SmaT-HOSVD processing, too many neighboring pixels are added to some small superpixel blocks, which introduces noise and degrades the result; moreover, the time complexity of SmaT-HOSVD is higher than that of SuperPCA. Overall, however, SmaT-HOSVD achieves the better performance across all three datasets. On per-class PA, SmaT-HOSVD's accuracy is generally at the top of the list, above 80% for 5 classes and above 90% for 11 classes. The superior performance of SmaT-HOSVD is also observed in Table 2 and Table 3 for the Pavia University and Salinas datasets, respectively.
- (3) Qualitative Evaluation: Classification maps for the IP, PU, and SD data are illustrated in Figure 5, Figure 6 and Figure 7; the analysis focuses on the IP dataset in Figure 5. SVM and T-HOSVD exhibit significant classification noise across all categories, with an exceptionally high number of classification errors. The performance of SpectralFormer is unsatisfactory, showing high accuracy only in a few classes while remaining extremely noisy in most; this outcome is likely attributable to the limited training samples and underfitting of the model. SSTN is significantly more effective, reducing speckle-like classification errors, but still exhibits misclassifications in certain areas. SuperPCA achieves a high accuracy rate primarily because superpixel blocks provide localized features and enhance intra-class similarity; however, some classes are still affected by speckle contamination. Both SpaSSA and TensorSSA also exhibit significant classification noise.
- (4) Analysis of Running Time: Table 1, Table 2 and Table 3 also give the running times of all methods. From the tables, T-HOSVD, SuperPCA, and SmaT-HOSVD are the fastest and are essentially of the same order of magnitude. The training time of SpectralFormer and SSTN is long due to model complexity and the high number of training iterations, even when using GPUs. SpaSSA exhibits a longer runtime because feature extraction must be carried out individually for each band, which is time-consuming. TensorSSA requires decomposing the trajectory tensor, which is likewise time-consuming. Compared to SuperPCA and T-HOSVD, SmaT-HOSVD requires more time due to the additional processing involved; nonetheless, it remains relatively efficient compared to most of the other methods.
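For reference, the three overall metrics reported in the tables can all be computed from a confusion matrix. The sketch below is a generic implementation of the standard definitions, not the authors' evaluation code.

```python
import numpy as np

def metrics_from_confusion(C):
    """OA, AA, and Cohen's kappa from a confusion matrix where
    C[i, j] = number of samples of true class i predicted as class j."""
    C = np.asarray(C, dtype=float)
    n = C.sum()
    oa = np.trace(C) / n                              # overall accuracy
    pa = np.diag(C) / C.sum(axis=1)                   # per-class producer accuracy
    aa = pa.mean()                                    # average accuracy
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

For example, a two-class matrix `[[4, 1], [1, 4]]` yields OA = 0.8, AA = 0.8, and kappa = 0.6.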
5. Discussion
5.1. Parameter Analysis
- (1) Number of superpixels (S): Through superpixel segmentation, we obtain homogeneous regions in which similar pixels are more likely to belong to the same class [22]. The parameter S determines the granularity of the segmentation: a larger S leads to finer segmentation with smaller superpixels, whereas a smaller S results in coarser segmentation with larger superpixels. The degree of segmentation significantly influences the extracted features, and because of variations in spatial resolution and dataset size, the optimal S differs for each dataset. Although ideal classification accuracy requires over-segmentation of the scene, the complexity of land-cover distributions still calls for case-by-case analysis [50]. As shown in Figure 8, the best results are obtained at S = 30, 25, and 90 for the Indian Pines, Pavia University, and Salinas datasets, respectively. For example, on the IP dataset, values of S below 50 achieve good results because the number of homogeneous regions in the IP dataset is around 30. When S is too large, the image is over-segmented and the resulting superpixel blocks become fragmented and narrow, making the algorithm ineffective.
- (2) Reduction dimensionality (n): The reduced dimensionality n directly influences how much information is retained. A larger n generally retains more information but may also retain more noise; conversely, a smaller n emphasizes the principal components of the information, potentially losing important details. Because of the diversity of the data, the optimal n varies for each dataset: the ideal value filters out noise while preserving the essential information. As shown in Figure 9, in our approach the Indian Pines, Pavia University, and Salinas datasets work best when n is set to 20, 20, and 15, respectively. Across the datasets, our algorithm achieves its best classification performance for reduced dimensions in the range of roughly 15 to 25. This indicates that, for the obtained low-rank tensor, a reduced dimension of around 20 retains the most dominant information while removing noise, which is consistent with the conclusion in [45]. When the reduced dimension is larger, the retained information is redundant and noisy, and performance drops. That a reduced dimension of around 20 works well across different datasets further illustrates the stability of our algorithm.
- (3) Thresholds T1 and T2: The choice of thresholds T1 and T2 significantly affects the extracted features: a larger threshold yields a higher truncated rank, preserving more of the common features of the HSI. The level of the extracted low-rank features varies with T. As T increases, more dimensions are retained in the tensor decomposition, so the extracted features tend to be richer but may also contain some noise; conversely, as T decreases, fewer dimensions are retained and some details and features may be lost. To simplify parameter setting, the first two components of each threshold triple are set to the same value, while the third is chosen relatively independently. To obtain features at different levels, T2 is set higher than T1, with T2 acting as the larger threshold and T1 as the smaller one. In T-HOSVD, the sum of the top 10% or even 1% of the singular values accounts for more than 99% of the sum of all singular values. Based on this principle, we set T1 to [(0.8, 0.8, 0.8), (0.85, 0.85, 0.9), (0.8, 0.8, 0.9)] and T2 to [(0.8, 0.8, 0.95), (0.85, 0.85, 0.95), (0.9, 0.9, 0.95)]. The detailed parameter-tuning results are illustrated in Figure 10.
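The relationship between the energy threshold and the retained rank can be shown directly: for a fixed singular-value spectrum, a larger threshold never keeps fewer dimensions. A small sketch (the decaying spectrum here is synthetic, chosen only for illustration):

```python
import numpy as np

def rank_from_threshold(s, T):
    # Smallest r such that the top-r squared singular values
    # account for at least a fraction T of the total energy.
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, T) + 1)

s = 2.0 ** -np.arange(20)  # rapidly decaying singular values
r1 = rank_from_threshold(s, 0.80)  # smaller threshold -> fewer dimensions
r2 = rank_from_threshold(s, 0.95)  # larger threshold -> more dimensions
# For this spectrum, r1 == 2 and r2 == 3.
```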
5.2. Ablation Study
5.3. Analysis between SmaT-HOSVD and Classification
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28.
2. Deng, B.; Jia, S.; Shi, D. Deep Metric Learning-Based Feature Embedding for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1422–1435.
3. Zhao, C.; Li, X.; Ren, J.; Marshall, S. Improved sparse representation using adaptive spatial support for effective target detection in hyperspectral imagery. Int. J. Remote Sens. 2013, 34, 8669–8684.
4. Luo, Y.; Zou, J.; Yao, C.; Zhao, X.; Li, T.; Bai, G. HSI-CNN: A Novel Convolution Neural Network for Hyperspectral Image. In Proceedings of the 2018 International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 16–17 July 2018; pp. 464–469.
5. Carpenter, M.H.; Croce, M.P.; Baker, Z.K.; Batista, E.R.; Caffrey, M.P.; Fontes, C.J.; Koehler, K.E.; Kossmann, S.E.; McIntosh, K.G.; Rabin, M.W.; et al. Hyperspectral X-ray Imaging with TES Detectors for Nanoscale Chemical Speciation Mapping. J. Low Temp. Phys. 2020, 200, 437–444.
6. Al Ktash, M.; Stefanakis, M.; Englert, T.; Drechsel, M.S.L.; Stiedl, J.; Green, S.; Jacob, T.; Boldrini, B.; Ostertag, E.; Rebner, K.; et al. UV Hyperspectral Imaging as Process Analytical Tool for the Characterization of Oxide Layers and Copper States on Direct Bonded Copper. Sensors 2021, 21, 7332.
7. Batshev, V.I.; Krioukov, A.V.; Machikhin, A.S.; Zolotukhina, A.A. Multispectral video camera optical system. J. Opt. Technol. 2023, 90, 706–712.
8. Adesokan, M.; Alamu, E.O.; Otegbayo, B.; Maziya-Dixon, B. A Review of the Use of Near-Infrared Hyperspectral Imaging (NIR-HSI) Techniques for the Non-Destructive Quality Assessment of Root and Tuber Crops. Appl. Sci. 2023, 13, 5226.
9. Kulya, M.; Petrov, N.V.; Tsypkin, A.; Egiazarian, K.; Katkovnik, V. Hyperspectral data denoising for terahertz pulse time-domain holography. Opt. Express 2019, 27, 18456–18476.
10. Fabiyi, S.D.; Vu, H.; Tachtatzis, C.; Murray, P.; Harle, D.; Dao, T.K.; Andonovic, I.; Ren, J.; Marshall, S. Varietal Classification of Rice Seeds Using RGB and Hyperspectral Images. IEEE Access 2020, 8, 22493–22505.
11. Cloutis, E.A. Hyperspectral geological remote sensing: Evaluation of analytical techniques. Int. J. Remote Sens. 1996, 17, 2215–2242.
12. Cochrane, M.A. Using vegetation reflectance variability for species level classification of hyperspectral data. Int. J. Remote Sens. 2000, 21, 2075–2087.
13. Murphy, R.J.; Monteiro, S.T.; Schneider, S. Evaluating Classification Techniques for Mapping Vertical Geology Using Field-Based Hyperspectral Sensors. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3066–3080.
14. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
15. Qiao, T.; Ren, J.; Wang, Z.; Zabalza, J.; Sun, M.; Zhao, H.; Li, S.; Benediktsson, J.A.; Dai, Q.; Marshall, S. Effective Denoising and Classification of Hyperspectral Images Using Curvelet Transform and Singular Spectrum Analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 119–133.
16. Zabalza, J.; Ren, J.; Yang, M.; Zhang, Y.; Wang, J.; Marshall, S.; Han, J. Novel Folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing. ISPRS J. Photogramm. Remote Sens. 2014, 93, 112–122.
17. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images with Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873.
18. Hang, R.; Liu, Q.; Sun, Y.; Yuan, X.; Pei, H.; Plaza, J.; Plaza, A. Robust Matrix Discriminative Analysis for Feature Extraction From Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2002–2011.
19. He, X.; Niyogi, P. Locality Preserving Projections. Adv. Neural Inf. Process. Syst. 2003, 16.
20. Fu, H.; Sun, G.; Zhang, A.; Shao, B.; Ren, J.; Jia, X. Tensor Singular Spectrum Analysis for 3-D Feature Extraction in Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5403914.
21. Imani, M.; Ghassemian, H. An overview on spectral and spatial information fusion for hyperspectral image classification: Current trends and challenges. Inf. Fusion 2020, 59, 59–83.
22. Jiang, J.; Ma, J.; Chen, C.; Wang, Z.; Cai, Z.; Wang, L. SuperPCA: A Superpixelwise PCA Approach for Unsupervised Feature Extraction of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4581–4593.
23. Sun, G.; Fu, H.; Ren, J.; Zhang, A.; Zabalza, J.; Jia, X.; Zhao, H. SpaSSA: Superpixelwise Adaptive SSA for Unsupervised Spatial–Spectral Feature Extraction in Hyperspectral Image. IEEE Trans. Cybern. 2022, 52, 6158–6169.
24. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317.
25. Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67.
26. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5518615.
27. Zhong, Z.; Li, Y.; Ma, L.; Li, J.; Zheng, W.S. Spectral–Spatial Transformer Network for Hyperspectral Image Classification: A Factorized Architecture Search Framework. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5514715.
28. Li, Q.; Schonfeld, D. Multilinear Discriminant Analysis for Higher-Order Tensor Data Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2524–2537.
29. Peng, J.; Sun, W.; Li, H.C.; Li, W.; Meng, X.; Ge, C.; Du, Q. Low-Rank and Sparse Representation for Hyperspectral Image Processing: A review. IEEE Geosci. Remote Sens. Mag. 2022, 10, 10–43.
30. De Lathauwer, L.; De Moor, B.; Vandewalle, J. A Multilinear Singular Value Decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278.
31. Bernardi, A.; Iannacito, M.; Rocchini, D. High order singular value decomposition for plant diversity estimation. Boll. Dell'Unione Mat. Ital. 2021, 14, 557–591.
32. An, J.; Zhang, X.; Zhou, H.; Jiao, L. Tensor-Based Low-Rank Graph with Multimanifold Regularization for Dimensionality Reduction of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4731–4746.
33. Zhang, Z.; Aeron, S. Exact Tensor Completion Using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526.
34. Quijorna, E.P. Nuclear instruments and methods in physics research section B: Beam interactions with materials and atoms. Nucl. Instrum. Methods Phys. Res. B 2024, 549, 165287.
35. Wang, Q.; Kang, W. Adaptive tensor compressive sensing based on noise estimation: Application in three-dimensional images. Multimed. Tools Appl. 2017, 76, 6175–6188.
36. Bergqvist, G.; Larsson, E.G. The Higher-Order Singular Value Decomposition: Theory and an Application [Lecture Notes]. IEEE Signal Process. Mag. 2010, 27, 151–154.
37. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500.
38. De Lathauwer, L.; Vandewalle, J. Dimensionality reduction in higher-order signal processing and rank-(R1,R2,…,RN) reduction in multilinear algebra. Linear Algebra Its Appl. 2004, 391, 31–55.
39. Vasilescu, M.A.O.; Terzopoulos, D. Multilinear Analysis of Image Ensembles: TensorFaces. In Computer Vision—ECCV 2002, Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 28–31 May 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 447–460.
40. Ahmadi-Asl, S.; Abukhovich, S.; Asante-Mensah, M.G.; Cichocki, A.; Phan, A.H.; Tanaka, T.; Oseledets, I. Randomized Algorithms for Computation of Tucker Decomposition and Higher Order SVD (HOSVD). IEEE Access 2021, 9, 28684–28706.
41. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
42. Wang, M.; Liu, X.; Gao, Y.; Ma, X.; Soomro, N.Q. Superpixel segmentation: A benchmark. Signal Process. Image Commun. 2017, 56, 28–39.
43. Liu, M.Y.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy rate superpixel segmentation. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2097–2104.
44. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral–Spatial Hyperspectral Image Classification via Multiscale Adaptive Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7738–7749.
45. An, J.; Lei, J.; Song, Y.; Zhang, X.; Guo, J. Tensor Based Multiscale Low Rank Decomposition for Hyperspectral Images Dimensionality Reduction. Remote Sens. 2019, 11, 1485.
46. Cherkassky, V.; Ma, Y. Practical selection of SVM parameters and noise estimation for SVM regression. Neural Netw. 2004, 17, 113–126.
47. Pal, M.; Foody, G.M. Feature Selection for Classification of Hyperspectral Data by SVM. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2297–2307.
48. Tarabalka, Y.; Fauvel, M.; Chanussot, J.; Benediktsson, J.A. SVM- and MRF-Based Method for Accurate Classification of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 736–740.
49. Guo, X.; Huang, X.; Zhang, L.; Zhang, L. Hyperspectral image noise reduction based on rank-1 tensor decomposition. ISPRS J. Photogramm. Remote Sens. 2013, 83, 50–63.
50. Li, J.; Zhang, H.; Zhang, L. Efficient Superpixel-Level Multitask Joint Sparse Representation for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5338–5351.
51. Fan, F.; Ma, Y.; Li, C.; Mei, X.; Huang, J.; Ma, J. Hyperspectral image denoising with superpixel segmentation and low-rank representation. Inf. Sci. 2017, 397–398, 48–68.
52. Yang, S.; Hou, J.; Jia, Y.; Mei, S.; Du, Q. Superpixel-Guided Discriminative Low-Rank Representation of Hyperspectral Images for Classification. IEEE Trans. Image Process. 2021, 30, 8823–8835.
53. Huang, H.; Shi, G.; He, H.; Duan, Y.; Luo, F. Dimensionality Reduction of Hyperspectral Imagery Based on Spatial–Spectral Manifold Learning. IEEE Trans. Cybern. 2020, 50, 2604–2616.
54. Luo, F.; Zou, Z.; Liu, J.; Lin, Z. Dimensionality Reduction and Classification of Hyperspectral Image via Multistructure Unified Discriminative Embedding. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5517916.
55. Ye, P.; Han, C.; Zhang, Q.; Gao, F.; Yang, Z.; Wu, G. An Application of Hyperspectral Image Clustering Based on Texture-Aware Superpixel Technique in Deep Sea. Remote Sens. 2022, 14, 5047.
Table 1. Per-class producer accuracy (%), overall metrics, and running time on the Indian Pines dataset.

| Class | Samples | SVM | T-HOSVD | SpectralFormer | SSTN | SuperPCA | SpaSSA | TensorSSA | Ours |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 46 | 5.33 | 8.88 | 0.0 | 82.61 | 100.0 | 38.67 | 89.11 | 80.00 |
| 2 | 1428 | 57.13 | 68.40 | 51.75 | 93.98 | 90.26 | 75.16 | 81.49 | 87.59 |
| 3 | 830 | 45.87 | 38.62 | 38.50 | 91.09 | 85.00 | 74.61 | 83.92 | 84.76 |
| 4 | 237 | 18.32 | 21.55 | 33.62 | 90.29 | 61.25 | 81.16 | 67.98 | 80.52 |
| 5 | 483 | 69.28 | 66.38 | 20.08 | 82.19 | 91.86 | 76.53 | 85.12 | 92.01 |
| 6 | 730 | 84.77 | 81.53 | 89.37 | 93.01 | 96.24 | 90.04 | 94.80 | 99.89 |
| 7 | 28 | 27.04 | 59.25 | 0.0 | 100.0 | 96.3 | 74.44 | 95.19 | 95.93 |
| 8 | 478 | 94.92 | 87.39 | 99.78 | 100.0 | 96.94 | 90.56 | 93.27 | 100.0 |
| 9 | 20 | 24.74 | 31.57 | 0.0 | 100.0 | 95.26 | 28.95 | 99.47 | 84.74 |
| 10 | 972 | 56.17 | 44.43 | 38.44 | 89.81 | 88.05 | 74.14 | 81.82 | 90.50 |
| 11 | 2455 | 74.82 | 77.96 | 74.51 | 80.93 | 95.67 | 79.78 | 87.55 | 95.40 |
| 12 | 593 | 32.75 | 27.53 | 16.70 | 87.52 | 81.51 | 56.99 | 69.50 | 90.65 |
| 13 | 205 | 91.45 | 90.50 | 66.5 | 100.0 | 99.5 | 79.65 | 95.5 | 99.55 |
| 14 | 1265 | 89.99 | 93.94 | 97.34 | 98.18 | 97.27 | 92.71 | 97.37 | 99.98 |
| 15 | 386 | 25.90 | 93.94 | 15.61 | 79.02 | 94.37 | 69.79 | 74.89 | 95.27 |
| 16 | 93 | 49.78 | 39.56 | 83.52 | 49.46 | 54.29 | 86.81 | 85.38 | 96.70 |
| OA | | 65.48 | 65.53 | 60.23 | 89.07 | 91.50 | 78.96 | 85.96 | 93.31 |
| AA | | 53.02 | 53.76 | 45.36 | 88.63 | 88.99 | 73.12 | 86.40 | 92.09 |
| Kappa | | 60.18 | 60.07 | 53.77 | 87.59 | 90.30 | 76.01 | 84.01 | 92.37 |
| Time (s) | | 3.017 | 1.645 | 224.73 | 96 | 1.11 | 109.43 | 32.55 | 2.667 |
Table 2. Per-class producer accuracy (%), overall metrics, and running time on the Pavia University dataset.

| Class | Samples | SVM | T-HOSVD | SpectralFormer | SSTN | SuperPCA | SpaSSA | TensorSSA | Ours |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 6631 | 85.6 | 88.84 | 88.01 | 89.47 | 80.72 | 91.28 | 94.26 | 96.47 |
| 2 | 18,649 | 96.0 | 96.41 | 98.12 | 99.89 | 96.47 | 99.58 | 98.82 | 99.32 |
| 3 | 2099 | 67.8 | 78.72 | 3.90 | 84.75 | 86.57 | 77.82 | 84.22 | 95.2 |
| 4 | 3064 | 84.8 | 87.10 | 74.81 | 89.55 | 55.94 | 85.00 | 89.19 | 87.28 |
| 5 | 1345 | 98.9 | 98.87 | 98.87 | 99.70 | 98.14 | 99.17 | 99.19 | 98.43 |
| 6 | 5029 | 75.5 | 78.88 | 21.72 | 95.64 | 93.61 | 97.26 | 91.15 | 98.91 |
| 7 | 1330 | 78.7 | 75.30 | 6.16 | 99.84 | 79.29 | 75.59 | 92.03 | 96.43 |
| 8 | 3682 | 80.1 | 75.47 | 91.85 | 86.97 | 73.42 | 85.85 | 88.52 | 96.83 |
| 9 | 947 | 99.8 | 99.67 | 100.0 | 97.52 | 44.21 | 72.54 | 97.01 | 89.47 |
| OA | | 88.1 | 89.33 | 77.94 | 95.12 | 86.68 | 93.39 | 94.27 | 97.21 |
| AA | | 85.3 | 86.59 | 64.83 | 93.70 | 78.71 | 87.14 | 92.94 | 95.37 |
| Kappa | | 84.1 | 85.76 | 69.23 | 93.51 | 82.15 | 91.21 | 92.97 | 96.30 |
| Time (s) | | 12.2 | 4.731 | 513.2 | 118 | 3.27 | 859.0 | 34.33 | 11.298 |
Table 3. Per-class producer accuracy (%), overall metrics, and running time on the Salinas dataset.

| Class | Samples | SVM | T-HOSVD | SpectralFormer | SSTN | SuperPCA | SpaSSA | TensorSSA | Ours |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2009 | 97.41 | 98.34 | 98.44 | 99.69 | 100.0 | 98.40 | 98.27 | 100.0 |
| 2 | 3726 | 99.12 | 97.93 | 97.23 | 96.10 | 99.80 | 98.73 | 99.69 | 99.82 |
| 3 | 1976 | 89.79 | 92.99 | 78.43 | 99.54 | 97.88 | 99.19 | 97.75 | 100.0 |
| 4 | 1394 | 98.49 | 95.28 | 99.20 | 99.06 | 93.11 | 92.86 | 98.39 | 98.99 |
| 5 | 2678 | 96.67 | 96.15 | 97.10 | 69.34 | 96.51 | 96.04 | 97.24 | 98.69 |
| 6 | 3959 | 99.11 | 99.43 | 99.62 | 97.41 | 99.94 | 99.87 | 99.29 | 99.95 |
| 7 | 3579 | 99.38 | 99.37 | 98.08 | 75.28 | 99.27 | 97.23 | 99.33 | 99.29 |
| 8 | 11,271 | 73.52 | 79.47 | 78.64 | 99.18 | 99.54 | 94.48 | 83.04 | 99.59 |
| 9 | 6203 | 98.81 | 98.97 | 98.37 | 100.0 | 99.67 | 99.57 | 99.13 | 99.92 |
| 10 | 3278 | 84.99 | 79.35 | 88.41 | 100.0 | 96.90 | 94.44 | 90.55 | 97.28 |
| 11 | 1068 | 87.08 | 83.34 | 71.14 | 97.46 | 90.65 | 98.15 | 90.25 | 93.29 |
| 12 | 1927 | 95.58 | 95.43 | 93.18 | 100.0 | 97.83 | 99.40 | 99.14 | 99.88 |
| 13 | 916 | 97.05 | 98.34 | 97.02 | 100.0 | 98.38 | 97.73 | 97.74 | 98.54 |
| 14 | 1070 | 89.49 | 87.15 | 91.50 | 97.46 | 97.07 | 96.02 | 91.47 | 95.94 |
| 15 | 7268 | 58.69 | 49.15 | 60.67 | 97.88 | 99.43 | 93.64 | 74.39 | 99.49 |
| 16 | 1807 | 89.16 | 89.14 | 86.69 | 100.0 | 99.24 | 95.93 | 94.00 | 99.15 |
| OA | | 86.02 | 85.54 | 86.55 | 97.66 | 98.74 | 96.55 | 91.28 | 99.25 |
| AA | | 90.90 | 89.99 | 89.61 | 94.69 | 97.83 | 96.98 | 94.35 | 98.74 |
| Kappa | | 84.43 | 83.84 | 85.01 | 97.40 | 98.59 | 96.16 | 90.30 | 99.17 |
| Time (s) | | 19.834 | 9.214 | 554.5 | 123 | 3.602 | 821.94 | 37.188 | 14.681 |
Ablation study: overall accuracy (%) for different combinations of pixel processing and low-rank representation stage.

| Raw HSI | Superpixel Blocks (ERS) | T-HOSVD | Multiscale Adaptive T-HOSVD | IP (2%) | PU (1%) | SD (1%) |
|---|---|---|---|---|---|---|
| ✓ | | ✓ | | 62.9 | 87.58 | 84.76 |
| ✓ | | | ✓ | 66.39 | 89.44 | 86.64 |
| | ✓ | ✓ | | 90.78 | 91.61 | 98.90 |
| | ✓ | | ✓ | 93.31 | 97.21 | 99.25 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Dai, Q.; Ma, C.; Zhang, Q. Advanced Hyperspectral Image Analysis: Superpixelwise Multiscale Adaptive T-HOSVD for 3D Feature Extraction. Sensors 2024, 24, 4072. https://doi.org/10.3390/s24134072