A Low-Measurement-Cost-Based Multi-Strategy Hyperspectral Image Classification Scheme
Abstract
1. Introduction
- (1) To address HSI classification with small samples and many classes, we employ a composite three-input neural network that strengthens the capture of intra-class and inter-class features. Triplet sample pairs are expanded to improve the model's classification performance, and the triplet network composed of the three-input network, a projection head, and a classifier serves as the backbone for feature extraction and classification.
- (2) We propose a novel feature-mixture-based active learning (FMAL) method. By blending features of labeled samples into unlabeled samples, changes in the predictions for the mixed samples reveal features that are new to the model. FMAL reduces the dependence of sample selection on the quality of the initial model, so representative samples with high information content are selected. Comparative experiments verify that our method is more effective than classical approaches.
- (3) We propose a dual-strategy pseudo-active learning (DSPAL) method. Two filters with different filtering strategies select valuable unannotated samples as pseudo-samples, aiming to identify samples that are both sufficiently confident and carry new features. Adding these pseudo-samples enriches the training set and improves model accuracy without increasing labeling cost. The results show that, compared with several existing state-of-the-art methods under limited training samples (five per class), the MSTNC strategy improves overall accuracy on the three generic datasets by 8.34–28.22%, 2.34–12.19%, and 7.07–23.66%, respectively.
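The triplet backbone in contribution (1) is trained with the standard triplet margin loss: same-class pairs are pulled together and different-class pairs are pushed apart. A minimal, hypothetical NumPy sketch of that loss (not the paper's TNC implementation, whose embeddings and margin are unspecified here):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: the positive must be closer to the anchor
    than the negative is, by at least `margin` (illustrative value)."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to same-class sample
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to different-class sample
    return max(d_pos - d_neg + margin, 0.0)

# A well-separated triplet incurs zero loss.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # near the anchor (same class)
n = np.array([3.0, 0.0])   # far from the anchor (different class)
loss = triplet_loss(a, p, n)   # 0.0: d_neg exceeds d_pos by more than the margin
```

Swapping the roles of `p` and `n` would yield a positive loss, which is the gradient signal that reshapes the embedding space.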
2. Related Work
2.1. Triplet Network
2.2. Active Learning
- The characteristics of the latent space play a crucial role in identifying the most valuable samples to be labeled.
- The model’s incorrect predictions mainly stem from novel “features” in the input that it cannot recognize.
- The model predicts the loss of the pseudo-label of an unlabeled instance at its interpolation with a labeled one; these losses indicate which features are novel to the model.
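The feature-mixing idea above can be sketched with a toy linear head standing in for the trained model. The `predict` head, the mixing ratios, and the feature dimensions are illustrative assumptions, not the paper's FMAL implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(features, W):
    """Toy linear classifier head standing in for the trained model."""
    return np.argmax(features @ W, axis=-1)

def novelty_by_mixing(z_unlabeled, z_anchor, W, alphas=(0.25, 0.5, 0.75)):
    """Mix an unlabeled feature with a labeled anchor at several ratios and
    count how often the prediction flips away from the anchor's class.
    More flips suggest the unlabeled sample carries features new to the model."""
    base = predict(z_anchor, W)
    flips = 0
    for a in alphas:
        mixed = a * z_unlabeled + (1 - a) * z_anchor
        flips += int(predict(mixed, W) != base)
    return flips

W = rng.normal(size=(8, 4))            # hypothetical: 8-dim features, 4 classes
anchor = rng.normal(size=8)            # a labeled sample's encoding-layer feature
candidates = rng.normal(size=(20, 8))  # unlabeled pool
scores = [novelty_by_mixing(z, anchor, W) for z in candidates]
query = int(np.argmax(scores))         # most "novel" sample is sent for labeling
```

Because the score depends on prediction flips rather than raw confidence, selection is less sensitive to the quality of the initial model.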
2.3. Pseudo-Active Learning
- Using the trained model to assign pseudo labels to unlabeled data. The method is straightforward: predict the unlabeled data with the trained model and take the category with the highest probability as the pseudo label.
- Applying entropy regularization to turn the unsupervised data into a regularization term of the objective (loss) function.
- Noise caused by incorrect labeling of pseudo-label samples;
- The lack of new features between the pseudo-label samples and the training set samples, leading to model overfitting.
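A minimal sketch of confidence-thresholded pseudo-labeling, which mitigates the first failure mode above by keeping only high-confidence predictions; the 0.9 threshold is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label(logits, threshold=0.9):
    """Take the arg-max class as a pseudo label, but keep only samples whose
    top probability clears `threshold`, discarding likely-noisy labels."""
    probs = softmax(logits)
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return labels[keep], keep

logits = np.array([[8.0, 0.0, 0.0],   # confident prediction: kept
                   [0.6, 0.5, 0.4]])  # ambiguous prediction: discarded
labels, keep = pseudo_label(logits)   # labels == [0], keep == [True, False]
```

Filtering by confidence addresses label noise but not the second failure mode (lack of new features), which is why a second, novelty-oriented filter is needed.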
3. Materials and Methods
3.1. The MSTNC Framework
3.2. Triplet Network Classifier
3.3. Feature-Mixture-Based Active Learning
3.4. Dual-Strategy Pseudo-Active Learning
Algorithm 1. MSTNC model.
Input: HSI patch x ∈ ; the total number of selectable samples M. Initialization: train set , test set .
1. Obtain triplet samples (, ).
2. Train the TNC with to obtain model TNC1.
3. while number of samples < M:
4.   Compute the anchor values for .
5.   Encoding-layer feature mixing.
6. Put all corresponding to that meet the condition into ; update , .
7. Train the TNC with the updated to obtain model TNC2.
8. while iterations < 4:
9.   Calculate the BvSB values.
10.   Sort the BvSB values for each class of samples.
11.   Put the samples in Q between 0 and into the candidate set.
12.   while number of samples < :
13.     Compute the anchor values for .
14.     Encoding-layer feature mixing.
15. Put all corresponding to that meet the condition into ; update .
16. Train the TNC with the updated to obtain model TNC3.
17. Output: predict the test samples with model TNC3.
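Steps 9–11 of Algorithm 1 rank unlabeled samples by their BvSB (Best vs. Second Best) margin. A small illustrative sketch of that ranking only (not the paper's full DSPAL filter; the probabilities are made up):

```python
import numpy as np

def bvsb(probs):
    """Best-versus-Second-Best margin: the gap between the two largest
    class probabilities. A small gap marks an uncertain, informative sample."""
    top2 = np.sort(probs, axis=1)[:, -2:]   # two largest probabilities per row
    return top2[:, 1] - top2[:, 0]

probs = np.array([[0.70, 0.20, 0.10],      # confident: large margin
                  [0.45, 0.40, 0.15]])     # ambiguous: small margin
margins = bvsb(probs)
order = np.argsort(margins)                # most uncertain first: query candidates
```

Sorting per class, as step 10 does, keeps the candidate set balanced across land-cover classes instead of concentrating queries on one confusable class.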
4. Results
4.1. Hyperspectral Datasets
4.2. Experimental Settings
4.3. Comparison of Classification Results
- A. Performance with very few training samples
- B. Performance with limited training samples
- C. Performance with different training percentages
- D. Complexity analysis
4.4. Ablation Analysis
4.4.1. Effectiveness of Triplet Contrastive Learning
4.4.2. Effectiveness of AL and PAL Strategies
5. Discussion
5.1. Parameter Analysis
5.2. Analysis of Various MSTNC Strategies
5.2.1. Comparison of Three Different Sample Selection Methods
5.2.2. Analysis of AL Iterative Strategy
5.2.3. Analysis of the Number of DSPAL Iterations
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Nomenclatures and Notations
MSTNC | Multi-strategy triplet network classifier |
HSI | Hyperspectral image |
CNN | Convolutional neural networks |
conv3D | 3-dimensional convolution layer |
conv2D | 2-dimensional convolution layer |
PCA | Principal component analysis |
TN | Triplet network |
TNC | Triplet network classifier |
AL | Active learning |
M | The total number of samples that can be chosen |
FMAL | Feature mixture-based active learning |
DSPAL | Dual-strategy pseudo-active learning |
FC | Fully connected |
GAP | Global average pooling layer |
BvSB | Best vs. Second Best |
The classes of land cover | |
Encoder | |
Projection head | |
Classifier | |
Labeled sample | |
Label of the i-th sample | |
Unlabeled sample | |
Test set | |
Train set | |
The same class as (positive pair) | |
The different class as (negative pair) | |
The output vector corresponding to | |
The output vector corresponding to | |
The output vector corresponding to | |
The value of the i-th class | |
Projected value | |
Projection head parameters | |
Classifier parameters | |
Anchor value | |
Encoding-layer feature | |
The anchor value of the i-th class of samples | |
Mixed feature value |
References
- Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. CoSpace: Common subspace learning from hyperspectral-multispectral correspondences. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4349–4359. [Google Scholar] [CrossRef]
- Hong, D.; Yokoya, N.; Ge, N.; Chanussot, J.; Zhu, X.X. Learnable manifold alignment (LeMA): A semi-supervised cross-modality learning framework for land cover and land use classification. ISPRS J. Photogramm. Remote Sens. 2019, 147, 193–205. [Google Scholar] [CrossRef]
- Han, X.; Zhang, H.; Sun, W. Spectral anomaly detection based on dictionary learning for sea surfaces. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1502505. [Google Scholar] [CrossRef]
- Kumar, V.; Ghosh, J.K. Camouflage detection using MWIR hyperspectral images. J. Indian Soc. Remote Sens. 2017, 45, 139–145. [Google Scholar] [CrossRef]
- Shimoni, M.; Haelterman, R.; Perneel, C. Hyperspectral imaging for military and security applications: Combining myriad processing and sensing techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117. [Google Scholar] [CrossRef]
- Briottet, X.; Boucher, Y.; Dimmeler, A. Military applications of hyperspectral imagery. In Targets and Backgrounds XII: Characterization and Representation; SPIE: Bellingham, WA, USA, 2006; Volume 6239, pp. 82–89. [Google Scholar]
- Ke, C. Military object detection using multiple information extracted from hyperspectral imagery. In Proceedings of the 2017 International Conference on Progress in Informatics and Computing (PIC), Nanjing, China, 15–17 December 2017; IEEE: New York, NY, USA, 2017; pp. 124–128. [Google Scholar]
- Jasani, B.; Stein, G. Commercial Satellite Imagery: A Tactic in Nuclear Weapon Deterrence; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
- Carpenter, M.H.; Croce, M.P.; Baker, Z.K.; Batista, E.R.; Caffrey, M.P.; Fontes, C.J.; Koehler, K.E.; Kossmann, S.E.; McIntosh, K.G.; Rabin, M.W.; et al. Hyperspectral X-ray Imaging with TES Detectors for Nanoscale Chemical Speciation Mapping. J. Low Temp. Phys. 2020, 200, 437–444. [Google Scholar] [CrossRef]
- Al Ktash, M.; Stefanakis, M.; Englert, T.; Drechsel, M.S.L.; Stiedl, J.; Green, S.; Jacob, T.; Boldrini, B.; Ostertag, E.; Rebner, K.; et al. UV Hyperspectral Imaging as Process Analytical Tool for the Characterization of Oxide Layers and Copper States on Direct Bonded Copper. Sensors 2021, 21, 7332. [Google Scholar] [CrossRef] [PubMed]
- Batshev, V.I.; Krioukov, A.V.; Machikhin, A.S.; Zolotukhina, A.A. Multispectral video camera optical system. J. Opt. Technol. 2023, 90, 706–712. [Google Scholar] [CrossRef]
- Adesokan, M.; Alamu, E.O.; Otegbayo, B.; Maziya-Dixon, B. A Review of the Use of Near-Infrared Hyperspectral Imaging (NIR-HSI) Techniques for the Non-Destructive Quality Assessment of Root and Tuber Crops. Appl. Sci. 2023, 13, 5226. [Google Scholar] [CrossRef]
- Kulya, M.; Petrov, N.V.; Tsypkin, A.; Egiazarian, K.; Katkovnik, V. Hyperspectral data denoising for terahertz pulse time-domain holography. Opt. Express 2019, 27, 18456–18476. [Google Scholar] [CrossRef]
- Zare, A.; Ho, K.C. Endmember variability in hyperspectral analysis: Addressing spectral variability during spectral unmixing. IEEE Signal Process. Mag. 2014, 31, 95–104. [Google Scholar] [CrossRef]
- Mei, S.; Bi, Q.; Ji, J.; Hou, J.; Du, Q. Spectral variation alleviation by low-rank matrix approximation for hyperspectral image analysis. IEEE Geosci. Remote Sens. Lett. 2016, 13, 796–800. [Google Scholar] [CrossRef]
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Spectral-spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
- Liu, W.; Tao, D. Multiview Hessian regularization for polynomial logistic regression in hyperspectral image classification. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1897–1909. [Google Scholar]
- Shevkunov, I.; Katkovnik, V.; Claus, D.; Pedrini, G.; Petrov, N.V.; Egiazarian, K. Spectral object recognition in hyperspectral holography with complex-domain denoising. Sensors 2019, 19, 5188. [Google Scholar] [CrossRef]
- Fu, H.; Sun, G.; Ren, J.; Zhang, A.; Jia, X. Fusion of PCA and segmented-PCA domain multiscale 2-D-SSA for effective spectral–spatial feature extraction and data classification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5500214. [Google Scholar] [CrossRef]
- Mu, C.; Zeng, Q.; Liu, Y.; Qu, Y. A two-branch network combined with robust principal component analysis for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2021, 18, 2147–2151. [Google Scholar] [CrossRef]
- Fu, H.; Sun, G.; Zabalza, J.; Zhang, A.; Ren, J.; Jia, X. Tensor singular spectrum analysis for 3D feature extraction in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5403914. [Google Scholar] [CrossRef]
- Dai, Q.; Ma, C.; Zhang, Q. Advanced Hyperspectral Image Analysis: Superpixelwise Multiscale Adaptive T-HOSVD for 3D Feature Extraction. Sensors 2024, 24, 4072. [Google Scholar] [CrossRef]
- Zheng, J.; Feng, Y.; Bai, C.; Zhang, J. Hyperspectral image classification using mixed convolutions and covariance pooling. IEEE Trans. Geosci. Remote Sens. 2020, 59, 522–534. [Google Scholar] [CrossRef]
- Wang, W.; Chen, Y.; He, X.; Li, Z. Soft augmentation-based Siamese CNN for hyperspectral image classification with limited training samples. IEEE Geosci. Remote Sens. Lett. 2021, 19, 5508505. [Google Scholar] [CrossRef]
- Wu, X.; Hong, D.; Chanussot, J. Convolutional neural networks for multimodal remote sensing data classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5517010. [Google Scholar] [CrossRef]
- Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
- Makantasis, K.; Karantzalos, K.; Doulamis, A. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; IEEE: New York, NY, USA, 2015; pp. 4959–4962. [Google Scholar]
- Cheng, G.; Li, Z.; Han, J.; Yao, X.; Guo, L. Exploring hierarchical convolutional features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6712–6722. [Google Scholar] [CrossRef]
- Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147. [Google Scholar] [CrossRef]
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
- Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281. [Google Scholar] [CrossRef]
- Yu, W.; Wan, S.; Li, G.; Yang, J.; Gong, C. Hyperspectral image classification with contrastive graph convolutional network. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5503015. [Google Scholar] [CrossRef]
- Xue, Z.; Zhou, Y.; Du, P. S3Net: Spectral–spatial Siamese network for few-shot hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531219. [Google Scholar] [CrossRef]
- Xue, Z.; Liu, Z.; Zhang, M. DSR-GCN: Differentiated-scale restricted graph convolutional network for few-shot hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5504918. [Google Scholar] [CrossRef]
- Liu, B.; Yu, X.; Yu, A.; Zhang, P.; Wan, G.; Wang, R. Deep few-shot learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2290–2304. [Google Scholar] [CrossRef]
- Sun, C.; Zhang, X.; Meng, H.; Cao, X.; Zhang, J.; Jiao, L. Dual-branch spectral-spatial adversarial representation learning for hyperspectral image classification with few labeled samples. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1–15. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, W.; Zhang, M.; Wang, S.; Tao, R.; Du, Q. Graph information aggregation cross-domain few-shot learning for hyperspectral image classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 1912–1925. [Google Scholar] [CrossRef]
- Hadsell, R.; Chopra, S.; LeCun, Y. Dimensionality reduction by learning an invariant mapping. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; IEEE: New York, NY, USA, 2006; Volume 2, pp. 1735–1742. [Google Scholar]
- Settles, B. Active Learning Literature Survey; Department of Computer Sciences, University of Wisconsin-Madison: Madison, WI, USA, 2009. [Google Scholar]
- Bromley, J.; Guyon, I.; LeCun, Y.; Säckinger, E.; Shah, R. Signature verification using a “siamese” time delay neural network. In Advances in Neural Information Processing Systems; Morgan Kaufmann Pub: Burlington, MA, USA, 1993; Volume 6. [Google Scholar]
- Hoffer, E.; Ailon, N. Deep metric learning using triplet network. In Similarity-Based Pattern Recognition, Proceedings of the Third International Workshop, SIMBAD 2015, Copenhagen, Denmark, 12–14 October 2015; Proceedings 3; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 84–92. [Google Scholar]
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1597–1607. [Google Scholar]
- He, K.; Fan, H.; Wu, Y.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738. [Google Scholar]
- Zhao, S.; Li, W.; Du, Q.; Ran, Q. Hyperspectral classification based on siamese neural network using spectral-spatial feature. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 2567–2570. [Google Scholar]
- Cao, Z.; Li, X.; Jianfeng, J.; Zhao, L. 3D convolutional siamese network for few-shot hyperspectral classification. J. Appl. Remote Sens. 2020, 14, 048504. [Google Scholar] [CrossRef]
- Jia, S.; Jiang, S.; Lin, Z.; Xu, M.; Sun, W.; Huang, Q.; Zhu, J.; Jia, X. A semisupervised Siamese network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5516417. [Google Scholar] [CrossRef]
- Di, X.; Xue, Z.; Zhang, M. Active learning-driven siamese network for hyperspectral image classification. Remote Sens. 2023, 15, 752. [Google Scholar] [CrossRef]
- Yang, J.; Qin, J.; Qian, J.; Li, A.; Wang, L. AL-MRIS: An active learning-based multipath residual involution siamese network for few-shot hyperspectral image classification. Remote Sens. 2024, 16, 990. [Google Scholar] [CrossRef]
- Patel, U.; Patel, V. Active learning-based hyperspectral image classification: A reinforcement learning approach. J. Supercomput. 2024, 80, 2461–2486. [Google Scholar] [CrossRef]
- Zhuang, H.; Zhang, Y.; Wu, Q. Disconnection-based active learning for hyperspectral image classification. Remote Sens. 2020, 12, 1484. [Google Scholar]
- Ma, W.; Zhou, T.; Qin, J. Adaptive multi-feature fusion via cross-entropy normalization for effective image retrieval. Inf. Process. Manag. 2023, 60, 103119. [Google Scholar] [CrossRef]
- Raj, A.; Bach, F. Convergence of uncertainty sampling for active learning. arXiv 2021, arXiv:2110.15784. [Google Scholar]
- Chen, C.; Xu, J. A marginal sampling approach for active learning in hyperspectral image classification. Remote Sens. Lett. 2023, 14, 152–161. [Google Scholar]
- Li, J. Active learning for hyperspectral image classification with a stacked autoencoders based neural network. In Proceedings of the 2015 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2–5 June 2015; IEEE: New York, NY, USA, 2015; pp. 1–4. [Google Scholar]
- Haut, J.M.; Paoletti, M.E.; Plaza, J.; Li, J.; Plaza, A. Active learning with convolutional neural networks for hyperspectral image classification using a new Bayesian approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461. [Google Scholar] [CrossRef]
- Jia, S.; Jiang, S.; Lin, Z.; Li, N.; Xu, M.; Yu, S. A survey: Deep learning for hyperspectral image classification with few labeled samples. Neurocomputing 2021, 448, 179–204. [Google Scholar] [CrossRef]
- Lei, Z.; Zeng, Y.; Liu, P.; Su, X. Active deep learning for hyperspectral image classification with uncertainty learning. IEEE Geosci. Remote Sens. Lett. 2021, 19, 5502405. [Google Scholar] [CrossRef]
- Wang, Z.; Zhao, S.; Zhao, G.; Song, X. Dual-Branch Domain Adaptation Few-Shot Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote. Sens. 2024, 62, 5506116. [Google Scholar] [CrossRef]
- Wang, H.; Wang, L. Collaborative active learning based on improved capsule networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5522126. [Google Scholar] [CrossRef]
- Parvaneh, A.; Abbasnejad, E.; Teney, D.; Haffari, R.; Van Den Hengel, A.; Shi, J.Q. Active learning by feature mixing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12237–12246. [Google Scholar]
- Schroff, F.; Kalenichenko, D.; Philbin, J. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar]
- Liu, C.; Yi, Z.; Huang, B.; Zhou, Z.; Fang, S.; Li, X.; Zhang, Y.; Wu, X. A deep learning method based on triplet network using self-attention for tactile grasp outcomes prediction. IEEE Trans. Instrum. Meas. 2023, 72, 2518914. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, H.; Liu, T. An active learning framework for hyperspectral image classification. Remote Sens. 2020, 12, 40. [Google Scholar]
- Wang, J.; Li, Y. Improving classification accuracy through active learning: A case study of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4782–4796. [Google Scholar]
- Sener, O.; Savarese, S. Active learning for convolutional neural networks: A core-set approach. arXiv 2017, arXiv:1708.00489. [Google Scholar]
- Joshi, A.J.; Porikli, F.; Papanikolopoulos, N. Multi-class active learning for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2372–2379. [Google Scholar]
- Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616. [Google Scholar] [CrossRef]
- Lesniak, D.; Sieradzki, I.; Podolak, T. Distribution-interpolation trade off in generative models. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Parvaneh, A.; Abbasnejad, E.; Teney, D.; Shi, J.Q.; Van den Hengel, A. Counterfactual vision-and-language navigation: Unravelling the unseen. Adv. Neural Inf. Process. Syst. 2020, 33, 5296–5307. [Google Scholar]
- Zhang, H.; Cisse, M.; Dauphin, Y.; Lopez-Paz, D. Mixup: Beyond empirical risk minimization. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018; pp. 1–13. [Google Scholar]
- Li, Z.; Zhang, L.; Liu, D. A small-sample hyperspectral image classification method based on spectral–spatial features. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5045–5057. [Google Scholar]
- Pei, S.; Song, H. A small-sample hyperspectral image classification method based on dual-channel spectral enhancement network. Electronics 2022, 11, 2540. [Google Scholar] [CrossRef]
- Lee, D.H. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Proceedings of the Workshop on Challenges in Representation Learning, ICML, Atlanta, GA, USA, 16–21 June 2013; Volume 3, pp. 896–901. [Google Scholar]
- Zhang, H.; Liu, L.; Long, Y.; Shao, L. Unsupervised deep hashing with pseudo labels for scalable image retrieval. IEEE Trans. Image Process. 2017, 27, 1626–1638. [Google Scholar] [CrossRef]
- Zhu, W.; Shi, B.; Feng, Z. A transfer learning method using high-quality pseudo labels for bearing fault diagnosis. IEEE Trans. Instrum. Meas. 2022, 72, 3502311. [Google Scholar] [CrossRef]
- Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear versus nonlinear PCA for the classification of hyperspectral data based on the extended morphological profiles. IEEE Geosci. Remote Sens. Lett. 2011, 9, 447–451. [Google Scholar] [CrossRef]
- Bai, Y.; Xu, M.; Zhang, L.; Liu, Y. Pruning multi-scale multi-branch network for small-sample hyperspectral image classification. Electronics 2023, 12, 674. [Google Scholar] [CrossRef]
- Zhang, K.; Yan, J.; Zhang, F.; Ge, C.; Wan, W.; Sun, J.; Zhang, H. Spectral-spatial dual graph unfolding network for multispectral and hyperspectral image fusion. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5508718. [Google Scholar] [CrossRef]
Number/Class | 3DCNN [20] | DFSL-NN [36] | DFSL-SVM [36] | Gia-CFSL [38] | DBDAFSL [59] | CapsGLOM [60] | MSTNC (Ours) |
---|---|---|---|---|---|---|---|
1. Alfalfa | 90.11 | 98.23 | 97.56 | 100 | 100 | 100 | 100 |
2. Corn-notill | 58.56 | 49.24 | 52.93 | 68.37 | 45.2 | 41.2 | 96.76 |
3. Corn-mintill | 43.42 | 62.96 | 69.21 | 66.33 | 61.62 | 65.62 | 77.91 |
4. Corn | 44.98 | 53.77 | 57.31 | 87.01 | 83.23 | 79.23 | 88.05 |
5. Grass-pasture | 67.88 | 84.89 | 84.71 | 71.75 | 76.72 | 86.72 | 69.21 |
6. Grass-trees | 89.42 | 92.38 | 92.56 | 84.72 | 92.55 | 93.55 | 86.63 |
7. Grass-pasture-mowed | 91.92 | 98.01 | 98.65 | 99.45 | 100 | 100 | 100 |
8. Hay-windrowed | 81.22 | 85.59 | 85.59 | 98.73 | 99.22 | 100 | 99.78 |
9. Oats | 98.11 | 90.9 | 90.9 | 99.71 | 98.66 | 84.21 | 100 |
10. Soybean-notill | 66.93 | 65.66 | 30.21 | 71.21 | 62.57 | 26.57 | 73.34 |
11. Soybean-mintill | 19.67 | 73.64 | 95.78 | 44.78 | 70.77 | 85.77 | 73.71 |
12. Soybean-clean | 30.73 | 21.08 | 39.96 | 54.96 | 38.02 | 89.02 | 79.06 |
13. Wheat | 92.67 | 75.26 | 81.63 | 89.63 | 97.44 | 100 | 92.85 |
14. Woods | 89.37 | 90.81 | 89.87 | 76.87 | 84.59 | 93.59 | 84.92 |
15. Buildings-Grass-Trees-Drives | 55.1 | 58.43 | 40.42 | 80.42 | 88.18 | 78.18 | 100 |
16. Stone-Steel-Towers | 95.06 | 96.53 | 97.8 | 99.02 | 93.52 | 81.52 | 100 |
OA | 54.75 ± 3.53 | 65.90 ± 4.64 | 66.49 ± 3.37 | 67.52 ± 3.79 | 70.52 ± 3.62 | 74.63 ± 3.62 | 82.97 ± 2.44 |
AA | 63.91 ± 1.45 | 71.25 ± 3.34 | 70.34 ± 2.51 | 80.94 ± 1.78 | 80.97 ± 1.77 | 81.58 ± 1.77 | 88.89 ± 1.51 |
Kappa | 52.21 ± 3.71 | 60.63 ± 4.17 | 60.09 ± 3.28 | 63.67 ± 3.92 | 66.67 ± 2.89 | 70.60 ± 2.89 | 80.70 ± 2.48 |
Number/Class | 3DCNN [20] | DFSL-NN [36] | DFSL-SVM [36] | Gia-CFSL [38] | DBDAFSL [59] | CapsGLOM [60] | MSTNC (Ours) |
---|---|---|---|---|---|---|---|
1. Asphalt | 80.94 | 75.63 | 47.22 | 84.01 | 80.01 | 79.51 | 85.06 |
2. Meadows | 90.56 | 83.78 | 85.31 | 84.83 | 81.83 | 95.22 | 92.74 |
3. Gravel | 36.61 | 51.41 | 83.25 | 54.98 | 92.98 | 35.31 | 79.44 |
4. Trees | 61.20 | 74.22 | 66.96 | 86.99 | 65.99 | 94.87 | 71.06 |
5. Painted metal sheets | 99.69 | 98.15 | 99.83 | 100.00 | 99.77 | 100.00 | 100.00 |
6. Bitumen | 55.92 | 85.46 | 85.2 | 73.22 | 99.22 | 60.79 | 83.85 |
7. Bare Soil | 90.76 | 96.28 | 95.3 | 98.14 | 99.14 | 89.24 | 94.23 |
8. Self-Blocking Bricks | 37.22 | 71.12 | 70.41 | 19.41 | 54.41 | 92.47 | 88.69 |
9. Shadows | 81.49 | 94.98 | 34.12 | 62.02 | 90.02 | 99.36 | 86.62 |
OA | 75.75 ± 5.56 | 80.47 ± 5.68 | 76.33 ± 3.86 | 76.79 ± 6.74 | 82.07 ± 6.74 | 85.60 ± 4.85 | 87.94 ± 3.45 |
AA | 70.49 ± 3.39 | 81.34 ± 3.85 | 75.20 ± 2.59 | 73.73 ± 5.63 | 84.90 ± 5.63 | 82.98 ± 2.64 | 86.85 ± 2.37 |
Kappa | 67.47 ± 6.45 | 74.64 ± 6.71 | 69.55 ± 4.77 | 69.73 ± 4.59 | 77.02 ± 4.59 | 80.76 ± 5.69 | 84.11 ± 4.22 |
Number/Class | 3DCNN [20] | DFSL-NN [36] | DFSL-SVM [36] | Gia-CFSL [38] | DBDAFSL [59] | CapsGLOM [60] | MSTNC (Ours) |
---|---|---|---|---|---|---|---|
1. Strawberry | 85.97 | 68.26 | 55.16 | 51.52 | 57.36 | 71.22 | 91.25 |
2. Cowpea | 80.89 | 45.93 | 46.96 | 67.86 | 58.85 | 76.45 | 83.07 |
3. Soybean | 42.70 | 86.21 | 85.5 | 79.62 | 78.00 | 77.45 | 86.90 |
4. Sorghum | 88.92 | 96.31 | 96.85 | 95.74 | 97.35 | 97.33 | 97.36 |
5. Water spinach | 30.80 | 86.71 | 99.84 | 89.81 | 96.99 | 100.00 | 98.58 |
6. Watermelon | 0.11 | 59.93 | 66.97 | 84.73 | 48.50 | 74.55 | 81.76 |
7. Greens | 2.55 | 81.02 | 98.71 | 95.38 | 82.49 | 79.94 | 83.94 |
8. Trees | 44.75 | 61.29 | 35.29 | 59.14 | 31.21 | 65.98 | 76.21 |
9. Grass | 11.35 | 56.32 | 26.96 | 53.61 | 62.21 | 63.20 | 83.70 |
10. Red roof | 39.77 | 66.53 | 73.74 | 61.03 | 91.39 | 81.54 | 84.47 |
11. Gray roof | 0.04 | 87.74 | 67.8 | 65.52 | 79.59 | 89.06 | 96.75 |
12. Plastic | 34.34 | 43.86 | 59.24 | 70.56 | 74.70 | 73.57 | 51.94 |
13. Bare soil | 13.34 | 39.19 | 48.63 | 36.20 | 36.44 | 41.22 | 55.11 |
14. Road | 24.37 | 65.21 | 34.46 | 30.51 | 52.45 | 57.59 | 80.80 |
15. Bright object | 68.10 | 53.08 | 65.68 | 57.24 | 67.99 | 58.21 | 57.07 |
16. Water | 98.06 | 72.67 | 98.1 | 96.39 | 98.97 | 97.72 | 96.48 |
OA | 62.91 ± 2.21 | 67.85 ± 2.81 | 68.04 ± 1.65 | 70.15 ± 1.85 | 72.22 ± 2.71 | 79.50 ± 2.67 | 86.57 ± 1.17 |
AA | 41.71 ± 1.75 | 67.05 ± 2.65 | 66.08 ± 2.38 | 68.51 ± 1.78 | 69.58 ± 2.14 | 75.60 ± 1.65 | 79.18 ± 1.05 |
Kappa | 55.30 ± 2.71 | 63.53 ± 2.31 | 63.00 ± 2.42 | 65.67 ± 1.56 | 67.97 ± 2.31 | 76.22 ± 2.51 | 84.36 ± 1.13 |
DATASET | Measure | SSRN [31] | 3DAES [47] | ConGcn [33] | TensorSSA [22] | SmaT-HOSVD [23] | MSTNC (Ours) |
---|---|---|---|---|---|---|---|
IP | OA (%) | 76.16 | 78.93 | 79.94 | 87.55 | 92.45 | 96.36 |
 | AA (%) | 82.25 | 78.25 | 78.42 | 88.92 | 91.89 | 96.29 |
 | κ × 100 | 72.49 | 75.26 | 77.85 | 86.68 | 93.01 | 95.85 |
PU | OA (%) | 91.27 | 92.76 | 91.18 | 94.48 | 96.88 | 100 |
 | AA (%) | 88.50 | 88.50 | 86.50 | 93.45 | 94.45 | 100 |
 | κ × 100 | 88.37 | 92.60 | 90.60 | 92.52 | 95.79 | 100 |
WHU | OA (%) | 96.21 | 95.54 | 95.04 | 96.78 | 97.88 | 100 |
 | AA (%) | 97.21 | 97.91 | 96.01 | 98.02 | 99.01 | 100 |
 | κ × 100 | 92.59 | 92.61 | 90.55 | 96.24 | 97.95 | 100 |
Share and Cite
Bai, Y.; Liu, D.; Zhang, L.; Wu, H. A Low-Measurement-Cost-Based Multi-Strategy Hyperspectral Image Classification Scheme. Sensors 2024, 24, 6647. https://doi.org/10.3390/s24206647