From Video to Hyperspectral: Hyperspectral Image-Level Feature Extraction with Transfer Learning
Abstract
1. Introduction
1. We propose an image-level feature extraction method for more refined HSI classification, avoiding the inherent drawbacks of previous patch-level methods.
2. We view the global HSI, with its hundreds of contiguous spectral bands, from a sequential-image perspective and extract the global spectral variation between adjacent bands using optical flow estimation.
3. We transfer the optical flow estimation network PWC-Net, pre-trained on video, to the HSI feature extraction task. To our knowledge, this is the first work to transfer a network pre-trained on video data to HSI classification, and it achieves excellent performance.
4. We design a vote strategy for the classification phase, which uses features at different scales to construct multiple classification tasks and votes among their results to obtain the optimal one, further improving classification accuracy.
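As a concrete illustration of the vote strategy in contribution 4, the sketch below performs a per-pixel majority vote over the predictions of several scale-specific classifiers. The function name and the simple plurality rule are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def vote_classification(predictions):
    """Majority vote across per-scale predictions.

    predictions: array of shape (n_scales, n_pixels) holding the class
    label that each scale-specific classifier assigns to each pixel.
    Returns the per-pixel label that most scales agree on.
    """
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # For every pixel (column), count how many scales voted for each class.
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions
    )
    # Pick the class with the most votes per pixel.
    return counts.argmax(axis=0)

# Three classifiers built on features at different scales disagree on
# some pixels; the vote resolves each pixel to the majority label.
votes = [[0, 2, 1],
         [0, 2, 2],
         [0, 1, 2]]
print(vote_classification(votes))  # -> [0 2 2]
```

In practice each row of `predictions` would come from classifying the optical-flow features extracted at one scale, so the vote fuses complementary multi-scale evidence.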
2. Related Works
2.1. Hyperspectral Image Classification
2.2. Transfer Learning
2.3. Optical Flow Estimation
3. Materials and Methods
3.1. PWC-Net
3.2. Training and Transfer Learning Strategy
3.3. Data Adaptation for Feature Extraction
3.4. Classification Based on Vote Strategy
4. Results and Discussion
4.1. Datasets and Evaluation Criterion
4.2. Implementation Details
4.3. Analysis of Vote Strategy
4.4. Comparative Experiments
4.5. Analysis of Training and Inference Speed
4.6. Analysis of Image-Level Feature
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
- Zhong, Y.; Wang, X.; Xu, Y.; Wang, S.; Jia, T.; Hu, X.; Zhao, J.; Wei, L.; Zhang, L. Mini-UAV-Borne Hyperspectral Remote Sensing: From Observation and Processing to Applications. IEEE Geosci. Remote Sens. Mag. 2018, 6, 46–62.
- Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1717–1724.
- Sun, D.; Yang, X.; Liu, M.Y.; Kautz, J. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018.
- Camps-Valls, G.; Bruzzone, L. Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362.
- Agarwal, A.; El-Ghazawi, T.; El-Askary, H.; Le-Moigne, J. Efficient Hierarchical-PCA Dimension Reduction for Hyperspectral Imagery. In Proceedings of the 2007 IEEE International Symposium on Signal Processing and Information Technology, Cairo, Egypt, 15–18 December 2007; pp. 353–356.
- Jia, S.; Hu, J.; Xie, Y.; Shen, L.; Jia, X.; Li, Q. Gabor Cube Selection Based Multitask Joint Sparse Representation for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3174–3187.
- Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and Spatial Classification of Hyperspectral Data Using SVMs and Morphological Profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814.
- Li, W.; Chen, C.; Su, H.; Du, Q. Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693.
- Xing, C.; Ma, L.; Yang, X. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images. J. Sensors 2016, 2016, 1–10.
- Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sensors 2015, 2015.
- Chen, Y.; Zhu, L.; Ghamisi, P.; Jia, X.; Li, G.; Tang, L. Hyperspectral Images Classification With Gabor Filtering and Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2355–2359.
- Liu, B.; Yu, X.; Zhang, P.; Yu, A.; Fu, Q.; Wei, X. Supervised Deep Feature Extraction for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1909–1921.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Liu, B.; Yu, X.; Zhang, P.; Tan, X. Deep 3D convolutional network combined with spatial-spectral features for hyperspectral image classification. Acta Geod. Cartogr. Sin. 2019, 48, 53.
- Lin, Z.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063.
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
- Tang, X.; Meng, F.; Zhang, X.; Cheung, Y.M.; Ma, J.; Liu, F.; Jiao, L. Hyperspectral Image Classification Based on 3-D Octave Convolution With Spatial–Spectral Attention Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2430–2447.
- Gao, K.; Liu, B.; Yu, X.; Zhang, P.; Sun, Y. Small sample classification of hyperspectral image using model-agnostic meta-learning algorithm and convolutional neural network. Int. J. Remote Sens. 2021, 42, 3090–3122.
- Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.; Li, J.; Pla, F. Capsule Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2145–2160.
- Zhong, Z.; Li, Y.; Ma, L.; Li, J.; Zheng, W.S. Spectral–Spatial Transformer Network for Hyperspectral Image Classification: A Factorized Architecture Search Framework. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
- Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification With Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
- Shen, Y.; Zhu, S.; Chen, C.; Du, Q.; Xiao, L.; Chen, J.; Pan, D. Efficient Deep Learning of Nonlocal Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6029–6043.
- Xu, Y.; Du, B.; Zhang, L. Beyond the Patchwise Classification: Spectral-Spatial Fully Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Big Data 2020, 6, 492–506.
- Zheng, Z.; Zhong, Y.; Ma, A.; Zhang, L. FPGA: Fast Patch-Free Global Learning Framework for Fully End-to-End Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5612–5626.
- Wang, D.; Du, B.; Zhang, L. Fully Contextual Network for Hyperspectral Scene Parsing. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
- Wang, Y.; Li, K.; Xu, L.; Wei, Q.; Wang, F.; Chen, Y. A Depthwise Separable Fully Convolutional ResNet With ConvCRF for Semisupervised Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4621–4632.
- Jiang, G.; Sun, Y.; Liu, B. A fully convolutional network with channel and spatial attention for hyperspectral image classification. Remote Sens. Lett. 2021, 12, 1238–1249.
- Sun, Y.; Liu, B.; Yu, X.; Yu, A.; Xue, Z.; Gao, K. Resolution reconstruction classification: Fully octave convolution network with pyramid attention mechanism for hyperspectral image classification. Int. J. Remote Sens. 2022, 43, 2076–2105.
- Yang, J.; Zhao, Y.Q.; Chan, J.C.W. Learning and Transferring Deep Joint Spectral–Spatial Features for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742.
- Liu, B.; Yu, X.; Yu, A.; Zhang, P.; Wan, G.; Wang, R. Deep Few-Shot Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2290–2304.
- Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised Spatial–Spectral Feature Learning by 3D Convolutional Autoencoder for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820.
- Liu, B.; Yu, A.; Yu, X.; Wang, R.; Gao, K.; Guo, W. Deep Multiview Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7758–7772.
- Zhang, J.; Li, W.; Ogunbona, P.; Xu, D. Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective. ACM Comput. Surv. (CSUR) 2019, 52, 1–38.
- Windrim, L.; Melkumyan, A.; Murphy, R.J.; Chlingaryan, A.; Ramakrishnan, R. Pretraining for Hyperspectral Convolutional Neural Network Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2798–2810.
- Zhong, C.; Zhang, J.; Wu, S.; Zhang, Y. Cross-Scene Deep Transfer Learning With Spectral Feature Adaptation for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2861–2873.
- Zhang, H.; Li, Y.; Jiang, Y.; Wang, P.; Shen, Q.; Shen, C. Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer Learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5813–5828.
- He, X.; Chen, Y.; Ghamisi, P. Heterogeneous Transfer Learning for Hyperspectral Image Classification Based on Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3246–3263.
- Butler, D.J.; Wulff, J.; Stanley, G.B.; Black, M.J. A Naturalistic Open Source Movie for Optical Flow Evaluation. In Computer Vision—ECCV 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 611–625.
- Xiao, X.; Hu, H.; Wang, W. Trajectories-based motion neighborhood feature for human action recognition. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 4147–4151.
- Ochs, P.; Malik, J.; Brox, T. Segmentation of Moving Objects by Long Term Video Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1187–1200.
- Menze, M.; Geiger, A. Object scene flow for autonomous vehicles. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3061–3070.
- Horn, B.; Schunck, B.G. Determining Optical Flow. In Techniques and Applications of Image Understanding; SPIE: Bellingham, WA, USA, 1981.
- Brox, T.; Bruhn, A.; Papenberg, N.; Weickert, J. High Accuracy Optical Flow Estimation Based on a Theory for Warping. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004.
- Papenberg, N.; Bruhn, A.; Brox, T.; Didas, S.; Weickert, J. Highly Accurate Optic Flow Computation with Theoretically Justified Warping. Int. J. Comput. Vis. 2006, 67, 141–158.
- Sun, D.; Roth, S.; Black, M.J. Secrets of Optical Flow Estimation and Their Principles. In Proceedings of the Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010.
- Baker, S.; Scharstein, D.; Lewis, J.P.; Roth, S.; Black, M.J.; Szeliski, R. A Database and Evaluation Methodology for Optical Flow. Int. J. Comput. Vis. 2011, 92, 1–31.
- Vogel, C.; Roth, S.; Schindler, K. An Evaluation of Data Costs for Optical Flow. In Proceedings of the German Conference on Pattern Recognition, Saarbrücken, Germany, 4–6 September 2013.
- Brox, T.; Malik, J. Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 500–513.
- Barnes, C.; Shechtman, E.; Dan, B.G.; Finkelstein, A. The Generalized PatchMatch Correspondence Algorithm. In Proceedings of the European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010.
- Liu, C.; Yuen, J.; Torralba, A. SIFT Flow: Dense Correspondence across Scenes and Its Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 978–994.
- Weinzaepfel, P.; Revaud, J.; Harchaoui, Z.; Schmid, C. DeepFlow: Large Displacement Optical Flow with Deep Matching. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1385–1392.
- Gadot, D.; Wolf, L. PatchBatch: A Batch Augmented Loss for Optical Flow. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4236–4245.
- Han, X.; Leung, T.; Jia, Y.; Sukthankar, R.; Berg, A.C. MatchNet: Unifying feature and metric learning for patch-based matching. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3279–3286.
- Dosovitskiy, A.; Fischer, P.; Ilg, E.; Häusser, P.; Hazirbas, C.; Golkov, V.; Smagt, P.v.d.; Cremers, D.; Brox, T. FlowNet: Learning Optical Flow with Convolutional Networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 2758–2766.
- Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1647–1655.
- Ranjan, A.; Black, M.J. Optical Flow Estimation Using a Spatial Pyramid Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2720–2729.
- Hui, T.W.; Tang, X.; Loy, C.C. LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8981–8989.
- Janai, J.; Güney, F.; Ranjan, A.; Black, M.; Geiger, A. Unsupervised Learning of Multi-Frame Optical Flow with Occlusions. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018.
- Liu, P.; Lyu, M.; King, I.; Xu, J. SelFlow: Self-Supervised Learning of Optical Flow. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4566–4575.
- Tian, L.; Tu, Z.; Zhang, D.; Liu, J.; Li, B.; Yuan, J. Unsupervised Learning of Optical Flow With CNN-Based Non-Local Filtering. IEEE Trans. Image Process. 2020, 29, 8429–8442.
- Sun, D.; Roth, S.; Black, M.J. A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them. Int. J. Comput. Vis. 2014, 106, 115–137.
- Zhang, C.; Li, G.; Du, S. Multi-Scale Dense Networks for Hyperspectral Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9201–9222.
- Mayer, N.; Ilg, E.; Häusser, P.; Fischer, P.; Cremers, D.; Dosovitskiy, A.; Brox, T. A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4040–4048.
- Liu, Q.; Xiao, L.; Yang, J.; Wei, Z. CNN-Enhanced Graph Convolutional Network With Pixel- and Superpixel-Level Feature Fusion for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8657–8671.
- Sun, H.; Zheng, X.; Lu, X.; Wu, S. Spectral–Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3232–3245.
- Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068.
- Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582.
Indian Pines | Salinas | Pavia University | Houston | |
---|---|---|---|---|
85.70 ± 3.00 | 96.01 ± 1.06 | 88.92 ± 2.28 | 81.87 ± 2.30 | |
84.76 ± 3.23 | 96.48 ± 1.03 | 88.54 ± 2.55 | 82.25 ± 1.76 | |
85.28 ± 2.43 | 95.94 ± 1.17 | 89.29 ± 2.47 | 82.22 ± 1.68 | |
85.13 ± 2.37 | 96.23 ± 0.93 | 89.45 ± 2.99 | 82.25 ± 1.85 | |
85.05 ± 3.12 | 96.27 ± 1.16 | 90.42 ± 2.18 | 81.72 ± 1.64 | |
85.86 ± 2.34 | 95.17 ± 1.62 | 89.82 ± 1.78 | 82.59 ± 1.41 | |
85.10 ± 2.96 | 96.06 ± 0.97 | 89.56 ± 2.54 | 82.23 ± 2.50 | |
85.32 ± 2.33 | 96.32 ± 0.74 | 90.36 ± 1.93 | 81.60 ± 1.78 | |
85.08 ± 2.41 | 96.41 ± 0.92 | 89.94 ± 2.63 | 82.83 ± 2.02 | |
85.16 ± 1.75 | 96.10 ± 1.03 | 89.76 ± 2.50 | 81.74 ± 1.37 | |
85.07 ± 2.31 | 96.78 ± 0.65 | 90.44 ± 2.72 | 81.88 ± 3.03 | |
± 2.25 | ± 0.94 | ± 2.23 | ± 1.69 |
Traditional Method | Patch-Level | Image-Level | ||||||
---|---|---|---|---|---|---|---|---|
EMPs | LBP | DFSL | 3D-CAE | SSTN | FreeNet | CEGCN | Spe-TL | |
1 | 40.76 ± 4.86 | 80.89 ± 13.78 | 86.01 ± 6.24 | 48.50 ± 8.99 | 98.69 ± 2.21 | 99.13 ± 1.06 | 93.91 ± 5.79 | 87.88 ± 9.85 |
2 | 49.69 ± 6.71 | 62.19 ± 8.15 | 67.78 ± 2.91 | 57.11 ± 4.57 | 79.97 ± 8.25 | 63.22 ± 12.35 | 66.26 ± 14.97 | 81.28 ± 6.06 |
3 | 49.54 ± 9.70 | 66.55 ± 13.63 | 71.55 ± 3.15 | 59.12 ± 5.30 | 80.05 ± 10.02 | 80.39 ± 7.51 | 57.34 ± 10.54 | 76.94 ± 6.92 |
4 | 44.25 ± 9.32 | 59.02 ± 8.27 | 87.45 ± 1.66 | 51.10 ± 5.58 | 95.02 ± 2.77 | 96.20 ± 3.80 | 86.62 ± 5.14 | 80.01 ± 10.05 |
5 | 54.36 ± 4.32 | 78.06 ± 4.74 | 91.07 ± 2.45 | 69.51 ± 3.97 | 89.39 ± 3.18 | 83.80 ± 9.37 | 88.75 ± 5.31 | 94.15 ± 3.01 |
6 | 87.59 ± 6.45 | 87.02 ± 8.01 | 94.48 ± 2.68 | 93.40 ± 2.05 | 97.56 ± 2.53 | 94.28 ± 3.12 | 99.01 ± 0.57 | 95.48 ± 4.24 |
7 | 36.39 ± 7.08 | 49.74 ± 11.38 | 89.17 ± 3.82 | 60.85 ± 10.31 | 100.00 ± 0.00 | 100.0 ± 0.00 | 100.00 ± 0.00 | 87.29 ± 10.29 |
8 | 97.96 ± 1.23 | 95.53 ± 6.18 | 96.56 ± 1.29 | 98.96 ± 0.71 | 99.81 ± 0.25 | 99.18 ± 1.34 | 99.70 ± 0.29 | 95.79 ± 2.32 |
9 | 25.94 ± 4.48 | 34.07 ± 20.37 | 83.33 ± 15.91 | 54.55 ± 11.12 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 78.45 ± 10.11 |
10 | 47.33 ± 14.38 | 59.14 ± 8.10 | 77.96 ± 2.90 | 63.37 ± 3.26 | 82.80 ± 8.53 | 84.80 ± 7.26 | 88.49 ± 6.09 | 73.91 ± 11.83 |
11 | 66.59 ± 6.23 | 88.96 ± 5.25 | 63.03 ± 1.53 | 80.29 ± 2.87 | 79.49 ± 6.08 | 76.43 ± 3.98 | 64.82 ± 6.76 | 92.23 ± 3.91
12 | 33.40 ± 6.09 | 66.80 ± 13.10 | 77.50 ± 3.52 | 48.12 ± 3.49 | 86.23 ± 5.59 | 72.56 ± 8.40 | 68.09 ± 19.40 | 73.51 ± 8.00 |
13 | 88.38 ± 6.71 | 96.74 ± 4.21 | 97.84 ± 2.09 | 95.71 ± 2.86 | 99.75 ± 0.58 | 98.43 ± 1.96 | 99.75 ± 0.45 | 99.81 ± 0.32 |
14 | 89.40 ± 3.21 | 96.93 ± 2.23 | 89.76 ± 1.87 | 94.50 ± 2.31 | 93.83 ± 4.69 | 95.11 ± 3.89 | 93.28 ± 3.42 | 99.65 ± 0.25 |
15 | 61.38 ± 10.73 | 74.47 ± 5.94 | 69.54 ± 3.65 | 71.71 ± 4.63 | 93.37 ± 5.53 | 94.76 ± 7.15 | 90.72 ± 6.67 | 91.34 ± 8.31 |
16 | 98.14 ± 3.82 | 96.61 ± 7.01 | 97.27 ± 2.52 | 98.82 ± 0.31 | 97.63 ± 3.29 | 98.60 ± 1.18 | 99.35 ± 1.09 | 99.66 ± 0.54 |
OA | 61.06 ± 2.66 | 74.96 ± 2.16 | 76.39 ± 0.55 | 71.90 ± 1.43 | 85.41 ± 1.44 | 82.47 ± 1.82 | 78.34 ± 2.71 | ± 2.25 |
AA | 60.69 ± 1.50 | 74.55 ± 2.46 | 83.77 ± 0.78 | 71.61 ± 1.66 | 86.10 ± 0.77 | 85.81 ± 1.27 | 86.26 ± 1.96 | ± 1.98 |
Kappa | 56.03 ± 3.11 | 71.9 ± 2.35 | 73.15 ± 0.61 | 68.42 ± 1.51 | 84.61 ± 1.62 | 80.21 ± 2.03 | 75.54 ± 2.99 | ± 2.54
Traditional Method | Patch-Level | Image-Level | ||||||
---|---|---|---|---|---|---|---|---|
EMPs | LBP | DFSL | 3D-CAE | SSTN | FreeNet | CEGCN | Spe-TL | |
1 | 96.42 ± 3.87 | 99.96 ± 0.12 | 98.62 ± 0.68 | 96.17 ± 3.14 | 94.58 ± 9.42 | 79.78 ± 9.89 | 99.96 ± 0.12 | 98.50 ± 1.56 |
2 | 98.73 ± 0.48 | 98.21 ± 1.95 | 99.37 ± 0.41 | 98.88 ± 0.52 | 98.50 ± 4.24 | 98.76 ± 2.32 | 96.12 ± 1.62 | 99.80 ± 0.24 |
3 | 87.36 ± 5.04 | 69.06 ± 11.23 | 98.74 ± 1.20 | 92.26 ± 2.19 | 93.75 ± 8.87 | 99.76 ± 0.37 | 100.00 ± 0.00 | 98.33 ± 1.85 |
4 | 97.54 ± 0.72 | 90.02 ± 6.03 | 99.47 ± 0.27 | 97.87 ± 0.47 | 98.91 ± 1.32 | 98.20 ± 3.84 | 99.77 ± 0.13 | 97.25 ± 0.54 |
5 | 95.96 ± 3.84 | 97.25 ± 2.23 | 96.17 ± 0.89 | 96.70 ± 4.38 | 95.43 ± 2.18 | 97.65 ± 1.32 | 98.05 ± 1.45 | 98.43 ± 0.50 |
6 | 99.95 ± 0.14 | 97.20 ± 3.24 | 99.62 ± 0.21 | 99.96 ± 0.08 | 99.46 ± 0.93 | 97.67 ± 3.04 | 99.97 ± 0.04 | 99.94 ± 0.06 |
7 | 95.44 ± 2.81 | 98.79 ± 1.37 | 99.47 ± 0.16 | 97.95 ± 1.54 | 99.20 ± 1.40 | 99.60 ± 0.59 | 99.86 ± 0.11 | 99.13 ± 0.17 |
8 | 80.02 ± 4.26 | 79.56 ± 6.03 | 78.83 ± 3.75 | 82.72 ± 1.77 | 76.15 ± 9.66 | 89.25 ± 6.38 | 84.92 ± 9.69 | 99.33 ± 1.05 |
9 | 99.41 ± 0.15 | 96.92 ± 3.60 | 98.42 ± 1.17 | 99.27 ± 0.32 | 99.15 ± 0.83 | 99.97 ± 0.05 | 99.51 ± 1.17 | 98.99 ± 0.45 |
10 | 85.73 ± 5.13 | 91.62 ± 3.73 | 91.82 ± 1.03 | 87.27 ± 1.78 | 91.77 ± 3.98 | 94.45 ± 5.86 | 90.33 ± 3.82 | 95.51 ± 1.60 |
11 | 69.40 ± 6.60 | 84.25 ± 6.37 | 96.30 ± 1.30 | 78.69 ± 2.89 | 97.42 ± 1.67 | 99.10 ± 1.20 | 98.87 ± 1.37 | 91.80 ± 6.08 |
12 | 93.64 ± 2.01 | 93.67 ± 3.09 | 99.89 ± 0.11 | 94.83 ± 1.36 | 99.14 ± 0.92 | 99.37 ± 0.72 | 97.20 ± 2.90 | 94.89 ± 1.85 |
13 | 86.43 ± 7.40 | 93.85 ± 4.20 | 98.41 ± 0.66 | 94.17 ± 0.86 | 99.82 ± 0.39 | 99.84 ± 0.23 | 99.78 ± 0.34 | 68.36 ± 14.84 |
14 | 92.19 ± 4.18 | 94.12 ± 4.25 | 96.48 ± 1.02 | 95.02 ± 1.08 | 99.44 ± 0.68 | 99.22 ± 0.96 | 99.34 ± 0.49 | 89.99 ± 1.75 |
15 | 62.57 ± 7.05 | 58.56 ± 3.37 | 68.01 ± 5.03 | 63.48 ± 5.65 | 81.03 ± 9.25 | 91.80 ± 8.76 | 70.56 ± 10.34 | 93.15 ± 5.01 |
16 | 87.75 ± 5.65 | 76.86 ± 9.84 | 98.67 ± 0.52 | 91.13 ± 3.20 | 96.03 ± 5.16 | 98.37 ± 2.75 | 98.49 ± 1.23 | 99.68 ± 0.47 |
OA | 86.07 ± 1.84 | 84.40 ± 1.49 | 89.88 ± 0.27 | 87.86 ± 1.18 | 90.78 ± 2.09 | 95.00 ± 1.82 | 91.69 ± 2.52 | ± 0.94 |
AA | 89.28 ± 1.15 | 88.74 ± 1.75 | 94.89 ± 0.20 | 91.65 ± 0.54 | 94.99 ± 1.34 | 95.42 ± 2.30 | 95.76 ± 1.38 | ± 0.89 |
Kappa | 84.57 ± 2.01 | 82.73 ± 1.63 | 88.72 ± 0.29 | 86.54 ± 1.29 | 89.77 ± 2.31 | 94.44 ± 2.02 | 90.75 ± 2.79 | ± 1.04
Traditional Method | Patch-Level | Image-Level | ||||||
---|---|---|---|---|---|---|---|---|
EMPs | LBP | DFSL | 3D-CAE | SSTN | FreeNet | CEGCN | Spe-TL | |
1 | 96.38 ± 1.44 | 91.20 ± 3.48 | 76.32 ± 4.10 | 97.57 ± 0.77 | 92.66 ± 4.72 | 85.57 ± 6.92 | 92.01 ± 6.18 | 97.13 ± 3.00 |
2 | 88.73 ± 2.07 | 96.87 ± 1.74 | 82.89 ± 4.39 | 93.97 ± 1.16 | 88.89 ± 5.15 | 77.12 ± 7.40 | 87.55 ± 4.91 | 98.04 ± 0.92 |
3 | 51.77 ± 5.21 | 53.45 ± 7.31 | 82.63 ± 1.95 | 70.92 ± 2.85 | 83.43 ± 7.01 | 77.85 ± 10.96 | 90.91 ± 10.20 | 83.51 ± 10.45 |
4 | 82.90 ± 15.55 | 80.76 ± 6.69 | 94.55 ± 2.17 | 92.88 ± 3.42 | 90.35 ± 6.06 | 95.96 ± 2.10 | 93.83 ± 2.97 | 77.64 ± 7.94 |
5 | 98.51 ± 1.67 | 99.10 ± 1.73 | 99.16 ± 0.26 | 98.11 ± 1.60 | 99.58 ± 0.57 | 99.97 ± 0.06 | 99.96 ± 0.06 | 93.01 ± 5.50 |
6 | 47.85 ± 7.52 | 70.21 ± 4.26 | 83.87 ± 6.06 | 55.70 ± 7.44 | 91.84 ± 4.04 | 94.08 ± 6.36 | 95.95 ± 3.32 | 86.35 ± 6.87 |
7 | 61.16 ± 11.52 | 55.98 ± 12.17 | 93.46 ± 1.35 | 72.21 ± 5.03 | 98.63 ± 3.27 | 94.33 ± 4.55 | 99.57 ± 0.61 | 85.94 ± 8.46 |
8 | 82.79 ± 5.26 | 76.48 ± 6.43 | 83.46 ± 2.44 | 91.00 ± 1.34 | 92.43 ± 4.03 | 94.75 ± 5.79 | 86.61 ± 9.01 | 78.20 ± 6.71 |
9 | 99.95 ± 0.07 | 73.08 ± 20.10 | 99.83 ± 0.22 | 99.89 ± 0.21 | 97.83 ± 1.44 | 99.41 ± 0.81 | 99.27 ± 0.55 | 99.81 ± 0.10 |
OA | 77.89 ± 2.29 | 82.73 ± 3.36 | 84.03 ± 1.34 | 85.18 ± 2.19 | 90.20 ± 2.14 | 85.07 ± 3.60 | 90.78 ± 2.19 | ± 2.23 |
AA | 78.89 ± 1.79 | 77.46 ± 4.18 | 87.46 ± 0.57 | 85.81 ± 1.16 | 91.83 ± 1.11 | 91.01 ± 1.77 | ± 1.26 | 89.07 ± 2.59 |
Kappa | 71.64 ± 2.45 | 77.94 ± 4.05 | 79.37 ± 1.58 | 80.94 ± 2.58 | 88.04 ± 2.67 | 81.10 ± 4.23 | 88.11 ± 2.71 | ± 2.80
Traditional Method | Patch-Level | Image-Level | ||||||
---|---|---|---|---|---|---|---|---|
EMPs | LBP | DFSL | 3D-CAE | SSTN | FreeNet | CEGCN | Spe-TL | |
1 | 88.43 ± 7.04 | 77.48 ± 5.12 | 89.48 ± 6.14 | 84.83 ± 6.31 | 83.46 ± 5.98 | 89.73 ± 5.48 | 87.91 ± 5.57 | 85.76 ± 7.40 |
2 | 92.21 ± 7.10 | 69.32 ± 6.78 | 91.73 ± 5.33 | 91.27 ± 6.88 | 88.99 ± 8.64 | 82.06 ± 10.30 | 93.95 ± 5.82 | 84.15 ± 7.86 |
3 | 72.70 ± 13.04 | 72.82 ± 6.15 | 98.82 ± 0.59 | 73.19 ± 11.46 | 99.11 ± 2.06 | 98.10 ± 2.10 | 99.97 ± 0.09 | 90.69 ± 7.86 |
4 | 95.74 ± 6.05 | 82.81 ± 7.27 | 93.10 ± 2.28 | 95.64 ± 5.93 | 94.53 ± 3.49 | 93.45 ± 2.68 | 93.95 ± 0.72 | 89.45 ± 4.88 |
5 | 90.26 ± 3.98 | 75.47 ± 6.15 | 97.75 ± 1.62 | 89.35 ± 2.76 | 96.23 ± 4.93 | 99.27 ± 1.29 | 98.82 ± 2.59 | 91.77 ± 4.17 |
6 | 83.24 ± 14.37 | 70.11 ± 7.05 | 90.30 ± 5.93 | 83.55 ± 13.46 | 89.63 ± 7.16 | 91.35 ± 5.73 | 90.80 ± 5.96 | 85.96 ± 11.4 |
7 | 81.87 ± 4.44 | 76.60 ± 6.54 | 79.91 ± 4.11 | 78.95 ± 3.10 | 76.52 ± 13.12 | 82.83 ± 9.80 | 83.75 ± 6.64 | 87.04 ± 5.63 |
8 | 68.99 ± 14.10 | 61.84 ± 10.20 | 50.18 ± 9.18 | 60.54 ± 9.34 | 58.48 ± 13.21 | 52.42 ± 11.96 | 55.26 ± 8.85 | 66.75 ± 12.43 |
9 | 78.76 ± 8.77 | 76.56 ± 7.87 | 75.19 ± 3.76 | 75.76 ± 13.95 | 79.66 ± 8.61 | 85.82 ± 8.19 | 81.18 ± 5.29 | 66.89 ± 7.22 |
10 | 65.08 ± 7.80 | 63.90 ± 7.16 | 63.81 ± 11.08 | 54.12 ± 9.72 | 64.96 ± 15.82 | 88.13 ± 11.79 | 93.25 ± 7.59 | 79.70 ± 9.22 |
11 | 68.06 ± 6.30 | 79.37 ± 10.26 | 59.86 ± 7.37 | 74.64 ± 10.60 | 60.02 ± 18.95 | 74.87 ± 12.75 | 77.09 ± 11.06 | 82.34 ± 5.97 |
12 | 63.45 ± 5.65 | 61.10 ± 5.81 | 54.10 ± 10.08 | 58.82 ± 6.52 | 45.04 ± 22.28 | 80.55 ± 8.65 | 81.23 ± 7.77 | 74.86 ± 5.41 |
13 | 73.66 ± 7.82 | 71.16 ± 9.54 | 31.11 ± 8.76 | 74.27 ± 10.03 | 70.02 ± 27.50 | 88.46 ± 5.61 | 47.46 ± 19.25 | 60.90 ± 11.56 |
14 | 79.76 ± 8.55 | 77.06 ± 8.16 | 97.50 ± 1.35 | 76.74 ± 7.35 | 99.72 ± 0.41 | 99.88 ± 0.21 | 99.97 ± 0.07 | 99.98 ± 0.07 |
15 | 97.79 ± 0.91 | 88.81 ± 8.50 | 98.81 ± 0.38 | 96.23 ± 1.98 | 98.69 ± 2.51 | 97.98 ± 3.01 | 99.57 ± 0.38 | 96.84 ± 1.72 |
OA | 78.64 ± 1.78 | 72.63 ± 1.52 | 77.22 ± 1.01 | 76.05 ± 1.33 | 77.91 ± 2.34 | 81.12 ± 2.04 | 82.41 ± 1.82 | ± 1.69 |
AA | 80.00 ± 1.54 | 73.63 ± 1.55 | 78.11 ± 0.70 | 77.86 ± 1.74 | 80.34 ± 2.32 | 82.99 ± 1.62 | 83.64 ± 1.97 | ± 1.43 |
Kappa | 76.94 ± 1.92 | 70.45 ± 1.64 | 75.36 ± 1.08 | 74.14 ± 1.43 | 76.16 ± 2.53 | 83.45 ± 2.20 | 83.22 ± 1.97 | ± 1.83
Methods | Training Sample Number per Class | Indian Pines | Salinas | Pavia University |
---|---|---|---|---|
3D-LWNet | 50 | 94.18 () | / | 95.57 () |
Two-CNN-transfer | 50 | / | 91.83 () | 85.40 () |
HT-CNN-Attention | 200 | 90.86 () | 94.70 () | 94.25 () |
CSDTL-MSA | 50 | / | / | 94.70 () |
SSTN | FreeNet | CEGCN | Spe-TL () | Spe-TL () | ||
---|---|---|---|---|---|---|
Indian Pines | Feature Extraction | / | / | / | 1.27 s | 9.35 s |
Training | 461.19 s | 47.34 s | 8.96 s | 0.34 s | 3.48 s | |
Test | 7.08 s | 0.12 s | 0.24 s | 0.45 s | 4.84 s | |
Salinas | Feature Extraction | / | / | / | 1.76 s | 11.19 s |
Training | 1379.62 s | 133.03 s | 12.62 s | 0.41 s | 3.98 s | |
Test | 24.82 s | 0.28 s | 1.08 s | 0.69 s | 5.38 s | |
Pavia University | Feature Extraction | / | / | / | 1.04 s | 7.89 s |
Training | 851.48 s | 144.88 s | 40.15 s | 0.29 s | 2.45 s | |
Test | 27.94 s | 0.29 s | 1.64 s | 0.67 s | 5.82 s | |
Houston | Feature Extraction | / | / | / | 4.59 s | 38.09 s |
Training | 186.97 s | 565.65 s | 96.94 s | 0.31 s | 2.94 s | |
Test | 65.96 s | 0.34 s | 13.82 s | 1.12 s | 7.40 s |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, Y.; Liu, B.; Yu, X.; Yu, A.; Gao, K.; Ding, L. From Video to Hyperspectral: Hyperspectral Image-Level Feature Extraction with Transfer Learning. Remote Sens. 2022, 14, 5118. https://doi.org/10.3390/rs14205118