An Efficient Graph Convolutional RVFL Network for Hyperspectral Image Classification
Abstract
1. Introduction
2. Related Works
2.1. Notations
2.2. GCN
2.3. RVFL
3. Materials and Method
3.1. Overall Framework
3.2. Graph Construction
3.3. Random Graph Convolution
3.4. Graph Convolutional Regression
Algorithm 1: GCRVFL
Input: HSI cube, the number of nearest neighbors K, the number of hidden neurons L, and the regularization coefficient λ.
1 Construct patch graphs according to Equation (5);
2 Randomly generate graph filters;
3 Calculate the random graph embedding;
4 Calculate the graph-level representation;
5 Solve the graph convolutional regression;
6 Predict test labels.
Output: Labels of the test set.
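The six steps above can be sketched in NumPy. This is a minimal illustration, assuming a kNN patch graph with symmetric normalization and self-loops, a Gaussian random filter, ReLU activation, and mean pooling to the graph level; the function names and these specific choices are ours, not necessarily the paper's exact implementation:

```python
import numpy as np

def knn_adjacency(X, k=5):
    """Symmetric, self-looped, normalized kNN adjacency for one patch."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    idx = np.argsort(d, axis=1)[:, 1:k + 1]              # k nearest neighbors (skip self)
    n = X.shape[0]
    A = np.zeros((n, n))
    A[np.arange(n)[:, None], idx] = 1.0
    A = np.maximum(A, A.T) + np.eye(n)                   # symmetrize, add self-loops
    deg = A.sum(1)
    return A / np.sqrt(np.outer(deg, deg))               # D^{-1/2} A D^{-1/2}

def gcrvfl_train(patches, Y, L=512, k=5, lam=0.005, rng=np.random.default_rng(0)):
    """patches: list of (n_i, d) node-feature matrices; Y: (N, C) one-hot labels."""
    d = patches[0].shape[1]
    W = rng.standard_normal((d, L)) * 0.1                # step 2: random graph filter
    feats = []
    for X in patches:
        A_hat = knn_adjacency(X, k)                      # step 1: patch graph
        H = np.maximum(A_hat @ X @ W, 0.0)               # step 3: random graph embedding
        feats.append(H.mean(0))                          # step 4: graph-level pooling
    H = np.stack(feats)
    # step 5: ridge-regression readout, beta = (H^T H + lam I)^{-1} H^T Y
    beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ Y)
    return W, beta
```

Prediction (step 6) reuses the same random filter W on test patches and applies the learned readout beta.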
3.5. Connection to Existing Methods
3.5.1. GCRVFL vs. RVFL
3.5.2. GCRVFL vs. GCN
3.5.3. GCRVFL vs. Attention Mechanism
3.6. Data Sets
- Houston 2013: This data set was acquired by the ITRES CASI-1500 sensor over the University of Houston campus and the neighboring urban area [41]. The Houston imagery consists of 349 × 1905 samples with a spatial resolution of 2.5 m per pixel. It has 144 spectral bands covering the 380–1050 nm region, and contains 15 classes with 15,029 labeled samples. Table 1 lists the 15 challenging land-cover and land-use categories as well as the numbers of training and testing samples. Figure 4a shows a false-color image and the map of training and testing samples.
- Indian Pines 2010: This data set was captured by the ProSpecTIR sensor over Purdue University, Indiana in 2010, and includes a variety of different crops. The Indian Pines 2010 imagery consists of 445 × 750 samples with a spatial resolution of 2 m per pixel. This scene contains 360 spectral bands ranging from 400 to 2450 nm, and is composed of 16 classes with 198,074 labeled samples. Table 2 shows the 16 land-cover categories and the numbers of training and testing samples. Figure 4b depicts the false-color scene and the corresponding visualization of training and testing samples.
- Salinas: This data set was collected by the AVIRIS sensor over Salinas Valley, California. The Salinas imagery consists of 512 × 217 samples with a spatial resolution of 3.7 m per pixel. It has 224 spectral bands in the 400–2500 nm region, and includes 16 classes with 54,129 labeled samples. Table 3 presents the numbers of training and testing samples for the 16 classes; their visualizations are shown in Figure 4c.
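All three scenes are classified patch-wise: an s × s spatial window (s = 7 in the experiments below) is cut around each labeled pixel, optionally after PCA on the spectral bands, and its pixels become the nodes of a patch graph. A hypothetical helper for this step, not the paper's exact preprocessing, might look like:

```python
import numpy as np

def extract_patches(cube, coords, s=7):
    """Cut an s-by-s spatial patch around each labeled pixel of an HSI cube.

    cube:   (H, W, B) hyperspectral array (after optional PCA on the bands)
    coords: iterable of (row, col) positions of labeled pixels
    Returns a list of (s*s, B) node-feature matrices, one per patch.
    """
    r = s // 2
    # Reflect-pad the borders so windows at the image edge stay s-by-s.
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches = []
    for i, j in coords:
        p = padded[i:i + s, j:j + s, :]          # window centered at (i, j)
        patches.append(p.reshape(-1, cube.shape[2]))
    return patches
```

The center node of each flattened patch (index s*s // 2) is the labeled pixel whose class the patch inherits.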
4. Experiments
4.1. Analysis of Hyper-Parameter Sensitivity
4.1.1. Impact of the Number of Hidden Neurons L
4.1.2. Impact of the Regularization Coefficient λ
4.1.3. Impact of the Number of Neighbors K
4.1.4. Impact of Activation Function
4.1.5. Impact of the Size of Patch
4.1.6. Influence of the Number of Principal Components
4.2. Main Results
4.2.1. Baselines and Setup
- RF: 200 decision trees are used in the RF classifier.
- SVM: The scikit-learn implementation with a radial basis function (RBF) kernel is used. The kernel width and the regularization coefficient are selected by fivefold cross-validated grid search.
- RVFL: The number of hidden neurons is set to 512, and the regularization coefficient is tuned by grid search.
- CNN-2D: Two 2D convolutional blocks are used, followed by a fully connected layer with 512 neurons and a softmax classifier. Each convolutional block consists of a 2D convolutional layer, a batch-normalization layer, a max-pooling layer, and a ReLU activation layer.
- ResNet: The network consists of two residual blocks, each containing two convolutional layers. The same classification head as in CNN-2D is used.
- DenseNet: Two dense blocks are used in the model. Each block consists of two convolutional layers.
- GCN: A single graph convolutional layer with 512 neurons is adopted, followed by a ReLU activation. The learning rate is set to 0.002, and the number of training epochs is set to 500.
- SGC: A simplified graph convolution with 5 feature-propagation steps is used, followed by a softmax classifier.
- APPNP: For the APPNP, the teleport probability of PageRank is set to 0.1, the number of power iteration steps is set to 5, and the other network configurations are the same as those of GCN and SGC.
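For concreteness, the RVFL baseline above can be sketched as follows. This is a minimal illustration, assuming ReLU random features, direct input-to-output links, and the standard ridge-regression closed form for the output weights; the class and parameter names are ours:

```python
import numpy as np

class RVFL:
    """Minimal RVFL baseline: a fixed random hidden layer plus direct input
    links, with output weights solved in closed form by ridge regression."""

    def __init__(self, n_hidden=512, lam=1e-2, seed=0):
        self.L, self.lam = n_hidden, lam
        self.rng = np.random.default_rng(seed)

    def _features(self, X):
        H = np.maximum(X @ self.W + self.b, 0.0)     # random ReLU features
        return np.hstack([X, H])                     # direct links + hidden features

    def fit(self, X, Y):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.L))   # weights stay random
        self.b = self.rng.standard_normal(self.L)
        D = self._features(X)
        # beta = (D^T D + lam I)^{-1} D^T Y  -- only the readout is learned
        self.beta = np.linalg.solve(D.T @ D + self.lam * np.eye(D.shape[1]),
                                    D.T @ Y)
        return self

    def predict(self, X):
        return self._features(X) @ self.beta
```

Because only beta is learned, training reduces to one linear solve, which is the source of the large speed gap against the iteratively trained networks in Section 4.3.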
4.2.2. Quantitative Results
4.2.3. Qualitative Comparison of Different Methods
4.2.4. Comparison with the State of the Art
4.3. Training Time
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Salcedo-Sanz, S.; Ghamisi, P.; Piles, M.; Werner, M.; Cuadra, L.; Moreno-Martínez, A.; Izquierdo-Verdiguier, E.; noz Marí, J.M.; Mosavi, A.; Camps-Valls, G. Machine learning information fusion in Earth observation: A comprehensive review of methods, applications and data sources. Inf. Fusion 2020, 63, 256–272. [Google Scholar] [CrossRef]
- Ghamisi, P.; Maggiori, E.; Li, S.; Souza, R.; Tarablaka, Y.; Moser, G.; De Giorgi, A.; Fang, L.; Chen, Y.; Chi, M.; et al. New Frontiers in Spectral-Spatial Hyperspectral Image Classification: The Latest Advances Based on Mathematical Morphology, Markov Random Fields, Segmentation, Sparse Representation, and Deep Learning. IEEE Geosci. Remote Sens. Mag. 2018, 6, 10–43. [Google Scholar] [CrossRef]
- Yang, J.; Du, B.; Zhang, L. From center to surrounding: An interactive learning framework for hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2023, 197, 145–166. [Google Scholar] [CrossRef]
- Okwuashi, O.; Ndehedehe, C.E. Deep support vector machine for hyperspectral image classification. Pattern Recognit. 2020, 103, 107298. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929. [Google Scholar]
- Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification With Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5518615. [Google Scholar] [CrossRef]
- Jia, N.; Tian, X.; Gao, W.; Jiao, L. Deep Graph-Convolutional Generative Adversarial Network for Semi-Supervised Learning on Graphs. Remote Sens. 2023, 15, 3172. [Google Scholar] [CrossRef]
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
- Guo, M.; Liu, H.; Xu, Y.; Huang, Y. Building Extraction Based on U-Net with an Attention Block and Multiple Losses. Remote Sens. 2020, 12, 1400. [Google Scholar] [CrossRef]
- Meng, Y.; Chen, S.; Liu, Y.; Li, L.; Zhang, Z.; Ke, T.; Hu, X. Unsupervised Building Extraction from Multimodal Aerial Data Based on Accurate Vegetation Removal and Image Feature Consistency Constraint. Remote Sens. 2022, 14, 1912. [Google Scholar] [CrossRef]
- Wang, F.; Jiang, M.; Qian, C.; Yang, S.; Li, C.; Zhang, H.; Wang, X.; Tang, X. Residual Attention Network for Image Classification. In Proceedings of the The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Oskouei, A.G.; Balafar, M.A.; Motamed, C. RDEIC-LFW-DSS: ResNet-based deep embedded image clustering using local feature weighting and dynamic sample selection mechanism. Inf. Sci. 2023, 646, 119374. [Google Scholar] [CrossRef]
- Zhan, L.; Li, W.; Min, W. FA-ResNet: Feature affine residual network for large-scale point cloud segmentation. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103259. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
- Li, B.; Xiao, C.; Wang, L.; Wang, Y.; Lin, Z.; Li, M.; An, W.; Guo, Y. Dense nested attention network for infrared small target detection. IEEE Trans. Image Process. 2022, 32, 1745–1758. [Google Scholar] [CrossRef] [PubMed]
- Li, Z.; Yan, C.; Sun, Y.; Xin, Q. A densely attentive refinement network for change detection based on very-high-resolution bitemporal remote sensing images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4409818. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.u.; Polosukhin, I. Attention is All you Need. In Advances in Neural Information Processing Systems 30; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 5998–6008. [Google Scholar]
- Cai, Y.; Liu, X.; Cai, Z. BS-Nets: An End-to-End Framework for Band Selection of Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1969–1984. [Google Scholar] [CrossRef]
- Zhao, C.; Qin, B.; Feng, S.; Zhu, W.; Sun, W.; Li, W.; Jia, X. Hyperspectral image classification with multi-attention transformer and adaptive superpixel segmentation-based active learning. IEEE Trans. Image Process. 2023, 32, 3606–3621. [Google Scholar] [CrossRef] [PubMed]
- Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P.S. A Comprehensive Survey on Graph Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4–24. [Google Scholar] [CrossRef] [PubMed]
- Ding, Y.; Guo, Y.; Chong, Y.; Pan, S.; Feng, J. Global Consistent Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Instrum. Meas. 2021, 70, 5501516. [Google Scholar] [CrossRef]
- Cai, Y.; Zhang, Z.; Cai, Z.; Liu, X.; Jiang, X.; Yan, Q. Graph Convolutional Subspace Clustering: A Robust Subspace Clustering Framework for Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4191–4202. [Google Scholar] [CrossRef]
- Wan, S.; Gong, C.; Zhong, P.; Du, B.; Zhang, L.; Yang, J. Multiscale Dynamic Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3162–3177. [Google Scholar] [CrossRef]
- Ding, Y.; Zhao, X.; Zhang, Z.; Cai, W.; Yang, N.; Zhan, Y. Semi-Supervised Locality Preserving Dense Graph Neural Network With ARMA Filters and Context-Aware Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5511812. [Google Scholar] [CrossRef]
- Yin, J.; Liu, X.; Hou, R.; Chen, Q.; Huang, W.; Li, A.; Wang, P. Multiscale Pixel-Level and Superpixel-Level Method for Hyperspectral Image Classification: Adaptive Attention and Parallel Multi-Hop Graph Convolution. Remote Sens. 2023, 15, 4235. [Google Scholar] [CrossRef]
- Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
- Zhang, X.; Chen, S.; Zhu, P.; Tang, X.; Feng, J.; Jiao, L. Spatial Pooling Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5521315. [Google Scholar] [CrossRef]
- Zhang, L.; Suganthan, P. A comprehensive evaluation of random vector functional link networks. Inf. Sci. 2016, 367–368, 1094–1105. [Google Scholar] [CrossRef]
- Zhang, Z.; Cai, Y.; Gong, W. Evolution-Driven Randomized Graph Convolutional Networks. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 7516–7526. [Google Scholar] [CrossRef]
- Gao, R.; Du, L.; Suganthan, P.N.; Zhou, Q.; Yuen, K.F. Random vector functional link neural network based ensemble deep learning for short-term load forecasting. Expert Syst. Appl. 2022, 206, 117784. [Google Scholar] [CrossRef]
- Malik, A.K.; Ganaie, M.A.; Tanveer, M.; Suganthan, P.N.; Initiative, A.D.N.I. Alzheimer’s Disease Diagnosis via Intuitionistic Fuzzy Random Vector Functional Link Network. IEEE Trans. Comput. Soc. Syst. 2022, 1–12. [Google Scholar] [CrossRef]
- Cai, Y.; Zhang, Z.; Yan, Q.; Zhang, D.; Banu, M.J. Densely Connected Convolutional Extreme Learning Machine for Hyperspectral Image Classification. Neurocomputing 2020, 434, 21–32. [Google Scholar] [CrossRef]
- Zhou, Y.; Wei, Y. Learning Hierarchical Spectral-Spatial Features for Hyperspectral Image Classification. IEEE Trans. Cybern. 2016, 46, 1667–1678. [Google Scholar] [CrossRef]
- Cao, F.; Yang, Z.; Ren, J.; Ling, W.; Zhao, H.; Sun, M.; Benediktsson, J.A. Sparse Representation-Based Augmented Multinomial Logistic Extreme Learning Machine With Weighted Composite Features for Spectral-Spatial Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6263–6279. [Google Scholar] [CrossRef]
- Zhang, Z.; Cui, P.; Zhu, W. Deep Learning on Graphs: A Survey. IEEE Trans. Knowl. Data Eng. 2022, 34, 249–270. [Google Scholar] [CrossRef]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017. [Google Scholar]
- Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In Proceedings of the Advances in Neural Information Processing Systems; Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2016; Volume 29, pp. 3844–3852. [Google Scholar]
- Chen, D.; Lin, Y.; Li, W.; Li, P.; Zhou, J.; Sun, X. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 3438–3445. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Wu, F.; Souza, A.; Zhang, T.; Fifty, C.; Yu, T.; Weinberger, K. Simplifying Graph Convolutional Networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6861–6871. [Google Scholar]
- Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; van Kasteren, T.; Liao, W.; Bellens, R.; Pižurica, A.; Gautama, S.; et al. Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418. [Google Scholar] [CrossRef]
- Mekha, P.; Teeyasuksaet, N. Image Classification of Rice Leaf Diseases Using Random Forest Algorithm. In Proceedings of the 2021 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunication Engineering, Cha-am, Thailand, 3–6 March 2021; pp. 165–169. [Google Scholar] [CrossRef]
- Fauvel, M.; Chanussot, J.; Benediktsson, J.A. A spatial-spectral kernel-based approach for the classification of remote-sensing images. Pattern Recognit. 2012, 45, 381–392. [Google Scholar] [CrossRef]
- Igelnik, B.; Pao, Y.H. Stochastic choice of basis functions in adaptive function approximation and the functional-link net. IEEE Trans. Neural Netw. 1995, 6, 1320–1329. [Google Scholar] [CrossRef]
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2018, arXiv:1608.06993. [Google Scholar]
- Klicpera, J.; Bojchevski, A.; Günnemann, S. Predict then Propagate: Graph Neural Networks meet Personalized PageRank. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Hang, R.; Liu, Q.; Hong, D.; Ghamisi, P. Cascaded Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5384–5394. [Google Scholar] [CrossRef]
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Healthy Grass | 198 | 1053 |
2 | Stressed Grass | 190 | 1064 |
3 | Synthetic Grass | 192 | 505 |
4 | Tree | 188 | 1056 |
5 | Soil | 186 | 1056 |
6 | Water | 182 | 143 |
7 | Residential | 196 | 1072 |
8 | Commercial | 191 | 1053 |
9 | Road | 193 | 1059 |
10 | Highway | 191 | 1036 |
11 | Railway | 181 | 1054 |
12 | Parking Lot1 | 192 | 1041 |
13 | Parking Lot2 | 184 | 285 |
14 | Tennis Court | 181 | 247 |
15 | Running Track | 187 | 473 |
Total | | 2832 | 12,197 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Corn_high | 726 | 2661 |
2 | Corn_mid | 465 | 1275 |
3 | Corn_low | 66 | 290 |
4 | Soy_bean_high | 324 | 1041 |
5 | Soy_bean_mid | 2548 | 35,317 |
6 | Soy_bean_low | 1428 | 27,782 |
7 | Residues | 368 | 5427 |
8 | Wheat | 182 | 3205 |
9 | Hay | 1938 | 48,107 |
10 | Grass/Pasture | 496 | 5048 |
11 | Cover_crop_1 | 400 | 2346 |
12 | Cover_crop_2 | 176 | 1988 |
13 | Woodlands | 1640 | 46,919 |
14 | Highway | 105 | 4758 |
15 | Local road | 52 | 450 |
16 | Buildings | 40 | 506 |
Total | | 10,954 | 187,120 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Brocoli_green_weeds_1 | 20 | 1989 |
2 | Brocoli_green_weeds_2 | 20 | 3706 |
3 | Fallow | 20 | 1956 |
4 | Fallow_rough_plow | 20 | 1374 |
5 | Fallow_smooth | 20 | 2658 |
6 | Stubble | 20 | 3939 |
7 | Celery | 20 | 3559 |
8 | Grapes_untrained | 20 | 11,251 |
9 | Soil_vinyard_develop | 20 | 6183 |
10 | Corn_senesced_green_weeds | 20 | 3258 |
11 | Lettuce_romaine_4wk | 20 | 1048 |
12 | Lettuce_romaine_5wk | 20 | 1907 |
13 | Lettuce_romaine_6wk | 20 | 896 |
14 | Lettuce_romaine_7wk | 20 | 1050 |
15 | Vinyard_untrained | 20 | 7248 |
16 | Vinyard_vertical_trellis | 20 | 1787 |
Total | | 320 | 53,809 |
Data Set | Metrics | Sigmoid | Tanh | None | ReLU |
---|---|---|---|---|---|
Houston | OA | 56.63 ± 0.02 | 76.26 ± 0.55 | 51.94 ± 4.02 | 84.78 ± 1.24 |
 | AA | 60.10 ± 0.02 | 79.65 ± 0.50 | 56.31 ± 3.38 | 87.16 ± 1.00 |
 | Kappa × 100 | 53.50 ± 0.00 | 74.30 ± 0.57 | 48.68 ± 4.10 | 83.52 ± 1.35 |
Ind. Pines | OA | 79.61 ± 0.00 | 80.81 ± 0.11 | 79.48 ± 0.00 | 89.21 ± 0.32 |
 | AA | 48.70 ± 0.01 | 61.90 ± 0.51 | 52.06 ± 0.00 | 87.34 ± 0.05 |
 | Kappa × 100 | 74.70 ± 0.00 | 76.43 ± 0.12 | 74.80 ± 0.00 | 86.80 ± 0.40 |
Salinas | OA | 74.92 ± 4.28 | 85.76 ± 1.57 | 73.21 ± 2.08 | 92.46 ± 0.06 |
 | AA | 77.60 ± 1.52 | 90.84 ± 0.80 | 72.81 ± 1.59 | 96.51 ± 0.03 |
 | Kappa × 100 | 72.03 ± 4.54 | 84.13 ± 1.76 | 70.10 ± 2.16 | 91.60 ± 0.10 |
Data Sets | Houston 2013 | Indian Pines 2010 | Salinas |
---|---|---|---|
L | 512 | 512 | 512 |
λ | 0.005 | 0.05 | 0.005 |
K | 5 | 5 | 5 |
s | 7 | 7 | 7 |
Class No. | RF | SVM | RVFL | CNN-2D | ResNet | DenseNet | GCN | SGC | APPNP | GCRVFL |
---|---|---|---|---|---|---|---|---|---|---|
1 | 82.05 ± 0.00 | 82.01 ± 0.05 | 82.71 ± 0.12 | 82.68 ± 0.36 | 82.89 ± 0.21 | 82.99 ± 0.17 | 83.10 ± 0.00 | 82.86 ± 0.05 | 83.10 ± 0.00 | 81.90 ± 0.92 |
2 | 83.58 ± 0.04 | 83.46 ± 0.16 | 83.00 ± 0.33 | 83.52 ± 1.14 | 83.87 ± 0.99 | 84.33 ± 0.76 | 83.78 ± 0.49 | 83.55 ± 0.47 | 85.01 ± 0.05 | 83.53 ± 1.35 |
3 | 99.74 ± 0.09 | 99.74 ± 0.13 | 99.60 ± 0.18 | 96.24 ± 1.47 | 95.56 ± 1.59 | 94.61 ± 1.92 | 98.06 ± 0.34 | 93.86 ± 0.00 | 97.23 ± 0.00 | 99.72 ± 0.10 |
4 | 86.99 ± 0.04 | 87.07 ± 0.29 | 90.16 ± 0.47 | 87.92 ± 1.61 | 85.15 ± 3.95 | 86.78 ± 4.02 | 84.02 ± 0.80 | 82.77 ± 1.33 | 86.93 ± 0.09 | 89.03 ± 0.82 |
5 | 97.60 ± 0.12 | 97.62 ± 0.18 | 97.94 ± 0.10 | 99.70 ± 0.23 | 99.77 ± 0.31 | 99.90 ± 0.20 | 99.83 ± 0.09 | 99.01 ± 0.14 | 99.86 ± 0.05 | 100.00 ± 0.00 |
6 | 95.80 ± 0.57 | 96.15 ± 0.78 | 94.41 ± 0.83 | 94.55 ± 1.43 | 95.17 ± 1.51 | 94.83 ± 2.41 | 95.24 ± 0.28 | 90.21 ± 0.00 | 95.10 ± 0.00 | 95.80 ± 0.00 |
7 | 82.56 ± 0.80 | 82.26 ± 0.62 | 70.68 ± 1.12 | 72.36 ± 2.60 | 78.71 ± 4.29 | 77.93 ± 2.82 | 75.17 ± 1.24 | 70.43 ± 0.09 | 76.49 ± 0.93 | 74.59 ± 1.16 |
8 | 41.09 ± 0.43 | 42.01 ± 1.51 | 48.16 ± 7.04 | 64.71 ± 9.08 | 75.95 ± 12.37 | 68.93 ± 13.56 | 71.36 ± 1.44 | 48.58 ± 0.43 | 69.37 ± 0.14 | 48.36 ± 2.14 |
9 | 72.18 ± 0.89 | 72.08 ± 0.63 | 65.49 ± 0.87 | 74.04 ± 1.87 | 74.27 ± 3.30 | 76.51 ± 3.64 | 81.70 ± 1.33 | 46.55 ± 1.51 | 81.40 ± 1.89 | 89.86 ± 2.83 |
10 | 55.02 ± 2.58 | 54.39 ± 1.11 | 53.36 ± 2.84 | 59.11 ± 7.75 | 63.30 ± 14.31 | 55.73 ± 4.55 | 49.15 ± 1.29 | 62.50 ± 0.14 | 48.46 ± 0.29 | 93.32 ± 5.58 |
11 | 88.96 ± 0.24 | 89.58 ± 0.72 | 85.72 ± 1.18 | 78.93 ± 1.62 | 76.12 ± 1.99 | 79.22 ± 3.76 | 77.93 ± 0.27 | 55.03 ± 0.38 | 77.80 ± 0.66 | 83.68 ± 1.77 |
12 | 84.05 ± 1.51 | 85.30 ± 1.18 | 81.92 ± 2.03 | 91.19 ± 3.16 | 93.99 ± 2.03 | 95.31 ± 1.51 | 93.18 ± 0.68 | 63.45 ± 0.34 | 94.00 ± 0.53 | 85.42 ± 6.28 |
13 | 73.22 ± 0.17 | 72.67 ± 1.14 | 57.09 ± 1.28 | 81.54 ± 3.14 | 77.30 ± 3.55 | 81.30 ± 2.52 | 76.77 ± 0.68 | 69.65 ± 0.18 | 74.74 ± 1.40 | 82.18 ± 3.41 |
14 | 99.19 ± 0.00 | 99.11 ± 0.40 | 97.04 ± 0.45 | 98.91 ± 0.60 | 99.07 ± 1.43 | 98.46 ± 2.01 | 98.79 ± 0.00 | 97.37 ± 0.20 | 99.19 ± 0.00 | 100.00 ± 0.00 |
15 | 96.97 ± 0.10 | 97.12 ± 0.14 | 97.82 ± 0.30 | 96.49 ± 1.89 | 95.60 ± 1.76 | 95.62 ± 2.45 | 99.03 ± 0.22 | 92.18 ± 0.85 | 97.46 ± 0.42 | 100.00 ± 0.00 |
OA | 79.69 ± 0.23 | 79.84 ± 0.13 | 77.98 ± 0.53 | 81.41 ± 1.31 | 82.98 ± 1.61 | 82.47 ± 1.06 | 81.93 ± 0.15 | 72.20 ± 0.23 | 82.08 ± 0.11 | 84.78 ± 1.24 |
AA | 82.60 ± 0.16 | 82.71 ± 0.15 | 80.34 ± 0.38 | 84.13 ± 1.12 | 85.12 ± 1.27 | 84.83 ± 0.98 | 84.47 ± 0.17 | 75.87 ± 0.13 | 84.41 ± 0.17 | 87.16 ± 1.00 |
Kappa × 100 | 77.97 ± 0.25 | 78.15 ± 0.14 | 76.15 ± 0.57 | 79.93 ± 1.40 | 81.60 ± 1.72 | 81.09 ± 1.12 | 80.50 ± 0.17 | 70.15 ± 0.25 | 80.65 ± 0.15 | 83.52 ± 1.35 |
Class No. | RF | SVM | RVFL | CNN-2D | ResNet | DenseNet | GCN | SGC | APPNP | GCRVFL |
---|---|---|---|---|---|---|---|---|---|---|
1 | 87.51 ± 0.29 | 85.23 ± 0.00 | 91.33 ± 0.55 | 90.74 ± 1.70 | 85.54 ± 2.41 | 82.79 ± 5.31 | 94.10 ± 0.04 | 80.91 ± 5.22 | 94.29 ± 0.11 | 90.77 ± 1.22 |
2 | 90.82 ± 0.17 | 91.14 ± 0.00 | 65.12 ± 3.48 | 99.39 ± 0.28 | 99.68 ± 0.60 | 97.59 ± 3.29 | 100.00 ± 0.00 | 99.96 ± 0.04 | 100.00 ± 0.00 | 100.00 ± 0.00 |
3 | 97.24 ± 0.00 | 95.52 ± 0.00 | 97.24 ± 0.00 | 97.66 ± 1.14 | 99.07 ± 1.40 | 99.89 ± 0.16 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
4 | 71.89 ± 0.25 | 72.53 ± 0.00 | 48.25 ± 0.97 | 86.36 ± 0.84 | 86.41 ± 2.72 | 84.73 ± 2.45 | 93.23 ± 0.34 | 85.64 ± 2.83 | 92.60 ± 0.19 | 83.77 ± 0.19 |
5 | 72.76 ± 0.55 | 81.02 ± 0.00 | 89.18 ± 0.26 | 78.08 ± 1.38 | 80.71 ± 3.85 | 77.33 ± 4.02 | 75.10 ± 0.08 | 71.01 ± 1.22 | 76.58 ± 1.08 | 79.94 ± 0.82 |
6 | 87.13 ± 0.08 | 90.32 ± 0.00 | 74.53 ± 0.46 | 81.92 ± 1.51 | 92.04 ± 4.85 | 94.84 ± 3.17 | 96.08 ± 0.01 | 83.95 ± 4.28 | 94.33 ± 1.71 | 95.84 ± 1.45 |
7 | 53.06 ± 0.60 | 45.92 ± 0.00 | 29.32 ± 1.80 | 71.72 ± 1.45 | 77.52 ± 2.69 | 76.66 ± 4.54 | 75.26 ± 0.06 | 77.39 ± 2.89 | 75.47 ± 0.21 | 74.66 ± 0.15 |
8 | 25.34 ± 0.09 | 25.55 ± 0.00 | 23.57 ± 0.05 | 24.29 ± 0.33 | 25.95 ± 1.59 | 23.35 ± 2.54 | 23.31 ± 0.09 | 41.05 ± 8.56 | 23.49 ± 0.09 | 24.99 ± 0.00 |
9 | 71.13 ± 0.51 | 84.53 ± 0.00 | 85.89 ± 0.20 | 85.84 ± 0.76 | 82.88 ± 5.25 | 82.18 ± 1.62 | 86.00 ± 0.05 | 80.52 ± 4.17 | 86.07 ± 0.56 | 86.73 ± 0.08 |
10 | 84.50 ± 0.45 | 81.38 ± 0.01 | 60.95 ± 0.83 | 94.85 ± 0.73 | 89.22 ± 5.66 | 92.21 ± 2.25 | 95.04 ± 0.19 | 89.86 ± 2.46 | 95.00 ± 0.80 | 97.89 ± 0.23 |
11 | 66.98 ± 0.37 | 81.84 ± 0.00 | 39.23 ± 1.53 | 93.15 ± 0.46 | 94.06 ± 1.34 | 95.68 ± 1.89 | 86.68 ± 0.79 | 88.51 ± 3.01 | 86.40 ± 1.41 | 92.86 ± 0.11 |
12 | 100.00 ± 0.00 | 99.40 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.52 ± 0.23 | 100.00 ± 0.00 | 100.00 ± 0.00 |
13 | 96.66 ± 0.10 | 96.51 ± 0.00 | 94.82 ± 0.39 | 95.52 ± 1.48 | 92.97 ± 3.65 | 96.43 ± 1.26 | 93.56 ± 0.50 | 90.08 ± 0.75 | 93.98 ± 0.11 | 97.97 ± 0.17 |
14 | 91.15 ± 0.14 | 86.07 ± 0.00 | 89.17 ± 0.09 | 98.62 ± 0.62 | 97.84 ± 1.60 | 98.85 ± 0.76 | 97.08 ± 0.34 | 98.81 ± 0.85 | 98.32 ± 0.25 | 99.88 ± 0.01 |
15 | 95.26 ± 0.28 | 43.56 ± 0.00 | 72.22 ± 1.19 | 98.00 ± 1.17 | 97.22 ± 3.04 | 94.15 ± 7.65 | 99.89 ± 0.11 | 99.33 ± 0.44 | 99.33 ± 0.00 | 100.00 ± 0.00 |
16 | 41.70 ± 1.70 | 1.38 ± 0.00 | 5.40 ± 0.99 | 21.34 ± 6.12 | 14.80 ± 10.54 | 5.01 ± 1.62 | 13.83 ± 1.38 | 13.54 ± 2.47 | 15.42 ± 1.78 | 72.13 ± 4.15 |
OA | 80.42 ± 0.19 | 85.36 ± 0.00 | 82.82 ± 0.13 | 85.60 ± 0.39 | 86.15 ± 2.20 | 86.58 ± 0.94 | 86.74 ± 0.11 | 81.94 ± 0.21 | 86.92 ± 0.09 | 89.21 ± 0.32 |
AA | 77.07 ± 0.21 | 72.62 ± 0.00 | 66.64 ± 0.39 | 82.34 ± 0.41 | 82.24 ± 0.81 | 81.36 ± 0.38 | 83.07 ± 0.10 | 81.25 ± 0.52 | 83.20 ± 0.03 | 87.34 ± 0.05 |
Kappa × 100 | 76.03 ± 0.21 | 82.00 ± 0.00 | 78.77 ± 0.17 | 82.54 ± 0.48 | 83.21 ± 2.59 | 83.67 ± 1.14 | 83.85 ± 0.15 | 78.25 ± 0.35 | 84.10 ± 0.10 | 86.80 ± 0.40 |
Class No. | RF | SVM | RVFL | CNN-2D | ResNet | DenseNet | GCN | SGC | APPNP | Ours |
---|---|---|---|---|---|---|---|---|---|---|
1 | 98.70 ± 0.49 | 95.53 ± 0.88 | 93.27 ± 1.11 | 97.87 ± 2.11 | 98.54 ± 1.27 | 98.58 ± 1.29 | 99.21 ± 0.53 | 94.10 ± 5.19 | 99.66 ± 0.40 | 99.87 ± 0.03 |
2 | 98.11 ± 1.82 | 91.68 ± 3.53 | 90.83 ± 2.69 | 94.27 ± 7.20 | 98.06 ± 2.30 | 97.54 ± 1.73 | 99.96 ± 0.05 | 98.05 ± 1.38 | 99.83 ± 0.17 | 99.53 ± 0.34 |
3 | 88.66 ± 5.50 | 82.43 ± 2.14 | 75.89 ± 2.87 | 97.48 ± 1.59 | 99.18 ± 0.62 | 97.99 ± 2.09 | 99.69 ± 0.32 | 90.49 ± 5.67 | 99.60 ± 0.42 | 99.31 ± 0.08 |
4 | 99.29 ± 0.35 | 98.66 ± 0.54 | 95.68 ± 2.46 | 99.05 ± 1.07 | 98.94 ± 0.60 | 99.23 ± 0.47 | 99.07 ± 0.51 | 97.44 ± 1.73 | 99.24 ± 0.25 | 99.34 ± 0.22 |
5 | 96.31 ± 0.88 | 87.98 ± 3.10 | 85.01 ± 3.45 | 93.12 ± 3.66 | 95.76 ± 0.87 | 96.06 ± 1.09 | 97.86 ± 1.88 | 94.06 ± 6.34 | 98.21 ± 0.99 | 96.82 ± 0.36 |
6 | 99.28 ± 0.48 | 99.59 ± 0.27 | 96.75 ± 2.28 | 99.30 ± 0.93 | 99.95 ± 0.09 | 99.84 ± 0.14 | 99.88 ± 0.22 | 99.99 ± 0.01 | 99.93 ± 0.12 | 99.86 ± 0.14 |
7 | 98.93 ± 0.23 | 98.83 ± 0.34 | 96.62 ± 1.16 | 96.94 ± 2.58 | 99.76 ± 0.19 | 98.77 ± 1.10 | 98.95 ± 0.97 | 95.18 ± 6.38 | 99.15 ± 0.69 | 99.13 ± 0.70 |
8 | 67.26 ± 5.90 | 57.85 ± 3.45 | 56.01 ± 4.84 | 71.46 ± 7.33 | 58.73 ± 14.83 | 58.56 ± 12.21 | 74.55 ± 9.95 | 71.68 ± 7.38 | 83.78 ± 9.81 | 79.67 ± 0.08 |
9 | 99.14 ± 0.36 | 95.18 ± 1.85 | 93.42 ± 2.26 | 98.39 ± 1.63 | 99.93 ± 0.11 | 99.28 ± 0.69 | 99.73 ± 0.34 | 96.84 ± 2.94 | 99.95 ± 0.06 | 99.98 ± 0.02 |
10 | 79.96 ± 3.17 | 73.66 ± 4.10 | 71.12 ± 5.94 | 89.71 ± 4.45 | 92.55 ± 3.05 | 89.68 ± 3.54 | 92.01 ± 2.37 | 88.95 ± 5.51 | 92.55 ± 1.72 | 92.17 ± 0.21 |
11 | 92.48 ± 2.04 | 83.53 ± 2.64 | 82.16 ± 3.10 | 96.76 ± 3.23 | 98.97 ± 1.28 | 98.95 ± 1.07 | 99.64 ± 0.63 | 98.13 ± 1.52 | 99.41 ± 0.64 | 99.33 ± 0.38 |
12 | 98.12 ± 1.43 | 81.92 ± 7.26 | 77.71 ± 5.22 | 97.59 ± 3.10 | 98.79 ± 1.42 | 98.65 ± 0.76 | 99.70 ± 0.31 | 96.62 ± 2.76 | 99.76 ± 0.28 | 99.06 ± 0.52 |
13 | 97.81 ± 0.49 | 92.17 ± 3.42 | 81.18 ± 5.10 | 97.43 ± 2.56 | 98.19 ± 1.97 | 98.88 ± 0.56 | 98.75 ± 1.05 | 96.27 ± 2.79 | 99.75 ± 0.16 | 99.55 ± 0.00 |
14 | 90.93 ± 2.20 | 89.22 ± 2.14 | 78.51 ± 8.84 | 97.02 ± 2.09 | 99.60 ± 0.27 | 98.61 ± 0.47 | 99.79 ± 0.19 | 95.60 ± 3.62 | 99.83 ± 0.13 | 99.52 ± 0.48 |
15 | 59.64 ± 7.55 | 52.80 ± 6.67 | 51.44 ± 4.78 | 69.44 ± 6.35 | 57.27 ± 13.08 | 58.48 ± 12.33 | 84.58 ± 5.13 | 61.54 ± 12.91 | 72.57 ± 9.43 | 82.12 ± 0.59 |
16 | 93.72 ± 2.22 | 95.99 ± 0.98 | 90.65 ± 2.48 | 98.38 ± 0.79 | 97.87 ± 1.20 | 97.81 ± 0.85 | 98.25 ± 0.62 | 97.34 ± 1.08 | 98.42 ± 0.70 | 98.85 ± 0.48 |
OA | 84.86 ± 0.74 | 79.13 ± 0.92 | 76.51 ± 0.98 | 87.62 ± 1.74 | 84.51 ± 1.69 | 84.25 ± 1.06 | 91.74 ± 1.68 | 86.10 ± 1.47 | 92.17 ± 1.34 | 92.46 ± 0.06 |
AA | 91.15 ± 0.75 | 86.06 ± 0.39 | 82.27 ± 0.62 | 93.39 ± 0.85 | 93.26 ± 0.65 | 92.93 ± 0.28 | 96.35 ± 0.39 | 92.02 ± 1.05 | 96.35 ± 0.39 | 96.51 ± 0.03 |
Kappa × 100 | 83.16 ± 0.81 | 76.82 ± 1.04 | 73.90 ± 1.08 | 86.24 ± 1.92 | 82.78 ± 1.83 | 82.48 ± 1.12 | 90.84 ± 1.84 | 84.54 ± 1.64 | 91.28 ± 1.45 | 91.60 ± 0.10 |
Class No. | GCRVFL | RNN | MiniGCN | ViT | SpectralFormer |
---|---|---|---|---|---|
1 | 81.90 | 82.34 | 98.39 | 82.81 | 83.48 |
2 | 83.53 | 94.27 | 92.11 | 96.62 | 95.58 |
3 | 99.72 | 99.60 | 99.60 | 99.80 | 99.60 |
4 | 89.03 | 97.54 | 96.78 | 99.24 | 99.15 |
5 | 100.00 | 93.28 | 97.73 | 97.73 | 97.44 |
6 | 95.80 | 95.10 | 95.10 | 95.10 | 95.10 |
7 | 74.59 | 83.77 | 57.28 | 76.77 | 88.99 |
8 | 48.36 | 56.03 | 68.09 | 55.65 | 73.31 |
9 | 89.86 | 72.14 | 53.92 | 67.42 | 71.86 |
10 | 93.32 | 84.17 | 77.41 | 68.05 | 87.93 |
11 | 83.68 | 82.83 | 84.91 | 82.35 | 80.36 |
12 | 85.42 | 70.61 | 77.23 | 58.50 | 70.70 |
13 | 82.18 | 69.12 | 50.88 | 60.00 | 71.23 |
14 | 100.00 | 98.79 | 98.38 | 98.79 | 98.79 |
15 | 100.00 | 95.98 | 98.52 | 98.73 | 98.73 |
OA | 84.78 | 83.23 | 81.71 | 80.41 | 86.14 |
AA | 87.16 | 85.04 | 83.09 | 82.50 | 87.48 |
Kappa × 100 | 83.52 | 81.83 | 80.18 | 78.76 | 84.97 |
Data Set | RF | SVM | RVFL | CNN-2D | ResNet | DenseNet | GCN | SGC | APPNP | GCRVFL |
---|---|---|---|---|---|---|---|---|---|---|
Houston 2013 | 1.202 | 0.085 | 0.068 | 41.429 | 74.707 | 74.213 | 505.971 | 183.757 | 446.336 | 0.846 |
Indian Pines 2010 | 4.346 | 0.496 | 0.244 | 141.581 | 248.961 | 287.809 | 1812.658 | 562.704 | 1428.152 | 3.852 |
Salinas | 0.650 | 0.028 | 0.016 | 9.456 | 9.086 | 10.683 | 145.545 | 45.995 | 134.625 | 0.341 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zhang, Z.; Cai, Y.; Liu, X.; Zhang, M.; Meng, Y. An Efficient Graph Convolutional RVFL Network for Hyperspectral Image Classification. Remote Sens. 2024, 16, 37. https://doi.org/10.3390/rs16010037