Preserving Global Information for Graph Clustering with Masked Autoencoders
Abstract
1. Introduction
- To improve the learning ability of autoencoders, we propose to apply a masking technique to both the original features and the encoded representations. To the best of our knowledge, this is the first attempt to apply masked autoencoders to the graph clustering task.
- We propose a graph diffusion method to obtain a node’s global position representation and preserve it in the node’s features. We also propose a parameterized Laplacian matrix to make the global direction adaptive.
- We design a low–high-pass filter that retains the important low-frequency and high-frequency information in the data. Graph filtering yields a discriminative representation that encodes topological structure information into the node features.
- Experimental results on both homophilic and heterophilic datasets suggest that the proposed method outperforms existing graph clustering methods, including recent self-supervised learning methods.
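The feature-masking idea in the first contribution can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the zero-vector stand-in for a learnable mask token, and the 50% mask rate are all assumptions.

```python
import numpy as np

def mask_features(X, mask_rate=0.5, seed=0):
    """Randomly select a subset of nodes and replace their feature rows
    with a shared mask token (a zero vector here; in practice the token
    would be learnable). Returns the masked feature matrix and the
    indices of masked nodes, which the decoder is asked to reconstruct.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    n_mask = int(mask_rate * n)
    masked_idx = rng.choice(n, size=n_mask, replace=False)
    X_masked = X.copy()
    X_masked[masked_idx] = 0.0  # stand-in for a learnable [MASK] token
    return X_masked, masked_idx
```

The same operation can be applied a second time to the encoded features before decoding, which is what distinguishes masking "original information and encoded features" from input-only masking.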
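The low–high-pass filter in the third contribution can be sketched with the symmetrically normalized Laplacian. The specific combination `(I - L/2)^k X + beta * L X` and the hyper-parameters below are illustrative stand-ins, not the paper's exact filter:

```python
import numpy as np

def low_high_pass_filter(A, X, k=2, beta=0.5):
    """Illustrative low-high-pass graph filter.

    The low-pass term (I - L/2)^k X smooths features along edges; the
    high-pass term beta * L X keeps locally discriminative detail.
    L = I - D^{-1/2} A D^{-1/2} is the symmetrically normalized
    Laplacian (isolated nodes get a zero inverse-sqrt degree).
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    low = np.linalg.matrix_power(np.eye(n) - 0.5 * L, k) @ X
    high = L @ X
    return low + beta * high
```

A quick sanity check of the two terms: on a regular graph with identical node features, L annihilates the constant signal, so the high-pass term vanishes and the low-pass term returns the features unchanged.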
2. Related Work
2.1. Graph Clustering
2.2. Graph Masked Autoencoder
2.3. Graph Filtering
3. Methodology
3.1. Notation
3.2. Graph Filtering
3.3. Global Position Encoding
3.4. Graph Masked Autoencoder
3.4.1. Graph Mask Encoder
3.4.2. Graph Mask Decoder
3.4.3. Loss Function
4. Experiments
4.1. Datasets
4.2. Baselines
4.3. Setup
4.4. Results Analysis
4.5. Ablation Study
4.6. Parameter Analysis
5. Future Directions
- Exploring different masking strategies: In this work, we used a conventional random masking strategy as augmentation. Future work could explore alternative masking strategies to see whether they further improve the model’s learning ability, including different types of masks, such as one-sided and path masks, and examine their impact on performance. New masking strategies could be verified experimentally with the ACC metric.
- Adapting the masking strategy to different graph structures: While the GCMA method demonstrates remarkable performance on attributed graphs, which are characterized by their rich node features and interconnections, there remains a compelling need to apply it to diverse graph structures, such as hypergraphs and bipartite graphs.
- Application to other tasks: While our focus has been on graph clustering, the mask techniques we have developed could potentially be applied to other tasks. Future work could explore these possibilities.
- Integration of graph attention mechanisms: Incorporating graph attention mechanisms into masked autoencoders could improve their ability to capture important global structures in graphs. Graph attention mechanisms enable nodes to dynamically attend to their neighbors based on learned attention weights, allowing the model to focus on relevant global information while filtering out noise and irrelevant features. Extra constraints can be considered for attention weights.
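The attention mechanism this direction envisions can be sketched as a single-head, GAT-style layer. All names, the LeakyReLU slope of 0.2, and the dense adjacency representation are hypothetical choices for illustration; the extra constraints mentioned above would be imposed on the resulting weights.

```python
import numpy as np

def attention_scores(h, adj, W, a):
    """Single-head GAT-style neighborhood attention (illustrative).

    For each edge (i, j): e_ij = LeakyReLU(a^T [W h_i || W h_j]);
    scores are then softmax-normalized over each node's neighborhood,
    so every row of the returned matrix sums to one.
    """
    z = h @ W                                   # projected features
    n = adj.shape[0]
    e = np.full((n, n), -np.inf)                # non-edges stay -inf
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                s = np.concatenate([z[i], z[j]]) @ a
                e[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    e = np.exp(e - e.max(axis=1, keepdims=True))    # row-wise softmax
    e[adj == 0] = 0.0
    return e / e.sum(axis=1, keepdims=True)
```

Since the weights are learned per edge, a sparsity or entropy penalty on the rows of this matrix is one natural way to add the "extra constraints" suggested above.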
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
- Duan, Z.; Wang, C.; Zhong, W. SSGCL: Simple Social Recommendation with Graph Contrastive Learning. Mathematics 2024, 12, 1107. [Google Scholar] [CrossRef]
- Yang, X.; Liu, Y.; Zhou, S.; Wang, S.; Tu, W.; Zheng, Q.; Liu, X.; Fang, L.; Zhu, E. Cluster-guided contrastive graph clustering network. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023. [Google Scholar]
- Gan, J.; Liang, Y.; Du, L. Local-Sample-Weighted Clustering Ensemble with High-Order Graph Diffusion. Mathematics 2023, 11, 1340. [Google Scholar] [CrossRef]
- Liu, Y.; Yang, X.; Zhou, S.; Liu, X.; Wang, S.; Liang, K.; Tu, W.; Li, L. Simple contrastive graph clustering. IEEE Trans. Neural Netw. Learn. Syst. 2023. [Google Scholar] [CrossRef] [PubMed]
- Tian, F.; Gao, B.; Cui, Q.; Chen, E.; Liu, T.Y. Learning deep representations for graph clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014; Volume 28. [Google Scholar]
- Hassani, K.; Khasahmadi, A.H. Contrastive multi-view representation learning on graphs. In Proceedings of the International Conference on Machine Learning, PMLR 2020, Virtual Event, 13–18 July 2020; pp. 4116–4126. [Google Scholar]
- Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; Wang, L. Deep graph contrastive representation learning. arXiv 2020, arXiv:2006.04131. [Google Scholar]
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. [Google Scholar]
- He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16000–16009. [Google Scholar]
- Hou, Z.; Liu, X.; Cen, Y.; Dong, Y.; Yang, H.; Wang, C.; Tang, J. Graphmae: Self-supervised masked graph autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 594–604. [Google Scholar]
- Xie, X.; Chen, W.; Kang, Z.; Peng, C. Contrastive graph clustering with adaptive filter. Expert Syst. Appl. 2023, 219, 119645. [Google Scholar] [CrossRef]
- Chien, E.; Peng, J.; Li, P.; Milenkovic, O. Adaptive Universal Generalized PageRank Graph Neural Network. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021. [Google Scholar]
- Li, G.; Müller, M.; Ghanem, B.; Koltun, V. Training graph neural networks with 1000 layers. In Proceedings of the International Conference on Machine Learning, PMLR 2021, Virtual Event, 18–24 July 2021; pp. 6437–6449. [Google Scholar]
- Bo, D.; Wang, X.; Shi, C.; Zhu, M.; Lu, E.; Cui, P. Structural deep clustering network. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 1400–1410. [Google Scholar]
- Kipf, T.N.; Welling, M. Variational graph auto-encoders. In Proceedings of the NIPS Bayesian Deep Learning Workshop, Barcelona, Spain, 10 December 2016. [Google Scholar]
- Wang, C.; Pan, S.; Long, G.; Zhu, X.; Jiang, J. Mgae: Marginalized graph autoencoder for graph clustering. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, Singapore, 6–10 November 2017; pp. 889–898. [Google Scholar]
- Wang, C.; Pan, S.; Hu, R.; Long, G.; Jiang, J.; Zhang, C. Attributed Graph Clustering: A Deep Attentional Embedding Approach. In Proceedings of the IJCAI-19, Macao, China, 10–16 August 2019. [Google Scholar]
- Hui, B.; Zhu, P.; Hu, Q. Collaborative graph convolutional networks: Unsupervised learning meets semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 4215–4222. [Google Scholar]
- Wang, C.; Pan, S.; Celina, P.Y.; Hu, R.; Long, G.; Zhang, C. Deep neighbor-aware embedding for node clustering in attributed graphs. Pattern Recognit. 2022, 122, 108230. [Google Scholar] [CrossRef]
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, PMLR 2020, Virtual Event, 13–18 July 2020; pp. 1597–1607. [Google Scholar]
- Grill, J.B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.H.; Buchatskaya, E.; Doersch, C.; Pires, B.A.; Guo, Z.D.; Azar, M.G.; et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv 2020, arXiv:2006.07733. [Google Scholar]
- You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; Shen, Y. Graph contrastive learning with augmentations. Adv. Neural Inf. Process. Syst. 2020, 33, 5812–5823. [Google Scholar]
- Zhu, J.; Rossi, R.A.; Rao, A.; Mai, T.; Lipka, N.; Ahmed, N.K.; Koutra, D. Graph Neural Networks with Heterophily. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021; Volume 35, pp. 11168–11176. [Google Scholar]
- Veličković, P.; Fedus, W.; Hamilton, W.L.; Liò, P.; Bengio, Y.; Hjelm, R.D. Deep Graph Infomax. In Proceedings of the ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Li, Y.; Hu, P.; Liu, Z.; Peng, D.; Zhou, J.T.; Peng, X. Contrastive Clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021; Volume 35, pp. 8547–8555. [Google Scholar]
- Xia, W.; Gao, Q.; Yang, M.; Gao, X. Self-supervised Contrastive Attributed Graph Clustering. arXiv 2021, arXiv:2110.08264. [Google Scholar]
- Liu, Y.; Tu, W.; Zhou, S.; Liu, X.; Song, L.; Yang, X.; Zhu, E. Deep Graph Clustering via Dual Correlation Reduction. In Proceedings of the AAAI, Virtual Event, 22 February–1 March 2022. [Google Scholar]
- Tan, Q.; Liu, N.; Huang, X.; Choi, S.H.; Li, L.; Chen, R.; Hu, X. S2GAE: Self-supervised graph autoencoders are generalizable learners with graph masking. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, Singapore, 27 February–3 March 2023; pp. 787–795. [Google Scholar]
- Li, J.; Wu, R.; Sun, W.; Chen, L.; Tian, S.; Zhu, L.; Meng, C.; Zheng, Z.; Wang, W. What’s Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 1268–1279. [Google Scholar]
- Shi, Y.; Dong, Y.; Tan, Q.; Li, J.; Liu, N. Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–25 October 2023; pp. 2259–2269. [Google Scholar]
- Zhang, X.; Liu, H.; Li, Q.; Wu, X.M. Attributed Graph Clustering via Adaptive Graph Convolution. In Proceedings of the IJCAI-19, Macao, China, 10–16 August 2019. [Google Scholar]
- Bruna, J.; Zaremba, W.; Szlam, A.; LeCun, Y. Spectral networks and deep locally connected networks on graphs. In Proceedings of the 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
- Henaff, M.; Bruna, J.; LeCun, Y. Deep convolutional networks on graph-structured data. arXiv 2015, arXiv:1506.05163. [Google Scholar]
- Kang, Z.; Liu, Z.; Pan, S.; Tian, L. Fine-grained Attributed Graph Clustering. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM), SIAM 2022, Alexandria, VA, USA, 28–30 April 2022; pp. 370–378. [Google Scholar]
- Chang, H.; Rong, Y.; Xu, T.; Huang, W.; Sojoudi, S.; Huang, J.; Zhu, W. Spectral graph attention network with fast eigen-approximation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, New York, NY, USA, 1–5 November 2021; pp. 2905–2909. [Google Scholar]
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Beyond Low-pass Filtering: Graph Convolutional Networks with Automatic Filtering. arXiv 2021, arXiv:2107.04755. [Google Scholar] [CrossRef]
- Li, S.; Kim, D.; Wang, Q. Beyond low-pass filters: Adaptive feature propagation on graphs. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Bilbao, Spain, 13–17 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 450–465. [Google Scholar]
- Zhang, X.; Xie, X.; Kang, Z. Graph Learning for Attributed Graph Clustering. Mathematics 2022, 10, 4834. [Google Scholar] [CrossRef]
- Li, P.; Wang, Y.; Wang, H.; Leskovec, J. Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning. arXiv 2020, arXiv:2009.00142. [Google Scholar]
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Wang, X.; Jin, D.; Cao, X.; Yang, L.; Zhang, W. Semantic community identification in large attribute networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
- Li, Q.; Wu, X.M.; Liu, H.; Zhang, X.; Guan, Z. Label efficient semi-supervised learning via graph filtering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9582–9591. [Google Scholar]
- Rozemberczki, B.; Allen, C.; Sarkar, R. Multi-scale attributed node embedding. J. Complex Netw. 2021, 9, cnab014. [Google Scholar] [CrossRef]
- Pan, S.; Hu, R.; Long, G.; Jiang, J.; Yao, L.; Zhang, C. Adversarially regularized graph autoencoder for graph embedding. In Proceedings of the IJCAI 2018, Stockholm, Sweden, 13–19 July 2018. [Google Scholar]
- Pan, S.; Hu, R.; Fung, S.F.; Long, G.; Jiang, J.; Zhang, C. Learning graph embedding with adversarial training methods. IEEE Trans. Cybern. 2019, 50, 2475–2487. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; Zhu, R.; Cheng, Y.; Shan, C.; Luo, S.; Li, D.; Qian, W. Finding global homophily in graph neural networks when meeting heterophily. In Proceedings of the International Conference on Machine Learning, PMLR 2022, Baltimore, MD, USA, 17–23 July 2022; pp. 13242–13256. [Google Scholar]
- Guo, L.; Dai, Q. End-to-end variational graph clustering with local structural preservation. Neural Comput. Appl. 2021, 34, 3767–3782. [Google Scholar] [CrossRef]
- Zhu, P.; Li, J.; Xiao, B.; Zhao, S.; Hu, Q. Collaborative Decision-Reinforced Self-Supervision for Attributed Graph Clustering. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 10851–10863. [Google Scholar] [CrossRef] [PubMed]
Dataset | Nodes | Edges | Features | Classes | Homophily |
---|---|---|---|---|---|
Cora | 2708 | 5429 | 1433 | 7 | 0.83 |
CiteSeer | 3327 | 4732 | 3703 | 6 | 0.71 |
PubMed | 19,717 | 44,338 | 500 | 3 | 0.79 |
Wiki | 2405 | 17,981 | 4973 | 17 | 0.46 |
Large Cora | 11,881 | 64,898 | 3780 | 10 | 0.73 |
Squirrel | 5201 | 217,073 | 2089 | 5 | 0.22 |
Chameleon | 2277 | 31,371 | 2325 | 5 | 0.23 |
Wisconsin | 251 | 515 | 1703 | 5 | 0.16 |
Cornell | 183 | 298 | 1703 | 5 | 0.11 |
Texas | 183 | 325 | 1703 | 5 | 0.06 |
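The homophily column above is consistent with an edge homophily ratio, i.e., the fraction of edges whose endpoints share a class label. The paper's exact homophily measure is not restated here, so the following is one common definition rather than a guaranteed reproduction:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges that connect two nodes with the same label.

    `edges` is an iterable of (u, v) index pairs and `labels` is an
    array of node class labels. Values near 1 indicate a homophilic
    graph (e.g. Cora, 0.83); values near 0 a heterophilic one
    (e.g. Texas, 0.06).
    """
    edges = list(edges)
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)
```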
Methods | Cora ACC% | Cora NMI% | Cora F1% | CiteSeer ACC% | CiteSeer NMI% | CiteSeer F1% | PubMed ACC% | PubMed NMI% | PubMed F1% | Wiki ACC% | Wiki NMI% | Wiki F1% | Large Cora ACC% | Large Cora NMI% | Large Cora F1% |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SCI [42] | 41.21 | 21.57 | 11.82 | 33.45 | 9.77 | 18.01 | 44.89 | 5.99 | 35.73 | 32.72 | 26.38 | 19.03 | 26.78 | 11.31 | 7.68 |
ARGE [45] | 64.00 | 44.90 | 61.90 | 57.30 | 35.00 | 54.60 | 59.12 | 23.17 | 58.41 | 41.40 | 39.50 | 38.27 | - | - | - |
ARVGE [45] | 63.80 | 45.00 | 62.70 | 54.40 | 26.10 | 52.90 | 58.22 | 20.62 | 23.04 | 41.55 | 40.01 | 37.80 | - | - | - |
GAE [16] | 53.25 | 40.69 | 41.97 | 41.26 | 18.34 | 29.13 | 64.08 | 22.97 | 49.26 | 17.33 | 11.93 | 15.35 | - | - | - |
VGAE [16] | 55.95 | 38.45 | 41.50 | 44.38 | 22.71 | 31.88 | 65.48 | 25.09 | 50.95 | 28.67 | 30.28 | 20.49 | - | - | - |
MGAE [17] | 63.43 | 45.57 | 38.01 | 63.56 | 39.75 | 39.49 | 43.88 | 8.16 | 41.98 | 50.14 | 47.97 | 39.20 | 38.04 | 32.43 | 29.02 |
AGC [32] | 68.92 | 53.68 | 65.61 | 67.00 | 41.13 | 62.48 | 69.78 | 31.59 | 68.72 | 47.65 | 45.28 | 40.36 | 40.54 | 32.46 | 31.84 |
DAEGC [18] | 70.40 | 52.80 | 68.20 | 67.20 | 39.70 | 63.60 | 67.10 | 26.60 | 65.90 | 38.25 | 37.63 | 23.64 | 39.87 | 32.81 | 19.05 |
SDCN [15] | 60.24 | 50.04 | 61.84 | 65.96 | 38.71 | 63.62 | 65.78 | 29.47 | 65.16 | - | - | - | - | - | - |
ARGA_AX [46] | 59.70 | 45.50 | 57.90 | 54.70 | 26.30 | 52.70 | 63.70 | 24.50 | 63.90 | - | - | - | - | - | - |
ARVGA_AX [46] | 71.10 | 52.60 | 69.30 | 58.10 | 33.80 | 52.50 | 64.00 | 23.90 | 64.40 | - | - | - | - | - | - |
DGI [25] | 71.81 | 54.09 | 69.88 | 68.60 | 43.75 | 64.64 | - | - | - | 44.37 | 42.20 | 40.16 | - | - | - |
GMM-VGAE [19] | 71.50 | 54.43 | 67.76 | 67.44 | 42.30 | 63.22 | 71.03 | 30.28 | 69.74 | - | - | - | - | - | - |
EVGC [48] | 72.95 | 55.76 | 71.01 | 67.02 | 41.89 | 62.89 | 70.80 | 35.63 | 70.32 | 51.46 | 49.37 | 45.14 | - | - | - |
DNENC-Att [20] | 70.40 | 52.80 | 68.20 | 67.20 | 39.70 | 63.60 | 67.10 | 26.60 | 65.90 | - | - | - | - | - | - |
DNENC-Con [20] | 68.30 | 51.20 | 65.90 | 69.20 | 42.60 | 63.90 | 67.70 | 27.50 | 67.50 | - | - | - | - | - | - |
FGC [35] | 72.90 | 56.12 | 63.27 | 69.01 | 44.02 | 64.43 | 70.01 | 31.56 | 69.10 | 51.10 | 44.12 | 34.79 | 48.25 | 35.24 | 35.52 |
CGC [12] | 75.15 | 56.90 | 66.22 | 69.31 | 43.61 | 64.74 | 67.43 | 33.07 | 67.14 | 59.04 | 53.20 | 45.43 | 50.18 | 34.10 | 43.79 |
CCGC [3] | 73.88 | 55.56 | 70.98 | 69.84 | 44.33 | 62.71 | 68.32 | 31.08 | 68.84 | 60.21 | 54.54 | 44.32 | 51.21 | 34.55 | 44.89 |
SCGC [5] | 73.88 | 56.10 | 70.81 | 70.81 | 45.25 | 64.80 | 67.76 | 33.82 | 68.23 | 60.43 | 53.76 | 44.56 | 51.09 | 34.02 | 44.56 |
GCMA | 76.12 | 57.21 | 71.43 | 71.95 | 45.98 | 65.21 | 72.04 | 33.45 | 71.04 | 61.32 | 55.43 | 45.80 | 51.85 | 36.01 | 44.23 |
Methods | Squirrel ACC% | Squirrel NMI% | Squirrel F1% | Chameleon ACC% | Chameleon NMI% | Chameleon F1% | Wisconsin ACC% | Wisconsin NMI% | Wisconsin F1% | Cornell ACC% | Cornell NMI% | Cornell F1% | Texas ACC% | Texas NMI% | Texas F1% |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DAEGC [18] | 25.55 | 2.36 | 24.07 | 31.08 | 7.89 | 9.17 | 39.62 | 12.02 | 6.22 | 42.56 | 12.37 | 30.20 | 45.99 | 11.25 | 18.09 |
ARVGA-Col-M [49] | - | - | - | - | - | - | 54.34 | 11.41 | - | - | - | - | 59.89 | 16.37 | - |
RWR-Col-M [49] | - | - | - | - | - | - | 53.58 | 16.25 | - | - | - | - | 57.22 | 13.82 | - |
FGC [35] | 25.11 | 1.32 | 22.13 | 34.21 | 11.31 | 9.05 | 50.19 | 12.92 | 25.93 | 44.10 | 8.60 | 32.68 | 53.48 | 5.16 | 17.04 |
CGC [12] | 27.23 | 2.98 | 20.57 | 36.31 | 11.21 | 12.97 | 55.85 | 23.03 | 27.29 | 44.62 | 14.11 | 21.91 | 61.50 | 21.48 | 27.20 |
CCGC [3] | 26.03 | 1.78 | 20.21 | 33.21 | 10.98 | 10.22 | 53.31 | 22.67 | 25.81 | 45.54 | 13.62 | 21.66 | 60.98 | 19.34 | 22.73 |
SCGC [5] | 26.76 | 1.98 | 20.99 | 34.11 | 11.37 | 10.67 | 54.34 | 21.57 | 25.81 | 45.76 | 13.53 | 22.09 | 61.22 | 19.77 | 21.94 |
GCMA | 30.15 | 4.21 | 24.83 | 39.97 | 15.32 | 14.33 | 56.95 | 24.12 | 29.11 | 46.30 | 15.76 | 22.78 | 63.20 | 22.01 | 29.22 |
Methods | Cora ACC% | Cora NMI% | Cora F1% | CiteSeer ACC% | CiteSeer NMI% | CiteSeer F1% | PubMed ACC% | PubMed NMI% | PubMed F1% | Wiki ACC% | Wiki NMI% | Wiki F1% | Large Cora ACC% | Large Cora NMI% | Large Cora F1% |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GCMA f | 72.74 | 50.83 | 64.04 | 68.88 | 42.97 | 62.93 | 68.55 | 31.86 | 67.77 | 58.14 | 53.97 | 46.23 | 44.81 | 28.36 | 32.69 |
GCMA | 74.17 | 54.25 | 67.26 | 60.18 | 38.90 | 56.76 | 67.75 | 33.37 | 67.72 | 57.01 | 52.02 | 39.25 | 47.36 | 33.73 | 43.61 |
GCMA | 75.43 | 55.75 | 68.20 | 66.94 | 41.63 | 61.59 | 68.91 | 32.43 | 68.87 | 59.22 | 55.13 | 43.52 | 50.92 | 21.40 | 27.20 |
GCMA G | 74.54 | 54.45 | 68.77 | 68.78 | 42.37 | 62.94 | 68.76 | 31.50 | 68.52 | 59.96 | 54.95 | 43.42 | 47.76 | 33.88 | 43.81 |
GCMA | 76.12 | 57.21 | 71.43 | 71.95 | 45.98 | 65.21 | 72.04 | 33.45 | 71.04 | 61.32 | 55.43 | 45.80 | 51.85 | 36.01 | 44.23 |
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, R. Preserving Global Information for Graph Clustering with Masked Autoencoders. Mathematics 2024, 12, 1574. https://doi.org/10.3390/math12101574