A Network Representation Learning Model Based on Multiple Remodeling of Node Attributes
Abstract
1. Introduction
- (1) An unsupervised network embedding model, MRNR, is proposed. It exploits not only network structural features and node text features but also triplet features constructed from the textual relationships between nodes. The representation vectors learned in this way therefore encode more feature factors, better reflect the multifaceted characteristics of nodes, and improve the accuracy and robustness of node representations. By applying an attention mechanism to the triplets and co-occurring words, different weights are assigned according to importance and similarity, which improves the expressiveness and discriminative power of the node representations.
- (2) Unlike existing graph attention network models, the MRNR algorithm proposed in this paper directly computes distinct weights for the co-occurring words and triplets in the text; these weights constitute fine-grained attention information (a minimal illustration follows this list). Existing attention-based network representation learning algorithms, by contrast, typically use text features only as the initial representation vectors of nodes and then adjust the vector elements inside the neural network, so the text features participate insufficiently in training.
- (3) The MRNR algorithm has a sound framework and clear objectives; it is an unsupervised machine learning method that models multiple features within a unified framework. Unlike existing graph neural networks, the MRNR algorithm proposed in this paper can be applied to unlabeled as well as labeled networks.
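This overview does not reproduce the paper's attention formulas. As an illustration only (the function names and the dot-product scoring are assumptions, not the authors' implementation), the following sketch shows one common way to obtain such fine-grained weights: each co-occurring word or triplet vector is scored against the node vector, and the scores are softmax-normalized.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(node_vec, feature_vecs):
    """Weight each word/triplet feature vector by its softmax-normalized
    similarity to the node vector, then return the weighted sum."""
    scores = feature_vecs @ node_vec      # one similarity score per feature
    weights = softmax(scores)             # fine-grained attention weights
    return weights @ feature_vecs, weights

# Toy usage: one node vector and four co-occurring word vectors.
rng = np.random.default_rng(0)
node = rng.normal(size=8)
words = rng.normal(size=(4, 8))
pooled, weights = attention_pool(node, words)
print(weights)  # each co-occurring word receives a different weight
```

Dot-product scoring is only one possible choice; a learned attention parameter (the att of Section 3) could replace it.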
2. Related Work
3. Algorithm Design
3.1. Definitions
3.2. MRNR Modeling
- (1) Network node relationship modeling
- (2) Node text relationship modeling
- (3) Triplet relationship modeling (a combined sketch of (1) and (3) follows this list)
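The bodies of these subsections are not reproduced in this extract. As a hedged sketch of how components (1) and (3) are typically formulated, following the cited DeepWalk and TransE papers (skip-gram with negative sampling over random-walk pairs, and a translation-based margin loss for triplets), consider the code below; component (2) corresponds to the attention-weighted word pooling sketched in the introduction. All names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def node_relation_loss(v, ctx, negs):
    """(1) Skip-gram with negative sampling: pull node v toward its
    context node ctx, push it away from sampled negative nodes."""
    pos = -np.log(sigmoid(v @ ctx))
    neg = -np.log(sigmoid(-(negs @ v))).sum()
    return pos + neg

def triplet_loss(h, r, t, h_neg, t_neg, margin=1.0):
    """(3) TransE-style margin ranking loss for a triplet (h, r, t): the
    translation h + r should lie closer to t than it does for a corrupted
    triplet (h_neg, r, t_neg)."""
    dist = lambda a, b, c: np.linalg.norm(a + b - c)
    return max(0.0, margin + dist(h, r, t) - dist(h_neg, r, t_neg))

# Toy embeddings of dimension 8 standing in for learned vectors.
v, ctx = rng.normal(size=8), rng.normal(size=8)
negs = rng.normal(size=(5, 8))                    # 5 negative samples
h, r, t, h_neg, t_neg = (rng.normal(size=8) for _ in range(5))

total = node_relation_loss(v, ctx, negs) + triplet_loss(h, r, t, h_neg, t_neg)
print(total)
```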
4. Experimentation
4.1. Data Sets
4.2. Introduction to the Contrast Algorithm
- (1) DeepWalk
- (2) LINE
- (3) GraRep
- (4) MFDW
- (5) Text Feature (TF)
- (6) TADW
4.3. Experimental Setup
4.4. Analysis of Experimental Results
4.5. Network Embedding Visualization
4.6. Case Analysis
5. Summary
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Tu, C.C.; Yang, C.; Liu, Z.Y.; Sun, M.S. Network representation learning: An overview. Sci. Sin. Informationis 2017, 47, 980–996.
- Zhang, D.K.; Yin, J.; Zhu, X.Q.; Zhang, C.Q. Network representation learning: A survey. IEEE Trans. Big Data 2018, 6, 3–28.
- Yang, C.; Xiao, Y.X.; Zhang, Y.; Sun, Y.Z.; Han, J.W. Heterogeneous network representation learning: A unified framework with survey and benchmark. IEEE Trans. Knowl. Data Eng. 2020, 34, 4861–4867.
- Cui, P.; Wang, X.; Pei, J.; Zhu, W.W. A survey on network embedding. IEEE Trans. Knowl. Data Eng. 2018, 31, 833–852.
- Han, Z.M.; Liu, D.; Zheng, C.Y. Coupling network vertex representation learning based on network embedding method. Chin. Sci. Inf. Sci. 2020, 50, 1197–1216.
- Wang, D.X.; Cui, P.; Zhu, W.W. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016.
- Mikolov, T.; Karafiát, M.; Burget, L.; Černocký, J.; Khudanpur, S. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, 26–30 September 2010.
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013.
- Liu, Z.Y.; Sun, M.S.; Lin, X.K.; Xie, R.B. Knowledge representation learning: A review. J. Comput. Res. Dev. 2016, 2, 247–261.
- Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014.
- Perozzi, B.; Al-Rfou, R.; Skiena, S. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014.
- Grover, A.; Leskovec, J. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016.
- Tang, J.; Qu, M.; Wang, M.Z.; Zhang, M.; Yan, J.; Mei, Q. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015.
- Derr, T.; Ma, Y.; Tang, J. Signed graph convolutional network. arXiv 2018, arXiv:1808.06354.
- You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; Shen, Y. Graph contrastive learning with augmentations. arXiv 2020, arXiv:2010.13902.
- Cao, S.S.; Lu, W.; Xu, Q.K. GraRep: Learning graph representations with global structural information. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, Melbourne, Australia, 18–23 October 2015.
- Wang, X.; Ji, H.Y.; Shi, C.; Wang, B.; Ye, Y.F.; Cui, P. Heterogeneous graph attention network. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019.
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph attention networks. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Hamilton, W.L.; Ying, R.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the Thirty-First Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017.
- Thekumparampil, K.K.; Wang, C.; Oh, S.; Li, L.J. Attention-based graph neural network for semi-supervised learning. arXiv 2018, arXiv:1803.03735.
- Cen, Y.K.; Zou, X.; Zhang, J.W.; Yang, H.X.; Zhou, J.R.; Tang, J. Representation learning for attributed multiplex heterogeneous network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019.
- Li, J.; Liu, Y.; Zou, L. DynGCN: A dynamic graph convolutional network based on spatial-temporal modeling. In Proceedings of the 21st International Conference on Web Information Systems Engineering, Amsterdam, The Netherlands, 20–24 October 2020; Springer: Cham, Switzerland, 2020; pp. 83–95.
- Dong, W.; Wu, J.S.; Luo, Y.; Ge, Z.Y.; Wang, P. Node representation learning in graph via node-to-neighbourhood mutual information maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022.
- Zhao, Y.; Feng, H.L.; Zhou, H.; Yang, Y.R.; Chen, X.Y.; Xie, R.B.; Zhuang, F.Z.; Li, Q. EIGAT: Incorporating global information in local attention for knowledge representation learning. Knowl.-Based Syst. 2022, 237, 107909.
- Chang, Y.M.; Chen, C.; Hu, W.B.; Zheng, Z.B.; Zhou, X.C.; Chen, S.Z. Megnn: Meta-path extracted graph neural network for heterogeneous graph representation learning. Knowl.-Based Syst. 2022, 235, 107611.
- Sun, D.D.; Li, D.S.; Ding, Z.L.; Zhang, X.Y.; Tang, J. Dual-decoder graph autoencoder for unsupervised graph representation learning. Knowl.-Based Syst. 2021, 234, 107564.
- Yang, B.; Yih, W.T.; He, X.; Gao, J.; Deng, L. Embedding entities and relations for learning and inference in knowledge bases. arXiv 2015, arXiv:1412.6575.
- Dettmers, T.; Minervini, P.; Stenetorp, P.; Riedel, S. Convolutional 2D knowledge graph embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
- Yu, C.; Zhang, Z.; An, L.; Li, G. A knowledge graph completion model integrating entity description and network structure. Aslib J. Inf. Manag. 2023, 75, 500–522.
- Liu, W.Q.; Cai, H.Y.; Cheng, X.; Xie, S.F.; Yu, Y.P.; Zhang, D.K. Learning high-order structural and attribute information by knowledge graph attention networks for enhancing knowledge graph embedding. Knowl.-Based Syst. 2022, 250, 109002.
- Wei, H.Y.; Liang, J.; Liu, D.; Wang, F. Contrastive graph structure learning via information bottleneck for recommendation. In Proceedings of the Advances in Neural Information Processing Systems 35 (NeurIPS 2022), New Orleans, LA, USA, 28 November–9 December 2022.
- Cai, X.H.; Huang, C.; Xia, L.H.; Ren, X.B. LightGCL: Simple yet effective graph contrastive learning for recommendation. In Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023.
| Symbols | Meaning |
|---|---|
| G = (V, E) | the input network |
| V | the set of nodes |
| E | the set of edges |
| \|V\| | the number of nodes |
| Rv | the trained network representation vector |
| att | the attention parameter |
| T | the labels of node v |
|  | the loss of network node relationship modeling |
|  | the loss of learning the node's text word features and the attention mechanism |
|  | the negative sample set of node v |
|  | the sampling label of node v |
|  | the context node of node v |
|  | the set of nodes |
| σ(·) | the Sigmoid function |
| C | the node's sequence corpus |
|  | the learning rate |
|  | the sum of the word representation vectors |
|  | the word representation vectors |
|  | the attention weight of a word |
| h | the head entity |
| t | the tail entity |
| r | the relation entity |
| (h, r, t) | a knowledge triplet |
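The formulas these symbols support do not survive in this extract. As a hedged reconstruction based on the cited skip-gram (DeepWalk) and TransE formulations, the standard forms read as follows; the paper's exact definitions may differ.

```latex
% Hedged reconstruction of the losses named in the symbol table, following
% the cited skip-gram and TransE formulations; the paper's exact forms may differ.
\begin{align}
  % (1) Node relationship loss: skip-gram with negative sampling, where u is
  % the context node of v and NEG(v) is the negative sample set of v.
  \mathcal{L}_{1} &= -\log \sigma\!\left(\mathbf{r}_u^{\top}\mathbf{r}_v\right)
      - \sum_{z \in NEG(v)} \log \sigma\!\left(-\mathbf{r}_z^{\top}\mathbf{r}_v\right) \\
  % (2) Text modeling: attention-weighted sum of word vectors, where a_i is
  % the attention weight of word w_i.
  \mathbf{s}_v &= \sum_{i} a_i \mathbf{w}_i, \qquad \sum_{i} a_i = 1 \\
  % (3) Triplet modeling: TransE translation score for a triplet (h, r, t).
  f(h, r, t) &= \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert_2
\end{align}
```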
| Data Set | Number of Nodes | Number of Edges | Average Degree | Training Set | Validation Set | Test Set |
|---|---|---|---|---|---|---|
| Citeseer | 4610 | 5923 | 2.57 | 1333 | 667 | 2610 |
| DBLP | 17,725 | 105,781 | 11.926 | 3333 | 1667 | 12,725 |
| SDBLP | 3119 | 39,516 | 25.339 | 666 | 334 | 3119 |
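For an undirected graph the average degree is 2|E|/|V|, which can be checked directly against the table: Citeseer and SDBLP match the reported values, while DBLP comes out at about 11.936 rather than 11.926, suggesting rounding or a slightly different edge count in the source.

```python
# Average degree of an undirected graph: 2 * |E| / |V|.
datasets = {"Citeseer": (4610, 5923), "DBLP": (17725, 105781), "SDBLP": (3119, 39516)}
for name, (n_nodes, n_edges) in datasets.items():
    print(f"{name}: {2 * n_edges / n_nodes:.3f}")
# Citeseer (2.570) and SDBLP (25.339) match the table; DBLP prints 11.936.
```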
| Data Set | Method of Comparison | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|---|---|---|---|---|---|---|---|---|---|---|
| Citeseer | DeepWalk | 55.89 | 59.30 | 60.89 | 61.48 | 62.19 | 62.30 | 62.62 | 62.33 | 63.95 |
|  | LINE | 42.64 | 47.06 | 48.04 | 49.57 | 50.43 | 51.02 | 51.18 | 53.07 | 53.63 |
|  | GraRep | 39.38 | 53.09 | 57.85 | 59.75 | 59.97 | 61.05 | 61.57 | 62.09 | 60.89 |
|  | MFDW | 57.62 | 60.79 | 62.33 | 63.05 | 62.96 | 63.00 | 63.00 | 63.48 | 64.30 |
|  | Text Feature | 57.69 | 61.30 | 62.76 | 63.05 | 63.48 | 63.30 | 62.87 | 62.19 | 63.95 |
|  | MRNR | 75.76 | 77.92 | 78.56 | 79.15 | 79.62 | 79.12 | 79.18 | 79.22 | 79.93 |
| DBLP | DeepWalk | 62.26 | 64.34 | 65.42 | 65.98 | 66.24 | 66.18 | 66.60 | 67.03 | 66.77 |
|  | LINE | 64.49 | 66.53 | 67.49 | 67.87 | 67.98 | 68.30 | 69.03 | 68.89 | 68.86 |
|  | GraRep | 58.92 | 65.92 | 67.26 | 67.92 | 68.77 | 68.88 | 69.26 | 69.56 | 69.79 |
|  | MFDW | 65.02 | 74.68 | 74.88 | 75.02 | 75.05 | 75.13 | 75.22 | 74.57 | 75.51 |
|  | Text Feature | 66.17 | 69.46 | 70.49 | 71.15 | 71.29 | 71.44 | 71.54 | 71.57 | 71.83 |
|  | MRNR | 81.96 | 81.82 | 83.96 | 84.04 | 84.88 | 85.02 | 85.37 | 85.33 | 84.89 |
| SDBLP | DeepWalk | 79.76 | 80.65 | 81.88 | 81.49 | 82.56 | 82.35 | 82.73 | 82.71 | 83.37 |
|  | LINE | 73.79 | 77.01 | 78.11 | 81.49 | 79.31 | 78.97 | 79.63 | 78.82 | 78.77 |
|  | GraRep | 80.99 | 82.52 | 84.14 | 84.78 | 84.97 | 84.17 | 85.36 | 85.27 | 84.95 |
|  | MFDW | 79.79 | 83.08 | 84.38 | 84.12 | 84.53 | 84.29 | 84.70 | 84.55 | 84.53 |
|  | Text Feature | 65.03 | 71.23 | 72.64 | 73.86 | 74.54 | 75.07 | 75.14 | 76.00 | 75.33 |
|  | MRNR | 82.02 | 82.66 | 83.85 | 84.34 | 84.66 | 84.56 | 84.68 | 85.63 | 85.37 |
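The paper's exact evaluation pipeline is not shown in this extract. A common protocol behind tables like the one above, assumed here as a hedged sketch, trains a logistic-regression classifier on a growing fraction of labeled node embeddings and reports accuracy on the remainder; all data and names below are stand-ins for illustration.

```python
# Hedged sketch of a typical node-classification protocol: train a linear
# classifier on x% of the node embeddings, evaluate on the remaining nodes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))   # stand-in for learned node vectors
labels = rng.integers(0, 3, size=500)     # stand-in class labels

for ratio in (0.1, 0.3, 0.5, 0.7, 0.9):
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, labels, train_size=ratio, random_state=0, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{ratio:.0%} train -> accuracy {clf.score(X_te, y_te):.4f}")
```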
| Algorithm | Vertex Title |
|---|---|
| DeepWalk | Statistics Localization Regions and Modular Symmetries in Quantum Field Theory |
|  | Extensions of Conformal Nets and Super Selection Structures |
|  | Modular Covariance Pct Spin and Statistics |
| TADW | Remarks on Causality in Relativistic Quantum Field Theory |
|  | Modular Covariance Pct Spin and Statistics |
|  | An Algebraic Spin and Statistics Theorem |
| MRNR | Higher-dimensional Algebra and Topological Quantum Field Theory |
|  | Statistics Localization Regions and Modular Symmetries in Quantum Field Theory |
|  | Quantum Field Theory as Eigenvalue Problem Manuscript |