Multi-View Learning-Based Fast Edge Embedding for Heterogeneous Graphs
Abstract
1. Introduction
- Difficult to extract complex network features. With the development of big data technology, HINs have grown increasingly complex, so the latent features they contain are becoming more diverse and harder to extract. Existing indirect embedding models cannot learn edges directly, causing a significant loss of edge information, while direct edge models are generally designed for homogeneous information networks, resulting in poor feature separation and low feature extraction accuracy when applied to HINs.
- Difficult to balance speed and performance. As the size of HINs increases, network analysis tasks become more time-sensitive, requiring an embedding model that is both fast and accurate. However, existing edge embedding models usually convert the original network into a larger edge graph, and using a deep model to enhance nonlinear feature extraction further increases time complexity. Therefore, edge embedding models with both high speed and high performance require further research.
- A shallow single-view learning strategy is designed to rapidly learn intra-view features for each view in the edge graph, where each vertex type is considered a separate view;
- A novel shallow cross-view learning strategy is designed to further learn the inter-view features between two views;
- A multi-head graph attention mechanism is used to accurately aggregate local features of multiple views, so as to generate a global edge embedding;
- Extensive experiments on three network analysis tasks and four public datasets demonstrate the good performance of our MVFEE model.
2. Definition
3. Framework
3.1. Overview
3.2. Edge Graph Construction
3.3. Single-View Learning for Intra-View Features
- Sampling. To capture the one-order and high-order intra-view neighbors, we generate training samples for the skip-gram model. First, an R1-type vertex is randomly chosen from the edge graph as the starting vertex, and a random walk with a restart rate of 30% is repeatedly performed to obtain a vertex sequence of length L. Next, all R1-type vertexes are selected from the generated walk sequence to form a candidate subsequence. Then, this candidate subsequence is divided into several samples, each consisting of Win adjacent vertexes, where Win is a preset hyper-parameter representing the window size of a sample. According to Section 4.6, the default value of Win in this stage is set to three. For example, the first to the Win-th vertexes in the subsequence form the first sample, and the second to the (Win + 1)-th vertexes form the second sample. When the window size Win is three, a sample <5, vi, 6> is generated, where vi is the central vertex and the other vertexes are its contextual neighbors or positive vertexes. Additionally, the remaining vertexes 10 and 12 in the candidate subsequence are the negative vertexes. In this way, each sample includes both one-order and high-order proximity. Finally, this process is repeated until every R1-type vertex in the edge graph has been used as a starting vertex for a random walk.
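The following Python sketch illustrates this sampling procedure under simplified assumptions: the edge graph is a NetworkX graph whose nodes carry a "type" attribute, and the function and parameter names (random_walk_with_restart, intra_view_samples, win) are our own illustration rather than the authors' implementation.

```python
import random
import networkx as nx

def random_walk_with_restart(graph, start, length, restart_prob=0.3):
    """Walk of `length` vertexes that jumps back to `start` with probability 0.3."""
    walk = [start]
    current = start
    while len(walk) < length:
        neighbors = list(graph.neighbors(current))
        if random.random() < restart_prob or not neighbors:
            current = start                              # restart
        else:
            current = random.choice(neighbors)
        walk.append(current)
    return walk

def intra_view_samples(graph, vertex_type="R1", length=80, win=3):
    """Slide a window of size `win` over the R1-type subsequence of each walk."""
    samples = []
    for start in [v for v, t in graph.nodes(data="type") if t == vertex_type]:
        walk = random_walk_with_restart(graph, start, length)
        subseq = [v for v in walk if graph.nodes[v]["type"] == vertex_type]
        for i in range(len(subseq) - win + 1):
            window = subseq[i:i + win]
            center = window[win // 2]                               # central vertex
            context = window[:win // 2] + window[win // 2 + 1:]     # positive vertexes
            negatives = [v for v in subseq if v not in window]      # remaining vertexes
            samples.append((center, context, negatives))
    return samples
```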
- Input. The generated samples are the input. In this paper, each view has a private skip-gram model consisting of a simple three-layer structure: an input layer, a hidden layer, and an output layer, as shown in Figure 3a. There is a central matrix XC ∈ ℝ^(N×d) between the input layer and the hidden layer, and a context matrix XW ∈ ℝ^(d×N) between the hidden layer and the output layer, where the two matrices are randomly initialized, N is the number of R1-type vertexes, and d is the dimension of the edge embedding.
- Learning process. First, a sample <5, vi, 6> is divided into the central vertex vi and its contextual neighbors 5 and 6. Second, the one-hot code of the central vertex vi is used as the input of the skip-gram model to calculate the code of the hidden layer, defined as:
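The equation itself does not appear in this excerpt. In a standard skip-gram formulation, which the description above follows, the hidden-layer code is simply the row of the central matrix selected by the one-hot input (our reconstruction, not necessarily the authors' exact notation):

$$
\mathbf{h} = \mathbf{X}_C^{\top}\,\mathbf{e}_{v_i} = \mathbf{X}_C[v_i,:] \in \mathbb{R}^{d},
$$

where $\mathbf{e}_{v_i} \in \{0,1\}^{N}$ is the one-hot code of the central vertex $v_i$ and $\mathbf{X}_C \in \mathbb{R}^{N \times d}$ is the central matrix.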
- Loss Function. We aim to learn the proximity between vertexes in view R1 by adjusting the similarity between the central vertex vi and its contextual neighbors. Therefore, a sample is divided into the central vertex and its contextual neighbors. The contextual neighbors are the positive training samples for the central vertex, and the other R1-type vertexes are the negative training samples. Usually, the number of negative samples is much greater than the number of positive samples, so the negative sampling strategy is used to randomly select M samples from all negative samples as the final negative samples to speed up training. For a positive sample, the similarity to the central vertex vi should converge to one; for a negative sample, it should converge to zero. Therefore, the loss function is defined as follows.
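The loss formula is likewise not reproduced here. A standard skip-gram objective with negative sampling, consistent with the description above, would be (our reconstruction):

$$
\mathcal{L}_{R_1} = -\sum_{v_j \in \mathcal{P}(v_i)} \log \sigma\!\left(\mathbf{x}^{W}_{v_j} \cdot \mathbf{h}\right) \;-\; \sum_{m=1}^{M} \log \sigma\!\left(-\mathbf{x}^{W}_{v_m} \cdot \mathbf{h}\right),
$$

where $\sigma(\cdot)$ is the sigmoid function, $\mathcal{P}(v_i)$ is the set of contextual neighbors of the central vertex $v_i$, $\mathbf{x}^{W}_{v}$ is the column of the context matrix $\mathbf{X}_W$ for vertex $v$, and the $M$ negative vertexes $v_m$ are drawn by negative sampling.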
3.4. Cross-View Learning for Inter-View Features
- Sampling. To capture the one-order and high-order inter-view neighbors, we perform the inter-view sampling procedure. First, a T1-type link is randomly selected from the edge graph, and one vertex of this link is randomly selected as the starting vertex. From the starting vertex, a random walk with a restart rate of 30% is repeatedly performed to obtain a vertex sequence of length L. Second, all vertexes related to type T1 (i.e., belonging to one of the two end vertex types of type T1) in the vertex sequence are chosen sequentially as the positive subsequence, and the remaining vertexes are chosen sequentially as the negative subsequence. Next, the positive subsequence is divided into multiple windows of length Win. According to Section 4.6, the default value of Win in this stage is also set to three. For example, when Win is 3, [1,2,5] is a window and [2,5,7] is another window. Any two vertexes in a window constitute a positive sample, so the window [1,2,5] generates three positive samples <1,T1,2>, <1,T1,5>, and <2,T1,5>. In this way, the generated positive samples may contain one-order or high-order semantic similarity: for example, if vertex 1 is adjacent to vertex 2, these two vertexes have one-order semantic similarity; otherwise, they have high-order semantic similarity. Then, a large number of negative samples are generated from the negative subsequence in the same way. Finally, the process is repeated until every T1-type link has been used as a starting place for a random walk. The whole sampling process is shown in Figure 4a.
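A minimal sketch of the pair-generation step is shown below, assuming the positive subsequence has already been extracted from a walk; the function name and vertex IDs are illustrative only.

```python
from itertools import combinations

def pairs_from_subsequence(subseq, link_type="T1", win=3):
    """Slide a window of size `win` and emit every vertex pair inside each window."""
    pairs = set()
    for i in range(len(subseq) - win + 1):
        window = subseq[i:i + win]
        for u, v in combinations(window, 2):
            pairs.add((u, link_type, v))
    return pairs

# The windows of [1, 2, 5, 7] are [1, 2, 5] and [2, 5, 7]; the first window yields
# the positive samples <1,T1,2>, <1,T1,5>, and <2,T1,5>, as in the example above.
print(pairs_from_subsequence([1, 2, 5, 7]))
```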
- Input. The positive samples <vi, T1, vj> and the negative samples <vi, not T1, vo> are the first input. The intra-view feature matrices of all views, learned in Section 3.3, are the second input. The projection matrix of the T1-type semantic subspace, with size d × d, is the third input, which is initialized randomly.
- Learning process. Based on the idea that "if a link exists between two vertexes of different types, the coordinates of the two vertexes in the semantic subspace should be close", the cross-view feature learning of link type T1 projects two vertexes of different types into the same semantic subspace, and then pulls their coordinates as close together as possible when the two vertexes are one-order or high-order neighbors, or pushes their coordinates as far apart as possible when they are not neighbors.
- Loss Function. In the learning process, for a positive sample <vi, T1, vj>, vertex vi is pulled close to vertex vj in the T1 semantic subspace, increasing the similarity between them; for a negative sample <vi, not T1, vo>, vertex vi is pushed far from vertex vo in the T1 semantic subspace, decreasing the similarity between them. The similarity loss function is defined as follows.
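The similarity loss function is not reproduced in this excerpt. A plausible form, consistent with the description and with the d × d projection matrix of the T1-type semantic subspace introduced above (our reconstruction, written here as $\mathbf{W}_{T_1}$), is:

$$
\mathcal{L}_{T_1} = -\!\!\sum_{\langle v_i, T_1, v_j\rangle}\!\! \log \sigma\!\left((\mathbf{x}_{v_i}\mathbf{W}_{T_1}) \cdot (\mathbf{x}_{v_j}\mathbf{W}_{T_1})^{\top}\right) \;-\!\!\sum_{\langle v_i, \lnot T_1, v_o\rangle}\!\! \log \sigma\!\left(-(\mathbf{x}_{v_i}\mathbf{W}_{T_1}) \cdot (\mathbf{x}_{v_o}\mathbf{W}_{T_1})^{\top}\right),
$$

where $\mathbf{x}_{v} \in \mathbb{R}^{d}$ is the intra-view feature vector of vertex $v$ in its own view and $\sigma(\cdot)$ is the sigmoid function; positive pairs are pulled together and negative pairs are pushed apart in the T1 semantic subspace.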
3.5. Multi-View Feature Aggregation
- Input: The feature matrices of all views are the first input. The adjacency matrix A of the edge graph is the second input.
- Learning process: We take the attention head F1 as an example of this process.
- Loss Function: In the learning process, the inner product of the vertex features in the obtained feature matrix is used to compute the probability of adjacency between vertexes, and this probability is used to reconstruct the adjacency matrix of the edge graph as follows:
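The rest of this description is omitted in this excerpt. The sketch below shows how a single attention head such as F1 could aggregate a vertex's features from its neighbors in the edge graph, following the standard graph attention formulation [17]; the dense NumPy implementation and all variable names are our own illustration, not the authors' code.

```python
import numpy as np

def attention_head(X, A, W, a):
    """One graph-attention head.
    X: (N x d) vertex features, A: (N x N) adjacency of the edge graph,
    W: (d x d') projection matrix, a: (2*d') attention vector."""
    H = X @ W                                        # project features: N x d'
    N = H.shape[0]
    e = np.zeros((N, N))
    for i in range(N):                               # attention logits for all pairs
        for j in range(N):
            z = a @ np.concatenate([H[i], H[j]])
            e[i, j] = np.maximum(z, 0.2 * z)         # LeakyReLU with slope 0.2
    e = np.where(A > 0, e, -1e9)                     # keep only edge-graph neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True) # softmax over neighbors
    return alpha @ H                                 # attention-weighted aggregation
```

In MVFEE, several such heads would be applied and their outputs combined to fuse the features of multiple views into the global edge embedding; the exact combination rule is not shown in this excerpt.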
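The reconstruction formula does not appear in this excerpt. A common formulation matching this description takes the sigmoid of the inner products of the aggregated vertex features $\mathbf{Z}$ as the adjacency probabilities and penalizes the deviation from the true adjacency matrix (our reconstruction; a binary cross-entropy over the entries would be an equally plausible choice):

$$
\hat{\mathbf{A}} = \sigma\!\left(\mathbf{Z}\mathbf{Z}^{\top}\right), \qquad
\mathcal{L}_{rec} = \left\|\mathbf{A} - \hat{\mathbf{A}}\right\|_{F}^{2},
$$

where $\mathbf{A}$ is the adjacency matrix of the edge graph and $\sigma(\cdot)$ is the element-wise sigmoid.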
3.6. Model Training
4. Experiment
4.1. Experimental Setup
- Datasets. Four datasets were used in our experiments, as shown in Table 1. (1) AMiner [27] (http://arnetminer.org/aminernetwork (10 March 2023)). AMiner is an academic graph widely used in the research community for tasks such as paper recommendation, academic network analysis, and citation prediction. The latest version of AMiner contains billions of edges. To simplify this dataset, we extracted a core set with four node types and three edge types from it. (2) IMDb (https://datasets.imdbws.com/ (10 March 2023)). IMDb is a widely used online dataset that provides information about movies, TV shows, video games, and other forms of entertainment. It contains a vast collection of structured data about movies, including cast and crew information, plot summaries, user ratings, and other details. To simplify this dataset, we extracted a core set with four node types and three edge types from it. (3) ACM (https://www.aminer.org/citation (10 March 2023)). The ACM citation network dataset is commonly used for research in bibliometrics and network analysis, as well as for developing machine learning algorithms for citation prediction and recommendation systems. Since we are only interested in papers and their citation relationships, we extracted a core set consisting of four node types and four edge types. (4) Douban [28] (https://www.dropbox.com/s/u2ejjezjk08lz1o/Douban.tar.gz?dl=0 (10 March 2023)). The Douban dataset is a collection of structured data about movies, TV shows, music, books, and other forms of entertainment, along with user ratings, reviews, and other metadata. It contains nodes representing different types of media (movies, TV shows, music, and books), as well as nodes representing users and groups. To simplify the Douban dataset, we extracted a core set comprising four node types and three edge types.
- Tasks and metrics. Three downstream edge-based network analysis tasks and their corresponding evaluation metrics were selected. (1) Edge classification is the first task. This task predicts the category of an edge based on the previously learned low-dimensional edge embedding vectors and is common in recent network analysis work [29,30]. We pre-labeled edges in the four networks based on their end node labels, and the edge labels are invisible during model training. Micro-F1 and Macro-F1 are the two widely accepted evaluation metrics, with values in [0, 1]. Micro-F1 aggregates the true positives, false positives, and false negatives of all classes and computes a single global F1 value, so larger classes contribute more to the score. Macro-F1 calculates the F1 value independently for each class and then takes the arithmetic average over all classes, without considering the sample size differences of the classes. (2) Edge clustering is the second task. This task applies a simple K-Means algorithm to divide the edges in a HIN into several disjoint subgroups, each representing a class, based on the learned low-dimensional edge embedding vectors. Similar to edge classification, edge clustering is also a common network analysis task [31,32], and the edges in the four networks were pre-labeled. We selected NMI (Normalized Mutual Information) as the evaluation metric for this task, with values in [0, 1]. A higher NMI value indicates that the clustering results are more consistent with the labels, and vice versa. (3) Link prediction is the third task. Link prediction is also a very common network analysis task [33,34]. Based on the learned edge embeddings, this task predicts whether an edge exists in the network; if the edge exists, its label is set to one, otherwise it is set to zero. We chose ACC (Accuracy) and AUC (Area Under the Curve) as the evaluation metrics for this task, with values in [0, 1]. ACC measures the correctness of the model predictions, that is, the ratio of the number of correct predictions to the number of all predictions. AUC measures the probability that an existing edge receives a higher similarity score than a non-existent edge.
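For reference, all five metrics can be computed with scikit-learn; the snippet below is an illustrative sketch with toy labels and scores, not the evaluation code used in the experiments.

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             normalized_mutual_info_score, roc_auc_score)

# Edge classification: Micro-F1 and Macro-F1 over predicted edge categories.
y_true, y_pred = [0, 1, 2, 1, 0], [0, 1, 1, 1, 0]
print(f1_score(y_true, y_pred, average="micro"), f1_score(y_true, y_pred, average="macro"))

# Edge clustering: NMI between ground-truth labels and K-Means cluster assignments.
print(normalized_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0]))

# Link prediction: ACC on binary predictions, AUC on predicted edge scores.
print(accuracy_score([1, 0, 1, 0], [1, 0, 0, 0]))
print(roc_auc_score([1, 0, 1, 0], [0.9, 0.2, 0.6, 0.4]))
```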
- Baselines. As displayed in Table 2, the node embedding models HIN2vec and HAN, the indirect edge embedding models AspEm and HEER, and the direct edge embedding models Edge2vec and CensNet were used in the experiments. Among them, HAN and CensNet are deep models, while the others are shallow models. By adjusting the default parameters, the best experimental results were obtained for all these algorithms, and the performance of our proposed MVFEE model was compared with these six typical baselines on the four real-world datasets mentioned above.
- Parameter settings. The default parameters of each algorithm were set as follows, and for each dataset, we fine-tuned the parameters based on these default values to achieve the best performance. (1) HIN2vec. We set the window size to 3, the walk length to 80, and the embedding dimension to 128. The learning rate was set to 0.025, the number of negative samples to 5, and the number of iterations to 10. (2) HAN. We used four attention heads in each layer and set the embedding dimension to 128. The learning rate was set to 0.005 and the weight decay to 0.01. (3) AspEm. We set the number of negative samples to 5, the embedding dimension to 128, and the initial learning rate to 0.025. (4) HEER. We set the embedding dimension to 128, the window size to 3, and the batch size to 50. (5) Edge2vec. We set the number of negative samples to 500, the embedding dimension to 128, and the learning rate to 0.025. (6) CensNet. The dropout was set to 0.5, the initial learning rate to 0.01, the embedding dimension to 128, the weight decay to 0.0005, and the number of epochs to 200.
- Setting. The experimental platform was a PC server equipped with a T4 graphics card. The server has a 12-core Intel Core i7-12700 processor, 128 GB of RAM, and runs on the Ubuntu 20.04 operating system. We programmed the MVFEE algorithm using the PyCharm IDE and the Python programming language. The source code for the other algorithms can be downloaded from URLs in Table 2.
4.2. Edge Classification
4.2.1. Performance Analysis of Our MVFEE Model
4.2.2. Performance Analysis of Multi-View Feature Aggregation
4.3. Edge Clustering
4.3.1. Performance Analysis of Our MVFEE Model
4.3.2. Performance Analysis of Multi-View Feature Aggregation
4.4. Link Prediction
4.4.1. Performance Analysis of Our MVFEE Model
4.4.2. Performance Analysis of Multi-View Feature Aggregation
4.5. Scalability
4.6. Parameter Sensitivity
4.7. Visualization
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Xiang, L.; Yu, Y.; Zhu, J. Moment-based analysis of pinning synchronization in complex networks. Asian J. Control 2022, 24, 669–685.
2. Chen, L.; Wang, L.; Zeng, C.; Liu, H.; Chen, J. DHGEEP: A Dynamic Heterogeneous Graph-Embedding Method for Evolutionary Prediction. Mathematics 2022, 10, 4193.
3. Zhang, C.; Li, K.; Wang, S.; Zhou, B.; Wang, L.; Sun, F. Learning Heterogeneous Graph Embedding with Metapath-Based Aggregation for Link Prediction. Mathematics 2023, 11, 578.
4. Chen, L.; Li, Y.; Deng, X.; Liu, Z.; Lv, M.; He, T. Semantic-aware network embedding via optimized random walk and paragraph2vec. J. Comput. Sci. 2022, 63, 101825.
5. Huang, C.; Fang, Y.; Lin, X.; Cao, X.; Zhang, W. ABLE: Meta-Path Prediction in Heterogeneous Information Networks. ACM Trans. Knowl. Discov. Data 2022, 16, 1–21.
6. Huang, C.; Fang, Y.; Lin, X.; Cao, X.; Zhang, W.; Orlowska, M. Estimating Node Importance Values in Heterogeneous Information Networks. In Proceedings of the 38th International Conference on Data Engineering (ICDE), Kuala Lumpur, Malaysia, 9–12 May 2022; pp. 846–858.
7. Luo, L.; Fang, Y.; Cao, X.; Zhang, X.; Zhang, W. Detecting Communities from Heterogeneous Graphs: A Context Path-based Graph Neural Network Model. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual Event, 1–5 November 2021; pp. 1170–1180.
8. Wang, X.; Bo, D.; Shi, C.; Fan, S.; Ye, Y.; Philip, S.Y. A survey on heterogeneous graph embedding: Methods, techniques, applications and sources. IEEE Trans. Big Data 2022, 9, 415–436.
9. Chen, L.; Chen, F.; Liu, Z.; Lv, M.; He, T.; Zhang, S. Parallel gravitational clustering based on grid partitioning for large-scale data. Appl. Intell. 2022, 53, 2506–2526.
10. Lei, Y.; Chen, L.; Li, Y.; Xiao, R.; Liu, Z. Robust and fast representation learning for heterogeneous information networks. Front. Phys. 2023, 11, 357.
11. Shi, C.; Hu, B.; Zhao, W.X.; Philip, S.Y. Heterogeneous information network embedding for recommendation. IEEE Trans. Knowl. Data Eng. 2018, 31, 357–370.
12. Fu, T.; Lee, W.C.; Lei, Z. Hin2vec: Explore meta-paths in heterogeneous information networks for representation learning. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, Singapore, 6–10 November 2017; pp. 1797–1806.
13. Wang, X.; Ji, H.; Shi, C.; Wang, B.; Ye, Y.; Cui, P.; Yu, P.S. Heterogeneous graph attention network. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2022–2032.
14. Grover, A.; Leskovec, J. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 855–864.
15. Shi, Y.; Gui, H.; Zhu, Q.; Kaplan, L.; Han, J. AspEm: Embedding learning by aspects in heterogeneous information networks. In Proceedings of the 2018 SIAM International Conference on Data Mining, San Diego, CA, USA, 3–5 May 2018; pp. 144–152.
16. Shi, Y.; Zhu, Q.; Guo, F.; Zhang, C.; Han, J. Easing embedding learning by comprehensive transcription of heterogeneous information networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2190–2199.
17. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903.
18. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
19. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
20. Chen, L.; Zheng, H.; Li, Y.; Liu, Z.; Zhao, L.; Tang, H. Enhanced density peak-based community detection algorithm. J. Intell. Inf. Syst. 2022, 59, 263–284.
21. Chen, L.; Guo, Q.; Liu, Z.; Zhang, S.; Zhang, H. Enhanced synchronization-inspired clustering for high-dimensional data. Complex Intell. Syst. 2021, 7, 203–223.
22. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. arXiv 2017, arXiv:1706.02216.
23. Wang, C.; Wang, C.; Wang, Z.; Ye, X.; Yu, P.S. Edge2vec: Edge-based social network embedding. ACM Trans. Knowl. Discov. Data (TKDD) 2020, 14, 1–24.
24. Chen, H.; Koga, H. Gl2vec: Graph embedding enriched by line graphs with edge features. In Proceedings of the 26th International Conference on Neural Information Processing (ICONIP 2019), Part III, Sydney, Australia, 12–15 December 2019; Springer: Cham, Switzerland, 2019; pp. 3–14.
25. Jiang, X.; Zhu, R.; Li, S.; Ji, P. Co-embedding of nodes and edges with graph neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 45, 7075–7086.
26. Lozano, M.A.; Escolano, F.; Curado, M.; Hancock, E.R. Network embedding from the line graph: Random walkers and boosted classification. Pattern Recognit. Lett. 2021, 143, 36–42.
27. Tang, J.; Zhang, J.; Yao, L.; Li, J.; Zhang, L.; Su, Z. ArnetMiner: Extraction and Mining of Academic Social Networks. In Proceedings of the Fourteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD 2008), Las Vegas, NV, USA, 24–27 August 2008; pp. 990–998.
28. Song, W.; Xiao, Z.; Wang, Y.; Charlin, L.; Zhang, M.; Tang, J. Session-based social recommendation via dynamic graph attention networks. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 555–563.
29. Aggarwal, C.; He, G.; Zhao, P. Edge classification in networks. In Proceedings of the 2016 IEEE 32nd International Conference on Data Engineering (ICDE), Helsinki, Finland, 16–20 May 2016; pp. 1038–1049.
30. Cai, B.; Wang, Y.; Zeng, L.; Hu, Y.; Li, H. Edge classification based on Convolutional Neural Networks for community detection in complex network. Phys. A Stat. Mech. Its Appl. 2020, 556, 124826.
31. Kim, P.; Kim, S. Detecting overlapping and hierarchical communities in complex network using interaction-based edge clustering. Phys. A Stat. Mech. Its Appl. 2015, 417, 46–56.
32. Zhang, X.K.; Tian, X.; Li, Y.N.; Song, C. Label propagation algorithm based on edge clustering coefficient for community detection in complex networks. Int. J. Mod. Phys. B 2014, 28, 1450216.
33. Hu, B.; Fang, Y.; Shi, C. Adversarial learning on heterogeneous information networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 120–129.
34. Zhang, C.; Swami, A.; Chawla, N.V. SHNE: Representation learning for semantic-associated heterogeneous networks. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 690–698.
Table 1. Statistics of the four datasets.

| Dataset | Node Types | Number of Nodes | Edge Types | Number of Edges | Average Degree | Number of Label Types |
|---|---|---|---|---|---|---|
| AMiner | Author (A), Paper (P), Conference (C), Reference (R) | 439,447 | P-A, P-C, P-R | 875,223 | 3.98 | 6 |
| IMDb | Movie (M), Actor (A), Director (D), Keyword (K) | 11,373 | M-A, M-D, M-K | 29,513 | 5.19 | 5 |
| ACM | Paper (P), Author (A), Subject (S), Facility (F) | 14,128 | P-P, P-A, P-S, A-F | 90,278 | 12.78 | 3 |
| Douban | Movie (M), User (U), Actor (A), Director (D) | 34,804 | M-U, M-A, M-D | 1,113,141 | 63.97 | 4 |
Table 2. Baseline algorithms and their code sources.

| Algorithm | Code Source | Implementation |
|---|---|---|
| HIN2vec [12] | https://github.com/csiesheep/hin2vec (10 March 2023) | Python |
| HAN [13] | https://github.com/Jhy1993/HAN (10 March 2023) | TensorFlow |
| AspEm [15] | https://github.com/ysyushi/aspem (10 March 2023) | C++ |
| HEER [16] | https://github.com/GentleZhu/HEER (10 March 2023) | PyTorch |
| Edge2vec [23] | https://github.com/shatter15/edge2vec (10 March 2023) | TensorFlow |
| CensNet [25] | https://github.com/ronghangzhu/CensNet (10 March 2023) | PyTorch |
| Algorithm | AMiner (NMI) | IMDb (NMI) | ACM (NMI) | Douban (NMI) |
|---|---|---|---|---|
| HIN2vec | 0.5798 | 0.4925 | 0.3916 | 0.2796 |
| HAN | 0.6118 | 0.5139 | 0.4178 | 0.2863 |
| AspEm | 0.5906 | 0.5061 | 0.4066 | 0.2842 |
| HEER | 0.5887 | 0.5050 | 0.4018 | 0.2875 |
| Edge2vec | 0.6237 | 0.5337 | 0.4253 | 0.3295 |
| CensNet | 0.6344 | 0.5423 | 0.4264 | 0.3274 |
| MVFEE | 0.6421 | 0.5539 | 0.4335 | 0.3289 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).