DHGEEP: A Dynamic Heterogeneous Graph-Embedding Method for Evolutionary Prediction
Abstract
1. Introduction
- Discrete-Time Dynamic Graph
- Continuous-Time Dynamic Graph
- Continuous-Time Dynamic Heterogeneous Graph
- Our Work
- For the first time, we embed the dynamics of the whole graph into the dynamic heterogeneous network representation vector. Previous dynamic heterogeneous network-embedding methods focused only on the semantics and time-decay effects carried by meta-paths, even though the dynamics of the whole network also affect local embeddings. Just as a collective influences its individual members, evolutionary periodic effects act on nodes from the overall perspective of the dynamic heterogeneous network, yet traditional methods rarely model them. In our method, the current scale and growth trend of the whole graph are embedded into the representation vector, so that the graph representation captures the network's properties from a global perspective.
- For the first time, we embed node dynamics into the representation of a dynamic heterogeneous network. We observed that, although each event in a dynamic heterogeneous network has its own individual characteristics, the temporal events generated at associated nodes are affected by the connectivity of shared heterogeneous nodes. Because this connectivity changes over time, the similarity and collective characteristics of temporal edges also change over time. We define the collective characteristics of the events at a node as its node dynamics. Node dynamics provide a regularization mechanism that goes beyond single events, ensuring that the temporal edges at a node conform to the joint distribution of node dynamics and time.
- The experimental results on three real datasets show that DHGEEP is superior to several baselines.
2. Related Work
- Metapath2vec [54]: A static-heterogeneous-network shallow-embedding method based on meta-paths that captures semantics and structure. Meta-path-guided random walks specify the neighbors of a node, and a heterogeneous skip-gram model then learns the embeddings.
- DySAT [16]: A dynamic-homogeneous-network shallow-embedding method. DySAT divides the dynamic network into snapshot subgraphs and then aggregates the information of each subgraph. It captures dynamics from two sources, structural neighborhoods and temporal dynamics, and uses multi-head attention to capture multiple facets of these dynamics.
- HTNE [55]: A dynamic-homogeneous-network shallow-embedding method based on the Hawkes process. HTNE integrates the Hawkes process into network embedding to discover the influence of historical neighbors on current neighbors.
- MTNE [35]: A dynamic-homogeneous-network shallow-embedding method that simulates the evolution of networks. MTNE considers the effect of mesoscopic dynamics, especially the temporal dynamics of the particular motifs when the network evolves.
- DHNE [56]: A dynamic-heterogeneous-network shallow-embedding method that constructs comprehensive historical–current networks from snapshot subgraphs at each time step. Additionally, meta-paths are used to capture semantics.
- StHNE [50]: A static-heterogeneous-network shallow-embedding method that captures the structure and semantics of a static heterogeneous graph based on first-order and second-order proximities along meta-paths.
- DyHNE [50]: A dynamic-heterogeneous-network shallow-embedding method based on StHNE, which can learn the nodes’ embedding based on perturbation theory when the structure and semantics of the augmented adjacency matrix of the meta-path change.
- MAGNN [57]: A static-heterogeneous-network deep-embedding method based on meta-path aggregation for static heterogeneous graphs.
- HDGNN [58]: A dynamic-heterogeneous-network deep-embedding method which can integrate the characteristics of time evolution and capture the structural characteristics of attribute graphs and the dynamic complex relationships between nodes.
- THINE [51]: A dynamic-heterogeneous-network deep-embedding method, which uses the Hawkes process to simulate the evolution of the temporal network and combines the attention mechanism and meta-paths.
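Several of the temporal baselines above (HTNE, MTNE, THINE) build on the Hawkes process. As background, a standard exponential-kernel conditional intensity for events at a target node y, excited by historical neighbors h, has the form below; this is the textbook formulation, not necessarily the exact parameterization used in [55] or [51]:

```latex
\lambda_{y|x}(t) \;=\; \mu_{x,y} \;+\; \sum_{t_h < t} \alpha_{h,y}\, \exp\!\bigl(-\delta\,(t - t_h)\bigr)
```

Here \(\mu_{x,y}\) is the base rate of events between nodes x and y, \(\alpha_{h,y}\) measures how strongly the historical neighbor h excites new events at y, and \(\delta\) controls the exponential time decay of that excitement.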
3. Problem Formalization
4. Methods
4.1. Modeling the Dynamics of Nodes
4.2. Modeling the Dynamics of the Entire Graph
4.3. Design of the Loss Function
Algorithm 1 DHGEEP: Algorithm for dynamic heterogeneous network embedding.
5. Experiments
5.1. Datasets
- AMiner: This dataset contains data on papers published in top-level journals and presented at conferences in the computer science field. The data are provided by the scientific and technological information and mining platform of Tsinghua University and include author, paper, and venue nodes from research fields such as data mining, medical information, theory, visualization, and databases. There are two types of temporal heterogeneous events, cooperative relationships (A-A) and submission relationships (A-V). The temporal link prediction task aims to predict temporal events in the latest period of time.
- DBLP: This dataset contains data relating to top-level conference papers in the computer field, including author, paper, conference, and other nodes. Events include cooperation (A-A) and participation (A-V) and subsequent events involving databases, data mining, information retrieval, artificial intelligence, and others. The tasks are the same as for AMiner.
- Yelp: This dataset consists of reviews between users and merchants published on the Yelp review platform. The network consists of user, business, and star nodes, and there are three types of temporal events. The link prediction task involves predicting the temporal events in the latest period of time.
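All three datasets are evaluated by holding out the most recent events. As a minimal sketch of such a chronological split (the function name, the `(u, v, t)` tuple layout, and the 80/20 ratio are our own illustrative assumptions; Section 5.3 describes the actual splits):

```python
def chronological_split(events, train_frac=0.8):
    """Split timestamped events (u, v, t) so that the earliest
    `train_frac` fraction forms the training graph and the most
    recent events form the test graph."""
    events = sorted(events, key=lambda e: e[2])  # order by timestamp t
    cut = int(len(events) * train_frac)
    return events[:cut], events[cut:]
```

Every test event then occurs no earlier than every training event, which avoids leaking future information into training.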
5.2. Offline Training
5.3. Results and Analysis
- Task 1: Link prediction
- In the link prediction task, according to Equation (1), the embedding vector of an edge is constructed from the embedding vectors of its two endpoint nodes. Link prediction can then be regarded as a binary classification task. The test data consist of positive and negative links. Positive links are the actual links present in the future subgraph or test graph; the test graph is split chronologically from the training graph to evaluate the model's performance, so the edges it contains are the positive examples. From the same test graph, an equal number of node pairs with no edge between them are sampled as negative links. With these settings, we used a logistic regression classifier for link prediction, taking the edge embeddings as input. The classifier fits a decision boundary to the sampled links; its parameters were fitted with a common gradient-based optimization method, and after training we obtained the regression coefficients used for classification. For the Yelp dataset, the link prediction task is to predict co-user relationships regarding goods, i.e., whether two users have purchased and used the same goods. For the two academic networks, AMiner and DBLP, the task is to predict co-author relationships between authors. The datasets do not contain explicit co-author edges, i.e., there are no edges between authors; however, the meta-path author–paper–author (APA) implies co-author information and could leak it, so 25% of author–paper (AP) edges were randomly deleted during training. For Yelp, the connections between users were determined by whether they were friends in the original dataset; similarly, we removed 25% of user–business (UB) edges and then trained a logistic regression classifier to predict the connections between users.
In each dataset, we randomly selected 25,000 edges as positive examples and 25,000 node pairs as negative examples. For this task, we selected three common metrics of static-network link prediction: AUC, f1, and acc. The link prediction results are listed in Table 2. As Table 2 shows, the methods using time information (DySAT, HTNE, MTNE, DHNE, DyHNE, HDGNN, THINE, and DHGEEP) are clearly better than the static-graph methods (Deepwalk, LINE, Metapath2vec, StHNE, and MAGNN), which further illustrates the importance of time information. As for network heterogeneity, the methods that consider it are not consistently better than those based on homogeneous networks, which shows that the influence of network heterogeneity is not as great as that of network dynamics in this task.
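For illustration, the evaluation pipeline described above can be sketched as follows. The elementwise (Hadamard) product used to combine the two endpoint embeddings, the helper names, and scoring the classifier on the same sampled links it was fitted on are simplifying assumptions of this sketch; Equation (1) defines the actual edge representation, and a proper evaluation scores held-out links.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score, accuracy_score

def edge_features(emb, edges):
    """Combine the two endpoint embeddings into one edge vector.
    The Hadamard product is an illustrative choice, not Equation (1)."""
    edges = np.asarray(edges)
    return emb[edges[:, 0]] * emb[edges[:, 1]]

def evaluate_link_prediction(emb, pos_edges, neg_edges):
    """Fit a logistic regression classifier on sampled positive and
    negative links and report AUC, f1, and acc."""
    X = np.vstack([edge_features(emb, pos_edges),
                   edge_features(emb, neg_edges)])
    y = np.concatenate([np.ones(len(pos_edges)), np.zeros(len(neg_edges))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    prob = clf.predict_proba(X)[:, 1]   # probability of the positive class
    pred = clf.predict(X)
    return roc_auc_score(y, prob), f1_score(y, pred), accuracy_score(y, pred)
```

In practice, `emb` would be the learned node-embedding matrix, and the positive/negative edges would come from the chronologically split test graph.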
- Task 2: Predicting temporal events of homogeneous nodes
- In the task of predicting temporal events of homogeneous nodes, we predicted the connections generated between nodes of the same type. Taking the academic networks as an example, we predicted future cooperative relationships between authors, i.e., co-author relationships. For the relationships contained in specific meta-paths such as APA, we selected the first 80% of all instances in the dataset, recorded the corresponding time as t, and used the events before t as the training set; the remaining data were used as the test set. In the experiment, the node embeddings obtained by the various methods were used as input, and the negative squared Euclidean distance between embeddings was used to estimate the probability of an edge forming between two nodes, so as to predict the edges most likely to appear after t. We only conducted this experiment on the two academic datasets. This task is treated as a ranking problem: for every test node, its most probable future neighbors are ranked and compared with its actual future neighbors. Under our settings, the top K most probable edges in the test graph are ranked and compared with the ground-truth edges. The metrics chosen for this ranking problem were precision@K and recall@K, with K = 5 or K = 10. The experimental results are presented in Table 3. Some of the baseline methods are only suitable for homogeneous networks and therefore cannot accurately represent the characteristics of dynamic heterogeneous networks. In addition, in the Yelp dataset the numbers of nodes of the three types differ too greatly; for example, there are 24,586 user nodes but only 5 star nodes.
Thus, there are too few predictable results for this task, so the Yelp dataset was not used in this experiment. The results in Table 3 show that our DHGEEP method is effective at predicting temporal events at homogeneous nodes. Again, the methods considering time information (DySAT, HTNE, MTNE, DHNE, DyHNE, HDGNN, THINE, and DHGEEP) outperform the static-graph methods (Deepwalk, LINE, Metapath2vec, StHNE, and MAGNN), although on average this advantage is not as pronounced as in the previous task. In addition, the deep methods (MAGNN, HDGNN, THINE, and DHGEEP) are generally superior to the shallow methods (Deepwalk, LINE, DySAT, HTNE, MTNE, Metapath2vec, StHNE, DHNE, and DyHNE), which indicates that deep models can capture more comprehensive information than shallow models.
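The ranking evaluation described above can be sketched for a single test node as follows; the function name and argument layout are our own, but the scoring rule (negative squared Euclidean distance between embeddings) and the precision@K/recall@K definitions follow the text.

```python
import numpy as np

def precision_recall_at_k(emb, test_node, candidates, true_neighbors, k):
    """Rank candidate neighbors of one test node by the negative squared
    Euclidean distance between embeddings, then compare the top-k ranked
    candidates against the ground-truth future neighbors."""
    d = emb[np.asarray(candidates)] - emb[test_node]
    scores = -np.sum(d * d, axis=1)            # higher score = more likely edge
    top_k = [candidates[i] for i in np.argsort(-scores)[:k]]
    hits = len(set(top_k) & set(true_neighbors))
    precision = hits / k
    recall = hits / len(true_neighbors) if true_neighbors else 0.0
    return precision, recall
```

Averaging these two quantities over all test nodes yields the precision@K and recall@K values reported in Tables 3 and 4.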
- Task 3: Predicting temporal events of heterogeneous nodes
- To show that DHGEEP handles network heterogeneity well, we also applied it to predicting temporal events at heterogeneous nodes, in addition to the homogeneous-node task above. This task is essentially the same as Task 2, except that the objects of prediction differ: here we predicted the connections generated between nodes of different types. Taking the academic networks as an example, we predicted an author's future participation in a conference, i.e., the A–C relationship. As above, the metrics chosen for this ranking problem were precision@K and recall@K, with K = 2 or K = 4. The training process and testing methods were the same as those in Task 2, and we again used only the two academic datasets. The experimental results are presented in Table 4. Our method achieved the highest precision and recall, which shows that DHGEEP handles the heterogeneity of dynamic networks well. Although the pattern in Table 4 is less obvious for the AMiner dataset than for DBLP, we can still observe trends similar to those in the previous tasks: the methods considering time information (DySAT, HTNE, MTNE, DHNE, HDGNN, THINE, and DHGEEP) are better than the static-graph methods (Deepwalk, LINE, Metapath2vec, and MAGNN), and the deep methods (MAGNN, HDGNN, THINE, and DHGEEP) are generally superior to the shallow methods (Deepwalk, LINE, DySAT, HTNE, MTNE, Metapath2vec, and DHNE), indicating that deep models can capture more comprehensive information than shallow models.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Porter, M.A. Nonlinearity + Networks: A 2020 Vision. In Emerging Frontiers in Nonlinear Science; Kevrekidis, P.G., Cuevas-Maraver, J., Saxena, A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 131–159.
2. Kazemi, S.M.; Goel, R.; Jain, K.; Kobyzev, I.; Sethi, A.; Forsyth, P.; Poupart, P. Representation Learning for Dynamic Graphs: A Survey. J. Mach. Learn. Res. 2020, 21, 2648–2720.
3. Shang, Y. The Estrada index of evolving graphs. Appl. Math. Comput. 2015, 250, 415–423.
4. Shang, Y. Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs. PLoS ONE 2015, 10, e0123426.
5. Barros, C.D.T.; Mendonça, M.R.F.; Vieira, A.B.; Ziviani, A. A Survey on Embedding Dynamic Graphs. ACM Comput. Surv. 2021, 55, 1–37.
6. Jin, D.; Kim, S. On Generalizing Static Node Embedding to Dynamic Settings: A Practitioner’s Guide. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, Tempe, AZ, USA, 21–25 February 2022; pp. 410–420.
7. Chen, C.; Tao, Y.; Lin, H. Dynamic Network Embeddings for Network Evolution Analysis. arXiv 2019, arXiv:1906.09860.
8. Guo, X.; Zhou, B.; Skiena, S. Subset Node Representation Learning over Large Dynamic Graphs. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 516–526.
9. Wang, Y.; Li, P.; Bai, C.; Leskovec, J. TEDIC: Neural Modeling of Behavioral Patterns in Dynamic Social Interaction Networks. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 25–29 April 2021; pp. 693–705.
10. Chen, J.; Zhang, J.; Xu, X.; Fu, C.; Zhang, D.; Zhang, Q.; Xuan, Q. E-LSTM-D: A Deep Learning Framework for Dynamic Network Link Prediction. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3699–3712.
11. Goyal, P.; Kamra, N.; He, X.; Liu, Y. DynGEM: Deep Embedding Method for Dynamic Graphs. arXiv 2018, arXiv:1805.11273.
12. Yang, M.; Zhou, M.; Kalander, M.; Huang, Z.; King, I. Discrete-Time Temporal Network Embedding via Implicit Hierarchical Learning in Hyperbolic Space. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1975–1985.
13. Li, Z.; Jin, X.; Li, W.; Guan, S.; Guo, J.; Shen, H.; Wang, Y.; Cheng, X. Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Montréal, QC, Canada, 11–15 July 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 408–417.
14. Wu, J.; Cao, M.; Cheung, J.C.K.; Hamilton, W.L. TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 5730–5746.
15. Fathy, A.; Li, K. TemporalGAT: Attention-Based Dynamic Graph Representation Learning. In Advances in Knowledge Discovery and Data Mining; Lecture Notes in Computer Science; Lauw, H.W., Wong, R.C.W., Ntoulas, A., Lim, E.P., Ng, S.K., Pan, S.J., Eds.; Springer International Publishing: Cham, Switzerland, 2020; Volume 12084, pp. 413–423.
16. Sankar, A.; Wu, Y.; Gou, L.; Zhang, W.; Yang, H. DySAT: Deep Neural Representation Learning on Dynamic Graphs via Self-Attention Networks. In Proceedings of the 13th International Conference on Web Search and Data Mining, Houston, TX, USA, 22 January 2020; pp. 519–527.
17. Huang, R.; Ma, L.; He, J.; Chu, X. T-GAN: A deep learning framework for prediction of temporal complex networks with adaptive graph convolution and attention mechanism. Displays 2021, 68, 102023.
18. Lei, K.; Qin, M.; Bai, B.; Zhang, G.; Yang, M. GCN-GAN: A Non-Linear Temporal Link Prediction Model for Weighted Dynamic Networks. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 388–396.
19. Wang, J.; Jin, Y.; Song, G.; Ma, X. EPNE: Evolutionary Pattern Preserving Network Embedding. In Proceedings of the 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, 29 August–8 September 2020; pp. 1603–1610.
20. Cai, L.; Chen, Z.; Luo, C.; Gui, J.; Ni, J.; Li, D.; Chen, H. Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Online, 1–5 November 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 3747–3756.
21. Huang, S.; Hitti, Y.; Rabusseau, G.; Rabbany, R. Laplacian Change Point Detection for Dynamic Graphs. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Online, 6–10 July 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 349–358.
22. Zhou, H.; Tan, Q.; Huang, X.; Zhou, K.; Wang, X. Temporal Augmented Graph Neural Networks for Session-Based Recommendations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 11–15 July 2021; pp. 1798–1802.
23. Beladev, M.; Rokach, L.; Katz, G.; Guy, I.; Radinsky, K. tdGraphEmbed: Temporal Dynamic Graph-Level Embedding. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual Event, 19–23 October 2020; pp. 55–64.
24. Hou, C.; Zhang, H.; He, S.; Tang, K. GloDyNE: Global Topology Preserving Dynamic Network Embedding. IEEE Trans. Knowl. Data Eng. 2020, 34, 4826–4837.
25. Ma, Y.; Guo, Z.; Ren, Z.; Tang, J.; Yin, D. Streaming Graph Neural Networks. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 25–30 July 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 719–728.
26. Zhou, D.; Zheng, L.; Han, J.; He, J. A Data-Driven Graph Generative Model for Temporal Interaction Networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, 6–10 July 2020; pp. 401–411.
27. Li, X.; Zhang, M.; Wu, S.; Liu, Z.; Wang, L.; Yu, P.S. Dynamic Graph Collaborative Filtering. In Proceedings of the 2020 IEEE International Conference on Data Mining (ICDM), Sorrento, Italy, 17 November 2020; pp. 322–331.
28. Trivedi, R.; Farajtabar, M.; Biswal, P.; Zha, H. DyRep: Learning Representations over Dynamic Graphs. In International Conference on Learning Representations; ICLR: New Orleans, LA, USA, 2019.
29. Xu, D.; Ruan, C.; Korpeoglu, E.; Kumar, S.; Achan, K. Inductive representation learning on temporal graphs. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020.
30. Xia, W.; Li, Y.; Tian, J.; Li, S. Forecasting Interaction Order on Temporal Graphs. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 1884–1893.
31. Rossi, E.; Chamberlain, B.; Frasca, F.; Eynard, D.; Monti, F.; Bronstein, M. Temporal Graph Networks for Deep Learning on Dynamic Graphs. In Proceedings of the ICML 2020 Workshop on Graph Representation Learning, Vienna, Austria, 12–18 July 2020.
32. Fan, Z.; Liu, Z.; Zhang, J.; Xiong, Y.; Zheng, L.; Yu, P.S. Continuous-Time Sequential Recommendation with Temporal Graph Collaborative Transformer. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 433–442.
33. Chang, X.; Liu, X.; Wen, J.; Li, S.; Fang, Y.; Song, L.; Qi, Y. Continuous-Time Dynamic Graph Learning via Neural Interaction Processes. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual Event, 19–23 October 2020; pp. 145–154.
34. Jung, J.; Jung, J.; Kang, U. Learning to Walk across Time for Interpretable Temporal Knowledge Graph Completion. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 786–795.
35. Huang, H.; Fang, Z.; Wang, X.; Miao, Y.; Jin, H. Motif-Preserving Temporal Network Embedding. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 11–17 July 2020; International Joint Conferences on Artificial Intelligence Organization: Yokohama, Japan, 2020; pp. 1237–1243.
36. Liu, Y.; Ma, J.; Li, P. Neural Predicting Higher-Order Patterns in Temporal Networks. In Proceedings of the ACM Web Conference 2022, Barcelona, Spain, 26–29 June 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1340–1351.
37. Wang, Y.; Chang, Y.Y.; Liu, Y.; Leskovec, J.; Li, P. Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021.
38. Du, H.; Zhou, Y.; Ma, Y.; Wang, S. Astrologer: Exploiting graph neural Hawkes process for event propagation prediction with spatio-temporal characteristics. Knowl.-Based Syst. 2021, 228, 107247.
39. Yang, C.; Wang, C.; Lu, Y.; Gong, X.; Shi, C.; Wang, W.; Zhang, X. Few-shot Link Prediction in Dynamic Networks. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, Virtual Event, 21–25 February 2022; pp. 1245–1255.
40. Yang, Z.; Ding, M.; Xu, B.; Yang, H.; Tang, J. STAM: A Spatiotemporal Aggregation Method for Graph Neural Network-based Recommendation. In Proceedings of the ACM Web Conference 2022, Lyon, France, 25–29 April 2022; pp. 3217–3228.
41. Sun, L.; Zhang, Z.; Zhang, J.; Wang, F.; Peng, H.; Su, S.; Yu, P.S. Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs. In Proceedings of the AAAI Conference on Artificial Intelligence 2021, Online, 2–9 February 2021; Volume 35, pp. 4375–4383.
42. Lu, Y.; Wang, X.; Shi, C.; Yu, P.S.; Ye, Y. Temporal Network Embedding with Micro- and Macro-Dynamics. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 469–478.
43. Wen, Z.; Fang, Y. TREND: TempoRal Event and Node Dynamics for Graph Representation Learning. In Proceedings of the ACM Web Conference 2022, Los Angeles, CA, USA, 7–11 November 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1159–1169.
44. Hu, Z.; Dong, Y.; Wang, K.; Sun, Y. Heterogeneous Graph Transformer. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 2704–2710.
45. Li, Q.; Shang, Y.; Qiao, X.; Dai, W. Heterogeneous Dynamic Graph Attention Network. In Proceedings of the 2020 IEEE International Conference on Knowledge Graph (ICKG), Nanjing, China, 9–11 August 2020; pp. 404–411.
46. Xie, Y.; Ou, Z.; Chen, L.; Liu, Y.; Xu, K.; Yang, C.; Zheng, Z. Learning and Updating Node Embedding on Dynamic Heterogeneous Information Network. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Virtual Event, 8–12 March 2021; pp. 184–192.
47. Deng, S.; Rangwala, H.; Ning, Y. Dynamic Knowledge Graph based Multi-Event Forecasting. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, 6–10 July 2020; pp. 1585–1595.
48. Wang, Y.; Duan, Z.; Huang, Y.; Xu, H.; Feng, J.; Ren, A. MTHetGNN: A heterogeneous graph embedding framework for multivariate time series forecasting. Pattern Recognit. Lett. 2022, 153, 151–158.
49. Jiang, S.; Koch, B.; Sun, Y. HINTS: Citation Time Series Prediction for New Publications via Dynamic Heterogeneous Information Network Embedding. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 12–23 April 2021; pp. 3158–3167.
50. Wang, X.; Lu, Y.; Shi, C.; Wang, R.; Cui, P.; Mou, S. Dynamic Heterogeneous Information Network Embedding with Meta-Path Based Proximity. IEEE Trans. Knowl. Data Eng. 2022, 34, 1117–1132.
51. Huang, H.; Shi, R.; Zhou, W.; Wang, X.; Jin, H.; Fu, X. Temporal Heterogeneous Information Network Embedding. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), Montreal, QC, Canada, 19–26 August 2021; Zhou, Z.H., Ed.; International Joint Conferences on Artificial Intelligence Organization; pp. 1470–1476.
52. Perozzi, B.; Al-Rfou, R.; Skiena, S. DeepWalk: Online Learning of Social Representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 21 August 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 701–710.
53. Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; Mei, Q. LINE: Large-Scale Information Network Embedding. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015; International World Wide Web Conferences Steering Committee: Geneva, Switzerland, 2015; pp. 1067–1077.
54. Dong, Y.; Chawla, N.V.; Swami, A. Metapath2vec: Scalable Representation Learning for Heterogeneous Networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 135–144.
55. Zuo, Y.; Liu, G.; Lin, H.; Guo, J.; Hu, X.; Wu, J. Embedding Temporal Network via Neighborhood Formation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 2857–2866.
56. Yin, Y.; Ji, L.X.; Zhang, J.P.; Pei, Y.L. DHNE: Network Representation Learning Method for Dynamic Heterogeneous Networks. IEEE Access 2019, 7, 134782–134792.
57. Fu, X.; Zhang, J.; Meng, Z.; King, I. MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 2331–2341.
58. Zhou, F.; Xu, X.; Li, C.; Trajcevski, G.; Zhong, T.; Zhang, K. A Heterogeneous Dynamical Graph Neural Networks Approach to Quantify Scientific Impact. arXiv 2020, arXiv:2003.12042.
59. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 5–10 December 2013; Volume 2, pp. 3111–3119.
60. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
61. Shang, Y. Resilient group consensus in heterogeneously robust networks with hybrid dynamics. Math. Methods Appl. Sci. 2021, 44, 1456–1469.
Table 1. Statistics of the three datasets.

Datasets | AMiner | DBLP | Yelp |
---|---|---|---|
Node types | Author (A), Paper (P), Conference (C) | Author (A), Paper (P), Conference (C) | Star (S), User (U), Business (B) |
Nodes | 10,206 / 10,457 / 2584 | 22,662 / 22,670 / 2938 | 5 / 24,586 / 800 |
Meta-paths | APA, APPA, APCPA | APA, APPA, APCPA | USU, BSB, BUB, BSUSB, UBSBU |
Time steps | 10 | 15 | 15 |
Table 2. Results of the link prediction task.

Datasets | AMiner |  |  | DBLP |  |  | Yelp |  |  |
---|---|---|---|---|---|---|---|---|---|
Metrics | AUC | f1 | acc | AUC | f1 | acc | AUC | f1 | acc |
Deepwalk | 77.07% | 71.19% | 71.09% | 85.59% | 81.28% | 81.28% | 50.32% | 63.28% | 52.78% |
LINE | 68.49% | 64.79% | 63.54% | 75.11% | 71.24% | 70.09% | 58.30% | 57.56% | 55.57% |
DySAT | 80.25% | 67.91% | 66.14% | 82.16% | 75.61% | 75.03% | 53.67% | 54.27% | 52.88% |
HTNE | 76.53% | 73.18% | 72.55% | 90.77% | 83.16% | 82.87% | 65.27% | 64.77% | 60.65% |
MTNE | 82.78% | 75.23% | 74.72% | 94.06% | 86.74% | 86.61% | 67.75% | 65.34% | 61.69% |
Metapath2vec | 70.20% | 65.43% | 64.83% | 79.17% | 74.02% | 73.17% | 51.18% | 46.99% | 51.03% |
StHNE | 79.19% | 73.66% | 69.61% | 81.58% | 76.02% | 72.47% | 73.60% | 70.30% | 63.50% |
DHNE | 63.86% | 76.89% | 64.97% | 75.46% | 70.82% | 69.27% | 50.41% | 50.52% | 50.53% |
DyHNE | 72.06% | 74.25% | 74.25% | 82.77% | 76.54% | 72.97% | 75.58% | 71.33% | 66.67% |
MAGNN | 66.34% | 63.90% | 62.83% | 67.80% | 70.02% | 63.62% | 73.29% | 69.19% | 60.88% |
HDGNN | 89.80% | 82.33% | 82.09% | 92.03% | 84.37% | 84.17% | 76.87% | 71.86% | 71.04% |
THINE | 91.16% | 88.08% | 88.25% | 94.65% | 90.66% | 90.71% | 79.33% | 72.36% | 72.51% |
DHGEEP | 91.57% | 88.85% | 88.98% | 94.72% | 91.67% | 91.71% | 79.99% | 72.41% | 72.93% |
Table 3. Results of predicting temporal events of homogeneous nodes.

Datasets | AMiner |  |  |  | DBLP |  |  |  |
---|---|---|---|---|---|---|---|---|
Metrics | Precision |  | Recall |  | Precision |  | Recall |  |
Top K | @5 | @10 | @5 | @10 | @5 | @10 | @5 | @10 |
Deepwalk | 9.81% | 8.03% | 2.29% | 4.50% | 9.80% | 7.98% | 2.19% | 4.25% |
LINE | 7.51% | 6.11% | 2.24% | 3.66% | 5.23% | 4.12% | 1.47% | 2.31% |
DySAT | 3.35% | 2.74% | 0.96% | 1.59% | 1.80% | 1.13% | 0.48% | 0.62% |
HTNE | 9.98% | 8.20% | 3.01% | 4.94% | 7.75% | 6.46% | 2.25% | 3.68% |
MTNE | 10.45% | 8.38% | 3.15% | 5.06% | 7.96% | 6.39% | 2.30% | 3.65% |
Metapath2vec | 2.21% | 2.46% | 0.67% | 1.49% | 2.33% | 2.07% | 0.69% | 1.23% |
StHNE | 4.60% | 3.40% | 1.40% | 2.03% | 3.67% | 2.91% | 1.01% | 1.60% |
DHNE | 3.32% | 2.24% | 1.02% | 1.34% | 6.34% | 4.87% | 1.96% | 2.91% |
DyHNE | 6.04% | 4.04% | 1.74% | 2.40% | 3.89% | 3.09% | 1.12% | 1.75% |
MAGNN | 2.65% | 2.18% | 0.81% | 1.33% | 1.62% | 1.26% | 0.45% | 0.69% |
HDGNN | 12.04% | 10.85% | 3.82% | 6.45% | 9.13% | 8.79% | 2.03% | 4.78% |
THINE | 14.05% | 12.07% | 4.31% | 7.25% | 11.67% | 9.47% | 3.48% | 5.51% |
DHGEEP | 14.59% | 13.65% | 4.81% | 7.68% | 11.94% | 9.98% | 3.67% | 5.85% |
Table 4. Results of predicting temporal events of heterogeneous nodes.

Datasets | AMiner |  |  |  | DBLP |  |  |  |
---|---|---|---|---|---|---|---|---|
Metrics | Precision |  | Recall |  | Precision |  | Recall |  |
Top K | @2 | @4 | @2 | @4 | @2 | @4 | @2 | @4 |
Deepwalk | 10.33% | 10.21% | 2.28% | 6.75% | 1.76% | 2.55% | 0.54% | 1.65% |
LINE | 6.28% | 3.60% | 1.97% | 2.21% | 1.65% | 1.02% | 0.46% | 0.63% |
DySAT | 10.46% | 7.56% | 3.84% | 5.649% | 3.90% | 3.39% | 0.96% | 2.09% |
HTNE | 8.60% | 6.51% | 2.98% | 4.61% | 1.95% | 1.29% | 0.55% | 0.72% |
MTNE | 10.46% | 6.98% | 3.24% | 4.73% | 2.20% | 1.76% | 0.64% | 1.03% |
Metapath2vec | 17.44% | 13.83% | 5.96% | 9.76% | 2.72% | 2.57% | 0.83% | 1.68% |
DHNE | 9.43% | 7.23% | 4.12% | 5.11% | 3.40% | 3.23% | 0.84% | 1.84% |
MAGNN | 8.47% | 6.35% | 3.14% | 4.23% | 3.18% | 2.97% | 0.79% | 1.51% |
HDGNN | 18.30% | 14.37% | 6.68% | 10.94% | 4.23% | 3.87% | 1.26% | 2.41% |
THINE | 22.79% | 18.62% | 8.31% | 12.91% | 4.55% | 4.08% | 1.53% | 2.70% |
DHGEEP | 30.23% | 24.07% | 10.48% | 16.29% | 4.56% | 4.34% | 1.61% | 2.82% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, L.; Wang, L.; Zeng, C.; Liu, H.; Chen, J. DHGEEP: A Dynamic Heterogeneous Graph-Embedding Method for Evolutionary Prediction. Mathematics 2022, 10, 4193. https://doi.org/10.3390/math10224193