The Graph Attention Recommendation Method for Enhancing User Features Based on Knowledge Graphs
Abstract
1. Introduction
- The potential relationships between users have been considered, and the entities of the top k most similar users are selected for linking, enhancing the initial entity set representation of the user–item pair.
- The relevance of entities to different users is calculated from two perspectives: user–relationship and user–entity. The top k neighbors with the highest relevance scores are selected for aggregation, reducing the noise caused by irrelevant entities.
- The topological proximity structure of ripple-concentrated entities is processed, with hierarchical weighting applied to each layer. A weighted aggregation using a concat aggregator is performed, strengthening the user’s feature representation and improving the accuracy of the recommendation results.
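The first contribution, linking a user to the entities of the top-k most similar users, can be sketched roughly as follows. This is an illustrative sketch only, not the paper's exact formulation: the function names and the use of cosine similarity over binary interaction vectors are assumptions.

```python
import numpy as np

def top_k_similar_users(interactions: np.ndarray, user: int, k: int) -> np.ndarray:
    """Indices of the k users most similar to `user`, using cosine
    similarity over the rows of a user-item interaction matrix."""
    norms = np.linalg.norm(interactions, axis=1) + 1e-12
    unit = interactions / norms[:, None]
    sims = unit @ unit[user]
    sims[user] = -np.inf                      # exclude the user themselves
    return np.argsort(sims)[::-1][:k]

def enhanced_entity_set(interactions: np.ndarray, user: int, k: int) -> set:
    """Initial entity set of `user` enlarged with the items of the
    k most similar users (the user collaborative propagation idea)."""
    entities = set(np.flatnonzero(interactions[user]))
    for v in top_k_similar_users(interactions, user, k):
        entities |= set(np.flatnonzero(interactions[v]))
    return entities
```

In the model itself these item indices would then be mapped to knowledge-graph entities before propagation; the sketch stops at the enlarged set.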
2. Related Work
3. Problem Description
4. KGAEUF Recommendation Model Algorithm
4.1. Heterogeneous Propagation Processing Layer
4.1.1. Neighborhood Collaborative Propagation
4.1.2. Knowledge Graph Propagation
4.2. Knowledge-Aware Attention Layer
4.3. Prediction Layer
4.4. Loss Function
5. Experiment
- How does the KGAEUF model perform compared to the state-of-the-art propagation-based recommendation models?
- Do the similar-user entity set enhancement and the neighbor-score-based denoising module contribute to improving the accuracy of the recommendation results?
- How do different parameter choices affect the proposed model method?
5.1. Experimental Datasets
- The Last.FM dataset is provided by the Last.FM online music system and contains information on the social networks, tags, and music artists of over 2000 users.
- The Book-Crossing dataset collects ratings (0–10) from different readers in the Book-Crossing community for various books.
5.2. Comparison Models
- RippleNet [23] is a classic propagation-based method that enhances user representations by propagating users’ latent preferences in the knowledge graph.
- KGCN [24] is a classic propagation-based method that selectively aggregates neighborhood information using graph convolution, enriching user–item representations.
- KGAT [25] is a classic propagation-based method that forms a collaborative knowledge graph by combining the user–item interaction graph with the knowledge graph, using attention mechanisms to distinguish the importance of different neighbors in the collaborative knowledge graph.
- KAT [6] designs a neighbor sampling mechanism that computes maximum node influence by extracting the largest connected subgraph, avoiding the instability in model performance caused by random sampling.
- MDAR [28] combines DeepFM with the attention mechanism, integrating relationships into the attention network of the knowledge graph.
- RFAN [29] emphasizes the importance of modeling relationships between entities in the knowledge graph and extends user preferences by exploring various relationships around entities.
- KPMER [30] uses a lightweight graph convolutional network to aggregate higher-order neighborhood information from the knowledge graph more effectively, improving the accuracy of neighborhood feature extraction and ensuring the acquisition of more appropriate shared information.
5.3. Experimental Environment and Parameter Settings
5.4. Evaluation Metrics
5.5. Performance Comparison
- The AUC value of KGAEUF on the Last.FM dataset reached 0.854, representing a 1.2% improvement over MDAR, and the F1 score reached 0.783, improving by 1.5% over MDAR. This improvement is attributed to the introduction of the user collaborative propagation module, which enhances the initial user entity representation, significantly improving the model’s ability to distinguish between positive and negative samples.
- The AUC value of KGAEUF on the Book-Crossing dataset reached 0.758, representing a 1.2% improvement over RFAN, and the F1 score reached 0.683, improving by 1.1% over RFAN. This indicates that the noise-reduction module effectively mitigates the negative impact of noisy entities on recall and simultaneously improves the model’s precision.
- Compared with the classic RippleNet model, RippleNet generates the initial entity set through historical interactions but does not consider user similarity. KGAEUF’s user collaborative propagation module enhances the initial representation, significantly improving the model’s discriminative ability. Compared with the KGAT model, which uses a fixed-attention mechanism to assign weights to neighboring entities, KGAEUF introduces knowledge-aware relation vectors, making weight assignment more targeted. Compared with RFAN, which ignores the impact of multi-level topological structures during entity-enhanced representation, KGAEUF’s hierarchical weighting mechanism dynamically adjusts layer weights, further optimizing user and item embeddings.
- KGAEUF performs differently on the Last.FM and Book-Crossing datasets, mainly because the Last.FM dataset includes richer social network and music artist information, with concentrated user interests and relatively uniform entity distribution. As a result, the user collaborative propagation module can more effectively leverage user similarity, improving recommendation performance. In contrast, the Book-Crossing dataset exhibits more diverse user interests, and random noise has a greater impact on model performance, resulting in a slightly lower improvement margin. However, the noise-reduction module plays a positive role in reducing the interference of irrelevant entities, ensuring the stability of AUC and F1 scores.
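The AUC and F1 values discussed above are standard CTR-prediction metrics; as a reminder of how they are computed, here is a minimal pure-Python sketch (rank-based AUC, and F1 at an assumed 0.5 decision threshold; the paper does not specify its threshold):

```python
def auc(labels, scores):
    """Rank-based AUC: probability that a random positive sample
    is scored above a random negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, scores, threshold=0.5):
    """F1 = harmonic mean of precision and recall at a fixed threshold."""
    preds = [int(s >= threshold) for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```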
5.6. Ablation Experiment
- After removing the KGAEUF2 module, the model’s AUC value on the Last.FM dataset decreased by 1.8%, and the F1 score decreased by 1%. On the Book-Crossing dataset, the AUC value decreased by 1.3%, and the F1 score decreased by 1.3%. This indicates that linking entities of similar users plays a critical role in enhancing embedding representations.
- After removing the KGAEUF1 module, the model’s AUC value on the Last.FM dataset decreased by 2%, and the F1 score decreased by 1.3%. On the Book-Crossing dataset, the AUC value decreased by 1.5%, and the F1 score decreased by 1.5%. This indicates that the noise-reduction module also plays a critical role in enhancing embedding representations.
- Considering the evaluation metrics across both datasets, KGAEUF2 demonstrates better recommendation accuracy than KGAEUF1. The results indicate that addressing both user–relation and user–entity perspectives and setting scoring functions tailored to different user preferences to filter irrelevant entity information play a more crucial role in learning user and item embeddings.
5.7. Aggregator Selection
- The concat aggregator performed best across all metrics, particularly achieving an AUC of 0.854 and an F1 score of 0.783 on the Last.FM dataset. This is because the concat aggregator preserves the information features of all ripple sets, enabling more fine-grained embedding representations. The sum aggregator ranked second in performance, achieving an AUC of 0.723 on the Book-Crossing dataset. However, summation operations may cause feature imbalance (with high-dimensional features dominating), affecting recommendation diversity. The pool aggregator performed the worst, with metrics on all datasets lower than the other two aggregators. This is because pooling operations discard some feature details, making the model unable to fully capture high-order entity relationships.
- The concat aggregator retains the feature information of all ripple sets at different levels through concatenation, effectively avoiding information loss and providing richer contextual information for user and item embeddings. In contrast, the summation method of the sum aggregator may introduce feature weight imbalances, while the pooling method of the pool aggregator directly discards some detailed information, limiting the model’s ability to capture complex relationships.
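The three aggregators can be contrasted on a toy example. This is a sketch under stated assumptions: the embedding dimension, the random projection matrix used after concatenation, and the tanh nonlinearity are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=16) for _ in range(3)]   # one embedding per ripple layer

def concat_agg(layers, W=None):
    """Concatenate all layer embeddings, then project back to d dims.
    Keeps every layer's features (no information loss before projection)."""
    v = np.concatenate(layers)                     # shape (L*d,)
    if W is None:
        d = layers[0].shape[0]
        W = rng.normal(scale=0.1, size=(d, v.shape[0]))
    return np.tanh(W @ v)                          # shape (d,)

def sum_agg(layers):
    """Element-wise sum; large-magnitude features can dominate."""
    return np.tanh(np.sum(layers, axis=0))

def pool_agg(layers):
    """Element-wise max pooling; discards all non-maximal feature values."""
    return np.tanh(np.max(layers, axis=0))
```

The experimental ordering (concat > sum > pool) matches what the sketch makes visible: concat is the only variant whose input to the projection still distinguishes which layer a feature came from.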
5.8. Choice of k Value
- On the Last.FM dataset, the best results are achieved when k = 8, with an AUC of 0.854 and an F1 score of 0.783. On the Book-Crossing dataset, the best results are achieved when k = 16, with an AUC of 0.758 and an F1 score of 0.683. When k is too small, the results are suboptimal because a small k cannot represent sufficient neighbor information to comprehensively capture user interest features, leading to insufficient information in the embeddings. Conversely, if k is too large, the introduction of noisy entities interferes with model performance, especially affecting precision and AUC metrics.
- In the Last.FM dataset, user interests are more concentrated, so a smaller k value is sufficient to capture user preferences. In the Book-Crossing dataset, user interests are more dispersed and diverse, requiring a larger k value to adequately model user behavior.
5.9. Optimal Number of Layers L
- From the experiments, we observe that the model achieves the best recommendation performance on the Last.FM and Book-Crossing datasets when L = 3 and L = 2, respectively. When L = 1, only direct neighbors are considered, so the model cannot capture the associations of high-order entities, resulting in incomplete user embedding representations. When L = 4 or higher, increasing the number of propagation layers may introduce low-relevance high-order neighbors. These noisy entities are irrelevant to user preferences but interfere with embedding representations, reducing the model's recommendation performance.
- In the Last.FM dataset, user interests are more concentrated, and high-order neighbor information contributes more significantly to user preference modeling, so the best performance is achieved when L = 3. In the Book-Crossing dataset, user interests are more dispersed, and high-order neighbors may introduce more irrelevant information, so the best performance is achieved when L = 2.
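The hierarchical weighting of ripple layers described earlier can be sketched as follows. The geometric decay form and the decay value are hypothetical, chosen only to illustrate "nearer layers count more"; the paper's actual weighting scheme may differ.

```python
import numpy as np

def layer_weights(L: int, decay: float = 0.5) -> np.ndarray:
    """Normalized, geometrically decaying weights: lower-order ripple
    layers (smaller index) contribute more to the final embedding."""
    w = decay ** np.arange(L)
    return w / w.sum()

def hierarchical_aggregate(layers, decay: float = 0.5) -> np.ndarray:
    """Weight each layer embedding by its level, then concatenate
    (the concat-style aggregation used in the experiments above)."""
    w = layer_weights(len(layers), decay)
    return np.concatenate([wi * e for wi, e in zip(w, layers)])
```

Under this sketch, choosing L simply changes how many weighted layer embeddings enter the concatenation, which is why an overly large L lets low-relevance high-order neighbors leak into the representation.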
5.10. Recommendation Case Analysis
5.11. Limitations of the Model
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Nesmaoui, R.; Louhichi, M.; Lazaae, M. A Collaborative Filtering Movies Recommendation System based on Graph Neural Network. Procedia Comput. Sci. 2023, 220, 456–461. [Google Scholar] [CrossRef]
- Yang, S.; Du, Q.; Zhu, G.; Cao, J.; Chen, L. Balanced influence maximization in social networks based on deep reinforcement learning. Neural Netw. 2024, 169, 334–351. [Google Scholar] [CrossRef]
- Dai, G.; Wang, X.; Zou, X.; Liu, C.; Cen, S. MRGAT: Multi-Relational Graph Attention Network for knowledge graph completion. Neural Netw. 2022, 154, 234–245. [Google Scholar] [CrossRef] [PubMed]
- Yang, S.; Du, Q.; Zhu, G.; Cao, J.; Qin, W. Neural attentive influence maximization model in social networks via reverse influence sampling on historical behavior sequences. Expert Syst. Appl. 2024, 249, 123491. [Google Scholar] [CrossRef]
- Yan, X.; Gu, C.; Feng, Y.; Han, J. Predicting Drug-drug Interaction with Graph Mutual Interaction Attention Mechanism. Methods 2024, 223, 16–25. [Google Scholar] [CrossRef] [PubMed]
- Liu, T.; Zhang, X.; Wang, W.; Mu, W. KAT: Knowledge-aware attentive recommendation model integrating two-terminal neighbor features. Int. J. Mach. Learn. Cybern. 2024, 15, 4941–4958. [Google Scholar] [CrossRef]
- Pham, H.V.; Tuan, N.T.; Tuan, L.M.; Moore, P. IDGCN: A Proposed Knowledge Graph Embedding With Graph Convolution Network For Context-Aware Recommendation Systems. J. Organ. Comput. Electron. Commer. 2024. [Google Scholar] [CrossRef]
- Li, X.; Xie, Q.; Zhu, Q.Y.; Ren, K.; Sun, J.Z. Knowledge graph-based recommendation method for cold chain logistics. Expert Syst. Appl. 2023, 227, 120230. [Google Scholar] [CrossRef]
- Bertram, N.; Dunkel, J.; Hermoso, R. I am all EARS: Using open data and knowledge graph embeddings for music recommendations. Expert Syst. Appl. 2023, 229, 120347. [Google Scholar] [CrossRef]
- Zhang, F.; Yuan, N.; Lian, D.; Xie, X.; Ma, W. Collaborative knowledge base embedding for recommender system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
- Khan, N.; Ma, Z.; Ma, R.; Polat, K. Continual knowledge graph embedding enhancement for joint interaction-based next click recommendation. Knowl.-Based Syst. 2024, 304, 112475. [Google Scholar] [CrossRef]
- Wang, H.; Zhang, F.; Xie, X.; Guo, M. DKN: Deep knowledge-aware network for news recommendation. In Proceedings of the 2018 World Wide Web Conference, Lyon, France, 23–27 April 2018. [Google Scholar]
- He, Q.; Liu, S.; Liu, Y. Optimal Recommendation Models Based on Knowledge Representation and Graph Attention Network. IEEE Access 2023, 11, 19809–19818. [Google Scholar] [CrossRef]
- Cao, Y.; Wang, X.; He, X.; Hu, Z.; Chua, T.S. Unifying Knowledge Graph Learning and Recommendation: Towards a Better Understanding of User Preferences. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019. [Google Scholar]
- Zhang, J.; Zain, A.M.; Zhou, K.; Chen, X.; Zhang, R.M. A review of recommender systems based on knowledge graph embedding. Expert Syst. Appl. 2024, 250, 123876. [Google Scholar] [CrossRef]
- Jiang, L.; Yan, G.; Luo, H.; Chang, W. Improved Collaborative Recommendation Model: Integrating Knowledge Embedding and Graph Contrastive Learning. Electronics 2023, 12, 4238. [Google Scholar] [CrossRef]
- Wang, X.; Wang, D.; Xu, C.; He, C.; Cao, Y. Explainable reasoning over knowledge graphs for recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, Vancouver, BC, Canada, 2–7 October 2018. [Google Scholar]
- Sun, Z.; Yang, J.; Zhang, J.; Bozzon, A.; Huang, L.K. Recurrent knowledge graph embedding for effective recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, Vancouver, BC, Canada, 2–7 October 2018. [Google Scholar]
- Hu, B.; Shi, C.; Zhao, W.X.; Yu, P.S. Leveraging Meta-path based Context for Top-N Recommendation with A Neural Co-Attention Model. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018. [Google Scholar]
- Luo, Z.; Zhang, T. An Interpretable Movie Recommendation Approach Fusing User Features and Knowledge Graphs. J. Man. Eng. 2024, 38, 128–139. [Google Scholar]
- Lu, J.; Li, J.; Li, W. Heterogeneous propagation graph convolution network for a recommendation system based on a knowledge graph. Eng. Appl. Artif. Intell. 2024, 138, 109395. [Google Scholar] [CrossRef]
- Yang, S.; Du, Q.; Zhu, G.; Cao, J.; Qin, W. Extending influence maximization by optimizing the network topology. Expert Syst. Appl. 2024, 249, 123491. [Google Scholar] [CrossRef]
- Wang, H.; Zhang, F.; Wang, J.; Zhao, M.; Li, W. RippleNet: Propagating user preferences on the knowledge graph for recommender systems. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Turin, Italy, 22–26 October 2018. [Google Scholar]
- Wang, H.; Zhao, M.; Xie, X.; Li, W.; Guo, M. Knowledge graph convolutional networks for recommender systems. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019. [Google Scholar]
- Wang, X.; He, X.; Cao, Y.; Meng, L.; Chua, T.S. KGAT: Knowledge Graph Attention Network for Recommendation. In Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019. [Google Scholar]
- Wang, Z.; Lin, G.; Tan, H.; Chen, Q.; Liu, X. CKAN: Collaborative knowledge-aware attentive network for recommender systems. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, New York, NY, USA, 25–30 July 2020. [Google Scholar]
- Huang, W.; Wu, J.; Song, W.; Wang, Z. Cross attention fusion knowledge graph optimized recommendation. Appl. Intell. 2022, 52, 10297–10306. [Google Scholar] [CrossRef]
- Li, S.; Xue, Q.; Wang, P. MDAR: A Knowledge-Graph-Enhanced Multi-Task Recommendation System Based on a DeepAFM and a Relation-Fused Multi-Head Graph Attention Network. Appl. Sci. 2023, 13, 8697. [Google Scholar] [CrossRef]
- Duan, H.; Liu, P.; Ding, Q. RFAN: Relation-fused multi-head attention network for knowledge graph enhanced recommendation. Appl. Intell. 2023, 53, 1068–1083. [Google Scholar] [CrossRef]
- Guo, L.; Liu, T.; Zhou, S.; Tang, H.; Zheng, X. Knowledge Graph-Based Personalized Multitask Enhanced Recommendation. IEEE Trans. Comput. Soc. Syst. 2024, 11, 7685–7697. [Google Scholar] [CrossRef]
- Yang, S.; Du, Q.; Zhu, G.; Cao, J.; Qin, W. GAA-PPO: A novel graph adversarial attack method by incorporating proximal policy optimization. Neurocomputing 2023, 557, 126707. [Google Scholar] [CrossRef]
- Yang, S.; Du, Q.; Zhu, G.; Cao, J.; Wang, Y. MVE-FLK: A multi-task legal judgment prediction via multi-view encoder fusing legal keywords. Knowl.-Based Syst. 2022, 239, 107960. [Google Scholar] [CrossRef]
- Liu, J.; Wang, W.; Yi, B. Contrastive multi-interest graph attention network for knowledge-aware recommendation. Expert Syst. Appl. 2024, 225, 124748. [Google Scholar] [CrossRef]
- Hu, Z.; Xia, F. Multi-stream graph attention network for recommendation with knowledge graph. J. Web Semant. 2024, 84, 100831. [Google Scholar] [CrossRef]
- Kannikaklalang, N.; Thanviset, W.; Wongthanavasu, S. BiGCAN: A novel SRS-based bidirectional graph Convolution Attention Network for dynamic user preference and next-item recommendation. Expert Syst. Appl. 2025, 265, 126016. [Google Scholar] [CrossRef]
- Li, B.; Zhu, L. Turing instability analysis of a reaction–diffusion system for rumor propagation in continuous space and complex networks. Inf. Process. Manag. 2024, 61, 103621. [Google Scholar] [CrossRef]
| Dataset | Users | Items | Interactions | Entities | Relations | Triples |
|---|---|---|---|---|---|---|
| Last.FM | 1872 | 3846 | 42,346 | 9366 | 60 | 15,518 |
| Book-Crossing | 17,860 | 14,976 | 139,746 | 77,903 | 25 | 151,500 |
| Model | Last.FM AUC | Last.FM F1 | Book-Crossing AUC | Book-Crossing F1 |
|---|---|---|---|---|
| RippleNet | 0.776 (−7.8%) | 0.702 (−8.1%) | 0.721 (−3.7%) | 0.647 (−3.6%) |
| KGCN | 0.802 (−5.2%) | 0.708 (−7.5%) | 0.684 (−7.4%) | 0.631 (−5.2%) |
| KGAT | 0.829 (−2.5%) | 0.742 (−4.1%) | 0.731 (−2.7%) | 0.654 (−2.9%) |
| KAT | 0.810 (−4.4%) | 0.724 (−5.9%) | 0.716 (−4.2%) | 0.648 (−3.5%) |
| KPMER | 0.799 (−5.5%) | 0.712 (−7.1%) | 0.732 (−2.6%) | 0.659 (−2.4%) |
| RFAN | 0.839 (−1.5%) | 0.764 (−1.9%) | 0.746 (−1.2%) | 0.672 (−1.1%) |
| MDAR | 0.842 (−1.2%) | 0.768 (−1.5%) | 0.753 (−0.5%) | 0.675 (−0.8%) |
| KGAEUF | 0.854 | 0.783 | 0.758 | 0.683 |
| Model | Last.FM AUC | Last.FM F1 | Book-Crossing AUC | Book-Crossing F1 |
|---|---|---|---|---|
| KGAEUF1 | 0.834 | 0.770 | 0.743 | 0.668 |
| KGAEUF2 | 0.836 | 0.773 | 0.745 | 0.670 |
| KGAEUF | 0.854 | 0.783 | 0.758 | 0.683 |
| Aggregator | Last.FM AUC | Last.FM F1 | Book-Crossing AUC | Book-Crossing F1 |
|---|---|---|---|---|
| pool | 0.812 | 0.753 | 0.712 | 0.651 |
| sum | 0.814 | 0.756 | 0.723 | 0.663 |
| concat | 0.854 | 0.783 | 0.758 | 0.683 |
| k | Last.FM AUC | Last.FM F1 | Book-Crossing AUC | Book-Crossing F1 |
|---|---|---|---|---|
| 2 | 0.822 | 0.760 | 0.735 | 0.664 |
| 4 | 0.836 | 0.774 | 0.745 | 0.670 |
| 8 | 0.854 | 0.783 | 0.750 | 0.675 |
| 16 | 0.848 | 0.774 | 0.758 | 0.683 |
Share and Cite
Wang, H.; Li, Q.; Luo, H.; Tang, Y. The Graph Attention Recommendation Method for Enhancing User Features Based on Knowledge Graphs. Mathematics 2025, 13, 390. https://doi.org/10.3390/math13030390