Social Recommendation Based on Multi-Auxiliary Information Contrastive Learning
Abstract
1. Introduction
- We mine item association relationships, user social relationships, and user–item interaction relationships as auxiliary information to alleviate data sparsity. Unlike existing graph-neural-network social recommendation models, which propagate interaction information and social information independently, we design a propagation scheme in which the multiple kinds of auxiliary information jointly shape user/item embeddings (a minimal propagation sketch follows this list).
- We design self-supervised auxiliary tasks for the social recommendation scenario to improve node embedding quality and further alleviate data sparsity: we construct several views from different combinations of auxiliary information and, via contrastive self-supervised learning, maximize the mutual information between node embeddings under different views.
- We conduct extensive experiments on two public datasets to demonstrate the effectiveness of the proposed model and analyze the benefits of auxiliary information and self-supervised tasks.
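To make the joint propagation idea in the first contribution concrete, the following is a minimal Python/NumPy sketch of one propagation layer in which user embeddings are updated from both interacted items and social neighbors, while item embeddings are updated from both interacting users and associated items. The matrix names, the equal 0.5/0.5 weighting, and the row-normalization are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def normalize_rows(m):
    """Row-normalize an adjacency matrix; rows without neighbors stay zero."""
    deg = m.sum(axis=1, keepdims=True)
    return np.divide(m, deg, out=np.zeros_like(m, dtype=float), where=deg > 0)

def propagate(user_emb, item_emb, R, S, A):
    """One illustrative propagation layer.

    R: |U| x |I| user-item interaction matrix,
    S: |U| x |U| user-user social matrix,
    A: |I| x |I| item-item association matrix.
    Users aggregate interacted items and social neighbors at the same time;
    items aggregate interacting users and associated items at the same time.
    The 0.5/0.5 weighting is an assumption for illustration.
    """
    new_user = 0.5 * normalize_rows(R) @ item_emb + 0.5 * normalize_rows(S) @ user_emb
    new_item = 0.5 * normalize_rows(R.T) @ user_emb + 0.5 * normalize_rows(A) @ item_emb
    return new_user, new_item

# Toy usage with random data.
rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 8
R = (rng.random((n_users, n_items)) < 0.4).astype(float)
S = (rng.random((n_users, n_users)) < 0.3).astype(float)
A = (rng.random((n_items, n_items)) < 0.3).astype(float)
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))
user_emb, item_emb = propagate(user_emb, item_emb, R, S, A)
```

Stacking several such layers and combining the per-layer embeddings, as in LightGCN-style encoders, would give the final representations; the exact combination used in the paper may differ.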
2. Related Work
2.1. Graph Neural Networks (GNNs)
2.2. Contrastive Self-Supervised Learning
- The auxiliary task performs data augmentation on the original samples. The original samples and their positive/negative counterparts are mapped to low-dimensional embeddings by the encoder, and a contrastive loss is constructed from these embeddings.
- The main task encodes the original samples into low-dimensional embeddings and constructs the main loss. The contrastive loss is then jointly optimized with the main loss, which yields higher-quality embeddings (see the loss sketch after this list).
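As a concrete illustration of how a contrastive auxiliary loss is typically built, the sketch below computes an InfoNCE-style loss between two views of the same node embeddings. The temperature, the way the views are perturbed, and the batch construction are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def info_nce(view_a, view_b, temperature=0.2):
    """Contrastive (InfoNCE-style) loss between two views of the same nodes.

    Row i of view_a and row i of view_b are embeddings of the same node
    (positive pair); all other rows in the batch act as negatives.
    """
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                                  # cosine similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                              # agreement on the diagonal

# Toy usage: two noisy views of the same node embeddings.
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 16))
view_a = base + 0.1 * rng.normal(size=base.shape)
view_b = base + 0.1 * rng.normal(size=base.shape)
ssl_loss = info_nce(view_a, view_b)
```

In training, this self-supervised loss would be added to the main recommendation loss with a weighting coefficient (see the λ1 discussion in Section 5.3.4).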
3. Problem Analysis
- They propagate user–item interaction information and social information in two separate graphs, so node embeddings are not shaped by both kinds of information simultaneously, which would be more realistic.
- As auxiliary information, social relationships alleviate the data sparsity problem in recommender systems, but their effect is limited, and more auxiliary information is needed, for example item associations mined from existing interactions (a co-interaction sketch follows this list).
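One simple and common way to obtain such extra auxiliary information is to derive item–item association edges from co-interaction counts, as sketched below. This is a hypothetical construction given for illustration, not necessarily the one used in Section 4.1.

```python
import numpy as np

def item_association(R, threshold=2):
    """Derive an item-item association matrix from co-interaction counts.

    R is the |U| x |I| binary user-item interaction matrix; two items are
    linked when at least `threshold` users interacted with both.
    The thresholding rule is an illustrative assumption.
    """
    co = R.T @ R                  # co-interaction counts between item pairs
    np.fill_diagonal(co, 0)       # drop self-associations
    return (co >= threshold).astype(float)

# Toy usage: 4 users x 5 items.
R = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 0],
              [1, 0, 0, 1, 1]], dtype=float)
A = item_association(R, threshold=2)
```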
4. Methodology
4.1. Heterogeneous Network Construction
4.2. Main Tasks
4.3. Self-Supervised Auxiliary Tasks
5. Experiments
5.1. The Datasets
5.2. Baselines and Metrics
- BPR: Bayesian personalized ranking [35], a classical pairwise ranking recommendation algorithm. Based on user–item interaction information, it assumes that the target user prefers interacted items over non-interacted items (a minimal loss sketch follows this list).
- SBPR: A classic social recommendation algorithm [36] that integrates social relations to optimize the item preference priority of target users based on BPR.
- DiffNet: A social recommendation algorithm based on a graph neural network [37] that simulates the social influence of dynamic propagation in user social networks.
- LightGCN: A recommendation algorithm based on a graph convolutional neural network [13] that learns node embedding in a simple convolution mode suitable for collaborative filtering.
- SGL: A recommendation algorithm based on self-supervised learning and a graph convolution neural network [27] that creates multiple views by randomly changing the graph structure to improve the embedding quality.
- SEPT: A graph convolutional network recommendation algorithm based on self-supervised learning and co-training (tri-training) [34]; co-training identifies more suitable positive/negative samples for the self-supervised task, yielding more accurate embeddings.
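To make the BPR baseline concrete, the snippet below computes the standard BPR pairwise loss for a single (user, positive item, negative item) triple with dot-product scoring. The scoring function and the use of one sampled negative per positive are the usual choices and are shown here only as an illustrative sketch.

```python
import numpy as np

def bpr_loss(user_vec, pos_item_vec, neg_item_vec):
    """BPR pairwise loss: the interacted (positive) item should score higher
    than a sampled non-interacted (negative) item for the same user."""
    pos_score = user_vec @ pos_item_vec
    neg_score = user_vec @ neg_item_vec
    return -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))  # -log sigmoid

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
u, i_pos, i_neg = rng.normal(size=(3, 16))
loss = bpr_loss(u, i_pos, i_neg)
```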
5.3. Results
5.3.1. Overall Comparison
- SlightGCN’s recommendation performance is significantly better than that of the other six baseline models on both datasets. Specifically, its evaluation metrics on DoubanMovie and DoubanBook improve by 1.84–4.26% and 2.08–3.30%, respectively, over the second-best model. In addition, the interaction density of the DoubanMovie dataset is higher than that of DoubanBook, so all models perform significantly better on DoubanMovie than on DoubanBook.
- SBPR outperforms BPR on some metrics, indicating that integrating users’ social relationships on top of BPR alleviates data sparsity to a certain extent. However, SBPR’s simple strategy of integrating social relationships by directly re-ordering item preferences does not accurately model how social relationships affect user preferences. DiffNet uses a multi-layer graph neural network to simulate the dynamic propagation of friends’ influence through the social network; it outperforms SBPR thanks to the graph neural network’s ability to capture graph relations and the dynamic propagation of social influence. LightGCN’s simple convolution scheme is well suited to collaborative filtering, and its recommendation performance is markedly higher than that of the earlier models. SEPT takes LightGCN as the graph encoder and uses co-training to find more suitable positive and negative samples for self-supervised learning, improving on LightGCN. SGL also uses LightGCN as the graph encoder and designs its self-supervised auxiliary task around random perturbations of the graph structure, which considerably improves its recommendation performance.
- SlightGCN performs best on both datasets. It has the following advantages over the other models. First, it uses not only user–item interaction relationships and user social relationships but also item association relationships mined from existing information, forming multiple kinds of auxiliary information. Second, it adopts a more appropriate convolution scheme in which users are affected by both user–item interactions and social relationships, while items are affected by both user–item interactions and item associations. Third, it designs multi-view self-supervised auxiliary tasks to learn more accurate user/item embeddings.
5.3.2. Cold-Start User Experiment
5.3.3. The Benefit of Self-Supervised Auxiliary Tasks
5.3.4. Parameter Sensitive Analysis
- The optimal λ1 = 0.009 on DoubanMovie is far smaller than the optimal λ1 = 0.05 on DoubanBook, indicating that the model relies much more heavily on the self-supervised auxiliary tasks on DoubanBook than on DoubanMovie. The reason is that the user–item interaction density of DoubanBook is lower than that of DoubanMovie, so to learn higher-quality node embeddings and improve recommendation performance on DoubanBook, the model needs to obtain more information from the self-supervised auxiliary tasks (see the objective sketch after this list).
- The improvement of the four metrics on the sparser DoubanBook dataset is much more significant than that on DoubanMovie, indicating the effectiveness of the self-supervised auxiliary tasks in alleviating the data sparsity problem.
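For clarity, the joint objective behind this sensitivity analysis can be written as the main recommendation loss plus λ1 times the self-supervised loss. The sketch below shows this weighting together with an illustrative sweep over candidate λ1 values; the placeholder loss values are made up, and only 0.009 and 0.05 correspond to the optima reported above.

```python
def joint_loss(main_loss, ssl_loss, lambda_1):
    """Total training objective: main recommendation loss plus a weighted
    self-supervised contrastive loss (regularization terms omitted)."""
    return main_loss + lambda_1 * ssl_loss

# Illustrative sensitivity sweep; 0.009 and 0.05 are the reported optima,
# the loss magnitudes are placeholders.
for lambda_1 in (0.001, 0.005, 0.009, 0.01, 0.05, 0.1):
    total = joint_loss(main_loss=0.35, ssl_loss=4.2, lambda_1=lambda_1)
    print(f"lambda_1={lambda_1}: total loss {total:.3f}")
```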
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- [1] Aljunid, M.F.; Huchaiah, M.D. An efficient hybrid recommendation model based on collaborative filtering recommender systems. CAAI Trans. Intell. Technol. 2021, 6, 13.
- [2] Meng, X.W.; Liu, S.D.; Zhang, Y.J.; Hu, X. Research on Social Recommender Systems. J. Softw. 2015, 26, 1356–1372.
- [3] Zhang, Y.J.; Dong, Z.; Meng, X.W. Research on personalized advertising recommendation systems and their applications. J. Comput. 2021, 44, 531–563.
- [4] Shang, M.; Luo, X.; Liu, Z.; Chen, J.; Yuan, Y.; Zhou, M. Randomized latent factor model for high-dimensional and sparse matrices from industrial applications. IEEE/CAA J. Autom. Sin. 2018, 6, 131–141.
- [5] Sinha, R.R.; Swearingen, K. Comparing recommendations made by online systems and friends. DELOS 2001, 106. Available online: https://www.researchgate.net/publication/2394806_Comparing_Recommendations_Made_by_Online_Systems_and_Friends (accessed on 28 April 2022).
- [6] Iyengar, R.; Han, S.; Gupta, S. Do Friends Influence Purchases in a Social Network? Harvard Business School Marketing Unit Working Paper No. 09-123; Harvard Business School: Boston, MA, USA, 2009.
- [7] Kang, M.; Bi, Y.; Wu, Z.; Wang, J.; Xiao, J. A heterogeneous conversational recommender system for financial products. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; pp. 26–30.
- [8] Yu, J.; Gao, M.; Li, J.; Yin, H.; Liu, H. Adaptive implicit friends identification over heterogeneous network for social recommendation. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 357–366.
- [9] Zhao, H.; Zhou, Y.; Song, Y.; Lee, D.L. Motif enhanced recommendation over heterogeneous information network. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019.
- [10] Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), Toulon, France, 24–26 April 2017.
- [11] Berg, R.; Kipf, T.N.; Welling, M. Graph convolutional matrix completion. In Proceedings of the 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2018), London, UK, 19–23 August 2018.
- [12] Wang, X.; He, X.; Wang, M.; Feng, F.; Chua, T.-S. Neural graph collaborative filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; pp. 165–174.
- [13] He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; Wang, M. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, China, 25–30 July 2020; pp. 639–648.
- [14] Wu, S.; Sun, F.; Zhang, W.; Xie, X.; Cui, B. Graph neural networks in recommender systems: A survey. arXiv 2020, arXiv:2011.02260.
- [15] Fan, W.; Ma, Y.; Li, Q.; He, Y.; Zhao, E.; Tang, J.; Yin, D. Graph neural networks for social recommendation. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 417–426.
- [16] Wu, Q.; Zhang, H.; Gao, X.; He, P.; Weng, P.; Gao, H.; Chen, G. Dual graph attention networks for deep latent representation of multifaceted social effects in recommender systems. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2091–2102.
- [17] Song, W.; Xiao, Z.; Wang, Y.; Charlin, L.; Zhang, M.; Tang, J. Session-based social recommendation via dynamic graph attention networks. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, VIC, Australia, 11–15 February 2019; pp. 555–563.
- [18] Yu, J.; Yin, H.; Li, J.; Gao, M.; Huang, Z.; Cui, L. Enhance social recommendation with adversarial graph convolutional networks. IEEE Trans. Knowl. Data Eng. 2020, 34, 3727–3739.
- [19] Yu, J.; Yin, H.; Li, J.; Wang, Q.; Hung, N.Q.; Zhang, X. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 413–424.
- [20] Huang, C.; Xu, H.; Xu, Y.; Dai, P.; Xia, L.; Lu, M.; Bo, L.; Xing, H.; Lai, X.; Ye, Y. Knowledge-aware coupled graph neural network for social recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Conference, 2–9 February 2021; Volume 35, pp. 4115–4122.
- [21] Yang, L.; Liu, Z.; Dou, Y.; Ma, J.; Yu, P.S. ConsisRec: Enhancing GNN for social recommendation via consistent neighbor aggregation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, 11–15 July 2021; pp. 2141–2145.
- [22] Blau, Y.; Michaeli, T. Rethinking lossy compression: The rate-distortion-perception tradeoff. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 10–15 June 2019; pp. 675–685.
- [23] Bojanowski, P.; Grave, E.; Joulin, A.; Mikolov, T. Enriching word vectors with subword information. Trans. Assoc. Comput. Linguist. 2017, 5, 135–146.
- [24] Brock, A.; Donahue, J.; Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In Proceedings of the International Conference on Learning Representations (ICLR 2019). Available online: https://openreview.net/forum?id=B1xsqj09Fm (accessed on 28 April 2022).
- [25] Qiu, J.; Chen, Q.; Dong, Y.; Zhang, J.; Yang, H.; Ding, M.; Wang, K.; Tang, J. GCC: Graph contrastive coding for graph neural network pre-training. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, CA, USA, 6–10 July 2020; pp. 1150–1160.
- [26] Velickovic, P.; Fedus, W.; Hamilton, W.L.; Liò, P.; Bengio, Y.; Hjelm, R.D. Deep Graph Infomax. In Proceedings of the International Conference on Learning Representations (ICLR 2019), Poster.
- [27] Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; Xie, X. Self-supervised graph learning for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, 11–15 July 2021; pp. 726–735.
- [28] Ma, J.; Zhou, C.; Yang, H.; Cui, P.; Wang, X.; Zhu, W. Disentangled self-supervision in sequential recommenders. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, CA, USA, 6–10 July 2020; pp. 483–491.
- [29] Yao, T.; Yi, X.; Cheng, D.Z.; Felix, X.Y.; Menon, A.K.; Hong, L.; Chi, E.H.; Tjoa, S.; Kang, J.; Ettinger, E. Self-Supervised Learning for Deep Models in Recommendations. 2020. Available online: https://openreview.net/forum?id=BCHN5z8nMRW (accessed on 13 May 2022).
- [30] Zhou, K.; Wang, H.; Zhao, W.X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; Wen, J.R. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Galway, Ireland, 19–23 October 2020; pp. 1893–1902.
- [31] Long, X.; Huang, C.; Xu, Y.; Xu, H.; Dai, P.; Xia, L.; Bo, L. Social recommendation with self-supervised metagraph informax network. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 1160–1169.
- [32] Liu, Z.; Chen, Y.; Li, J.; Yu, P.S.; McAuley, J.; Xiong, C. Contrastive self-supervised sequential recommendation with robust augmentation. arXiv 2021, arXiv:2108.06479.
- [33] Wu, J.; Fan, W.; Chen, J.; Liu, S.; Li, Q.; Tang, K. Disentangled contrastive learning for social recommendation. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; pp. 4570–4574.
- [34] Yu, J.; Yin, H.; Gao, M.; Xia, X.; Zhang, X.; Viet Hung, N.Q. Socially-aware self-supervised tri-training for recommendation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 2084–2092.
- [35] Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, 18–21 June 2009; pp. 452–461.
- [36] Zhao, T.; McAuley, J.; King, I. Leveraging social connections to improve personalized ranking for collaborative filtering. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, Shanghai, China, 3–7 November 2014; pp. 261–270.
- [37] Wu, L.; Sun, P.; Fu, Y.; Hong, R.; Wang, X.; Wang, M. A neural influence diffusion model for social recommendation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; pp. 235–244.
Statistics of the two datasets.

Dataset | Number of Users | Number of Items | Number of User–Item Interactions | Number of Social Connections | Interaction Density
---|---|---|---|---|---
DoubanMovie | 13,367 | 12,677 | 1,067,278 | 4,085 | 0.63%
DoubanBook | 13,024 | 22,347 | 792,062 | 84,575 | 0.27%
Overall comparison on the DoubanMovie dataset (Improvement is relative to the second-best model).

Metric | BPR | SBPR | DiffNet | SEPT | LightGCN | SGL | SlightGCN | Improvement
---|---|---|---|---|---|---|---|---
Prec@10 | 0.1243 | 0.1175 | 0.1311 | 0.1617 | 0.1602 | 0.1642 | 0.1712 | 4.26% |
Prec@20 | 0.1049 | 0.0994 | 0.1119 | 0.1343 | 0.1331 | 0.1352 | 0.1407 | 4.07% |
Rec@10 | 0.0784 | 0.0903 | 0.1012 | 0.1218 | 0.1186 | 0.1252 | 0.1281 | 2.32% |
Rec@20 | 0.1281 | 0.1426 | 0.1647 | 0.1862 | 0.1833 | 0.1890 | 0.1931 | 2.17% |
F1@10 | 0.0962 | 0.1021 | 0.1142 | 0.1389 | 0.1363 | 0.1421 | 0.1465 | 3.10% |
F1@20 | 0.1153 | 0.1171 | 0.1333 | 0.1560 | 0.1542 | 0.1576 | 0.1628 | 3.30% |
NDCG@10 | 0.1493 | 0.1480 | 0.1642 | 0.2061 | 0.2034 | 0.2110 | 0.2166 | 2.65% |
NDCG@20 | 0.1498 | 0.1517 | 0.1706 | 0.2070 | 0.2043 | 0.2122 | 0.2161 | 1.84% |
Overall comparison on the DoubanBook dataset (Improvement is relative to the second-best model).

Metric | BPR | SBPR | DiffNet | SEPT | LightGCN | SGL | SlightGCN | Improvement
---|---|---|---|---|---|---|---|---
Prec@10 | 0.0715 | 0.0664 | 0.0727 | 0.0942 | 0.0876 | 0.1061 | 0.1096 | 3.30% |
Prec@20 | 0.0561 | 0.0539 | 0.0583 | 0.0743 | 0.0702 | 0.0826 | 0.0850 | 2.91% |
Rec@10 | 0.0696 | 0.0725 | 0.0830 | 0.1008 | 0.0909 | 0.1079 | 0.1104 | 2.32% |
Rec@20 | 0.1027 | 0.1093 | 0.1239 | 0.1480 | 0.1372 | 0.1587 | 0.1620 | 2.08% |
F1@10 | 0.0705 | 0.0693 | 0.0775 | 0.0974 | 0.0892 | 0.1070 | 0.1100 | 2.80% |
F1@20 | 0.0726 | 0.0722 | 0.0793 | 0.0989 | 0.0929 | 0.1086 | 0.1115 | 2.67% |
NDCG@10 | 0.0956 | 0.0921 | 0.1019 | 0.1320 | 0.1195 | 0.1496 | 0.1541 | 3.01% |
NDCG@20 | 0.0980 | 0.0980 | 0.1090 | 0.1371 | 0.1261 | 0.1532 | 0.1577 | 2.94% |
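For reference, the Prec@K, Rec@K, F1@K, and NDCG@K values reported in the two tables above are typically computed per user from the top-K ranked list and then averaged over users. The sketch below shows a generic implementation of these metrics; it is not the authors' evaluation code.

```python
import numpy as np

def topk_metrics(ranked_items, relevant_items, k):
    """Precision, recall, F1, and NDCG at k for one user's ranked list."""
    hits = [1.0 if item in relevant_items else 0.0 for item in ranked_items[:k]]
    prec = sum(hits) / k
    rec = sum(hits) / max(len(relevant_items), 1)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
    dcg = sum(h / np.log2(rank + 2) for rank, h in enumerate(hits))
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(ideal_hits))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return prec, rec, f1, ndcg

# Toy usage: ranked list of item ids versus the user's relevant items.
print(topk_metrics(ranked_items=[3, 7, 1, 9, 5], relevant_items={7, 5, 2}, k=5))
```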
Comparison of SlightGCN with its variants on the DoubanMovie dataset.

Metric | LightGCN | Variant-U | Variant-I | Variant-UI | SlightGCN
---|---|---|---|---|---
Prec@10 | 0.1602 | 0.1635 | 0.1630 | 0.1637 | 0.1712 |
Prec@20 | 0.1331 | 0.1348 | 0.1343 | 0.1350 | 0.1407 |
Rec@10 | 0.1186 | 0.1217 | 0.1223 | 0.1226 | 0.1281 |
Rec@20 | 0.1833 | 0.1876 | 0.1869 | 0.1881 | 0.1931 |
F1@10 | 0.1363 | 0.1395 | 0.1397 | 0.1402 | 0.1465 |
F1@20 | 0.1542 | 0.1569 | 0.1563 | 0.1572 | 0.1628 |
NDCG@10 | 0.2034 | 0.2091 | 0.2082 | 0.2095 | 0.2166 |
NDCG@20 | 0.2043 | 0.2101 | 0.2096 | 0.2104 | 0.2161 |
Comparison of SlightGCN with its variants on the DoubanBook dataset.

Metric | LightGCN | Variant-U | Variant-I | Variant-UI | SlightGCN
---|---|---|---|---|---
Prec@10 | 0.0876 | 0.0890 | 0.0893 | 0.0897 | 0.1096 |
Prec@20 | 0.0702 | 0.0712 | 0.0719 | 0.0726 | 0.0850 |
Rec@10 | 0.0909 | 0.0927 | 0.0934 | 0.0941 | 0.1104 |
Rec@20 | 0.1372 | 0.1386 | 0.1390 | 0.1389 | 0.1620 |
F1@10 | 0.0892 | 0.0908 | 0.0913 | 0.0918 | 0.1100 |
F1@20 | 0.0929 | 0.0941 | 0.0948 | 0.0954 | 0.1115 |
NDCG@10 | 0.1195 | 0.1223 | 0.1226 | 0.1238 | 0.1541 |
NDCG@20 | 0.1261 | 0.1285 | 0.1290 | 0.1296 | 0.1577 |