A Comparative Study of Rank Aggregation Methods in Recommendation Systems
Abstract
1. Introduction
2. Related Works
3. Background of the Research
3.1. Recommender Systems
3.2. Rank Aggregation
4. Experimental Evaluation
4.1. Experimental Setup
4.2. Parameters Tuning
4.3. Evaluation Metrics
4.4. Evaluation Protocol
5. Results
5.1. Results on MovieLens 100k
5.2. Results on MovieLens 1M
6. Conclusions
- Based on the analysis in Section 5.1, for experiments on the MovieLens 100k dataset the following unsupervised aggregation methods are worth considering: LogISR, Bordafuse, and CombMNZ. Among the supervised aggregation methods, RRF, Slidefuse, Bayesfuse, RBC, LognISR, and Posfuse are worth considering.
- Based on the analysis in Section 5.2, for experiments on the MovieLens 1M dataset the following unsupervised aggregation methods are worth considering: LogISR, Bordafuse, and CombMNZ. Among the supervised aggregation methods, practically all of the methods tested gave statistically significant results on the NDCG@10 and P@10 measures.
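As a minimal illustration of one of the unsupervised methods recommended above, the following sketch implements a plain Borda count (the idea behind Bordafuse): an item at position p in a list of length n earns n − p points, and items are ordered by total points. The item identifiers and rankings are hypothetical, and this is not the paper's implementation.

```python
def borda_fuse(rankings):
    """Aggregate ranked lists by Borda count: an item at position p
    (0-based) in a ranking of length n receives n - p points; items
    are then ordered by descending total score."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - pos)
    # Break score ties alphabetically so the output is deterministic
    return sorted(scores, key=lambda item: (-scores[item], item))

# Hypothetical top-3 lists produced by three recommendation algorithms
rankings = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_fuse(rankings))  # ['A', 'B', 'C'] (A scores 8, B 6, C 4)
```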
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Symbol | Description
---|---
u | Generic user
 | Specific user
 | Active user in the system for which recommendations are generated
U | The set of all users
i | Generic item
 | Specific item
I | The set of all items
 | Specific recommendation algorithm
A | Set of n recommendation algorithms
 | Generic ranking
 | Ranking recommended to a user by an algorithm
 | The position of an item in a ranking
T | Set of n rankings
Appendix A
References
- Bawden, D.; Robinson, L. Information Overload: An Overview. In Oxford Encyclopedia of Political Decision Making; Oxford University Press: Oxford, UK, 2020.
- Wani, A.; Joshi, I.; Khandve, S.; Wagh, V.; Joshi, R. Evaluating Deep Learning Approaches for Covid19 Fake News Detection. In Combating Online Hostile Posts in Regional Languages during Emergency Situation; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 153–163.
- Burke, R.; Felfernig, A.; Göker, M.H. Recommender Systems: An Overview. AI Mag. 2011, 32, 13–18.
- Rafailidis, D.; Nanopoulos, A. Modeling Users Preference Dynamics and Side Information in Recommender Systems. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 782–792.
- Bennett, J.; Lanning, S. The Netflix Prize. In Proceedings of the KDD Cup and Workshop in Conjunction with KDD, San Jose, CA, USA, 12 August 2007.
- Deshpande, M.; Karypis, G. Item-Based Top-N Recommendation Algorithms. ACM Trans. Inf. Syst. 2004, 22, 143–177.
- Karatzoglou, A.; Baltrunas, L.; Shi, Y. Learning to Rank for Recommender Systems. In Proceedings of the 7th ACM Conference on Recommender Systems, Hong Kong, China, 12–16 October 2013; pp. 493–494.
- Steck, H. Evaluation of Recommendations: Rating-Prediction and Ranking. In Proceedings of the 7th ACM Conference on Recommender Systems, Hong Kong, China, 12–16 October 2013; pp. 213–220.
- Shani, G.; Gunawardana, A. Evaluating recommendation systems. In Recommender Systems Handbook; Springer: Berlin/Heidelberg, Germany, 2011; pp. 257–297.
- Anelli, V.W.; Bellogín, A.; Di Noia, T.; Jannach, D.; Pomo, C. Top-N Recommendation Algorithms: A Quest for the State-of-the-Art. In Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, Barcelona, Spain, 4–7 July 2022; pp. 121–131.
- Aggarwal, C.C. Advanced Topics in Recommender Systems. In Recommender Systems: The Textbook; Springer International Publishing: Cham, Switzerland, 2016; pp. 411–448.
- Oliveira, S.E.L.; Diniz, V.; Lacerda, A.; Merschmann, L.; Pappa, G.L. Is Rank Aggregation Effective in Recommender Systems? An Experimental Analysis. ACM Trans. Intell. Syst. Technol. 2020, 11.
- Beel, J.; Breitinger, C.; Langer, S.; Lommatzsch, A.; Gipp, B. Towards reproducibility in recommender-systems research. User Model. User-Adapt. Interact. 2016, 26, 69–101.
- Sun, Z.; Han, L.; Huang, W.; Wang, X.; Zeng, X.; Wang, M.; Yan, H. Recommender systems based on social networks. J. Syst. Softw. 2015, 99, 109–119.
- Dacrema, M.F.; Boglio, S.; Cremonesi, P.; Jannach, D. A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research. ACM Trans. Inf. Syst. 2021, 39, 1–49.
- Cremonesi, P.; Jannach, D. Progress in Recommender Systems Research: Crisis? What Crisis? AI Mag. 2022, 42, 43–54.
- List, C. Social Choice Theory. In The Stanford Encyclopedia of Philosophy, Spring 2022 ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2022.
- Dwork, C.; Kumar, R.; Naor, M.; Sivakumar, D. Rank Aggregation Methods for the Web. In Proceedings of the 10th International Conference on World Wide Web, Hong Kong, China, 1–5 May 2001; pp. 613–622.
- DeConde, R.P.; Hawley, S.; Falcon, S.; Clegg, N.; Knudsen, B.; Etzioni, R. Combining Results of Microarray Experiments: A Rank Aggregation Approach. Stat. Appl. Genet. Mol. Biol. 2006, 5.
- Fagin, R.; Kumar, R.; Sivakumar, D. Efficient Similarity Search and Classification via Rank Aggregation. In Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, San Diego, CA, USA, 9–12 June 2003; pp. 301–312.
- Lin, S. Rank aggregation methods. WIREs Comput. Stat. 2010, 2, 555–570.
- Smyth, B.; Cotter, P. Personalized TV listings service for the digital TV age. Knowl.-Based Syst. 2000, 13, 53–59.
- Torres, R.; McNee, S.; Abel, M.; Konstan, J.; Riedl, J. Enhancing digital libraries with TechLens. In Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries, Tucson, AZ, USA, 7–11 June 2004; pp. 228–236.
- Boratto, L.; Carta, S. State-of-the-Art in Group Recommendation and New Approaches for Automatic Identification of Groups. In Information Retrieval and Mining in Distributed Environments; Soro, A., Vargiu, E., Armano, G., Paddeu, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–20.
- Baltrunas, L.; Makcinskas, T.; Ricci, F. Group Recommendations with Rank Aggregation and Collaborative Filtering. In Proceedings of the Fourth ACM Conference on Recommender Systems (RecSys), Barcelona, Spain, 26–30 September 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 119–126.
- Tang, Y.; Tong, Q. BordaRank: A ranking aggregation based approach to collaborative filtering. In Proceedings of the 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Okayama, Japan, 26–29 June 2016; pp. 1–6.
- Yalcin, E.; Ismailoglu, F.; Bilge, A. An entropy empowered hybridized aggregation technique for group recommender systems. Expert Syst. Appl. 2021, 166, 114111.
- Bartholdi, J.; Tovey, C.A.; Trick, M.A. Voting Schemes for which It Can Be Difficult to Tell Who Won the Election. Soc. Choice Welf. 1989, 6, 157–165.
- Ribeiro, M.T.; Ziviani, N.; Moura, E.S.D.; Hata, I.; Lacerda, A.; Veloso, A. Multiobjective Pareto-Efficient Approaches for Recommender Systems. ACM Trans. Intell. Syst. Technol. 2015, 5, 1–20.
- Oliveira, S.; Diniz, V.; Lacerda, A.; Pappa, G.L. Evolutionary rank aggregation for recommender systems. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 255–262.
- Oliveira, S.; Diniz, V.; Lacerda, A.; Pappa, G.L. Multi-objective Evolutionary Rank Aggregation for Recommender Systems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
- Bałchanowski, M.; Boryczka, U. Aggregation of Rankings Using Metaheuristics in Recommendation Systems. Electronics 2022, 11, 369.
- Ricci, F.; Rokach, L.; Shapira, B. Recommender Systems: Introduction and Challenges. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Eds.; Springer: Boston, MA, USA, 2015; pp. 1–34.
- Bell, R.M.; Koren, Y.; Volinsky, C. All Together Now: A Perspective on the Netflix Prize. Chance 2010, 23, 24–29.
- Bell, R.M.; Koren, Y.; Volinsky, C. The BellKor Solution to the Netflix Prize; Technical Report; AT&T Labs: Atlanta, GA, USA, 2007; Available online: http://www.pzs.dstu.dp.ua/DataMining/recom/bibl/ProgressPrize2007_KorBell.pdf (accessed on 12 December 2022).
- Khatwani, S.; Chandak, M. Building Personalized and Non Personalized recommendation systems. In Proceedings of the 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), Pune, India, 9–10 September 2016; pp. 623–628.
- Pazzani, M.J.; Billsus, D. Content-Based Recommendation Systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Brusilovsky, P., Kobsa, A., Nejdl, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 325–341.
- Schafer, J.B.; Frankowski, D.; Herlocker, J.; Sen, S. Collaborative Filtering Recommender Systems. In The Adaptive Web; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4321, pp. 291–324.
- Aggarwal, C.C. Knowledge-Based Recommender Systems. In Recommender Systems: The Textbook; Springer International Publishing: Cham, Switzerland, 2016; pp. 167–197.
- Çano, E.; Morisio, M. Hybrid recommender systems: A systematic literature review. Intell. Data Anal. 2017, 21, 1487–1524.
- Koren, Y.; Bell, R.; Volinsky, C. Matrix Factorization Techniques for Recommender Systems. Computer 2009, 42, 30–37.
- Piatetsky-Shapiro, G. Interview with Simon Funk. SIGKDD Explor. 2007, 9, 38–40.
- Ekstrand, M.D. LensKit for Python: Next-Generation Software for Recommender Systems Experiments. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM), Galway, Ireland, 19–23 October 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 2999–3006.
- Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI), Montreal, QC, Canada, 18–21 June 2009; AUAI Press: Arlington, VA, USA, 2009; pp. 452–461.
- Hu, Y.; Koren, Y.; Volinsky, C. Collaborative Filtering for Implicit Feedback Datasets. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining (ICDM), Pisa, Italy, 15–19 December 2008; pp. 263–272.
- Klementiev, A.; Roth, D.; Small, K. Unsupervised Rank Aggregation with Distance-Based Models. In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland, 5–9 July 2008; Association for Computing Machinery: New York, NY, USA, 2008; pp. 472–479.
- Liu, Y.T.; Liu, T.Y.; Qin, T.; Ma, Z.M.; Li, H. Supervised Rank Aggregation. In Proceedings of the 16th International Conference on World Wide Web (WWW), Banff, AB, Canada, 8–12 May 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 481–490.
- Liu, T.Y. Learning to Rank for Information Retrieval. Found. Trends Inf. Retr. 2009, 3, 225–331.
- Li, X.; Wang, X.; Xiao, G. A comparative study of rank aggregation methods for partial and top ranked lists in genomic applications. Briefings Bioinform. 2017, 20, 178–189.
- Fox, E.A.; Shaw, J.A. Combination of Multiple Searches. In Proceedings of TREC, 1993. Available online: https://trec.nist.gov/pubs/trec2/papers/txt/23.txt (accessed on 12 December 2022).
- Mourão, A.; Martins, F.; Magalhães, J. Multimodal medical information retrieval with unsupervised rank fusion. Comput. Med. Imaging Graph. 2015, 39, 35–45.
- Aslam, J.A.; Montague, M.H. Models for Metasearch. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, New Orleans, LA, USA, 9–13 September 2001; Croft, W.B., Harper, D.J., Kraft, D.H., Zobel, J., Eds.; ACM: Cambridge, MA, USA, 2001; pp. 275–284.
- Montague, M.H.; Aslam, J.A. Condorcet fusion for improved retrieval. In Proceedings of the 2002 ACM CIKM International Conference on Information and Knowledge Management, McLean, VA, USA, 4–9 November 2002; ACM: Cambridge, MA, USA, 2002; pp. 538–548.
- Lee, J.H. Analyses of Multiple Evidence Combination. In Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Philadelphia, PA, USA, 27–31 July 1997; ACM: Cambridge, MA, USA, 1997; pp. 267–276.
- Cormack, G.V.; Clarke, C.L.A.; Buettcher, S. Reciprocal Rank Fusion Outperforms Condorcet and Individual Rank Learning Methods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Boston, MA, USA, 19–23 July 2009; Association for Computing Machinery: New York, NY, USA, 2009; pp. 758–759.
- Lillis, D.; Toolan, F.; Collier, R.W.; Dunnion, J. Extending Probabilistic Data Fusion Using Sliding Windows. In Proceedings of the Advances in Information Retrieval, 30th European Conference on IR Research, Glasgow, UK, 30 March–3 April 2008; Macdonald, C., Ounis, I., Plachouras, V., Ruthven, I., White, R.W., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 4956, pp. 358–369.
- Wu, S.; Crestani, F. Data fusion with estimated weights. In Proceedings of the 2002 ACM CIKM International Conference on Information and Knowledge Management, McLean, VA, USA, 4–9 November 2002; ACM: Cambridge, MA, USA, 2002; pp. 648–651.
- Bailey, P.; Moffat, A.; Scholer, F.; Thomas, P. Retrieval Consistency in the Presence of Query Variations. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, 7–11 August 2017; Kando, N., Sakai, T., Joho, H., Li, H., de Vries, A.P., White, R.W., Eds.; ACM: Cambridge, MA, USA, 2017; pp. 395–404.
- Lillis, D.; Zhang, L.; Toolan, F.; Collier, R.W.; Leonard, D.; Dunnion, J. Estimating probabilities for effective data fusion. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Geneva, Switzerland, 19–23 July 2010; Crestani, F., Marchand-Maillet, S., Chen, H., Efthimiadis, E.N., Savoy, J., Eds.; ACM: Cambridge, MA, USA, 2010; pp. 347–354.
- Bassani, E. ranx: A Blazing-Fast Python Library for Ranking Evaluation and Comparison. In Proceedings of the European Conference on Information Retrieval (ECIR), Stavanger, Norway, 10–14 April 2022; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2022; Volume 13186, pp. 259–264.
- Bassani, E.; Romelli, L. ranx.fuse: A Python Library for Metasearch. In Proceedings of the 31st ACM International Conference on Information and Knowledge Management (CIKM), Atlanta, GA, USA, 17–21 October 2022; ACM: Cambridge, MA, USA, 2022; pp. 4808–4812.
- Harper, F.M.; Konstan, J.A. The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. 2015, 5, 1–19.
- Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A Next-generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Anchorage, AK, USA, 4–8 August 2019.
- Bergstra, J.; Bardenet, R.; Bengio, Y.; Kégl, B. Algorithms for Hyper-Parameter Optimization. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Granada, Spain, 12–15 December 2011; Shawe-Taylor, J., Zemel, R., Bartlett, P., Pereira, F., Weinberger, K., Eds.; Curran Associates, Inc.: New York, NY, USA, 2011; Volume 24.
- Smucker, M.D.; Allan, J.; Carterette, B. A Comparison of Statistical Significance Tests for Information Retrieval Evaluation. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management (CIKM), Lisbon, Portugal, 6–10 November 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 623–632.
- Lin, Z.; Li, Y.; Guo, X. Consensus measure of rankings. arXiv 2017, arXiv:1704.08464.
- Asudeh, A.; Jagadish, H.V.; Stoyanovich, J.; Das, G. Designing Fair Ranking Schemes. In Proceedings of the 2019 International Conference on Management of Data, Amsterdam, The Netherlands, 30 June–5 July 2019; pp. 1259–1276.
- Kuhlman, C.; Rundensteiner, E. Rank Aggregation Algorithms for Fair Consensus. Proc. VLDB Endow. 2020, 13, 2706–2719.
Algorithm | Type | Description | Reference |
---|---|---|---|
UserKNN | Neighborhood based | User-user nearest-neighbor collaborative filtering | [43] |
ItemKNN | Neighborhood based | Item-item nearest-neighbor collaborative filtering | [6] |
BPR | Matrix factorization | Bayesian Personalized Ranking with matrix factorization, optimized with TensorFlow | [44] |
ImplicitMF | Matrix factorization | Implicit matrix factorization trained with alternating least squares (ALS) | [45] |
MostPopular | Non-personalized | Recommend the most popular items | [43] |
Aggregation Method | Method Type | Reference
---|---|---
CombMIN | Unsupervised | [50]
CombMED | Unsupervised | [50]
CombANZ | Unsupervised | [50]
LogISR | Unsupervised | [51]
Bordafuse | Unsupervised | [52]
Condorcet | Unsupervised | [53]
CombMAX | Unsupervised | [50]
CombSUM | Unsupervised | [50]
CombMNZ | Unsupervised | [50]
ISR | Unsupervised | [51]
CombGMNZ | Supervised | [54]
RRF | Supervised | [55]
Slidefuse | Supervised | [56]
Bayesfuse | Supervised | [52]
WMNZ | Supervised | [57]
RBC | Supervised | [58]
LognISR | Supervised | [51]
Posfuse | Supervised | [59]
Weighted Sum | Supervised | [50]
Weighted Borda | Supervised | [52]
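Reciprocal Rank Fusion (RRF), one of the methods listed above, can be sketched in a few lines: each input list contributes 1 / (k + rank) for every item it contains, with k = 60 in the original formulation by Cormack et al. The sketch below shows the standard unweighted form with hypothetical item identifiers; the paper tunes its variant on training data, so this is an illustration rather than the paper's implementation.

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: each ranking contributes 1 / (k + rank)
    for every item it contains (ranks start at 1); the constant k damps
    the influence of top positions in any single list."""
    scores = {}
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    # Break score ties alphabetically so the output is deterministic
    return sorted(scores, key=lambda item: (-scores[item], item))

# Hypothetical lists from two algorithms; B ranks first because it
# sits near the top of both lists, while A is last in the second list
rankings = [["A", "B", "C"], ["B", "C", "A"]]
print(rrf(rankings))  # ['B', 'A', 'C']
```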
Algorithm | Parameter | Range | Data Type | Best Value
---|---|---|---|---
UserKNN | nnbrs | [2–50] | int | 23
 | min_nbrs | [0–10] | int | 4
ItemKNN | nnbrs | [2–50] | int | 44
 | min_nbrs | [0–10] | int | 7
BPR | num factors | [2–50] | int | 50
 | reg | [0–1] | double | 0
 | neg count | [0–20] | int | 12
ImplicitMF | num factors | [2–50] | int | 21
 | method | cg, lu | categorical | lu
 | reg | [0–1] | double | 0.78
 | weight | [0–10] | double | 1
MostPopular | topN | | int | 10
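The paper tunes these parameters with Optuna's TPE sampler (see the Akiba et al. and Bergstra et al. references). As a library-free illustration of searching ranges like those in the table, the sketch below runs a simple random search; the search space mirrors the UserKNN row, while the objective function is a hypothetical stand-in for validation NDCG@10, not the paper's actual tuning code.

```python
import random

# Search space mirroring the UserKNN ranges in the table above
SPACE = {"nnbrs": ("int", 2, 50), "min_nbrs": ("int", 0, 10)}

def sample(space, rng):
    """Draw one configuration from a search space description."""
    config = {}
    for name, (kind, low, high) in space.items():
        config[name] = rng.randint(low, high) if kind == "int" else rng.uniform(low, high)
    return config

def random_search(space, objective, n_trials=50, seed=0):
    """Keep the configuration with the highest objective value."""
    rng = random.Random(seed)
    best_config, best_value = None, float("-inf")
    for _ in range(n_trials):
        config = sample(space, rng)
        value = objective(config)
        if value > best_value:
            best_config, best_value = config, value
    return best_config, best_value

# Hypothetical objective: peaks at the table's reported best values
def fake_objective(config):
    return -abs(config["nnbrs"] - 23) - abs(config["min_nbrs"] - 4)

best, value = random_search(SPACE, fake_objective, n_trials=200)
```

In the actual experiments a TPE sampler would replace the uniform draws, concentrating trials near configurations that scored well so far.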
Type | Algorithm Name | NDCG@10 | MAP@10 | P@1 | P@10 | Recall@10
---|---|---|---|---|---|---
Recommendation algorithms | BPR | 0.174 | 0.053 | 0.222 | 0.163 | 0.116
 | ImplicitMF | 0.181 | 0.059 | 0.230 | 0.166 | 0.119
 | Item kNN | 0.181 | 0.058 | 0.230 | 0.165 | 0.122
 | Most Popular | 0.107 | 0.028 | 0.150 | 0.106 | 0.065
 | User kNN | 0.186 | 0.060 | 0.233 | 0.172 | 0.123
Unsupervised aggregation methods | CombMIN | 0.120 | 0.031 | 0.129 | 0.125 | 0.083
 | CombMED | 0.149 | 0.042 | 0.192 | 0.144 | 0.098
 | CombANZ | 0.146 | 0.041 | 0.148 | 0.147 | 0.100
 | LogISR | 0.193 | 0.061 | 0.251 | 0.176 | 0.124
 | Bordafuse | 0.193 | 0.062 | 0.240 | 0.177 | 0.125
 | Condorcet | 0.178 | 0.057 | 0.246 | 0.161 | 0.114
 | CombMAX | 0.172 | 0.052 | 0.217 | 0.159 | 0.110
 | CombSUM | 0.184 | 0.058 | 0.241 | 0.166 | 0.116
 | CombMNZ | 0.192 | 0.061 | 0.244 | 0.174 | 0.122
 | ISR | 0.187 | 0.059 | 0.247 | 0.169 | 0.119
Supervised aggregation methods | CombGMNZ | 0.191 | 0.061 | 0.242 | 0.174 | 0.122
 | RRF | 0.193 | 0.062 | 0.241 | 0.176 | 0.125
 | Slidefuse | 0.196 | 0.064 | 0.244 | 0.178 | 0.127
 | Bayesfuse | 0.195 | 0.064 | 0.244 | 0.178 | 0.126
 | WMNZ | 0.184 | 0.060 | 0.238 | 0.164 | 0.121
 | RBC | 0.193 | 0.062 | 0.243 | 0.176 | 0.125
 | LognISR | 0.193 | 0.061 | 0.252 | 0.175 | 0.124
 | Posfuse | 0.196 | 0.064 | 0.247 | 0.179 | 0.128
 | Weighted Sum | 0.183 | 0.058 | 0.227 | 0.169 | 0.121
 | Weighted Borda | 0.183 | 0.059 | 0.233 | 0.165 | 0.122
Type | Algorithm Name | NDCG@10 | MAP@10 | P@1 | P@10 | Recall@10
---|---|---|---|---|---|---
Recommendation algorithms | BPR | 0.113 | 0.027 | 0.146 | 0.116 | 0.065
 | ImplicitMF | 0.115 | 0.027 | 0.143 | 0.117 | 0.062
 | Item kNN | 0.125 | 0.030 | 0.168 | 0.127 | 0.063
 | Most Popular | 0.098 | 0.018 | 0.131 | 0.103 | 0.038
 | User kNN | 0.123 | 0.030 | 0.150 | 0.126 | 0.068
Unsupervised aggregation methods | CombMIN | 0.105 | 0.022 | 0.117 | 0.114 | 0.058
 | CombMED | 0.116 | 0.025 | 0.134 | 0.122 | 0.063
 | CombANZ | 0.116 | 0.025 | 0.126 | 0.123 | 0.064
 | LogISR | 0.130 | 0.030 | 0.165 | 0.131 | 0.067
 | Bordafuse | 0.130 | 0.030 | 0.171 | 0.130 | 0.066
 | Condorcet | 0.116 | 0.026 | 0.159 | 0.117 | 0.058
 | CombMAX | 0.123 | 0.028 | 0.146 | 0.127 | 0.066
 | CombSUM | 0.129 | 0.030 | 0.166 | 0.129 | 0.067
 | CombMNZ | 0.131 | 0.031 | 0.171 | 0.131 | 0.067
 | ISR | 0.129 | 0.029 | 0.165 | 0.130 | 0.067
Supervised aggregation methods | CombGMNZ | 0.131 | 0.031 | 0.171 | 0.131 | 0.067
 | RRF | 0.130 | 0.030 | 0.173 | 0.131 | 0.066
 | Slidefuse | 0.129 | 0.030 | 0.170 | 0.131 | 0.067
 | Bayesfuse | 0.129 | 0.030 | 0.172 | 0.131 | 0.067
 | WMNZ | 0.129 | 0.030 | 0.167 | 0.131 | 0.067
 | RBC | 0.130 | 0.030 | 0.173 | 0.131 | 0.066
 | LognISR | 0.130 | 0.030 | 0.165 | 0.131 | 0.067
 | Posfuse | 0.130 | 0.030 | 0.172 | 0.131 | 0.067
 | Weighted Sum | 0.129 | 0.030 | 0.168 | 0.131 | 0.067
 | Weighted Borda | 0.125 | 0.030 | 0.167 | 0.127 | 0.063
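The NDCG@10 and P@10 columns reported in the tables above can be illustrated with a short sketch. It uses binary relevance (an item is relevant or not), which is a common convention for top-N evaluation, though the paper's exact metric implementation may differ; the item identifiers are hypothetical.

```python
import math

def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommendations that are relevant."""
    return sum(item in relevant for item in recommended[:k]) / k

def ndcg_at_k(recommended, relevant, k=10):
    """Binary-relevance NDCG@k: DCG of the recommended list divided by
    the DCG of an ideal ordering of the relevant items."""
    dcg = sum(1.0 / math.log2(pos + 1)
              for pos, item in enumerate(recommended[:k], start=1)
              if item in relevant)
    ideal = sum(1.0 / math.log2(pos + 1)
                for pos in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0

# Hypothetical recommendation list with relevant items at ranks 1 and 3
recommended = ["A", "B", "C", "D"]
relevant = {"A", "C"}
print(precision_at_k(recommended, relevant, k=4))  # 0.5
```

Because NDCG discounts hits logarithmically by position, a list that places its relevant items earlier scores higher even when P@k is identical.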
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bałchanowski, M.; Boryczka, U. A Comparative Study of Rank Aggregation Methods in Recommendation Systems. Entropy 2023, 25, 132. https://doi.org/10.3390/e25010132