TD-DNN: A Time Decay-Based Deep Neural Network for Recommendation System
Abstract
1. Introduction
- In this paper, a Time Decay-based Deep Neural Network (TD-DNN) is proposed. First, noisy ratings are identified in the dataset and corrected using a Matrix Factorization approach. Then, a power decay function is applied to give greater weight to the most recent ratings. The resulting rating matrix is fed into the DNN model for training.
- The model combines an embedding layer with a Multi-Layer Perceptron to learn the N-dimensional, non-linear interactions between users and movies. The Huber loss function, which performed best among all the loss functions evaluated, is used for training (a minimal sketch is given after this list).
- Experiments were conducted on benchmark datasets to examine the new model's efficiency and compare it with existing approaches. The results on the MovieLens-100k, ML-1M, and Epinions datasets show that the proposed TD-DNN architecture improves recommendation accuracy and alleviates the data sparsity problem.
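As a point of reference, the Huber loss behaves quadratically for small errors and linearly for large ones, which makes it less sensitive to outlier ratings than plain MSE. The snippet below is a minimal, framework-free sketch of this loss; the parameter name `delta` and its default value of 1.0 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Element-wise Huber loss: quadratic for |error| <= delta, linear beyond it."""
    error = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    small = np.abs(error) <= delta
    squared = 0.5 * error ** 2                      # quadratic region
    linear = delta * (np.abs(error) - 0.5 * delta)  # linear region
    return np.where(small, squared, linear).mean()

# Example: a prediction that is 3 stars off contributes linearly, not quadratically.
print(huber_loss([5.0, 4.0], [2.0, 3.9]))
```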
2. Literature Review
3. Motivation
4. Proposed Methodology
- Data Preprocessing Phase
- Constructing Deep Neural Network
4.1. Data Preprocessing Phase
4.1.1. Detecting and Correcting the Noisy Ratings
Algorithm 1: Steps of Detecting and Correcting Noisy Ratings
Inputs: set of available ratings {r(user, item)}; classification thresholds T1_u, T2_u, T1_i, T2_i, T1, T2; noise-correction threshold δ = 1
1. Weak_user = {}, Average_user = {}, Strong_user = {}
2. Weak_item = {}, Average_item = {}, Strong_item = {}
3. Possible_noise = {}
4. for each rating r(user, item) in the set of available ratings
5.   if r(user, item) < T1_u
6.     add r(user, item) to the Weak_user set
7.   else if r(user, item) ≥ T1_u and r(user, item) < T2_u
8.     add r(user, item) to the Average_user set
9.   else
10.     add r(user, item) to the Strong_user set
11.   if r(user, item) < T1_i
12.     add r(user, item) to the Weak_item set
13.   else if r(user, item) ≥ T1_i and r(user, item) < T2_i
14.     add r(user, item) to the Average_item set
15.   else
16.     add r(user, item) to the Strong_item set
17. end for
18. for each user in r(user, item)
19.   classify each user as Critical, Average, or Benevolent as follows:
20.   Critical user:
21.     if |Weak_user| ≥ |Strong_user| + |Average_user|
22.   Average user:
23.     if |Average_user| ≥ |Strong_user| + |Weak_user|
24.   Benevolent user:
25.     if |Strong_user| ≥ |Average_user| + |Weak_user|
26. end for
27. for each item in r(user, item)
28.   classify each item as Weakly, Averagely, or Strongly recommended using the item sets defined above:
29.   Weakly recommended item:
30.     if |Weak_item| ≥ |Strong_item| + |Average_item|
31.   Averagely recommended item:
32.     if |Average_item| ≥ |Strong_item| + |Weak_item|
33.   Strongly recommended item:
34.     if |Strong_item| ≥ |Average_item| + |Weak_item|
35. end for
36. for each rating r(user, item)
37.   if user is Critical, item is Weakly_recommended, and r(user, item) ≥ T1
38.     add r(user, item) to Possible_noise
39.   if user is Average, item is Averagely_recommended, and (r(user, item) < T1 or r(user, item) ≥ T2)
40.     add r(user, item) to Possible_noise
41.   if user is Benevolent, item is Strongly_recommended, and r(user, item) < T2
42.     add r(user, item) to Possible_noise
43. end for
# Correct the noisy ratings
44. for each rating r(user, item) in Possible_noise
45.   predict a new rating R(user, item) using the Matrix Factorization approach with k = 3 latent features
46.   if |R(user, item) − r(user, item)| > δ
47.     replace r(user, item) with R(user, item) in the original rating set r
48.   end if
49. end for
Output: r = {r(user, item)}, the set of corrected ratings
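The following is a minimal Python sketch of the detection-and-correction flow of Algorithm 1, assuming a 1–5 rating scale. The `predict(user, item)` callback stands in for the Matrix Factorization predictor with k = 3 latent features; the class labels, the fallback branch, and all names are illustrative assumptions rather than the authors' code.

```python
from collections import defaultdict

def thresholds(r_min=1, r_max=5):
    """T1 and T2 as defined in the paper: split the rating scale into thirds."""
    step = round((r_max - r_min) / 3)
    return r_min + step, r_max - step

def classify(counts):
    """Map (weak, average, strong) counts to a class label per Algorithm 1."""
    weak, avg, strong = counts["weak"], counts["avg"], counts["strong"]
    if weak >= strong + avg:
        return "critical"        # items: weakly recommended
    if avg >= strong + weak:
        return "average"         # items: averagely recommended
    if strong >= avg + weak:
        return "benevolent"      # items: strongly recommended
    return "average"             # fallback when no rule dominates (assumption)

def detect_and_correct(ratings, predict, delta=1.0):
    """ratings: dict {(user, item): r}; predict(user, item) -> corrected estimate."""
    t1, t2 = thresholds()
    user_counts = defaultdict(lambda: {"weak": 0, "avg": 0, "strong": 0})
    item_counts = defaultdict(lambda: {"weak": 0, "avg": 0, "strong": 0})
    for (u, i), r in ratings.items():
        bucket = "weak" if r < t1 else ("avg" if r < t2 else "strong")
        user_counts[u][bucket] += 1
        item_counts[i][bucket] += 1

    user_class = {u: classify(c) for u, c in user_counts.items()}
    item_class = {i: classify(c) for i, c in item_counts.items()}

    corrected = dict(ratings)
    for (u, i), r in ratings.items():
        uc, ic = user_class[u], item_class[i]
        noisy = (
            (uc == "critical" and ic == "critical" and r >= t1) or
            (uc == "average" and ic == "average" and (r < t1 or r >= t2)) or
            (uc == "benevolent" and ic == "benevolent" and r < t2)
        )
        if noisy:
            estimate = predict(u, i)
            if abs(estimate - r) > delta:   # only replace if the disagreement is large
                corrected[(u, i)] = estimate
    return corrected
```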
4.1.2. Applying Time Decay Functions
4.2. Constructing Deep Neural Network
4.2.1. Input Layer
4.2.2. Multi-Layer Perceptron
4.2.3. Output Layer
4.2.4. Training Process
Loss Functions
Optimizer
5. Experimental Evaluation
5.1. Data Description
5.2. Evaluation Measure
5.3. Evaluation of TD-DNN Model
- Item-based CF (IBCF) [77]: In this CF technique, the Adjusted Cosine (ACOS) measure is used to compute the similarity between items. It modifies vector-based similarity to account for the fact that users have different rating behaviors: some tend to rate items highly in general, while others tend to rate them lower (a minimal sketch of this measure is given after this list).
- User-based CF (UBCF) [20]: The performance of the proposed model is also compared with a user-based CF approach. To compute the similarity between users, the recently developed MPIP measure is used, which combines modified proximity, modified impact, and modified popularity factors.
- SVD [28]: The SVD method is used to reduce the dimensionality for prediction tasks. In this algorithm, the rating matrix is decomposed and is utilized to make predictions. The SVD technique has the advantage that it can handle matrices with different numbers of columns and rows.
- DLCRS [44]: In the DLCRS approach, the embedding vectors of users and movies are fed into layers that take their element-wise product, producing a linear interaction between user and movie. In contrast, the basic MF model relies on the dot product of the user and item embedding vectors.
- Deep Edu [11]: This algorithm maps features into N-dimensional dense vectors before inserting them into a Multi-Layer Perceptron. It is applied to the movie datasets to compare its performance with the proposed model.
- Neural Network using Batch Normalization (NNBN) [78]: This approach applies batch normalization at each layer to prevent the model from overfitting.
- NCF [39]: NCF is one of the most widely used deep learning techniques for movie recommendation. Neural Collaborative Filtering replaces the user–item inner product with a neural architecture, combining the predicted ratings of the Generalized Matrix Factorization (GMF) and Multi-Layer Perceptron (MLP) branches.
- Neural Matrix Factorization (NMF) [45]: In this approach, user–movie embedding pairs are combined using Generalized Matrix Factorization (GMF), and an MLP is then applied to generate the prediction.
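For reference, the adjusted cosine similarity used by the IBCF baseline subtracts each user's mean rating before computing the cosine between two item columns. The snippet below is a small illustrative sketch over a dense user–item matrix with missing entries stored as NaN; it is not the authors' implementation, and the example matrix is the one used in the preprocessing illustration.

```python
import numpy as np

def adjusted_cosine(ratings, i, j):
    """Adjusted cosine similarity between items i and j.

    ratings: 2-D array (users x items) with np.nan marking missing entries.
    """
    user_means = np.nanmean(ratings, axis=1, keepdims=True)
    centered = ratings - user_means                  # remove each user's rating bias
    mask = ~np.isnan(centered[:, i]) & ~np.isnan(centered[:, j])
    if not mask.any():
        return 0.0                                   # no co-rating users
    a, b = centered[mask, i], centered[mask, j]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Example 4x4 matrix from the preprocessing example (missing entry as NaN).
R = np.array([[3, 5, 4, 4],
              [2, 5, 4, 4],
              [3, 3, 2, 5],
              [4, 4, np.nan, 1]], dtype=float)
print(adjusted_cosine(R, 0, 1))   # similarity between items I1 and I2
```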
5.4. Selection of Hyper Parameters
5.5. Experimental Results with Discussion
6. Conclusions and Future Scope
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Mertens, P. Recommender Systems. Wirtschaftsinformatik 1997, 39, 401–404. [Google Scholar]
- Jain, G.; Mishra, N.; Sharma, S. CRLRM: Category Based Recommendation Using Linear Regression Model. In Proceedings of the 2013 3rd International Conference on Advances in Computing and Communications, ICACC, Cochin, India, 29–31 August 2013; pp. 17–20. [Google Scholar] [CrossRef]
- Chen, R.; Hua, Q.; Chang, Y.S.; Wang, B.; Zhang, L.; Kong, X. A Survey of Collaborative Filtering-Based Recommender Systems: From Traditional Methods to Hybrid Methods Based on Social Networks. IEEE Access 2018, 6, 64301–64320. [Google Scholar] [CrossRef]
- Jain, G.; Mahara, T.; Tripathi, K.N. A Survey of Similarity Measures for Collaborative Filtering-Based Recommender System. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2020; Volume 1053, pp. 343–352. [Google Scholar]
- Wang, D.; Liang, Y.; Xu, D.; Feng, X.; Guan, R. A Content-Based Recommender System for Computer Science Publications. Knowl.-Based Syst. 2018, 157, 1–9. [Google Scholar] [CrossRef]
- Jain, G.; Mahara, T. An Efficient Similarity Measure to Alleviate the Cold-Start Problem. In Proceedings of the 2019 15th International Conference on Information Processing: Internet of Things ICINPRO, Bengaluru, India, 20–22 December 2019. [Google Scholar]
- Sulikowski, P.; Zdziebko, T.; Turzyński, D. Modeling Online User Product Interest for Recommender Systems and Ergonomics Studies. Concurr. Comput. Pract. Exp. 2019, 31, e4301. [Google Scholar] [CrossRef]
- Agarwal, A.; Mishra, D.S.; Kolekar, S.V. Knowledge-Based Recommendation System Using Semantic Web Rules Based on Learning Styles for MOOCs. Cogent Eng. 2022, 9, 2022568. [Google Scholar] [CrossRef]
- Chavare, S.R.; Awati, C.J.; Shirgave, S.K. Smart Recommender System Using Deep Learning. In Proceedings of the 6th International Conference on Inventive Computation Technologies, ICICT, Coimbatore, India, 20–22 January 2021; pp. 590–594. [Google Scholar] [CrossRef]
- Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
- Ullah, F.; Zhang, B.; Khan, R.U.; Chung, T.S.; Attique, M.; Khan, K.; El Khediri, S.; Jan, S. Deep Edu: A Deep Neural Collaborative Filtering for Educational Services Recommendation. IEEE Access 2020, 8, 110915–110928. [Google Scholar] [CrossRef]
- Zhang, L.; Luo, T.; Zhang, F.; Wu, Y. A Recommendation Model Based on Deep Neural Network. IEEE Access 2018, 6, 9454–9463. [Google Scholar] [CrossRef]
- Batmaz, Z.; Yurekli, A.; Bilge, A.; Kaleli, C. A Review on Deep Learning for Recommender Systems: Challenges and Remedies. Artif. Intell. Rev. 2019, 52, 1–37. [Google Scholar] [CrossRef]
- Qi, H.; Jin, H. Unsteady Helical Flows of a Generalized Oldroyd-B Fluid with Fractional Derivative. Nonlinear Anal. Real World Appl. 2009, 10, 2700–2708. [Google Scholar] [CrossRef]
- Cheng, H.T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. Wide & Deep Learning for Recommender Systems. In Proceedings of the DLRS 2016: Workshop on Deep Learning for Recommender Systems, Boston, MA, USA, 15 September 2016; pp. 7–10. [Google Scholar] [CrossRef] [Green Version]
- Covington, P.; Adams, J.; Sargin, E. Deep Neural Networks for Youtube Recommendations. In Proceedings of the RecSys 2016, 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 191–198. [Google Scholar] [CrossRef] [Green Version]
- Okura, S. Embedding-Based News Recommendation for Millions of Users. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 1933–1942. [Google Scholar] [CrossRef]
- Tan, Z.; He, L. An Efficient Similarity Measure for User-Based Collaborative Filtering Recommender Systems Inspired by the Physical Resonance Principle. IEEE Access 2017, 5, 27211–27228. [Google Scholar] [CrossRef]
- Singh, P.K.; Sinha, S.; Choudhury, P. An Improved Item-Based Collaborative Filtering Using a Modified Bhattacharyya Coefficient and User—User Similarity as Weight; Springer: London, UK, 2022; Volume 64, ISBN 1011502101651. [Google Scholar]
- Manochandar, S.; Punniyamoorthy, M. A New User Similarity Measure in a New Prediction Model for Collaborative Filtering. Appl. Intell. 2020, 51, 19–21. [Google Scholar] [CrossRef]
- Bag, S.; Kumar, S.; Tiwari, M. An Efficient Recommendation Generation Using Relevant Jaccard Similarity. Inf. Sci. 2019, 483, 53–64. [Google Scholar] [CrossRef]
- Sun, S.B.; Zhang, Z.H.; Dong, X.L.; Zhang, H.R.; Li, T.J.; Zhang, L.; Min, F. Integrating Triangle and Jaccard Similarities for Recommendation. PLoS ONE 2017, 12, e0183570. [Google Scholar] [CrossRef] [Green Version]
- Miyahara, K.; Pazzani, M.J. Collaborative Filtering with the Simple Bayesian Classifier. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2000; Volume 1886 LNAI, pp. 679–689. [Google Scholar] [CrossRef] [Green Version]
- Hofmann, T.; Puzicha, J. Latent Class Models for Collaborative Filtering. IJCAI 1999, 99, 688–693. [Google Scholar]
- Vucetic, S.; Obradovic, Z. Collaborative Filtering Using a Regression-Based Approach. Knowl. Inf. Syst. 2005, 7, 1–22. [Google Scholar] [CrossRef] [Green Version]
- Koren, Y.; Bell, R.; Volinsky, C. Matrix Factorizations Techniques for Recommender System. Computer 2009, 42, 30–37. [Google Scholar] [CrossRef]
- Yu, K.; Zhu, S.; Lafferty, J.; Gong, Y. Fast Nonparametric Matrix Factorization for Large-Scale Collaborative Filtering. In Proceedings of the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2009, Boston, MA, USA, 19–23 July 2009; pp. 211–218. [Google Scholar] [CrossRef]
- Vozalis, M.G.; Margaritis, K.G. Using SVD and Demographic Data for the Enhancement of Generalized Collaborative Filtering. Inf. Sci. 2007, 177, 3017–3037. [Google Scholar] [CrossRef]
- Salakhutdinov, R.; Mnih, A. Probabilistic Matrix Factorization. In Proceedings of the Advances in Neural Information Processing Systems 20 (NIPS 2007), Vancouver, BC, Canada, 3–6 December 2007; pp. 1–8. [Google Scholar]
- Chen, K.; Mao, H.; Shi, X.; Xu, Y.; Liu, A. Trust-Aware and Location-Based Collaborative Filtering for Web Service QoS Prediction. In Proceedings of the 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), Turin, Italy, 4–8 July 2017; Volume 2, pp. 143–148. [Google Scholar] [CrossRef]
- Xu, G.; Tang, Z.; Ma, C.; Liu, Y.; Daneshmand, M. A Collaborative Filtering Recommendation Algorithm Based on User Confidence and Time Context. J. Electr. Comput. Eng. 2019, 2019, 7070487. [Google Scholar] [CrossRef]
- Ma, T.; Guo, L.; Tang, M.; Tian, Y.; Al-Rodhaan, M.; Al-Dhelaan, A. A Collaborative Filtering Recommendation Algorithm Based on Hierarchical Structure and Time Awareness. IEICE Trans. Inf. Syst. 2016, E99D, 1512–1520. [Google Scholar] [CrossRef] [Green Version]
- Zhang, L.; Zhang, Z.; He, J.; Zhang, Z. Ur: A User-Based Collaborative Filtering Recommendation System Based on Trust Mechanism and Time Weighting. In Proceedings of the International Conference on Parallel and Distributed Systems—ICPADS, Tianjin, China, 4–6 December 2019; Volume 2019, pp. 69–76. [Google Scholar]
- Mu, R. A Survey of Recommender Systems Based on Deep Learning. IEEE Access 2018, 6, 69009–69022. [Google Scholar] [CrossRef]
- Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
- Wang, X.; Wang, Y. Improving Content-Based and Hybrid Music Recommendation Using Deep Learning. In Proceedings of the MM 2014, 2014 ACM Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 627–636. [Google Scholar] [CrossRef]
- Habib, M.; Faris, M.; Qaddoura, R.; Alomari, A.; Faris, H. A Predictive Text System for Medical Recommendations in Telemedicine: A Deep Learning Approach in the Arabic Context. IEEE Access 2021, 9, 85690–85708. [Google Scholar] [CrossRef]
- Sulikowski, P. Deep Learning-Enhanced Framework for Performance Evaluation of a Recommending Interface with Varied Recommendation Position and Intensity Based on Eye-Tracking Equipment Data Processing. Electronics 2020, 9, 266. [Google Scholar] [CrossRef] [Green Version]
- He, X.; Liao, L.; Chua, T.; Zhang, H.; Nie, L.; Hu, X. Neural Collaborative Filtering. In Proceedings of the 26th International World Wide Web Conference, WWW 2017, Perth, Australia, 3–7 April 2017. [Google Scholar] [CrossRef] [Green Version]
- Xue, H.J.; Dai, X.Y.; Zhang, J.; Huang, S.; Chen, J. Deep Matrix Factorization Models for Recommender Systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 3203–3209. [Google Scholar] [CrossRef] [Green Version]
- Xiong, R.; Wang, J.; Li, Z.; Li, B.; Hung, P.C.K. Personalized LSTM Based Matrix Factorization for Online QoS Prediction. In Proceedings of the 2018 IEEE International Conference on Web Services (ICWS)—Part of the 2018 IEEE World Congress on Services, San Francisco, CA, USA, 2–7 July 2018; Volume 63, pp. 34–41. [Google Scholar] [CrossRef]
- Zhang, R.; Liu, Q.D.; Wei, J.X. Collaborative filtering for recommender systems. In Proceedings of the 2014 Second International Conference on Advanced Cloud and Big Data, Huangshan, China, 20–22 November 2014; pp. 301–308. [Google Scholar]
- Bhalse, N.; Thakur, R. Algorithm for Movie Recommendation System Using Collaborative Filtering. Mater. Today Proc. 2021, in press. [Google Scholar] [CrossRef]
- Aljunid, M.F.; Dh, M. An Efficient Deep Learning Approach for Collaborative Filtering Recommender System. Procedia Comput. Sci. 2020, 171, 829–836. [Google Scholar]
- Kapetanakis, S.; Polatidis, N.; Alshammari, G.; Petridis, M. A Novel Recommendation Method Based on General Matrix Factorization and Artificial Neural Networks. Neural Comput. Appl. 2020, 32, 12327–12334. [Google Scholar] [CrossRef]
- Wang, H.; Wang, N.; Yeung, D.Y. Collaborative Deep Learning for Recommender Systems. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, 10–13 August 2015; pp. 1235–1244. [Google Scholar] [CrossRef] [Green Version]
- Kim, D.; Park, C.; Oh, J.; Lee, S.; Yu, H. Convolutional Matrix Factorization for Document Context-Aware Recommendation. In Proceedings of the RecSys 2016, 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 233–240. [Google Scholar] [CrossRef]
- Senior, A. Calculating Pearson’s and Cosine Correlation. In Proceedings of the International Conference on Trends in Electronics and Informatics (ICEI 2017), Tirunelveli, India, 11–12 May 2017; pp. 1000–1004. [Google Scholar]
- Al-bashiri, H.; Abdulgabber, M.A.; Romli, A.; Hujainah, F. Collaborative Filtering Similarity Measures: Revisiting. In Proceedings of the ICAIP 2017: International Conference on Advances in Image Processing, Bangkok Thailand, 25–27 August 2017; pp. 195–200. [Google Scholar] [CrossRef]
- Toledo, R.Y.; Mota, Y.C.; Martínez, L. Correcting Noisy Ratings in Collaborative Recommender Systems. Knowl.-Based Syst. 2015, 76, 96–108. [Google Scholar] [CrossRef]
- Pinnapareddy, N.R. Deep Learning Based Recommendation Systems; San Jose State University: San Jose, CA, USA, 2018. [Google Scholar]
- Choudhary, P.; Kant, V.; Dwivedi, P. Handling Natural Noise in Multi Criteria Recommender System Utilizing Effective Similarity Measure and Particle Swarm Optimization. Procedia Comput. Sci. 2017, 115, 853–862. [Google Scholar] [CrossRef]
- O’Mahony, M.P.; Hurley, N.J.; Silvestre, G.C.M. Detecting Noise in Recommender System Databases. In Proceedings of the IUI, International Conference on Intelligent User Interfaces, Sydney, Australia, 29 January–1 February 2006; Volume 2006, pp. 109–115. [Google Scholar] [CrossRef]
- Li, D.; Chen, C.; Gong, Z.; Lu, T.; Chu, S.M.; Gu, N. Collaborative Filtering with Noisy Ratings. In Proceedings of the SIAM International Conference on Data Mining, SDM 2019, Calgary, AB, Canada, 2–4 May 2019; pp. 747–755. [Google Scholar] [CrossRef] [Green Version]
- Phan, H.X.; Jung, J.J. Preference Based User Rating Correction Process for Interactive Recommendation Systems. Multimed. Tools Appl. 2013, 65, 119–132. [Google Scholar] [CrossRef]
- Amatriain, X.; Pujol, J.M.; Tintarev, N.; Oliver, N. Rate It Again: Increasing Recommendation Accuracy by User Re-Rating. In Proceedings of the RecSys’09, 3rd ACM Conference on Recommender Systems, New York, NY, USA, 23–25 October 2009; pp. 173–180. [Google Scholar] [CrossRef]
- Li, B.; Chen, L.; Zhu, X.; Zhang, C. Noisy but Non-Malicious User Detection in Social Recommender Systems. World Wide Web 2013, 16, 677–699. [Google Scholar] [CrossRef]
- Bokde, D.; Girase, S.; Mukhopadhyay, D. Matrix Factorization Model in Collaborative Filtering Algorithms: A Survey. Procedia Comput. Sci. 2015, 49, 136–146. [Google Scholar]
- Larrain, S.; Trattner, C.; Parra, D.; Graells-Garrido, E.; Nørvåg, K. Good Times Bad Times: A Study on Recency Effects in Collaborative Filtering for Social Tagging. In Proceedings of the RecSys 2015, 9th ACM Conference on Recommender Systems, Vienna, Austria, 16–20 September 2015; pp. 269–272. [Google Scholar]
- He, L.; Wu, F. A Time-Context-Based Collaborative Filtering Algorithm. In Proceedings of the 2009 IEEE International Conference on Granular Computing, Nanchang, China, 17–19 August 2009; pp. 209–213. [Google Scholar]
- Chen, Y.C.; Hui, L.; Thaipisutikul, T. A Collaborative Filtering Recommendation System with Dynamic Time Decay. J. Supercomput. 2021, 77, 244–262. [Google Scholar] [CrossRef]
- Xia, C.; Jiang, X.; Liu, S.; Luo, Z.; Zhang, Y. Dynamic Item-Based Recommendation Algorithm with Time Decay. In Proceedings of the 2010 6th International Conference on Natural Computation, ICNC 2010, Yantai, China, 10–12 August 2010; Volume 1, pp. 242–247. [Google Scholar]
- Ding, Y.; Li, X. Time Weight Collaborative Filtering. In Proceedings of the International Conference on Information and Knowledge Management, Bremen, Germany, 31 October 2005–5 November 2005; pp. 485–492. [Google Scholar]
- Zimdars, A.; Chickering, D.M.; Meek, C. Using Temporal Data for Making Recommendations. arXiv 2001, arXiv:1301.2320. [Google Scholar] [CrossRef]
- Sharma, S.; Rana, V.; Kumar, V. Deep Learning Based Semantic Personalized Recommendation System. Int. J. Inf. Manag. Data Insights 2021, 1, 100028. [Google Scholar] [CrossRef]
- Fu, M.; Qu, H.; Yi, Z.; Lu, L.; Liu, Y. A Novel Deep Learning-Based Collaborative Filtering Model for Recommendation System. IEEE Trans. Cybern. 2019, 49, 1084–1096. [Google Scholar] [CrossRef]
- Hong, W.; Zheng, N.; Xiong, Z.; Hu, Z. A Parallel Deep Neural Network Using Reviews and Item Metadata for Cross-Domain Recommendation. IEEE Access 2020, 8, 41774–41783. [Google Scholar] [CrossRef]
- Seif, G. Understanding the 3 Most Common Loss Functions for Machine Learning Regression. Towards Data Science. Available online: https://towardsdatascience.com/understanding-the-3-most-common-loss-functions-for-machine-learning-regression-23e0ef3e14d3 (accessed on 28 April 2022).
- Dogo, E.M.; Afolabi, O.J.; Nwulu, N.I.; Twala, B.; Aigbavboa, C.O. A Comparative Analysis of Gradient Descent-Based Optimization Algorithms on Convolutional Neural Networks. In Proceedings of the International Conference on Computational Techniques, Electronics and Mechanical Systems, Belgaum, India, 21–22 December 2018; pp. 92–99. [Google Scholar] [CrossRef]
- Messaoud, S.; Bradai, A.; Moulay, E. Online GMM Clustering and Mini-Batch Gradient Descent Based Optimization for Industrial IoT 4.0. IEEE Trans. Ind. Inform. 2020, 16, 1427–1435. [Google Scholar] [CrossRef]
- Zaheer, R. A Study of the Optimization Algorithms in Deep Learning. In Proceedings of the 2019 Third International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 10–11 January 2019; pp. 536–539. [Google Scholar]
- Khan, Z.A.; Zubair, S.; Alquhayz, H.; Azeem, M.; Ditta, A. Design of Momentum Fractional Stochastic Gradient Descent for Recommender Systems. IEEE Access 2019, 7, 179575–179590. [Google Scholar] [CrossRef]
- Khan, Z.A.; Raja, M.A.Z.; Chaudhary, N.I.; Mehmood, K.; He, Y. MISGD: Moving-Information-Based Stochastic Gradient Descent Paradigm for Personalized Fuzzy Recommender Systems. Int. J. Fuzzy Syst. 2022, 24, 686–712. [Google Scholar] [CrossRef]
- Khan, Z.A.; Zubair, S.; Chaudhary, N.I.; Raja, M.A.Z.; Khan, F.A.; Dedovic, N. Design of Normalized Fractional SGD Computing Paradigm for Recommender Systems. Neural Comput. Appl. 2020, 32, 10245–10262. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. ADAM: A Method for Stochastic Optimization. arXiv 2015, arXiv:1412.6980. [Google Scholar]
- ML|ADAM (Adaptive Moment Estimation) Optimization—GeeksforGeeks. Available online: https://www.geeksforgeeks.org/adam-adaptive-moment-estimation-optimization-ml/ (accessed on 28 April 2022).
- Wang, Y.; Deng, J.; Gao, J.; Zhang, P. A Hybrid User Similarity Model for Collaborative Filtering. Inf. Sci. 2017, 418–419, 102–118. [Google Scholar] [CrossRef]
- Lee, H.; Lee, J. Scalable Deep Learning-Based Recommendation Systems. ICT Express 2019, 5, 84–88. [Google Scholar] [CrossRef]
- Dubey, S.R.; Singh, S.K.; Chaudhuri, B.B. A Comprehensive Survey and Performance Analysis of Activation Functions in Deep Learning. arXiv 2021, arXiv:2109.14545. [Google Scholar]
User Sets
Condition | Assigned Set
---|---
If r(user, item) < T1_u | Add rating to the Weak_user set
If r(user, item) ≥ T1_u and r(user, item) < T2_u | Add rating to the Average_user set
If r(user, item) ≥ T2_u | Add rating to the Strong_user set

Item Sets
Condition | Assigned Set
---|---
If r(user, item) < T1_i | Add rating to the Weak_item set
If r(user, item) ≥ T1_i and r(user, item) < T2_i | Add rating to the Average_item set
If r(user, item) ≥ T2_i | Add rating to the Strong_item set
Thresholds | Formula
---|---
T1_u, T1_i, or T1 | Rmin + round((1/3) × (Rmax − Rmin))
T2_u, T2_i, or T2 | Rmax − round((1/3) × (Rmax − Rmin))
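As an illustration of these formulas on a 1–5 rating scale (Rmin = 1, Rmax = 5), the thresholds evaluate to T1 = 2 and T2 = 4. The tiny snippet below simply evaluates the two expressions and is not taken from the paper's code.

```python
def rating_thresholds(r_min: int, r_max: int) -> tuple:
    """T1 and T2 split the rating scale into weak / average / strong bands."""
    step = round((r_max - r_min) / 3)
    return r_min + step, r_max - step

print(rating_thresholds(1, 5))  # -> (2, 4) on a 1-5 star scale
```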
User Classes
Condition | Class
---|---
If |Weak_user| ≥ |Strong_user| + |Average_user| | User is Critical
If |Average_user| ≥ |Strong_user| + |Weak_user| | User is Average
If |Strong_user| ≥ |Average_user| + |Weak_user| | User is Benevolent

Item Classes
Condition | Class
---|---
If |Weak_item| ≥ |Strong_item| + |Average_item| | Item is Weakly recommended
If |Average_item| ≥ |Strong_item| + |Weak_item| | Item is Averagely recommended
If |Strong_item| ≥ |Average_item| + |Weak_item| | Item is Strongly recommended

Rating Classes
Condition | Class
---|---
If r(user, item) < T1 | Rating is Weak
If T1 ≤ r(user, item) < T2 | Rating is Average
If r(user, item) ≥ T2 | Rating is Strong
Categories | User Class | Item Class | Rating Class |
---|---|---|---|
Category-1 | Critical | Weakly_recommended | Weak |
Category-2 | Average | Averagely_recommended | Average |
Category-3 | Benevolent | Strongly_recommended | Strong |
User | I1 | I2 | I3 | I4
---|---|---|---|---
U1 | 3 | 5 | 4 | 4
U2 | 2 | 5 | 4 | 4
U3 | 3 | 3 | 2 | 5
U4 | 4 | 4 | - | 1
Classification | Entity | Weak Count | Average Count | Strong Count | Class
---|---|---|---|---|---
Users | U1 | 0 | 1 | 3 | Benevolent
Users | U2 | 1 | 0 | 3 | Benevolent
Users | U3 | 1 | 0 | 2 | Average
Users | U4 | 1 | 0 | 3 | Benevolent
Items | I1 | 1 | 2 | 1 | Averagely recommended
Items | I2 | 0 | 1 | 3 | Strongly recommended
Items | I3 | 1 | 0 | 2 | Strongly recommended
Items | I4 | 1 | 0 | 3 | Strongly recommended
User | I1 | I2 | I3 | I4
---|---|---|---|---
U1 | 3 | 5 | 4 | 4
U2 | 3 | 5 | 4 | 4
U3 | 3 | 3 | 2 | 5
U4 | 4 | 4 | - | 1
Function | Mathematical Expression | References
---|---|---
Exponential | | [31,63]
Concave | | [62]
Convex | | [62]
Linear | | [62]
Logistic | | [33,59]
Power | | [59]
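The experiments reported below indicate that the power decay function performs best, so the sketch that follows shows one common way such a weighting can be applied to timestamped ratings before training. The exact functional form, the decay parameter `lam`, and the choice to scale ratings directly are illustrative assumptions, since the expressions themselves are not reproduced in the table above.

```python
import numpy as np

def power_decay_weight(t_rating, t_now, lam=0.4):
    """Power-law decay weight for a rating's age (illustrative form).

    t_rating, t_now: timestamps in days; lam: decay exponent (assumed value).
    """
    age_days = np.maximum(np.asarray(t_now, dtype=float) - np.asarray(t_rating, dtype=float), 1.0)
    return age_days ** (-lam)            # recent ratings keep a weight close to 1

def apply_time_decay(ratings, timestamps, t_now):
    """Scale each rating by its time-decay weight before feeding the DNN."""
    weights = power_decay_weight(timestamps, t_now)
    return np.asarray(ratings, dtype=float) * weights

# A rating given yesterday keeps almost full weight; a 3-year-old one is damped.
print(apply_time_decay([5.0, 5.0], [0.0, 0.0], t_now=[1.0, 1095.0]))
```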
Features | ML-100k | ML-1M | Epinions |
---|---|---|---|
# of users (M) | 943 | 6040 | 40,163 |
# of items (N) | 1682 | 3952 | 139,738 |
# of ratings (R) | 100,000 | 1,000,000 | 664,824 |
# of noisy ratings | 10,602 | 112,938 | 53,247 |
# of corrected noisy ratings | 3092 | 32,807 | 12,780 |
# of ratings per user | 106.6 | 165.8 | 16.55 |
# of ratings per movie | 59.5 | 253.03 | 4.75 |
Density Index | 6.30 | 4.18 | 0.01 |
Time range | Sep’97–Apr’98 | Apr’00–Feb’03 | Jul’99–May’11 |
Parameters | Values
---|---
Activation Function | ReLU
Loss Function | Huber
Optimizer | Adam
Regularizer | L2 (1 × 10−6)
Learning Rate | 0.001
Total Hidden Layers | 4
Neurons per Layer | 512-256-64-8
Epochs | 250
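Putting the hyperparameters above together, the following Keras sketch shows one plausible way to realize the embedding-plus-MLP architecture with four hidden layers of 512-256-64-8 neurons, ReLU activations, L2(1e-6) regularization, Huber loss, and Adam at a 0.001 learning rate. The embedding size, layer wiring, and names are assumptions for illustration, not the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_td_dnn(n_users, n_items, embedding_dim=32):
    """Embedding + MLP regressor configured with the hyperparameters reported above."""
    reg = regularizers.l2(1e-6)
    user_in = layers.Input(shape=(1,), name="user_id")
    item_in = layers.Input(shape=(1,), name="item_id")

    user_vec = layers.Flatten()(layers.Embedding(n_users, embedding_dim,
                                                 embeddings_regularizer=reg)(user_in))
    item_vec = layers.Flatten()(layers.Embedding(n_items, embedding_dim,
                                                 embeddings_regularizer=reg)(item_in))

    x = layers.Concatenate()([user_vec, item_vec])
    for units in (512, 256, 64, 8):                 # four hidden layers
        x = layers.Dense(units, activation="relu", kernel_regularizer=reg)(x)
    out = layers.Dense(1, name="predicted_rating")(x)

    model = tf.keras.Model([user_in, item_in], out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss=tf.keras.losses.Huber())
    return model

# Example: ML-100k dimensions (943 users, 1682 items); train with epochs=250 in practice.
model = build_td_dnn(943, 1682)
model.summary()
```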
Dataset | Function | 20 | 60 | 100 | 150 | 200
---|---|---|---|---|---|---
ML-100k | Concave | 0.5109 | 0.5096 | 0.5101 | 0.5110 | 0.5120
ML-100k | Convex | 0.5866 | 0.5869 | 0.5882 | 0.5901 | 0.5911
ML-100k | Exponential | 0.5973 | 0.5985 | 0.6004 | 0.6021 | 0.6033
ML-100k | Linear | 0.4785 | 0.4781 | 0.4785 | 0.4788 | 0.4794
ML-100k | Logistic | 0.4662 | 0.4661 | 0.4664 | 0.4674 | 0.4680
ML-100k | Power | 0.4261 | 0.4249 | 0.4256 | 0.4261 | 0.4270
Methods | ML-100k MAE | ML-100k RMSE | ML-1M MAE | ML-1M RMSE | Epinions MAE | Epinions RMSE
---|---|---|---|---|---|---
IBCF | 0.7897 | 1.0687 | 0.7795 | 1.0564 | 1.0075 | 1.3745 |
UBCF | 0.7429 | 1.0240 | 0.7455 | 1.0245 | 0.9853 | 1.3510 |
SVD | 0.7515 | 1.0024 | 0.7215 | 0.9295 | 0.8935 | 1.3365 |
DLCRS | 0.7421 | 0.9993 | 0.7125 | 0.9112 | 0.8844 | 1.3122 |
Deep Edu | 0.6725 | 0.8932 | 0.6571 | 0.8794 | 0.7634 | 0.9891
NNBN | 0.7206 | 0.9134 | 0.6987 | 0.8858 | 0.8025 | 1.2452 |
NCF | 0.6513 | 0.8710 | 0.6338 | 0.8532 | 0.7621 | 0.9835 |
NMF | 0.7321 | 0.9916 | 0.7260 | 1.0362 | 0.8234 | 1.2912 |
TD-DNN (Proposed) | 0.6275 | 0.8312 | 0.6196 | 0.8284 | 0.7364 | 0.9638 |