A Hybrid Recommender System Based on Autoencoder and Latent Feature Analysis
Abstract
1. Introduction
- It proposes an AutoLFA model that aggregates the merits of both the LFA model and the DNN-based model by a customized self-adaptive weighting strategy;
- Theoretical analyses and model designs are provided for the proposed AutoLFA model;
- Extensive experiments on five real recommendation datasets are conducted to evaluate the proposed AutoLFA model. The results demonstrate that AutoLFA achieves significantly better recommendation performance than the related state-of-the-art models.
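The customized self-adaptive weighting strategy is detailed in Section 4.3. As a rough illustration only (the function names and the temperature parameter below are assumptions, not the paper's exact rule), one common way to realize such a scheme is to weight each base model by a softmax over its negative recent training loss, so the better-fitting model gradually receives more weight:

```python
import math

def self_adaptive_weights(losses, temperature=1.0):
    """Softmax over negative losses: lower-loss base models get larger weights."""
    scores = [math.exp(-l / temperature) for l in losses]
    total = sum(scores)
    return [s / total for s in scores]

def aggregate(pred_lfa, pred_auto, losses):
    """Combine the two base models' predictions with the adaptive weights."""
    w_lfa, w_auto = self_adaptive_weights(losses)
    return [w_lfa * a + w_auto * b for a, b in zip(pred_lfa, pred_auto)]

# If the LFA model's current loss is 0.9 and the autoencoder's is 0.8,
# the autoencoder receives the larger ensemble weight.
w = self_adaptive_weights([0.9, 0.8])
```

Recomputing the weights each epoch from the current losses is what makes the scheme "self-adaptive": the ensemble needs no manually tuned mixing coefficient.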
2. Related Work
3. Preliminaries
4. The Proposed AutoLFA
4.1. The Latent Feature Analysis-Based (LFA-Based) Model
4.2. The Autoencoder-Based Model
4.3. Self-Adaptive Aggregation
4.4. Theoretical Analysis
5. Experiments
- RQ 1: Does the proposed AutoLFA model outperform state-of-the-art models in accurately predicting user behavior data?
- RQ 2: How does the AutoLFA model self-adaptively control the ensemble weights of its base models during the training process to ensure optimal performance?
- RQ 3: Are the base models of AutoLFA diversified in their ability to represent the same user behavior data matrix, thereby enhancing the performance of AutoLFA?
- RQ 4: What is the impact of the number of Latent Features and hidden units in the base models on the accuracy of AutoLFA?
5.1. General Settings
5.2. Performance Comparison (RQ. 1)
5.2.1. Comparison of Prediction Accuracy
- AutoLFA achieves the lowest RMSE/MAE in most cases, losing in only seven cases and tying in one. The total loss/tie/win count is 7/1/62.
- All p-values are below the significance level of 0.1, indicating that AutoLFA outperforms all competitors in terms of prediction accuracy.
- AutoLFA obtains the lowest F-rank among all participants, confirming its highest accuracy across all datasets.
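The metrics behind these bullets are straightforward to reproduce. The sketch below computes RMSE/MAE and a Friedman-style average rank (F-rank, lower is better) over an error table with one row per dataset and one column per model; it is a generic illustration, not the authors' evaluation script, and the Wilcoxon signed-rank p-values would be computed analogously from the paired errors:

```python
def rmse(y_true, y_pred):
    """Root mean squared error over observed ratings."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def mae(y_true, y_pred):
    """Mean absolute error over observed ratings."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def average_ranks(values):
    """Rank models on one dataset (1 = lowest error), averaging tied positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def f_rank(error_table):
    """F-rank: mean of per-dataset ranks for each model (lower is better)."""
    per_dataset = [average_ranks(row) for row in error_table]
    n_models = len(error_table[0])
    return [sum(r[j] for r in per_dataset) / len(per_dataset)
            for j in range(n_models)]
```

For example, a model that ranks first on every dataset gets an F-rank of 1.0, which is why AutoLFA's 2.05 in Table 3 indicates the best overall accuracy.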
5.2.2. Comparison of Computational Efficiency
- LFA-based models generally exhibit higher computational efficiency than DNN-based models, as they are trained only on the observed entries of the user behavior matrix, whereas DNN-based models process the full, mostly missing matrix.
- Due to their complex data form and architecture, GNN-based models consume significant computational resources and time. From Figure 2, it is evident that IGMC surpasses 3000 s in time costs.
- Except for slightly longer time consumption on dataset D4, AutoLFA’s time consumption falls between LFA-based and GNN-based models. It is slightly higher than the original Autoencoder-based model but faster than other DNN-based models in most cases.
5.3. The Self-Ensembling of AutoLFA (RQ. 2)
- In most cases (e.g., Figure 3a–d), the ensemble weights of the Autoencoder-based model gradually increase and surpass the LFA-based model as the training progresses until the base models are fitted.
- In some instances, the LFA-based model’s weight may exceed that of the Autoencoder-based model. For example, in Figure 3e, the ensemble weight of the LFA-based model is greater than that of the Autoencoder-based model due to its faster convergence.
5.4. Distribution of Latent Features of Base Models (RQ. 3)
- The distribution of Latent Features in the Autoencoder-based model tends to have more values concentrated at the extremes (i.e., 0 or 1), as shown in Figure 4f,h,i, while in the LFA-based model, the distribution tends to follow a normal distribution.
- After encoding, the Autoencoder-based model’s Latent Features are more likely to exhibit unusually high values within specific ranges. In contrast, in the LFA-based model, there are no extreme values, as depicted in Figure 4a,c,d.
- In some cases, the distribution of Latent Features in the Autoencoder-based model appears to be slightly more uniform compared to the LFA-based model, as illustrated in Figure 4e,j.
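The contrast described above can be quantified without plotting, e.g., by measuring how much of a [0, 1]-scaled Latent Feature matrix sits near the extremes, or by binning it into a histogram. The thresholds (0.05/0.95) and function names below are illustrative choices, not values from the paper:

```python
def extreme_fraction(features, lo=0.05, hi=0.95):
    """Fraction of latent feature values near the extremes of [0, 1]."""
    flat = [v for row in features for v in row]
    near = sum(1 for v in flat if v <= lo or v >= hi)
    return near / len(flat)

def histogram(values, bins=10):
    """Simple equal-width histogram, enough to compare the two distributions."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against a constant input
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp hi into the last bin
        counts[idx] += 1
    return counts
```

A higher `extreme_fraction` for the Autoencoder-based model and a roughly bell-shaped histogram for the LFA-based model would match the observations in Figure 4.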
5.5. Influence of the Number of Latent Features and Hidden Units on the Base Models (RQ. 4)
- Increasing the number of Latent Features/hidden units from 2/20 to 20/300 results in a rapid improvement in the accuracy of AutoLFA. During this range, AutoLFA substantially increases accuracy without incurring significant computational costs.
- Once the number of Latent Features/hidden units reaches 25/400, the rate of accuracy improvement becomes less prominent in Figure 5b–d.
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Deng, S.; Zhai, Y.; Wu, D.; Yue, D.; Fu, X.; He, Y. A Lightweight Dynamic Storage Algorithm With Adaptive Encoding for Energy Internet. IEEE Trans. Serv. Comput. 2023, 1–14.
- He, Y.; Wu, B.; Wu, D.; Beyazit, E.; Chen, S.; Wu, X. Toward Mining Capricious Data Streams: A Generative Approach. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1228–1240.
- Wang, D.; Liang, Y.; Xu, D.; Feng, X.; Guan, R. A content-based recommender system for computer science publications. Knowl.-Based Syst. 2018, 157, 1–9.
- Bai, X.; Huang, M.; Xu, M.; Liu, J. Reconfiguration Optimization of Relative Motion Between Elliptical Orbits Using Lyapunov-Floquet Transformation. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 923–936.
- You, D.; Niu, S.; Dong, S.; Yan, H.; Chen, Z.; Wu, D.; Shen, L.; Wu, X. Counterfactual explanation generation with minimal feature boundary. Inf. Sci. 2023, 625, 342–366.
- You, D.; Xiao, J.; Wang, Y.; Yan, H.; Wu, D.; Chen, Z.; Shen, L.; Wu, X. Online Learning From Incomplete and Imbalanced Data Streams. IEEE Trans. Knowl. Data Eng. 2023, 1–14.
- Chen, J.; Wang, Q.; Peng, W.; Xu, H.; Li, X.; Xu, W. Disparity-based multiscale fusion network for transportation detection. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18855–18863.
- Cao, B.; Zhao, J.; Lv, Z.; Yang, P. Diversified personalized recommendation optimization based on mobile data. IEEE Trans. Intell. Transp. Syst. 2020, 22, 2133–2139.
- Zhang, S.; Yao, L.; Wu, B.; Xu, X.; Zhang, X.; Zhu, L. Unraveling metric vector spaces with factorization for recommendation. IEEE Trans. Ind. Inform. 2019, 16, 732–742.
- Koren, Y.; Bell, R.; Volinsky, C. Matrix Factorization Techniques for Recommender Systems. Computer 2009, 42, 30–37.
- Li, P.; Wang, Z.; Ren, Z.; Bing, L.; Lam, W. Neural rating regression with abstractive tips generation for recommendation. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, 7–11 August 2017; pp. 345–354.
- Wu, D.; Luo, X. Robust Latent Factor Analysis for Precise Representation of High-Dimensional and Sparse Data. IEEE/CAA J. Autom. Sin. 2021, 8, 796–805.
- Wu, D.; Shang, M.; Luo, X.; Wang, Z. An L1-and-L2-Norm-Oriented Latent Factor Model for Recommender Systems. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 5775–5788.
- Luo, X.; Wu, H.; Li, Z. Neulft: A Novel Approach to Nonlinear Canonical Polyadic Decomposition on High-Dimensional Incomplete Tensors. IEEE Trans. Knowl. Data Eng. 2023, 35, 6148–6166.
- Wu, D.; He, Y.; Luo, X.; Zhou, M. A Latent Factor Analysis-Based Approach to Online Sparse Streaming Feature Selection. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 6744–6758.
- Wu, D. Robust Latent Feature Learning for Incomplete Big Data; Springer Nature: Berlin, Germany, 2022.
- Liu, H.; Yuan, H.; Hou, J.; Hamzaoui, R.; Gao, W. Pufa-gan: A frequency-aware generative adversarial network for 3d point cloud upsampling. IEEE Trans. Image Process. 2022, 31, 7389–7402.
- Liu, Q.; Yuan, H.; Hamzaoui, R.; Su, H.; Hou, J.; Yang, H. Reduced reference perceptual quality model with application to rate control for video-based point cloud compression. IEEE Trans. Image Process. 2021, 30, 6623–6636.
- Liu, X.; He, J.; Liu, M.; Yin, Z.; Yin, L.; Zheng, W. A Scenario-Generic Neural Machine Translation Data Augmentation Method. Electronics 2023, 12, 2320.
- Liu, X.; Zhao, J.; Li, J.; Cao, B.; Lv, Z. Federated neural architecture search for medical data security. IEEE Trans. Ind. Inform. 2022, 18, 5628–5636.
- Huang, T.; Liang, C.; Wu, D.; He, Y. A Debiasing Autoencoder for Recommender System. IEEE Trans. Consum. Electron. 2023.
- Zenggang, X.; Mingyang, Z.; Xuemin, Z.; Sanyuan, Z.; Fang, X.; Xiaochao, Z.; Yunyun, W.; Xiang, L. Social similarity routing algorithm based on socially aware networks in the big data environment. J. Signal Process. Syst. 2022, 94, 1253–1267.
- Xiong, Z.; Li, X.; Zhang, X.; Deng, M.; Xu, F.; Zhou, B.; Zeng, M. A Comprehensive Confirmation-based Selfish Node Detection Algorithm for Socially Aware Networks. J. Signal Process. Syst. 2023, 1–19.
- Xie, L.; Zhu, Y.; Yin, M.; Wang, Z.; Ou, D.; Zheng, H.; Liu, H.; Yin, G. Self-feature-based point cloud registration method with a novel convolutional Siamese point net for optical measurement of blade profile. Mech. Syst. Signal Process. 2022, 178, 109243.
- Wu, D.; Sun, B.; Shang, M. Hyperparameter Learning for Deep Learning-based Recommender Systems. IEEE Trans. Serv. Comput. 2023, 1–13.
- Chen, F.; Yin, G.; Dong, Y.; Li, G.; Zhang, W. KHGCN: Knowledge-Enhanced Recommendation with Hierarchical Graph Capsule Network. Entropy 2023, 25, 697.
- Muller, L.; Martel, J.; Indiveri, G. Kernelized synaptic weight matrices. In International Conference on Machine Learning; PMLR: London, UK, 2018; pp. 3654–3663.
- Han, H.; Liang, Y.; Bella, G.; Giunchiglia, F.; Li, D. LFDNN: A Novel Hybrid Recommendation Model Based on DeepFM and LightGBM. Entropy 2023, 25, 638.
- Zhang, S.; Yao, L.; Sun, A.; Tay, Y. Deep learning based recommender system: A survey and new perspectives. ACM Comput. Surv. 2019, 52, 1–38.
- Lu, S.; Liu, M.; Yin, L.; Yin, Z.; Liu, X.; Zheng, W. The multi-modal fusion in visual question answering: A review of attention mechanisms. PeerJ Comput. Sci. 2023, 9, e1400.
- Shen, X.; Jiang, H.; Liu, D.; Yang, K.; Deng, F.; Lui, J.C.; Liu, J.; Dustdar, S.; Luo, J. PupilRec: Leveraging Pupil Morphology for Recommending on Smartphones. IEEE Internet Things J. 2022, 9, 15538–15553.
- Shen, Y.; Ding, N.; Zheng, H.-T.; Li, Y.; Yang, M. Modeling relation paths for knowledge graph completion. IEEE Trans. Knowl. Data Eng. 2020, 33, 3607–3617.
- Wang, H.; Wang, B.; Luo, P.; Ma, F.; Zhou, Y.; Mohamed, M.A. State evaluation based on feature identification of measurement data: For resilient power system. CSEE J. Power Energy Syst. 2021, 8, 983–992.
- Wang, Y.; Xu, N.; Liu, A.-A.; Li, W.; Zhang, Y. High-order interaction learning for image captioning. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 4417–4430.
- Luo, X.; Zhou, Y.; Liu, Z.; Zhou, M. Fast and Accurate Non-Negative Latent Factor Analysis of High-Dimensional and Sparse Matrices in Recommender Systems. IEEE Trans. Knowl. Data Eng. 2023, 35, 3897–3911.
- Chen, J.; Xu, M.; Xu, W.; Li, D.; Peng, W.; Xu, H. A Flow Feedback Traffic Prediction Based on Visual Quantified Features. IEEE Trans. Intell. Transp. Syst. 2023, 1–9.
- Deng, X.; Liu, E.; Li, S.; Duan, Y.; Xu, M. Interpretable Multi-modal Image Registration Network Based on Disentangled Convolutional Sparse Coding. IEEE Trans. Image Process. 2023, 32, 1078–1091.
- Fan, W.; Yang, L.; Bouguila, N. Unsupervised grouped axial data modeling via hierarchical Bayesian nonparametric models with Watson distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 9654–9668.
- Guo, C.; Hu, J. Fixed-Time Stabilization of High-Order Uncertain Nonlinear Systems: Output Feedback Control Design and Settling Time Analysis. J. Syst. Sci. Complex. 2023, 1–22.
- Huang, H.; Xue, C.; Zhang, W.; Guo, M. Torsion design of CFRP-CFST columns using a data-driven optimization approach. Eng. Struct. 2022, 251, 113479.
- Sedhain, S.; Menon, A.K.; Sanner, S.; Xie, L. Autorec: Autoencoders meet collaborative filtering. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015; pp. 111–112.
- Wu, D.; Luo, X.; Shang, M.; He, Y.; Wang, G.; Wu, X. A Data-Characteristic-Aware Latent Factor Model for Web Services QoS Prediction. IEEE Trans. Knowl. Data Eng. 2022, 34, 2525–2538.
- Luo, X.; Zhou, M.; Li, S.; Wu, D.; Liu, Z.; Shang, M. Algorithms of Unconstrained Non-Negative Latent Factor Analysis for Recommender Systems. IEEE Trans. Big Data 2021, 7, 227–240.
- Yuan, Y.; Luo, X.; Shang, M.; Wu, D. A generalized and fast-converging non-negative latent factor model for predicting user preferences in recommender systems. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 498–507.
- Ren, X.; Song, M.; Haihong, E.; Song, J. Context-aware probabilistic matrix factorization modeling for point-of-interest recommendation. Neurocomputing 2017, 241, 38–55.
- Wu, D.; Jin, L.; Luo, X. PMLF: Prediction-Sampling-Based Multilayer-Structured Latent Factor Analysis. In Proceedings of the 2020 IEEE International Conference on Data Mining (ICDM), Sorrento, Italy, 17–20 November 2020; pp. 671–680.
- Wu, D.; Luo, X.; He, Y.; Zhou, M. A Prediction-Sampling-Based Multilayer-Structured Latent Factor Model for Accurate Representation to High-Dimensional and Sparse Data. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–14.
- Wang, C.; Liu, Q.; Wu, R.; Chen, E.; Liu, C.; Huang, X.; Huang, Z. Confidence-aware matrix factorization for recommender systems. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 434–442.
- Wu, D.; He, Q.; Luo, X.; Shang, M.; He, Y.; Wang, G. A Posterior-Neighborhood-Regularized Latent Factor Model for Highly Accurate Web Service QoS Prediction. IEEE Trans. Serv. Comput. 2022, 15, 793–805.
- Wu, D.; Zhang, P.; He, Y.; Luo, X. A Double-Space and Double-Norm Ensembled Latent Factor Model for Highly Accurate Web Service QoS Prediction. IEEE Trans. Serv. Comput. 2023, 16, 802–814.
- Leng, C.; Zhang, H.; Cai, G.; Cheng, I.; Basu, A. Graph regularized Lp smooth non-negative matrix factorization for data representation. IEEE/CAA J. Autom. Sin. 2019, 6, 584–595.
- Wu, D.; Luo, X.; Shang, M.; He, Y.; Wang, G.; Zhou, M. A Deep Latent Factor Model for High-Dimensional and Sparse Matrices in Recommender Systems. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 4285–4296.
- Sun, B.; Wu, D.; Shang, M.; He, Y. Toward auto-learning hyperparameters for deep learning-based recommender systems. In Proceedings of the 27th International Conference on Database Systems for Advanced Applications, DASFAA 2022, Virtual Event, 11–14 April 2022; Proceedings, Part II. Springer: Cham, Switzerland, 2022; pp. 323–331.
- Wang, Q.; Peng, B.; Shi, X.; Shang, T.; Shang, M. DCCR: Deep collaborative conjunctive recommender for rating prediction. IEEE Access 2019, 7, 60186–60198.
- Zhang, M.; Chen, Y. Inductive matrix completion based on graph neural networks. In Proceedings of the International Conference on Learning Representations, Virtual Event (formerly Addis Ababa, Ethiopia), 26 April–1 May 2020.
- He, X.; Chua, T.-S. Neural factorization machines for sparse predictive analytics. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Tokyo, Japan, 7–11 August 2017; pp. 355–364.
- Han, S.C.; Lim, T.; Long, S.; Burgstaller, B.; Poon, J. GLocal-K: Global and Local Kernels for Recommender Systems. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual Event, Australia, 1–5 November 2021; pp. 3063–3067.
- Xiao, J.; Ye, H.; He, X.; Zhang, H.; Wu, F.; Chua, T.-S. Attentional factorization machines: Learning the weight of feature interactions via attention networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI, Melbourne, Australia, 19–25 August 2017; pp. 3119–3125.
- Kim, D.H.; Park, C.; Oh, J.; Lee, S.; Yu, H. Convolutional matrix factorization for document context-aware recommendation. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 233–240.
- Wu, D.; Luo, X.; Wang, G.; Shang, M.; Yuan, Y.; Yan, H. A Highly Accurate Framework for Self-Labeled Semisupervised Classification in Industrial Applications. IEEE Trans. Ind. Inform. 2018, 14, 909–920.
| No. | Name | \|M\| | \|N\| | \|HO\| | Density * |
|---|---|---|---|---|---|
| D1 | MovieLens_1M | 6040 | 3952 | 1,000,209 | 4.19% |
| D2 | MovieLens_100k | 943 | 1682 | 100,000 | 6.30% |
| D3 | MovieLens_HetRec | 2113 | 10,109 | 855,598 | 4.01% |
| D4 | Yahoo | 15,400 | 1000 | 365,704 | 2.37% |
| D5 | Douban | 3000 | 3000 | 136,891 | 1.52% |
| Model | Description |
|---|---|
| MF [10] | A representative LFA-based model for factorizing user-item matrix data in recommender systems. Computer 2009. |
| AutoRec [41] | A notable DNN-based model for representing user-item data in recommender systems. WWW 2015. |
| NRR [11] | A DNN-based multi-task learning framework for rating prediction in recommender systems. SIGIR 2017. |
| SparseFC [27] | A DNN-based model that reparametrizes weight matrices as low-dimensional vectors to capture important features. ICML 2018. |
| IGMC [55] | A GNN-based model for inductive matrix completion without using side information. ICLR 2020. |
| FML [9] | An LFA-based model that combines metric learning (distance space) and collaborative filtering. IEEE TII 2020. |
| GLocal-K [57] | A DNN-based model for generalizing and representing user-item data in a low-dimensional space with important features. CIKM 2021. |
| Dataset | Metric | MF | AutoRec | NRR | SparseFC | IGMC | FML | GLocal-K | AutoLFA |
|---|---|---|---|---|---|---|---|---|---|
| D1 | RMSE | 0.857 • | 0.847 • | 0.881 • | 0.839 ∘ | 0.867 • | 0.849 • | 0.839 ∘ | 0.842 |
| D1 | MAE | 0.673 • | 0.667 • | 0.691 • | 0.656 ∘ | 0.681 • | 0.667 • | 0.655 ∘ | 0.664 |
| D2 | RMSE | 0.913 • | 0.897 • | 0.923 • | 0.899 • | 0.915 • | 0.904 • | 0.892 • | 0.887 |
| D2 | MAE | 0.719 • | 0.706 • | 0.725 • | 0.706 • | 0.722 • | 0.718 • | 0.697 | 0.699 |
| D3 | RMSE | 0.757 • | 0.752 • | 0.774 • | 0.749 • | 0.769 • | 0.754 • | 0.756 • | 0.744 |
| D3 | MAE | 0.572 • | 0.569 • | 0.583 • | 0.567 • | 0.582 • | 0.573 • | 0.573 • | 0.562 |
| D4 | RMSE | 1.206 • | 1.172 • | 1.227 • | 1.203 • | 1.133 ∘ | 1.176 • | 1.204 • | 1.167 |
| D4 | MAE | 0.937 • | 0.900 • | 0.949 • | 0.915 • | 0.848 ∘ | 0.937 • | 0.905 • | 0.895 |
| D5 | RMSE | 0.738 • | 0.744 • | 0.726 • | 0.745 • | 0.751 • | 0.762 • | 0.737 | 0.737 |
| D5 | MAE | 0.588 • | 0.588 • | 0.573 • | 0.587 • | 0.594 • | 0.598 • | 0.580 ∘ | 0.584 |
| Statistic | loss/tie/win | 0/0/10 | 0/0/10 | 0/0/10 | 2/0/8 | 2/0/8 | 0/0/10 | 3/1/6 | 7/1/62 * |
|  | p-value | 0.0039 | 0.0039 | 0.0039 | 0.039 | 0.0195 | 0.039 | 0.0977 | - |
|  | F-rank | 5.7 | 3.75 | 6.6 | 3.5 | 5.9 | 5.45 | 3.05 | 2.05 |
Share and Cite
Guo, S.; Liao, X.; Li, G.; Xian, K.; Li, Y.; Liang, C. A Hybrid Recommender System Based on Autoencoder and Latent Feature Analysis. Entropy 2023, 25, 1062. https://doi.org/10.3390/e25071062