Deep Knowledge Tracing Based on Spatial and Temporal Representation Learning for Learning Performance Prediction
Abstract
1. Introduction
- 1. We propose a new KT model, called Deep Knowledge Tracing Based on Spatial and Temporal Deep Representation Learning for Learning Performance Prediction (DKT-STDRL). The model combines the strengths of CKT [14] and DKT [12] in extracting the spatial and temporal features of students’ learning process, so as to mine students’ learning characteristics in two complementary ways;
- 2. The DKT-STDRL model combines the spatial features of a student’s learning sequence over the past period, extracted by a multilayer convolutional neural network, with the student’s historical answer data to obtain joint learning features. A BiLSTM then learns the time-series information of these joint features. The stacked neural networks let the model take the spatial learning features into account while extracting the temporal features, which helps mine deeper information about students’ learning;
- 3. We use BiLSTM to perform a bidirectional time-series analysis of the joint learning features, which enables the model to analyze the learning features of students at each time step from both a past and a future perspective, so as to obtain more accurate predictions;
- 4.
- 5. We conducted extensive experiments comparing the prediction performance of DKT-STDRL with DKT [12], CKT [14], and four variants of DKT-STDRL (DKT-TDRL, DKT-SDRL1, DKT-STDRRP, and DKT-STDRRJ) under the same hyperparameters on the same datasets, to observe how spatial features, temporal features, prior features, and joint features each affect the prediction metrics.
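Putting the contributions together, the end-to-end flow (embedding matrix → CNN spatial features → concatenation with the answer history → BiLSTM → per-skill prediction) can be sketched at the level of tensor shapes. This is an illustrative NumPy sketch, not the authors’ TensorFlow implementation: a plain tanh RNN stands in for each LSTM direction, and all weights and layer sizes are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, w):
    """1-D convolution over the time axis with 'same' padding.
    x: (k, d_in), w: (width, d_in, d_out) -> (k, d_out)."""
    width = w.shape[0]
    pad = width // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.tensordot(xp[t:t + width], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0])])

def simple_rnn(x, wx, wh):
    """Plain tanh RNN standing in for one LSTM direction."""
    h = np.zeros(wh.shape[0])
    out = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ wx + h @ wh)
        out.append(h)
    return np.stack(out)

def dkt_stdrl_sketch(interactions, one_hot_history, M, hidden=30):
    k, d = interactions.shape
    # 1) CNN layer extracts spatial features of the exercise sequence.
    w1 = rng.normal(0, 0.1, (3, d, d))
    spatial = np.maximum(conv1d_same(interactions, w1), 0.0)
    # 2) Joint features: spatial features concatenated with answer history.
    joint = np.concatenate([spatial, one_hot_history], axis=1)
    # 3) BiLSTM stage (here: forward + backward simple RNNs) over the joint features.
    wx_f = rng.normal(0, 0.1, (joint.shape[1], hidden))
    wh_f = rng.normal(0, 0.1, (hidden, hidden))
    wx_b = rng.normal(0, 0.1, (joint.shape[1], hidden))
    wh_b = rng.normal(0, 0.1, (hidden, hidden))
    fwd = simple_rnn(joint, wx_f, wh_f)
    bwd = simple_rnn(joint[::-1], wx_b, wh_b)[::-1]
    temporal = np.concatenate([fwd, bwd], axis=1)
    # 4) Sigmoid output layer predicts correctness per knowledge point.
    w_out = rng.normal(0, 0.1, (temporal.shape[1], M))
    return 1.0 / (1.0 + np.exp(-(temporal @ w_out)))

k, n, M = 20, 8, 10
interactions = rng.normal(size=(k, 2 * n))                # k×2n embedding matrix
history = np.eye(2 * M)[rng.integers(0, 2 * M, size=k)]   # one-hot (skill, answer)
pred = dkt_stdrl_sketch(interactions, history, M)
print(pred.shape)  # (20, 10)
```

The key structural point of the model is step 2: the BiLSTM never sees the raw history alone, but the spatial features joined with it.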
2. Related Work and Motivation
3. Proposed Method
3.1. Problem Definition
3.2. Model Architecture
3.2.1. Using CNN to Extract Spatial Features of Students’ Learning Sequences
- 1. Convert the input to an embedding matrix. To help the model extract spatial features of students’ learning process, CKT represents a given student’s exercise history as an embedding matrix, where k is the maximum sequence length. Specifically, each exercise is randomly initialized as an n-dimensional embedding vector, where n is far smaller than M, the total number of knowledge points in the whole dataset. Then, according to whether the answer was correct or not, an n-dimensional zero vector is concatenated on the right or left side of the exercise embedding, so that each interaction becomes a 2n-dimensional vector and the history is transformed into a k × 2n embedding matrix [14];
- 2. Calculate the prior knowledge of different students. Because learners’ prior knowledge varies between individuals and is reflected in their past exercise performance and correct ratio on relevant skills [31], CKT obtains each student’s prior knowledge by computing historical relevant performance (HRP) and concept-wise percent correct (CPC) [14];
- 3. Calculate the learning rate of different students. Because students learn at different rates, and a student’s continuous exercise performance over a period reflects their learning efficiency at different stages, CKT extracts each student’s learning rate through a multi-layer CNN.
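Step 1 above can be illustrated with a small sketch. The helper below is hypothetical: the real CKT code learns the embedding table during training rather than fixing it, and short sequences are padded here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_embedding_matrix(exercise_ids, answers, M, n, k):
    """Build the k×2n input matrix described in step 1 (illustrative sketch).
    Each exercise id gets a random n-dim embedding (n << M); an n-dim zero
    vector is concatenated on the right for a correct answer and on the
    left for an incorrect one; shorter sequences are zero-padded to length k."""
    table = rng.normal(0, 0.1, (M, n))   # randomly initialized embeddings
    zero = np.zeros(n)
    rows = []
    for q, a in zip(exercise_ids, answers):
        e = table[q]
        rows.append(np.concatenate([e, zero]) if a == 1
                    else np.concatenate([zero, e]))
    out = np.zeros((k, 2 * n))
    out[:len(rows)] = rows
    return out

X = build_embedding_matrix([3, 7, 3], [1, 0, 1], M=50, n=8, k=5)
print(X.shape)  # (5, 16)
```

Splicing the zero vector on opposite sides for correct and incorrect answers keeps the two outcomes in disjoint halves of the 2n-dimensional vector, so the convolution can distinguish them.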
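For step 2, the exact HRP and CPC formulas are given in the CKT paper [14]; one plausible simplified reading, shown only to fix intuition (not the authors’ exact definitions), is:

```python
def prior_knowledge(skills, answers):
    """Per-step prior-knowledge features in the spirit of HRP and CPC.
    At step t, for the current skill s_t:
      hrp[t] = fraction of earlier attempts on s_t answered correctly,
      cpc[t] = overall fraction of earlier answers that were correct."""
    hrp, cpc = [], []
    for t, s in enumerate(skills):
        past = [a for q, a in zip(skills[:t], answers[:t]) if q == s]
        hrp.append(sum(past) / len(past) if past else 0.0)
        cpc.append(sum(answers[:t]) / t if t else 0.0)
    return hrp, cpc

hrp, cpc = prior_knowledge([1, 1, 2, 1], [1, 0, 1, 1])
print(hrp)  # [0.0, 1.0, 0.0, 0.5]
```

Both features are computed only from the history before step t, so they model what the student already knows rather than leaking the current answer.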
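Step 3’s multi-layer CNN can be sketched as stacked causal 1-D convolutions with gated linear units (GLU), in the spirit of CKT’s hierarchical convolution; the causal (left-only) padding ensures the learning rate at step t depends only on past interactions. Weights, widths, and depth here are illustrative, not the paper’s settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def causal_conv1d(x, w):
    """x: (k, d_in), w: (width, d_in, d_out); pads on the left so the
    output at step t only sees steps <= t."""
    width = w.shape[0]
    xp = np.pad(x, ((width - 1, 0), (0, 0)))
    return np.stack([np.tensordot(xp[t:t + width], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0])])

def glu_layer(x, w_lin, w_gate):
    """One convolution layer with a gated linear unit: linear path × sigmoid gate."""
    a = causal_conv1d(x, w_lin)
    b = causal_conv1d(x, w_gate)
    return a * (1.0 / (1.0 + np.exp(-b)))

k, d = 12, 6
h = rng.normal(size=(k, d))
for _ in range(3):   # three stacked convolution layers
    w1 = rng.normal(0, 0.1, (3, d, d))
    w2 = rng.normal(0, 0.1, (3, d, d))
    h = glu_layer(h, w1, w2)
print(h.shape)  # (12, 6)
```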
3.2.2. Intermediate Data Processing
3.2.3. Using BiLSTM to Extract the Temporal Features of Students’ Learning Process
3.3. Optimization
4. Results and Discussion
4.1. Experimental Datasets
4.2. Experimental Environment
4.3. Results and Discussion
- DKT-TDRL. To study the influence of spatial features on the prediction performance of the DKT-STDRL model, we removed the CNN-based spatial feature extraction from the model and obtained the variant DKT-TDRL. Specifically, DKT-TDRL first takes the input data represented by the embedding matrix, together with the student’s personalized prior knowledge state, as the prior feature. After the intermediate data processing step, this is fed into the BiLSTM for bidirectional temporal feature extraction. Finally, the prediction result is output;
- DKT-SDRL1. To study the impact of the temporal features of DKT-STDRL on prediction, the temporal feature extraction was removed from the model. Since DKT-STDRL uses BiLSTM to express the bidirectional features of the sequence, two schemes can be used to study unidirectional and bidirectional temporal feature extraction, respectively. The first scheme removes the bidirectional temporal feature extraction structure entirely; following this idea, to obtain the prediction for the next time step, the model reduces to the CKT model. The second scheme changes the BiLSTM structure of DKT-STDRL into an LSTM structure, which shows the influence of one-way temporal modeling on prediction and lets us compare bidirectional and unidirectional temporal features on the task of predicting students’ learning performance. We abbreviate the second scheme as DKT-SDRL1. Specifically, DKT-SDRL1 first extracts and uses the prior learning features, then applies a multi-layer convolution structure to extract students’ spatial learning features. After a simple intermediate step, these are fed into the LSTM-based one-way temporal feature extraction layer. Finally, the prediction results are output;
- DKT-STDRRP. In order to study the influence of the prior learning features of the DKT-STDRL model on the prediction effect, we removed the part of extracting the prior learning features from the DKT-STDRL model. This variant model is called DKT-STDRRP. DKT-STDRRP transforms the input students’ learning history data into an embedded matrix, and then extracts spatial features through CNN layers and extracts bidirectional temporal features through the BiLSTM layer, so as to predict students’ learning performance. By comparing the prediction results before and after removing the prior learning features, it is helpful to intuitively understand the role of the prior features in the task of predicting learning performance;
- DKT-STDRRJ. To study the impact of the joint features in DKT-STDRL on prediction, the merging of the one-hot coded exercise history data during intermediate data processing is removed from the model. In other words, this variant keeps the structure of the first half of DKT-STDRL unchanged, inputs the spatial features directly into the temporal feature extraction part (without concatenating the one-hot coded history), and then obtains the final prediction result. This scheme is called DKT-STDRRJ.
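The four variants above differ from DKT-STDRL only in which stage is removed or swapped; the mapping can be made explicit with a small configuration sketch (stage names follow the text; this is schematic, not the released code):

```python
def variant_pipeline(use_cnn=True, bidirectional=True,
                     use_prior=True, join_history=True):
    """Return the ordered list of stages for each ablation variant."""
    stages = ["embedding"]
    if use_prior:
        stages.append("prior features (HRP, CPC)")
    if use_cnn:
        stages.append("CNN spatial features")
    if join_history:
        stages.append("concat one-hot history")
    stages.append("BiLSTM" if bidirectional else "LSTM")
    stages.append("prediction")
    return stages

variants = {
    "DKT-STDRL":  variant_pipeline(),                     # full model
    "DKT-TDRL":   variant_pipeline(use_cnn=False),        # no spatial features
    "DKT-SDRL1":  variant_pipeline(bidirectional=False),  # one-way LSTM
    "DKT-STDRRP": variant_pipeline(use_prior=False),      # no prior features
    "DKT-STDRRJ": variant_pipeline(join_history=False),   # no joint features
}
print(variants["DKT-TDRL"])
# ['embedding', 'prior features (HRP, CPC)', 'concat one-hot history', 'BiLSTM', 'prediction']
```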
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
LMS | Learning Management System |
KT | Knowledge Tracing |
DKT-STDRL | Deep Knowledge Tracing Based on Spatial and Temporal Deep Representation Learning for Learning Performance Prediction |
BKT | Bayesian Knowledge Tracing |
HMM | Hidden Markov Model |
KT-IDEM | Knowledge Tracing: Item Difficulty Effect Model |
PC-BKT | Personalized Clustered BKT |
RNN | Recurrent Neural Network |
DL | Deep Learning |
DKVMN | Dynamic Key-Value Memory Network |
MANN | Memory-Augmented Neural Network |
CNN | Convolutional Neural Network |
CKT | Convolutional Knowledge Tracing |
GKT | Graph-based Knowledge Tracing |
GNN | Graph Neural Network |
BiLSTM | Bidirectional Long Short-Term Memory |
IRT | Item Response Theory |
LSTM | Long Short-Term Memory |
GLU | Gate Linear Unit |
SKVMN | Sequential Key-Value Memory Networks |
EERNN | Exercise-Enhanced Recurrent Neural Network |
SAKT | Self-Attentive Knowledge Tracing |
HRP | Historical Relevant Performance |
CPC | Concept-wise Percent Correct |
RMSE | Root Mean Squared Error |
AUC | Area Under Curve |
ACC | Accuracy |
OBE | Outcome-Based Education |
References
- Khlaif, Z.N.; Salha, S.; Affouneh, S.; Rashed, H.; ElKimishy, L.A. The COVID-19 Epidemic: Teachers’ Responses to School Closure in Developing Countries. Technol. Pedagog. Educ. 2021, 30, 95–109. [Google Scholar] [CrossRef]
- Cavus, N. Distance Learning and Learning Management Systems. Procedia Soc. Behav. Sci. 2015, 191, 872–877. [Google Scholar] [CrossRef] [Green Version]
- Pardos, Z.; Bergner, Y.; Seaton, D.; Pritchard, D. Adapting Bayesian Knowledge Tracing to a Massive Open Online Course in edX. In Proceedings of the International Conference on Educational Data Mining, Memphis, TN, USA, 6–9 July 2013; Citeseer: Princeton, NJ, USA, 2013. [Google Scholar]
- Heffernan, N.T.; Ostrow, K.S.; Kelly, K.; Selent, D.; Van Inwegen, E.G.; Xiong, X.; Williams, J.J. The Future of Adaptive Learning: Does the Crowd Hold the Key? Int. J. Artif. Intell. Educ. 2016, 26, 615–644. [Google Scholar] [CrossRef] [Green Version]
- Gorshenin, A. Toward Modern Educational IT-ecosystems: From Learning Management Systems to Digital Platforms. In Proceedings of the 2018 10th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), Moscow, Russia, 5–9 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar] [CrossRef] [Green Version]
- Hernández-Blanco, A.; Herrera-Flores, B.; Tomás, D.; Navarro-Colorado, B. A Systematic Review of Deep Learning Approaches to Educational Data Mining. Complexity 2019, 2019, 1306039. [Google Scholar] [CrossRef]
- Teodorescu, O.; Popescu, P.; Mocanu, M.; Mihaescu, C. Continuous Student Knowledge Tracing Using SVD and Concept Maps. Adv. Electr. Comp. Eng. 2021, 21, 75–82. [Google Scholar] [CrossRef]
- Corbett, A.T.; Anderson, J.R. Knowledge tracing: Modeling the Acquisition of Procedural Knowledge. User Model. User-Adapt. Interact. 1995, 4, 253–278. [Google Scholar] [CrossRef]
- Rastrollo-Guerrero, J.L.; Gómez-Pulido, J.A.; Durán-Domínguez, A. Analyzing and Predicting Students’ Performance by Means of Machine Learning: A Review. Appl. Sci. 2020, 10, 1042. [Google Scholar] [CrossRef] [Green Version]
- Pardos, Z.A.; Heffernan, N.T. KT-IDEM: Introducing Item Difficulty to the Knowledge Tracing Model. In Proceedings of the User Modeling, Adaption and Personalization, Girona, Spain, 11–15 July 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 243–254. [Google Scholar] [CrossRef]
- Nedungadi, P.; Remya, M.S. Predicting Students’ Performance on Intelligent Tutoring System—Personalized Clustered BKT (PC-BKT) Model. In Proceedings of the 2014 IEEE Frontiers in Education Conference (FIE) Proceedings, Madrid, Spain, 22–25 October 2014; IEEE Computer Society: Los Alamitos, CA, USA, 2014; pp. 1–6. [Google Scholar] [CrossRef]
- Piech, C.; Bassen, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L.; Sohl-Dickstein, J. Deep Knowledge Tracing. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS’15, Montreal, QC, Canada, 7–12 December 2015; MIT Press: Cambridge, MA, USA, 2015; Volume 1, pp. 505–513. [Google Scholar]
- Zhang, J.; Shi, X.; King, I.; Yeung, D.Y. Dynamic Key-Value Memory Networks for Knowledge Tracing. In Proceedings of the 26th International Conference on World Wide Web, WWW ’17, Perth, Australia, 3–7 April 2017; International World Wide Web Conferences Steering Committee: Geneva, Switzerland, 2017; pp. 765–774. [Google Scholar] [CrossRef] [Green Version]
- Shen, S.; Liu, Q.; Chen, E.; Wu, H.; Huang, Z.; Zhao, W.; Su, Y.; Ma, H.; Wang, S. Convolutional Knowledge Tracing: Modeling Individualization in Student Learning Process. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual, 25–30 July 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1857–1860. [Google Scholar] [CrossRef]
- Nakagawa, H.; Iwasawa, Y.; Matsuo, Y. Graph-based Knowledge Tracing: Modeling Student Proficiency Using Graph Neural Network. In Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence WI ’19, Thessaloniki, Greece, 14–17 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 156–163. [Google Scholar] [CrossRef]
- Graves, A.; Schmidhuber, J. Framewise Phoneme Classification with Bidirectional LSTM Networks. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 4, pp. 2047–2052. [Google Scholar] [CrossRef] [Green Version]
- Kim, B.H.; Vizitei, E.; Ganapathi, V. GritNet: Student Performance Prediction with Deep Learning. arXiv 2018, arXiv:1804.07405. [Google Scholar]
- Carmona, C.; Millán, E.; Pérez-de-la Cruz, J.L.; Trella, M.; Conejo, R. Introducing Prerequisite Relations in a Multi-layered Bayesian Student Model. In Proceedings of the User Modeling 2005, Edinburgh, UK, 24–29 July 2005; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2005; pp. 347–356. [Google Scholar] [CrossRef] [Green Version]
- Drasgow, F.; Hulin, C.L. Item Response Theory. In Handbook of Industrial and Organizational Psychology; Consulting Psychologists Press: Palo Alto, CA, USA, 1990; Volume 1, pp. 577–636. [Google Scholar]
- De la Torre, J. DINA Model and Parameter Estimation: A Didactic. J. Educ. Behav. Stat. 2009, 34, 115–130. [Google Scholar] [CrossRef] [Green Version]
- Zhang, L.; Xiong, X.; Zhao, S.; Botelho, A.; Heffernan, N.T. Incorporating Rich Features into Deep Knowledge Tracing. In Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale, L@S ’17, Cambridge, MA, USA, 20–21 April 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 169–172. [Google Scholar] [CrossRef] [Green Version]
- Dauphin, Y.N.; Fan, A.; Auli, M.; Grangier, D. Language Modeling with Gated Convolutional Networks. In Proceedings of the 34th International Conference on Machine Learning, ICML’17, Sydney, Australia, 6–11 August 2017; JMLR: Cambridge, MA, USA, 2017; pp. 933–941. [Google Scholar]
- Gori, M.; Monfardini, G.; Scarselli, F. A New Model for Learning in Graph Domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; Volume 2, pp. 729–734. [Google Scholar] [CrossRef]
- Abdelrahman, G.; Wang, Q. Knowledge Tracing with Sequential Key-Value Memory Networks. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’19, Paris, France, 21–25 July 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 175–184. [Google Scholar] [CrossRef] [Green Version]
- Liu, Q.; Huang, Z.; Yin, Y.; Chen, E.; Xiong, H.; Su, Y.; Hu, G. EKT: Exercise-Aware Knowledge Tracing for Student Performance Prediction. IEEE Trans. Knowl. Data Eng. 2019, 33, 100–115. [Google Scholar] [CrossRef] [Green Version]
- Pandey, S.; Karypis, G. A Self-Attentive model for Knowledge Tracing. arXiv 2019, arXiv:1907.06837. [Google Scholar] [CrossRef]
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Gang, W. Recent Advances in Convolutional Neural Networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef] [Green Version]
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Zeng, C.; Zhu, D.; Wang, Z.; Wu, M.; Xiong, W.; Zhao, N. Spatial and Temporal Learning Representation for End-to-End Recording Device Identification. EURASIP J. Adv. Signal Process. 2021, 2021, 41. [Google Scholar] [CrossRef]
- Wang, F.; Liu, Q.; Chen, E.; Huang, Z.; Chen, Y.; Yin, Y.; Huang, Z.; Wang, S. Neural Cognitive Diagnosis for Intelligent Education Systems. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6153–6161. [Google Scholar] [CrossRef]
- Feng, M.; Heffernan, N.; Koedinger, K. Addressing the Assessment Challenge with An Online System that Tutors as It Assesses. User Model. User-Adapt. Interact. 2009, 19, 243–266. [Google Scholar] [CrossRef] [Green Version]
- Xiong, X.; Zhao, S.; Van Inwegen, E.G.; Beck, J.E. Going Deeper with Deep Knowledge Tracing. In Proceedings of the 9th International Conference on Educational Data Mining, Raleigh, NC, USA, 29 June–2 July 2016; pp. 545–550. [Google Scholar]
- Koedinger, K.; Cunningham, K.; Skogsholm, A.; Leber, B.; Stamper, J. A Data Repository for the EDM Community. Handb. Educ. Data Min. 2010, 43, 43–56. [Google Scholar] [CrossRef]
Category | KTs | Year | Technology | AUC | Key Article | Utility | Characteristics |
---|---|---|---|---|---|---|---|
Probability Graph based KT | BKT | 1994 | HMM | 0.670 | Corbett et al. [8] | Programming performance prediction | Single knowledge |
KT-IDEM | 2011 | HMM+IRT | 0.690 | Pardos et al. [10] | Performance prediction on math exercises | Ignores long-term dependencies | 
Deep Learning based KT | DKT | 2015 | RNN/LSTM | 0.805 | Piech et al. [12] | Performance prediction on math courses | Multiple knowledge, single feature |
DKT+ | 2017 | RNN/LSTM | 0.863 | Zhang et al. [21] | Performance prediction on statistics courses | Multiple knowledge, multiple features | |
DKVMN | 2017 | MANN | 0.816 | Zhang et al. [13] | Performance prediction on math exercises | Adaptively update knowledge mastery | 
SKVMN | 2019 | LSTM+MANN | 0.836 | Abdelrahman et al. [24] | Performance prediction on scientific courses | Multiple knowledge, long-term dependencies | 
GKT | 2019 | GNN | 0.723 | Nakagawa et al. [15] | Performance prediction on math courses | Model relationship between exercises and knowledge | |
EKT | 2019 | LSTM | 0.850 | Liu et al. [25] | Performance prediction on math courses | Apply semantic information of exercises | |
SAKT | 2019 | FFN+MSA | 0.848 | Pandey et al. [26] | Performance prediction on scientific courses | Model relationship among knowledge points | |
CKT | 2020 | CNN | 0.825 | Shen et al. [14] | Performance prediction on math exercises | Extract spatial features |
Dataset | Students | Skills | Records |
---|---|---|---|
ASSISTment2009 | 4151 | 110 | 325,637 |
ASSISTment2015 | 19,840 | 100 | 683,801 |
Synthetic-5 | 4000 | 50 | 200,000 |
ASSISTchall | 1709 | 102 | 942,816 |
Statics2011 | 333 | 1223 | 189,297 |
Configuration Environment | Configuration Parameters |
---|---|
Operating System | Windows 10 64-bit |
GPU | GTX 1080 Ti |
CPU | E5 Series (4 cores) |
Memory | 16 GB |
Programming language | Python 3.6 |
Deep learning framework | Tensorflow 1.5 |
Python library | Numpy, Scikit-learn, Pandas, Matplotlib |
Hyperparameters | Value |
---|---|
Learning rate | 0.001 |
Rate of decay for the learning rate | 0.3 |
Steps before the decay | 8 |
Batch size | 50 |
Epochs | 10 |
Shape of filters of the conv1d | (16, 50, 50) |
Layers of the hierarchical convolution | 3 |
Keep probability for dropout of the convolution | 0.2 |
Number of units of each LSTM cell | 30 |
Output keep probability of LSTM cell | 0.3 |
Datasets | Models | RMSE | AUC | ACC | r2 |
---|---|---|---|---|---|
ASSISTment2009 | Ours | 0.2826 | 0.9591 | 0.8904 | 0.6440 |
DKT | 0.4006 | 0.8049 | 0.7663 | 0.2835 | |
CKT | 0.3912 | 0.8237 | 0.7748 | 0.3181 | |
ASSISTment2015 | Ours | 0.0766 | 0.9996 | 0.9948 | 0.9698 |
DKT | 0.4131 | 0.7235 | 0.7504 | 0.1235 | |
CKT | 0.4107 | 0.7322 | 0.7542 | 0.1338 | |
Synthetic-5 | Ours | 0.3167 | 0.9482 | 0.8790 | 0.5766 |
DKT | 0.4109 | 0.8167 | 0.7475 | 0.2868 | |
CKT | 0.4051 | 0.8279 | 0.7553 | 0.3075 | |
ASSISTchall | Ours | 0.2716 | 0.9710 | 0.9078 | 0.6847 |
DKT | 0.4538 | 0.7022 | 0.6791 | 0.1198 | |
CKT | 0.4500 | 0.7127 | 0.6857 | 0.1342 | |
Statics2011 | Ours | 0.2772 | 0.9499 | 0.9010 | 0.5621 |
DKT | 0.3697 | 0.8012 | 0.8054 | 0.2216 | |
CKT | 0.3630 | 0.8232 | 0.8101 | 0.2496 |
Datasets | Models | RMSE | AUC | ACC | r2 |
---|---|---|---|---|---|
ASSISTment2009 | DKT-STDRL | 0.2826 | 0.9591 | 0.8904 | 0.6440 |
DKT-TDRL | 0.2845 | 0.9574 | 0.8896 | 0.6394 | |
CKT | 0.3953 | 0.8154 | 0.7687 | 0.3039 | |
DKT-SDRL1 | 0.4265 | 0.7567 | 0.7370 | 0.1894 | |
DKT | 0.4293 | 0.7509 | 0.7338 | 0.1788 | |
DKT-STDRRP | 0.2878 | 0.9557 | 0.8861 | 0.6310 | |
DKT-STDRRJ | 0.4531 | 0.6736 | 0.6890 | 0.0853 | |
ASSISTment2015 | DKT-STDRL | 0.0766 | 0.9996 | 0.9948 | 0.9698 |
DKT-TDRL | 0.0799 | 0.9995 | 0.9942 | 0.9672 | |
CKT | 0.4099 | 0.7343 | 0.7546 | 0.1370 | |
DKT-SDRL1 | 0.4176 | 0.7070 | 0.7475 | 0.1043 | |
DKT | 0.4183 | 0.7042 | 0.7468 | 0.1015 | |
DKT-STDRRP | 0.0801 | 0.9995 | 0.9943 | 0.9671 | |
DKT-STDRRJ | 0.4053 | 0.7581 | 0.7511 | 0.1563 | |
Synthetic-5 | DKT-STDRL | 0.3167 | 0.9482 | 0.8790 | 0.5766 |
DKT-TDRL | 0.3186 | 0.9469 | 0.8776 | 0.5714 | |
CKT | 0.4081 | 0.8241 | 0.7513 | 0.2972 | |
DKT-SDRL1 | 0.4496 | 0.7283 | 0.6754 | 0.1469 | |
DKT | 0.4505 | 0.7269 | 0.6752 | 0.1432 | |
DKT-STDRRP | 0.3194 | 0.9461 | 0.8768 | 0.5693 | |
DKT-STDRRJ | 0.4716 | 0.6485 | 0.6417 | 0.0612 | |
ASSISTchall | DKT-STDRL | 0.2716 | 0.9710 | 0.9078 | 0.6847 |
DKT-TDRL | 0.2790 | 0.9675 | 0.9009 | 0.6671 | |
CKT | 0.4503 | 0.7119 | 0.6846 | 0.1330 | |
DKT-SDRL1 | 0.4683 | 0.6431 | 0.6642 | 0.0627 | |
DKT | 0.4682 | 0.6430 | 0.6648 | 0.0628 | |
DKT-STDRRP | 0.2854 | 0.9633 | 0.8942 | 0.6518 | |
DKT-STDRRJ | 0.4493 | 0.7170 | 0.6935 | 0.1371 | |
Statics2011 | DKT-STDRL | 0.2772 | 0.9499 | 0.9010 | 0.5621 |
DKT-TDRL | 0.2891 | 0.9395 | 0.8956 | 0.5220 | |
CKT | 0.3630 | 0.8241 | 0.8099 | 0.2496 | |
DKT-SDRL1 | 0.3724 | 0.7942 | 0.8042 | 0.2103 | |
DKT | 0.3739 | 0.7891 | 0.8023 | 0.2036 | |
DKT-STDRRP | 0.2974 | 0.9275 | 0.8827 | 0.4940 | |
DKT-STDRRJ | 0.3756 | 0.7873 | 0.7995 | 0.1967 |
Share and Cite
Lyu, L.; Wang, Z.; Yun, H.; Yang, Z.; Li, Y. Deep Knowledge Tracing Based on Spatial and Temporal Representation Learning for Learning Performance Prediction. Appl. Sci. 2022, 12, 7188. https://doi.org/10.3390/app12147188