Predicting and Interpreting Students’ Grades in Distance Higher Education through a Semi-Regression Method
Abstract
1. Introduction
- Increased computational cost as the size of the provided datasets grows;
- Operation in transductive mode, whose complexity is prohibitive for most real-life cases and which, at the same time, precludes the extraction of an inductive mechanism as a generic solution;
2. Interpretability in Machine Learning
- What does each attribute represent?
- Why did the model make a specific prediction?
- Which are the key factors behind a specific prediction?
- Why was a specific student assigned a failing grade?
- Can we describe what the model has learned?
- How confident are we in the decisions of the model?
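A minimal, hypothetical sketch of how such questions can be approached in practice with SHAP values (the attribution technique cited later in this work) is given below; the toy data, attribute names, and gradient boosting model are illustrative placeholders rather than the study's actual pipeline.

```python
# Hypothetical example: exposing the key factors behind one predicted grade with SHAP.
# The synthetic data and attribute names are placeholders for the educational dataset.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((200, 4)), columns=["Wri1", "Wri2", "Ocs1", "L1"])
y = 5 * X["Wri1"] + 3 * X["Wri2"] + rng.normal(0, 0.5, 200)   # synthetic grades

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# local explanation: contribution of each attribute to the first student's prediction
print(dict(zip(X.columns, shap_values[0].round(3))))
# global view: mean absolute contribution of each attribute over all students
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(3))))
```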
3. Related Work
- Semi-regression,
- Early prognosis, and
- Interpretation of features.
4. Dataset Description
- The demographic set S1 = {Gender, NewStudent} (Table 1). The distribution of male and female students was 76.5% and 23.5%, respectively. In addition, 87.5% of the students had enrolled in the course for the first time, while the rest had failed the previous year’s final exams.
- The academic performance set S2 = {Ocsi, Wrii} (Table 2). The attribute Ocsi records a student’s absence or presence in the i-th optional contact session, while the attribute Wrii represents the student’s grade (on a ten-point scale) in the i-th written assignment. Four written assignments had to be submitted during the academic year, and a minimum total score on these assignments was required for a student to be allowed to take the final exam.
- The LMS activity set S3 = {Li, V1i, V2i, V3i, V4i, V5i, P1i, P2i, P3i, P4i} (Table 3). These attributes monitor student activity on the online LMS course (i.e., logins, views, and posts).
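The following hypothetical sketch shows how the three attribute sets could be assembled into a data frame and separated into two feature views for the wrapper of Section 5; the file name, column-prefix rules, target column, and the demographic/performance versus LMS split are illustrative assumptions, not necessarily the exact setup used in this study.

```python
# Hypothetical assembly of the attribute sets S1, S2, S3 into two feature views;
# "students.csv" and "FinalGrade" are illustrative names, not the study's actual export.
import pandas as pd

df = pd.read_csv("students.csv")

S1 = ["Gender", "NewStudent"]                                   # demographic set
S2 = [c for c in df.columns if c.startswith(("Ocs", "Wri"))]    # academic performance set
S3 = [c for c in df.columns if c.startswith(("L", "V", "P"))]   # LMS activity set

X_view1 = df[S1 + S2]          # e.g., demographics plus written-assignment performance
X_view2 = df[S3]               # e.g., LMS activity traces (logins, views, posts)
y = df["FinalGrade"]           # known only for the labeled part of the student cohort
```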
5. Proposed Semi-Supervised Regression Wrapper Scheme
- Semi-Supervised Classification (SSC). The labels of the output attribute are discrete, i.e., y takes values in a finite set of classes {c1, c2, …, ck}.
- Semi-Supervised Regression (SSR). The labels of the output attribute are real-valued, i.e., y ∈ ℝ.
Algorithm 1. The extended framework of the COREG algorithm.

Framework: Pool-based COREG(D, selector1, selector2, regressor1, regressor2)

Input:
• D = L ∪ U: the initially collected labeled (L) and unlabeled (U) instances, where L = {(xk, yk)} and U = {xu}
• F1, F2: the split of the original feature space F, where F1 ∪ F2 = F and F1 ∩ F2 = ∅
• Max_iter: maximum number of semi-supervised iterations; f: performance metric

Main process:
• Set iter = 1, consistentSet = ∅
• Train selectori, regressori on Li (the projection of L onto Fi), for i ∈ {1, 2}
• While iter ≤ Max_iter do
  • For each i ∈ {1, 2} do
    • For each xj ∈ U do
      • Compute Δj based on selectori: the improvement of metric f over the labeled neighbors of xj when the pseudo-labeled pair (xj, regressori(xj)) is added to Li
      • If Δj > 0, add j to consistentSet
  • If consistentSet is empty do
    • iter := iter + 1 and continue to the next iteration
  • else do
    • Find the index j* of consistentSet s.t. Δj* is maximized
    • Update U := U \ {xj*}
    • For each i ∈ {1, 2} do
      • Update L~i := L~i ∪ {(xj*, regressori(xj*))}, where ~i means the opposite index of the current i
      • Retrain selectori, regressori on Li
  • iter := iter + 1

Output:
• Apply the next rule to each newly met (unseen) instance x: ŷ(x) = (regressor1(x) + regressor2(x)) / 2
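To make the wrapper concrete, the following is a minimal Python sketch of the co-training loop of Algorithm 1, assuming scikit-learn estimators. For brevity, both views share the full feature set (so diversity must come from the two selectors and regressors), the candidate pool is simply the whole unlabeled set, and the loop stops when no consistent candidate remains; function and variable names are illustrative rather than the authors' implementation.

```python
# Minimal sketch of the co-training loop of Algorithm 1 with scikit-learn estimators.
# Names (coreg_fit, max_iter, ...) are illustrative, not the authors' implementation.
import numpy as np
from sklearn.base import clone


def coreg_fit(X_lab, y_lab, X_unl, selectors, regressors, max_iter=100):
    """Co-train two regressors by exchanging confidently pseudo-labeled instances."""
    L = [(X_lab.copy(), y_lab.copy()) for _ in range(2)]    # one labeled pool per view
    U = X_unl.copy()
    models = [clone(regressors[i]).fit(*L[i]) for i in range(2)]

    for _ in range(max_iter):
        if len(U) == 0:
            break
        best = None                                         # (delta, view i, index j, pseudo-label)
        for i in range(2):
            knn = clone(selectors[i]).fit(*L[i])            # neighborhood structure of view i
            Xi, yi = L[i]
            for j, x in enumerate(U):
                y_hat = models[i].predict(x.reshape(1, -1))[0]
                nbrs = knn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
                # metric f: squared error on the labeled neighbors, before/after
                # adding the pseudo-labeled candidate to the training set
                before = np.mean((yi[nbrs] - models[i].predict(Xi[nbrs])) ** 2)
                refined = clone(regressors[i]).fit(np.vstack([Xi, x]),
                                                   np.append(yi, y_hat))
                after = np.mean((yi[nbrs] - refined.predict(Xi[nbrs])) ** 2)
                delta = before - after                      # > 0 means a "consistent" candidate
                if delta > 0 and (best is None or delta > best[0]):
                    best = (delta, i, j, y_hat)
        if best is None:
            break                                           # empty consistentSet: stop early
        _, i, j, y_hat = best
        # teach the opposite view (~i) with the selected pseudo-labeled pair
        Xo, yo = L[1 - i]
        L[1 - i] = (np.vstack([Xo, U[j]]), np.append(yo, y_hat))
        U = np.delete(U, j, axis=0)
        models = [clone(regressors[k]).fit(*L[k]) for k in range(2)]
    return models


def coreg_predict(models, X):
    # final rule: average the two views' predictions for each unseen instance
    return np.mean([m.predict(X) for m in models], axis=0)
```

In this sketch, selectors could be the two kNN models with different Minkowski powers described in Section 6, while regressors could be any pair of identical scikit-learn regressors (e.g., two Gradient Boosting models).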
6. Experimental Process and Results
- First scenario:
- Second scenario:
- (selector1, selector2): We selected the kNN algorithm for detecting the appropriate neighbors and fitting local models. Following the original COREG scheme, and in order to inject further diversity between the two views, we used a different power parameter for the distance metric that forms each neighborhood: Euclidean distance for the first selector and the Minkowski distance of 5th power for the second. Moreover, we examined four separate cases based on the number of nearest neighbors considered per view: (k1, k2) ≡ (1, 1), (1, 3), (3, 1), and (3, 3).
- (regressor1, regressor2): A pair of identical models was used for this choice. To be more specific, we used kNN (denoted 3NN in the result tables), a typical Linear regressor (LR), the Gradient Boosting regressor, an additive model built in a forward-stagewise manner with two different loss functions, least squares regression (ls) and ‘huber’, a hybrid between ls and least absolute deviation, depicted as GB(ls) and GB(huber), and a multi-layer perceptron that optimizes the squared loss using the ‘lbfgs’ quasi-Newton solver. The last regressor is denoted as MLP and was used with its default hyperparameters: the ‘ReLU’ activation function and one hidden layer with 100 neurons. Although we also investigated further modifications of the internal parameters of each learner, as well as combinations that keep the same learning model but use distinct regressors per view (e.g., training GB(ls) on the first view and GB(huber) on the second), neither option served our aims nor yielded any notable improvement. More information can be found in Reference [41]. A brief scikit-learn sketch of these selector and regressor choices is given below.
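The sketch assumes scikit-learn; hyperparameters not mentioned in the text are left at the library defaults, and the 'squared_error' loss name corresponds to the older 'ls' option.

```python
# Sketch of the (selector, regressor) pairs described above, using scikit-learn;
# unspecified hyperparameters are left at their defaults.
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

# selectors: kNN neighborhoods with different Minkowski powers to diversify the views
k1, k2 = 3, 3                                            # any of (1, 1), (1, 3), (3, 1), (3, 3)
selector1 = KNeighborsRegressor(n_neighbors=k1, p=2)     # Euclidean distance
selector2 = KNeighborsRegressor(n_neighbors=k2, p=5)     # Minkowski distance of 5th power

# candidate regressors (the same model type is used in both views)
regressor_choices = {
    "3NN":       KNeighborsRegressor(n_neighbors=3),
    "LR":        LinearRegression(),
    "GB(ls)":    GradientBoostingRegressor(loss="squared_error"),  # 'ls' in older versions
    "GB(huber)": GradientBoostingRegressor(loss="huber"),
    "MLP":       MLPRegressor(solver="lbfgs", activation="relu",
                              hidden_layer_sizes=(100,)),
}
```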
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Baker, R.S.J.D.; Yacef, K. The state of educational data mining in 2009: A review and future visions. JEDM J. Educ. Data Min. 2009, 1, 3–17.
- Baker, R. Data mining for education. Int. Encycl. Educ. 2010, 7, 112–118.
- Costa, E.D.B.; Fonseca, B.; Santana, M.A.; De Araújo, F.F.; Rego, J.B.A. Evaluating the effectiveness of educational data mining techniques for early prediction of students’ academic failure in introductory programming courses. Comput. Hum. Behav. 2017, 73, 247–256.
- Márquez-Vera, C.; Cano, A.; Romero, C.; Noaman, A.Y.M.; Fardoun, H.M.; Ventura, S. Early dropout prediction using data mining: A case study with high school students. Expert Syst. 2016, 33, 107–124.
- Kostopoulos, G.; Karlos, S.; Kotsiantis, S.B. Multiview Learning for Early Prognosis of Academic Performance: A Case Study. IEEE Trans. Learn. Technol. 2019, 12, 212–224.
- Shelton, B.E.; Hung, J.-L.; Lowenthal, P.R. Predicting student success by modeling student interaction in asynchronous online courses. Distance Educ. 2017, 38, 59–69.
- Rahman, M.; Watanobe, Y.; Nakamura, K. Source Code Assessment and Classification Based on Estimated Error Probability Using Attentive LSTM Language Model and Its Application in Programming Education. Appl. Sci. 2020, 10, 2973.
- Zhu, X. Semi-Supervised Learning Literature Survey; University of Wisconsin-Madison: Madison, WI, USA, 2006.
- Kostopoulos, G.; Karlos, S.; Kotsiantis, S.; Ragos, O. Semi-supervised regression: A recent review. J. Intell. Fuzzy Syst. 2018, 35, 1483–1500.
- Van Engelen, J.E.; Hoos, H. A survey on semi-supervised learning. Mach. Learn. 2019, 109, 373–440.
- Sun, S. A survey of multi-view machine learning. Neural Comput. Appl. 2013, 23, 2031–2038.
- Xu, C.; Tao, D.; Xu, C. A Survey on Multi-view Learning. arXiv 2013, arXiv:1304.5634.
- Dong, X.; Yu, Z.; Cao, W.; Shi, Y.; Ma, Q. A survey on ensemble learning. Front. Comput. Sci. 2020, 14, 241–258.
- Karlos, S.; Fazakis, N.; Kalleris, K.; Kanas, V.G.; Kotsiantis, S.B. An incremental self-trained ensemble algorithm. In Proceedings of the IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), Rhodes, Greece, 25–27 May 2018; pp. 1–8.
- Karlos, S.; Fazakis, N.; Kotsiantis, S.; Sgarbas, K. Self-Trained Stacking Model for Semi-Supervised Learning. Int. J. Artif. Intell. Tools 2017, 26.
- Fu, B.; Wang, Z.; Xu, G.; Cao, L. Multi-label learning based on iterative label propagation over graph. Pattern Recognit. Lett. 2014, 42, 85–90.
- Kang, Z.; Lu, X.; Yi, J.; Xu, Z. Self-weighted multiple kernel learning for graph-based clustering and semi-supervised classification. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 2312–2318.
- Wang, B.; Tsotsos, J.K. Dynamic label propagation for semi-supervised multi-class multi-label classification. Pattern Recognit. 2016, 52, 75–84.
- Luo, Y.; Ji, R.; Guan, T.; Yu, J.; Liu, P.; Yang, Y. Every node counts: Self-ensembling graph convolutional networks for semi-supervised learning. Pattern Recognit. 2020, 106, 107451.
- Ribeiro, F.D.S.; Calivá, F.; Swainson, M.; Gudmundsson, K.; Leontidis, G.; Kollias, S. Deep Bayesian Self-Training. Neural Comput. Appl. 2019, 32, 4275–4291.
- Iscen, A.; Tolias, G.; Avrithis, Y.; Chum, O. Label Propagation for Deep Semi-Supervised Learning. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 5065–5074.
- Akusok, A.; Gritsenko, A.; Miche, Y.; Björk, K.-M.; Nian, R.; Lauren, P.; Lendasse, A. Adding reliability to ELM forecasts by confidence intervals. Neurocomputing 2017, 219, 232–241.
- Conati, C.; Porayska-Pomsta, K.; Mavrikis, M. AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling. arXiv 2018, arXiv:1807.00154.
- Liz-Domínguez, M.; Caeiro-Rodríguez, M.; Llamas, M.; Mikic-Fonte, F.A. Systematic Literature Review of Predictive Analysis Tools in Higher Education. Appl. Sci. 2019, 9, 5569.
- Zhou, Z.-H.; Li, M. Semi-Supervised Regression with Co-Training. 2005. Available online: https://dl.acm.org/citation.cfm?id=1642439 (accessed on 31 October 2020).
- Štrumbelj, E.; Kononenko, I. Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 2014, 41, 647–665.
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. SSRN Electron. J. 2017.
- Kanakaris, N.; Karacapilidis, N.; Kournetas, G. On the Exploitation of Textual Descriptions for a Better-informed Task Assignment Process. In Proceedings of the 9th International Conference on Operations Research and Enterprise Systems (ICORES), Valletta, Malta, 22–24 February 2020; Parlier, G.H., Liberatore, F., Demange, M., Eds.; SCITEPRESS: Setúbal, Portugal, 2020; pp. 304–310.
- Chatzimparmpas, A.; Martins, R.M.; Jusufi, I.; Kerren, A. A survey of surveys on the use of visualization for interpreting machine learning models. Inf. Vis. 2020, 19, 207–233.
- Lipton, Z.C. The mythos of model interpretability. Queue 2018, 16, 31–57.
- Hosseini, B.; Hammer, B. Interpretable Discriminative Dimensionality Reduction and Feature Selection on the Manifold. Lect. Notes Comput. Sci. 2020, 11906 LNAI, 310–326.
- Plumb, G.; Molitor, D.; Talwalkar, A.S. Model Agnostic Supervised Local Explanations. Adv. Neural Inf. Process. Syst. 2018, 2520–2529.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
- Tan, S.; Caruana, R.; Hooker, G.; Lou, Y. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society—AIES ’18; ACM Press: New York, NY, USA, 2018; pp. 303–310.
- Mollas, I.; Bassiliades, N.; Vlahavas, I.P.; Tsoumakas, G. LionForests: Local interpretation of random forests. In First International Workshop on New Foundations for Human-Centered AI (NeHuAI 2020); Saffioti, A., Serafini, L., Lukowicz, P., Eds.; CEUR: Aachen, Germany, 2020; pp. 17–24.
- Houidi, S.; Fourer, D.; Auger, F. On the Use of Concentrated Time–Frequency Representations as Input to a Deep Convolutional Neural Network: Application to Non Intrusive Load Monitoring. Entropy 2020, 22, 911.
- Lundberg, S.M.; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. Adv. Neural Inf. Process. Syst. 2017, 4768–4777.
- Nuñez, H.; Maldonado, G.; Astudillo, C. Semi-supervised regression based on tree SOMs for predicting students performance. IET Conf. Publ. 2018, CP745, 65–71.
- Kostopoulos, G.; Kotsiantis, S.; Fazakis, N.; Koutsonikos, G.; Pierrakeas, C. A Semi-Supervised Regression Algorithm for Grade Prediction of Students in Distance Learning Courses. Int. J. Artif. Intell. Tools 2019, 28, 1940001.
- Hady, M.; Schwenker, F. Co-Training by Committee: A Generalized Framework for Semi-Supervised Learning with Committees. Int. J. Softw. Inform. 2008, 2, 95–124.
- Brefeld, U.; Gärtner, T.; Scheffer, T.; Wrobel, S. Efficient co-regularised least squares regression. In Proceedings of the 23rd International Conference on Machine Learning (ICML 2006), Pittsburgh, PA, USA, 25–29 June 2006; pp. 137–144.
- Liang, R.Z.; Xie, W.; Li, W.; Du, X.; Wang, J.J.Y.; Wang, J. Semi-supervised structured output prediction by local linear regression and sub-gradient descent. arXiv 2016, arXiv:1606.02279.
- Levatić, J.; Ceci, M.; Kocev, D.; Džeroski, S. Self-training for multi-target regression with tree ensembles. Knowledge-Based Syst. 2017, 123, 41–60.
- Kim, S.W.; Lee, Y.G.; Tama, B.A.; Lee, S. Reliability-Enhanced Camera Lens Module Classification Using Semi-Supervised Regression Method. Appl. Sci. 2020, 10, 3832.
- Chapelle, O.; Scholkopf, B.; Zien, A. Semi-Supervised Learning (Chapelle, O. et al., Eds.; 2006) [Book Reviews]. IEEE Trans. Neural Networks 2009, 20, 542.
- Zhou, Z.-H.; Li, M. Semi-supervised learning by disagreement. Knowl. Inf. Syst. 2010, 24, 415–439.
- Barreto, C.A.S.; Gorgônio, A.; Canuto, A.M.P.; João, C.X., Jr. A Distance-Weighted Selection of Unlabelled Instances for Self-training and Co-training Semi-supervised Methods. In BRACIS; Springer: Cham, Switzerland, 2020; pp. 352–366.
- Liu, Y.; Wang, L.; Mammadov, M. Learning semi-lazy Bayesian network classifier under the c.i.i.d assumption. Knowledge-Based Syst. 2020, 208, 106422.
- Fazakis, N.; Karlos, S.; Kotsiantis, S.; Sgarbas, K. A multi-scheme semi-supervised regression approach. Pattern Recognit. Lett. 2019, 125, 758–765.
- Guo, X.; Uehara, K. Graph-based Semi-Supervised Regression and Its Extensions. Int. J. Adv. Comput. Sci. Appl. 2015, 6.
- Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient kNN Classification with Different Numbers of Nearest Neighbors. IEEE Trans. Neural Networks Learn. Syst. 2018, 29, 1774–1785.
- Karlos, S.; Kanas, V.G.; Aridas, C.; Fazakis, N.; Kotsiantis, S. Combining Active Learning with Self-train algorithm for classification of multimodal problems. In Proceedings of the 10th International Conference on Information, Intelligence, Systems and Applications (IISA), Patras, Greece, 15–17 July 2019; pp. 1–8.
- Nigam, K.; Ghani, R. Understanding the Behavior of Co-training. Softw. Pract. Exp. 2006, 36, 835–844.
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Vanderplas, J. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Lundberg, S.M.; Erion, G.; Chen, H.; Degrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.-I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67.
- Li, J.; Zhu, Q. A boosting Self-Training Framework based on Instance Generation with Natural Neighbors for K Nearest Neighbor. Appl. Intell. 2020, 50, 3535–3553.
- Yao, J.; Qin, S.; Qiao, S.; Che, W.; Chen, Y.; Su, G.; Miao, Q. Assessment of Landslide Susceptibility Combining Deep Learning with Semi-Supervised Learning in Jiaohe County, Jilin Province, China. Appl. Sci. 2020, 10, 5640.
- Peikari, M.; Salama, S.; Nofech-Mozes, S.; Martel, A.L. A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification. Sci. Rep. 2018, 8, 1–13.
- Tsiakmaki, M.; Kostopoulos, G.; Kotsiantis, S.B.; Ragos, O. Transfer Learning from Deep Neural Networks for Predicting Student Performance. Appl. Sci. 2020, 10, 2145.
- Wang, G.; Zhang, G.; Choi, K.-S.; Lam, K.-M.; Lu, J. Output based transfer learning with least squares support vector machine and its application in bladder cancer prognosis. Neurocomputing 2020, 387, 279–292.
- Karlos, S.; Kostopoulos, G.; Kotsiantis, S.B. A Soft-Voting Ensemble Based Co-Training Scheme Using Static Selection for Binary Classification Problems. Algorithms 2020, 13, 26.
- Yi, Y.; Chen, Y.; Dai, J.; Gui, X.; Chen, C.; Lei, G.; Wang, W. Semi-Supervised Ridge Regression with Adaptive Graph-Based Label Propagation. Appl. Sci. 2018, 8, 2636.
Table 1. The demographic attribute set S1.

Attribute Name | Description | Values
---|---|---
Gender | Student gender | male, female |
New Student | A student enrolled in the course for the first time | yes, no |
Table 2. The academic performance attribute set S2.

Attribute Name | Description | Values
---|---|---
Ocsi | Student presence in the i-th optional contact session | absence, presence |
Wrii | Student grade in the i-th written assignment | [0, 10] |
Table 3. The LMS activity attribute set S3.

Attribute Name | Description | Values
---|---|---
Li | Total number of student logins | integer |
V1i | Number of student views in the pseudo-code forum | integer |
V2i | Number of student views in the compiler forum | integer |
V3i | Number of student views in the module forum | integer |
V4i | Number of student views in the course forum | integer |
V5i | Number of student views in the course news | integer |
P1i | Number of student posts in the pseudo-code forum | integer |
P2i | Number of student posts in the compiler forum | integer |
P3i | Number of student posts in the module forum | integer |
P4i | Number of student posts in the course forum | integer |
Size | Selectori (k1, k2) | 3NN | LR | GB (ls) | GB (huber) | MLP
---|---|---|---|---|---|---
50 | (1,3) | 3.63 (±2.46) | 10.06 (±6.42) | 6.83 (±0.89) | 3.78 (±1.62) | 8.93 (±6.00)
50 | (3,1) | 3.38 (±1.47) | 10.51 (±8.12) | 6.66 (±3.15) | 5.07 (±1.09) | 14.33 (±8.91)
50 | (3,3) | 3.37 (±3.49) | 15.27 (±8.15) | 7.22 (±2.72) | 4.60 (±2.19) | 18.18 (±11.08)
100 | (1,3) | 2.28 (±3.00) | 2.02 (±1.34) | 1.81 (±1.66) | 5.08 (±1.64) | 5.82 (±4.49)
100 | (3,1) | 2.33 (±1.70) | 2.89 (±1.33) | 1.95 (±1.63) | 5.89 (±2.19) | 6.14 (±5.11)
100 | (3,3) | 2.26 (±2.76) | 3.21 (±2.46) | 2.42 (±2.84) | 7.05 (±1.87) | 9.42 (±4.77)
150 | (1,3) | 2.52 (±1.47) | 0.63 (±0.72) | 5.43 (±2.85) | 3.32 (±2.14) | 3.73 (±2.47)
150 | (3,1) | 5.80 (±3.94) | 1.07 (±1.62) | 4.86 (±3.02) | 3.44 (±1.28) | 9.15 (±12.48)
150 | (3,3) | 1.88 (±0.52) | 2.07 (±1.53) | 6.97 (±5.35) | 2.67 (±1.31) | 8.97 (±15.04)
200 | (1,3) | 2.83 (±1.82) | 1.16 (±1.06) | 3.31 (±2.53) | 3.40 (±2.76) | 4.95 (±2.46)
200 | (3,1) | 4.04 (±3.41) | 0.67 (±0.59) | 2.93 (±2.56) | 3.52 (±2.56) | 3.05 (±1.94)
200 | (3,3) | 0.88 (±1.05) | 0.45 (±0.57) | 4.58 (±3.34) | 5.41 (±2.39) | 5.85 (±2.66)
Size | Selectori (k1, k2) | 3NN | LR | GB (ls) | GB (huber) | MLP
---|---|---|---|---|---|---
50 | (1,3) | 5.79 (±1.89) | 21.85 (±8.10) | 4.47 (±2.70) | 7.55 (±2.12) | 12.51 (±7.92)
50 | (3,1) | 7.26 (±4.22) | 22.65 (±7.89) | 5.36 (±3.35) | 7.70 (±3.99) | 11.25 (±5.28)
50 | (3,3) | 2.81 (±2.06) | 30.69 (±13.83) | 7.28 (±5.17) | 8.40 (±3.72) | 18.17 (±7.16)
100 | (1,3) | 5.56 (±1.85) | 8.30 (±6.64) | 6.00 (±1.18) | 4.64 (±4.52) | 9.26 (±7.55)
100 | (3,1) | 3.65 (±2.72) | 8.04 (±8.18) | 6.62 (±2.61) | 2.92 (±3.19) | 6.91 (±2.80)
100 | (3,3) | 1.91 (±2.16) | 11.91 (±13.15) | 7.57 (±2.50) | 2.76 (±2.57) | 10.77 (±8.35)
150 | (1,3) | 8.00 (±2.54) | 6.50 (±2.71) | 4.16 (±2.44) | 6.64 (±3.00) | 16.36 (±9.00)
150 | (3,1) | 6.35 (±5.32) | 6.03 (±2.90) | 4.72 (±3.25) | 6.47 (±3.87) | 3.41 (±4.76)
150 | (3,3) | 1.85 (±2.54) | 9.46 (±6.59) | 5.46 (±3.83) | 8.57 (±5.82) | 15.41 (±10.79)
200 | (1,3) | 2.35 (±2.53) | 1.48 (±1.19) | 3.94 (±2.37) | 3.86 (±2.52) | 13.25 (±6.64)
200 | (3,1) | 1.80 (±1.02) | 1.55 (±1.31) | 3.88 (±2.76) | 3.94 (±3.33) | 7.21 (±6.20)
200 | (3,3) | 1.64 (±1.31) | 2.61 (±2.56) | 5.91 (±5.13) | 5.27 (±2.44) | 15.36 (±10.65)