Leveraging AI and Machine Learning for National Student Survey: Actionable Insights from Textual Feedback to Enhance Quality of Teaching and Learning in UK’s Higher Education
Abstract
1. Introduction
2. Literature Review
2.1. Student Evaluation of Teaching
2.2. Text Mining
3. Methodology: Information Extraction and Teaching Intervention
- Phase 1—Information Extraction
  - Annotation Scheme Development: to develop a set of fine-grained categories for students’ free-text responses, to be used in the analysis.
  - Automated Text Response Categorization: to apply state-of-the-art supervised machine learning methods (classification) to categorize students’ free-text responses automatically, using the annotation scheme developed earlier.
  - Issue Extraction and Summarization: to apply state-of-the-art unsupervised machine learning methods (topic modeling) to automatically identify the key issues in each response category and the top-10 issues overall, taking the institutional context into account.
- Phase 2—Teaching Intervention
  - Design: to draw up an action list of teaching interventions addressing the issues identified above.
  - Implementation: to implement these actions in selected modules.
  - Evaluation: to conduct an end-of-semester survey in each selected module and evaluate its results.
3.1. Annotation Scheme Development
3.2. Free-Text Analysis—Machine Learning Approach
3.2.1. Framework Overview
- (1) Classification: responses were grouped into finer-grained categories (e.g., teaching quality, assessment) aligned with the NSS categories, both to surface more specific issues and to improve the performance of the subsequent topic modeling.
- (2) Topic modeling: within each category, the salient topics were extracted and interpreted as concrete issues on which teaching interventions could be built (an end-to-end sketch of this two-stage framework is given below).
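This two-stage framework can be pictured end to end in a few lines of code. The sketch below is illustrative rather than the implementation used in the study: it assumes scikit-learn, TF-IDF features with a linear SVM for the classification stage, and LDA for the topic-modeling stage, and the function name and the ten-terms-per-topic summary are choices made here for clarity.

```python
# Illustrative sketch of the two-stage framework (not the study's own code):
# stage 1 classifies each free-text response into an annotation-scheme category,
# stage 2 runs topic modelling separately within each predicted category.
from collections import defaultdict

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def classify_then_topic_model(labelled, unlabelled, n_topics=5):
    """labelled: iterable of (response_text, category); unlabelled: iterable of texts."""
    unlabelled = list(unlabelled)

    # Stage 1: supervised classification (assumed TF-IDF + linear SVM).
    texts, categories = zip(*labelled)
    classifier = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
    classifier.fit(texts, categories)
    predicted = classifier.predict(unlabelled)

    # Group the automatically labelled responses by category.
    by_category = defaultdict(list)
    for text, category in zip(unlabelled, predicted):
        by_category[category].append(text)

    # Stage 2: unsupervised topic modelling (LDA) within each category.
    issues = {}
    for category, docs in by_category.items():
        vectorizer = CountVectorizer(stop_words="english")
        counts = vectorizer.fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        lda.fit(counts)
        vocabulary = vectorizer.get_feature_names_out()
        issues[category] = [
            [vocabulary[i] for i in topic.argsort()[-10:][::-1]]  # top-10 terms
            for topic in lda.components_
        ]
    return issues
```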
3.2.2. Automated Text Response Categorization—Classification
- (1) Labeled data: the text responses categorized manually against the annotation scheme (20% of the total data), used to train the classifier for the supervised text-mining task.
- (2) Unlabeled data: the remaining 80% of the text responses, categorized automatically by the trained classifier (see the sketch below).
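Since gold labels exist only for the manually annotated 20%, the classifier’s quality is best estimated on that subset before it is applied to the remaining 80%. A minimal sketch of that workflow follows, under the same illustrative assumptions as above (scikit-learn, TF-IDF features, linear SVM); the 5-fold setup and macro-averaged F1 are illustrative choices, not settings reported in the paper.

```python
# Minimal sketch: cross-validate on the annotated 20%, then label the other 80%.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def label_remaining_responses(labelled_texts, labelled_categories, unlabelled_texts):
    classifier = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), stop_words="english"), LinearSVC()
    )
    # Macro-F1 weights all annotation-scheme categories equally.
    scores = cross_val_score(
        classifier, labelled_texts, labelled_categories, cv=5, scoring="f1_macro"
    )
    print(f"5-fold macro-F1 on the annotated subset: {scores.mean():.2f}")

    # Train on all labelled responses and categorize the unlabelled remainder.
    classifier.fit(labelled_texts, labelled_categories)
    return classifier.predict(unlabelled_texts)
```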
3.2.3. Issue Extraction and Summarization—Topic Modeling
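For the topic-modeling step itself, one common recipe is to fit LDA within a response category for several candidate topic counts, keep the most coherent model, and read each topic’s highest-probability terms as a candidate issue. The sketch below assumes gensim and naive whitespace tokenization purely for illustration; the candidate topic counts and the c_v coherence measure are illustrative choices rather than the settings reported here.

```python
# Illustrative per-category topic modelling: pick the topic count by coherence,
# then summarise each topic by its ten highest-probability terms.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel


def summarise_issues(category_responses, candidate_topic_counts=(3, 5, 8), top_n=10):
    tokens = [response.lower().split() for response in category_responses]
    dictionary = Dictionary(tokens)
    corpus = [dictionary.doc2bow(doc) for doc in tokens]

    best_model, best_coherence = None, float("-inf")
    for k in candidate_topic_counts:
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
        coherence = CoherenceModel(
            model=lda, texts=tokens, dictionary=dictionary, coherence="c_v"
        ).get_coherence()
        if coherence > best_coherence:
            best_model, best_coherence = lda, coherence

    # Each topic becomes a candidate "issue", summarised by its top terms.
    return [
        [term for term, _ in best_model.show_topic(topic_id, topn=top_n)]
        for topic_id in range(best_model.num_topics)
    ]
```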
3.3. Teaching Intervention
4. Case Study: NSS Data from a UK University
4.1. Phase 1: Textual Analysis with Machine Learning
4.1.1. Classification of Open-Ended Responses
4.1.2. Results of Topic Modeling on NSS Data
4.2. Phase 2: Teaching Intervention and Evaluation
4.2.1. Discussion on Issues Identified
4.2.2. Teaching Intervention: Design and Implementation
- Module A, Level 5: 54 students.
- Module B, Level 5: 4 students.
- Module C, Level 6: 49 students.
- Module D, Level 6: 35 students.
4.2.3. Evaluation: Discussion on Questionnaire Results
5. Discussion and Implications for Decision Making
5.1. Effectiveness of the Teaching Intervention
5.2. Comparison of Level 5 and Level 6 Results
6. Conclusions
6.1. Significance of Using Automatic Textual Analysis
6.2. Limitations and Future Work
6.3. Summary
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
Category | Proposed Annotation Scheme | Covers NSS Core Questionnaire Categories
---|---|---
1 | Overall quality of teaching | The teaching on my course; Personal development *
2 | Overall level of support | Academic support; Learning opportunities **
3 | Assessment and feedback | Assessment and feedback
4 | Organization, management, and responses | Organization and management; Student voice **
5 | Learning resources | Learning resources
6 | Teaching materials and curricula | N/A
7 | Placement and employability | N/A
8 | Student life and social support | Learning community **
9 | Overall dissatisfaction | N/A
10 | Positive or general statements | Overall satisfaction
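For use in the later classification step, the scheme in the table can be kept as a small lookup from category number to its label and the NSS sections it covers. The dictionary below merely restates the table in code; the variable name and the use of an empty list where no NSS category applies are illustrative choices.

```python
# The annotation scheme above as a lookup table (a restatement, not new data):
# category number -> (category label, NSS core questionnaire sections covered).
ANNOTATION_SCHEME = {
    1: ("Overall quality of teaching", ["The teaching on my course", "Personal development"]),
    2: ("Overall level of support", ["Academic support", "Learning opportunities"]),
    3: ("Assessment and feedback", ["Assessment and feedback"]),
    4: ("Organization, management, and responses", ["Organization and management", "Student voice"]),
    5: ("Learning resources", ["Learning resources"]),
    6: ("Teaching materials and curricula", []),
    7: ("Placement and employability", []),
    8: ("Student life and social support", ["Learning community"]),
    9: ("Overall dissatisfaction", []),
    10: ("Positive or general statements", ["Overall satisfaction"]),
}
```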
Code | Action | When
---|---|---
A1 | Introductory session on the module schedule and timetable | Week 1
A2 | Display the timetable and key dates on a separate Blackboard page | All weeks
A3 | Introductory session explaining how the module links to the other modules in the pathway and how it contributes to students’ careers | Week 1
A4 | Walkthrough of the Blackboard area and module handbook | Week 1
A5 | Detailed explanation of the assignment, how it will be assessed, and how feedback will be provided | Weeks 1–2, Week 11
A6 | Embed real-life examples into lectures | Lectures
A7 | Link assignments to real-life scenarios | Coursework
A8 | Embed gamification into lectures | Lectures
A9 | Display the tutor’s contact method and available times on the Blackboard homepage and the cover page of each lecture note | All weeks
A10 | Identify and resolve students’ queries and issues within the tutorial session | All weeks
A11 | Emphasize how email communication works and promote face-to-face appointments with students | All weeks
A12 | Check students’ progress on coursework and provide oral feedback and suggestions for improvement | Weeks 9–12
A13 | Provide a provisional mark on students’ coursework drafts by referring to the marking criteria | Weeks 9–12
Issue Code | Description
---|---
I1 | Timetable and schedule are not clear
I2 | Confusion over assessment dates and submission deadlines
I3 | Assessment criteria to be made clearer
I4 | Content to be made more interesting and stimulating
I5 | Lack of practical tasks
I6 | Unclear how to apply the skills learnt in practice
I7 | Timely feedback to be provided
I8 | Feedback not helpful
I9 | Ability to contact the tutor
I10 | Staff email responses
Action | I1 | I2 | I3 | I4 | I5 | I6 | I7 | I8 | I9 | I10
---|---|---|---|---|---|---|---|---|---|---
A1 | x | ||||||||||
A2 | x | x | |||||||||
A3 | x | x | x | ||||||||
A4 | x | ||||||||||
A5 | x | x | |||||||||
A6 | x | x | x | x | |||||||
A7 | x | x | |||||||||
A8 | x | x | |||||||||
A9 | x | x | |||||||||
A10 | x | ||||||||||
A11 | x | x | x | x | |||||||
A12 | x | x | |||||||||
A13 | x | x | x | x |
No. | Question | Related Issues | Related Actions
---|---|---|---
1. | The assessment brief helped me understand what was required from the assessment | 2, 3 | 1, 4 |
2. | The criteria used in marking assessments have been made clear in advance (e.g., in module handbooks, via Blackboard, in class) | 2, 3 | 4, 5 |
3. | Feedback supported me in terms of aiding my learning and development | 7, 8 | 10, 12, 13 |
4. | Tutors/lecturers were supportive | 4, 8, 9 | 8, 9, 10, 12, 13 |
5. | Formative feedback supported my learning | 7, 8 | 10, 12, 13 |
6. | Staff were available and easy to contact during office hours and the working week | 9, 10 | 9, 11 |
7. | Overall, I am satisfied with the quality of the module | All | All |
8. | Real-life examples helped me better understand the content of the module | 4, 5, 6 | 3, 6, 7 |
9. | The number of real-life examples provided in class is adequate | 4, 5, 6 | 6 |
10. | Real-life examples increased my motivation and engagement with the module | 4, 5, 6 | 3, 6, 7 |
11. | Marking criteria are stated in a clearer way in the module handbook compared to previous years | 3, 7 | 5 |
12. | Marking criteria are better explained by the tutor compared to previous years | 3, 7 | 4, 5 |
13. | In-class feedback on my coursework draft was helpful | 6, 7, 8 | 10 |
14. | Coursework deadlines were clearly communicated | 1, 2 | 1, 2 |
15. | Regular check of coursework progress by tutor helped me to better manage my time | 7, 8 | 10, 12 |
16. | Regular check of coursework progress helped me to improve the quality of my coursework assignment | 7, 8 | 10, 12, 13 |
17. | Tutor communicated clearly the expected outcomes of the coursework assignments | 2, 3, 8 | 4, 5, 13 |
18. | Tutor communicated clearly what should be done to improve the quality of my coursework | 8 | 10, 12 |
No. | Question Summary | Level 5 (%) | Level 6 (%) | NSS * (%) | Level 6 vs. NSS
---|---|---|---|---|---
1. | Assessment brief | 85 | 100 | n/a | |
2. | Marking criteria | 96 | 94 | 71 | +23 |
3. | Feedback | 63 | 91 | 69 | +22 |
4. | Staff support | 78 | 100 | 73 | +27 |
5. | Formative feedback | 68 | 94 | n/a | |
6. | Staff contact | 74 | 94 | 83 | +11 |
7. | Overall | 59 | 100 | 73 | +27 |
8. | Real-life examples | 83 | 100 | n/a | |
9. | Real-life examples | 66 | 94 | n/a | |
10. | Real-life examples | 55 | 83 | n/a | |
11. | Marking criteria | 62 | 67 | n/a | |
12. | Marking criteria | 58 | 78 | n/a | |
13. | In-class feedback | 62 | 94 | n/a | |
14. | Coursework deadlines | 100 | 94 | n/a | |
15. | Progress check | 68 | 83 | n/a | |
16. | Progress check | 75 | 89 | n/a | |
17. | Staff communication | 90 | 100 | n/a | |
18. | Staff communication | 86 | 94 | n/a | |
 | Average | 74 | 92 | 74 | +22
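A note on how the summary row is read: the +22 figure is consistent with averaging the per-question Level 6 vs. NSS differences for the five questions that have an NSS comparator (+23, +22, +27, +11, +27), rather than with subtracting the two column averages. The short check below reproduces it from the values in the table.

```python
# Reproduce the summary row from the five questions with an NSS comparator.
level6 = {"Marking criteria": 94, "Feedback": 91, "Staff support": 100,
          "Staff contact": 94, "Overall": 100}
nss = {"Marking criteria": 71, "Feedback": 69, "Staff support": 73,
       "Staff contact": 83, "Overall": 73}

gaps = [level6[q] - nss[q] for q in nss]        # [23, 22, 27, 11, 27]
print(sum(gaps) / len(gaps))                    # 22.0 -> the "+22" in the table
print(round(sum(nss.values()) / len(nss)))      # 74   -> the NSS column average
```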
Action | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 | A11 | A12 | A13
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Average rating of related questions (%) | 97 | 94 | 92 | 93 | 85 | 92 | 92 | 93 | 97 | 92 | 83 | 92 | 100
Issue | I1 | I2 | I3 | I4 | I5 | I6 | I7 | I8 | I9 | I10
---|---|---|---|---|---|---|---|---|---|---
Average rating of related questions (%) | 94 | 97 | 88 | 94 | 92 | 93 | 85 | 92 | 97 | 94
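These issue-level figures are consistent with averaging the Level 6 satisfaction rates of the questions mapped to each issue in the earlier question table, leaving out the catch-all question 7. The spot check below reproduces the first three entries under that reading, with all values copied from the tables above.

```python
# Spot-check the issue-level averages against the question-to-issue mapping
# and the Level 6 satisfaction rates reported earlier.
level6_rate = {1: 100, 2: 94, 3: 91, 4: 100, 5: 94, 6: 94, 8: 100, 9: 94, 10: 83,
               11: 67, 12: 78, 13: 94, 14: 94, 15: 83, 16: 89, 17: 100, 18: 94}
issue_to_questions = {"I1": [14], "I2": [1, 2, 14, 17], "I3": [1, 2, 11, 12, 17]}

for issue, questions in issue_to_questions.items():
    average = sum(level6_rate[q] for q in questions) / len(questions)
    print(issue, round(average))   # I1 94, I2 97, I3 88 -- matching the table
```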