Topic Editors

Dr. Guanfeng Liu
Department of Computing, Macquarie University, Sydney, NSW 2109, Australia
Dr. Karina Luzia
Faculty of Science and Engineering, Macquarie University, Sydney, NSW 2109, Australia
Dr. Luke Bozzetto
Australian College of Applied Professions, Sydney, NSW 2000, Australia
Dr. Tommy Yuan
Department of Computer Science, University of York, York YO10 5GH, UK
Prof. Dr. Pengpeng Zhao
Department of Computer Science and Technology, Soochow University, Suzhou 215006, China

Explainable AI in Education

Abstract submission deadline
31 March 2025
Manuscript submission deadline
30 June 2025

Topic Information

Dear Colleagues,

This Topic will focus on the development and implementation of explainable artificial intelligence (XAI) techniques within the education sector. This Topic will highlight how XAI can enhance transparency, accountability, and trust in educational technologies. We invite contributions that explore the technical aspects of creating and applying XAI systems for personalized learning, curriculum development, and student assessment. We are particularly interested in research that addresses the technical challenges of integrating XAI in diverse educational settings and studies that illustrate how explainability in AI contributes to more effective educational outcomes. Submissions may include case studies, empirical research, theoretical analyses, or innovative methodologies. This Topic aims to equip educators, developers, and researchers with a deeper understanding of XAI’s role in advancing education, emphasizing the technical foundations and implications of these systems in academic environments. Topics include, but are not limited to, the following:

  • XAI for Real-Time Student Feedback
  • The Effectiveness of XAI in Personalized Learning Environments
  • Technological and Methodological Advances in XAI for Educational Assessments
  • XAI for Curriculum Optimization
  • XAI in Teacher Decision-Making
  • Bias Detection and Mitigation in Educational XAI Applications
  • Cross-Cultural Implementations of XAI in Education
  • Integration Challenges of XAI in Legacy Educational Systems
  • Theoretical Models for Understanding XAI in Education
  • Transparent AI Systems for Education

Technical Program Committee Member:
Charanya Ramakrishnan, Macquarie University

Dr. Guanfeng Liu
Dr. Karina Luzia
Dr. Luke Bozzetto
Dr. Tommy Yuan
Prof. Dr. Pengpeng Zhao
Topic Editors

Keywords

  • explainable artificial intelligence (XAI)
  • AI in curriculum development
  • bias mitigation in AI
  • transparent AI systems
  • AI-enabled personalized learning

Participating Journals

Journal Name        Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
AI                  3.1            7.2        2020           17.6 days                CHF 1600
Applied Sciences    2.5            5.3        2011           17.8 days                CHF 2400
Education Sciences  2.5            4.8        2011           26.8 days                CHF 1800
Electronics         2.6            5.3        2012           16.8 days                CHF 2400
Information         2.4            6.9        2010           14.9 days                CHF 1600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to post a preprint at Preprints.org prior to publication and enjoy the following benefits:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (2 papers)

17 pages, 430 KiB  
Article
Adoption and Impact of ChatGPT in Computer Science Education: A Case Study on a Database Administration Course
by Daniel López-Fernández and Ricardo Vergaz
AI 2024, 5(4), 2321-2337; https://doi.org/10.3390/ai5040114 - 11 Nov 2024
Abstract
The irruption of GenAI tools such as ChatGPT has changed the educational landscape. Therefore, methodological guidelines and more empirical experiences are needed to better understand these tools and learn how to use them to their fullest potential. This contribution presents an exploratory and correlational study conducted with 37 computer science students who used ChatGPT as a support tool to learn database administration. The article addresses three questions: the first explores the degree of use of ChatGPT among computer science students learning database administration, the second explores the profile of students who get the most out of tools like ChatGPT for database administration activities, and the third explores how the utilization of ChatGPT can impact academic performance. To empirically shed light on these questions, the students' grades and a comprehensive questionnaire were employed as research instruments. The obtained results indicate that traditional learning resources, such as teachers' explanations and students' reports, were widely used and correlated positively with students' grades. The usage and perceived utility of ChatGPT were moderate, but positive correlations between students' grades and ChatGPT usage were found. Indeed, significantly higher use of this tool was identified among the group of outstanding students, indicating that high-performing students are the ones using ChatGPT the most. Thus, a new digital divide could be arising between these students and those with a weaker grasp of fundamentals and worse prompting skills, who may not take advantage of all of ChatGPT's possibilities.
(This article belongs to the Topic Explainable AI in Education)
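The correlational analysis this study describes, relating questionnaire-based ChatGPT usage to course grades, can be sketched with a Pearson correlation. The data below are hypothetical placeholders for illustration only, not the paper's actual measurements:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical data: self-reported ChatGPT usage (1-5 Likert scale)
# and final course grades (0-10); NOT the study's data.
usage = [2, 4, 3, 5, 1, 4, 2, 5, 3, 4]
grades = [5.1, 7.8, 6.0, 9.2, 4.3, 8.0, 5.5, 9.0, 6.4, 7.5]

print(f"r = {pearson_r(usage, grades):.3f}")
```

A positive r here would mirror the paper's finding that heavier ChatGPT use co-occurred with higher grades, though, as the authors note, correlation alone cannot say which way the influence runs.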

31 pages, 6453 KiB  
Article
An Explainable Student Performance Prediction Method Based on Dual-Level Progressive Classification Belief Rule Base
by Jiahao Mai, Fanxu Wei, Wei He, Haolan Huang and Hailong Zhu
Electronics 2024, 13(22), 4358; https://doi.org/10.3390/electronics13224358 - 6 Nov 2024
Abstract
Explainable artificial intelligence (XAI) is crucial in education for making educational technologies more transparent and trustworthy. In the domain of student performance prediction, both the results and the processes need to be recognized by experts, making the requirement for explainability very high. The belief rule base (BRB) is a hybrid-driven method for modeling complex systems that integrates expert knowledge with transparent reasoning processes, thus providing good explainability. However, class imbalances in student grades often lead models to ignore minority samples, resulting in inaccurate assessments. Additionally, BRB models face the challenge of losing explainability during the optimization process. Therefore, an explainable student performance prediction method based on dual-level progressive classification BRB (DLBRB-i) has been proposed. Principal component regression (PCR) is used to select key features, and models are constructed based on selected metrics. The BRB's first layer classifies data broadly, while the second layer refines these classifications for accuracy. By incorporating explainability constraints into the population-based covariance matrix adaptation evolution strategy (P-CMA-ES) optimization process, the explainability of the model is ensured effectively. Finally, empirical analysis using real datasets validates the diagnostic accuracy and explainability of the DLBRB-i model.
(This article belongs to the Topic Explainable AI in Education)
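The belief rule base at the core of this method combines rule activation weights with per-rule belief distributions over output classes. A minimal single-attribute sketch is shown below, using piecewise-linear matching degrees and a simple activation-weighted average in place of the full evidential-reasoning aggregation; the reference values, grade classes, and belief numbers are illustrative assumptions, not the paper's model:

```python
# One-attribute belief rule base (BRB) sketch. REFS are antecedent reference
# values for a normalized input feature; each rule carries a belief
# distribution over hypothetical grade classes. All values are illustrative.
REFS = [0.0, 0.5, 1.0]
BELIEFS = [
    {"fail": 0.8, "pass": 0.2, "distinction": 0.0},  # rule anchored at x = 0.0
    {"fail": 0.1, "pass": 0.7, "distinction": 0.2},  # rule anchored at x = 0.5
    {"fail": 0.0, "pass": 0.2, "distinction": 0.8},  # rule anchored at x = 1.0
]

def matching_degrees(x, refs=REFS):
    """Piecewise-linear matching degree of input x to each reference value."""
    degs = [0.0] * len(refs)
    for i in range(len(refs) - 1):
        lo, hi = refs[i], refs[i + 1]
        if lo <= x <= hi:
            degs[i + 1] = (x - lo) / (hi - lo)
            degs[i] = 1.0 - degs[i + 1]
            break
    return degs

def infer(x, beliefs=BELIEFS):
    """Activation-weighted combination of rule beliefs (equal rule weights).

    Explainability comes from inspecting which rules fired (the matching
    degrees) and reading off their expert-assigned belief distributions.
    """
    w = matching_degrees(x)
    total = sum(w)
    return {cls: sum(wi * b[cls] for wi, b in zip(w, beliefs)) / total
            for cls in beliefs[0]}

print(infer(0.25))
```

An input exactly at a reference value reproduces that rule's belief distribution, and intermediate inputs interpolate between adjacent rules, which is what makes a rule base of this kind readable by domain experts.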
