Advances in Explainable Artificial Intelligence (XAI): 3rd Edition
A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).
Deadline for manuscript submissions: 30 September 2025
Special Issue Editor
Dr. Luca Longo
Interests: explainable artificial intelligence; defeasible argumentation; deep learning; human-centred design; mental workload modeling
Special Issue Information
Dear Colleagues,
In recent years, artificial intelligence has shifted its focus towards the design and deployment of intelligent systems that are interpretable and explainable, giving rise to a new field: explainable artificial intelligence (XAI). This shift has been echoed both in the research literature and in the press, attracting scholars from around the world as well as a lay audience. Initially devoted to the design of post hoc methods for explainability, which essentially wrap machine- and deep-learning models with explanations, the field is now expanding its boundaries to ante hoc methods that produce self-interpretable models. In parallel, neuro-symbolic approaches to reasoning have been employed in conjunction with machine learning to complement modeling accuracy and precision with self-explainability and justifiability. Scholars have also begun to shift their attention to the structure of explanations, since the ultimate users of interactive technologies are humans, linking artificial intelligence and computer science to psychology, human–computer interaction, philosophy, and sociology.
Explainable artificial intelligence is clearly gaining momentum, and this Special Issue calls for contributions exploring this fascinating new area of research. We seek articles devoted to the theoretical foundations of XAI, its historical perspectives, and the design of explanations and interactive, human-centered intelligent systems that combine knowledge-representation principles with automated learning capabilities, aimed not only at experts but also at the lay audience.
Dr. Luca Longo
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- explainable artificial intelligence (XAI)
- neuro-symbolic reasoning for XAI
- interpretable deep learning
- argument-based models of explanations
- graph neural networks for explainability
- machine learning and knowledge graphs
- human-centric explainable AI
- interpretation of black-box models
- human-understandable machine learning
- counterfactual explanations for machine learning
- natural language processing in XAI
- quantitative/qualitative evaluation metrics for XAI
- ante hoc and post hoc XAI methods
- rule-based systems for XAI
- fuzzy systems and explainability
- human-centered learning and explanations
- model-dependent and model-agnostic explainability
- case-based explanations for AI systems
- interactive machine learning and explanations
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.
Further information on MDPI's Special Issue policies can be found here.