Advances in Explainable Artificial Intelligence (XAI): 2nd Edition

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990). This special issue belongs to the section "Learning".

Deadline for manuscript submissions: closed (29 February 2024)

Special Issue Editor

Dr. Luca Longo
School of Computer Science, Technological University Dublin, D08 X622 Dublin, Ireland
Interests: explainable artificial intelligence; defeasible argumentation; deep learning; human-centred design; mental workload modeling

Special Issue Information

Dear Colleagues,

Artificial intelligence has recently seen a shift in focus towards the design and deployment of intelligent systems that are interpretable and explainable, giving rise to a new field: explainable artificial intelligence (XAI). This shift has been echoed both in the research literature and in the press, attracting scholars from around the world as well as a lay audience. Initially devoted to the design of post hoc methods for explainability, essentially wrapping machine- and deep-learning models with explanations, the field is now expanding its boundaries to ante hoc methods that produce self-interpretable models. In parallel, neuro-symbolic approaches to reasoning have been employed in conjunction with machine learning to complement modelling accuracy and precision with self-explainability and justifiability. Scholars have also begun shifting their focus toward the structure of explanations, since the ultimate users of interactive technologies are humans, linking artificial intelligence and computer science to psychology, human–computer interaction, philosophy, and sociology.

Explainable artificial intelligence is clearly gaining momentum, and this Special Issue calls for contributions exploring this fascinating new area of research. We seek articles devoted to the theoretical foundations of XAI, its historical perspectives, and the design of explanations and interactive, human-centered intelligent systems with knowledge-representation principles and automated learning capabilities, aimed not only at experts but at the lay audience as well.

Dr. Luca Longo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence (XAI)
  • neuro-symbolic reasoning for XAI
  • interpretable deep learning
  • argument-based models of explanations
  • graph neural networks for explainability
  • machine learning and knowledge graphs
  • human-centric explainable AI
  • interpretation of black-box models
  • human-understandable machine learning
  • counterfactual explanations for machine learning
  • natural language processing in XAI
  • quantitative/qualitative evaluation metrics for XAI
  • ante hoc and post hoc XAI methods
  • rule-based systems for XAI
  • fuzzy systems and explainability
  • human-centered learning and explanations
  • model-dependent and model-agnostic explainability
  • case-based explanations for AI systems
  • interactive machine learning and explanations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the journal website.

Published Papers (6 papers)

Research

27 pages, 6643 KiB  
Article
Assessment of Software Vulnerability Contributing Factors by Model-Agnostic Explainable AI
by Ding Li, Yan Liu and Jun Huang
Mach. Learn. Knowl. Extr. 2024, 6(2), 1087-1113; https://doi.org/10.3390/make6020050 - 16 May 2024
Cited by 1
Abstract
Software vulnerability detection aims to proactively reduce the risk to software security and reliability. Despite advancements in deep-learning-based detection, a semantic gap still remains between learned features and human-understandable vulnerability semantics. In this paper, we present an XAI-based framework to assess program code in a graph context as feature representations and their effect on code vulnerability classification into multiple Common Weakness Enumeration (CWE) types. Our XAI framework is deep-learning-model-agnostic and programming-language-neutral. We rank the feature importance of 40 syntactic constructs for each of the top 20 distributed CWE types from three datasets in Java and C++. By means of four metrics of information retrieval, we measure the similarity of human-understandable CWE types using each CWE type’s feature contribution ranking learned from XAI methods. We observe that the subtle semantic difference between CWE types occurs after the variation in neighboring features’ contribution rankings. Our study shows that the XAI explanation results have approximately 78% Top-1 to 89% Top-5 similarity hit rates and a mean average precision of 0.70 compared with the baseline of CWE similarity identified by the open community experts. Our framework allows for code vulnerability patterns to be learned and contributing factors to be assessed at the same stage. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
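Since the framework is described as model-agnostic, its core ranking step can be illustrated with any off-the-shelf importance method. The sketch below is not the authors' graph-based pipeline: it uses synthetic data, a generic random-forest classifier, and permutation importance simply to show how a ranking over 40 construct-level features might be obtained.

```python
# Illustrative sketch only: model-agnostic feature-importance ranking on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 40))            # stand-in for 40 syntactic-construct features
y = rng.integers(0, 2, size=500)     # synthetic label for one CWE type

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic importance: permute each feature and measure the score drop.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(-imp.importances_mean)   # feature indices, most important first
print("Top-5 contributing constructs:", ranking[:5])
```

In the paper, such per-CWE rankings are then compared with information-retrieval metrics; here only the ranking itself is printed.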

27 pages, 1266 KiB  
Article
A Meta Algorithm for Interpretable Ensemble Learning: The League of Experts
by Richard Vogel, Tobias Schlosser, Robert Manthey, Marc Ritter, Matthias Vodel, Maximilian Eibl and Kristan Alexander Schneider
Mach. Learn. Knowl. Extr. 2024, 6(2), 800-826; https://doi.org/10.3390/make6020038 - 9 Apr 2024
Abstract
Background. The importance of explainable artificial intelligence and machine learning (XAI/XML) is increasingly being recognized, aiming to understand how information contributes to decisions, the method’s bias, or sensitivity to data pathologies. Efforts are often directed to post hoc explanations of black box models. These approaches add additional sources for errors without resolving their shortcomings. Less effort is directed into the design of intrinsically interpretable approaches. Methods. We introduce an intrinsically interpretable methodology motivated by ensemble learning: the League of Experts (LoE) model. We establish the theoretical framework first and then deduce a modular meta algorithm. In our description, we focus primarily on classification problems. However, LoE applies equally to regression problems. Specific to classification problems, we employ classical decision trees as classifier ensembles as a particular instance. This choice facilitates the derivation of human-understandable decision rules for the underlying classification problem, which results in a derived rule learning system denoted as RuleLoE. Results. In addition to 12 KEEL classification datasets, we employ two standard datasets from particularly relevant domains—medicine and finance—to illustrate the LoE algorithm. The performance of LoE with respect to its accuracy and rule coverage is comparable to common state-of-the-art classification methods. Moreover, LoE delivers a clearly understandable set of decision rules with adjustable complexity, describing the classification problem. Conclusions. LoE is a reliable method for classification and regression problems with an accuracy that seems to be appropriate for situations in which underlying causalities are in the center of interest rather than just accurate predictions or classifications. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
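As a loose companion to the idea of deriving human-understandable rules from tree-based experts, the snippet below fits one shallow decision tree on a standard dataset and prints its decision rules. It is a minimal stand-in under arbitrary choices (dataset, depth), not the LoE meta algorithm or RuleLoE.

```python
# Minimal illustration: export readable decision rules from a shallow tree.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text prints nested if/else conditions over the input features.
print(export_text(tree, feature_names=list(data.feature_names)))
```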

26 pages, 1391 KiB  
Article
SHapley Additive exPlanations (SHAP) for Efficient Feature Selection in Rolling Bearing Fault Diagnosis
by Mailson Ribeiro Santos, Affonso Guedes and Ignacio Sanchez-Gendriz
Mach. Learn. Knowl. Extr. 2024, 6(1), 316-341; https://doi.org/10.3390/make6010016 - 5 Feb 2024
Cited by 4
Abstract
This study introduces an efficient methodology for addressing fault detection, classification, and severity estimation in rolling element bearings. The methodology is structured into three sequential phases, each dedicated to generating distinct machine-learning-based models for the tasks of fault detection, classification, and severity estimation. To enhance the effectiveness of fault diagnosis, information acquired in one phase is leveraged in the subsequent phase. Additionally, in the pursuit of attaining models that are both compact and efficient, an explainable artificial intelligence (XAI) technique is incorporated to meticulously select optimal features for the machine learning (ML) models. The chosen ML technique for the tasks of fault detection, classification, and severity estimation is the support vector machine (SVM). To validate the approach, the widely recognized Case Western Reserve University benchmark is utilized. The results obtained emphasize the efficiency and efficacy of the proposal. Remarkably, even with a highly limited number of features, evaluation metrics consistently indicate an accuracy of over 90% in the majority of cases when employing this approach. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
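A minimal sketch of the general recipe, SHAP-ranked features feeding a compact SVM, is given below. It assumes synthetic data instead of the Case Western Reserve University benchmark, a model-agnostic Kernel explainer rather than the authors' exact configuration, and an arbitrary cut-off of five features.

```python
# Hedged sketch: SHAP-guided feature selection for an SVM, on synthetic data.
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 20))                      # stand-in for extracted vibration features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)      # synthetic "fault / no fault" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = SVC(random_state=0).fit(X_tr, y_tr)

# Model-agnostic SHAP values of the SVM decision function, with a small background set.
explainer = shap.KernelExplainer(svm.decision_function, shap.sample(X_tr, 50))
shap_values = explainer.shap_values(X_te[:20])          # shape: (samples, features)

# Keep the five features with the largest mean |SHAP| and retrain a compact SVM on them.
top_k = np.argsort(-np.abs(shap_values).mean(axis=0))[:5]
compact_svm = SVC(random_state=0).fit(X_tr[:, top_k], y_tr)
print("selected features:", top_k, "test accuracy:", compact_svm.score(X_te[:, top_k], y_te))
```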

36 pages, 1553 KiB  
Article
Explainable Artificial Intelligence Using Expressive Boolean Formulas
by Gili Rosenberg, John Kyle Brubaker, Martin J. A. Schuetz, Grant Salton, Zhihuai Zhu, Elton Yechao Zhu, Serdar Kadıoğlu, Sima E. Borujeni and Helmut G. Katzgraber
Mach. Learn. Knowl. Extr. 2023, 5(4), 1760-1795; https://doi.org/10.3390/make5040086 - 24 Nov 2023
Cited by 3
Abstract
We propose and implement an interpretable machine learning classification model for Explainable AI (XAI) based on expressive Boolean formulas. Potential applications include credit scoring and diagnosis of medical conditions. The Boolean formula defines a rule with tunable complexity (or interpretability) according to which input data are classified. Such a formula can include any operator that can be applied to one or more Boolean variables, thus providing higher expressivity compared to more rigid rule- and tree-based approaches. The classifier is trained using native local optimization techniques, efficiently searching the space of feasible formulas. Shallow rules can be determined by fast Integer Linear Programming (ILP) or Quadratic Unconstrained Binary Optimization (QUBO) solvers, potentially powered by special-purpose hardware or quantum devices. We combine the expressivity and efficiency of the native local optimizer with the fast operation of these devices by executing non-local moves that optimize over the subtrees of the full Boolean formula. We provide extensive numerical benchmarking results featuring several baselines on well-known public datasets. Based on the results, we find that the native local rule classifier is generally competitive with the other classifiers. The addition of non-local moves achieves similar results with fewer iterations. Therefore, using specialized or quantum hardware could lead to a significant speedup through the rapid proposal of non-local moves. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
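To make the notion of an expressive Boolean rule concrete, the toy snippet below evaluates one fixed formula mixing OR, AND, NOT and an "at least k of" operator over Boolean features. It only illustrates what such a rule computes; the paper's actual contribution, searching the space of formulas with native local optimization, ILP or QUBO solvers, is not reproduced.

```python
# Toy illustration of an "expressive Boolean formula" classifier.
import numpy as np

def at_least(k, literals):
    """True where at least k of the given Boolean columns are True."""
    return np.sum(np.stack(literals, axis=0), axis=0) >= k

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(10, 5)).astype(bool)   # 10 samples, 5 Boolean features

# Candidate rule: (at least 2 of x0, x1, x2) OR (x3 AND NOT x4)
prediction = at_least(2, [X[:, 0], X[:, 1], X[:, 2]]) | (X[:, 3] & ~X[:, 4])
print(prediction.astype(int))
```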

19 pages, 1405 KiB  
Article
Explainable Stacked Ensemble Deep Learning (SEDL) Framework to Determine Cause of Death from Verbal Autopsies
by Michael T. Mapundu, Chodziwadziwa W. Kabudula, Eustasius Musenge, Victor Olago and Turgay Celik
Mach. Learn. Knowl. Extr. 2023, 5(4), 1570-1588; https://doi.org/10.3390/make5040079 - 25 Oct 2023
Cited by 4
Abstract
Verbal autopsies (VA) are commonly used in Low- and Medium-Income Countries (LMIC) to determine cause of death (CoD) where death occurs outside clinical settings, with the most commonly used international gold standard being physician medical certification. Interviewers elicit information from relatives of the deceased, regarding circumstances and events that might have led to death. This information is stored in textual format as VA narratives. The narratives entail detailed information that can be used to determine CoD. However, this approach remains a manual task that is costly, inconsistent, time-consuming and subjective (prone to errors), amongst many drawbacks. As such, this negatively affects the VA reporting process, despite it being vital for strengthening health priorities and informing civil registration systems. Therefore, this study seeks to close this gap by applying novel interpretable deep learning (DL) approaches to review VA narratives and generate CoD predictions in a timely, easily interpretable, cost-effective and error-free way. We validate our DL models using optimisation and performance accuracy machine learning (ML) curves as a function of training samples. We report on validation with training set accuracy (LSTM = 76.11%, CNN = 76.35%, and SEDL = 82.1%), validation accuracy (LSTM = 67.05%, CNN = 66.16%, and SEDL = 82%) and test set accuracy (LSTM = 67%, CNN = 66.2%, and SEDL = 82%) for our models. Furthermore, we also present Local Interpretable Model-agnostic Explanations (LIME) for ease of interpretability of the results, thereby building trust in the use of machines in healthcare. We presented robust deep learning methods to determine CoD from VAs, with the stacked ensemble deep learning (SEDL) approaches performing optimally and better than Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN). Our empirical results suggest that ensemble DL methods may be integrated into the CoD process to help experts get to a diagnosis. Ultimately, this will reduce the turnaround time needed by physicians to go through the narratives in order to be able to give an appropriate diagnosis, cut costs and minimise errors. This study was limited by the number of samples needed for training our models and the high levels of lexical variability in the words used in our textual information. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
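The LIME step can be illustrated independently of the deep models. In the hedged sketch below, a TF-IDF plus logistic-regression pipeline stands in for the SEDL ensemble, and the narratives, labels and class names are invented placeholders.

```python
# Hedged sketch: LIME explanations for a toy cause-of-death text classifier.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "persistent cough fever night sweats weight loss",
    "sudden chest pain shortness of breath collapsed",
    "chronic cough coughing blood weight loss",
    "severe chest pain left arm pain sweating",
]
labels = [0, 1, 0, 1]                       # toy labels: 0 = tuberculosis, 1 = cardiac event
class_names = ["tuberculosis", "cardiac event"]

# Simple stand-in model for the paper's stacked ensemble deep learning (SEDL) classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(narratives, labels)

explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    "cough with blood and weight loss", model.predict_proba, num_features=4
)
print(explanation.as_list())                # (word, weight) pairs driving the prediction
```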

23 pages, 3936 KiB  
Article
Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations
by Robert S. Sullivan and Luca Longo
Mach. Learn. Knowl. Extr. 2023, 5(4), 1433-1455; https://doi.org/10.3390/make5040072 - 9 Oct 2023
Cited by 3
Abstract
Reinforcement Learning (RL) has shown promise in optimizing complex control and decision-making processes, but Deep Reinforcement Learning (DRL) lacks interpretability, limiting its adoption in regulated sectors like manufacturing, finance, and healthcare. Difficulties arise from DRL’s opaque decision-making, hindering efficiency and resource use; this issue is amplified with every advancement. While many seek to move from Experience Replay to A3C, the latter demands more resources. Despite efforts to improve Experience Replay selection strategies, there is a tendency to keep the capacity high. We investigate training a Deep Convolutional Q-learning agent across 20 Atari games, intentionally reducing Experience Replay capacity from 1×10⁶ to 5×10². We find that a reduction from 1×10⁴ to 5×10³ does not significantly affect rewards, offering a practical path to resource-efficient DRL. To illuminate agent decisions and align them with game mechanics, we employ a novel method: visualizing Experience Replay via the Deep SHAP Explainer. This approach fosters comprehension and transparent, interpretable explanations, though any capacity reduction must be cautious to avoid overfitting. Our study demonstrates the feasibility of reducing Experience Replay and advocates for transparent, interpretable decision explanations using the Deep SHAP Explainer to promote enhanced resource efficiency in Experience Replay. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
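A skeletal version of the explanation step, Deep SHAP attributions over states drawn from an Experience Replay buffer, is sketched below. The tiny fully connected Q-network, the random 8-dimensional states and the buffer size are placeholders; the paper works with Deep Convolutional Q-networks on Atari frames.

```python
# Hedged sketch: Deep SHAP attributions over replay-buffer states for a toy Q-network.
import shap
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder Q-network: 8 state features in, Q-values for 4 discrete actions out.
q_network = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 4),
)

replay_states = torch.rand(500, 8)      # stand-in for states stored in Experience Replay
background = replay_states[:100]        # reference distribution for Deep SHAP
to_explain = replay_states[100:105]     # a few replayed states to explain

explainer = shap.DeepExplainer(q_network, background)
shap_values = explainer.shap_values(to_explain)

# Attributions of each state feature to the predicted Q-values
# (the returned layout varies slightly across shap versions).
print(type(shap_values), len(shap_values))
```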
