Advances in Explainable Artificial Intelligence (XAI)

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990). This special issue belongs to the section "Learning".

Deadline for manuscript submissions: closed (15 July 2023)

Special Issue Editor

Dr. Luca Longo
School of Computer Science, Technological University Dublin, D08 X622 Dublin, Ireland
Interests: explainable artificial intelligence; defeasible argumentation; deep learning; human-centred design; mental workload modeling

Special Issue Information

Dear Colleagues,

Recently, artificial intelligence has seen a shift in focus towards the design and deployment of intelligent systems that are interpretable and explainable, with the rise of a new field: explainable artificial intelligence (XAI). This shift has resonated both in the research literature and in the press, attracting scholars from around the world as well as a lay audience. Initially devoted to the design of post-hoc methods for explainability, which essentially wrap machine- and deep-learning models with explanations, the field is now expanding its boundaries to ante-hoc methods for the production of self-interpretable models. Alongside this, neuro-symbolic approaches to reasoning have been employed in conjunction with machine learning in order to complement modelling accuracy and precision with self-explainability and justifiability. Scholars have also started shifting their focus to the structure of explanations, since the ultimate users of interactive technologies are humans, linking artificial intelligence and computer science to psychology, human–computer interaction, philosophy, and sociology.
 
Explainable artificial intelligence is clearly gaining momentum, and this Special Issue calls for contributions to this fascinating new area of research. It seeks articles devoted to the theoretical foundations of XAI and its historical perspectives, as well as to the design of explanations and of interactive, human-centered intelligent systems with knowledge-representation principles and automated learning capabilities, aimed not only at experts but also at the lay audience.

Dr. Luca Longo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence (XAI)
  • neuro-symbolic reasoning for XAI
  • interpretable deep learning
  • argument-based models of explanations
  • graph neural networks for explainability
  • machine learning and knowledge-graphs
  • human-centric explainable AI
  • interpretation of black-box models
  • human-understandable machine learning
  • counterfactual explanations for machine learning
  • natural language processing in XAI
  • quantitative/qualitative evaluation metrics for XAI
  • ante and post-hoc XAI methods
  • rule-based systems for XAI
  • fuzzy systems and explainability
  • human-centered learning and explanations
  • model-dependent and model-agnostic explainability
  • case-based explanations for AI systems
  • interactive machine learning and explanations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)

Research

20 pages, 1544 KiB  
Article
Alternative Formulations of Decision Rule Learning from Neural Networks
by Litao Qiao, Weijia Wang and Bill Lin
Mach. Learn. Knowl. Extr. 2023, 5(3), 937-956; https://doi.org/10.3390/make5030049 - 3 Aug 2023
Abstract
This paper extends recent work on decision rule learning from neural networks for tabular data classification. We propose alternative formulations of trainable Boolean logic operators as neurons with continuous weights, including trainable NAND neurons. These alternative formulations provide a uniform treatment of different trainable logic neurons so that they can be trained uniformly, which enables, for example, the direct application of existing sparsity-promoting neural net training techniques, such as reweighted L1 regularization, to derive sparse networks that translate to simpler rules. In addition, we present an alternative network architecture based on trainable NAND neurons by applying De Morgan’s law to realize a NAND-NAND network instead of an AND-OR network, both of which can be readily mapped to decision rule sets. Our experimental results show that these alternative formulations can also generate accurate decision rule sets that achieve state-of-the-art accuracy in tabular learning applications. Full article
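
To make the idea of trainable logic neurons concrete, the sketch below shows one possible differentiable NAND-style neuron with continuous weights, together with a reweighted L1 penalty of the kind mentioned above. The gating formulation, class names, and hyperparameters are illustrative assumptions, not the authors' exact construction.

```python
import torch
import torch.nn as nn

class SoftNAND(nn.Module):
    """A differentiable NAND-like neuron over inputs in [0, 1] (illustrative sketch).

    Each input x_i is gated by a continuous weight w_i in [0, 1]: the input only
    constrains the conjunction when w_i is close to 1 and is ignored when w_i is
    close to 0. The soft AND is the product of the gated inputs; NAND is its complement.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.raw_w = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.raw_w)                            # (out, in), values in [0, 1]
        gated = 1.0 - w.unsqueeze(0) * (1.0 - x.unsqueeze(1))    # (batch, out, in)
        soft_and = gated.prod(dim=-1)                            # soft conjunction
        return 1.0 - soft_and                                    # NAND = NOT AND

def reweighted_l1(weights: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """One reweighting step of sparsity-promoting reweighted L1 regularization:
    each weight is penalized by 1 / (|w| + eps), with the scale detached so the
    reweighting itself is not differentiated."""
    scale = 1.0 / (weights.abs().detach() + eps)
    return (scale * weights.abs()).sum()

# Example use: add the penalty to the task loss during training, e.g.
#   loss = task_loss + 1e-3 * reweighted_l1(torch.sigmoid(layer.raw_w))
```

Weights pushed towards 0 or 1 during training can then be thresholded so that each NAND neuron reads off as a rule over a small subset of inputs, which is how sparsity translates into simpler rule sets.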

15 pages, 2157 KiB  
Article
Achievable Minimally-Contrastive Counterfactual Explanations
by Hosein Barzekar and Susan McRoy
Mach. Learn. Knowl. Extr. 2023, 5(3), 922-936; https://doi.org/10.3390/make5030048 - 3 Aug 2023
Abstract
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes. Full article
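
As a rough illustration of the kind of constrained search described above (a sketch of the general idea, not the AMCC algorithm itself), the snippet below brute-forces the smallest set of feasible feature edits that flips a black-box prediction; `predict`, `candidate_changes`, and `max_changes` are hypothetical names.

```python
from itertools import combinations, product
import numpy as np

def find_contrastive_counterfactual(predict, x, candidate_changes, max_changes=2):
    """Search for a minimal, feasible counterfactual (illustrative sketch only).

    predict           : callable mapping a 2D array of instances to class labels
    x                 : 1D numpy array, the instance being explained
    candidate_changes : dict {feature_index: iterable of allowed replacement values},
                        encoding domain-specific feasibility constraints
    """
    original = predict(x.reshape(1, -1))[0]
    for k in range(1, max_changes + 1):                       # try the smallest edit sets first
        for feats in combinations(sorted(candidate_changes), k):
            for values in product(*(candidate_changes[f] for f in feats)):
                x_cf = x.copy()
                for f, v in zip(feats, values):
                    x_cf[f] = v
                if predict(x_cf.reshape(1, -1))[0] != original:
                    return x_cf, dict(zip(feats, values))     # achievable counterfactual found
    return None, None                                         # nothing feasible within the budget
```

Restricting `candidate_changes` to actionable features and realistic values is what keeps the returned counterfactuals achievable rather than merely contrastive.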

21 pages, 17177 KiB  
Article
What about the Latent Space? The Need for Latent Feature Saliency Detection in Deep Time Series Classification
by Maresa Schröder, Alireza Zamanian and Narges Ahmidi
Mach. Learn. Knowl. Extr. 2023, 5(2), 539-559; https://doi.org/10.3390/make5020032 - 18 May 2023
Abstract
Saliency methods are designed to provide explainability for deep image processing models by assigning feature-wise importance scores and thus detecting informative regions in the input images. Recently, these methods have been widely adapted to the time series domain, aiming to identify important temporal regions in a time series. This paper extends our former work on identifying the systematic failure of such methods in the time series domain to produce relevant results when informative patterns are based on underlying latent information rather than temporal regions. First, we both visually and quantitatively assess the quality of explanations provided by multiple state-of-the-art saliency methods, including Integrated Gradients, Deep-Lift, Kernel SHAP, and Lime using univariate simulated time series data with temporal or latent patterns. In addition, to emphasize the severity of the latent feature saliency detection problem, we also run experiments on a real-world predictive maintenance dataset with known latent patterns. We identify Integrated Gradients, Deep-Lift, and the input-cell attention mechanism as potential candidates for refinement to yield latent saliency scores. Finally, we provide recommendations on using saliency methods for time series classification and suggest a guideline for developing latent saliency methods for time series. Full article
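
For reference, Integrated Gradients, one of the saliency methods assessed above, can be sketched in a few lines. The snippet below is a minimal, assumed implementation for a single time series of shape (T, C) and a classifier that returns class scores; it is not the paper's evaluation code, and libraries such as Captum provide tested versions.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Approximate Integrated Gradients attributions for one input x of shape (T, C).
    The all-zeros baseline and the Riemann-sum approximation are illustrative choices."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    # points along the straight-line path from the baseline to the input
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)
    scores = model(path)[:, target]                 # class score at every path point
    grads = torch.autograd.grad(scores.sum(), path)[0]
    avg_grads = grads.mean(dim=0)                   # approximates the path integral
    return (x - baseline) * avg_grads               # feature-wise attributions, shape (T, C)
```

Attributions produced this way are tied to individual time steps and channels, which is precisely why purely temporal saliency can miss patterns that live in a latent feature space.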

18 pages, 3481 KiB  
Article
Painting the Black Box White: Experimental Findings from Applying XAI to an ECG Reading Setting
by Federico Cabitza, Andrea Campagner, Chiara Natali, Enea Parimbelli, Luca Ronzio and Matteo Cameli
Mach. Learn. Knowl. Extr. 2023, 5(1), 269-286; https://doi.org/10.3390/make5010017 - 8 Mar 2023
Abstract
The emergence of black-box, subsymbolic, and statistical AI systems has motivated a rapid increase in interest in explainable AI (XAI), which encompasses both inherently explainable techniques and approaches that make black-box AI systems explainable to human decision makers. Rather than always making black boxes transparent, these approaches are at risk of painting the black boxes white, thus failing to provide a level of transparency that would increase the system’s usability and comprehensibility, or even at risk of generating new errors (i.e., the white-box paradox). To address these usability-related issues, in this work we focus on the cognitive dimension of users’ perception of explanations and XAI systems. We investigated these perceptions in light of their relationship with users’ characteristics (e.g., expertise) through a questionnaire-based user study involving 44 cardiology residents and specialists in an AI-supported ECG reading task. Our results point to the relevance and correlation of the dimensions of trust, perceived quality of explanations, and tendency to defer the decision process to automation (i.e., technology dominance). This contribution calls for the evaluation of AI-based support systems from a human–AI interaction-oriented perspective, laying the ground for further investigation of XAI and its effects on decision making and user experience. Full article

12 pages, 2247 KiB  
Article
An Explainable Deep Learning Framework for Detecting and Localising Smoke and Fire Incidents: Evaluation of Grad-CAM++ and LIME
by Ioannis D. Apostolopoulos, Ifigeneia Athanasoula, Mpesi Tzani and Peter P. Groumpos
Mach. Learn. Knowl. Extr. 2022, 4(4), 1124-1135; https://doi.org/10.3390/make4040057 - 6 Dec 2022
Abstract
Climate change is expected to increase fire events and activity with multiple impacts on human lives. Large grids of forest and city monitoring devices can assist in incident detection, accelerating human intervention in extinguishing fires before they get out of control. Artificial Intelligence promises to automate the detection of fire-related incidents. This study employs 53,585 fire/smoke and normal images and benchmarks seventeen state-of-the-art Convolutional Neural Networks for distinguishing between the two classes. The Xception network proves to be superior to the rest of the CNNs, obtaining very high accuracy. Grad-CAM++ and LIME algorithms improve the post hoc explainability of Xception and verify that it is learning features found in the critical locations of the image. Both methods agree on the suggested locations, strengthening the abovementioned outcome. Full article
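
The general recipe behind such localisation maps can be sketched as follows. This is plain Grad-CAM rather than Grad-CAM++, written against a generic PyTorch classifier; `target_layer` (typically the last convolutional block) and the normalisation are assumptions rather than the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Plain Grad-CAM heat-map for one image (C, H, W) and one class (illustrative sketch)."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    model.eval()
    score = model(image.unsqueeze(0))[0, class_idx]   # logit of the class of interest
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))       # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
```

Grad-CAM++ replaces the plain gradient average with higher-order weighting terms, which tends to sharpen the maps when several instances of the target object appear in one image.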

23 pages, 4081 KiB  
Article
On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps
by Arjun Vinayak Chikkankod and Luca Longo
Mach. Learn. Knowl. Extr. 2022, 4(4), 1042-1064; https://doi.org/10.3390/make4040053 - 18 Nov 2022
Abstract
Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts during the data acquisition phase contaminate these signals, making their analysis more difficult. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts. Autoencoders have automated artifact detection and removal by representing inputs in a lower-dimensional latent space. However, little research is devoted to understanding the minimum dimension of such a latent space that allows meaningful input reconstruction. Person-specific convolutional autoencoders are designed by manipulating the size of their latent space. An overlapping sliding-window technique is employed to segment the signals into windows of varying sizes. Five topographic head-maps are formed in the frequency domain for each window. The latent space of the autoencoders is assessed using input reconstruction capacity and classification utility. Findings indicate that the minimal latent space dimension is 25% of the size of the topographic maps for achieving maximum reconstruction capacity and classification accuracy, obtained with a window length of at least 1 s and a shift of 125 ms at a 128 Hz sampling rate. This research contributes to the body of knowledge with an architectural pipeline for eliminating redundant EEG data while preserving relevant features with deep autoencoders. Full article
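
A person-specific convolutional autoencoder with a tunable latent dimension, of the general kind described, could look like the sketch below. The five input channels mirror the five head-maps per window; the map size, channel counts, and layer depths are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder whose bottleneck size is a constructor argument."""

    def __init__(self, in_channels: int = 5, latent_dim: int = 64, map_size: int = 32):
        super().__init__()
        flat = 32 * (map_size // 4) ** 2
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),   # map_size -> map_size/2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),            # map_size/2 -> map_size/4
            nn.Flatten(),
            nn.Linear(flat, latent_dim),                                     # bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, flat),
            nn.Unflatten(1, (32, map_size // 4, map_size // 4)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)             # latent representation
        return self.decoder(z), z       # reconstruction and latent code

# With these assumed shapes, a latent space at 25% of the input size would be
# latent_dim = int(0.25 * in_channels * map_size ** 2).
```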

22 pages, 923 KiB  
Article
A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence
by Mi-Young Kim, Shahin Atakishiyev, Housam Khalifa Bashier Babiker, Nawshad Farruque, Randy Goebel, Osmar R. Zaïane, Mohammad-Hossein Motallebi, Juliano Rabelo, Talat Syed, Hengshuai Yao and Peter Chun
Mach. Learn. Knowl. Extr. 2021, 3(4), 900-921; https://doi.org/10.3390/make3040045 - 18 Nov 2021
Abstract
The rapid growth of research in explainable artificial intelligence (XAI) follows on two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver any set of tools and techniques to address the XAI demand. As some surveys of current XAI suggest, there is yet to appear a principled framework that respects the literature on explainability in the history of science and which provides a basis for the development of a framework for transparent XAI. We identify four foundational components, including the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjusting explanations based on knowledge of the explainee, and (4) exploiting the advantage of interactive explanation. With those four components in mind, we intend to provide a strategic inventory of XAI requirements, demonstrate their connection to a basic history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI. Full article

31 pages, 4782 KiB  
Article
Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain
by Samanta Knapič, Avleen Malhi, Rohit Saluja and Kary Främling
Mach. Learn. Knowl. Extr. 2021, 3(3), 740-770; https://doi.org/10.3390/make3030037 - 19 Sep 2021
Abstract
In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastric images obtained by video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20) with three distinct forms of explanations were quantitatively analyzed. We found that, as hypothesized, the CIU-explainable method performed better than both LIME and SHAP methods in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts. Full article
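
For a sense of how a perturbation-based explainer such as LIME produces visual explanations for a CNN, the sketch below masks superpixels at random, queries the black box, and fits a weighted linear surrogate. It is a simplified illustration, not the LIME library nor the study's implementation, and `classify` and `segments` are assumed inputs.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_image_sketch(classify, image, segments, class_idx, n_samples=500, seed=0):
    """LIME-style superpixel importances for one image (illustrative sketch).

    classify : callable mapping a batch of images (N, H, W, C) to class probabilities
    segments : integer array of shape (H, W) labelling the superpixels
    """
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    masks = rng.integers(0, 2, size=(n_samples, len(seg_ids)))   # which superpixels stay on
    perturbed = []
    for m in masks:
        img = image.copy()
        for sid, keep in zip(seg_ids, m):
            if not keep:
                img[segments == sid] = 0                         # blank out the superpixel
        perturbed.append(img)
    probs = classify(np.stack(perturbed))[:, class_idx]
    proximity = np.exp(-(1.0 - masks.mean(axis=1)))              # crude closeness-to-original kernel
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=proximity)
    return dict(zip(seg_ids, surrogate.coef_))                   # importance per superpixel
```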

47 pages, 6520 KiB  
Article
Classification of Explainable Artificial Intelligence Methods through Their Output Formats
by Giulia Vilone and Luca Longo
Mach. Learn. Knowl. Extr. 2021, 3(3), 615-661; https://doi.org/10.3390/make3030032 - 4 Aug 2021
Abstract
Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”; “explainable machine learning”; and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliography of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them to select the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet the requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, thus the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and the new regulations. Full article

17 pages, 1553 KiB  
Article
Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability
by Muhammad Rehman Zafar and Naimul Khan
Mach. Learn. Knowl. Extr. 2021, 3(3), 525-541; https://doi.org/10.3390/make3030027 - 30 Jun 2021
Abstract
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction, generating simulated data around the instance by random perturbation and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where for the same prediction, different explanations can be generated. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster of the new instance that is being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME. Full article
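
The deterministic pipeline described above (hierarchical clustering, nearest-cluster selection, then an interpretable surrogate) can be sketched with scikit-learn as follows. Function and variable names are illustrative and this is not the authors' reference implementation; `black_box_predict` is assumed to return a continuous score such as the probability of the class being explained.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

def dlime_style_explanation(black_box_predict, X_train, x, n_clusters=10):
    """Deterministic, LIME-style local explanation (illustrative sketch of the idea)."""
    # 1. Agglomerative hierarchical clustering of the training data (deterministic).
    cluster_ids = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    # 2. KNN picks the cluster that the instance being explained belongs to.
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, cluster_ids)
    local_cluster = X_train[cluster_ids == knn.predict(x.reshape(1, -1))[0]]
    # 3. Fit a simple interpretable surrogate (here a linear model) on that cluster.
    surrogate = Ridge(alpha=1.0).fit(local_cluster, black_box_predict(local_cluster))
    return surrogate.coef_                     # feature weights serve as the explanation
```

Because every step is deterministic, repeated calls for the same instance return the same explanation, which is the stability property the abstract emphasises.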

Other

31 pages, 1054 KiB  
Systematic Review
XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process
by Tobias Clement, Nils Kemmerzell, Mohamed Abdelaal and Michael Amberg
Mach. Learn. Knowl. Extr. 2023, 5(1), 78-108; https://doi.org/10.3390/make5010006 - 11 Jan 2023
Abstract
Currently, explainability represents a major barrier that Artificial Intelligence (AI) is facing in regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to start with the development of XAI software and to optimally select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process, including requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude by highlighting new directions for future research. Full article
