Interpretable and Explainable AI Applications

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: closed (30 September 2024) | Viewed by 22563

Special Issue Editors


Prof. Dr. Mobyen Uddin Ahmed
Guest Editor
School of Innovation, Design and Engineering (IDT), Mälardalen University, Box 883, 721 23 Västerås, Sweden
Interests: deep learning; XAI; human-centric AI; case-based reasoning; data mining; fuzzy logic and other machine learning and machine intelligence approaches for analytics—especially in big data

Prof. Dr. Rosina O. Weber
Guest Editor
College of Computing & Informatics, Drexel University, Philadelphia, PA 19104, USA
Interests: use-inspired textual agents; explainable agency; case-based reasoning

Special Issue Information

Dear Colleagues,

The sub-fields of interpretable machine learning (IML) and explainable artificial intelligence (XAI) overlap substantially. Both pursue the broader goal that systems incorporating AI methods should be interpretable to their users and designers. Because AI is a much broader field than ML, the two sub-fields have considerable synergy and must take one another's contributions into account. This Special Issue targets contributions that take this broader view of the field and investigate how AI systems explain themselves, whether through interpretability alone or through a combination of interpretability and explainability. IML and XAI will also play a vital role in sustainability, since the sustainable development of AI applications depends on humans and society being able to trust AI.

The aim of this Special Issue is to provide a leading forum for the timely, in-depth presentation of recent advances in the research and development of interpretability and explainability techniques for AI applications.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • How artificial intelligence methods and systems explain their decisions;
  • Interpretability of AI models and methods;
  • Validation of explainability or interpretability approaches for AI;
  • Robustness of methods for interpretability and explainability;
  • Applications adopting AI methods with explainability or interpretability methods;
  • Applications benefiting from different types of explanation contents, e.g., counterfactuals, feature attribution, instance attribution;
  • Social aspects of explainability and interpretability in AI;
  • Accountability of AI systems.

Please include in your submission a statement indicating whether your manuscript's contribution lies in computing and engineering or in social aspects. If your submission contributes to both, please indicate which authors contributed to each.

We look forward to receiving your contributions.

You may choose our Joint Special Issue in Sustainability.

Prof. Dr. Mobyen Uddin Ahmed
Prof. Dr. Rosina O. Weber
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainability
  • interpretability
  • artificial intelligence applications
  • validation
  • accountability

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


18 pages, 746 KiB  
Article
Evaluating Anomaly Explanations Using Ground Truth
by Liat Antwarg Friedman, Chen Galed, Lior Rokach and Bracha Shapira
AI 2024, 5(4), 2375-2392; https://doi.org/10.3390/ai5040117 - 15 Nov 2024
Viewed by 838
Abstract
The widespread use of machine and deep learning algorithms for anomaly detection has created a critical need for robust explanations that can identify the features contributing to anomalies. However, effective evaluation methodologies for anomaly explanations are currently lacking, especially those that compare the explanations against the true underlying causes, or ground truth. This paper aims to address this gap by introducing a rigorous, ground-truth-based framework for evaluating anomaly explanation methods, which enables the assessment of explanation correctness and robustness—key factors for actionable insights in anomaly detection. To achieve this, we present an innovative benchmark dataset of digital circuit truth tables with model-based anomalies, accompanied by local ground truth explanations. These explanations were generated using a novel algorithm designed to accurately identify influential features within each anomaly. Additionally, we propose an evaluation methodology based on correctness and robustness metrics, specifically tailored to quantify the reliability of anomaly explanations. This dataset and evaluation framework are publicly available to facilitate further research and standardize evaluation practices. Our experiments demonstrate the utility of this dataset and methodology by evaluating common model-agnostic explanation methods in an anomaly detection context. The results highlight the importance of ground-truth-based evaluation for reliable and interpretable anomaly explanations, advancing both theory and practical applications in explainable AI. This work establishes a foundation for rigorous, evidence-based assessments of anomaly explanations, fostering greater transparency and trust in AI-driven anomaly detection systems.
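As a rough illustration of the evaluation idea described above (not the authors' actual benchmark or metrics), correctness can be scored as the overlap between an explainer's top-ranked features and the features known from the ground truth to have caused the anomaly; the helper below is hypothetical:

```python
# Illustrative sketch only: correctness of one anomaly explanation measured
# as top-k overlap with the ground-truth causal features.
def correctness_at_k(attributions, ground_truth_features, k=3):
    """attributions: dict mapping feature name -> importance score for one anomaly.
    ground_truth_features: set of features that actually caused the anomaly."""
    top_k = sorted(attributions, key=lambda f: abs(attributions[f]), reverse=True)[:k]
    hits = len(set(top_k) & set(ground_truth_features))
    return hits / min(k, len(ground_truth_features))

# Example: the explainer blames A and B; the ground truth says A and C.
explanation = {"A": 0.9, "B": 0.4, "C": 0.1, "D": 0.0}
print(correctness_at_k(explanation, {"A", "C"}, k=2))  # 0.5
```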

29 pages, 7459 KiB  
Article
Leveraging Explainable Artificial Intelligence (XAI) for Expert Interpretability in Predicting Rapid Kidney Enlargement Risks in Autosomal Dominant Polycystic Kidney Disease (ADPKD)
by Latifa Dwiyanti, Hidetaka Nambo and Nur Hamid
AI 2024, 5(4), 2037-2065; https://doi.org/10.3390/ai5040100 - 28 Oct 2024
Viewed by 1253
Abstract
Autosomal dominant polycystic kidney disease (ADPKD) is the predominant hereditary factor leading to end-stage renal disease (ESRD) worldwide, affecting individuals across all races with a prevalence of 1 in 400 to 1 in 1000. The disease presents significant challenges in management, particularly with limited options for slowing cyst progression, as well as the use of tolvaptan being restricted to high-risk patients due to potential liver injury. However, determining high-risk status typically requires magnetic resonance imaging (MRI) to calculate total kidney volume (TKV), a time-consuming process demanding specialized expertise. Motivated by these challenges, this study proposes alternative methods for high-risk categorization that do not rely on TKV data. Utilizing historical patient data, we aim to predict rapid kidney enlargement in ADPKD patients to support clinical decision-making. We applied seven machine learning algorithms—Random Forest, Logistic Regression, Support Vector Machine (SVM), Light Gradient Boosting Machine (LightGBM), Gradient Boosting Tree, XGBoost, and Deep Neural Network (DNN)—to data from the Polycystic Kidney Disease Outcomes Consortium (PKDOC) database. The XGBoost model, combined with the Synthetic Minority Oversampling Technique (SMOTE), yielded the best performance. We also leveraged explainable artificial intelligence (XAI) techniques, specifically Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), to visualize and clarify the model’s predictions. Furthermore, we generated text summaries to enhance interpretability. To evaluate the effectiveness of our approach, we proposed new metrics to assess explainability and conducted a survey with 27 doctors to compare models with and without XAI techniques. The results indicated that incorporating XAI and textual summaries significantly improved expert explainability and increased confidence in the model’s ability to support treatment decisions for ADPKD patients.
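The study's actual pipeline operates on PKDOC clinical features; the sketch below only illustrates, on synthetic data, the general pattern the abstract describes (SMOTE oversampling, an XGBoost classifier, and SHAP feature attributions). All data, names, and parameters here are illustrative assumptions:

```python
# Illustrative pattern only (synthetic data, not the PKDOC cohort):
# oversample the minority class with SMOTE, fit XGBoost, explain with SHAP.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
import shap

X, y = make_classification(n_samples=500, n_features=8, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Balance the training set, then fit the classifier.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
model = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_res, y_res)

# Per-feature SHAP attributions for each test prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(shap_values.shape)  # (n_test_samples, n_features) for a binary XGBoost model
```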

11 pages, 517 KiB  
Article
An Empirical Comparison of Interpretable Models to Post-Hoc Explanations
by Parisa Mahya and Johannes Fürnkranz
AI 2023, 4(2), 426-436; https://doi.org/10.3390/ai4020023 - 19 May 2023
Cited by 2 | Viewed by 4151
Abstract
Recently, considerable effort has gone into explaining opaque, black-box models such as deep neural networks or random forests. So-called model-agnostic methods typically approximate the prediction of the black-box model with an interpretable model, without considering any specifics of the black-box model itself. It is a valid question whether directly learning interpretable white-box models should be preferred over post-hoc approximations of black-box models. In this paper, we report the results of an empirical study that compares post-hoc explanations and interpretable models on several datasets, for both rule-based and feature-based interpretable models. The results suggest that directly learned interpretable models often approximate the black-box models at least as well as their post-hoc surrogates, even though the former do not have direct access to the black-box model.
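For readers unfamiliar with the setup, the comparison can be illustrated as follows. This is a minimal sketch of the general idea (assuming scikit-learn models and a standard dataset), not the paper's experimental protocol:

```python
# Sketch: train a black-box model, then compare (a) a post-hoc surrogate tree
# fit on its predictions with (b) a tree learned directly from the labels.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# (a) Post-hoc surrogate: imitate the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, black_box.predict(X_tr))
# (b) Directly learned interpretable model of the same class, fit on the true labels.
direct = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("surrogate fidelity :", accuracy_score(black_box.predict(X_te), surrogate.predict(X_te)))
print("surrogate accuracy :", accuracy_score(y_te, surrogate.predict(X_te)))
print("direct accuracy    :", accuracy_score(y_te, direct.predict(X_te)))
```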

Review


32 pages, 1570 KiB  
Review
Explainable Image Classification: The Journey So Far and the Road Ahead
by Vidhya Kamakshi and Narayanan C. Krishnan
AI 2023, 4(3), 620-651; https://doi.org/10.3390/ai4030033 - 1 Aug 2023
Cited by 8 | Viewed by 14067
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in model explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and allied learning-based classifiers, and applying neural architectural search techniques to minimize the accuracy–interpretability tradeoff. This survey paper provides a comprehensive overview of the state-of-the-art in XAI, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.
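As a concrete, minimal example of what explaining an image classifier can look like in practice, the sketch below computes a simple gradient saliency map for a toy CNN on a random input; it is an illustrative assumption, not a method evaluated or recommended by the survey:

```python
# Minimal gradient-saliency sketch: how strongly each input pixel affects the
# top-class logit of a (toy) image classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a real image
score = model(image)[0].max()                          # top-class logit
score.backward()

saliency = image.grad.abs().max(dim=1).values          # (1, 32, 32) per-pixel importance
print(saliency.shape)
```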
