Explainable AI for Image-Aided Diagnosis

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "AI in Imaging".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 7657

Special Issue Editors


Guest Editor
Department of Engineering, University of Trás-os-Montes and Alto Douro and INESC TEC, 5000-801 Vila Real, Portugal
Interests: medical image analysis; bio-image analysis; computer vision; image and video processing; machine learning; artificial intelligence, with a focus on computer-aided diagnosis across various imaging modalities, including ophthalmology, endoscopic capsule video, and computed tomography of the lung
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Department of Engineering, University of Trás-os-Montes and Alto Douro, Vila Real, Portugal
Interests: intelligent systems; machine learning; control; robotics and vision

Guest Editor
ISR, University of Coimbra & UTAD, Coimbra, Portugal
Interests: system identification; control theory; differential games and gas networks

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is a rapidly advancing field with many potential applications, and computer-aided diagnosis (CAD) systems are among the most promising. Gastroenterology, ophthalmology, and radiology are examples of specialties in which AI algorithms can automatically identify potential abnormalities or diseases from images, helping specialists and other medical professionals make more accurate and timely diagnoses, ultimately resulting in earlier treatment and better patient outcomes. Although AI algorithms can achieve high performance in many tasks, their decision-making process is often not easy to understand. For this reason, such a system is frequently referred to as a "black box": its opacity makes it difficult for practitioners to understand the reasoning behind the algorithm's decisions and limits their ability to trust and interpret the results.

This lack of transparency and explanation in AI algorithms is a complex issue that is being addressed by the field of explainable artificial intelligence (XAI). XAI encompasses various approaches and methodologies that aim to make AI systems more transparent, interpretable, and explainable to users, including techniques such as model interpretability, feature importance, and counterfactual analysis. By developing AI systems that are more transparent and explainable, XAI aims to increase trust in the technology and ensure that it is used responsibly.
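To make one of these techniques concrete, the short Python sketch below illustrates feature importance estimated by occlusion: regions of an input image are masked in turn, and the drop in the classifier's confidence for the class of interest measures how much each region contributed to the decision. This is an illustrative sketch only, not a method prescribed by this Special Issue; the model, image tensor, and target class are assumed placeholders for any PyTorch-style image classifier.

import torch

def occlusion_saliency(model, image, target_class, patch=16, stride=8):
    # image: tensor of shape (1, C, H, W), already normalized for the model.
    # Returns a grid of importance scores (higher = more important region).
    model.eval()
    with torch.no_grad():
        # Baseline confidence for the class of interest.
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
        _, _, H, W = image.shape
        rows = (H - patch) // stride + 1
        cols = (W - patch) // stride + 1
        heatmap = torch.zeros(rows, cols)
        for i in range(rows):
            for j in range(cols):
                occluded = image.clone()
                y, x = i * stride, j * stride
                occluded[:, :, y:y + patch, x:x + patch] = 0.0  # mask one region
                prob = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                heatmap[i, j] = base - prob  # confidence drop = importance
    return heatmap

Upsampling the resulting heatmap and overlaying it on the original scan gives clinicians a visual indication of which image regions drove the prediction.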

This Special Issue aims to review the latest research progress in the field of XAI and its application across different medical imaging research areas. We encourage the submission of conceptual, empirical, and literature review papers focusing on this field.

Submissions addressing different types of, and approaches to, XAI are welcome in this Special Issue.

Dr. António Cunha
Dr. Paulo A.C. Salgado
Dr. Teresa Paula Perdicoúlis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • model interpretability
  • explainable artificial intelligence
  • human-understandable AI systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

31 pages, 7266 KiB  
Article
Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification
by Rawan Ghnemat, Sawsan Alodibat and Qasem Abu Al-Haija
J. Imaging 2023, 9(9), 177; https://doi.org/10.3390/jimaging9090177 - 30 Aug 2023
Cited by 11 | Viewed by 6813
Abstract
Recently, deep learning has gained significant attention as a noteworthy division of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is the need for more interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification to enhance the interpretability of the decision-making process. Our approach is based on segmenting the images to provide a better understanding of how the AI model arrives at its results. We evaluated our model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, Chest X-ray (COVID-19 and Pneumonia), COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and COVID-19 Radiography Database. We achieved testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. Our proposed model improved accuracy and reduced time complexity, making it more practical for medical diagnosis. Our approach offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis.
(This article belongs to the Special Issue Explainable AI for Image-Aided Diagnosis)
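For readers who would like a feel for segmentation-based explanation of the kind this abstract describes, the sketch below follows a generic LIME-style recipe rather than the authors' actual implementation: the image is partitioned into superpixels, random subsets of segments are blanked out, and a least-squares fit attributes the classifier's confidence to individual segments. The predict_proba callable and parameter values are assumptions made for illustration; scikit-image and NumPy are required.

import numpy as np
from skimage.segmentation import slic

def segment_attribution(image, predict_proba, target_class, n_samples=200, seed=0):
    # image: (H, W, 3) float array in [0, 1].
    # predict_proba: maps a batch (N, H, W, 3) -> (N, n_classes) of probabilities.
    # Returns one importance score per superpixel (higher = more supportive).
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=50, start_label=0)  # superpixel labels
    n_seg = segments.max() + 1
    # Random on/off masks over segments, then render the perturbed images.
    masks = rng.integers(0, 2, size=(n_samples, n_seg)).astype(bool)
    batch = np.stack([
        np.where(mask[segments][..., None], image, 0.0) for mask in masks
    ])
    probs = predict_proba(batch)[:, target_class]
    # Least-squares fit: how much does keeping each segment raise the score?
    weights, *_ = np.linalg.lstsq(masks.astype(float), probs, rcond=None)
    return weights

Segments with large positive weights are the regions whose presence most supports the predicted class, and these can then be highlighted on the image for the clinician.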
