Topic Editors

Prof. Dr. Yudong Zhang
School of Informatics, University of Leicester, Informatics Building, Leicester LE1 7RH, UK
Prof. Dr. Juan Manuel Gorriz
Department of Signal Theory, Telematics and Communications, University of Granada, 18071 Granada, Spain
Dr. Zhengchao Dong
Molecular Imaging and Neuropathology Division, Columbia University and New York State Psychiatric Institute, New York, NY 10032, USA

Explainable AI for Health

Abstract submission deadline: closed (8 May 2024)
Manuscript submission deadline: closed (8 August 2024)

Topic Information

Dear Colleagues,

Health is a state of complete physical, mental, and social well-being, not merely the absence of disease or infirmity. Artificial intelligence (AI) has recently been applied widely in health and related fields, where it has proven to be a powerful tool for assisting medical professionals in diagnosing various diseases. However, most AI models remain black boxes whose outputs offer little support for decision-making. This poor explainability breeds distrust among clinicians and doctors, who are trained to make explainable diagnoses.

Thus, there is an urgent need for novel methodologies that improve the explainability of the AI methods already used routinely in clinical practice. Explainable deep learning (DL) methods will help both patients and physicians interpret a diagnosis. This Topic highlights advances in explainable AI theories and models in health. Both conventional and new explainable-AI-related papers are welcome.

Prof. Dr. Yudong Zhang
Prof. Dr. Juan Manuel Gorriz
Dr. Zhengchao Dong
Topic Editors

Keywords

  • oncological imaging
  • tumor detection and diagnosis
  • omics
  • supervised and unsupervised learning
  • kernel methods
  • deep neural networks
  • mathematical modeling
  • graph neural network
  • attention neural network
  • healthcare
  • disease diagnosis

Participating Journals

Journal Name                  Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Applied Sciences (applsci)    2.5             5.3         2011            17.8 Days                 CHF 2400
Cancers (cancers)             4.5             8.0         2009            16.3 Days                 CHF 2900
Cells (cells)                 5.1             9.9         2012            17.5 Days                 CHF 2700
Electronics (electronics)     2.6             5.3         2012            16.8 Days                 CHF 2400
AI (ai)                       3.1             7.2         2020            17.6 Days                 CHF 1600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (4 papers)

24 pages, 393 KiB  
Article
A Hybrid System Based on Bayesian Networks and Deep Learning for Explainable Mental Health Diagnosis
by Juan Pavez and Héctor Allende
Appl. Sci. 2024, 14(18), 8283; https://doi.org/10.3390/app14188283 - 14 Sep 2024
Abstract
Mental illnesses are becoming one of the most common health concerns among the population. Despite the proven efficacy of psychological treatments, mental illnesses are largely underdiagnosed, particularly in developing countries. A key factor contributing to this is the scarcity of mental health providers capable of diagnosing. In this work, we propose a novel method that combines the general capabilities and accuracy of large language models with the explainability of Bayesian networks. Our system analyzes descriptions of symptoms provided by users and written in natural language and, based on these descriptions, asks questions to confirm or refine the initial diagnosis made by the deep learning model. We trained our model on a large-scale dataset collected from various internet sources, comprising over 2.3 million data points. The initial prediction from the large language model is refined through symptom confirmation questions derived from a probabilistic graphical model constructed by experts based on the DSM-5 diagnostic manual. We present results from symptom descriptions sourced from the internet and clinical vignettes extracted from behavioral science exams, demonstrating the effectiveness of our hybrid model in classifying mental health disorders. Our model achieves high accuracy in classifying a wide range of mental health disorders, providing transparent and explainable predictions.
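As an illustration of the hybrid scheme this abstract describes, here is a minimal sketch in which a prior over diagnoses (standing in for the language model's initial prediction) is refined by Bayes' rule as symptom questions are answered. The disorders, symptoms, and probabilities below are invented placeholders; the paper's actual graphical model is expert-built from the DSM-5.

```python
# Sketch: refine a model's initial diagnosis with symptom confirmations.
# All disorders, symptoms, and probabilities are illustrative assumptions.

# Prior over candidate diagnoses, e.g. taken from a language model's output.
prior = {"depression": 0.5, "generalized_anxiety": 0.3, "panic_disorder": 0.2}

# P(symptom present | disorder), sketching expert-elicited DSM-5-style tables.
likelihood = {
    "insomnia":     {"depression": 0.8, "generalized_anxiety": 0.6, "panic_disorder": 0.3},
    "racing_heart": {"depression": 0.2, "generalized_anxiety": 0.5, "panic_disorder": 0.9},
}

def update(posterior, symptom, present):
    """Bayes update after the user confirms or denies one symptom."""
    out = {}
    for dx, p in posterior.items():
        p_sym = likelihood[symptom][dx]
        out[dx] = p * (p_sym if present else 1.0 - p_sym)
    z = sum(out.values())  # renormalize so the posterior sums to 1
    return {dx: p / z for dx, p in out.items()}

posterior = update(prior, "racing_heart", present=True)
posterior = update(posterior, "insomnia", present=False)
print(max(posterior, key=posterior.get), posterior)
```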

13 pages, 1862 KiB  
Article
Deep Learning-Based Early Warning Score for Predicting Clinical Deterioration in General Ward Cancer Patients
by Ryoung-Eun Ko, Zero Kim, Bomi Jeon, Migyeong Ji, Chi Ryang Chung, Gee Young Suh, Myung Jin Chung and Baek Hwan Cho
Cancers 2023, 15(21), 5145; https://doi.org/10.3390/cancers15215145 - 26 Oct 2023
Abstract
Background: Cancer patients who are admitted to hospitals are at high risk of short-term deterioration due to treatment-related or cancer-specific complications. A rapid response system (RRS) is initiated when patients who are deteriorating or at risk of deteriorating are identified. This study was conducted to develop a deep learning-based early warning score (EWS) for cancer patients (Can-EWS) using delta values in vital signs. Methods: A retrospective cohort study was conducted on all oncology patients who were admitted to the general ward between 2016 and 2020. The data were divided into a training set (January 2016–December 2019) and a held-out test set (January 2020–December 2020). The primary outcome was clinical deterioration, defined as the composite of in-hospital cardiac arrest (IHCA) and unexpected intensive care unit (ICU) transfer. Results: During the study period, 19,739 cancer patients were admitted to the general wards and eligible for this study. Clinical deterioration occurred in 894 cases. IHCA and unexpected ICU transfer prevalence was 1.77 per 1000 admissions and 43.45 per 1000 admissions, respectively. We developed two models: Can-EWS V1, which used input vectors of the original five input variables, and Can-EWS V2, which used input vectors of 10 variables (including an additional five delta variables). The cross-validation performance of the clinical deterioration for Can-EWS V2 (AUROC, 0.946; 95% confidence interval [CI], 0.943–0.948) was higher than that for MEWS of 5 (AUROC, 0.589; 95% CI, 0.587–0.560; p < 0.001) and Can-EWS V1 (AUROC, 0.927; 95% CI, 0.924–0.931). As a virtual prognostic study, additional validation was performed on held-out test data. The AUROC and 95% CI were 0.588 (95% CI, 0.588–0.589), 0.890 (95% CI, 0.888–0.891), and 0.898 (95% CI, 0.897–0.899), for MEWS of 5, Can-EWS V1, and the deployed model Can-EWS V2, respectively. Can-EWS V2 outperformed other approaches for specificities, positive predictive values, negative predictive values, and the number of false alarms per day at the same sensitivity level on the held-out test data. Conclusions: We have developed and validated a deep learning-based EWS for cancer patients using the original values and differences between consecutive measurements of basic vital signs. The Can-EWS has acceptable discriminatory power and sensitivity, with extremely decreased false alarms compared with MEWS.
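The delta-value input construction described above is simple to illustrate: each vital sign contributes its current value plus its change since the previous measurement, turning five inputs into ten. The sketch below assumes one plausible choice of the five basic vital signs; the values are synthetic and not from the paper.

```python
# Sketch: build 10-dimensional Can-EWS-style inputs (5 vitals + 5 deltas).
# Which five vital signs are used is an assumption for illustration.
import numpy as np

# Rows = successive measurements; columns = heart rate, respiratory rate,
# systolic BP, body temperature, SpO2.
vitals = np.array([
    [ 88.0, 18.0, 120.0, 36.8, 97.0],
    [ 95.0, 22.0, 112.0, 37.4, 94.0],
    [104.0, 26.0, 101.0, 38.1, 91.0],
])

deltas = np.diff(vitals, axis=0)                 # change since previous reading
current = vitals[1:]                             # readings that have a delta
features = np.concatenate([current, deltas], 1)  # 10-dimensional model input
print(features.shape)  # (2, 10)
```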

15 pages, 2055 KiB  
Article
Privacy-Preserving Convolutional Bi-LSTM Network for Robust Analysis of Encrypted Time-Series Medical Images
by Manjur Kolhar and Sultan Mesfer Aldossary
AI 2023, 4(3), 706-720; https://doi.org/10.3390/ai4030037 - 28 Aug 2023
Abstract
Deep learning (DL) algorithms can improve healthcare applications. DL has improved medical imaging diagnosis, therapy, and illness management. The use of deep learning algorithms on sensitive medical images presents privacy and data security problems. Improving medical imaging while protecting patient anonymity is difficult. Thus, privacy-preserving approaches for deep learning model training and inference are gaining popularity. These image sequences are analyzed using state-of-the-art computer-aided detection/diagnosis (CAD) techniques. Algorithms that upload medical images to servers pose privacy issues. This article presents a convolutional Bi-LSTM network to assess fully homomorphically encrypted (HE) time-series medical images. From secret image sequences, convolutional blocks learn to extract selective spatial features, and Bi-LSTM-based analytical sequence layers learn to encode temporal data. A weighted unit and sequence voting layer uses spatial information with varying weights to boost efficiency and reduce incorrect diagnoses. Two rigorous benchmarks, the CheXpert and BreaKHis public datasets, illustrate the framework's efficacy. The technique outperforms numerous rival methods with an accuracy above 0.99 for both datasets. These results demonstrate that the proposed framework can extract visual representations and sequential dynamics from encrypted medical image sequences, protecting privacy while attaining good medical image analysis performance.
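For readers unfamiliar with the architecture named in this abstract, the sketch below shows an unencrypted analogue of a convolutional Bi-LSTM: a convolutional block extracts spatial features from each frame, and a bidirectional LSTM encodes the sequence. All layer sizes are illustrative assumptions, and the homomorphic-encryption machinery the paper layers on top is omitted.

```python
# Sketch: plaintext analogue of a convolutional Bi-LSTM over image sequences.
import torch
import torch.nn as nn

class ConvBiLSTM(nn.Module):
    def __init__(self, n_classes=2, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(              # per-frame spatial features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # 16 * 4 * 4 = 256 per frame
        )
        self.lstm = nn.LSTM(256, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        f = self.conv(x.flatten(0, 1))     # fold time into batch: (b*t, 256)
        f = f.view(b, t, -1)
        out, _ = self.lstm(f)              # (b, t, 2*hidden), both directions
        return self.head(out[:, -1])       # classify from the last time step

model = ConvBiLSTM()
logits = model(torch.randn(2, 5, 1, 64, 64))  # 2 sequences of 5 frames
print(logits.shape)  # (2, 2)
```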

16 pages, 1251 KiB  
Article
Feature Importance Analysis of a Deep Learning Model for Predicting Late Bladder Toxicity Occurrence in Uterine Cervical Cancer Patients
by Wonjoong Cheon, Mira Han, Seonghoon Jeong, Eun Sang Oh, Sung Uk Lee, Se Byeong Lee, Dongho Shin, Young Kyung Lim, Jong Hwi Jeong, Haksoo Kim and Joo Young Kim
Cancers 2023, 15(13), 3463; https://doi.org/10.3390/cancers15133463 - 2 Jul 2023
Abstract
(1) In this study, we developed a deep learning (DL) model that can be used to predict late bladder toxicity. (2) We collected data obtained from 281 uterine cervical cancer patients who underwent definitive radiation therapy. The DL model was trained using 16 features, including patient, tumor, treatment, and dose parameters, and its performance was compared with that of a multivariable logistic regression model using the following metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUROC). In addition, permutation feature importance was calculated to interpret the DL model for each feature, and a lightweight DL model was designed to focus on the top five important features. (3) The DL model outperformed the multivariable logistic regression model on our dataset. It achieved an F1-score of 0.76 and an AUROC of 0.81, while the corresponding values for the multivariable logistic regression were 0.14 and 0.43, respectively. The DL model identified the dose to the most exposed 2 cc volume of the bladder (BD2cc) as the most important feature, followed by BD5cc and the ICRU bladder point. In the case of the lightweight DL model, the F1-score and AUROC were 0.90 and 0.91, respectively. (4) The DL models exhibited superior performance in predicting late bladder toxicity compared with the statistical method, and the interpretation of the model further emphasized its potential for improving patient outcomes and minimizing treatment-related complications with a high level of reliability.
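Permutation feature importance, the interpretation method this abstract relies on, is easy to demonstrate: shuffle one feature at a time and measure how much the held-out score drops; larger drops mean the model relied more heavily on that feature. The sketch below uses synthetic data and a logistic-regression stand-in rather than the paper's deep model or its 16 clinical and dosimetric features.

```python
# Sketch: permutation feature importance on a stand-in classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a 16-feature clinical dataset.
X, y = make_classification(n_samples=300, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Shuffle each feature 20 times and average the drop in held-out accuracy.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("top five features:", ranking[:5])
```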
