Systematic Review

The Promise of Explainable AI in Digital Health for Precision Medicine: A Systematic Review

by Ben Allen
Department of Psychology, University of Kansas, Lawrence, KS 66045, USA
J. Pers. Med. 2024, 14(3), 277; https://doi.org/10.3390/jpm14030277
Submission received: 18 December 2023 / Revised: 14 February 2024 / Accepted: 24 February 2024 / Published: 1 March 2024

Abstract

This review synthesizes the literature on explaining machine-learning models for digital health data in precision medicine. As healthcare increasingly tailors treatments to individual characteristics, the integration of artificial intelligence with digital health data becomes crucial. A Google Scholar search conducted up to 19 September 2023, restricted to peer-reviewed journal articles written in English but with no time constraints, yielded 27 articles. A topic-modeling approach distilled their key themes: optimizing patient healthcare through data-driven medicine, predictive modeling with data and algorithms, predicting diseases with deep learning of biomedical data, and machine learning in medicine. The review then examines specific applications of explainable artificial intelligence, emphasizing its role in fostering transparency, accountability, and trust within the healthcare domain, and highlights the need for further development and validation of explanation methods to advance precision healthcare delivery.

1. Introduction

Precision medicine personalizes treatments and interventions to a patient’s characteristics, such as genetics, environment, and lifestyle [1]. This personalization marks a shift in healthcare toward using information unique to the patient to guide diagnosis and prognosis [2]. Precision medicine has the allure of extending medical treatment beyond the one-size-fits-all approach, especially when advanced bioinformatic strategies are leveraged to interpret and apply clinical data and provide patients with customized medical care [3]. Precision medicine thus has the potential to make healthcare more efficient and effective [4].
A major catalyst for the success of personalized medicine is the integration of diverse forms of digital healthcare data with artificial intelligence to make more accurate interpretations of diagnostic information, reduce medical errors, improve health-system workflow, and promote health [5]. A noteworthy example comes from a study of an artificial-intelligence system trained to suggest chemotherapy treatments based on the treatment response predicted from a patient’s gene-expression data [6]. The prediction models showed accuracy near 80% and might eventually help cancer patients avoid therapies likely to fail. Similar studies describe artificial-intelligence systems trained to suggest antidepressant treatments based on digital health records [7]. Overall, these studies suggest that clinical support systems could help personalize healthcare delivery when given the right reservoir of digital health data [8].
Digital health technologies are a rich reservoir of big health data for personalizing medicine. For example, wearable biosensors can measure valuable health-related physiological data for patient monitoring and management [9]. Telemedicine also has the potential to make healthcare more cost-effective while meeting the increasing demand for and insufficient supply of healthcare providers [10]. Overall, artificial-intelligence applications can leverage digital health data to implement personalized treatment strategies [1,5].
One of the key challenges in implementing precision medicine is integrating diverse and comprehensive data sources that encompass genetic, environmental, and lifestyle factors so that healthcare systems improve patient outcomes and effectively manage diseases [1,4]. Integrating complex datasets can be an overwhelming task for even a team of humans, but it is relatively trivial for artificial-intelligence systems. Such integration is demonstrated by machine-learning models trained to make breast cancer diagnoses using digital health records combined with analysis of mammography images [11]. Notably, the prediction model showed a specificity of 77% and a sensitivity of 87%, suggesting potential for reducing false negatives. As artificial-intelligence systems become more prevalent in healthcare, they will be able to leverage genetic, environmental, and lifestyle data to help advance a personalized-medicine approach [12].
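For readers less familiar with these diagnostic metrics, both follow from the standard confusion-matrix counts of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN); the high sensitivity reported above means relatively few cancers are missed:

```latex
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{specificity} = \frac{TN}{TN + FP}
```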
Integrating important health data requires responsible handling and safeguards to prevent misuse of protected health information [13]. More recently, integrating and interpreting complex health data for personalized medicine is becoming the job of artificial intelligence [5]. Yet, developing countries may have limited access to these artificial-intelligence applications, highlighting the importance of open-source code [14]. Moreover, there can be misuse of health information embedded in prediction models used by artificial-intelligence systems to extract patterns from data to inform medical decisions [15]. As artificial-intelligence applications become more prevalent in healthcare, there is a great need to ensure ethical issues are considered along the way [16].
Integrating artificial intelligence into healthcare offers several potential benefits, including more accurate diagnostic and prognostic tools, more efficient personalization of treatment strategies using big data, and better optimization of healthcare workflows [5,17]. The sheer volume of available patient health data also makes artificial-intelligence-based integration a practical necessity. Artificial-intelligence systems can quickly extract meaningful patterns and insights from multiple data sources, enabling better-informed decisions about how to personalize healthcare [15].
But the desire for accurate artificial-intelligence systems must be balanced with the goals of transparency and interpretability, which build trust among healthcare practitioners and patients and ensure the responsible integration of insights into clinical decision-making [18]. Understanding an artificial-intelligence system’s rationale when it makes medical decisions fosters a collaborative approach between human expertise and machine intelligence [5]. The rising field of explainable artificial intelligence centers on the ability to comprehend and interpret artificial-intelligence systems [19]. Explainable artificial intelligence promotes trust through transparency and accountability in artificial-intelligence applications for healthcare [17]. For precision medicine, healthcare practitioners are more likely to trust the output of complex algorithms they can understand, positioning explainable methods to ensure transparent models for personalized treatment strategies [19,20].
This review critically evaluates the literature on how explainable artificial intelligence can facilitate the pursuit of precision medicine using digital health data. A secondary objective was to identify key strategies and knowledge gaps in addressing the challenges of interpretability and transparency in artificial-intelligence systems for precision medicine. The primary inquiry was to discern the core themes and the status of research at the confluence of digital health, precision medicine, and explainable artificial-intelligence methodologies, and to pinpoint the benefits and challenges of applying explainable artificial-intelligence methods with digital health data for precision medicine.
This paper consolidates recent literature and offers a comprehensive synthesis of how to apply explainable artificial-intelligence methods to digital health data in precision medicine. Machine learning is an effective approach to identifying treatment targets and accurately predicting treatment outcomes [21]. For example, there is evidence for using an artificial-intelligence-based system to select patients for intervention by predicting atrial fibrillation from the electrocardiograph signal [22]. Employing a topic-modeling approach, this study extracted key themes and emerging trends from the literature on using explainable artificial intelligence and digital health data for precision medicine. Topic modeling is an unsupervised learning method for uncovering prevalent themes within a body of text [23,24]. This paper therefore provides a compilation of explainable artificial-intelligence approaches to digital health for precision medicine.

2. Materials and Methods

2.1. Topic-Modeling Procedure Overview

Insights derived from a topic-modeling analysis of the relevant literature directed this review. Specifically, the latent Dirichlet allocation (LDA) algorithm helped uncover prevalent themes in a final corpus of journal articles by analyzing the probability patterns of words and word pairs across documents. The methods outlined in the subsequent sections followed the PRISMA 2020 checklist and were pre-registered on the Open Science Framework (https://osf.io/tpxh6, registered on 19 September 2023) [25]. The Supplementary Materials include the checklist; the code for the data analysis is available at https://zenodo.org/records/10398384, accessed on 19 September 2023.
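For orientation, LDA models each document (here, each paragraph) as a mixture of K latent topics and each topic as a distribution over terms; the formulation below is the standard one and is not drawn from the reviewed articles:

```latex
p(w \mid d) = \sum_{k=1}^{K} p(w \mid z = k)\, p(z = k \mid d),
\qquad
\theta_d \sim \mathrm{Dirichlet}(\alpha), \quad \phi_k \sim \mathrm{Dirichlet}(\beta)
```

Here $\theta_d$ is the topic mixture of paragraph $d$ and $\phi_k$ is the term distribution of topic $k$; Gibbs sampling estimates both from word co-occurrence counts.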

2.2. Journal Article Search Strategy

A Google Scholar search (accessed on 19 September 2023) identified 434 journal articles relevant to this review. The search string was: (“precision medicine” AND “digital health” AND “interpretable machine learning” OR “explainable artificial intelligence”). Inclusion criteria required that an article be a peer-reviewed journal article written in English, have full text available, and include the search terms in the body of the text. The search was not restricted by date, though the earliest article matching the search terms was published in 2018. Citations and full-text articles were imported into the Zotero reference-management software (https://www.zotero.org/). Zotero automatically classifies articles by type (i.e., journal article, pre-print, thesis, etc.), and each article’s classification was verified by the author. To screen for keywords in the text body, the reference sections were removed from each article, and spelling and grammar were checked through Google Docs. From each journal article, we extracted bigrams (consecutive word pairs). Articles containing the search terms only in the reference section were excluded. Figure 1 shows the PRISMA 2020 flowchart, which illustrates how the final set of articles was determined [26]. Table 1 lists the resulting 27 articles that directly connected explainable artificial intelligence to digital health and precision medicine.
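As an illustration of the screening logic only, the snippet below sketches how the Boolean search string could be checked against each article’s body text in R; the body_texts object and its construction are hypothetical stand-ins, not the actual screening tool used.

```r
library(stringr)

# Hypothetical screen: does a body text (reference section already removed)
# satisfy the Boolean search string used for this review?
matches_search_terms <- function(body_text) {
  txt <- str_to_lower(body_text)
  str_detect(txt, "precision medicine") &
    str_detect(txt, "digital health") &
    (str_detect(txt, "interpretable machine learning") |
       str_detect(txt, "explainable artificial intelligence"))
}

# body_texts: hypothetical named character vector of article full texts.
# included <- names(body_texts)[vapply(body_texts, matches_search_terms, logical(1))]
```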

2.3. Topic Modeling in R

All text analysis and pre-processing were performed in the R programming language (version 4.3.1, 16 June 2023). We used the full text of each journal article, with the reference sections deleted, and segmented the articles into paragraphs (n = 1733). The paragraphs were pre-processed by deleting punctuation, numbers, stop words, and symbols using the tm R package (version 0.7-8). Finally, we lemmatized each word and tokenized the text into unigrams, bigrams, and trigrams; this combined the counts of closely related word forms. We removed 1 paragraph with fewer than 5 terms and removed all terms that occurred in only 1 paragraph (n = 241,027), resulting in 1732 paragraphs and 262 unique terms.
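A condensed sketch of this pipeline is given below. It uses the tm package named above; the textstem lemmatizer and the exact cleaning order are assumptions, since the paper does not specify them.

```r
library(tm)        # corpus cleaning and document-term matrix
library(textstem)  # lemmatization (assumed; the paper does not name a lemmatizer)

# paragraphs: character vector of article paragraphs (n = 1733)
corpus <- VCorpus(VectorSource(paragraphs))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, content_transformer(lemmatize_strings))

# Tokenize each paragraph into unigrams, bigrams, and trigrams.
ngram_tokenizer <- function(x)
  unlist(lapply(NLP::ngrams(NLP::words(x), 1:3), paste, collapse = " "))

dtm <- DocumentTermMatrix(corpus, control = list(tokenize = ngram_tokenizer))

# Drop paragraphs with fewer than 5 term tokens, then terms that
# appear in only one paragraph (document frequency of 1).
dtm <- dtm[slam::row_sums(dtm) >= 5, ]
doc_freq <- tabulate(dtm$j, nbins = ncol(dtm))
dtm <- dtm[, doc_freq > 1]
```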
Using the ldatuning R package (version 1.0.2), we calculated coherence metrics for topic models of various sizes to estimate the optimal number of topics in the collection of paragraphs. We randomly split the paragraphs into ten subsets, computed coherence metrics for topic models ranging from 2 to 20 topics, and repeated the process ten times to reduce bias. The median coherence scores across iterations suggested that a 5-topic model was optimal. Subsequently, we employed the Gibbs algorithm to estimate a 5-topic latent Dirichlet allocation model for the entire corpus.
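The model-selection and estimation steps can be reproduced along the lines below with the ldatuning and topicmodels packages; the specific metrics and seed are illustrative assumptions rather than the exact settings used.

```r
library(ldatuning)    # metrics for choosing the number of topics
library(topicmodels)  # Gibbs-sampled latent Dirichlet allocation

# Score candidate models with 2 to 20 topics (metric choice is illustrative).
scores <- FindTopicsNumber(
  dtm,
  topics  = 2:20,
  metrics = c("CaoJuan2009", "Deveaud2014"),
  method  = "Gibbs",
  control = list(seed = 123)
)
FindTopicsNumber_plot(scores)  # inspect where the metrics level off

# Fit the final 5-topic model on the full corpus.
lda_fit <- LDA(dtm, k = 5, method = "Gibbs", control = list(seed = 123))

terms(lda_fit, 10)                  # most probable n-grams per topic
post <- posterior(lda_fit)$topics   # paragraph-by-topic probabilities
head(order(post[, 1], decreasing = TRUE), 100)  # top paragraphs for topic 1
```

The last two lines correspond to the topic evaluation described in the Results: ranking each topic’s most probable n-grams and its 100 most probable paragraphs.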

3. Results

Using a latent Dirichlet allocation model, we built a five-topic model from the corpus of 27 journal articles that matched the search terms. Because topic modeling is unsupervised machine learning, one of the identified topics did not relate directly to the keywords: it captured a segment of paragraphs describing the methods used to conduct literature reviews, and it is omitted from the results below. The remaining four topics are discussed below based on an evaluation of each topic’s most probable n-grams, its 100 most probable paragraphs, and the parent papers’ findings on precision medicine, digital health, and explainable artificial intelligence. The last two topics are merged under one heading because both relate to deep learning and explainable artificial-intelligence research.

3.1. AI Explainability Addresses Ethical Challenges in Healthcare

Artificial intelligence (AI) is integral to offering solutions to various challenges in healthcare, including the standardization of digital health applications and ethical concerns related to patient data use [27,28]. Precision medicine, a common application of AI in digital health, involves tailoring healthcare interventions to subgroups of patients by using prediction models trained on patient characteristics and contextual factors [29,30,31]. However, the reliance on AI in healthcare raises issues regarding transparency and accountability with black-box AI systems whose decision-making processes are opaque [32,33]. Explainable artificial intelligence emerges as a solution to enhance transparency, ensuring that AI-driven decisions are comprehensible to healthcare providers and patients alike [29,34].
Explainable artificial intelligence provides explanations that increase the trustworthiness in the diagnoses and treatments suggested by machine-learning models [32,34,35,36]. While accuracy is necessary in AI systems, healthcare is a critical domain and requires transparent AI systems that offer reliable explanations [28,37]. When combined with rigorous internal and external validation, explainable artificial intelligence can improve model troubleshooting and system auditing, aligning the AI system with potential regulatory requirements, such as those outlined in the regulations on automated artificial-intelligence systems put forth by the European Union [37,38,39].
AI is well-suited to help precision medicine by computing mathematical mappings of the connections between patient characteristics and personalized treatment strategies [40,41,42,43]. However, challenges persist in the validation of machine-learning models for clinical applications [29,44]. Public and private collaborative efforts involving clinicians, computer scientists, and statisticians are essential to effectively map a machine-learning model onto an explanation that can be understood in the service of precision medicine [40,45].
Ethical and social concerns, including issues of accountability, data privacy, and bias, will remain ever-present [32,46]. Explainable artificial intelligence offers a pathway to addressing these concerns by providing transparent explanations for AI-driven decisions, fostering trust and acceptance among stakeholders [47,48]. Differences between machine-learning models trained for practical applications and those trained on proxy tasks make a unitary assessment of interpretability or explainability challenging [49]. As AI continues to grow, there is an ongoing ethical need for the development of explainable artificial-intelligence methods in healthcare [17,50].

3.2. Integrating Explainable AI in Healthcare for Trustworthy Precision Medicine

Integrating explainable artificial intelligence with digital health data is gaining momentum in precision medicine, addressing the need for transparent and understandable models essential for clinical applicability [19,51,52]. As machine-learning models become more complex, interpretability is crucial in clinical contexts such as microbiome research [51,53]. Explainable artificial-intelligence applications can help predict an Alzheimer’s disease diagnosis among patients with mild cognitive impairment, showcasing how interpretable machine-learning algorithms can explain the complex patterns that inform individual patient predictions [54,55]. Such models offer patient-level interpretations, aiding clinicians and patients in understanding the patterns of features that predict conversion to dementia, thus enhancing trust in explainable artificial intelligence as an aid to medical decisions [54,56].
Methods of extracting explanations from complex models can aid in the discovery of new personalized approaches to therapy and new biomarkers [57]. For example, Bayesian networks may serve as a framework for visualizing interactions between biological entities (taxa, genes, metabolites) within a specific environment (the human gut) over time [51]. A model-agnostic approach to explainability is offered by Shapley additive explanations, which enhance understanding at both global and local levels, improve predictive accuracy, and facilitate informed medical decisions [56,58]. Shapley values enable visual explanations of how a model makes patient-level predictions, as well as of the impact of changes in training data on model explanations [59]. Yet a key barrier to advancing AI in healthcare, namely integrating data across platforms and institutions for precision medicine, is the lack of clear governance frameworks for data privacy and security [60].
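To make the mechanics concrete, here is a minimal sketch of patient-level Shapley explanations using the model-agnostic iml package; the clinical_df data frame, its outcome column, and the random-forest model are hypothetical stand-ins, not code from the reviewed studies.

```r
library(randomForest)  # example black-box model
library(iml)           # model-agnostic Shapley explanations

# clinical_df: hypothetical data frame of patient features plus a numeric
# `outcome` column (e.g., a treatment-response score).
X  <- clinical_df[, setdiff(names(clinical_df), "outcome")]
rf <- randomForest(x = X, y = clinical_df$outcome)

predictor <- Predictor$new(rf, data = X, y = clinical_df$outcome)

# Local explanation: each feature's contribution to one patient's prediction.
shap_patient <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap_patient)  # visual, patient-level explanation

# Complementary global view via permutation feature importance.
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)
```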
Development of AI systems for disease identification, such as COVID-19 diagnosis, is underway, highlighting the importance of visual explanations in optimizing diagnostic accuracy [58,61]. For example, a recent study used explainable artificial-intelligence methods to create a multi-modal (visual, text) explanation as an aid to understanding and trusting a melanoma diagnosis [62]. More broadly, explainable artificial intelligence has the potential to provide transparent decision support that helps healthcare professionals make informed and reliable decisions [58,63]. Moreover, many legal and technological challenges associated with diagnostic models of electronic health records can be addressed by sharing prediction models and Bayesian networks relating comorbidities to health outcomes, rather than the protected health information itself [64]. Overall, explainable artificial-intelligence methods are important for building the trustworthiness of AI healthcare systems, supporting advancements in precision medicine and clinical decision-making [49,58,65].

3.3. Advancing Precision Medicine through Deep Learning and Explainable Artificial Intelligence

Deep learning holds great potential as a transformative force in the analysis of health information for precision medicine because of its ability to find patterns in unstructured data, such as images from medical scans, that are important for diagnosis and treatment decisions [66,67]. This ability has advanced the field, enabling the differentiation of medical conditions with high accuracy, as shown in studies distinguishing benign nevi from melanoma in skin-lesion images [66,68]. Explainable artificial-intelligence approaches to understanding clinical systems built on deep learning offer explanatory metrics that can be used in validation studies and help address ethical considerations and regulatory compliance [56,66].
Deep-learning models combined with explainable artificial intelligence have potential for broad applications in precision medicine, from enhancing disease diagnosis to facilitating drug discovery [69,70,71]. Deep-learning models offer more exact and efficient diagnoses for diseases requiring analysis of medical images (e.g., cancer, dementia), compared with human experts [72]. Explainable artificial-intelligence approaches to deep-learning models of medical images often include some form of visual explanation highlighting the image segments the model used to make the diagnosis [73,74]. Deep learning can also reduce drug-discovery costs by efficiently screening for potential candidates, reducing the time required compared with traditional methods [29].
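One widely used method for such visual explanations is Grad-CAM [72], which builds a class-specific heatmap from the gradients flowing into the final convolutional layer; its standard formulation, reproduced here for illustration, weights each feature map $A^{k}$ by a global-average-pooled gradient and rectifies the sum:

```latex
\alpha_k^{c} = \frac{1}{Z} \sum_{i} \sum_{j} \frac{\partial y^{c}}{\partial A_{ij}^{k}},
\qquad
L_{\mathrm{Grad\text{-}CAM}}^{c} = \mathrm{ReLU}\!\left( \sum_{k} \alpha_k^{c}\, A^{k} \right)
```

where $y^{c}$ is the score for class $c$ (e.g., melanoma), $A^{k}_{ij}$ is location $(i,j)$ of feature map $k$, and $Z$ is the number of spatial locations; the resulting heatmap highlights the image regions supporting the diagnosis.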
Deep-learning models for detecting, segmenting, and classifying biomedical images have accuracy that sometimes meets or exceeds human experts [75,76]. Multimodal data-fusion techniques that combine medical imaging data with other data sources show further improved diagnostic accuracy [77]. Explainable artificial intelligence makes AI algorithms more transparent and controllable, building trust among medical professionals in AI-assisted decisions [78]. Overall, explainable artificial intelligence integration into the healthcare systems can build trust and reliance in deep-learning approaches to diagnosis and drug discovery [56,69,79].

4. Discussion

This review paper gives an overview of key themes in research on digital health using explainable artificial intelligence for precision medicine. We used a topic-modeling approach to extract common themes across 27 full-text journal articles matching the search criteria (“precision medicine” AND “digital health” AND “interpretable machine learning” OR “explainable artificial intelligence”). The review thus offers a glimpse of the current landscape in explainable artificial-intelligence-driven precision medicine using digital health data. The latent Dirichlet allocation model highlights core thematic areas that underscore an emerging focus on explainable artificial intelligence as a key to addressing ethical challenges [27,29,34,58,66,80,81,82]. These challenges include transparency, trust, and interdisciplinary collaboration in advancing healthcare innovations. Explainable artificial intelligence has many qualities that bridge the gap between complex AI algorithms, such as deep learning, and their practical applications in healthcare, enhancing the acceptability and effectiveness of AI interventions in clinical settings [19,51,52]. By facilitating a better understanding of AI-driven predictions, explainable artificial intelligence enables healthcare professionals to make informed decisions, fostering a collaborative environment where AI serves as a supportive tool rather than an opaque decision-maker [40,45]. The high-stakes clinical context makes it crucial to integrate explainable artificial intelligence into healthcare systems so that personalized treatment strategies are grounded in an understanding of AI-generated insights.
The alliance between deep learning and explainable artificial intelligence is also critical for advancing precision medicine [66,67]. Deep learning can analyze medical-imaging data alongside electronic health records and, coupled with the explanatory power of explainable artificial intelligence, offers unprecedented opportunities for diagnosing and treating diseases with greater precision [83]. Explainable artificial intelligence increases accessibility and trust among medical professionals by enhancing the credibility and applicability of deep-learning models in healthcare [56,69,79]. This synergy can accelerate the pace of medical discoveries and ensure that such advancements accord with the ethical needs of both practitioners and patients.
In sum, this review highlights the ethical importance of explainability when deploying AI systems in healthcare. Precision medicine and patient-centric approaches to healthcare that are driven by AI must be transparent to be trusted. In the future, AI and human expertise will work in tandem to deliver personalized and ethical healthcare solutions. Physicians’ adoption of AI systems is limited by how transparent and understandable those systems are [68]. However, explainable artificial intelligence can help forge the path toward building trust in precision medicine based on digital health data [84]. The widespread adoption of machine-learning models using digital health data for precision medicine is hindered by the slow progress in developing explainable methods [85]. Thus, integrating explainable artificial-intelligence approaches into healthcare systems is one key to realizing the full potential of AI in precision medicine.

4.1. Limitations

The keywords used to find journal articles limited the topics to interpretable machine learning and/or explainable artificial intelligence. The discovered topics will not necessarily reflect all possible themes in the burgeoning field of artificial intelligence more broadly; interested readers can consult reviews with a broader focus on artificial intelligence and digital health or precision medicine [86,87]. Moreover, five articles in the initial search were published in journals behind a paywall and were not accessible despite contacting the authors. The Supplementary Materials include a list of the unavailable articles, as well as the code for text processing and topic modeling.
As this systematic review was not aimed at quantifying the evidence for a specific effect, traditional risk-of-bias assessment of individual studies did not directly apply to our topic-modeling synthesis of text from journal articles. The goal was to identify patterns and themes across a body of literature rather than to evaluate the methodological quality of individual studies. Nonetheless, this study meets benchmark questions used to assess the overall quality of systematic reviews [88]: there were clear inclusion and exclusion criteria relevant to the appropriate scientific literature, the literature search was comprehensive and unrestricted by time, and the topic modeling ensured all selected papers were adequately encoded.

4.2. Future Directions

Researchers at the crossroads of digital health and precision medicine should strive to understand their artificial-intelligence applications. For example, explainable artificial-intelligence approaches could help advance biophysical models and understanding of biological processes, as well as improve trust in using artificial-intelligence applications with digital health data to make medical decisions [82]. A barrier to progress is that machine-learning models need big data, yet repositories of publicly available digital health data are limited. Future studies using artificial intelligence should collect a multi-site, nationally representative sample that provides publicly available data from different digital health domains [89]. Ultimately, these endeavors could result in a transparent artificial-intelligence system utilizing digital health data for precision medicine.
The future of personalized medicine lies in increasing the trustworthiness of AI systems by making them explainable. There are also policy implications for how explainable methods can help meet regulations regarding transparency. Future research could include multi-site studies that validate local explanation methods for making reliable predictions at the patient level. The end products of such studies should include applications for healthcare workers that visualize explanations for diagnostic or treatment planning. Multi-site studies could also encourage collaboration across different areas of expertise as the use of artificial intelligence in healthcare grows.

5. Conclusions

This paper provides an up-to-date assessment of themes in research on explainable artificial intelligence, digital health, and precision medicine. The potential contributions of explainable artificial intelligence to precision medicine span both theoretical and translational aspects. For example, explainable artificial intelligence holds promise for enhancing our comprehension of disease mechanisms and for visualizing the regions of medical images most important for making a diagnosis. The convergence of explainable artificial intelligence with digital health is in its early stages, yet precision medicine stands to benefit in many ways from embracing it.

Supplementary Materials

The following supporting information can be downloaded at: https://osf.io/tpxh6. Code can be downloaded at https://zenodo.org/records/10398384.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this report came from published articles that are copyrighted. However, the articles used are listed in the report.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Collins, F.S.; Varmus, H. A New Initiative on Precision Medicine. N. Engl. J. Med. 2015, 372, 793–795.
2. Johnson, K.B.; Wei, W.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93.
3. Larry Jameson, J.; Longo, D.L. Precision Medicine—Personalized, Problematic, and Promising. Obstet. Gynecol. Surv. 2015, 70, 612.
4. Hamburg, M.A.; Collins, F.S. The Path to Personalized Medicine. N. Engl. J. Med. 2010, 363, 301–304.
5. Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56.
6. Huang, C.; Clayton, E.A.; Matyunina, L.V.; McDonald, L.D.; Benigno, B.B.; Vannberg, F.; McDonald, J.F. Machine Learning Predicts Individual Cancer Patient Responses to Therapeutic Drugs with High Accuracy. Sci. Rep. 2018, 8, 16444.
7. Sheu, Y.; Magdamo, C.; Miller, M.; Das, S.; Blacker, D.; Smoller, J.W. AI-Assisted Prediction of Differential Response to Antidepressant Classes Using Electronic Health Records. Npj Digit. Med. 2023, 6, 73.
8. Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice. BMC Med. Educ. 2023, 23, 689.
9. Li, X.; Dunn, J.; Salins, D.; Zhou, G.; Zhou, W.; Rose, S.M.S.-F.; Perelman, D.; Colbert, E.; Runge, R.; Rego, S.; et al. Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information. PLoS Biol. 2017, 15, e2001402.
10. Kvedar, J.; Coye, M.J.; Everett, W. Connected Health: A Review of Technologies and Strategies to Improve Patient Care with Telemedicine and Telehealth. Health Aff. 2014, 33, 194–199.
11. Akselrod-Ballin, A.; Chorev, M.; Shoshan, Y.; Spiro, A.; Hazan, A.; Melamed, R.; Barkan, E.; Herzel, E.; Naor, S.; Karavani, E.; et al. Predicting Breast Cancer by Applying Deep Learning to Linked Health Records and Mammograms. Radiology 2019, 292, 331–342.
12. Bohr, A.; Memarzadeh, K. The Rise of Artificial Intelligence in Healthcare Applications. Artif. Intell. Healthc. 2020, 25–60.
13. McGuire, A.L.; Gibbs, R.A. No Longer De-Identified. Science 2006, 312, 370–371.
14. Farhud, D.D.; Zokaei, S. Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iran. J. Public Health 2021, 50, i–v.
15. Obermeyer, Z.; Emanuel, E.J. Predicting the Future—Big Data, Machine Learning, and Clinical Medicine. N. Engl. J. Med. 2016, 375, 1216–1219.
16. Obafemi-Ajayi, T.; Perkins, A.; Nanduri, B.; Wunsch, D.C., II; Foster, J.A.; Peckham, J. No-Boundary Thinking: A Viable Solution to Ethical Data-Driven AI in Precision Medicine. AI Ethics 2022, 2, 635–643.
17. Rajkomar, A.; Dean, J.; Kohane, I. Machine Learning in Medicine. N. Engl. J. Med. 2019, 380, 1347–1358.
18. Char, D.S.; Shah, N.H.; Magnus, D. Implementing Machine Learning in Health Care—Addressing Ethical Challenges. N. Engl. J. Med. 2018, 378, 981–983.
19. Barredo Arrieta, A.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Inf. Fusion 2020, 58, 82–115.
20. Holzinger, A.; Langs, G.; Denk, H.; Zatloukal, K.; Müller, H. Causability and Explainability of Artificial Intelligence in Medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1312.
21. Peralta, M.; Jannin, P.; Baxter, J.S.H. Machine Learning in Deep Brain Stimulation: A Systematic Review. Artif. Intell. Med. 2021, 122, 102198.
22. Attia, Z.I.; Noseworthy, P.A.; Lopez-Jimenez, F.; Asirvatham, S.J.; Deshmukh, A.J.; Gersh, B.J.; Carter, R.E.; Yao, X.; Rabinstein, A.A.; Erickson, B.J. An Artificial Intelligence-Enabled ECG Algorithm for the Identification of Patients with Atrial Fibrillation during Sinus Rhythm: A Retrospective Analysis of Outcome Prediction. Lancet 2019, 394, 861–867.
23. Thakur, K.; Kumar, V. Application of Text Mining Techniques on Scholarly Research Articles: Methods and Tools. New Rev. Acad. Librariansh. 2022, 28, 279–302.
24. Abdelrazek, A.; Eid, Y.; Gawish, E.; Medhat, W.; Hassan, A. Topic Modeling Algorithms and Applications: A Survey. Inf. Syst. 2023, 112, 102131.
25. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. Int. J. Surg. 2021, 88, 105906.
26. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R Package and Shiny App for Producing PRISMA 2020-Compliant Flow Diagrams, with Interactivity for Optimised Digital Transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230.
27. Ishengoma, F. Artificial Intelligence in Digital Health: Issues and Dimensions of Ethical Concerns. Innov. Softw. 2022, 3, 81–108.
28. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
29. Roy, S.; Meena, T.; Lim, S. Demystifying Supervised Learning in Healthcare 4.0: A New Reality of Transforming Diagnostic Medicine. Diagnostics 2022, 12, 2549.
30. Kosorok, M.R.; Laber, E.B. Precision Medicine. Annu. Rev. Stat. Its Appl. 2019, 6, 263–286.
31. Madai, V.I.; Higgins, D.C. Artificial Intelligence in Healthcare: Lost In Translation? arXiv 2021, arXiv:2107.13454.
32. Kuwaiti, A.A.; Nazer, K.; Al-Reedy, A.; Al-Shehri, S.; Al-Muhanna, A.; Subbarayalu, A.V.; Al Muhanna, D.; Al-Muhanna, F.A. A Review of the Role of Artificial Intelligence in Healthcare. J. Pers. Med. 2023, 13, 951.
33. Clement, T.; Kemmerzell, N.; Abdelaal, M.; Amberg, M. XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process. Mach. Learn. Knowl. Extr. 2023, 5, 78–108.
34. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 2019, 51, 1–42.
35. Jadhav, S.; Deng, G.; Zawin, M.; Kaufman, A.E. COVID-View: Diagnosis of COVID-19 Using Chest CT. IEEE Trans. Vis. Comput. Graph. 2022, 28, 227–237.
36. Giuste, F.; Shi, W.; Zhu, Y.; Naren, T.; Isgut, M.; Sha, Y.; Tong, L.; Gupte, M.; Wang, M.D. Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review. IEEE Rev. Biomed. Eng. 2022, 16, 5–21.
37. Goodman, B.; Flaxman, S. European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Mag. 2017, 38, 50–57.
38. Article 22 GDPR—Automated Individual Decision-Making, Including Profiling. In General Data Protection Regulation (EU GDPR); European Parliament: Strasbourg, France; Council of the European Union: Brussels, Belgium, 2018.
39. Ghassemi, M.; Oakden-Rayner, L.; Beam, A.L. The False Hope of Current Approaches to Explainable Artificial Intelligence in Health Care. Lancet Digit. Health 2021, 3, e745–e750.
40. Wellnhofer, E. Real-World and Regulatory Perspectives of Artificial Intelligence in Cardiovascular Imaging. Front. Cardiovasc. Med. 2022, 9, 890809.
41. Mayerhoefer, M.E.; Materka, A.; Langs, G.; Häggström, I.; Szczypiński, P.; Gibbs, P.; Cook, G. Introduction to Radiomics. J. Nucl. Med. 2020, 61, 488–495.
42. Piccialli, F.; Calabrò, F.; Crisci, D.; Cuomo, S.; Prezioso, E.; Mandile, R.; Troncone, R.; Greco, L.; Auricchio, R. Precision Medicine and Machine Learning towards the Prediction of the Outcome of Potential Celiac Disease. Sci. Rep. 2021, 11, 5683.
43. Schork, N.J. Artificial Intelligence and Personalized Medicine. Cancer Treat. Res. 2019, 178, 265–283.
44. Kimmelman, J.; Tannock, I. The Paradox of Precision Medicine. Nat. Rev. Clin. Oncol. 2018, 15, 341–342.
45. Boehm, K.M.; Khosravi, P.; Vanguri, R.; Gao, J.; Shah, S.P. Harnessing Multimodal Data Integration to Advance Precision Oncology. Nat. Rev. Cancer 2021, 22, 114–126.
46. Choudhury, A.; Asan, O. Impact of Accountability, Training, and Human Factors on the Use of Artificial Intelligence in Healthcare: Exploring the Perceptions of Healthcare Practitioners in the US. Hum. Factors Healthc. 2022, 2, 100021.
47. Poon, A.I.F.; Sung, J.J.Y. Opening the Black Box of AI-Medicine. J. Gastroenterol. Hepatol. 2021, 36, 581–584.
48. Bærøe, K.; Miyata-Sturm, A.; Henden, E. How to Achieve Trustworthy Artificial Intelligence for Health. Bull. World Health Organ. 2020, 98, 257–262.
49. Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608.
50. Towards Trustable Machine Learning. Nat. Biomed. Eng. 2018, 2, 709–710.
51. Laccourreye, P.; Bielza, C.; Larrañaga, P. Explainable Machine Learning for Longitudinal Multi-Omic Microbiome. Mathematics 2022, 10, 1994.
52. Carrieri, A.P.; Haiminen, N.; Maudsley-Barton, S.; Gardiner, L.-J.; Murphy, B.; Mayes, A.E.; Paterson, S.; Grimshaw, S.; Winn, M.; Shand, C.; et al. Explainable AI Reveals Changes in Skin Microbiome Composition Linked to Phenotypic Differences. Sci. Rep. 2021, 11, 4565.
53. Wong, C.W.; Yost, S.E.; Lee, J.S.; Gillece, J.D.; Folkerts, M.; Reining, L.; Highlander, S.K.; Eftekhari, Z.; Mortimer, J.; Yuan, Y. Analysis of Gut Microbiome Using Explainable Machine Learning Predicts Risk of Diarrhea Associated with Tyrosine Kinase Inhibitor Neratinib: A Pilot Study. Front. Oncol. 2021, 11, 604584.
54. Chun, M.Y.; Park, C.J.; Kim, J.; Jeong, J.H.; Jang, H.; Kim, K.; Seo, S.W. Prediction of Conversion to Dementia Using Interpretable Machine Learning in Patients with Amnestic Mild Cognitive Impairment. Front. Aging Neurosci. 2022, 14, 898940.
55. Murdoch, W.J.; Singh, C.; Kumbier, K.; Abbasi-Asl, R.; Yu, B. Definitions, Methods, and Applications in Interpretable Machine Learning. Proc. Natl. Acad. Sci. USA 2019, 116, 22071–22080.
56. Lundberg, S.; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
57. Wang, R.C.; Wang, Z. Precision Medicine: Disease Subtyping and Tailored Treatment. Cancers 2023, 15, 3837.
58. Albahri, A.S.; Duhaim, A.M.; Fadhel, M.A.; Alnoor, A.; Baqer, N.S.; Alzubaidi, L.; Albahri, O.S.; Alamoodi, A.H.; Bai, J.; Salhi, A.; et al. A Systematic Review of Trustworthy and Explainable Artificial Intelligence in Healthcare: Assessment of Quality, Bias Risk, and Data Fusion. Inf. Fusion 2023, 96, 156–191.
59. Martínez-Agüero, S.; Soguero-Ruiz, C.; Alonso-Moral, J.M.; Mora-Jiménez, I.; Álvarez-Rodríguez, J.; Marques, A.G. Interpretable Clinical Time-Series Modeling with Intelligent Feature Selection for Early Prediction of Antimicrobial Multidrug Resistance. Future Gener. Comput. Syst. 2022, 133, 68–83.
60. Ho, C.W.-L.; Caals, K. A Call for an Ethics and Governance Action Plan to Harness the Power of Artificial Intelligence and Digitalization in Nephrology. Semin. Nephrol. 2021, 41, 282–293.
61. Rostami, M.; Oussalah, M. A Novel Explainable COVID-19 Diagnosis Method by Integration of Feature Selection with Random Forest. Inform. Med. Unlocked 2022, 30, 100941.
62. Lucieri, A.; Bajwa, M.N.; Braun, S.A.; Malik, M.I.; Dengel, A.; Ahmed, S. ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions. Comput. Methods Programs Biomed. 2022, 215, 106620.
63. Müller, H.; Holzinger, A.; Plass, M.; Brcic, L.; Stumptner, C.; Zatloukal, K. Explainability and Causability for Artificial Intelligence-Supported Medical Image Analysis in the Context of the European In Vitro Diagnostic Regulation. New Biotechnol. 2022, 70, 67–72.
64. Wesołowski, S.; Lemmon, G.; Hernandez, E.J.; Henrie, A.; Miller, T.A.; Weyhrauch, D.; Puchalski, M.D.; Bray, B.E.; Shah, R.U.; Deshmukh, V.G.; et al. An Explainable Artificial Intelligence Approach for Predicting Cardiovascular Outcomes Using Electronic Health Records. PLoS Digit. Health 2022, 1, e0000004.
65. Lucieri, A.; Bajwa, M.N.; Dengel, A.; Ahmed, S. Achievements and Challenges in Explaining Deep Learning Based Computer-Aided Diagnosis Systems. arXiv 2020, arXiv:2011.13169.
66. Shazly, S.A.; Trabuco, E.C.; Ngufor, C.G.; Famuyide, A.O. Introduction to Machine Learning in Obstetrics and Gynecology. Obstet. Gynecol. 2022, 139, 669–679.
67. Gerussi, A.; Scaravaglio, M.; Cristoferi, L.; Verda, D.; Milani, C.; De Bernardi, E.; Ippolito, D.; Asselta, R.; Invernizzi, P.; Kather, J.N.; et al. Artificial Intelligence for Precision Medicine in Autoimmune Liver Disease. Front. Immunol. 2022, 13, 966329.
68. Reyes, M.; Meier, R.; Pereira, S.; Silva, C.A.; Dahlweid, F.-M.; von Tengg-Kobligk, H.; Summers, R.M.; Wiest, R. On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities. Radiol. Artif. Intell. 2020, 2, e190043.
69. Zafar, I.; Anwar, S.; Kanwal, F.; Yousaf, W.; Nisa, F.U.; Kausar, T.; Ain, Q.U.; Unar, A.; Kamal, M.A.; Rashid, S.; et al. Reviewing Methods of Deep Learning for Intelligent Healthcare Systems in Genomics and Biomedicine. Biomed. Signal Process. Control 2023, 86, 105263.
70. Lötsch, J.; Kringel, D.; Ultsch, A. Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients. BioMedInformatics 2021, 2, 1–17.
71. Kırboğa, K.K.; Abbasi, S. Explainability and White Box in Drug Discovery. Chem. Biol. Drug Des. 2023, 102, 217–233.
72. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
73. Hong, G.-S.; Jang, M.; Kyung, S.; Cho, K.; Jeong, J.; Lee, G.Y.; Shin, K.; Kim, K.D.; Ryu, S.M.; Seo, J.B.; et al. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J. Radiol. 2023, 24, e58.
74. van der Velden, B.H.M.; Kuijf, H.J.; Gilhuijs, K.G.A.; Viergever, M.A. Explainable Artificial Intelligence (XAI) in Deep Learning-Based Medical Image Analysis. Med. Image Anal. 2022, 79, 102470.
75. Chorba, J.S.; Shapiro, A.M.; Le, L.; Maidens, J.; Prince, J.; Pham, S.; Kanzawa, M.M.; Barbosa, D.N.; Currie, C.; Brooks, C.; et al. Deep Learning Algorithm for Automated Cardiac Murmur Detection via a Digital Stethoscope Platform. J. Am. Heart Assoc. 2021, 10, e019905.
76. Zhou, L.-Q.; Wu, X.-L.; Huang, S.-Y.; Wu, G.-G.; Ye, H.-R.; Wei, Q.; Bao, L.-Y.; Deng, Y.-B.; Li, X.-R.; Cui, X.-W. Lymph Node Metastasis Prediction from Primary Breast Cancer US Images Using Deep Learning. Radiology 2020, 294, 19–28.
77. Hassan, R.; Islam, F.; Uddin, Z.; Ghoshal, G.; Hassan, M.M.; Huda, S.; Fortino, G. Prostate Cancer Classification from Ultrasound and MRI Images Using Deep Learning Based Explainable Artificial Intelligence. Future Gener. Comput. Syst. 2022, 127, 462–472.
78. Salih, A.; Boscolo Galazzo, I.; Gkontra, P.; Lee, A.M.; Lekadir, K.; Raisi-Estabragh, Z.; Petersen, S.E. Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models. Circ. Cardiovasc. Imaging 2023, 16, e014519.
79. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1135–1144.
80. Wickramasinghe, N.; Jayaraman, P.P. A Vision for Leveraging the Concept of Digital Twins to Support the Provision of Personalized Cancer Care. IEEE Internet Comput. 2021, 26, 17–24.
81. Baumgartner, A.J.; Thompson, J.A.; Kern, D.S.; Ojemann, S.G. Novel Targets in Deep Brain Stimulation for Movement Disorders. Neurosurg. Rev. 2022, 45, 2593–2613.
82. Iqbal, J.D.; Krauthammer, M.; Biller-Andorno, N. The Use and Ethics of Digital Twins in Medicine. J. Law Med. Ethics 2022, 50, 583–596.
83. Payrovnaziri, S.N.; Chen, Z.; Rengifo-Moreno, P.; Miller, T.; Bian, J.; Chen, J.H.; Liu, X.; He, Z. Explainable Artificial Intelligence Models Using Real-World Electronic Health Record Data: A Systematic Scoping Review. J. Am. Med. Inform. Assoc. 2020, 27, 1173–1185.
84. Gunning, D.; Stefik, M.; Choi, J.; Miller, T.; Stumpf, S.; Yang, G.-Z. XAI-Explainable Artificial Intelligence. Sci. Robot. 2019, 4, eaay7120.
85. Pinto, M.F.; Leal, A.; Lopes, F.; Pais, J.; Dourado, A.; Sales, F.; Martins, P.; Teixeira, C.A. On the Clinical Acceptance of Black-Box Systems for EEG Seizure Prediction. Epilepsia Open 2022, 7, 247–259.
86. Gunasekeran, D.V.; Tseng, R.M.W.W.; Tham, Y.-C.; Wong, T.Y. Applications of Digital Health for Public Health Responses to COVID-19: A Systematic Scoping Review of Artificial Intelligence, Telehealth and Related Technologies. Npj Digit. Med. 2021, 4, 40.
87. Mesko, B. The Role of Artificial Intelligence in Precision Medicine. Expert Rev. Precis. Med. Drug Dev. 2017, 2, 239–241.
88. Kitchenham, B. Procedures for Performing Systematic Reviews; Keele University: Keele, UK, 2004; Volume 33, pp. 1–26.
89. Casey, B.J.; Cannonier, T.; Conley, M.I.; Cohen, A.O.; Barch, D.M.; Heitzeg, M.M.; Soules, M.E.; Teslovich, T.; Dellarco, D.V.; Garavan, H.; et al. The Adolescent Brain Cognitive Development (ABCD) Study: Imaging Acquisition across 21 Sites. Dev. Cogn. Neurosci. 2018, 32, 43–54.
Figure 1. PRISMA 2020 flowchart, produced with https://estech.shinyapps.io/prisma_flowdiagram/ (accessed on 1 January 2024).
Table 1. List of selected journal articles.
Author | Year | Title | Publication Title
Evans et al. | 2018 | The Challenge of Regulating Clinical Decision Support Software After 21st Century Cures | American Journal of Law & Medicine
Adadi et al. | 2019 | Gastroenterology Meets Machine Learning: Status Quo and Quo Vadis | Advances in Bioinformatics
Shin et al. | 2019 | Current Status and Future Direction of Digital Health in Korea | The Korean Journal of Physiology & Pharmacology
Ahirwar et al. | 2020 | Interpretable Machine Learning in Health Care: Survey and Discussions | International Journal of Innovative Research in Technology and Management
Coppola et al. | 2021 | Human, All Too Human? An All-Around Appraisal of The “Artificial Intelligence Revolution” in Medical Imaging | Frontiers in Psychology
Wickramasinghe et al. | 2021 | A Vision for Leveraging the Concept of Digital Twins to Support the Provision of Personalized Cancer Care | IEEE Internet Computing
Bhatt et al. | 2022 | Emerging Artificial Intelligence–Empowered mHealth: Scoping Review | JMIR mHealth and uHealth
Chun et al. | 2022 | Prediction of Conversion to Dementia Using Interpretable Machine Learning in Patients with Amnestic Mild Cognitive Impairment | Frontiers in Aging Neuroscience
Gerussi et al. | 2022 | Artificial Intelligence for Precision Medicine in Autoimmune Liver Disease | Frontiers in Immunology
Iqbal et al. | 2022 | The Use and Ethics of Digital Twins in Medicine | Journal of Law, Medicine & Ethics
Ishengoma et al. | 2022 | Artificial Intelligence in Digital Health: Issues and Dimensions of Ethical Concerns | Innovación y Software
Khanna et al. | 2022 | Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment | Healthcare
Kline et al. | 2022 | Multimodal Machine Learning in Precision Health: A Scoping Review | npj Digital Medicine
Laccourreye et al. | 2022 | Explainable Machine Learning for Longitudinal Multi-Omic Microbiome | Mathematics
Roy et al. | 2022 | Demystifying Supervised Learning in Healthcare 4.0: A New Reality of Transforming Diagnostic Medicine | Diagnostics
Shazly et al. | 2022 | Introduction to Machine Learning in Obstetrics and Gynecology | Obstetrics & Gynecology
Wellnhofer et al. | 2022 | Real-World and Regulatory Perspectives of Artificial Intelligence in Cardiovascular Imaging | Frontiers in Cardiovascular Medicine
Wesołowski et al. | 2022 | An Explainable Artificial Intelligence Approach for Predicting Cardiovascular Outcomes Using Electronic Health Records | PLOS Digital Health
Albahri et al. | 2023 | A Systematic Review of Trustworthy and Explainable Artificial Intelligence in Healthcare: Assessment of Quality, Bias Risk, and Data Fusion | Information Fusion
Baumgartner et al. | 2023 | Fair and Equitable AI in Biomedical Research and Healthcare: Social Science Perspectives | Artificial Intelligence in Medicine
Bharati et al. | 2023 | A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? | IEEE Transactions on Artificial Intelligence
Hong et al. | 2023 | Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning | Korean Journal of Radiology
King et al. | 2023 | What Works Where and How for Uptake and Impact of Artificial Intelligence in Pathology: Review of Theories for a Realist Evaluation | Journal of Medical Internet Research
Kuwaiti et al. | 2023 | A Review of the Role of Artificial Intelligence in Healthcare | Journal of Personalized Medicine
Narayan et al. | 2023 | A Strategic Research Framework for Defeating Diabetes in India: A 21st-Century Agenda | Journal of the Indian Institute of Science
Vorisek et al. | 2023 | Artificial Intelligence Bias in Health Care: Web-Based Survey | Journal of Medical Internet Research
Zafar et al. | 2023 | Reviewing Methods of Deep Learning for Intelligent Healthcare Systems in Genomics and Biomedicine | Biomedical Signal Processing and Control