Applications of Explainable Artificial Intelligence in Diagnosis and Surgery
Abstract
1. Introduction
1.1. Related Artificial Intelligence Concepts
1.2. Related Explainable Artificial Intelligence Concepts
1.3. Contributions
- A brief introduction to AI/DL concepts, XAI concepts, and the general pipeline of medical XAI applications gives medical experts a quick start;
- Our survey provides an overall review of the literature on medical XAI applications in diagnosis and surgery over the past three years, together with a thorough analysis;
- We summarize the current trends and discuss the challenges and future directions for designing better medical XAI applications.
2. Search Strategy
3. Medical Explainable Artificial Intelligence Applications
3.1. Diagnosis
3.2. Surgery
4. Discussion
4.1. Current Research Trends
4.2. Experimental Showcase: Breast Cancer Diagnosis
4.2.1. Dataset
4.2.2. Experiment Setup
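To make the showcase concrete, the following is a minimal sketch of a comparable setup. It assumes the scikit-learn copy of the Breast Cancer Wisconsin (Diagnostic) dataset (569 samples, 30 numeric features, binary malignant/benign target), an 80/20 stratified split, and a random-forest classifier; the showcase's exact dataset version, split, and model may differ.

```python
# Minimal setup sketch (assumptions: sklearn's Breast Cancer Wisconsin
# dataset, 80/20 split, random forest; not the paper's exact configuration).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42, stratify=data.target
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```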
4.2.3. Intrinsic XAI Method: Rule-Based
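A shallow decision tree is one standard way to obtain intrinsic IF-THEN rules. The sketch below (continuing from the setup above) prints the learned rules with scikit-learn's `export_text`; the depth limit is an illustrative choice, not necessarily the rule-based method used in the showcase.

```python
# Intrinsic, rule-based explanation: a shallow decision tree whose learned
# IF-THEN rules can be read directly (depth limit is illustrative).
from sklearn.tree import DecisionTreeClassifier, export_text

tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

# export_text renders the fitted tree as nested, human-readable
# "|--- feature <= threshold" decision rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```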
4.2.4. Post hoc XAI Method: SHAP
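A sketch of applying SHAP post hoc to the random forest trained above, using the `shap` package's `TreeExplainer`; the plot choice and any preprocessing in the showcase may differ.

```python
# Post hoc SHAP explanation of the random forest trained above.
# TreeExplainer computes per-feature Shapley contributions; for a binary
# classifier the output holds one attribution set per class.
import shap  # pip install shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global summary: ranks features by mean |SHAP value| over the test set.
shap.summary_plot(shap_values, X_test, feature_names=list(data.feature_names))
```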
4.2.5. Post hoc XAI Method: LIME
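A sketch of a local LIME explanation for a single test instance, using the `lime` package; the chosen instance and the number of features shown are arbitrary.

```python
# Post hoc LIME explanation: LIME perturbs one instance and fits a local
# linear surrogate to the model's predictions around it.
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top local (feature rule, weight) pairs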
4.2.6. Post hoc XAI Method: PDP
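A sketch of partial dependence plots via scikit-learn's `PartialDependenceDisplay`; the two features plotted here are illustrative choices.

```python
# Post hoc PDP: shows how the predicted probability changes as one feature
# varies, averaging over the remaining features.
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(
    model,
    X_test,
    features=["mean radius", "mean texture"],
    feature_names=list(data.feature_names),
)
plt.show()
```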
4.3. Challenges, Limitations and Research Gaps
4.4. Future Directions
4.5. Research Questions
- RQ1: What are the current research trends on medical XAI applications?
- Answer: Based on the literature included in this review, we found that most studies applied post hoc XAI methods. In general, they followed the pipeline illustrated in Figure 3.
- RQ2: How do the studies included in this review tackle the trade-off between accuracy and explainability?
- Answer: We have summarized the surveyed studies and listed their AI evaluation metrics and XAI evaluations. In terms of AI performance, most of the studies performed well. However, only a few studies provided XAI evaluations, and most did not have medical experts evaluate the models' effectiveness. Therefore, we cannot answer how these studies tackle the trade-off between accuracy and explainability.
- RQ3: Is it possible to deploy these models in a real-world clinical environment to assist medical experts in making explainable clinical inferences?
- Answer: Currently, medical XAI applications still face many limitations, and it is not yet feasible to deploy these models in the clinical environment. However, we believe the future directions for medical XAI applications are promising.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- Alloghani, M.; Al-Jumeily, D.; Aljaaf, A.J.; Khalaf, M.; Mustafina, J.; Tan, S.Y. The Application of Artificial Intelligence Technology in Healthcare: A Systematic Review. Commun. Comput. Inf. Sci. 2020, 1174, 248–261. [Google Scholar] [CrossRef]
- Loh, E. Medicine and the rise of the robots: A qualitative review of recent advances of artificial intelligence in health. BMJ Lead. 2018, 2, 59–63. [Google Scholar] [CrossRef]
- Zhou, X.; Guo, Y.; Shen, M.; Yang, G.-Z. Application of artificial intelligence in surgery. Front. Med. 2020, 14, 417–430. [Google Scholar] [CrossRef]
- Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006; ISBN 978-0-387-31073-2. [Google Scholar]
- Peterson, L. K-nearest neighbor. Scholarpedia 2009, 4, 1883. [Google Scholar] [CrossRef]
- Vapnik, V. The Support Vector Method of Function Estimation. In Nonlinear Modeling; Springer: Boston, MA, USA, 1998; pp. 55–85. [Google Scholar]
- Safavian, S.R.; Landgrebe, D. A Survey of Decision Tree Classifier Methodology. IEEE Trans. Syst. Man Cybern. 1991, 21, 660–674. [Google Scholar] [CrossRef] [Green Version]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Mohammad, H.; Sulaiman, M.N. A Review on Evaluation Metrics for Data Classification Evaluations. Int. J. Data Min. Knowl. Manag. Process 2015, 5, 1–11. [Google Scholar] [CrossRef]
- Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608. [Google Scholar]
- Vilone, G.; Longo, L. Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion 2021, 76, 89–106. [Google Scholar] [CrossRef]
- Kim, M.-Y.; Atakishiyev, S.; Babiker, H.K.B.; Farruque, N.; Goebel, R.; Zaïane, O.R.; Motallebi, M.-H.; Rabelo, J.; Syed, T.; Yao, H.; et al. A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence. Mach. Learn. Knowl. Extr. 2021, 3, 45. [Google Scholar] [CrossRef]
- Adadi, A.; Berrada, M. Explainable AI for Healthcare: From Black Box to Interpretable Models. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2020; Volume 1076, pp. 327–337. ISBN 9789811509469. [Google Scholar]
- Kleinbaum, D.G. Logistic Regression; Springer: Berlin/Heidelberg, Germany, 1994; ISBN 9781441917416. [Google Scholar]
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 2017, 4766–4775. [Google Scholar]
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
- Pearson, K. LIII. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901, 2, 559–572. [Google Scholar] [CrossRef] [Green Version]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Barredo Arrieta, A.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef] [Green Version]
- Yang, G.; Ye, Q.; Xia, J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf. Fusion 2022, 77, 29–52. [Google Scholar] [CrossRef] [PubMed]
- Tjoa, E.; Guan, C. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4793–4813. [Google Scholar] [CrossRef] [PubMed]
- Kavya, R.; Christopher, J.; Panda, S.; Lazarus, Y.B. Machine Learning and XAI approaches for Allergy Diagnosis. Biomed. Signal Process. Control 2021, 69, 102681. [Google Scholar] [CrossRef]
- Amoroso, N.; Pomarico, D.; Fanizzi, A.; Didonna, V.; Giotta, F.; La Forgia, D.; Latorre, A.; Monaco, A.; Pantaleo, E.; Petruzzellis, N.; et al. A roadmap towards breast cancer therapies supported by explainable artificial intelligence. Appl. Sci. 2021, 11, 4881. [Google Scholar] [CrossRef]
- Dindorf, C.; Konradi, J.; Wolf, C.; Taetz, B.; Bleser, G.; Huthwelker, J.; Werthmann, F.; Bartaguiz, E.; Kniepert, J.; Drees, P.; et al. Classification and automated interpretation of spinal posture data using a pathology-independent classifier and explainable artificial intelligence (Xai). Sensors 2021, 21, 6323. [Google Scholar] [CrossRef] [PubMed]
- El-Sappagh, S.; Alonso, J.M.; Islam, S.M.R.; Sultan, A.M.; Kwak, K.S. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease. Sci. Rep. 2021, 11, 1–26. [Google Scholar] [CrossRef]
- Peng, J.; Zou, K.; Zhou, M.; Teng, Y.; Zhu, X.; Zhang, F.; Xu, J. An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients. J. Med. Syst. 2021, 45, 1–9. [Google Scholar] [CrossRef]
- Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
- Sarp, S.; Kuzlu, M.; Wilson, E.; Cali, U.; Guler, O. The enlightening role of explainable artificial intelligence in chronic wound classification. Electronics 2021, 10, 1406. [Google Scholar] [CrossRef]
- Tan, W.; Guan, P.; Wu, L.; Chen, H.; Li, J.; Ling, Y.; Fan, T.; Wang, Y.; Li, J.; Yan, B. The use of explainable artificial intelligence to explore types of fenestral otosclerosis misdiagnosed when using temporal bone high-resolution computed tomography. Ann. Transl. Med. 2021, 9, 969. [Google Scholar] [CrossRef]
- Wu, H.; Chen, W.; Xu, S.; Xu, B. Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 1942–1955. [Google Scholar]
- Chen, J.; Dai, X.; Yuan, Q.; Lu, C.; Huang, H. Towards Interpretable Clinical Diagnosis with Bayesian Network Ensembles Stacked on Entity-Aware CNNs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; Association for Computational Linguistics: Stroudsburg, PA, USA, 2020; pp. 3143–3153. [Google Scholar]
- Rucco, M.; Viticchi, G.; Falsetti, L. Towards personalized diagnosis of glioblastoma in fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning. Mathematics 2020, 8, 770. [Google Scholar] [CrossRef]
- Gu, D.; Li, Y.; Jiang, F.; Wen, Z.; Liu, S.; Shi, W.; Lu, G.; Zhou, C. VINet: A Visually Interpretable Image Diagnosis Network. IEEE Trans. Multimed. 2020, 22, 1720–1729. [Google Scholar] [CrossRef]
- Kroll, J.P.; Eickhoff, S.B.; Hoffstaedter, F.; Patil, K.R. Evolving complex yet interpretable representations: Application to Alzheimer’s diagnosis and prognosis. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020. [Google Scholar] [CrossRef]
- Meldo, A.; Utkin, L.; Kovalev, M.; Kasimov, E. The natural language explanation algorithms for the lung cancer computer-aided diagnosis system. Artif. Intell. Med. 2020, 108, 101952. [Google Scholar] [CrossRef]
- Yeboah, D.; Steinmeister, L.; Hier, D.B.; Hadi, B.; Wunsch, D.C.; Olbricht, G.R.; Obafemi-Ajayi, T. An Explainable and Statistically Validated Ensemble Clustering Model Applied to the Identification of Traumatic Brain Injury Subgroups. IEEE Access 2020, 8, 180690–180705. [Google Scholar] [CrossRef]
- Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef] [PubMed]
- Wong, A.; Shafiee, M.J.; Chwyl, B.; Li, F. FermiNets: Learning generative machines to generate efficient neural networks via generative synthesis. arXiv 2018, arXiv:1809.05989. [Google Scholar]
- Sabol, P.; Sinčák, P.; Hartono, P.; Kočan, P.; Benetinová, Z.; Blichárová, A.; Verbóová, Ľ.; Štammová, E.; Sabolová-Fabianová, A.; Jašková, A. Explainable classifier for improving the accountability in decision-making for colorectal cancer diagnosis from histopathological images. J. Biomed. Inform. 2020, 109, 103523. [Google Scholar] [CrossRef] [PubMed]
- Wei, X.; Zhu, J.; Zhang, H.; Gao, H.; Yu, R.; Liu, Z.; Zheng, X.; Gao, M.; Zhang, S. Visual Interpretability in Computer-Assisted Diagnosis of Thyroid Nodules Using Ultrasound Images. Med. Sci. Monit. 2020, 26, e927007. [Google Scholar] [CrossRef]
- Chang, Y.-W.; Tsai, S.-J.; Wu, Y.-F.; Yang, A.C. Development of an AI-Based Web Diagnostic System for Phenotyping Psychiatric Disorders. Front. Psychiatry 2020, 11, 1–10. [Google Scholar] [CrossRef]
- Magesh, P.R.; Myloth, R.D.; Tom, R.J. An Explainable Machine Learning Model for Early Detection of Parkinson’s Disease using LIME on DaTSCAN Imagery. Comput. Biol. Med. 2020, 126, 104041. [Google Scholar] [CrossRef]
- Cho, J.; Alharin, A.; Hu, Z.; Fell, N.; Sartipi, M. Predicting Post-stroke Hospital Discharge Disposition Using Interpretable Machine Learning Approaches. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: New York, NY, USA, 2019; pp. 4817–4822. [Google Scholar]
- Lamy, J.B.; Sekar, B.; Guezennec, G.; Bouaud, J.; Séroussi, B. Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach. Artif. Intell. Med. 2019, 94, 42–53. [Google Scholar] [CrossRef]
- Das, D.; Ito, J.; Kadowaki, T.; Tsuda, K. An interpretable machine learning model for diagnosis of Alzheimer’s disease. PeerJ 2019, 7, e6543. [Google Scholar] [CrossRef] [Green Version]
- Yoo, T.K.; Ryu, I.H.; Choi, H.; Kim, J.K.; Lee, I.S.; Kim, J.S.; Lee, G.; Rim, T.H. Explainable machine learning approach as a tool to understand factors used to select the refractive surgery technique on the expert level. Transl. Vis. Sci. Technol. 2020, 9, 1–14. [Google Scholar] [CrossRef] [Green Version]
- Mirchi, N.; Bissonnette, V.; Yilmaz, R.; Ledwos, N.; Winkler-Schwartz, A.; Del Maestro, R.F. The virtual operative assistant: An explainable artificial intelligence tool for simulation-based training in surgery and medicine. PLoS ONE 2020, 15, 1–15. [Google Scholar] [CrossRef] [Green Version]
- Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L. Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1611–1617. [Google Scholar] [CrossRef] [Green Version]
- Kletz, S.; Schoeffmann, K.; Husslein, H. Learning the representation of instrument images in laparoscopy videos. Healthc. Technol. Lett. 2019, 6, 197–203. [Google Scholar] [CrossRef] [PubMed]
- Chittajallu, D.R.; Dong, B.; Tunison, P.; Collins, R.; Wells, K.; Fleshman, J.; Sankaranarayanan, G.; Schwaitzberg, S.; Cavuoto, L.; Enquobahrie, A. XAI-CBIR: Explainable ai system for content based retrieval of video frames from minimally invasive surgery videos. Proc. Int. Symp. Biomed. Imaging 2019, 2019, 66–69. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
- Chen, W.-Y.; Liu, Y.-C.; Kira, Z.; Wang, Y.-C.F.; Huang, J.-B. A Closer Look at Few-shot Classification. arXiv 2019, arXiv:1904.04232. [Google Scholar]
- Holzinger, A.; Carrington, A.; Müller, H. Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations. KI Kunstl. Intell. 2020, 34, 193–198. [Google Scholar] [CrossRef] [Green Version]
Table 1. Summary of the reviewed medical XAI applications in diagnosis.

SN# | Reference | Year | Aim | AI Algorithm | AI Evaluation Metrics | XAI Method | XAI Method Type | XAI Evaluation?
---|---|---|---|---|---|---|---|---
1 | [24] | 2021 | Allergy diagnosis | kNN, SVM, C5.0, MLP, AdaBag, RF | Accuracy: 86.39% Sensitivity: 75% | Condition-prediction (IF-THEN) rules | Rule-based | No
2 | [25] | 2021 | Breast cancer therapies | Cluster analysis | N/A | Adaptive dimension reduction | Dimension reduction | No |
3 | [26] | 2021 | Spine | One-class SVM, binary RF | F1: 80 ± 12% MCC: 57 ± 23% BSS: 33 ± 28% | Local interpretable model-agnostic explanations (LIME) | Explanation by simplification | No |
4 | [27] | 2021 | Alzheimer’s disease | Two-layer model with RF | First layer: accuracy: 93.95% F1-score: 93.94% Second layer: accuracy: 87.08% F1-score: 87.09% | SHAP, Fuzzy | Feature relevance, rule-based | No
5 | [28] | 2021 | Hepatitis | LR, DT, kNN, SVM, RF | Accuracy: 91.9% | SHAP, LIME, partial dependence plots (PDP) | Feature relevance, explanation by simplification | No |
6 | [30] | 2021 | Chronic wound | CNN-based model: pretrained VGG-16 | Precision: 95% Recall: 94% F1-score: 94% | LIME | Explanation by simplification | No |
7 | [31] | 2021 | Fenestral otosclerosis | CNN-based model: proposed otosclerosis-logical neural network (LNN) model | AUC: 99.5% Sensitivity: 96.4% Specificity: 98.9% | Visualization of learned deep representations | Visual explanation | No |
8 | [32] | 2021 | Lymphedema (Chinese EMR) | Counterfactual multi-granularity graph supporting facts extraction (CMGE) method | Precision: 99.04% Recall: 99.00% F1-score: 99.02% | Graph neural network, counterfactual reasoning | Restricted neural network architecture | No |
9 | [33] | 2020 | Clinical diagnosis | Entity-aware Convolutional neural networks (ECNNs) | Top-3 sensitivity: 88.8% | Bayesian network ensembles | Bayesian models | Yes |
10 | [34] | 2020 | Glioblastoma multiforme (GBM) diagnosis | VGG16 | Accuracy: 97% | LIME | Explanation by simplification | No |
11 | [35] | 2020 | Pulmonary nodule diagnostic | CNN | Accuracy: 82.15% | Visually interpretable network (VINet), LRP, CAM, VBP | Visual explanation | No |
12 | [36] | 2020 | Alzheimer’s disease diagnosis | Naïve Bayes (NB), grammatical evolution | ROC: 0.913 Accuracy: 81.5% F1-score: 85.9% Brier: 0.178 | Context-free grammar (CFG) | Rule-based | No |
13 | [37] | 2020 | Lung cancer diagnosis | Neural networks, RF | N/A | LIME, natural language explanation | Explanation by simplification, text explanation | No |
14 | [38] | 2020 | Traumatic brain injury (TBI) identification | k-means, spectral clustering, gaussian mixture | N/A | Quality assessment of the clustering features | Feature relevance | No |
15 | [39] | 2020 | COVID-19 chest X-ray diagnosis | CNN-based model: proposed COVID-Net | Accuracy: 93.3% Sensitivity: 91.0% | GSInquire | Restricted neural network architecture | No |
16 | [41] | 2020 | Colorectal cancer diagnosis | CNN | Accuracy: 91.08% Precision: 91.44% Recall: 91.04% F1-score: 91.26% | Explainable Cumulative Fuzzy Class Membership Criterion (X-CFCMC) | Visual explanation | Yes |
17 | [42] | 2020 | Diagnosis of thyroid nodules | Neural network | Accuracy: 93.15% Sensitivity: 92.29% Specificity: 93.62% | CAM | Visual explanation | No |
18 | [43] | 2020 | Phenotyping psychiatric disorders diagnosis | DNN | White matter accuracy: 90.22% Sensitivity: 89.21% Specificity: 91.23% | Explainable deep neural network (EDNN) | Visual explanation | No |
19 | [44] | 2020 | Parkinson’s disease (PD) diagnosis | CNN | Accuracy: 95.2% Sensitivity: 97.5% Specificity: 90.9% | LIME | Explanation by simplification | No |
20 | [45] | 2019 | Post-stroke hospital discharge disposition | LR, RF, RF with AdaBoost, MLP | Test accuracy: 71% Precision: 64% Recall: 26% F1-score: 59% | LR, LIME | Intrinsic, Explanation by simplification | No |
21 | [46] | 2019 | Breast cancer diagnostic decision and therapeutic decision | kNN, distance-weighted kNN (WkNN), rainbow boxes-inspired algorithm (RBIA) | Accuracy: 80.3% | Case-based reasoning (CBR) approach | Explanation by example | Yes |
22 | [47] | 2019 | Alzheimer’s diagnosis | RF, SVM, DT | Sensitivity: 84% Specificity: 67% AUC: 0.81 | An interpretable ML model: sparse high-order interaction model with rejection option (SHIMR) | Rule-based | No |
Table 2. Summary of the reviewed medical XAI applications in surgery.

SN# | Reference | Year | Aim | AI Algorithm | AI Evaluation Metrics | XAI Method | XAI Method Type | XAI Evaluation?
---|---|---|---|---|---|---|---|---
23 | [48] | 2020 | Evidence-based recommendation surgery | XGBoost | Validation accuracy: 78.9% | SHAP | Feature relevance | No |
24 | [49] | 2020 | Surgery training | SVM | Accuracy: 92% Sensitivity: 100% Specificity: 82% | Virtual operative assistant | Feature relevance | No |
25 | [50] | 2019 | Surgical skill assessment | FCN | Suturing accuracy: 100% Needle passing accuracy: 100% Knot tying accuracy: 92.1% | CAM | Visual explanation | No |
26 | [51] | 2019 | Automatic recognition of instruments in laparoscopy videos | CNN | M2CAI Cholec data, tuned on InstCnt (non-instrument vs. instrument): Precision: 96% Sensitivity: 86% F1-score: 97% | Activation maps | Visual explanation | No
27 | [52] | 2019 | Surgical education | CNN | Percentage of relevant frames among top 50 retrieved frames for three phases: 64.42%, 99.54%, 99.09% | Saliency map, content-based image retrieval | Visual explanation, explanation by example | No |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).