Explainable Artificial Intelligence-Based Decision Support Systems: A Recent Review
Abstract
1. Introduction
- We introduce a comprehensive background regarding XDSSs.
- We propose a methodological taxonomy of XDSSs.
- We provide an organized overview of recent works on XDSSs according to their application field.
- Finally, we highlight the challenges and future research directions.
2. Background
2.1. Decision Support Systems
- “an interactive computer-based system, which helps decision makers utilize data and models to solve unstructured problems” [6].
- “a computer-based interactive system that supports decision makers, utilizes data and models, solves problems with varying degrees of structure and facilitates decision processes” [7].
- “a smart system for supporting decision-making” [8].
- “a specific class of computerized information system that supports management decision-making activities” [9].
2.2. Explainable Artificial Intelligence
2.3. Explainable Decision Support Systems
- Automated Explainability: Automated explainability methods pave the way towards automated DSSs, since they enhance the explainability of AI systems and make it easier to understand the reasoning behind their predictions, thus boosting the robustness of a DSS [15].
- Increased Transparency: Transparency refers to a model’s ability to be understood. Traditional DSSs are mainly focused on improving the decision-making process [5]. These systems utilize ML algorithms and operate as “black boxes”, meaning that the internal workings and logic behind their outputs are not visible or understandable to end-users. This lack of transparency often leads to lower user trust and slower adoption, particularly in industry and healthcare, where justifying the predictions made by a model is critical for making the correct decision. In contrast, XDSSs harness the benefits of XAI to ensure that the decision-making process is transparent and interpretable. An explainable DSS transforms a complex and usually incomprehensible black-box model into a more transparent and, therefore, understandable one. This increased transparency enables more informed decision-making, allowing users to understand the rationale of a DSS and reach reliable decisions [16].
- High Accuracy Level: Improving the accuracy of an ML model has been the primary concern of scientists building efficient predictive models, regardless of the method employed. Although explainability is the key issue in XAI, it should not come at the expense of accuracy. The central question in recent years has therefore been how to build XDSSs that are both highly accurate and explainable [17]. This can be achieved by adjusting specific parameters of a DSS, enabling users to tailor the system to their individual needs (a brief illustrative sketch follows this list).
- Improved Compliance: Traditional DSSs often face challenges in meeting regulatory requirements that demand transparency and accountability, since the opaque nature of their decision-making processes makes the resulting decisions difficult to audit and explain. XDSSs, in contrast, are designed to provide comprehensive explanations of the sources and processes used to arrive at decisions, which eases auditing and improves compliance with relevant regulations and standards [18]. Decision-making processes can therefore be thoroughly evaluated and justified.
- Enhanced Collaboration: XDSSs also promote collaboration, as users gain an in-depth understanding of decisions and can avoid biased predictions [18].
- Cost Savings: XDSSs can save time and reduce costs by reducing manual processes and speeding up data analysis. This enables efficient decisions and helps organizations remain competitive. For example, an explainable clinical DSS could assist in reducing the cost of therapy and healthcare for a patient [19].
- Enhanced User Experience: User experience and satisfaction are also significantly influenced by the level of explainability in a DSS. Traditional DSSs can be frustrating for users who need to understand and justify the decisions made by the system, which can lead to lower satisfaction and a reluctance to rely on the system. Conversely, XDSSs can greatly enhance the user experience by providing understandable explanations of the decisions made [16]. Users who understand how decisions are made are more likely to be satisfied with the system, leading to an enhanced experience and greater reliance on the technology.
- Increased Confidence: User confidence is likewise significantly influenced by the level of explainability in a DSS. The frustration caused by opaque decisions generally lowers user confidence and breeds reluctance to rely on the system. Conversely, XDSSs can greatly boost end-user confidence, since they unfold the reasoning behind the system’s outputs [20].
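As an illustration of the accuracy–explainability trade-off noted above, the following minimal sketch (our own toy example, not drawn from the cited works; it assumes scikit-learn and its bundled Iris dataset) varies the depth of a decision tree: shallower trees yield shorter, more readable rule sets, usually at some cost in accuracy.

```python
# Hypothetical illustration: trading accuracy for interpretability by
# constraining the depth of a decision tree (shallower trees produce
# shorter, more readable rule sets).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
for depth in (1, 2, 3, None):  # None = fully grown tree
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"max_depth={depth}: mean CV accuracy = {acc:.3f}")
```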
3. Taxonomy of Explainable Decision Support Systems
- Visual Explainability
  - Automatic Data Visualization
  - Sensitivity Analysis
  - Local Interpretable Model-agnostic Explanations
  - SHapley Additive exPlanations
- Rule-based Explainability
  - Production Rule Systems
  - Tree-based Systems
  - If–Then Explanation Rules
- Case-based Explainability
  - Case-based Reasoning
  - Example-based Explainability
- Natural Language Explainability
  - Interactive Natural Language Question-answering Systems
  - Natural Language Generation Systems
  - Natural Language Understanding Systems
- Knowledge-based Explainability
  - Expert Systems
3.1. Visual Explainability
3.1.1. Automatic Data Visualization
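Systems such as DeepEye recommend suitable chart types and encodings for a dataset automatically. As a purely illustrative toy (our own sketch, not the algorithm of any cited system; it assumes pandas), chart recommendation can be reduced to simple rules over column data types:

```python
# Toy heuristic (illustrative only): pick a default chart type for a
# pandas DataFrame from its column data types.
import pandas as pd

def recommend_chart(df: pd.DataFrame) -> str:
    numeric = df.select_dtypes("number").columns
    categorical = df.select_dtypes(exclude="number").columns
    if len(numeric) >= 2:
        return f"scatter plot of {numeric[0]} vs {numeric[1]}"
    if len(categorical) >= 1 and len(numeric) >= 1:
        return f"bar chart of {numeric[0]} grouped by {categorical[0]}"
    return f"histogram of {df.columns[0]}"

df = pd.DataFrame({"age": [25, 32, 47], "income": [30e3, 45e3, 60e3]})
print(recommend_chart(df))  # -> scatter plot of age vs income
```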
3.1.2. Sensitivity Analysis
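Sensitivity analysis explains a prediction by measuring how the model’s output responds to controlled changes in each input. A minimal one-at-a-time (OAT) sketch (our own illustration; it assumes scikit-learn and its bundled diabetes dataset):

```python
# Minimal one-at-a-time (OAT) sensitivity analysis sketch: perturb each
# input feature and measure the change in the model's prediction.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
model = Ridge().fit(X, y)

x0 = X[0]                      # instance to explain
base = model.predict([x0])[0]  # baseline prediction
for j in range(X.shape[1]):
    x = x0.copy()
    x[j] += X[:, j].std()      # perturb feature j by one standard deviation
    delta = model.predict([x])[0] - base
    print(f"feature {j}: prediction change = {delta:+.2f}")
```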
3.1.3. Local Interpretable Model-agnostic Explanations
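A minimal usage sketch of the open-source `lime` package released with the original LIME work (the model and dataset are our own illustrative choices): LIME fits a sparse linear surrogate around a single instance and reports the locally most influential features.

```python
# Minimal LIME usage sketch (assumes the `lime` and `scikit-learn`
# packages are installed).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
# Fit a local linear surrogate around one instance and report the
# top contributing features.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```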
3.1.4. SHapley Additive exPlanations
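A corresponding sketch with the open-source `shap` package (again, the model and dataset are illustrative choices of ours): for tree ensembles, `TreeExplainer` computes the Shapley-value attributions efficiently.

```python
# Minimal SHAP usage sketch (assumes the `shap` and `scikit-learn`
# packages are installed).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # per-feature attributions
print(shap_values.shape)                    # (5, 30): one value per feature
```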
3.2. Rule-Based Explainability
3.2.1. Production Rule Systems
3.2.2. Tree-based Systems
3.2.3. If–Then Explanation Rules
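Tree-based explainability and if–then rules are closely linked: a trained decision tree can be rendered directly as a set of human-readable rules. A small sketch using scikit-learn’s `export_text` (the dataset and depth are illustrative choices of ours):

```python
# Illustrative sketch: render a trained decision tree as human-readable
# if-then rules (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```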
3.3. Case-Based Explainability
3.3.1. Case-Based Reasoning
3.3.2. Example-Based Explainability
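The retrieval step at the heart of case-based and example-based explanations can be sketched as a nearest-neighbor search: a prediction is justified by pointing to the most similar previously solved cases. A minimal illustration (our own; it assumes scikit-learn and covers only the “retrieve” phase of the full CBR cycle):

```python
# Minimal retrieval step of a case-based explanation: justify a new
# prediction by the most similar past cases and their outcomes.
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
case_base = NearestNeighbors(n_neighbors=3).fit(X)

query = X[0] + 0.1                  # a new case to explain
dist, idx = case_base.kneighbors([query])
for d, i in zip(dist[0], idx[0]):
    print(f"similar past case {i} (distance {d:.2f}) had outcome {y[i]}")
```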
3.4. Natural Language Explainability
3.4.1. Interactive Natural Language Question-Answering Systems
3.4.2. Natural Language Generation Systems
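Full NLG pipelines are beyond a short sketch, but the core idea of verbalizing a model’s evidence can be illustrated with simple templates (a toy of our own; the feature attributions below are made-up values such as those SHAP or LIME might produce):

```python
# Toy template-based natural language generation sketch: turn feature
# attributions into a short textual explanation.
def verbalize(prediction: str, contributions: dict[str, float], k: int = 2) -> str:
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    reasons = " and ".join(
        f"{name} {'increased' if w > 0 else 'decreased'} the score by {abs(w):.2f}"
        for name, w in top
    )
    return f"The system predicted '{prediction}' mainly because {reasons}."

print(verbalize("loan approved", {"income": 0.42, "debt": -0.31, "age": 0.05}))
```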
3.4.3. Natural Language Understanding Systems
3.5. Knowledge-Based Explainability
Expert Systems
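Classic expert systems combine a knowledge base of rules with an inference engine, and their explanations are simply the chain of fired rules. A minimal forward-chaining sketch (the rules and facts are invented for illustration):

```python
# Minimal forward-chaining inference sketch in the spirit of classic
# rule-based expert systems (rules and facts are illustrative only).
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_breath"}, "refer_to_doctor"),
]

def forward_chain(facts: set[str]) -> set[str]:
    derived = set(facts)
    changed = True
    while changed:                       # iterate until no rule fires
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)  # fire the rule
                changed = True
    return derived

print(forward_chain({"fever", "cough", "short_breath"}))
```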
4. Applications of Explainable Decision Support Systems
4.1. Healthcare
4.2. Transport
4.3. Manufacturing and Industry
4.4. Finance
4.5. Education
4.6. Other Domains
4.7. Metrics for Evaluating XAI Methods in DSSs
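Among the commonly used evaluation criteria, fidelity measures how faithfully an interpretable surrogate reproduces the black-box model it explains. A minimal sketch (our own illustration, assuming scikit-learn; the surrogate here is a shallow decision tree):

```python
# Illustrative fidelity metric: agreement between an interpretable
# surrogate and the black-box model it is meant to explain.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree to mimic the black box, then score the mimicry.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.3f}")
```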
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- McCarthy, J. The Question of artificial intelligence: Philosophical and sociological perspectives. Choice Rev. Online 1988, 26, 26-2117. [Google Scholar] [CrossRef]
- Akyol, S. Rule-based Explainable Artificial Intelligence. In Pioneer and Contemporary Studies in Engineering; 2023; pp. 305–326. Available online: https://www.duvaryayinlari.com/Webkontrol/IcerikYonetimi/Dosyalar/pioneer-and-contemporary-studies-in-engineering_icerik_g3643_2toBsc9b.pdf (accessed on 13 July 2024).
- Das, A.; Rad, P. Opportunities and challenges in explainable artificial intelligence (xai): A survey. arXiv 2020, arXiv:2006.11371. [Google Scholar]
- Gunning, D.; Aha, D. DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 2019, 40, 44–58. [Google Scholar]
- Keen, P.G.W. Decision support systems: A research perspective. In Decision Support Systems: Issues and Challenges: Proceedings of an International Task Force Meeting; Pergamon: Oxford, UK, 1980; pp. 23–44. [Google Scholar]
- Sprague, R.H., Jr. A framework for the development of decision support systems. MIS Q. 1980, 4, 1–26. [Google Scholar] [CrossRef]
- Eom, S.B.; Lee, S.M.; Kim, E.B.; Somarajan, C. A survey of decision support system applications (1988–1994). J. Oper. Res. Soc. 1998, 49, 109–120. [Google Scholar] [CrossRef]
- Terribile, F.; Agrillo, A.; Bonfante, A.; Buscemi, G.; Colandrea, M.; D’Antonio, A.; De Mascellis, R.; De Michele, C.; Langella, G.; Manna, P.; et al. A Web-based spatial decision supporting system for land management and soil conservation. Solid Earth 2015, 6, 903–928. [Google Scholar] [CrossRef]
- Yazdani, M.; Zarate, P.; Coulibaly, A.; Zavadskas, E.K. A group decision making support system in logistics and supply chain management. Expert. Syst. Appl. 2017, 88, 376–392. [Google Scholar] [CrossRef]
- Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting black-box models: A review on explainable artificial intelligence. Cognit. Comput. 2023, 16, 45–74. [Google Scholar] [CrossRef]
- Samek, W. Explainable deep learning: Concepts, methods, and new developments. In Explainable Deep Learning AI; Elsevier: Amsterdam, The Netherlands, 2023; pp. 7–33. [Google Scholar]
- Holzinger, A.; Goebel, R.; Palade, V.; Ferri, M. Towards integrative machine learning and knowledge extraction. In Towards Integrative Machine Learning and Knowledge Extraction: BIRS Workshop, Banff, AB, Canada, 24–26 July 2015, Revised Selected Papers; Springer: Berlin/Heidelberg, Germany, 2017; pp. 1–12. [Google Scholar]
- Schoonderwoerd, T.A.J.; Jorritsma, W.; Neerincx, M.A.; Van Den Bosch, K. Human-centered XAI: Developing design patterns for explanations of clinical decision support systems. Int. J. Hum. Comput. Stud. 2021, 154, 102684. [Google Scholar] [CrossRef]
- Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
- Confalonieri, R.; Coba, L.; Wagner, B.; Besold, T.R. A historical perspective of explainable Artificial Intelligence. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2021, 11, e1391. [Google Scholar] [CrossRef]
- Knapič, S.; Malhi, A.; Saluja, R.; Främling, K. Explainable artificial intelligence for human decision support system in the medical domain. Mach. Learn Knowl. Extr. 2021, 3, 740–770. [Google Scholar] [CrossRef]
- Angelov, P.P.; Soares, E.A.; Jiang, R.; Arnold, N.I.; Atkinson, P.M. Explainable artificial intelligence: An analytical review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2021, 11, e1424. [Google Scholar] [CrossRef]
- Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: A systematic review. Appl. Sci. 2021, 11, 5088. [Google Scholar] [CrossRef]
- Belard, A.; Buchman, T.; Forsberg, J.; Potter, B.K.; Dente, C.J.; Kirk, A.; Elster, E. Precision diagnosis: A view of the clinical decision support systems (CDSS) landscape through the lens of critical care. J. Clin. Monit. Comput. 2017, 31, 261–271. [Google Scholar] [CrossRef] [PubMed]
- Sachan, S.; Yang, J.-B.; Xu, D.-L.; Benavides, D.E.; Li, Y. An explainable AI decision-support-system to automate loan underwriting. Expert Syst. Appl. 2020, 144, 113100. [Google Scholar] [CrossRef]
- Alicioglu, G.; Sun, B. A survey of visual analytics for Explainable Artificial Intelligence methods. Comput. Graph. 2022, 102, 502–520. [Google Scholar] [CrossRef]
- Liu, G.C.; Odell, J.D.; Whipple, E.C.; Ralston, R.; Carroll, A.E.; Downs, S.M. Data visualization for truth maintenance in clinical decision support systems. Int. J. Pediatr. Adolesc. Med. 2015, 2, 64–69. [Google Scholar] [CrossRef] [PubMed]
- Wu, Z.; Chen, W.; Ma, Y.; Xu, T.; Yan, F.; Lv, L.; Qian, Z.; Xia, J. Explainable data transformation recommendation for automatic visualization. Front. Inf. Technol. Electron. Eng. 2023, 24, 1007–1027. [Google Scholar] [CrossRef]
- Bohanec, M.; Borštnar, M.K.; Robnik-Šikonja, M. Explaining machine learning models in sales predictions. Expert Syst. Appl. 2017, 71, 416–428. [Google Scholar] [CrossRef]
- Schönhof, R.; Werner, A.; Elstner, J.; Zopcsak, B.; Awad, R.; Huber, M. Feature visualization within an automated design assessment leveraging explainable artificial intelligence methods. Procedia CIRP 2021, 100, 331–336. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
- Zafar, M.R.; Khan, N. Deterministic local interpretable model-agnostic explanations for stable explainability. Mach. Learn. Knowl. Extr. 2021, 3, 525–541. [Google Scholar] [CrossRef]
- Zafar, M.R.; Khan, N.M. DLIME: A deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. arXiv 2019, arXiv:1906.10263. [Google Scholar]
- Zhao, X.; Huang, W.; Huang, X.; Robu, V.; Flynn, D. Baylime: Bayesian local interpretable model-agnostic explanations. In Uncertainty in Artificial Intelligence; 2021; pp. 887–896. Available online: https://www.auai.org/uai2021/pdf/uai2021.342.pdf (accessed on 13 July 2024).
- Shi, S.; Zhang, X.; Fan, W. A modified perturbed sampling method for local interpretable model-agnostic explanation. arXiv 2020, arXiv:2002.07434. [Google Scholar]
- Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777. [Google Scholar]
- Song, K.; Zeng, X.; Zhang, Y.; De Jonckheere, J.; Yuan, X.; Koehl, L. An interpretable knowledge-based decision support system and its applications in pregnancy diagnosis. Knowl. Based. Syst. 2021, 221, 106835. [Google Scholar] [CrossRef]
- Yang, L.H.; Liu, J.; Ye, F.F.; Wang, Y.M.; Nugent, C.; Wang, H.; Martínez, L. Highly explainable cumulative belief rule-based system with effective rule-base modeling and inference scheme. Knowl. Based. Syst. 2022, 240, 107805. [Google Scholar] [CrossRef]
- Davis, R.; King, J.J. The Origin of Rule-Based Systems in AI. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. 1984. Available online: https://www.shortliffe.net/Buchanan-Shortliffe-1984/Chapter-02.pdf (accessed on 13 July 2024).
- McCarthy, J. Generality in artificial intelligence. Commun. ACM 1987, 30, 1030–1035. [Google Scholar] [CrossRef]
- Mahbooba, B.; Timilsina, M.; Sahal, R.; Serrano, M. Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model. Complexity 2021, 2021, 6634811. [Google Scholar] [CrossRef]
- Souza, V.F.; Cicalese, F.; Laber, E.; Molinaro, M. Decision Trees with Short Explainable Rules. Adv. Neural. Inf. Process. Syst. 2022, 35, 12365–12379. [Google Scholar]
- Sushil, M.; Šuster, S.; Daelemans, W. Rule induction for global explanation of trained models. arXiv 2018, arXiv:1808.09744. [Google Scholar]
- Aamodt, A.; Plaza, E. Case-based reasoning: Foundational issues, methodological variations, and system approaches. AI Commun. 1994, 7, 39–59. [Google Scholar] [CrossRef]
- Li, W.; Paraschiv, F.; Sermpinis, G. A data-driven explainable case-based reasoning approach for financial risk detection. Quant Financ. 2022, 22, 2257–2274. [Google Scholar] [CrossRef]
- Poché, A.; Hervier, L.; Bakkay, M.-C. Natural Example-Based Explainability: A Survey. In World Conference on eXplainable Artificial Intelligence; Springer: Cham, Switzerland, 2023; pp. 24–47. [Google Scholar]
- Danilevsky, M.; Qian, K.; Aharonov, R.; Katsis, Y.; Kawas, B.; Sen, P. A survey of the state of explainable AI for natural language processing. arXiv 2020, arXiv:2010.00711. [Google Scholar]
- Cambria, E.; Malandri, L.; Mercorio, F.; Mezzanzanica, M.; Nobani, N. A survey on XAI and natural language explanations. Inf. Process. Manag. 2023, 60, 103111. [Google Scholar] [CrossRef]
- Biancofiore, G.M.; Deldjoo, Y.; Di Noia, T.; Di Sciascio, E.; Narducci, F. Interactive question answering systems: Literature review. ACM Comput. Surv. 2024, 56, 1–38. [Google Scholar] [CrossRef]
- Reiter, E. Natural language generation challenges for explainable AI. arXiv 2019, arXiv:1911.08794. [Google Scholar]
- Lenci, A. Understanding natural language understanding systems. A critical analysis. arXiv 2023, arXiv:2303.04229. [Google Scholar]
- Weber, R.; Shrestha, M.; Johs, A.J. Knowledge-based XAI through CBR: There is more to explanations than models can tell. arXiv 2021, arXiv:2108.10363. [Google Scholar]
- Chari, S.; Gruen, D.M.; Seneviratne, O.; McGuinness, D.L. Foundations of explainable knowledge-enabled systems. In Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges; IOS Press: Clifton, VA, USA, 2020; pp. 23–48. [Google Scholar]
- Ravi, M.; Negi, A.; Chitnis, S. A Comparative Review of Expert Systems, Recommender Systems, and Explainable AI. In Proceedings of the 2022 IEEE 7th International conference for Convergence in Technology (I2CT), Mumbai, India, 7–9 April 2022; pp. 1–8. [Google Scholar]
- Cawsey, A.J.; Webber, B.L.; Jones, R.B. Natural language generation in health care. J. Am. Med. Inform. Assoc. 1997, 4, 473–482. [Google Scholar] [CrossRef]
- Musen, M.A.; Middleton, B.; Greenes, R.A. Clinical decision-support systems. In Biomedical informatics: Computer Applications in Health Care and Biomedicine; Springer: Berlin/Heidelberg, Germany, 2021; pp. 795–840. [Google Scholar]
- Du, Y.; Rafferty, A.R.; McAuliffe, F.M.; Wei, L.; Mooney, C. An explainable machine learning-based clinical decision support system for prediction of gestational diabetes mellitus. Sci. Rep. 2022, 12, 1170. [Google Scholar] [CrossRef]
- Du, Y.; Rafferty, A.R.; McAuliffe, F.M.; Mehegan, J.; Mooney, C. Towards an explainable clinical decision support system for large-for-gestational-age births. PLoS ONE 2023, 18, e0281821. [Google Scholar] [CrossRef] [PubMed]
- Ritter, Z.; Vogel, S.; Schultze, F.; Pischek-Koch, K.; Schirrmeister, W.; Walcher, F.; Röhrig, R.; Kesztyüs, T.; Krefting, D.; Blaschke, S. Using Explainable Artificial Intelligence Models (ML) to Predict Suspected Diagnoses as Clinical Decision Support. Stud. Health Technol. Inform. 2022, 294, 573–574. [Google Scholar] [PubMed]
- Petrauskas, V.; Jasinevicius, R.; Damuleviciene, G.; Liutkevicius, A.; Janaviciute, A.; Lesauskaite, V.; Knasiene, J.; Meskauskas, Z.; Dovydaitis, J.; Kazanavicius, V.; et al. Explainable artificial intelligence-based decision support system for assessing the nutrition-related geriatric syndromes. Appl. Sci. 2021, 11, 11763. [Google Scholar] [CrossRef]
- Woensel, W.V.; Scioscia, F.; Loseto, G.; Seneviratne, O.; Patton, E.; Abidi, S.; Kagal, L. Explainable clinical decision support: Towards patient-facing explanations for education and long-term behavior change. In International Conference on Artificial Intelligence in Medicine; Springer: Cham, Switzerland, 2022; pp. 57–62. [Google Scholar]
- Antoniadi, A.M.; Galvin, M.; Heverin, M.; Hardiman, O.; Mooney, C. Development of an explainable clinical decision support system for the prediction of patient quality of life in amyotrophic lateral sclerosis. In Proceedings of the 36th Annual ACM Symposium on Applied Computing, Virtual, 22–26 March 2021; pp. 594–602. [Google Scholar]
- Suh, J.; Yoo, S.; Park, J.; Cho, S.Y.; Cho, M.C.; Son, H.; Jeong, H. Development and validation of an explainable artificial intelligence-based decision-supporting tool for prostate biopsy. BJU Int. 2020, 126, 694–703. [Google Scholar] [CrossRef]
- Abtahi, H.; Amini, S.; Gholamzadeh, M.; Gharabaghi, M.A. Development and evaluation of a mobile-based asthma clinical decision support system to enhance evidence-based patient management in primary care. Inform. Med. Unlocked 2023, 37, 101168. [Google Scholar] [CrossRef]
- Yoon, K.; Kim, J.-Y.; Kim, S.-J.; Huh, J.-K.; Kim, J.-W.; Choi, J. Explainable deep learning-based clinical decision support engine for MRI-based automated diagnosis of temporomandibular joint anterior disk displacement. Comput. Methods Programs Biomed. 2023, 233, 107465. [Google Scholar] [CrossRef]
- Aiosa, G.V.; Palesi, M.; Sapuppo, F. EXplainable AI for decision Support to obesity comorbidities diagnosis. IEEE Access 2023, 11, 107767–107782. [Google Scholar] [CrossRef]
- Talukder, N. Clinical Decision Support System: An Explainable AI Approach. Master’s Thesis, University of Oulu, Oulu, Finland, 2024. [Google Scholar]
- Du, Y.; Antoniadi, A.M.; McNestry, C.; McAuliffe, F.M.; Mooney, C. The role of xai in advice-taking from a clinical decision support system: A comparative user study of feature contribution-based and example-based explanations. Appl. Sci. 2022, 12, 10323. [Google Scholar] [CrossRef]
- Midtfjord, A.D.; De Bin, R.; Huseby, A.B. A decision support system for safer airplane landings: Predicting runway conditions using XGBoost and explainable AI. Cold Reg. Sci. Technol. 2022, 199, 103556. [Google Scholar] [CrossRef]
- Amini, M.; Bagheri, A.; Delen, D. Discovering injury severity risk factors in automobile crashes: A hybrid explainable AI framework for decision support. Reliab. Eng. Syst. Saf. 2022, 226, 108720. [Google Scholar] [CrossRef]
- Tashmetov, T.; Tashmetov, K.; Aliev, R.; Rasulmuhamedov, M. Fuzzy information and expert systems for analysis of failure of automatic and telemechanic systems on railway transport. Chem. Technol. Control. Manag. 2020, 2020, 168–172. [Google Scholar]
- Cochran, D.S.; Smith, J.; Mark, B.G.; Rauch, E. Information model to advance explainable AI-Based decision support systems in manufacturing system design. In International Symposium on Industrial Engineering and Automation; Springer: Cham, Switzerland, 2022; pp. 49–60. [Google Scholar]
- Tiensuu, H.; Tamminen, S.; Puukko, E.; Röning, J. Evidence-based and explainable smart decision support for quality improvement in stainless steel manufacturing. Appl. Sci. 2021, 11, 10897. [Google Scholar] [CrossRef]
- Galanti, R.; de Leoni, M.; Monaro, M.; Navarin, N.; Marazzi, A.; Di Stasi, B.; Maldera, S. An explainable decision support system for predictive process analytics. Eng. Appl. Artif. Intell. 2023, 120, 105904. [Google Scholar] [CrossRef]
- Senoner, J.; Netland, T.; Feuerriegel, S. Using explainable artificial intelligence to improve process quality: Evidence from semiconductor manufacturing. Manag. Sci 2022, 68, 5704–5723. [Google Scholar] [CrossRef]
- Onari, M.A.; Rezaee, M.J.; Saberi, M.; Nobile, M.S. An explainable data-driven decision support framework for strategic customer development. Knowl. Based Syst. 2024, 295, 111761. [Google Scholar] [CrossRef]
- Sun, W.; Zhang, X.; Li, M.; Wang, Y. Interpretable high-stakes decision support system for credit default forecasting. Technol. Forecast Soc. Chang. 2023, 196, 122825. [Google Scholar] [CrossRef]
- Mahmoud, M.; Algadi, N.; Ali, A. Expert system for banking credit decision. In 2008 International Conference on Computer Science and Information Technology; IEEE: New York, NY, USA, 2008; pp. 813–819. [Google Scholar]
- Kostopoulos, G.; Karlos, S.; Kotsiantis, S. Multiview Learning for Early Prognosis of Academic Performance: A Case Study. IEEE Trans. Learn. Technol. 2019, 12, 212–224. [Google Scholar] [CrossRef]
- Khosravi, H.; Shum, S.B.; Chen, G.; Conati, C.; Tsai, Y.S.; Kay, J.; Knight, S.; Martinez-Maldonado, R.; Sadiq, S.; Gašević, D. Explainable artificial intelligence in education. Comput. Educ. Artif. Intell. 2022, 3, 100074. [Google Scholar] [CrossRef]
- Karlos, S.; Kostopoulos, G.; Kotsiantis, S. Predicting and Interpreting Students’ Grades in Distance Higher Education through a Semi-Regression Method. Appl. Sci. 2020, 10, 8413. [Google Scholar] [CrossRef]
- Guleria, P.; Sood, M. Explainable AI and machine learning: Performance evaluation and explainability of classifiers on educational data mining inspired career counseling. Educ. Inf. Technol. 2023, 28, 1081–1116. [Google Scholar] [CrossRef]
- Meske, C.; Bunde, E. Design principles for user interfaces in AI-Based decision support systems: The case of explainable hate speech detection. Inf. Syst. Front. 2023, 25, 743–773. [Google Scholar] [CrossRef]
- Thakker, D.; Mishra, B.K.; Abdullatif, A.; Mazumdar, S.; Simpson, S. Explainable artificial intelligence for developing smart cities solutions. Smart Cities 2020, 3, 1353–1382. [Google Scholar] [CrossRef]
- Tsakiridis, N.L.; Diamantopoulos, T.; Symeonidis, A.L.; Theocharis, J.B.; Iossifides, A.; Chatzimisios, P.; Pratos, G.; Kouvas, D. Versatile internet of things for agriculture: An explainable ai approach. In Proceedings of the Artificial Intelligence Applications and Innovations: 16th IFIP WG 12.5 International Conference, AIAI 2020, Neos Marmaras, Greece, 5–7 June 2020; pp. 180–191. [Google Scholar]
- Kenny, E.M.; Ruelle, E.; Geoghegan, A.; Shalloo, L.; O’Leary, M.; O’Donovan, M.; Temraz, M.; Keane, M.T. Bayesian case-exclusion and personalized explanations for sustainable dairy farming. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, Virtual, 7–15 January 2021; pp. 4740–4744. [Google Scholar]
- Hamrouni, B.; Bourouis, A.; Korichi, A.; Brahmi, M. Explainable ontology-based intelligent decision support system for business model design and sustainability. Sustainability 2021, 13, 9819. [Google Scholar] [CrossRef]
- Papamichail, K.N.; French, S. Explaining and justifying the advice of a decision support system: A natural language generation approach. Expert. Syst. Appl. 2003, 24, 35–48. [Google Scholar] [CrossRef]
- Rosenfeld, A. Better metrics for evaluating explainable artificial intelligence. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual, 3–7 May 2021; pp. 45–50. [Google Scholar]
- Papenmeier, A.; Kern, D.; Englebienne, G.; Seifert, C. It’s complicated: The relationship between user trust, model accuracy and explanations in AI. ACM Trans. Comput. Hum. Interact. 2022, 29, 1–33. [Google Scholar] [CrossRef]
- Luo, Y.; Qin, X.; Tang, N.; Li, G. Deepeye: Towards automatic data visualization. In Proceedings of the 2018 IEEE 34th International Conference on Data Engineering (ICDE), Paris, France, 16–19 April 2018; pp. 101–112. [Google Scholar]
- Zhou, J.; Gandomi, A.H.; Chen, F.; Holzinger, A. Evaluating the quality of machine learning explanations: A survey on methods and metrics. Electronics 2021, 10, 593. [Google Scholar] [CrossRef]
Table: Taxonomy-based categorization of the reviewed works [16,20,40,50–66,68–70,72,73,75–83]. Each work is marked against the five categories of the proposed taxonomy and their methods: Visual (Automatic Data Visualization, Sensitivity Analysis, LIME, SHAP); Rule-Based (Production Rule Systems, Tree-Based Systems, If–Then Explanation Rules); Case-Based (Case-Based Reasoning, Example-Based Explainability); Natural Language (Interactive Question-Answering, Generation, and Understanding Systems); and Knowledge-Based (Expert Systems).