A Literature Review of Human–AI Synergy in Decision Making: From the Perspective of Affordance Actualization Theory
Abstract
1. Introduction
2. Theoretical Background
2.1. Overarching Framework: Affordance Actualization Theory
2.2. Human–AI Synergy
3. Materials and Methods
4. Findings
4.1. Theme 1: Identification of AI Affordances in Decision-Making
4.1.1. Automated Information Collecting and Updating Affordance
4.1.2. Information Processing and Analyzing Affordance
4.1.3. Predicting/Forecasting and Decision-Making-Assistance Affordance
4.1.4. Explanations Providing Affordance
4.2. Theme 2: Human–AI Synergy Patterns Regarding Different Decision Tasks
4.2.1. AI-Centered Patterns
4.2.2. Human-Centered Patterns
4.2.3. Human–AI Synergy-Centered Patterns
4.3. Theme 3: Outcomes of Human–AI Synergy in Decision-Making
4.3.1. Outcome 1: General Performance of Human–AI Synergy in Decision-Making
4.3.2. Outcome 2: Trust in Human–AI Synergy in Decision-Making
4.3.3. Outcome 3: Transparency and Explainability between Human and AI Synergy
4.3.4. Outcome 4: Cognitive Perspectives of Human–AI Synergy
5. Discussion
6. Limitations and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach; Prentice-Hall: Englewood Cliffs, NJ, USA, 1995.
- Jarrahi, M.H. Artificial Intelligence and the Future of Work: Human-AI Symbiosis in Organizational Decision Making. Bus. Horiz. 2018, 61, 577–586.
- Lai, V.; Chen, C.; Liao, Q.V.; Smith-Renner, A.; Tan, C. Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies. arXiv 2021, arXiv:2112.11471.
- Achmat, L.; Brown, I. Artificial Intelligence Affordances for Business Innovation: A Systematic Review of Literature. In Proceedings of the 4th International Conference on the Internet, Cyber Security and Information Systems (ICICIS), Johannesburg, South Africa, 31 October–1 November 2019; pp. 1–12.
- Bader, J.; Edwards, J.; Harris-Jones, C.; Hannaford, D. Practical Engineering of Knowledge-Based Systems. Inf. Softw. Technol. 1988, 30, 266–277.
- Kumar, V.; Rajan, B.; Venkatesan, R.; Lecinski, J. Understanding the Role of Artificial Intelligence in Personalized Engagement Marketing. Calif. Manag. Rev. 2019, 61, 135–155.
- Fernandes, T.; Oliveira, E. Understanding Consumers’ Acceptance of Automated Technologies in Service Encounters: Drivers of Digital Voice Assistants Adoption. J. Bus. Res. 2021, 122, 180–191.
- Xiong, W.; Fan, H.; Ma, L.; Wang, C. Challenges of Human–Machine Collaboration in Risky Decision-Making. Front. Eng. Manag. 2022, 9, 89–103.
- Strong, D.; Volkoff, O.; Johnson, S.; Pelletier, L.; Tulu, B.; Bar-On, I.; Trudel, J.; Garber, L. A Theory of Organization-EHR Affordance Actualization. J. Assoc. Inf. Syst. 2014, 15, 53–85.
- Du, W.; Pan, S.L.; Leidner, D.E.; Ying, W. Affordances, Experimentation and Actualization of FinTech: A Blockchain Implementation Study. J. Strateg. Inf. Syst. 2019, 28, 50–65.
- Zeng, D.; Tim, Y.; Yu, J.; Liu, W. Actualizing Big Data Analytics for Smart Cities: A Cascading Affordance Study. Int. J. Inf. Manag. 2020, 54, 102156.
- Lehrer, C.; Wieneke, A.; Vom Brocke, J.; Jung, R.; Seidel, S. How Big Data Analytics Enables Service Innovation: Materiality, Affordance, and the Individualization of Service. J. Manag. Inf. Syst. 2018, 35, 424–460.
- Chatterjee, S.; Moody, G.; Lowry, P.B.; Chakraborty, S.; Hardin, A. Information Technology and Organizational Innovation: Harmonious Information Technology Affordance and Courage-Based Actualization. J. Strateg. Inf. Syst. 2020, 29, 101596.
- Anderson, C.; Robey, D. Affordance Potency: Explaining the Actualization of Technology Affordances. Inf. Organ. 2017, 27, 100–115.
- Lanz, L.; Briker, R.; Gerpott, F.H. Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning. J. Bus. Ethics 2023.
- Seeber, I.; Bittner, E.; Briggs, R.O.; De Vreede, T.; De Vreede, G.-J.; Elkins, A.; Maier, R.; Merz, A.B.; Oeste-Reiß, S.; Randrup, N.; et al. Machines as Teammates: A Research Agenda on AI in Team Collaboration. Inf. Manag. 2020, 57, 103174.
- Hancock, P.A.; Kajaks, T.; Caird, J.K.; Chignell, M.H.; Mizobuchi, S.; Burns, P.C.; Feng, J.; Fernie, G.R.; Lavallière, M.; Noy, I.Y.; et al. Challenges to Human Drivers in Increasingly Automated Vehicles. Hum. Factors J. Hum. Factors Ergon. Soc. 2020, 62, 310–328.
- Van Pinxteren, M.M.E.; Wetzels, R.W.H.; Rüger, J.; Pluymaekers, M.; Wetzels, M. Trust in Humanoid Robots: Implications for Services Marketing. J. Serv. Mark. 2019, 33, 507–518.
- Lee, M.H.; Siewiorek, D.P.P.; Smailagic, A.; Bernardino, A.; Bermúdez i Badia, S.B. A Human-AI Collaborative Approach for Clinical Decision Making on Rehabilitation Assessment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–14.
- Järvelä, S.; Nguyen, A.; Hadwin, A. Human and Artificial Intelligence Collaboration for Socially Shared Regulation in Learning. Br. J. Educ. Technol. 2023, 54, 1057–1076.
- Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 407–434.
- Radclyffe, C.; Ribeiro, M.; Wortham, R.H. The Assessment List for Trustworthy Artificial Intelligence: A Review and Recommendations. Front. Artif. Intell. 2023, 6, 1020592.
- Stahl, B.C.; Leach, T. Assessing the Ethical and Social Concerns of Artificial Intelligence in Neuroinformatics Research: An Empirical Test of the European Union Assessment List for Trustworthy AI (ALTAI). AI Ethics 2023, 3, 745–767.
- Zicari, R.V.; Brodersen, J.; Brusseau, J.; Düdder, B.; Eichhorn, T.; Ivanov, T.; Kararigas, G.; Kringen, P.; McCullough, M.; Möslein, F.; et al. Z-Inspection®: A Process to Assess Trustworthy AI. IEEE Trans. Technol. Soc. 2021, 2, 83–97.
- Webster, J.; Watson, R.T. Analyzing the Past to Prepare for the Future: Writing a Literature Review. MIS Q. 2002, 26, xiii–xxiii.
- Yuan, L.; Gao, X.; Zheng, Z.; Edmonds, M.; Wu, Y.N.; Rossano, F.; Lu, H.; Zhu, Y.; Zhu, S.-C. In Situ Bidirectional Human-Robot Value Alignment. Sci. Robot. 2022, 7, eabm4183.
- Wang, N.; Pynadath, D.V.; Hill, S.G. Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 109–116.
- Chen, M.; Nikolaidis, S.; Soh, H.; Hsu, D.; Srinivasa, S. Planning with Trust for Human-Robot Collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; ACM: New York, NY, USA, 2018; pp. 307–315.
- Gao, X.; Gong, R.; Zhao, Y.; Wang, S.; Shu, T.; Zhu, S.-C. Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks. In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020.
- Gong, Z.; Zhang, Y. Behavior Explanation as Intention Signaling in Human-Robot Teaming. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1005–1011.
- Unhelkar, V.V.; Li, S.; Shah, J.A. Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; ACM: New York, NY, USA, 2020; pp. 329–341.
- Buçinca, Z.; Malaya, M.B.; Gajos, K.Z. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-Assisted Decision-Making. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–21.
- Buçinca, Z.; Lin, P.; Gajos, K.Z.; Glassman, E.L. Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; pp. 454–464.
- Lai, V.; Tan, C. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; pp. 29–38.
- Lai, V.; Liu, H.; Tan, C. “Why Is ‘Chicago’ Deceptive?” Towards Building Model-Driven Tutorials for Humans. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–13.
- Alqaraawi, A.; Schuessler, M.; Weiß, P.; Costanza, E.; Berthouze, N. Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020.
- Bansal, G.; Nushi, B.; Kamar, E.; Weld, D.S.; Lasecki, W.S.; Horvitz, E. Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff. Proc. AAAI Conf. Artif. Intell. 2019, 33, 2429–2437.
- Trocin, C.; Hovland, I.V.; Mikalef, P.; Dremel, C. How Artificial Intelligence Affords Digital Innovation: A Cross-Case Analysis of Scandinavian Companies. Technol. Forecast. Soc. Change 2021, 173, 121081.
- Haesevoets, T.; De Cremer, D.; Dierckx, K.; Van Hiel, A. Human-Machine Collaboration in Managerial Decision Making. Comput. Hum. Behav. 2021, 119, 106730.
- Edmonds, M.; Gao, F.; Liu, H.; Xie, X.; Qi, S.; Rothrock, B.; Zhu, Y.; Wu, Y.N.; Lu, H.; Zhu, S.-C. A Tale of Two Explanations: Enhancing Human Trust by Explaining Robot Behavior. Sci. Robot. 2019, 4, eaay4663.
- Yang, F.; Huang, Z.; Scholtz, J.; Arendt, D.L. How Do Visual Explanations Foster End Users’ Appropriate Trust in Machine Learning? In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; ACM: New York, NY, USA, 2020; pp. 189–201.
- Nourani, M.; Roy, C.; Block, J.E.; Honeycutt, D.R.; Rahman, T.; Ragan, E.; Gogate, V. Anchoring Bias Affects Mental Model Formation and User Reliance in Explainable AI Systems. In Proceedings of the 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, 14–17 April 2021; ACM: New York, NY, USA, 2021; pp. 340–350.
- Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103.
- Arshad, S.Z.; Zhou, J.; Bridon, C.; Chen, F.; Wang, Y. Investigating User Confidence for Uncertainty Presentation in Predictive Decision Making. In Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, Parkville, VIC, Australia, 7–10 December 2015; ACM: New York, NY, USA, 2015; pp. 352–360.
- Yu, K.; Berkovsky, S.; Taib, R.; Zhou, J.; Chen, F. Do I Trust My Machine Teammate?: An Investigation from Perception to Decision. In Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019; ACM: New York, NY, USA, 2019; pp. 460–468.
- Mercado, J.E.; Rupp, M.A.; Chen, J.Y.C.; Barnes, M.J.; Barber, D.; Procci, K. Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management. Hum. Factors J. Hum. Factors Ergon. Soc. 2016, 58, 401–415.
- Cheng, H.-F.; Wang, R.; Zhang, Z.; O’Connell, F.; Gray, T.; Harper, F.M.; Zhu, H. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, 4–9 May 2019; ACM: New York, NY, USA, 2019; pp. 1–12.
- Vinanzi, S.; Cangelosi, A.; Goerick, C. The Collaborative Mind: Intention Reading and Trust in Human-Robot Interaction. iScience 2021, 24, 102130.
- Sachan, S.; Yang, J.-B.; Xu, D.-L.; Benavides, D.E.; Li, Y. An Explainable AI Decision-Support-System to Automate Loan Underwriting. Expert Syst. Appl. 2020, 144, 113100.
- Gutzwiller, R.S.; Reeder, J. Dancing with Algorithms: Interaction Creates Greater Preference and Trust in Machine-Learned Behavior. Hum. Factors J. Hum. Factors Ergon. Soc. 2021, 63, 854–867.
- Patel, B.N.; Rosenberg, L.; Willcox, G.; Baltaxe, D.; Lyons, M.; Irvin, J.; Rajpurkar, P.; Amrhein, T.; Gupta, R.; Halabi, S.; et al. Human–Machine Partnership with Artificial Intelligence for Chest Radiograph Diagnosis. NPJ Digit. Med. 2019, 2, 111.
- Xu, A.; Dudek, G. OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; ACM: New York, NY, USA, 2015; pp. 221–228.
- Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-Dependent Algorithm Aversion. J. Mark. Res. 2019, 56, 809–825.
- Jessup, S.; Gibson, A.; Capiola, A.; Alarcon, G.; Borders, M. Investigating the Effect of Trust Manipulations on Affect over Time in Human-Human versus Human-Robot Interactions. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020.
- Mende, M.; Scott, M.L.; Van Doorn, J.; Grewal, D.; Shanks, I. Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses. J. Mark. Res. 2019, 56, 535–556.
- Fridin, M.; Belokopytov, M. Acceptance of Socially Assistive Humanoid Robot by Preschool and Elementary School Teachers. Comput. Hum. Behav. 2014, 33, 23–31.
- Seo, S.H.; Griffin, K.; Young, J.E.; Bunt, A.; Prentice, S.; Loureiro-Rodríguez, V. Investigating People’s Rapport Building and Hindering Behaviors When Working with a Collaborative Robot. Int. J. Soc. Robot. 2018, 10, 147–161.
- Desideri, L.; Ottaviani, C.; Malavasi, M.; Di Marzio, R.; Bonifacci, P. Emotional Processes in Human-Robot Interaction during Brief Cognitive Testing. Comput. Hum. Behav. 2019, 90, 331–342.
- Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the Shades of the Uncanny Valley: An Experimental Study of Human–Chatbot Interaction. Future Gener. Comput. Syst. 2019, 92, 539–548.
- Bansal, G.; Nushi, B.; Kamar, E.; Lasecki, W.; Weld, D.S.; Horvitz, E. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP-19), Stevenson, WA, USA, 28–30 October 2019; p. 10.
- Zhang, R.; McNeese, N.J.; Freeman, G.; Musick, G. “An Ideal Human”: Expectations of AI Teammates in Human-AI Teaming. Proc. ACM Hum.-Comput. Interact. 2021, 4, 246.
- Lawrence, L.; Echeverria, V.; Yang, K.; Aleven, V.; Rummel, N. How Teachers Conceptualise Shared Control with an AI Co-orchestration Tool: A Multiyear Teacher-centred Design Process. Br. J. Educ. Technol. 2023, bjet.13372.
- Chiang, C.-W.; Lu, Z.; Li, Z.; Yin, M. Are Two Heads Better Than One in AI-Assisted Decision Making? Comparing the Behavior and Performance of Groups and Individuals in Human-AI Collaborative Recidivism Risk Assessment. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; ACM: New York, NY, USA, 2023; pp. 1–18.
- Holstein, K.; De-Arteaga, M.; Tumati, L.; Cheng, Y. Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables. Proc. ACM Hum.-Comput. Interact. 2023, 7, 152.
- Tsai, C.-H.; You, Y.; Gui, X.; Kou, Y.; Carroll, J.M. Exploring and Promoting Diagnostic Transparency and Explainability in Online Symptom Checkers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–17.
- Levy, A.; Agrawal, M.; Satyanarayan, A.; Sontag, D. Assessing the Impact of Automated Suggestions on Decision Making: Domain Experts Mediate Model Errors but Take Less Initiative. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–13.
- Vrontis, D.; Christofi, M.; Pereira, V.; Tarba, S.; Makrides, A.; Trichina, E. Artificial Intelligence, Robotics, Advanced Technologies and Human Resource Management: A Systematic Review. Int. J. Hum. Resour. Manag. 2022, 33, 1237–1266.
- Prentice, C.; Dominique Lopes, S.; Wang, X. The Impact of Artificial Intelligence and Employee Service Quality on Customer Satisfaction and Loyalty. J. Hosp. Mark. Manag. 2020, 29, 739–756.
- Pournader, M.; Ghaderi, H.; Hassanzadegan, A.; Fahimnia, B. Artificial Intelligence Applications in Supply Chain Management. Int. J. Prod. Econ. 2021, 241, 108250.
- Wilson, H.J.; Daugherty, P.; Shukla, P. How One Clothing Company Blends AI and Human Expertise. Harv. Bus. Rev. 2016.
- Marr, B. Stitch Fix: The Amazing Use Case of Using Artificial Intelligence in Fashion Retail. Forbes 2018, 25.
- Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H. Deep Learning for Identifying Metastatic Breast Cancer. arXiv 2016, arXiv:1606.05718.
- Arnold, T.; Kasenberg, D.; Scheutz, M. Explaining in Time: Meeting Interactive Standards of Explanation for Robotic Systems. ACM Trans. Hum.-Robot Interact. 2021, 10, 25.
- Lim, B.Y.; Dey, A.K.; Avrahami, D. Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; ACM: New York, NY, USA, 2009; pp. 2119–2128.
- Puranam, P. Human–AI Collaborative Decision-Making as an Organization Design Problem. J. Organ. Des. 2021, 10, 75–80.
- Parker, S.K.; Grote, G. Automation, Algorithms, and Beyond: Why Work Design Matters More Than Ever in a Digital World. Appl. Psychol. 2022, 71, 1171–1204.
- Roth, E.M.; Sushereba, C.; Militello, L.G.; Diiulio, J.; Ernst, K. Function Allocation Considerations in the Era of Human Autonomy Teaming. J. Cogn. Eng. Decis. Mak. 2019, 13, 199–220.
- Van Maanen, P.P.; van Dongen, K. Towards Task Allocation Decision Support by Means of Cognitive Modeling of Trust. In Proceedings of the 17th Belgian-Netherlands Artificial Intelligence Conference, Brussels, Belgium, 17–18 October 2005; pp. 399–400.
- Flemisch, F.; Heesen, M.; Hesse, T.; Kelsch, J.; Schieben, A.; Beller, J. Towards a Dynamic Balance between Humans and Automation: Authority, Ability, Responsibility and Control in Shared and Cooperative Control Situations. Cogn. Technol. Work 2012, 14, 3–18.
- Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56.
- The Precise4Q Consortium; Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310.
- Bier, V. Implications of the Research on Expert Overconfidence and Dependence. Reliab. Eng. Syst. Saf. 2004, 85, 321–329.
- Charness, G.; Karni, E.; Levin, D. Individual and Group Decision Making under Risk: An Experimental Study of Bayesian Updating and Violations of First-Order Stochastic Dominance. J. Risk Uncertain. 2007, 35, 129–148.
- Tong, J.; Feiler, D. A Behavioral Model of Forecasting: Naive Statistics on Mental Samples. Manag. Sci. 2017, 63, 3609–3627.
- Blumenthal-Barby, J.S.; Krieger, H. Cognitive Biases and Heuristics in Medical Decision Making: A Critical Review Using a Systematic Search Strategy. Med. Decis. Mak. 2015, 35, 539–557.
- Zinn, J.O. Heading into the Unknown: Everyday Strategies for Managing Risk and Uncertainty. Health Risk Soc. 2008, 10, 439–450.
- Bayati, M.; Braverman, M.; Gillam, M.; Mack, K.M.; Ruiz, G.; Smith, M.S.; Horvitz, E. Data-Driven Decisions for Reducing Readmissions for Heart Failure: General Methodology and Case Study. PLoS ONE 2014, 9, e109264.
- Pizoń, J.; Gola, A. Human–Machine Relationship—Perspective and Future Roadmap for Industry 5.0 Solutions. Machines 2023, 11, 203.
- Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371.
- Trocin, C.; Mikalef, P.; Papamitsiou, Z.; Conboy, K. Responsible AI for Digital Health: A Synthesis and a Research Agenda. Inf. Syst. Front. 2021, 1–19.
- McShane, M.; Nirenburg, S.; Jarrell, B. Modeling Decision-Making Biases. Biol. Inspired Cogn. Archit. 2013, 3, 39–50.
- Parry, K.; Cohen, M.; Bhattacharya, S. Rise of the Machines: A Critical Consideration of Automated Leadership Decision Making in Organizations. Group Organ. Manag. 2016, 41, 571–594.
- Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80.
- Rheu, M.; Shin, J.Y.; Peng, W.; Huh-Yoo, J. Systematic Review: Trust-Building Factors and Implications for Conversational Agent Design. Int. J. Hum.–Comput. Interact. 2021, 37, 81–96.
- Heerink, M. Assessing Acceptance of Assistive Social Robots by Aging Adults. Ph.D. Thesis, Universiteit van Amsterdam, Amsterdam, The Netherlands, 2010.
- Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave New World: Service Robots in the Frontline. J. Serv. Manag. 2018, 29, 907–931.
- Davenport, T.; Guha, A.; Grewal, D.; Bressgott, T. How Artificial Intelligence Will Change the Future of Marketing. J. Acad. Mark. Sci. 2020, 48, 24–42.
- Mikalef, P.; Gupta, M. Artificial Intelligence Capability: Conceptualization, Measurement Calibration, and Empirical Study on Its Impact on Organizational Creativity and Firm Performance. Inf. Manag. 2021, 58, 103434.
- Van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and Customers’ Service Experiences. J. Serv. Res. 2017, 20, 43–58.
- Libert, K.; Mosconi, E.; Cadieux, N. Human-Machine Interaction and Human Resource Management Perspective for Collaborative Robotics Implementation and Adoption. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020; Volume 3, pp. 533–542.
- Piçarra, N.; Giger, J.-C. Predicting Intention to Work with Social Robots at Anticipation Stage: Assessing the Role of Behavioral Desire and Anticipated Emotions. Comput. Hum. Behav. 2018, 86, 129–146.
Author | Decision Tasks | Types of AI and AI Systems | Organizational Outcomes |
---|---|---|---|
[1] | hiring and firing employees | algorithm-enabled software system | humans’ acceptance of machine participation |
[7] | service encounter | intelligent digital voice assistant | users’ motivations to adopt AI |
[15] | human resource management | algorithm-based AI system | human reaction to an AI supervisor |
[18] | welcoming visitors and employees and offering directions to specific locations on a campus | humanoid robots | trust, intention to use, and enjoyment |
[19] | clinical decision-making on rehabilitation assessment | AI-based decision-support system | usefulness and attitudes toward the system |
[20] | socially shared regulation (SSRL) in learning | intelligent agent | learning regulation improvement |
[26] | Scout Exploration Game | explainable artificial intelligence (XAI) system | aligned understanding between humans and AI |
[27] | reconnaissance missions to gather intelligence in a foreign town | algorithm-based robots | transparency, trust, mission success, and team performance |
[28] | table-clearing task | autonomous-system-based robot assistants | human–robot team performance and trust evolution |
[29] | real-time human–robot cooking task | algorithm-based XAI | collaboration performance and user perception of the robot |
[30] | make a cup of coffee or clean the bathroom | a synthetic robot maid | more explainable robot behavior, team performance |
[31] | a human–robot team preparing meals in a kitchen | autonomous-system-based robot assistants | human–robot collaboration performance |
[32] | turn this plate of food into a low-carb meal | recommender system (XAI) | team performance |
[33] | nutrition-related decision-making task | recommender system (XAI) | objective performance, trust, preference, mental demand, and understanding |
[34] | deception-detection task | machine learning models | human performance and human agency |
[35] | deception-detection task | machine learning models | human performance |
[36] | multi-label image classification | machine learning algorithms | outcome prediction accuracy and confidence |
[37] | three high-stakes classification tasks | machine learning algorithm | performance/compatibility tradeoff |
[38] | recruitment and staffing, e-commerce, banking | AI-based platform/assistant | degree of fairness, transparent feedback, less-biased decisions |
[39] | managerial decision-making such as hiring and firing employees | algorithm-based AI system | acceptance of the decisions |
[40] | open medicine bottles | autonomous-system-based robot | human trust in the robot |
[41] | naming and distinguishing species | machine learning classifier | end users’ appropriate trust |
[42] | cooking-related tasks in a kitchen | explainable artificial intelligence (XAI) | mental model, task performance, and reliance on the system |
[43] | visual estimation task, song forecasting task, romantic attraction forecasting task | algorithm-based AI system | algorithm appreciation (preference between algorithmic and human judgment) |
[44] | water pipe failure prediction | machine-learning (ML)-based decision-support-systems | user confidence in decision-making |
[45] | quality control in a drinking-glass-making factory | automated decision-support-systems | human trust, system performance, human perception and decisions |
[46] | multi-UxV (unmanned vehicle) planning task | intelligent agent | performance, trust, and perceived usability |
[47] | student admission | algorithm-based AI system | trust in algorithmic decisions |
[48] | collaborative game | humanoid robot | intention reading and trusting capabilities |
[49] | automating the loan underwriting process | explainable AI decision-support system | trade-off between prediction accuracy and explainability |
[50] | control of unmanned vehicle | machine-learning-based automated agents | trust in automation, human–systems integration |
[51] | diagnosis of pneumonia on chest radiographs | deep-learning model architectures | diagnosis performance |
[52] | visual navigation | autonomous robot | human–robot trust and efficiency |
[53] | 26 tasks, including predicting stock market outcomes, predicting the weather, analyzing data, and giving directions | algorithms | trust in algorithms |
[54] | computer game | humanoid robot | trust and distrust over time |
[55] | restaurant and food service provision | humanoid robot | user discomfort, compensatory consumption |
[56] | interact with preschool-aged children | humanoid robot | acceptance of socially assistive robots (SARs) by preschool and primary school teachers |
[57] | inspection task to sort laundered squares of cloth | humanoid robot | people’s rapport-building and rapport-hindering behaviors |
[58] | cognitive assessment | humanoid robot | cognitive performances and workload |
[59] | interact on the academy enrolment process | text chatbot | individuals’ psychophysiological indices |
[60] | line scenario-like platform | machine-learning-based platform | team performance |
[61] | multiplayer games | AI algorithms | human perceptions and expectations of AI teammates |
[62] | individual and collaborative learning | AI-based tutoring systems | control, trust, responsibility, efficiency, and accuracy |
[63] | recidivism risk assessment | algorithm-based AI system | accuracy, reliance on AI, understanding of AI, decision fairness, willingness to take accountability |
[64] | AI-assisted house-price prediction | algorithm-based AI model | people’s integration of the model outputs with information, prediction accuracy |
[65] | symptom diagnosis | algorithm-based intelligent online symptom checkers | diagnostic transparency and explainability |
[66] | clinical concept identification and classification | natural language-processing-based clinical annotation system | accuracy and efficiency |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bao, Y.; Gong, W.; Yang, K. A Literature Review of Human–AI Synergy in Decision Making: From the Perspective of Affordance Actualization Theory. Systems 2023, 11, 442. https://doi.org/10.3390/systems11090442