Review

A Literature Review of Human–AI Synergy in Decision Making: From the Perspective of Affordance Actualization Theory

1 School of Economics and Management, China University of Petroleum, Beijing 102249, China
2 School of Government, Beijing Normal University, Beijing 100875, China
3 School of International Studies, University of International Business and Economics, Beijing 100029, China
* Author to whom correspondence should be addressed.
Systems 2023, 11(9), 442; https://doi.org/10.3390/systems11090442
Submission received: 29 July 2023 / Revised: 19 August 2023 / Accepted: 22 August 2023 / Published: 25 August 2023
(This article belongs to the Special Issue Human–AI Teaming: Synergy, Decision-Making and Interdependency)

Abstract

The emergence of artificial-intelligence (AI)-powered information technology, such as deep learning and natural language processing, enables humans to shift their working paradigm from human-only to human–AI synergy, especially in the decision-making process. Since AI is multidisciplinary by nature and our understanding of human–AI synergy in decision-making is fragmented, we conducted a literature review to systematically characterize the phenomenon. Adopting affordance actualization theory, we developed a framework to organize and understand the relationship between AI affordances, the human–AI synergy process, and the outcomes of human–AI synergy. Three themes emerged from the review: the identification of AI affordances in decision-making, human–AI synergy patterns regarding different decision tasks, and outcomes of human–AI synergy in decision-making. For each theme, we provided evidence on the existing research gaps and proposed future research directions. Our findings provide a holistic framework for understanding the human–AI synergy phenomenon in decision-making. This work also offers theoretical contributions and research directions for researchers studying human–AI synergy in decision-making.

1. Introduction

Over the last few decades, artificial intelligence (AI) has penetrated almost all aspects of human life, resulting in a growing trend of human–AI synergy. AI is not merely a set of applications or tools. Instead, it can be broadly defined as “intelligent systems with the ability to think and learn” [1]. It embodies a wide variety of tools, techniques, and algorithms, including robotics, autonomous vehicles, virtual assistants, neural networks, speech/pattern/facial recognition, genetic algorithms, natural language processing, deep learning, and machine learning. AI systems have been widely used to augment or assist humans in the decision-making process by providing recommendations or predictions, which decision makers can choose to either accept or dismiss. In detail, AI has been introduced to various domains such as healthcare, finance, human resource management, and criminal justice, and is increasingly being incorporated into organizational management. Some well-known examples of AI applications include IBM’s Watson and Google DeepMind’s AlphaGo. Taking Watson as an example, natural language processing empowers Watson to understand human-composed sentences and to assign multiple meanings to concepts and terms. Machine learning affords Watson the ability to learn from previous experiences or interactions, and to formulate solutions or recommendations based on past experience [2]. In the medical field, Watson can be leveraged to assist doctors in discerning patterns of cancers using machine learning techniques. Moreover, AI technologies can provide a variety of other capabilities that further enhance decision-making, including model performance metrics, as well as explanations and uncertainty estimates for predictions [3]. As such, we can expect humans and AI-enabled machines to play a synergistic role in the decision-making process in the future.
There has been an emergence of studies on human–AI synergy, and the number of papers on AI adoption in decision-making has grown dramatically in the past five years [4]. Scholars have delved into several aspects, encompassing the roles of humans and AI, users’ perceptions, and the design and configuration of AI systems [5,6,7,8]. The existing literature provides valuable insights into the multifaceted aspects of human–AI synergy. However, this research is spread across multiple research communities and a wide variety of disciplines, resulting in a lack of consensus on the key findings. As the prevalence of AI in decision-making calls for a deeper and more systematic understanding of human–AI decision-making, we conducted an interdisciplinary literature review on human–AI synergy in decision-making and systematically characterized its current state. In the present study, we focus on the human–AI synergy process in general decision-making contexts and decision tasks across many fields, such as e-commerce, marketing, daily life, and human resource management.
The remainder of this paper is structured as follows. The paper commences with an introduction of the overarching framework used to guide this literature review. This is followed by a discussion of the scope of the surveyed papers and the methodology employed to screen and select relevant papers for further study. Subsequently, the paper presents and summarizes the findings, identifying three key themes that emerged from the review process. The study then concludes with a research agenda specific to each theme, accompanied by suggested research opportunities for future investigation into the synergy between humans and AI in decision-making. This review makes several significant contributions to the field. It identifies associated technologies and AI affordances within various decision-making sectors. Moreover, it offers a systematic framework for the adoption of AI in decision-making, along with the identification of several research themes from existing studies. Additionally, it provides a holistic understanding of the subject matter and proposes a research agenda aimed at addressing the identified research gaps in future studies.

2. Theoretical Background

2.1. Overarching Framework: Affordance Actualization Theory

We developed an overarching theoretical framework and organized the extant papers by adopting Strong et al.’s affordance actualization theory [9]. Existing research on information systems (IS) has attached much importance to IT affordance, which focuses on the property of the relationship between an actor and an object [10]. This relational property is essential in information technology (IT) implementation, especially in the case of AI implementation in the organizational decision-making process, as AI systems and AI artifacts are inherently complex and their influence must be explicitly addressed. IT affordance refers not simply to physical characteristics, but also to the action possibilities enabled by IT functionalities. The possibilities and potential embedded in these affordances are realized in the actualization process. Thus, affordance actualization is defined as “actions taken by actors as they take advantage of one or more affordances through their use of the technology to achieve immediate concrete outcomes in support of organizational goals” [9] (p. 70). The actualization process comprises actions (e.g., use of technology) and the outcomes of these actions. The relationship between actions and outcomes is iterative, with the outcomes providing feedback that shapes subsequent actions. The immediate outcomes also act as a mediator between users’ actions and organizations’ ultimate goals. Moreover, external actors may also influence the actualization process.
The affordance actualization theory has become popular in the IS literature because it offers a better understanding of how IT affords the reciprocal actions that contribute to achieving final goals [11,12]. Beyond the IS field, this theory has also been adopted in other disciplines, including sociology, computer science, and human–computer interaction [12,13]. We adopted the affordance actualization framework to organize the research on human–AI synergy in organizational decision-making for several reasons. For one thing, the affordance actualization lens has been successfully employed in studying IT use and its impacts [14], which provides a solid foundation for investigating other forms of technology use. For another, the theoretical lens includes the impacts of IT functionalities on both usage and outcomes, which caters to the need to understand the influence of AI systems on both usage and organizational decision-making outcomes in this research context. The overarching framework for affordance actualization is shown in Figure 1.

2.2. Human–AI Synergy

The literature on human–AI synergy can be generally divided into the following three streams.
The first stream focuses on the roles of humans and AI and on task division during collaboration. In collaborative decision-making between humans and machines, AI can assume different roles, such as a facilitator, reviewer, expert advisor, or guide [5,15]. As AI automation has improved, agents’ roles have evolved from auxiliary tools to active participants in team decision-making processes [16], leading to a wider array of team collaboration formats. Consequently, scholars have explored various methods of dividing labor and allocating tasks between humans and AI in different collaborative scenarios. For instance, some have considered factors like the respective capabilities of humans and machines, the collaborative environment, the status of team members, and the autonomy and openness of collaborative tasks to appropriately assign responsibilities [17].
The second stream of the literature focuses on the impact of AI systems on users’ experience and perception. Kumar et al. [6] identified the role of AI in practice and investigated the impacts of AI on customers’ experiences. Other scholars also focused on studying users’ perceptions and acceptance of AI technologies in other fields, including service marketing, learning design, and clinical decision-making [7,18,19,20]. However, due to variations in the design characteristics of AI agents and individual differences, there are discrepancies in the level of understanding, acceptance, cognitive processing ability, and trust towards AI or AI-generated decisions during human–machine collaboration [21]. These factors subsequently influence the overall outcomes of human–machine collaboration.
The third stream of research concentrates on enhancing the design, configuration, and governance of AI systems to ensure their design quality and trustworthiness, thus facilitating better human–AI interactions. Some studies have delved into optimizing the collaboration effect by discussing the design aspects of human–machine collaboration systems, including AI’s appearance, presentation modes, technology optimization, physical interfaces for human–machine collaboration, and mental interfaces [8]. Moreover, to help users improve their understanding of the inherent risks of AI systems and to mitigate these risks, organizations and scholars have created several tools. For example, the European Commission created the Assessment List for Trustworthy AI (ALTAI) [22,23], which provides a self-assessment checklist and a set of steps that organizations are encouraged to complete; each question presents multiple choices to verify the implementation of the guidelines. Scholars have also proposed general processes with concrete steps that can be applied to assess whether an AI system is trustworthy [24].

3. Materials and Methods

In this section, we identify the scope of this literature review in detail and describe the screening criteria for the papers. Our review specifically focused on empirical studies of human–AI synergy in organizational decision-making, where the goal was to understand, evaluate, and improve the experience of human–AI synergy in decision-making tasks. We therefore specified the scope of our study as follows. First, a selected paper should report an evaluative empirical study of human–AI synergy; we thus excluded studies that merely focus on the design or configuration of the relevant AI systems, as these tend to be predominantly quantitative studies. Second, a paper should target a specific decision-making task; we thus excluded studies involving tasks with other purposes, such as gaming or debugging.
Regarding the search strategy, we followed the three-stage approach of Webster and Watson [25], as presented in Figure 2. Initially, we selected four digital libraries: Web of Science, Scopus, Business Source Complete, and ProQuest. These databases encompass academic studies in the fields of information systems, technology, science, medicine, social sciences, humanities, business, and interdisciplinary research on behavioral and social sciences. Our search terms were divided into two categories. The first category included terms related to human–AI synergy and its associated terms, such as “human-AI/robot”, “human-AI/robot synergy”, “human-AI/robot interaction”, “human-AI/robot teaming”, and “human-AI/robot collaboration”. The second category consisted of terms related to the process, effects, or impacts of human–AI synergy, such as “decision-making”, “productivity”, “efficiency”, “effectiveness”, and “performance”.
Inclusion and exclusion criteria were also established to eliminate studies that do not address the research questions. Regarding the initial inclusion criteria, titles and abstracts in English published in peer-reviewed journals and conferences from 2013 to 2023 were searched. The literature was searched from March to August 2023, during which we also included newly published articles. We focused on research published in the last 10 years due to the substantial growth of the human–AI decision-making field during this period. To maintain a comprehensive scope, we did not restrict the concept of AI, encompassing autonomous agents, algorithms, robots, and automated systems in the review process. The initial search identified a total of 2161 journal articles and conference papers across all disciplines. A total of 998 papers were removed because they were duplicates or their full texts were inaccessible. Each remaining paper’s title and abstract were meticulously reviewed, with consultation of the full text when necessary, to determine eligibility for inclusion. Articles were excluded if they were merely technical papers focusing on design or engineering issues related to the technologies of interest. We further applied additional exclusion criteria, such as excluding non-empirical articles and those focusing solely on AI or measurement development. Following this filtering process, we identified 40 papers that qualified for the final review and coding. Additionally, we conducted a forward and backward search, which yielded a further 7 articles meeting our inclusion criteria. After a final iterative evaluation using the specified inclusion and exclusion criteria, a total of 47 papers were selected for the final review and coding. The specific review protocol is presented in Figure 2.
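To make the screening logic above concrete, the following minimal sketch shows how the deduplication and keyword-screening steps could be scripted. It is an illustrative assumption, not the protocol actually used in this review: the record fields and keyword lists are hypothetical, and eligibility decisions in practice were made by manually reading titles, abstracts, and full texts.

```python
# Illustrative sketch of deduplication and keyword screening for a
# literature review. Field names and keyword lists are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str
    doi: str

SYNERGY_TERMS = ("human-ai", "human-robot", "human-ai teaming", "human-ai collaboration")
OUTCOME_TERMS = ("decision-making", "performance", "efficiency", "effectiveness")

def deduplicate(records: list[Record]) -> list[Record]:
    """Drop records sharing a DOI, keeping the first occurrence."""
    seen, unique = set(), []
    for r in records:
        if r.doi not in seen:
            seen.add(r.doi)
            unique.append(r)
    return unique

def passes_keyword_screen(r: Record) -> bool:
    """Keep records whose title/abstract mention both term categories."""
    text = f"{r.title} {r.abstract}".lower()
    return any(t in text for t in SYNERGY_TERMS) and any(t in text for t in OUTCOME_TERMS)

def screen(records: list[Record]) -> list[Record]:
    """Apply deduplication, then the keyword screen."""
    return [r for r in deduplicate(records) if passes_keyword_screen(r)]
```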
In our study, we employed a rigorous coding process based on our overarching theoretical framework. The main research findings of each source were used as the primary principle for in-depth coding. Papers that provided findings related to the research themes were coded. By thoroughly examining these papers, we were able to uncover valuable information regarding human–AI synergy in decision-making. We then mapped the papers from all disciplines to our affordance actualization framework, which enabled us to develop a holistic understanding of what has been studied. The profile of the studies in our sample pool is presented in Table 1. This table provides an overview of the decision tasks, types of AI, AI systems, and organizational outcomes discussed in these studies.

4. Findings

In this section, we present the findings based on our coding and review of the papers. We synthesized the 47 papers based on the affordance actualization framework. Through iterative analysis, we identified three prominent themes that significantly contribute to our understanding of human–AI synergy in decision-making contexts. These themes are: AI affordances in decision-making, patterns of human–AI synergy across different decision tasks, and outcomes of human–AI synergy in collaborative decision-making. The overall results of this high-level synthesis of human–AI synergy in decision-making are visually depicted in Figure 3.

4.1. Theme 1: Identification of AI Affordances in Decision-Making

Theme 1 describes and categorizes the AI affordances enabled during the decision-making process, which are embedded in the AI systems covered in this review. Specifically, the AI techniques used in decision-making may encompass a range of methods, such as natural language processing, artificial neural networks, regression-based models, genetic algorithms, fuzzy logic, cluster analysis, and humanoid robots. Although the literature acknowledges the presence of these functionalities and techniques in AI, the identification of affordances and functionalities within these studies is scarce. Given the ubiquity of AI in human life, it is worth untangling the functionalities and affordances of AI systems. By understanding AI’s functionalities, we can further identify key affordances and how they are delivered in the human–AI decision-making process. Through our coding of the literature, we identified four general categories of AI affordances in the decision-making process.

4.1.1. Automated Information Collecting and Updating Affordance

One of the significant affordances of AI highlighted in the literature we surveyed is its capacity for automated information collection and updating across various fields. AI devices offer novel approaches to collecting data and information via online platforms [38]. For example, on social media platforms, AI systems automatically capture news information on the network, classify and process the information, and enable accurate news releases. Similarly, in the e-commerce context, AI systems automatically capture commodity or customer information and classify products or customers, facilitating accurate commodity display and promotion. The automated collection of heterogeneous data can improve the accuracy of data analysis. In the clinical diagnosis context, symptom checkers collect the patient’s information and perform a diagnosis accordingly [65].
To illustrate further, in the context of job applications, machine learning models and algorithms provide companies with the opportunity to optimize the pool of job applicants [67]. Additionally, through collaborative filtering, organizations can collect applicants’ online behavioral information, including clicks and dwell time on specific job listings. These data also enable organizations to understand applicants’ preferences, which, in turn, aids in offering tailored job listing recommendations.
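As an illustration of the collaborative filtering mechanism described above, the sketch below implements a minimal user-based recommender over implicit feedback. The interaction matrix, similarity measure, and data are hypothetical simplifications; production job-recommendation systems are considerably more sophisticated.

```python
# Minimal user-based collaborative filtering over implicit feedback
# (clicks/dwell time on job listings). All data are hypothetical.
import numpy as np

# Rows: applicants; columns: job listings; entries: interaction strength
# (e.g., clicks weighted by dwell time).
interactions = np.array([
    [5.0, 0.0, 3.0, 0.0],
    [4.0, 0.0, 0.0, 1.0],
    [0.0, 2.0, 4.0, 0.0],
])

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user: int, top_k: int = 2) -> list[int]:
    """Score unseen listings by similarity-weighted interactions of other users."""
    sims = np.array([cosine_sim(interactions[user], interactions[u])
                     for u in range(interactions.shape[0])])
    sims[user] = 0.0                      # exclude the user themself
    scores = sims @ interactions          # aggregate other users' preferences
    scores[interactions[user] > 0] = -np.inf  # hide already-seen listings
    return list(np.argsort(scores)[::-1][:top_k])

print(recommend(user=0))  # listing indices to suggest to applicant 0
```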

4.1.2. Information Processing and Analyzing Affordance

After collecting data from multiple sources, AI enables organizations to develop new approaches for information processing and analysis [38]. Taking human resource management as an example, natural language processing (NLP) algorithms can help with the interpretation of text in different formats and decrease staff’s work stress. NLP-powered systems can efficiently analyze resumes and extract relevant information from job applications, freeing up HR professionals to focus on more strategic tasks [67]. Also, after training the AI with specific competence-related questions, organizations can use robots to conduct automatic interviews with job candidates. The robots can also record, transcribe, and analyze the candidates’ responses during the interviews. AI-powered interviewers ensure a standardized and unbiased evaluation of candidates, leading to a more efficient and fair hiring process. In an online shopping context, AI helps with the analysis of customer data, such as body measurements, customer preferences and feedback, and style trends. This data-driven approach allows organizations to personalize product recommendations, offer tailored styling suggestions, and enhance overall customer satisfaction [68], creating a more engaging shopping journey and boosting customer loyalty and retention. As AI continues to advance, organizations can harness it to drive innovation, streamline processes, and deliver more personalized and efficient services to their users. Machine learning is a typical representative of this affordance: through the utilization of information acquired from prior experience, machine-learning-based systems can train on data and effectively address analogous problems [69].
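To give a flavor of the resume-screening capability discussed above, the following sketch extracts a few structured fields from resume text using simple rules. The patterns and fields are hypothetical stand-ins for the NLP pipelines used in practice, which rely on trained language models rather than regular expressions.

```python
# Illustrative rule-based extraction of structured fields from resume
# text. Patterns and fields are hypothetical simplifications.
import re

RESUME = """Jane Doe
Email: jane.doe@example.com
Skills: Python, SQL, machine learning
Experience: 6 years in data analytics"""

def extract_fields(text: str) -> dict:
    """Pull contact, skills, and experience into structured fields."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    skills = re.search(r"Skills:\s*(.+)", text)
    years = re.search(r"(\d+)\s+years?", text)
    return {
        "email": email.group(0) if email else None,
        "skills": [s.strip() for s in skills.group(1).split(",")] if skills else [],
        "years_experience": int(years.group(1)) if years else None,
    }

print(extract_fields(RESUME))
```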

4.1.3. Predicting/Forecasting and Decision-Making-Assistance Affordance

Beyond the information collecting and analyzing affordances, algorithms and historical information give AI the ability to assist humans in predicting and making decisions. AI applications in decision-making can be broadly categorized into expert systems, optimization techniques, and other methods centered around simulation and modeling [69]. This affordance is particularly helpful for tasks that require predicting and forecasting functions, such as in the supply chain industry, medical diagnosis, product recommendation, house price prediction, and so on [64,69,70]. For instance, in an online shopping context, by combining human expertise with the efficiency and insight of AI, human stylists can provide customers with suitable recommendations [71]. Another remarkable example of successful human–AI synergy is found in cancer detection: AI can assist humans with the analysis of lymph-node-cell images, outperforming both AI-only and human-only decisions [72]. Unlike human decisions, which involve intuition or subjective reasoning, AI enables organizations to develop unbiased approaches to data analysis with objective reasoning, thus yielding less-biased decisions.
Furthermore, AI can handle hundreds of thousands of transactions or datasets per second and process information at speeds far beyond human capacity. This proficiency in data analysis allows AI to identify patterns, trends, and correlations that may be challenging for humans to detect [19]. In industries like supply chain management, AI’s predictive abilities help organizations optimize inventory levels, anticipate demand fluctuations, and make informed decisions, thereby leading to cost savings, enhanced operational efficiency, and improved prediction accuracy [64,69].
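As a conceptual illustration of this forecasting affordance, the sketch below applies simple exponential smoothing to a hypothetical demand series. Real supply-chain systems use far richer models and data; the smoothing constant and sales history here are invented for demonstration.

```python
# Simple exponential smoothing as a stand-in for demand forecasting.
# Treat this as a conceptual sketch with made-up data.
def forecast_demand(history: list[float], alpha: float = 0.3) -> float:
    """One-step-ahead forecast: blend each observation into a running
    level, weighting recent observations more heavily."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

weekly_units = [120, 135, 128, 150, 162, 158]  # hypothetical sales history
print(f"Next-week forecast: {forecast_demand(weekly_units):.1f} units")
```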

4.1.4. Explanation-Providing Affordance

Interpretable AI or explainable AI (XAI) refers to efficient and transparent interactions between AI and multiple stakeholders, achieved by providing information about how decisions and generated content are produced. The effectiveness of human–AI synergy depends not only on the efficiency of the human team members, but also on how well the human members interact with their robotic “colleagues” [48]. Especially when AI acts as a “leader”, the explainability of its decisions directly determines the trust and decision-making of human team members [49]. The level of cognitive alignment between human team members and robot “colleagues” plays a pivotal role in determining a consistent understanding of team values, goals, and tasks, as well as the overall value equivalence between humans and machines [26]. In other words, effective collaboration hinges on the ability of intelligent machines and human team members to comprehend each other’s value needs and to communicate their own solutions and needs with clarity and rationality [30]; when both parties can do so, the collaboration becomes more seamless and productive, improving collaboration performance and user perception of AI systems [29].
The types and criteria of explanations vary greatly [36,73]. Regarding the types of explanations, studies have proposed that both inductive and deductive reasoning can provide humans with general rules or conclusions for a decision [33]. Other studies have designed different explanations (e.g., confidence-level and observation explanations; symbolic, haptic, and text explanations; example-based and feature-based explanations) to investigate the impact of explanation-providing affordances on team performance [27,32,34,40]. Lim et al. also tested users’ understanding of AI systems and decisions based on different types of explanations, including What, Why, Why Not, What If, and How To [74]. The above studies have verified the effectiveness of providing explanations for AI systems. However, other scholars have proposed that too much communication between humans and AI may increase cognitive load, thus degrading performance. To address this challenge, existing research has developed a framework for deciding the necessity, timing, and proper content of communication during human–AI synergy [31].
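To illustrate how confidence-level and feature-based explanations (two of the styles mentioned above) might be surfaced to a user, the sketch below derives both from a toy linear scorer. The feature names, weights, and the use of a sigmoid as a confidence proxy are all assumptions for demonstration, not a method from the cited studies.

```python
# Toy contrast of two explanation styles: confidence-level vs.
# feature-based. Feature names and weights are invented.
import math

WEIGHTS = {"income": 0.8, "debt": -1.2, "tenure": 0.5}  # hypothetical model

def predict_with_explanations(features: dict) -> dict:
    score = sum(WEIGHTS[k] * v for k, v in features.items())
    confidence = 1 / (1 + math.exp(-score))  # sigmoid as a confidence proxy
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    top_feature = max(contributions, key=lambda k: abs(contributions[k]))
    return {
        "decision": "approve" if score > 0 else "reject",
        "confidence_explanation": f"The model is {confidence:.0%} confident.",
        "feature_explanation": f"'{top_feature}' contributed most to this decision.",
    }

print(predict_with_explanations({"income": 1.2, "debt": 0.4, "tenure": 0.9}))
```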

4.2. Theme 2: Human–AI Synergy Patterns Regarding Different Decision Tasks

In the human–AI synergy decision-making process, the collaboration between humans and algorithms can be considered an organization, that is, a multi-agent, goal-oriented system [75]. The goal of the organization is to produce a final decision, and its responsibilities include task division, task allocation, and the integration of effort. According to some AI pioneers, the combination of computers and humans can outperform either of them alone. In the organizational decision-making process, humans and AI also complement each other [2]. Depending on the prevailing state of technology, the relationship between humans and AI also differs during the human–AI synergy process, in which humans and AI can take on different roles in a multi-agent organization [34].
Studies on human–AI synergy in decision-making span a wide range of fields, including human–AI team collaboration [16,76], human–AI relationships, task allocation between humans and AI [77,78], human trust in AI [21,50], shared mental models [79], and practice in the human–AI decision-making process. The synergy between humans and AI has also been demonstrated to bring benefits in many areas, such as medical diagnoses, automated vehicles, deception detection, bail decisions, learning technology design, and so on [20,62,65]. More specifically, the decision-making process is a complex cognitive process and is always susceptible to uncertainty [44]. Due to AI’s automated information collecting and updating affordance, the machine can assess uncertainties and convey key messages to the human, thereby saving human decision-makers’ cognitive resources [8]. However, because of the inherent complexity of different tasks, full automation is undesirable for some of them. Therefore, when considering different decision tasks, we can examine human–AI synergy patterns from three main perspectives: AI-centered patterns, human-centered patterns, and human–AI synergy-centered patterns. Each of these perspectives offers valuable insights into how humans and AI can collaborate effectively to achieve optimal decision-making outcomes.

4.2.1. AI-Centered Patterns

Considering the different levels of uncertainty, variability, and complexity of the decision tasks that humans and AI address, we can characterize the research opportunities for human–AI synergy in decision-making by the level of task uncertainty involved [2]. When the uncertainty of a task is low, AI can be trained to follow the decision rules of human decision-makers given sufficient data, and it acts in a supporting role [47]. In this case, studies are mainly algorithm-centered or design-centered, focusing on the proper utilization of the machines’ computing power [51]. The algorithms involved include natural language processing, machine learning, inverse reinforcement learning, artificial neural networks, and so on. For example, natural language processing algorithms enable communication between customers and intelligent customer service agents in e-commerce. Also, some studies focus on the design of AI systems or robots that cater to human needs. In this phase, communication between humans and AI is also unilateral. Automated decision-making systems can replace human decisions in some cases; in most scenarios, however, humans act as the final decision-makers, guided by the systems’ suggestions [47].
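This low-uncertainty, AI-supported pattern can be summarized as “the AI proposes, the human disposes”. The sketch below captures this human-in-the-loop flow; all names and thresholds are illustrative assumptions, not a system from the cited studies.

```python
# Minimal human-in-the-loop pattern: the AI suggests, a person retains
# final authority. All names and thresholds are illustrative.
from typing import Callable

def ai_suggest(case: dict) -> str:
    """Stand-in for a trained classifier following codified decision rules."""
    return "approve" if case.get("risk_score", 1.0) < 0.5 else "reject"

def decide(case: dict, human_review: Callable[[dict, str], str]) -> str:
    suggestion = ai_suggest(case)
    # The human may accept or override the AI's suggestion.
    return human_review(case, suggestion)

# Example: a reviewer who accepts suggestions except for borderline cases.
def cautious_reviewer(case: dict, suggestion: str) -> str:
    if 0.4 <= case["risk_score"] <= 0.6:
        return "escalate"  # human overrides: send for manual investigation
    return suggestion

print(decide({"risk_score": 0.45}, cautious_reviewer))  # -> "escalate"
```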

4.2.2. Human-Centered Patterns

On the other hand, with increasing task uncertainty, studies put more emphasis on human-centered algorithms or system design [35]. To address more complex tasks, algorithms become more sophisticated, which may lead to a lack of understanding and “black box” issues [80,81]. This lack of human understanding calls for higher transparency of the algorithm or AI system [26]. In detail, the intelligent agent should be able to explain its decision-making process and reasons to humans in a trustworthy and understandable manner. In this case, studies focus on the design of transparent AI or explainable artificial intelligence. Specifically, humans and AI should share the same mental model, and studies have emphasized the design of XAI to build this shared mental model.
Beyond the “black box” issue of AI, human “cognitive bias” remains another issue that needs to be addressed in human-centered patterns. Specifically, human decision-makers are vulnerable to bias and anchoring effects [82], and their judgements are not always consistent with Bayesian reasoning [83]. Such biases have been widely documented in empirical studies, for instance in medical diagnosis, investment, and operations management [84,85]. Additionally, decision performance in the human-centered pattern also relies largely on decision-makers’ ability, emotions, and stress [86], as well as on their framing of different decision tasks. In general, the “black box” of AI and the cognitive limitations of humans create barriers to the final decision and to synergy performance [8].

4.2.3. Human–AI Synergy-Centered Patterns

For tasks with high uncertainty, studies are mostly human–AI synergy-centered and put more emphasis on the collaboration between the two agents [37]. Humans possess certain advantages over AI systems, such as intuition, experience-based decision-making, and the ability to transfer learning from one task to another, while AI can handle complex tasks through its automated information collecting and updating affordances. For example, AI and humans can collaborate on high-uncertainty tasks in many fields, including behavioral science, computer science, and cognitive science. It is worth noting that in some critical tasks, such as recidivism prediction, medical diagnosis, and other ethics-related tasks, humans are the final decision-makers [19,63]. Due to legal and ethical concerns, full automation of these tasks is undesired and presents challenges for both humans and AI systems [34].
In summary, the combination of human expertise with AI’s capabilities leads to powerful and synergetic decision-making processes. From product recommendations in online shopping to cancer detection in the medical field, AI’s assistance enhances decision-making accuracy and efficiency, while reducing bias and allowing organizations to harness data-driven insights for better outcomes [37]. Embracing AI as a collaborative partner empowers humans to tackle complex challenges more effectively and paves the way for a promising future of human–AI synergy [87]. In particular, human–machine collaboration has been brought to a new level with the announcement of the next industrial revolution, named Industry 5.0 [88]. This revolution has shifted manufacturing from machine-centered systems to human–AI synergy-centered systems, meaning that humans and machines can collaborate to make processes more efficient by integrating human creativity with the power of intelligent systems [89].

4.3. Theme 3: Outcomes of Human–AI Synergy in Decision-Making

The outcomes of AI affordance actualization are the direct results that users can achieve by adopting AI, including the general, psychological, and cognitive outcomes induced by human–AI synergy. Affordance actualization theory also suggests that these outcomes are essential mechanisms that help the human–AI team with their final decision-making. The affordances identified in Theme 1 and the synergy patterns in Theme 2 yield several findings, contributing to our conclusions about these outcomes. By iterating between the existing literature and affordance actualization theory, we analyzed the outcomes of human–AI synergy in decision-making and identified four categories: the general performance of human–AI synergy in decision-making; trust in human–AI synergy in decision-making; transparency and explainability in human–AI synergy; and cognitive perspectives on human–AI synergy.

4.3.1. Outcome 1: General Performance of Human–AI Synergy in Decision-Making

The implementation of AI in the decision-making process has the potential to increase the general performance of decision-making. Through the decision-making-assistance affordance, AI increases the interaction and communication between humans and other key actors in an organization, such as employees and customers in the commerce setting. The automated information collecting and updating affordance of AI can also reduce bias in the data collection process and help develop unbiased approaches in the data analysis process. Unlike human decisions, which may be influenced by intuition or subjective reasoning, AI enables organizations to develop unbiased approaches to data analysis. By eliminating human biases, AI-driven decisions tend to be more objective and impartial, leading to fairer outcomes [90]. Moreover, when faced with tasks involving “unreadable” or incoherent information, humans tend to “fill in” the information with subjective inputs, which may be influenced by inherent cognitive biases [91]. AI-based systems, in contrast, produce visualizations based on the relevant data, representing the data itself rather than a cognitive perception [92]. Additionally, the information collecting and processing affordances of AI can enlarge the data pool that organizations can reach, collect, and analyze. For instance, in clinical decisions on rehabilitation assessment, AI-based decision-support systems can automatically assess and monitor users’ data and exercise information using sensors, and machine learning algorithms can then help with the quantitative analysis [19]. This increases the effectiveness and efficiency of the decision-making process, which subsequently yields effective decisions.
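As a conceptual sketch of the sensor-driven rehabilitation assessment cited above [19], the code below summarizes hypothetical joint-angle readings and flags sessions for clinician review. The features and thresholds are invented placeholders rather than the actual system’s logic.

```python
# Conceptual sketch: summarize exercise sensor readings and flag
# sessions for clinician review. Features/thresholds are placeholders.
from statistics import mean, stdev

def assess_session(joint_angles: list[float], baseline_rom: float) -> dict:
    """Summarize range of motion (ROM) and flag sessions well below baseline."""
    rom = max(joint_angles) - min(joint_angles)
    return {
        "range_of_motion": rom,
        "smoothness": stdev(joint_angles),         # crude variability proxy
        "mean_angle": mean(joint_angles),
        "needs_review": rom < 0.8 * baseline_rom,  # flag for the clinician
    }

readings = [12.0, 35.5, 58.0, 71.2, 64.0, 40.1, 15.3]  # degrees, hypothetical
print(assess_session(readings, baseline_rom=80.0))
```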

4.3.2. Outcome 2: Trust in Human–AI Synergy in Decision-Making

Among the many factors individuals perceive in the human–AI synergy contexts described above, some scholars have focused mainly on the issue of perceived trust [28,45]. Trust in AI and automation differs from trust between humans in that AI lacks consciousness [28,93,94]. In detail, Xu and Dudek [52] investigated a robot’s trustworthiness using an online trust inference model. Heerink et al. discussed the influence of trust on the elderly’s willingness to use intelligent machines to make decisions in the healthcare context [95]. Some scholars have discussed the influence of human trust in service robots on the intention to use them in the context of service marketing [18,96]. Davenport proposed that the prerequisites for AI to be recognized in enterprises and society are complete information disclosure of AI technology and endorsement by external organizations or institutions [97]. Castelo et al. explored the impact of objectivity and other factors on users’ trust in and usage of algorithms [53]. Seeber et al. proposed that, in the context of AI applied to teamwork, research on trust can take into account different dimensions, such as trust in robot teammates, trust in suggestions automatically recommended by machines, and trust in machine algorithms [16]. Trust is not static, but a dynamic process. Through a longitudinal study, Jessup et al. evaluated changes in human trust/distrust towards robot partners in human–AI synergy and documented how trust levels change over time [54]. Yang et al. investigated the impact of example-based explanations on users’ trust over time [41]. As this research shows, trust in human–AI synergy has attracted wide attention across AI application scenarios and plays an essential role in the decision-making process.
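The longitudinal findings above suggest that trust evolves with the AI partner’s observed performance. The toy model below illustrates one such dynamic, with an asymmetric update in which errors erode trust faster than successes rebuild it; the update rule and rates are assumptions for illustration, not a model drawn from the cited studies.

```python
# Toy dynamic-trust model: trust rises after correct AI advice and
# drops more sharply after errors. Update rule and rates are invented.
def update_trust(trust: float, ai_was_correct: bool,
                 gain: float = 0.05, loss: float = 0.15) -> float:
    """Asymmetric update: errors erode trust faster than successes build it."""
    trust += gain * (1 - trust) if ai_was_correct else -loss * trust
    return min(max(trust, 0.0), 1.0)

trust = 0.5  # initial, neutral trust level
for outcome in [True, True, False, True, False, False, True]:
    trust = update_trust(trust, outcome)
    print(f"AI correct: {outcome!s:5}  trust -> {trust:.3f}")
```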

4.3.3. Outcome 3: Transparency and Explainability in Human–AI Synergy

AI can participate in team collaboration in different roles (leader/participant) and improve team performance to some extent [15,92,98]. However, in the practice of human–AI team collaboration, the lack of explainability of AI algorithms or AI systems remains to be addressed; this is mainly caused by the “black box” nature of AI technologies and algorithms. Humans may lack a sense of understanding, trust, or acceptance of AI and its generated content, which hinders the application and promotion of AI in team cooperation; adding transparency to AI can improve human trust and performance in a team [27]. Specifically, the interpretability of AI systems means ensuring efficient and transparent interactions between AI and multiple stakeholders by providing information on how decisions and generated content are made. Different stakeholders, such as system developers, domain experts, and system end-users, also have different requirements for the interpretability of AI systems. To address the explainability problem during human–AI synergy, studies have investigated several aspects: (1) the clarification of what the AI should explain (content) and when it should explain (timing), and (2) algorithms and systems designed to meet explainability standards so that decisions/recommendations are explainable. By enhancing procedural transparency and explainability, AI-based systems can make decisions less controversial and help mitigate agency problems [92]. Explanations have also been identified as able to address users’ bias during human–AI synergy [42].

4.3.4. Outcome 4: Cognitive Perspectives of Human–AI Synergy

Studies have placed much emphasis on humans’ cognitive perception of human–AI synergy, recognizing its potential impact on the final decision-making process. For example, individuals’ perceptions after interacting with AI in different scenarios include perceived interactive comfort [99]. A positive experience of interactive comfort fosters trust and confidence in AI’s abilities, promoting its integration into decision-making processes; conversely, interactive discomfort may lead to hesitation and resistance towards AI adoption. In certain contexts, AI interactions may be perceived as entertaining or enjoyable. Such positive experiences can enhance users’ engagement with AI technologies, making them more willing to explore and utilize AI-driven decision-making tools [56,95]. Interactions with AI systems can also evoke feelings of social belonging, as AI technologies are designed to emulate human-like qualities [99]. Perceptions of AI’s adaptability and social presence contribute to the human–AI synergy experience [56]. Other studies have investigated humans’ cognitive perception from the perspectives of relationship building [18,57], social presence [96,99], and the perceived usefulness and ease of use of AI/robots/decisions [100]. In addition to these perceptions, scholars have also focused on humans’ behavioral outcomes and responses, such as users’ emotional and physiological reactions to AI [58], intention to use AI, acceptance of AI-recommended decisions, actual use situations [56,59], and willingness to cooperate with robot colleagues in the future [101]. Alongside positive perceptions of AI, negative perceptions of automation have also been discussed, such as perceived interactive discomfort [53,55], over-trust in or over-reliance on AI [66], and humans’ resistance to unethical instructions from their AI supervisors [15].

5. Discussion

The findings of this review provide a holistic understanding of research on human–AI synergy in decision-making and develop a systematic framework to guide future investigations. Drawing upon affordance actualization theory [9], our synthesis yields three key themes: (1) the identification of AI affordances in decision-making, (2) human–AI synergy patterns regarding different decision tasks, and (3) the outcomes of human–AI synergy. The findings provide evidence for the usefulness of AI and its positive effects on human–AI synergy processes in decision-making, ultimately leading to improved outcomes of this synergy. Our synthesis also reveals promising opportunities for future research regarding each theme, which are outlined as follows.
Regarding theme 1, the identified AI affordances afford users the ability to complete decision-making tasks more efficiently. Ideally, AI can also enable users to complete new tasks that may be difficult to accomplish through purely human effort. It should be noted that some of the identified affordances, such as automated information collecting and updating, are foundational and essential, while others, like predicting/forecasting and decision-making assistance, provide added value through complementary functionalities. It is essential to recognize that as AI technology advances, the functions and delivery of different AI affordances will undergo significant transformations. However, our current understanding of how new functionalities can improve the delivery of AI affordances and decision-making efficacy is still limited. Accordingly, we propose the following areas and opportunities for future research. For one, there is a need to delve deeper into the delivery efficiency of the four AI affordances that we identified in the decision-making process. Understanding how AI functions are implemented and delivered to users can help optimize the integration of AI tools into decision-making workflows, ensuring that users can fully leverage AI’s potential to enhance efficiency and decision outcomes. Against this background, policy makers and scholars should decide on intervention tools for the implementation of AI systems in industry. Specifically, future research can consider the employment of the ALTAI tool and similar tools in practice. For another, despite the usefulness and positive effects of AI technology adoption in decision-making, there are also conditions under which AI adoption may yield negative consequences. For instance, overreliance on AI or biases embedded in AI algorithms may result in suboptimal decisions or ethical concerns [92]. Future research should, therefore, investigate both the positive and negative effects of AI affordances in decision-making processes [56]. By understanding the potential drawbacks and challenges associated with AI adoption, decision-makers can proactively address these issues and harness AI’s benefits more effectively.
Regarding theme 2, patterns of human–AI synergy were summarized according to the uncertainty level of different decision tasks. Several challenges and future research opportunities have also emerged. Firstly, it is important to recognize that different decision tasks may be better suited to specific types of synergy between humans and AI. Understanding these task-specific synergies can lead to more effective and efficient decision-making processes. From another perspective, different patterns of human–AI synergy may also affect the allocation of tasks between humans and AI, thus impacting the overall performance and outcomes of human–AI synergy in decision-making. Therefore, future research can focus on the dynamic and flexible development of human–AI team organizations that can adapt to different decision tasks and uncertainty levels. Secondly, one of the primary goals of human–AI synergy is to leverage AI’s capabilities to overcome human limitations in decision-making. We have reviewed the characteristics and disadvantages of different synergy patterns, especially the human-centered pattern. To address these problems, future research should explore effective mechanisms for correcting and mitigating human cognitive biases and limitations. By understanding how AI can complement human decision-making processes, researchers can develop strategies to improve the efficiency and accuracy of decision-making with AI assistance.
Regarding theme 3, we identified four general types of human–AI synergy outcomes that may facilitate the ultimate goal of collaborative decision-making. For example, the explanations provided by AI technology offer users new opportunities to generate explainable or transparent decisions, which are beneficial for successful decision-making. However, the role of these outcomes has only been studied in general terms, without rigorous investigation into their effects on the decision-making process. Accordingly, we argue that more rigorous investigation should be conducted to illustrate their mediating effects on the final accomplishment of decision tasks. For example, future research may conduct longitudinal studies to investigate the dynamic mediating effects of the synergy outcomes and their specific role in improving the decision-making process. Moreover, as the outcomes of human–AI synergy are often interconnected and interdependent, future research should explore the interactions between these outcomes. Understanding how these outcomes work together and influence each other can offer valuable insights into optimizing the human–AI collaboration process and achieving optimal decision-making outcomes. In addition, there should be a feedback loop during the human–AI synergy process: as affordance actualization theory suggests, there exists a feedback mechanism from actualized outcomes to affordance potentials. Therefore, future research can incorporate various feedback mechanisms and provide a deeper understanding of the theoretical framework of human–AI synergy in the decision-making process. Understanding humans’ cognitive perception of, and behavioral responses to, human–AI synergy is crucial for maximizing the potential of AI technologies in decision-making contexts.

6. Limitations and Conclusions

This review of human–AI synergy in decision-making has highlighted potential avenues for future research to gain a deeper understanding of this phenomenon through more comprehensive insights and perspectives. However, certain limitations should be acknowledged and addressed in future research endeavors. Firstly, regarding sample pool selection, the inclusion of studies in this review was constrained to those explicitly incorporating the specified search terms. While this approach helped keep the set of selected articles at a manageable size, it may have limited the diversity and breadth of the sample pool. To address this, future studies could aim to include as many relevant studies as possible, employing a more comprehensive search strategy to encompass a wider array of perspectives and findings. Secondly, it is increasingly crucial to pay attention to variations in human–AI synergy patterns and their impact on decision-making performance. Different fields and industries may experience unique challenges and opportunities when integrating AI into decision-making processes. Future reviews could prioritize examining explicit decision-making contexts and addressing research gaps in various domains, such as healthcare, finance, and marketing, among others. This targeted approach would provide specialized insights that cater to the specific needs and complexities of different fields, ultimately contributing to more-informed decision-making practices. Last but not least, due to the rapid development of AI technology, a similar literature review should be undertaken in two years’ time to take into account frontier changes in the topic and to assess the pace of change in AI development in human–AI synergy. This will ensure that this review remains up-to-date and captures the latest advancements in this evolving field.

Author Contributions

Conceptualization, Y.B. and W.G.; methodology, Y.B. and W.G.; formal analysis, Y.B., W.G. and K.Y.; resources, Y.B., W.G. and K.Y.; writing—original draft preparation, Y.B.; writing—review and editing, Y.B., W.G. and K.Y.; funding acquisition, Y.B. and W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the China Postdoctoral Science Foundation, grant number 2022M723491, the Science Foundation of China University of Petroleum, Beijing, grant number 2462023SZBH006, the Interdisciplinary Research Foundation for Doctoral Candidates of Beijing Normal University, grant number BNUXKJC2223, and the Postgraduate Innovative Research Fund of University of International Business and Economics, grant number 202367.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach; Prentice-Hall: Englewood Cliffs, NJ, USA, 1995. [Google Scholar]
  2. Jarrahi, M.H. Artificial Intelligence and the Future of Work: Human-AI Symbiosis in Organizational Decision Making. Bus. Horiz. 2018, 61, 577–586. [Google Scholar] [CrossRef]
  3. Lai, V.; Chen, C.; Liao, Q.V.; Smith-Renner, A.; Tan, C. Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies. arXiv 2021, arXiv:2112.11471. [Google Scholar]
  4. Achmat, L.; Brown, I. Artificial Intelligence Affordances for Business Innovation: A Systematic Review of Literature. In Proceedings of the 4th International Conference on the Internet, Cyber Security and Information Systems, (ICICIS), Johannesburg, South Africa, 31 October–1 November 2019; pp. 1–12. [Google Scholar]
  5. Bader, J.; Edwards, J.; Harris-Jones, C.; Hannaford, D. Practical Engineering of Knowledge-Based Systems. Inf. Softw. Technol. 1988, 30, 266–277. [Google Scholar] [CrossRef]
  6. Kumar, V.; Rajan, B.; Venkatesan, R.; Lecinski, J. Understanding the Role of Artificial Intelligence in Personalized Engagement Marketing. Calif. Manag. Rev. 2019, 61, 135–155. [Google Scholar] [CrossRef]
  7. Fernandes, T.; Oliveira, E. Understanding Consumers’ Acceptance of Automated Technologies in Service Encounters: Drivers of Digital Voice Assistants Adoption. J. Bus. Res. 2021, 122, 180–191. [Google Scholar] [CrossRef]
  8. Xiong, W.; Fan, H.; Ma, L.; Wang, C. Challenges of Human—Machine Collaboration in Risky Decision-Making. Front. Eng. Manag. 2022, 9, 89–103. [Google Scholar] [CrossRef]
  9. Strong, D.; Volkoff, O.; Johnson, S.; Pelletier, L.; Tulu, B.; Bar-On, I.; Trudel, J.; Garber, L. A Theory of Organization-EHR Affordance Actualization. J. Assoc. Inf. Syst. 2014, 15, 53–85. [Google Scholar] [CrossRef]
  10. Du, W.; Pan, S.L.; Leidner, D.E.; Ying, W. Affordances, Experimentation and Actualization of FinTech: A Blockchain Implementation Study. J. Strateg. Inf. Syst. 2019, 28, 50–65. [Google Scholar] [CrossRef]
  11. Zeng, D.; Tim, Y.; Yu, J.; Liu, W. Actualizing Big Data Analytics for Smart Cities: A Cascading Affordance Study. Int. J. Inf. Manag. 2020, 54, 102156. [Google Scholar] [CrossRef]
  12. Lehrer, C.; Wieneke, A.; Vom Brocke, J.; Jung, R.; Seidel, S. How Big Data Analytics Enables Service Innovation: Materiality, Affordance, and the Individualization of Service. J. Manag. Inf. Syst. 2018, 35, 424–460. [Google Scholar] [CrossRef]
  13. Chatterjee, S.; Moody, G.; Lowry, P.B.; Chakraborty, S.; Hardin, A. Information Technology and Organizational Innovation: Harmonious Information Technology Affordance and Courage-Based Actualization. J. Strateg. Inf. Syst. 2020, 29, 101596. [Google Scholar] [CrossRef]
  14. Anderson, C.; Robey, D. Affordance Potency: Explaining the Actualization of Technology Affordances. Inf. Organ. 2017, 27, 100–115. [Google Scholar] [CrossRef]
  15. Lanz, L.; Briker, R.; Gerpott, F.H. Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning. J. Bus. Ethics 2023. [Google Scholar] [CrossRef]
  16. Seeber, I.; Bittner, E.; Briggs, R.O.; De Vreede, T.; De Vreede, G.-J.; Elkins, A.; Maier, R.; Merz, A.B.; Oeste-Reiß, S.; Randrup, N.; et al. Machines as Teammates: A Research Agenda on AI in Team Collaboration. Inf. Manag. 2020, 57, 103174. [Google Scholar] [CrossRef]
  17. Hancock, P.A.; Kajaks, T.; Caird, J.K.; Chignell, M.H.; Mizobuchi, S.; Burns, P.C.; Feng, J.; Fernie, G.R.; Lavallière, M.; Noy, I.Y.; et al. Challenges to Human Drivers in Increasingly Automated Vehicles. Hum. Factors J. Hum. Factors Ergon. Soc. 2020, 62, 310–328. [Google Scholar] [CrossRef] [PubMed]
  18. Van Pinxteren, M.M.E.; Wetzels, R.W.H.; Rüger, J.; Pluymaekers, M.; Wetzels, M. Trust in Humanoid Robots: Implications for Services Marketing. J. Serv. Mark. 2019, 33, 507–518. [Google Scholar] [CrossRef]
  19. Lee, M.H.; Siewiorek, D.P.; Smailagic, A.; Bernardino, A.; Bermúdez i Badia, S. A Human-AI Collaborative Approach for Clinical Decision Making on Rehabilitation Assessment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–14. [Google Scholar]
  20. Järvelä, S.; Nguyen, A.; Hadwin, A. Human and Artificial Intelligence Collaboration for Socially Shared Regulation in Learning. Br. J. Educ. Technol. 2023, 54, 1057–1076. [Google Scholar] [CrossRef]
  21. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 407–434. [Google Scholar] [CrossRef]
  22. Radclyffe, C.; Ribeiro, M.; Wortham, R.H. The Assessment List for Trustworthy Artificial Intelligence: A Review and Recommendations. Front. Artif. Intell. 2023, 6, 1020592. [Google Scholar] [CrossRef]
  23. Stahl, B.C.; Leach, T. Assessing the Ethical and Social Concerns of Artificial Intelligence in Neuroinformatics Research: An Empirical Test of the European Union Assessment List for Trustworthy AI (ALTAI). AI Ethics 2023, 3, 745–767. [Google Scholar] [CrossRef]
  24. Zicari, R.V.; Brodersen, J.; Brusseau, J.; Dudder, B.; Eichhorn, T.; Ivanov, T.; Kararigas, G.; Kringen, P.; McCullough, M.; Moslein, F.; et al. Z-Inspection®: A Process to Assess Trustworthy AI. IEEE Trans. Technol. Soc. 2021, 2, 83–97. [Google Scholar] [CrossRef]
  25. Webster, J.; Watson, R.T. Analyzing the Past to Prepare for the Future: Writing a Literature Review. MIS Q. 2002, 26, xiii–xxiii. [Google Scholar]
  26. Yuan, L.; Gao, X.; Zheng, Z.; Edmonds, M.; Wu, Y.N.; Rossano, F.; Lu, H.; Zhu, Y.; Zhu, S.-C. In Situ Bidirectional Human-Robot Value Alignment. Sci. Robot. 2022, 7, eabm4183. [Google Scholar] [CrossRef] [PubMed]
  27. Wang, N.; Pynadath, D.V.; Hill, S.G. Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 109–116. [Google Scholar]
  28. Chen, M.; Nikolaidis, S.; Soh, H.; Hsu, D.; Srinivasa, S. Planning with Trust for Human-Robot Collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; ACM: New York, NY, USA, 2018; pp. 307–315. [Google Scholar]
  29. Gao, X.; Gong, R.; Zhao, Y.; Wang, S.; Shu, T.; Zhu, S.-C. Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks. In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020. [Google Scholar]
  30. Gong, Z.; Zhang, Y. Behavior Explanation as Intention Signaling in Human-Robot Teaming. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1005–1011. [Google Scholar]
  31. Unhelkar, V.V.; Li, S.; Shah, J.A. Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; ACM: New York, NY, USA, 2020; pp. 329–341. [Google Scholar]
  32. Buçinca, Z.; Malaya, M.B.; Gajos, K.Z. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-Assisted Decision-Making. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–21. [Google Scholar] [CrossRef]
  33. Buçinca, Z.; Lin, P.; Gajos, K.Z.; Glassman, E.L. Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; pp. 454–464. [Google Scholar]
  34. Lai, V.; Tan, C. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; pp. 29–38. [Google Scholar]
  35. Lai, V.; Liu, H.; Tan, C. “Why Is ‘Chicago’ Deceptive?” Towards Building Model-Driven Tutorials for Humans. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–13. [Google Scholar]
  36. Alqaraawi, A.; Schuessler, M.; Weiß, P.; Costanza, E.; Berthouze, N. Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020. [Google Scholar]
  37. Bansal, G.; Nushi, B.; Kamar, E.; Weld, D.S.; Lasecki, W.S.; Horvitz, E. Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff. Proc. AAAI Conf. Artif. Intell. 2019, 33, 2429–2437. [Google Scholar] [CrossRef]
  38. Trocin, C.; Hovland, I.V.; Mikalef, P.; Dremel, C. How Artificial Intelligence Affords Digital Innovation: A Cross-Case Analysis of Scandinavian Companies. Technol. Forecast. Soc. Change 2021, 173, 121081. [Google Scholar] [CrossRef]
  39. Haesevoets, T.; De Cremer, D.; Dierckx, K.; Van Hiel, A. Human-Machine Collaboration in Managerial Decision Making. Comput. Hum. Behav. 2021, 119, 106730. [Google Scholar] [CrossRef]
  40. Edmonds, M.; Gao, F.; Liu, H.; Xie, X.; Qi, S.; Rothrock, B.; Zhu, Y.; Wu, Y.N.; Lu, H.; Zhu, S.-C. A Tale of Two Explanations: Enhancing Human Trust by Explaining Robot Behavior. Sci. Robot. 2019, 4, eaay4663. [Google Scholar] [CrossRef]
  41. Yang, F.; Huang, Z.; Scholtz, J.; Arendt, D.L. How Do Visual Explanations Foster End Users’ Appropriate Trust in Machine Learning? In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; ACM: New York, NY, USA, 2020; pp. 189–201. [Google Scholar]
  42. Nourani, M.; Roy, C.; Block, J.E.; Honeycutt, D.R.; Rahman, T.; Ragan, E.; Gogate, V. Anchoring Bias Affects Mental Model Formation and User Reliance in Explainable AI Systems. In Proceedings of the 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, 14–17 April 2021; ACM: New York, NY, USA, 2021; pp. 340–350. [Google Scholar]
  43. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  44. Arshad, S.Z.; Zhou, J.; Bridon, C.; Chen, F.; Wang, Y. Investigating User Confidence for Uncertainty Presentation in Predictive Decision Making. In Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, Parkville, VIC, Australia, 7–10 December 2015; ACM: New York, NY, USA, 2015; pp. 352–360. [Google Scholar]
  45. Yu, K.; Berkovsky, S.; Taib, R.; Zhou, J.; Chen, F. Do I Trust My Machine Teammate?: An Investigation from Perception to Decision. In Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019; ACM: New York, NY, USA, 2019; pp. 460–468. [Google Scholar]
  46. Mercado, J.E.; Rupp, M.A.; Chen, J.Y.C.; Barnes, M.J.; Barber, D.; Procci, K. Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management. Hum. Factors J. Hum. Factors Ergon. Soc. 2016, 58, 401–415. [Google Scholar] [CrossRef]
  47. Cheng, H.-F.; Wang, R.; Zhang, Z.; O’Connell, F.; Gray, T.; Harper, F.M.; Zhu, H. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, 4–9 May 2019; ACM: New York, NY, USA, 2019; pp. 1–12. [Google Scholar]
  48. Vinanzi, S.; Cangelosi, A.; Goerick, C. The Collaborative Mind: Intention Reading and Trust in Human-Robot Interaction. iScience 2021, 24, 102130. [Google Scholar] [CrossRef] [PubMed]
  49. Sachan, S.; Yang, J.-B.; Xu, D.-L.; Benavides, D.E.; Li, Y. An Explainable AI Decision-Support-System to Automate Loan Underwriting. Expert Syst. Appl. 2020, 144, 113100. [Google Scholar] [CrossRef]
  50. Gutzwiller, R.S.; Reeder, J. Dancing with Algorithms: Interaction Creates Greater Preference and Trust in Machine-Learned Behavior. Hum. Factors J. Hum. Factors Ergon. Soc. 2021, 63, 854–867. [Google Scholar] [CrossRef] [PubMed]
  51. Patel, B.N.; Rosenberg, L.; Willcox, G.; Baltaxe, D.; Lyons, M.; Irvin, J.; Rajpurkar, P.; Amrhein, T.; Gupta, R.; Halabi, S.; et al. Human–Machine Partnership with Artificial Intelligence for Chest Radiograph Diagnosis. NPJ Digit. Med. 2019, 2, 111. [Google Scholar] [CrossRef] [PubMed]
  52. Xu, A.; Dudek, G. OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; ACM: New York, NY, USA, 2015; pp. 221–228. [Google Scholar]
  53. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-Dependent Algorithm Aversion. J. Mark. Res. 2019, 56, 809–825. [Google Scholar] [CrossRef]
  54. Jessup, S.; Gibson, A.; Capiola, A.; Alarcon, G.; Borders, M. Investigating the Effect of Trust Manipulations on Affect over Time in Human-Human versus Human-Robot Interactions. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020. [Google Scholar]
  55. Mende, M.; Scott, M.L.; Van Doorn, J.; Grewal, D.; Shanks, I. Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses. J. Mark. Res. 2019, 56, 535–556. [Google Scholar] [CrossRef]
  56. Fridin, M.; Belokopytov, M. Acceptance of Socially Assistive Humanoid Robot by Preschool and Elementary School Teachers. Comput. Hum. Behav. 2014, 33, 23–31. [Google Scholar] [CrossRef]
  57. Seo, S.H.; Griffin, K.; Young, J.E.; Bunt, A.; Prentice, S.; Loureiro-Rodríguez, V. Investigating People’s Rapport Building and Hindering Behaviors When Working with a Collaborative Robot. Int. J. Soc. Robot. 2018, 10, 147–161. [Google Scholar] [CrossRef]
  58. Desideri, L.; Ottaviani, C.; Malavasi, M.; Di Marzio, R.; Bonifacci, P. Emotional Processes in Human-Robot Interaction during Brief Cognitive Testing. Comput. Hum. Behav. 2019, 90, 331–342. [Google Scholar] [CrossRef]
  59. Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the Shades of the Uncanny Valley: An Experimental Study of Human–Chatbot Interaction. Future Gener. Comput. Syst. 2019, 92, 539–548. [Google Scholar] [CrossRef]
  60. Bansal, G.; Nushi, B.; Kamar, E.; Lasecki, W.; Weld, D.S.; Horvitz, E. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP-19), Stevenson, WA, USA, 28–30 October 2019; p. 10. [Google Scholar]
  61. Zhang, R.; McNeese, N.J.; Freeman, G.; Musick, G. “An Ideal Human”: Expectations of AI Teammates in Human-AI Teaming. Proc. ACM Hum.-Comput. Interact. 2021, 4, 246. [Google Scholar] [CrossRef]
  62. Lawrence, L.; Echeverria, V.; Yang, K.; Aleven, V.; Rummel, N. How Teachers Conceptualise Shared Control with an AI Co-orchestration Tool: A Multiyear Teacher-centred Design Process. Br. J. Educ. Technol. 2023, bjet.13372. [Google Scholar] [CrossRef]
  63. Chiang, C.-W.; Lu, Z.; Li, Z.; Yin, M. Are Two Heads Better Than One in AI-Assisted Decision Making? Comparing the Behavior and Performance of Groups and Individuals in Human-AI Collaborative Recidivism Risk Assessment. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; ACM: New York, NY, USA, 2023; pp. 1–18. [Google Scholar]
  64. Holstein, K.; De-Arteaga, M.; Tumati, L.; Cheng, Y. Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables. Proc. ACM Hum.-Comput. Interact. 2023, 7, 152. [Google Scholar] [CrossRef]
  65. Tsai, C.-H.; You, Y.; Gui, X.; Kou, Y.; Carroll, J.M. Exploring and Promoting Diagnostic Transparency and Explainability in Online Symptom Checkers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–17. [Google Scholar]
  66. Levy, A.; Agrawal, M.; Satyanarayan, A.; Sontag, D. Assessing the Impact of Automated Suggestions on Decision Making: Domain Experts Mediate Model Errors but Take Less Initiative. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–13. [Google Scholar]
  67. Vrontis, D.; Christofi, M.; Pereira, V.; Tarba, S.; Makrides, A.; Trichina, E. Artificial Intelligence, Robotics, Advanced Technologies and Human Resource Management: A Systematic Review. Int. J. Hum. Resour. Manag. 2022, 33, 1237–1266. [Google Scholar] [CrossRef]
  68. Prentice, C.; Dominique Lopes, S.; Wang, X. The Impact of Artificial Intelligence and Employee Service Quality on Customer Satisfaction and Loyalty. J. Hosp. Mark. Manag. 2020, 29, 739–756. [Google Scholar] [CrossRef]
  69. Pournader, M.; Ghaderi, H.; Hassanzadegan, A.; Fahimnia, B. Artificial Intelligence Applications in Supply Chain Management. Int. J. Prod. Econ. 2021, 241, 108250. [Google Scholar] [CrossRef]
  70. Wilson, H.J.; Daugherty, P.; Shukla, P. How One Clothing Company Blends AI and Human Expertise. Harv. Bus. Rev. 2016. [Google Scholar]
  71. Marr, B. Stitch Fix: The Amazing Use Case of Using Artificial Intelligence in Fashion Retail. Forbes 2018, 25. [Google Scholar]
  72. Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H. Deep Learning for Identifying Metastatic Breast Cancer. arXiv 2016, arXiv:1606.05718. [Google Scholar] [CrossRef]
  73. Arnold, T.; Kasenberg, D.; Scheutz, M. Explaining in Time: Meeting Interactive Standards of Explanation for Robotic Systems. ACM Trans. Hum.-Robot Interact. 2021, 10, 25. [Google Scholar] [CrossRef]
  74. Lim, B.Y.; Dey, A.K.; Avrahami, D. Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; ACM: New York, NY, USA, 2009; pp. 2119–2128. [Google Scholar]
  75. Puranam, P. Human–AI Collaborative Decision-Making as an Organization Design Problem. J. Organ. Des. 2021, 10, 75–80. [Google Scholar] [CrossRef]
  76. Parker, S.K.; Grote, G. Automation, Algorithms, and Beyond: Why Work Design Matters More Than Ever in a Digital World. Appl. Psychol. 2022, 71, 1171–1204. [Google Scholar] [CrossRef]
  77. Roth, E.M.; Sushereba, C.; Militello, L.G.; Diiulio, J.; Ernst, K. Function Allocation Considerations in the Era of Human Autonomy Teaming. J. Cogn. Eng. Decis. Mak. 2019, 13, 199–220. [Google Scholar] [CrossRef]
  78. Van Maanen, P.P.; van Dongen, K. Towards Task Allocation Decision Support by Means of Cognitive Modeling of Trust. In Proceedings of the 17th Belgian-Netherlands Artificial Intelligence Conference, Brussels, Belgium, 17–18 October 2005; pp. 399–400. [Google Scholar]
  79. Flemisch, F.; Heesen, M.; Hesse, T.; Kelsch, J.; Schieben, A.; Beller, J. Towards a Dynamic Balance between Humans and Automation: Authority, Ability, Responsibility and Control in Shared and Cooperative Control Situations. Cogn. Technol. Work 2012, 14, 3–18. [Google Scholar] [CrossRef]
  80. Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
81. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I.; on behalf of the Precise4Q Consortium. Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310. [Google Scholar] [CrossRef]
  82. Bier, V. Implications of the Research on Expert Overconfidence and Dependence. Reliab. Eng. Syst. Saf. 2004, 85, 321–329. [Google Scholar] [CrossRef]
  83. Charness, G.; Karni, E.; Levin, D. Individual and Group Decision Making under Risk: An Experimental Study of Bayesian Updating and Violations of First-Order Stochastic Dominance. J. Risk Uncertain. 2007, 35, 129–148. [Google Scholar] [CrossRef]
  84. Tong, J.; Feiler, D. A Behavioral Model of Forecasting: Naive Statistics on Mental Samples. Manag. Sci. 2017, 63, 3609–3627. [Google Scholar] [CrossRef]
  85. Blumenthal-Barby, J.S.; Krieger, H. Cognitive Biases and Heuristics in Medical Decision Making: A Critical Review Using a Systematic Search Strategy. Med. Decis. Mak. 2015, 35, 539–557. [Google Scholar] [CrossRef]
  86. Zinn, J.O. Heading into the Unknown: Everyday Strategies for Managing Risk and Uncertainty. Health Risk Soc. 2008, 10, 439–450. [Google Scholar] [CrossRef]
  87. Bayati, M.; Braverman, M.; Gillam, M.; Mack, K.M.; Ruiz, G.; Smith, M.S.; Horvitz, E. Data-Driven Decisions for Reducing Readmissions for Heart Failure: General Methodology and Case Study. PLoS ONE 2014, 9, e109264. [Google Scholar] [CrossRef] [PubMed]
  88. Pizoń, J.; Gola, A. Human–Machine Relationship—Perspective and Future Roadmap for Industry 5.0 Solutions. Machines 2023, 11, 203. [Google Scholar] [CrossRef]
  89. Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef]
  90. Trocin, C.; Mikalef, P.; Papamitsiou, Z.; Conboy, K. Responsible AI for Digital Health: A Synthesis and a Research Agenda. Inf. Syst. Front. 2021, 1–19. [Google Scholar] [CrossRef]
  91. McShane, M.; Nirenburg, S.; Jarrell, B. Modeling Decision-Making Biases. Biol. Inspired Cogn. Archit. 2013, 3, 39–50. [Google Scholar] [CrossRef]
  92. Parry, K.; Cohen, M.; Bhattacharya, S. Rise of the Machines: A Critical Consideration of Automated Leadership Decision Making in Organizations. Group Organ. Manag. 2016, 41, 571–594. [Google Scholar] [CrossRef]
  93. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef]
  94. Rheu, M.; Shin, J.Y.; Peng, W.; Huh-Yoo, J. Systematic Review: Trust-Building Factors and Implications for Conversational Agent Design. Int. J. Hum.–Comput. Interact. 2021, 37, 81–96. [Google Scholar] [CrossRef]
  95. Heerink, M. Assessing Acceptance of Assistive Social Robots by Aging Adults. Ph.D. Thesis, Universiteit van Amsterdam, Amsterdam, The Netherlands, 2010. [Google Scholar]
  96. Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave New World: Service Robots in the Frontline. J. Serv. Manag. 2018, 29, 907–931. [Google Scholar] [CrossRef]
  97. Davenport, T.; Guha, A.; Grewal, D.; Bressgott, T. How Artificial Intelligence Will Change the Future of Marketing. J. Acad. Mark. Sci. 2020, 48, 24–42. [Google Scholar] [CrossRef]
  98. Mikalef, P.; Gupta, M. Artificial Intelligence Capability: Conceptualization, Measurement Calibration, and Empirical Study on Its Impact on Organizational Creativity and Firm Performance. Inf. Manag. 2021, 58, 103434. [Google Scholar] [CrossRef]
  99. Van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and Customers’ Service Experiences. J. Serv. Res. 2017, 20, 43–58. [Google Scholar] [CrossRef]
  100. Libert, K.; Mosconi, E.; Cadieux, N. Human-Machine Interaction and Human Resource Management Perspective for Collaborative Robotics Implementation and Adoption. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020; Volume 3, pp. 533–542. [Google Scholar]
  101. Piçarra, N.; Giger, J.-C. Predicting Intention to Work with Social Robots at Anticipation Stage: Assessing the Role of Behavioral Desire and Anticipated Emotions. Comput. Hum. Behav. 2018, 86, 129–146. [Google Scholar] [CrossRef]
Figure 1. Overarching framework of affordance actualization (adapted from Strong et al. [9]).
Figure 2. The review protocol.
Figure 3. High-level synthesis of research on human–AI synergy in decision-making.
Table 1. Profile of studies.
Author | Decision Tasks | Types of AI and AI Systems | Organizational Outcomes
[1] | hiring and firing employees | algorithm-enabled software system | humans’ acceptance of machine participation
[7] | service encounter | intelligent digital voice assistant | users’ motivations to adopt AI
[15] | human resource management | algorithm-based AI system | human reaction to an AI supervisor
[18] | welcoming visitors and employees and offering directions to specific locations on a campus | humanoid robots | trust, intention to use, and enjoyment
[19] | clinical decision-making on rehabilitation assessment | AI-based decision-support system | usefulness and attitudes toward the system
[20] | socially shared regulation (SSRL) in learning | intelligent agent | learning regulation improvement
[26] | Scout Exploration Game | explainable artificial intelligence (XAI) system | aligned understanding between humans and AI
[27] | reconnaissance missions to gather intelligence in a foreign town | algorithm-based robots | transparency, trust, mission success, and team performance
[28] | table-clearing task | autonomous-system-based robot assistants | human–robot team performance and trust evolution
[29] | real-time human–robot cooking task | algorithm-based XAI | collaboration performance and user perception of the robot
[30] | making a cup of coffee or cleaning the bathroom | a synthetic robot maid | more explainable robot behavior, team performance
[31] | a human–robot team preparing meals in a kitchen | autonomous-system-based robot assistants | human–robot collaboration performance
[32] | turning a plate of food into a low-carb meal | recommender system (XAI) | team performance
[33] | nutrition-related decision-making task | recommender system (XAI) | objective performance, trust, preference, mental demand, and understanding
[34] | deception-detection task | machine learning models | human performance and human agency
[35] | deception-detection task | machine learning models | human performance
[36] | multi-label image classification | machine learning algorithms | outcome prediction accuracy and confidence
[37] | three high-stakes classification tasks | machine learning algorithm | performance/compatibility tradeoff
[38] | recruitment and staffing, e-commerce, banking | AI-based platform/assistant | degree of fairness, transparent feedback, less-biased decisions
[39] | managerial decision-making such as hiring and firing employees | algorithm-based AI system | acceptance of the decisions
[40] | opening medicine bottles | autonomous-system-based robot | human trust in the robot
[41] | naming and distinguishing species | machine learning classifier | end users’ appropriate trust
[42] | cooking-related tasks in a kitchen | explainable artificial intelligence (XAI) | mental model, task performance, and reliance on the system
[43] | visual estimation task, song forecasting task, romantic attraction forecasting task | algorithm-based AI system | algorithm appreciation (preference between algorithmic and human judgment)
[44] | water pipe failure prediction | machine-learning (ML)-based decision-support system | user confidence in decision-making
[45] | quality control in a drinking-glass-making factory | automated decision-support systems | human trust, system performance, human perception and decisions
[46] | multi-UxV (unmanned vehicle) planning task | intelligent agent | performance, trust, and perceived usability
[47] | student admission | algorithm-based AI system | trust in algorithmic decisions
[48] | collaborative game | humanoid robot | intention reading and trusting capabilities
[49] | automating the loan underwriting process | explainable AI decision-support system | trade-off between prediction accuracy and explainability
[50] | control of unmanned vehicle | machine-learning-based automated agents | trust in automation, human–systems integration
[51] | diagnosis of pneumonia on chest radiographs | deep-learning model architectures | diagnosis performance
[52] | visual navigation | autonomous robot | human–robot trust and efficiency
[53] | 26 tasks, including predicting stock market outcomes, predicting the weather, analyzing data, and giving directions | algorithms | trust in algorithms
[54] | computer game | humanoid robot | trust and distrust over time
[55] | providing restaurant and food services | humanoid robot | user discomfort, compensatory consumption
[56] | interacting with preschool-aged children | humanoid robot | acceptance of socially assistive robots (SAR) by preschool and primary school teachers
[57] | inspection task to sort laundered squares of cloth | humanoid robot | people’s rapport-building and rapport-hindering behaviors
[58] | cognitive assessment | humanoid robot | cognitive performance and workload
[59] | interaction during the academy enrolment process | text chatbot | individuals’ psychophysiological indices
[60] | line scenario-like platform | machine-learning-based platform | team performance
[61] | multiplayer games | AI algorithms | human perceptions and expectations of AI teammates
[62] | individual and collaborative learning | AI-based tutoring systems | control, trust, responsibility, efficiency, and accuracy
[63] | recidivism risk assessment | algorithm-based AI system | accuracy, reliance on AI, understanding of AI, decision fairness, willingness to take accountability
[64] | AI-assisted house-price prediction | algorithm-based AI model | people’s integration of the model outputs with information, prediction accuracy
[65] | symptom diagnosis | algorithm-based intelligent online symptom checkers | diagnostic transparency and explainability
[66] | clinical concept identification and classification | natural-language-processing-based clinical annotation system | accuracy and efficiency