Article

Behavioral Ethics Ecologies of Human-Artificial Intelligence Systems

VTT Technical Research Centre of Finland, FI-02150 Espoo, Finland
Behav. Sci. 2022, 12(4), 103; https://doi.org/10.3390/bs12040103
Submission received: 4 March 2022 / Revised: 8 April 2022 / Accepted: 8 April 2022 / Published: 11 April 2022

Abstract

Historically, the evolution of behaviors often took place in environments that changed little over millennia. By contrast, today, rapid changes to behaviors and environments come from the introduction of artificial intelligence (AI) and the infrastructures that facilitate its application. Behavioral ethics is concerned with how interactions between individuals and their environments can lead people to questionable decisions and dubious actions: for example, interactions between an individual’s self-regulatory resource depletion and organizational pressure to take non-ethical actions. In this paper, four fundamental questions of behavioral ecology are applied to analyze human behavioral ethics in human–AI systems. These four questions concern the function of behavioral traits, how behavioral traits evolve in populations, the mechanisms of behavioral traits, and how behavioral traits develop differently in different individuals. The analysis is carried out with reference to vehicle navigation systems and healthcare diagnostic systems, both of which are enabled by AI. Overall, the paper provides two main contributions: first, a behavioral ecology analysis of behavioral ethics; second, the application of behavioral ecology questions to identify opportunities and challenges for ethical human–AI systems.

1. Introduction

Behavioral ethics is concerned with what is done, rather than the normative ethics of what should be done. Behavioral ethics addresses the potential for good people to make questionable decisions and take dubious actions [1,2,3]. Behavioral ethics considers interactions between moral motivation and ethical temptation, which depend on combinations of variables that can come together differently in different situations [4,5,6]. These include, for example, an individual’s self-regulatory resource depletion and organizational pressure to take non-ethical actions [7,8]. Having the moral motivation to resist ethical temptation can be a struggle [9], in which non-ethical impulses can override moral reflection [10], sometimes with disastrous consequences for individuals and organizations [3]. Organizational pressure can arise from conflicts between organizational practices, such as performance reporting and performance rewarding [11]. Individuals who identify strongly with an organization can go along with organizational pressure to take non-ethical action [12,13], especially if their self-regulatory resources are depleted [14], for example from overwork for the organization [15] and/or time pressures [16,17]. Fundamentally, behavioral ethics is concerned with how interactions between individuals and their environments can lead to questionable decisions and dubious actions.
All aspects of human–environment interactions are encompassed within the field of human ecology [18]. With more specificity, cultural ecology encompasses human adaptations to social and physical environments [19]. With further specificity, human behavioral ecology addresses the same questions that behavioral ecologists ask when studying other species [20]. Four fundamental questions in behavioral ecology are those raised by Niko Tinbergen. Two of the questions are evolutionary: what is the ecological fitness function of a behavioral trait (function)? and what is the behavioral trait’s evolutionary history in a population (phylogeny)? Two of the questions are proximate: what is the structure of a behavioral trait (mechanism)? and how has the behavioral trait developed in an individual (ontogeny) [21]? Hitherto, the relevance of behavioral ecology to human ethical behavior has been recognized [22], but there has not been a behavioral ecology analysis of ethical human–artificial intelligence systems. This is despite fundamental questions in behavioral ecology being applicable to nonliving as well as living systems [23].
This gap in the literature is addressed in the remaining sections. Next, in Section 2, human–AI systems’ behavioral traits are analyzed in terms of function. In Section 3, they are analyzed in terms of phylogeny. In Section 4, they are analyzed in terms of mechanism. In Section 5, they are analyzed in terms of ontogeny. In Section 6, the four behavioral ecology questions are applied together to structure behavioral ethics analysis of a human–AI system. This is achieved with the examples of vehicle navigation systems and healthcare diagnostic systems. In conclusion, principal contributions are stated and directions for future research are proposed in Section 7. Overall, the behavioral ecology analysis identifies opportunities and challenges for ethical human–AI systems. These include the dependence of human–AI systems on ideal environmental conditions to reduce ethical stress, and the need for policy-making to encompass the multitude of environmental factors that can affect human–AI systems.

2. Function

The function of behavioral traits in enabling survival can be considered in terms of ecological fitness. The closer the fit between a human behavioral trait and the current environment in which the person intends to survive, the less information the person will be lacking about how to survive, the less physical disorder there will be in the person’s actions to survive in the environment, and the less energy the person will consume unproductively in actions to survive. Hence, when there is a good fit between a person’s behavioral traits and the environment, the person can survive with least action and have energy free for other actions [24,25,26]. In such situations, a person can maintain internal stability by internal regulation through homeostasis [27].
Consider, for example, the demanding working lives of truck drivers. Typically, truck drivers are under time pressure, but this could be reduced to some extent if a truck driver has the behavioral trait of excellent navigation skills. In particular, a truck driver who has the behavioral trait of excellent navigation skills can experience low situated entropy [24,25]. This is because such a truck driver does not experience information theoretic entropy from route information uncertainty. Consequently, the truck driver travels to delivery destinations directly, and so does not experience the statistical mechanics entropy of the physical disorganization involved in driving incorrect routes. As the truck driver does not drive incorrect routes, the truck driver is not under additional time pressure and may have time to take at least some rest breaks that include eating properly. Moreover, the truck driver does not experience thermodynamic entropy, which would be entailed in the physical disorganization of driving incorrect routes due to lack of route information. Rather, the truck driver survives with least action and can have some energy free for other actions, such as recreational activities that can facilitate sleeping well, which together with workday rest breaks, can contribute to maintaining balance between energy demands and energy supply.
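To make the information-theoretic component of situated entropy concrete, consider a simple illustrative calculation (this worked example is an assumption added here, not taken from the cited sources). Route information uncertainty can be expressed as Shannon entropy over the candidate routes that the driver considers plausible:

\[ H(R) = -\sum_{i=1}^{n} p(r_i)\,\log_2 p(r_i) \]

A driver who is certain of the correct route has p(r_1) = 1 and H(R) = 0 bits, whereas a driver who regards eight routes as equally plausible has H(R) = log_2 8 = 3 bits. On this reading, excellent navigation skills keep H(R) close to zero, and the physical disorder and unproductive energy expenditure described above tend to grow as H(R) grows.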
By contrast, the worse the fit between a behavioral trait and the environment in which the person intends to survive, the more information the person will be lacking about how to survive. This can increase physical disorder in the person’s actions taken to try to survive in the environment, and increase the energy the person will expend unproductively in actions taken to try to survive. Hence, when there is a bad fit between a person’s behavioral trait and the environment, the person cannot survive with least action and cannot have much energy free for other actions. In such situations, a person may not be able to maintain internal stability by internal regulation through homeostasis. Rather, the person can experience allostatic overload. This can happen when internal regulatory work increases to the point where energy demand exceeds energy supply and a person is depleted of the resources needed to function well. This can lead to altered activity in brain areas involved in law-abiding and moral behavior. Specifically, severe stress decreases activity within brain areas that support some of the highest forms of contextual integration, leads to top-down collapse of higher goals, and favors short-term aims [28,29].
Consider, for example, a truck driver who does not have good navigation skills and so experiences high situated entropy [24,25]. Such a truck driver is lacking in correct route information and so does experience information-theoretic entropy from information uncertainty. Consequently, the truck driver does not travel to delivery destinations directly, and so does experience the statistical mechanics entropy of the physical disorder entailed in driving incorrect routes. As the truck driver is often driving incorrect routes, the truck driver is under additional time pressure and does not have any time to take rest breaks that include eating properly. Furthermore, as time pressure increases, the truck driver can experience increasing ethical temptation to drive through traffic lights as they are turning from orange to red. Every time the truck driver resists the temptation to drive through such traffic lights, the truck driver has to exert self-regulatory control that can become depleted.
In addition, the truck driver experiences thermodynamic entropy. Before getting into the truck, depending on previous food intake, the driver can have thermodynamic free energy available for doing useful work. However, once allocated to truck driving without correct route information, the truck driver’s energy changes from being potentially useful to being practically useless. This is because the truck driver’s energy is dissipated amidst the thermodynamic entropy entailed in the physical disorganization of driving incorrect routes due to lack of correct route information. Hence, the truck driver does not have energy free for other actions, such as recreational activities that can facilitate sleeping well. Thus, the truck driver who does not have good navigation skills may be less likely to maintain balance between energy supply and energy demands than a truck driver who does have good navigation skills.
As summarized in Table 1 below, as well as being under time pressure and being depleted, the truck driver may feel extreme survival pressure if a long track record of poor delivery performance has placed the truck driver under threat of dismissal from the last haulage company that will provide employment. In such a situation, the truck driver may suffer from chronic stress because of loss of resources [30] due to erratic employment in the past, and from chronic anxiety about how to survive in the future [31]. In such a situation, there can be increased potential for questionable decisions and dubious actions. For example, for the truck driver to continue to drive while feeling sleepy [32].
From the perspective of behavioral ethics, when the same truck driver is not under so much time pressure, is not depleted, and is not under imminent survival threat, the same truck driver would stop as traffic lights turn from green to orange and would not continue to drive when starting to feel sleepy. This could happen if, amidst a severe shortage of truck drivers [33], haulage companies could install new AI-supported navigation systems in trucks that could improve the delivery performance of truck drivers [34]. Thus, although the human truck driver alone may have the behavioral trait of lacking navigation skills, the same human in a human–AI system could have the behavioral trait of having navigation skills. This could lead to reduced situated entropy, i.e., reduced thermodynamic entropy because of reduced statistical mechanics entropy due to reduced information-theoretic entropy. Hence, the ecological fitness function of a behavioral trait is high when the situated entropy arising from that trait in an environment is low.

3. Phylogeny

Hitherto, the evolution of behavioral traits in populations may have taken place over many generations within natural environments that changed little over millennia. For example, humans developed navigation skills, which are useful today, when we were finding our way as hunter-gatherers [35,36,37]. Although human capabilities evolved through many millennia [38,39], we are now trying to survive in environments that can change rapidly, at least partially because of human ecosystem engineering [40,41,42,43], and that bring an increasing variety of survival threats such as unemployment [44,45] and health challenges [46,47]. Moreover, human ecosystem engineering can quickly reduce human capabilities that evolved over millennia. For example, Internet-enabled navigation systems can reduce human navigation skills [48].
The development of vehicle–infrastructure integration for human–AI systems is an example of rapid widespread ecosystem engineering. This involves the re-engineering of public roads into so-called smart roads that can provide “cooperative infrastructure” for vehicles that have autonomous functionality. This involves a range of costly engineering activities, such as the installation of V2X (vehicle-to-everything) Wi-Fi. In addition, sensors are being developed for integration into road surfaces, which can inform communication to autonomous vehicles by new types of smart road signage. The economic viability of engineering work settings depends on the number of operations over which the high financial costs can be spread. Such financial costs may be economically viable over main roads with a high frequency of vehicles, but are prohibitively expensive for low-frequency roads and for off-road locations [49,50,51,52]. Accordingly, the extent of vehicle functioning that is technically feasible can change as a truck is driven on different types of roads with different levels of cooperative infrastructure.
As summarized in Table 2 below, this example illustrates that the series of evolutionary steps in a population (i.e., phylogeny) for behavioral traits of human–AI systems can involve natural evolution over millennia combined with increasingly rapid technological evolution over centuries, decades, and years. For example, human navigation capabilities have evolved over millennia, road networks have evolved over centuries, trucks have evolved over decades, and cooperative infrastructures have evolved over years.
However, the extent of each trait component can be dependent on situation-specific variables, not least environmental conditions that can limit the use of technological components. For example, erratic Internet coverage can make some AI operations erratic [53]. In addition, unfavorable weather can limit the use of technological components. For example, all technological components may function in a fully cooperative road infrastructure in favorable weather conditions, but extreme weather events can flood roads, stop trucks, and prevent cooperative infrastructure from functioning. Occurrences of such unfavorable weather conditions are increasing [54,55]. Accordingly, the function of human–AI navigation systems is not robust from an evolutionary perspective. That is, the function of human–AI navigation systems is not persistent under environmental perturbations [56,57]. Rather, the function of human–AI navigation systems is only fully operational in ideal conditions. Consequently, such systems can be relied upon only in ideal environmental conditions to reduce the information uncertainty that could otherwise arise from poor route information, which would lead to the physical disorder of driving incorrect routes and consequent unproductive energy expenditure. In summary, situated entropy in human–AI systems is dynamic as truck drivers and their vehicles pass through different environmental conditions. Thus, human–AI systems can be relied upon only in ideal conditions to reduce situated entropy, which would reduce the human truck driver’s time pressures and depletion. Hence, human–AI systems can be relied upon only in ideal conditions to reduce the potential for questionable decisions and dubious actions, such as continuing to drive while feeling sleepy.

4. Mechanism

Human navigational skills are founded on wayfinding involving memory, perception, and attention [58]. Whereas navigating involves following a preset route, wayfinding involves the ability to create a novel route that is based on understanding a wider frame of reference than a preset route [59,60]. Wayfinding involves creating novel routes through changing situations by making non-conscious reference to cognitive maps and conscious reference to waypoints [35,36]. Here, cognitive maps are mental representations of spatial relations [61]. Waypoints can be physical and natural, such as desert oases and rocky outcrops. Waypoints can be physical and human-made, such as beacons and buoys. Waypoints can be digital and human-made, such as landmarks in digital maps [62].
Human wayfinding skills evolved when we were hunter-gatherers [63,64]. Human wayfinding can require dynamic cognitive activity [65], in particular, dynamic embodied cognition [66]. That is, cognition that depends on sensory inputs brought by and processed by the physical body that shapes prior beliefs and action outputs [67]. Different memory capacities can affect different individuals’ formulation of cognitive maps [68], which can be held within different individuals’ different embodied cognitive architectures [69,70]. Human skills that have evolved over millennia of wayfinding can be quickly reduced by current human ecosystem engineering, such as global Internet-enabled navigation systems [48]. This can be due to such systems reducing the use of memory, perception, attention, and cognitive diligence that have hitherto been essential to human wayfinding [64,71]. Thus, different people can have different navigation skills, and their navigation skills can change over time.
Accordingly, if the AI components of a human–AI system for truck navigation are not available, for example due to lack of Internet access or extreme weather, some truck drivers will sometimes get lost rather than going straight to delivery destinations. This may be because truck drivers are not able to minimize disparities between route assumptions based on prior beliefs, individual waypoints, and overall spatial structures. There can be many opportunities for getting lost when human wayfinding skills are poor and AI-enabled navigation support is not available. For example, getting lost may be more likely when spatial structures cannot be seen as a whole. Moreover, getting lost may be more likely when an individual’s perception of the spatial structure based on prior beliefs does not correspond to the actual structure. In addition, individuals can get lost if they try to recall and apply a specific route instead of developing a mental spatial representation by inferring location on the move. This can happen even in structured environments such as inside buildings [72], where the final phase of a delivery may need to be made. Accordingly, the human mechanism and the AI mechanism within a human–AI system may be complementary in some situations, but have negative unintended interactions in the longer term.
Thus, it cannot be assumed that a human–AI truck navigation system will eliminate the information uncertainty, physical disorder, and unproductive energy expenditure involved in getting lost. Rather, situated entropy in human–AI systems is dynamic. Hence, as summarized in Table 3 below, it cannot be assumed that a human–AI truck navigation system will always prevent human truck drivers from coming under time pressure and becoming sufficiently depleted to make questionable decisions and take dubious actions. Rather, there can be many scenarios where ethical behavior is at risk. Accordingly, the mechanism of a human–AI truck navigation system should include additional components to support behavioral ethics. For example, automated messages could be provided to encourage ethical actions and to remind drivers about ethical actions. These can be considered as cues to support moral motivation [7,73]. Such messages do not have to be dependent on Internet access because they can be communicated via text messages [74,75]. Messages can encourage ethical actions indirectly, for example, by encouraging taking the necessary number and duration of rest breaks in order to reduce the potential for truck drivers becoming depleted. Messages can encourage ethical actions directly if they are related to a performance appraisal methodology that rewards ethical behavior [73]. Messages reminding about ethical actions can, for example, remind truck drivers not to drive while feeling sleepy, because truck driver sleepiness is a major cause of road accidents [76]. All messages should be in accordance with a policy that addresses the potential for productivity incentives to lead unintentionally to unethical actions [8,77].
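As a minimal sketch of how such message cues might be implemented (the thresholds, field names, and the send_text_message placeholder below are hypothetical assumptions for illustration, not components of any cited system), rule-based reminders could look as follows:

```python
from dataclasses import dataclass

# Hypothetical limits used purely for illustration.
MAX_CONTINUOUS_DRIVING_HOURS = 4.5
MIN_BREAK_MINUTES = 45

@dataclass
class DriverState:
    """Simplified, assumed driver-state fields; a real system would need validated telematics data."""
    hours_since_last_break: float
    reported_sleepiness: bool  # e.g., from a self-report prompt

def ethical_cue_messages(state: DriverState) -> list[str]:
    """Return reminder messages supporting rest breaks and not driving while sleepy."""
    messages = []
    if state.hours_since_last_break >= MAX_CONTINUOUS_DRIVING_HOURS:
        messages.append(
            f"You have driven {state.hours_since_last_break:.1f} h without a break. "
            f"Please take at least a {MIN_BREAK_MINUTES} min rest break now."
        )
    if state.reported_sleepiness:
        messages.append("Do not continue driving while feeling sleepy. Stop at the next safe location.")
    return messages

def send_text_message(text: str) -> None:
    # Placeholder: a real deployment could use an SMS gateway, which does not require
    # Internet access in the truck cab.
    print(text)

if __name__ == "__main__":
    for msg in ethical_cue_messages(DriverState(hours_since_last_break=5.0, reported_sleepiness=True)):
        send_text_message(msg)
```

In line with the policy point above, any such rules would need human oversight so that reminders support, rather than substitute for, adequate rest and ethical working conditions.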
Thus, while the natural mechanism for human navigation has evolved over millennia, the mechanism of ethical human-AI systems for vehicle navigation also needs to encompass rapidly evolving AI for vehicle navigation. In addition, it needs to include rapidly evolving AI to provide encouragement and reminders for ethical behavior, and a management policy that is carefully formulated by and overseen by humans to facilitate intended positive ethical outcomes and prevent negative unintended ethical consequences. The management policy should involve humans having the capability to oversee the overall activity of the system in terms of its relation to laws, regulations, and standards [78]. Moreover, it should include human intervention during the monitoring of the system’s operation, and have capacity for human intervention in every decision cycle of the system. Such management policies should be documented for impartial external audit [79].

5. Ontogeny

How a behavioral trait develops in an individual can be affected by experience and personality. Consider, for example, four different truck drivers, as summarized in Table 4. Two of them have many years of experience of navigating successfully on traditional road infrastructure. Neither has used digital navigation systems. One has not used them only because of not having needed to do so. The other has not used them because of suspicion of AI. For example, people can have concerns about AI and robotics taking jobs [80]. This could be based on the truck driver believing that the use of AI would include AI learning to enable full truck automation and full truck driver unemployment in the near future [81]. At the same time, some truck drivers could have wider concerns, such as AI and robotics developing dangerous superintelligence, harboring malicious intrinsic motivations, and enacting unfavorable intentions [82,83,84,85]. Hence, some people can be reluctant to participate in human–AI systems [86]. The second truck driver is also prone to anxiety. The other two truck drivers are digital natives [87]. Neither of them has experience of navigating successfully without digital navigation support. One of them has not tried navigating without digital support because of not having needed to do so. The other has not tried because of suspicion of traditional methods of navigation. The fourth truck driver is also prone to anxiety. Personality types with a propensity for anxiety have been associated with reluctance to use new technology [88] and with truck driver accidents [89].
Human choices are often not based on objective evaluation of utility. Rather, choices can be influenced by numerous biases and include over-emphasizing potential outcomes that are extreme but unlikely [31,90]. As summarized in Table 4 below, different experience and personality can lead to some people wanting to use new technology and other people not wanting to use the same new technology. These behaviors can be described in terms of approaching and avoiding [91,92]. In particular, environments can be perceived to carry varying degrees of danger. Passive avoidance can arise in environments where the extent of danger is uncertain. Especially for people who have a general propensity for anxiety, avoidance can be accompanied by chronically high levels of anxiety [93,94,95]. Chronic anxiety can lead to chronic stress [96,97,98]. This has serious implications for ethical behavior as severe stress can decrease activity within brain areas involved in law-abiding and moral behavior [28,29].

6. Behavioral Ecology Analysis of Ethical Stress in Human-AI Systems

6.1. Overview

An overview of the behavioral ecology analysis of ethical stress in human–AI systems is provided in Figure 1. It is appropriate to place situated entropy at the center of the analysis and development of ethical human–AI systems because, as well as having a determining influence over human stress [28,31,96], entropy is a fundamental concept in computer science and its applications [99,100,101,102]. In Figure 1, phylogeny (1) refers to the evolution of the mechanism (2) of a human–AI system before its introduction. Function (3) refers to function at introduction, which may lead to reduced situated entropy and related ethical stress in some situations, but increased situated entropy and related ethical stress in other situations. Phylogeny (1v2) refers to evolution after initial introduction that leads to an adapted mechanism (2v2), which provides adapted function (3v2) with increased potential to reduce situated entropy. Ontogeny (4.1) refers to the ontogeny of one person who experiences reduced situated entropy and related ethical stress. Ontogeny (4.2) refers to the ontogeny of another person who experiences increased situated entropy and related ethical stress. Ontogeny (4.2v2) refers to the same person after individual adaptation of the human–AI system leads to the person experiencing reduced situated entropy and related ethical stress. Feedback refers to the potential for individual adaptations to inform further phylogeny.
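Purely as an illustrative sketch of the analysis structure in Figure 1 (the class and field names below are assumptions made here, not part of the published figure), the relationships between phylogeny, mechanism, function, and ontogeny could be represented as follows:

```python
from dataclasses import dataclass, field

@dataclass
class SystemVersion:
    """One version of a human-AI system: its evolutionary step (phylogeny),
    its structure (mechanism), and its fit to the environment (function)."""
    phylogeny: str  # e.g., "evolution before introduction (1)" or "evolution after introduction (1v2)"
    mechanism: str  # e.g., "initial mechanism (2)" or "adapted mechanism (2v2)"
    function: str   # e.g., "function at introduction (3)" or "adapted function (3v2)"

@dataclass
class Ontogeny:
    """How one individual's interaction with a system version develops."""
    person: str
    situated_entropy: str        # "reduced" or "increased"
    related_ethical_stress: str  # "reduced" or "increased"

@dataclass
class BehavioralEcologyAnalysis:
    """Container mirroring Figure 1: system versions, per-person ontogenies,
    and feedback from individual adaptations into further phylogeny."""
    versions: list[SystemVersion] = field(default_factory=list)
    ontogenies: list[Ontogeny] = field(default_factory=list)
    feedback: list[str] = field(default_factory=list)
```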

6.2. AI Navigation Support for Human Truck Drivers

AI-enabled fully automated trucks are a goal for some road freight companies. However, this corporate goal is hindered by the need for human truck drivers to take care of the so-called first mile and last mile of deliveries where there is too much task variation for AI to deal with [103,104]. Despite the continued importance of human truck drivers, current approaches to AI implementation have led to the working lives of truck drivers becoming what has been described as a dystopian nightmare. This involves surveillance of truck drivers to such an extent that even their eye movements are monitored [105]. Accordingly, new approaches are needed for AI implementations in road freight.
As summarized in Table 1, human stress that can undermine behavioral ethics can be low when function is well matched to an environment. This is because there is low information uncertainty, low physical disorder, and low unproductive energy consumption. Hence, there is low situated entropy. However, as summarized in Table 2, human–AI system phylogeny can involve interrelated fitness components that evolve over very different time scales and can have very different levels of distribution. Moreover, human–AI system phylogeny can lead to a behavioral trait not being robust amidst environmental perturbations. Hence, there can be many situations in which human–AI systems will not reduce the situated entropy that brings ethical stress. Thus, while human–AI systems can introduce opportunities for reducing situated entropy, they can also introduce new challenges. In particular, human navigation skills can be undermined by frequent use of AI-enabled navigation systems [48], but AI-enabled navigation systems cannot be relied upon in all situations. Accordingly, human–AI navigation systems need to be situated within wider efforts that can reduce situated entropy, for example, the replacement of hundreds of separate delivery locations with common collection locations [106]. Furthermore, as summarized in Table 3, the human–AI system mechanism needs to include additional components within a management policy, for example, one that limits the potential for productivity incentives to lead unintentionally to unethical actions. However, this could increase the complexity and the risk of failure of a human–AI system. Accordingly, system design for high reliability is required [107]. As summarized in Table 4, it is important that ontogeny can include individualized adaptation of human–AI systems in order to mitigate the potential for individual differences in experience and personality to lead to chronic stress that can undermine human moral motivation. However, this can further increase complexity, which it may not be possible to offset completely with system design for high reliability [108]. An overall summary of opportunities and challenges for human–AI truck navigation systems is shown in Table 5.

6.3. AI Diagnostic Support for Human Healthcare Providers

Human–AI navigation systems provide quite straightforward examples of moral dilemmas, such as whether or not to drive through traffic lights as they turn from orange to red, and whether or not to continue to drive when feeling sleepy. However, there are many other potential applications of human–AI systems where moral dilemmas are less straightforward. Consider, for example, a human–AI system for planning recovery pathways for functional disorders, i.e., for medical conditions without complete medical explanation that impair normal functioning of bodily processes [109]. Here, moral dilemmas arise for healthcare organizations and their personnel from the challenge of deciding where to allocate and where not to allocate finite healthcare resources at taxpayers’ expense. In particular, lack of complete medical explanation for functional disorders can lead to concerns that they are actually factitious disorders or malingering [110]. Factitious disorder, which has also been called Munchausen syndrome, involves people behaving as if they have illnesses by deliberately producing, feigning, or exaggerating symptoms [111]. This is different from malingering, which involves deliberate effort to simulate illness in order to get out of obligations and/or to obtain benefits [112]. Accordingly, there can be stigma against patients with functional disorders that presents obstacles to diagnosis and treatment. It has been argued that symptoms can be misunderstood or dismissed because of stigma. Moreover, it has been argued that stigma exacerbates the suffering of patients and can result in poor clinical management involving prolonged use of healthcare resources [113]. Thus, new healthcare systems are needed that can provide better diagnosis and better recovery pathways for people suffering with functional disorders [114].
New investments in human resources are needed for new healthcare systems. In addition, artificial intelligence can contribute to new healthcare systems by carrying out automated analyses of healthcare study results, such as scans, and by analyzing patterns over a series of results. For example, gait problems are a feature of some functional disorders. Gait encompasses walking, running, and other means of natural locomotion combined with posture. Gait analysis includes the measurement of multiple parameters from which conclusions can be drawn about health [115]. Gait analysis could contribute to distinguishing between functional disorders, factitious disorders, and malingering, because gait involves complex natural processes that are difficult to fake consistently. Hence, gait analyses are used for security as well as for healthcare [116]. Some gait patterns are found frequently among patients with functional disorders. These include excessive gait slowness, knee buckling, and astasia-abasia, which refers to the inability to either stand or walk in a normal manner.
Regardless of what initiates functional symptoms, they can be perpetuated by phobic avoidance and affective disorders [117]. Accordingly, it is important that gait analysis reports do not contribute to phobic avoidance and affective disorders, but rather contribute to a care pathway for patient recovery [118]. For example, gait analysis reports can contribute to evaluation of patients’ complex disorder status, treatment, rehabilitation, and recovery [119]. In practical terms, this can involve important outcomes such as preventing wheelchair dependency [120]. A range of artificial intelligence techniques have been applied to gait analysis. These have been found to have potential for making positive contributions to detecting disorders and differentiating between disorders [121,122,123,124,125], including detecting affective disorders [116,126,127,128,129].
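As a minimal sketch of how such techniques might look in practice (the gait parameters, synthetic data, and choice of a random-forest classifier below are illustrative assumptions, not the methods of the cited studies), a classifier could be trained on a few simple gait parameters:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, assumed gait parameters per recording:
# gait speed (m/s), stride-length variability (cm), knee-buckling events per minute.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(1.0, 0.3, n),   # gait speed
    rng.normal(5.0, 2.0, n),   # stride-length variability
    rng.poisson(1.0, n),       # knee-buckling events
])
# Synthetic labels standing in for clinician-assigned classes
# (1 = gait pattern of interest indicated, 0 = not indicated).
y = ((X[:, 0] < 0.9) & (X[:, 2] > 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Producing a score of this kind is the easier part; as discussed next, explaining it to patients and healthcare providers is where the main challenges lie.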
However, there remain many challenges in the deployment of AI for gait analyses. Notably, there has been little progress in making AI-enabled gait analyses understandable for patients and healthcare providers [130]. Hence, both can experience much information uncertainty about gait analyses. As summarized in Table 6 below, if the human–AI system does have the function of reliable and explainable gait analysis, it can reduce the patient’s situated entropy in the patient’s healthcare environment. However, this depends upon the patient agreeing with the AI-enabled gait analysis, and patient acceptance of a diagnosis can depend upon patients’ preconceptions rather than the content of diagnoses [131,132]. If the patient does not agree, the patient can experience information uncertainty, physical disorder, and unproductive energy expenditure related to trying to find alternative treatment options, all of which can lead to stress that can worsen the patient’s condition. However, even if the patient does not agree with the diagnosis, the healthcare provider can experience reduced situated entropy about the allocation of treatment resources. This is because patients who do not agree with a diagnosis are less likely to respond to treatment, and resources have to be allocated appropriately in a healthcare provision environment of scarce resources [133]. Ongoing evolution of technological components has the potential to improve diagnoses. This could lead to AI-based diagnosis based on advanced technologies being seen as more reliable than human diagnosis. However, AI-enabled gait analysis will only be robust in an environment where the recording of the patient’s gait, for example with sensors and cameras, is ideal for the operation of those sensors and cameras. At the same time, the environment must be acceptable to the patient. In addition, the mechanism of the human–AI system can involve many software and hardware components for recording and analysis that can be difficult to combine. Moreover, patients need to be comfortable with having their gait recorded. Thus, the reliable function of objective gait analysis can be difficult to achieve even in ideal environments.
Furthermore, AI-enabled gait analysis cannot be reliable if the patient and/or the healthcare provider’s human expert are impelled to avoid AI due to personal factors such as experience and personality. In such situations, the patient could experience anxiety when interacting with AI that could unintentionally alter gait [134]. In addition, if the healthcare provider’s human expert does not trust AI [135], the expert may not base treatment recommendations on the AI-enabled gait analysis. This can be because the AI has a so-called black box model within which inputs and outputs can be seen, but the processes and workings in between them cannot be seen. New methods are being researched to obtain information from AI systems in order to generate explanations for their outputs. However, these methods have limitations such as only providing details relevant for a single decision, rather than providing underlying rationale or causality [136]. Accordingly, there is also research being carried out to develop less complex AI models. However, there are few working examples of such models in 2022 [137]. Accordingly, healthcare practitioners may be more suspicious than trusting of AI-enabled diagnoses.
In terms of ontogeny, both the patient’s and the expert’s interaction with the AI may evolve so that either or both may become more or less anxious about interacting with AI. Only if both approach AI, rather than avoid AI, can the human-AI system have the function of providing an objective basis for allocation of scarce healthcare resources. Accordingly, the behavioral ethics of healthcare resource allocation can only be better enabled by a human-AI system if the humans that are involved with the system do not suffer persistent anxiety and stress from interacting with AI. This can depend upon patients and healthcare providers attributing positive intentionality to AI [138,139]. Here, it is important to note that human perception has evolved to facilitate human survival [140,141] and, whatever the visual appearance of AI, patients and healthcare providers may see AI as a threat to survival [82,83,84,85] if they do not attribute positive intentionality to AI. While there may be sufficient time for a healthcare provider to undertake necessary steps to achieve attribution of positive intentionality to AI by its own personnel, such as co-creating working personas and scenarios for the AI [142], it is less likely that there will be enough time to do this for patients.

7. Conclusions

Humans no longer have millennia to evolve behavioral traits that can best enable survival in enduring environments. Rather, rapid changes to behaviors and environments come from the introduction of artificial intelligence (AI), and infrastructures that facilitate its application. In this paper, it has been explained how four fundamental questions of behavioral ecology can be applied to inform development of ethical human-AI systems. As summarized in Figure 1, analyzing human-AI systems in terms of function, phylogeny, mechanism, and ontogeny reveals that they can increase ethical stress that can lead to questionable decisions and dubious actions. Accordingly, application of the four fundamental questions can support balanced assessment of ethical human-AI system concepts, and provide a structure to improve their function, phylogeny, mechanism, and ontogeny for behavioral ethics during their development.
Overall, the paper provides two contributions: first, a behavioral ecology analysis of behavioral ethics; second, the application of behavioral ecology questions to identify opportunities and challenges for ethical human–AI systems. These include the need for policy-making to encompass the many environmental factors that can affect human–AI systems. This is imperative as there are many scenarios where environmental conditions can lead to situated entropy from system function that can cause ethical stress. Accordingly, it can be appropriate for policy-making to be informed through the application of methods such as task analysis and failure mode and effects analysis (FMEA). Task analysis involves detailed evaluation of mental and manual activities in work scenarios and how they can be affected by environmental conditions [143]. FMEA is a risk assessment method for identifying and prioritizing potential failures in systems so that they can be mitigated, and it has been used in a wide range of industries [144].
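As a brief illustration of how FMEA could be applied to a human–AI navigation system (the ratings below are hypothetical), each identified failure mode is conventionally assigned a risk priority number:

\[ \mathrm{RPN} = S \times O \times D \]

where S is severity, O is occurrence, and D is detection difficulty, each rated on a scale from 1 to 10. For example, the failure mode “navigation support unavailable due to loss of Internet coverage” might be rated S = 6, O = 5, and D = 3, giving RPN = 90; higher-scoring failure modes would then be prioritized for mitigation during policy-making and system design.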
Human-AI systems may be an important direction for future research into behavioral ethics as more time is spent in environments that are either partially or fully generated by artificial intelligence; for example, in virtual worlds that may be referred to as the metaverse [145,146]. Virtual worlds involve persistent immersive environments within which individual human users can have many avatars [147,148]. Although it is recognized that virtual environments can shape behavior in physical environments [149,150], and that there can be ethical issues from interplay between virtual behavior and physical behavior [151,152], behavioral ethics implications have not been considered in the development of human-AI systems that span physical and virtual environments. For example, via augmented reality [153]. Future research could consider to what extent, if any, human-AI systems can entail complementary physical and virtual sensory ecologies [154,155]. In addition, future research could investigate how to minimize the potential for human-AI systems to introduce perceptual traps [156] and/or ecological traps [157]. In doing so, future research could apply function, phylogeny, mechanism, and ontogeny as structuring constructs to inform the design of ethical human-AI systems that entail human transitions back and forth between physical environments and virtual environments.

Funding

This research was funded by European Commission grant number 952091.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Trevino, L.K.; Weaver, G.R. Business ethics: One field or two? Bus. Ethics Q. 1994, 4, 113–128. [Google Scholar] [CrossRef]
  2. Bersoff, D.M. Why good people sometimes do bad things: Motivated reasoning and unethical behavior. Personal. Soc. Psychol. Bull. 1999, 25, 28–39. [Google Scholar] [CrossRef]
  3. De Cremer, D.; Van Dick, R.; Tenbrunsel, A.; Pillutla, M.; Murnighan, J.K. Understanding ethical behavior and decision making in management: A behavioural business ethics approach. Br. J. Manag. 2011, 22, S1–S4. [Google Scholar] [CrossRef]
  4. De Cremer, D.; Vandekerckhove, W. Managing unethical behavior in organizations: The need for a behavioral business ethics approach. J. Manag. Organ. 2017, 23, 437–455. [Google Scholar] [CrossRef] [Green Version]
  5. Gino, F.; Schweitzer, M.E.; Mead, N.L.; Ariely, D. Unable to resist temptation: How self-control depletion promotes unethical behavior. Organ. Behav. Hum. Decis. Process. 2011, 115, 191–203. [Google Scholar] [CrossRef]
  6. Trevino, L.K. Ethical decision making in organizations: A person-situation interactionist model. Acad. Manag. Rev. 1986, 11, 601–617. [Google Scholar] [CrossRef]
  7. Wang, Y.; Wang, G.; Chen, Q.; Li, L. Depletion, moral identity, and unethical behavior: Why people behave unethically after self-control exertion. Conscious. Cogn. 2017, 56, 188–198. [Google Scholar] [CrossRef]
  8. Fleischman, G.M.; Johnson, E.N.; Walker, K.B.; Valentine, S.R. Ethics versus outcomes: Managerial responses to incentive-driven and goal-induced employee behavior. J. Bus. Ethics 2019, 158, 951–967. [Google Scholar] [CrossRef]
  9. Kaptein, M. The battle for business ethics: A struggle theory. J. Bus. Ethics 2017, 144, 343–361. [Google Scholar] [CrossRef] [Green Version]
  10. Weaver, G.R.; Clark, C.E. Behavioral ethics, behavioral governance, and corruption in and by organizations. In Debates of Corruption and Integrity; Hardi, P., Heywood, P., Torsello, D., Eds.; Palgrave Macmillan: London, UK, 2015; pp. 135–158. [Google Scholar]
  11. Grover, S.L.; Hui, C. How job pressures and extrinsic rewards affect lying behavior. Int. J. Confl. Manag. 2005, 16, 287–300. [Google Scholar] [CrossRef]
  12. Chen, M.; Chen, C.C.; Sheldon, O.J. Relaxing moral reasoning to win: How organizational identification relates to unethical pro-organizational behavior. J. Appl. Psychol. 2016, 101, 1082–1096. [Google Scholar] [CrossRef] [PubMed]
  13. Umphress, E.E.; Bingham, J.B.; Mitchell, M.S. Unethical behavior in the name of the company: The moderating effect of organizational identification and positive reciprocity beliefs on unethical pro-organizational behavior. J. Appl. Psychol. 2010, 95, 769–780. [Google Scholar] [CrossRef] [Green Version]
  14. Baur, C.; Soucek, R.; Kühnen, U.; Baumeister, R.F. Unable to resist the temptation to tell the truth or to lie for the organization? Identification makes the difference. J. Bus. Ethics 2020, 167, 643–662. [Google Scholar] [CrossRef]
  15. Avanzi, L.; Van Dick, R.; Fraccaroli, F.; Sarchielli, G. The downside of organizational identification: Relations between identification, workaholism and well-being. Work. Stress 2012, 26, 289–307. [Google Scholar] [CrossRef]
  16. Lee, E.-J.; Yun, J.H. Moral incompetency under time constraint. J. Bus. Res. 2019, 99, 438–445. [Google Scholar] [CrossRef]
  17. Shalvi, S.; Eldar, O.; Bereby-Meyer, Y. Honesty requires time (and lack of justifications). Psychol. Sci. 2012, 23, 1264–1270. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Aims & Scope. Human Ecology Review; Society for Human Ecology, ANU Press, The Australian National University: Canberra, Australia, 2021; Available online: https://press.anu.edu.au/publications/journals/human-ecology-review (accessed on 17 January 2021).
  19. Frake, C.O. Cultural ecology and ethnography. Am. Anthropol. 1962, 64, 53–59. [Google Scholar] [CrossRef]
  20. Nettle, D.; Gibson, M.A.; Lawson, D.W.; Sear, R. Human behavioral ecology: Current research and future prospects. Behav. Ecol. 2013, 24, 1031–1040. [Google Scholar] [CrossRef] [Green Version]
  21. Kapheim, K.M. Synthesis of Tinbergen’s four questions and the future of sociogenomics. Behav. Ecol. Sociobiol. 2019, 73, 186. [Google Scholar] [CrossRef]
  22. Maranges, H.M.; Hasty, C.R.; Maner, J.K.; Conway, P. The behavioral ecology of moral dilemmas: Childhood unpredictability, but not harshness, predicts less deontological and utilitarian responding. J. Personal. Soc. Psychol. 2021, 120, 1696–1719. [Google Scholar] [CrossRef]
  23. Bateson, P.; Laland, K.N. Tinbergen’s four questions: An appreciation and an update. Trends Ecol. Evol. 2013, 28, 712–718. [Google Scholar] [CrossRef] [PubMed]
  24. Kaila, V.R.; Annila, A. Natural selection for least action. Proc. R. Soc. A Math. Phys. Eng. Sci. 2008, 464, 3055–3070. [Google Scholar] [CrossRef] [Green Version]
  25. Fox, S. Synchronous generative development amidst situated entropy. Entropy 2022, 24, 89. [Google Scholar] [CrossRef]
  26. Fox, S.; Kotelba, A. Principle of Least Psychomotor Action: Modelling situated entropy in optimization of psychomotor work involving human, cyborg and robot workers. Entropy 2018, 20, 836. [Google Scholar] [CrossRef] [Green Version]
  27. Ramsay, D.S.; Woods, S.C. Clarifying the roles of homeostasis and allostasis in physiological regulation. Psychol. Rev. 2014, 121, 225–247. [Google Scholar] [CrossRef] [Green Version]
  28. Goekoop, R.; De Kleijn, R. How higher goals are constructed and collapse under stress: A hierarchical Bayesian control systems perspective. Neurosci. Biobehav. Rev. 2021, 123, 257–285. [Google Scholar] [CrossRef]
  29. Youssef, F.F.; Dookeeram, K.; Basdeo, V.; Francis, E.; Doman, M.; Mamed, D.; Maloo, S.; Degannes, J.; Dobo, L.; Ditshotlo, P. Stress alters personal moral decision making. Psychoneuroendocrinology 2012, 37, 491–498. [Google Scholar] [CrossRef]
  30. Hobfoll, S.E. Conservation of Resources Theory: Its Implication for Stress, Health, and Resilience. In The Oxford Handbook of Stress, Health, and Coping; Folkman, S., Ed.; Oxford Library of Psychology: Oxford, UK, 2011; pp. 127–147. [Google Scholar]
  31. Hirsh, J.B.; Mar, R.A.; Peterson, J.B. Psychological entropy: A framework for understanding uncertainty-related anxiety. Psychol. Rev. 2012, 119, 304. [Google Scholar] [CrossRef] [Green Version]
  32. Huhta, R.; Hirvonen, K.; Partinen, M. Prevalence of sleep apnea and daytime sleepiness in professional truck drivers. Sleep Med. 2021, 81, 136–143. [Google Scholar] [CrossRef]
  33. Mittal, N.; Udayakumar, P.D.; Raghuram, G.; Bajaj, N. The endemic issue of truck driver shortage-A comparative study between India and the United States. Res. Transp. Econ. 2018, 71, 76–84. [Google Scholar] [CrossRef]
  34. Loske, D.; Klumpp, M. Intelligent and efficient? An empirical analysis of human-AI collaboration for truck drivers in retail logistics. Int. J. Logist. Manag. 2021, 32, 1356–1383. [Google Scholar] [CrossRef]
  35. Istomin, K.V.; Dwyer, M.J. Finding the way: A critical discussion of anthropological theories of human spatial orientation with reference to reindeer herders of northeastern Europe and western Siberia. Curr. Anthropol. 2009, 50, 29–49. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Tuhkanen, S.; Pekkanen, J.; Rinkkala, P.; Mole, C.; Wilkie, R.M.; Lappi, O. Humans use predictive gaze strategies to target waypoints for steering. Sci. Rep. 2019, 9, 8344. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Golledge, R.; Garling, T. Cognitive maps and urban travel. In Handbook of Transport Geography and Spatial Systems, 3rd ed.; Hensher, D.A., Button, K.J., Haynes, K.E., Stopher, P.R., Eds.; Emerald: Bingley, UK, 2008; pp. 501–512. [Google Scholar]
  38. Gurven, M.; Kaplan, H. Longevity among hunter-gatherers: A cross-cultural examination. Popul. Dev. Rev. 2007, 33, 321–365. [Google Scholar] [CrossRef]
  39. Nairne, J.S.; Pandeirada, J.N.; Gregory, K.J.; Van Arsdall, J.E. Adaptive memory: Fitness relevance and the hunter-gatherer mind. Psychol. Sci. 2009, 20, 740–746. [Google Scholar] [CrossRef]
  40. Smith, B.D. The Ultimate ecosystem engineers. Science 2007, 315, 1797–1798. [Google Scholar] [CrossRef] [Green Version]
  41. Hetherington, K. (Ed.) Infrastructure, Environment, and Life in the Anthropocene; Duke University Press: Durham, NC, USA, 2018. [Google Scholar]
  42. Meadows, D.H.; Meadows, D.L.; Randers, J.; Behrens, W.W. The Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind; Universe Books: New York, NY, USA, 1972. [Google Scholar]
  43. Herrington, G. Update to limits to growth: Comparing the World3 model with empirical data. J. Ind. Ecol. 2020, 25, 614–626. [Google Scholar] [CrossRef]
  44. Nica, E. Will robots take the jobs of human workers? Disruptive technologies that may bring about jobless growth and enduring mass unemployment. Psychosociol. Issues Hum. Resour. Manag. 2018, 6, 56–61. [Google Scholar]
  45. Kral, P.; Janoskova, K.; Podhorska, I.; Pera, A.; Neguriţă, O. The automatability of male and female jobs: Technological unemployment, skill shift, and precarious work. J. Res. Gend. Stud. 2019, 9, 146–152. [Google Scholar]
  46. Cregan-Reid, V. Primate Change: How the World We Made Is Remaking Us; Hachette: London, UK, 2018. [Google Scholar]
  47. Tremblay, M.S.; Colley, R.C.; Saunders, T.J.; Healy, G.N.; Owen, N. Physiological and health implications of a sedentary lifestyle. Appl. Physiol. Nutr. Metab. 2010, 35, 725–740. [Google Scholar] [CrossRef]
  48. Baron, N.S. Know what? How digital technologies undermine learning and remembering. J. Pragmat. 2021, 175, 27–37. [Google Scholar] [CrossRef]
  49. Edwards, C. Every road tells a story: Communication smart roads. Eng. Technol. 2017, 12, 64–67. [Google Scholar] [CrossRef]
  50. Mi, C.C.; Buja, G.; Choi, S.Y.; Rim, C.T. Modern advances in wireless power transfer systems for roadway powered electric vehicles. IEEE Trans. Ind. Electron. 2016, 63, 6533–6545. [Google Scholar] [CrossRef]
  51. Johnson, C. Readiness of the Road Network for Connected and Autonomous Vehicles; RAC Foundation: London, UK, 2017. [Google Scholar]
  52. Wang, M.; Daamen, W.; Hoogendoorn, S.P.; Van Arem, B. Connected variable speed limits control and car-following control with vehicle-infrastructure communication to resolve stop-and-go waves. J. Intell. Transp. Syst. 2016, 20, 559–572. [Google Scholar] [CrossRef] [Green Version]
  53. Zhou, H.; Xu, W.; Chen, J.; Wang, W. Evolutionary V2X technologies toward the Internet of vehicles: Challenges and opportunities. Proc. IEEE 2020, 108, 308–323. [Google Scholar] [CrossRef]
  54. Mann, M.E.; Rahmstorf, S.; Kornhuber, K.; Steinman, B.A.; Miller, S.K.; Petri, S.; Coumou, D. Projected changes in persistent extreme summer weather events: The role of quasi-resonant amplification. Sci. Adv. 2018, 4, eaat3272. [Google Scholar] [CrossRef] [Green Version]
  55. Cohen, J.; Pfeiffer, K.; Francis, J.A. Warm Arctic episodes linked with increased frequency of extreme winter weather in the United States. Nat. Commun. 2018, 9, 869. [Google Scholar] [CrossRef]
  56. Kitano, H. Biological robustness. Nat. Rev. Genet. 2004, 5, 826–837. [Google Scholar] [CrossRef]
  57. Félix, M.-A.; Wagner, A. Robustness and evolution: Concepts, insights and challenges from a developmental model system. Heredity 2006, 100, 132–140. [Google Scholar] [CrossRef] [Green Version]
  58. Gillett, A.J.; Heersmink, R. How navigation systems transform epistemic virtues: Knowledge, issues and solutions. Cogn. Syst. Res. 2019, 56, 36–49. [Google Scholar] [CrossRef] [Green Version]
  59. Golledge, R.G. Human wayfinding and cognitive maps. In Wayfinding Behavior: Cognitive Mapping and Other Spatial Processes; Golledge, R.G., Ed.; John Hopkins University Press: Baltimore, MD, USA, 1999; pp. 5–45. [Google Scholar]
  60. Golledge, R.G.; Jacobson, R.D.; Kitchin, R.; Blades, M. Cognitive maps, spatial abilities, and human wayfinding. Geogr. Rev. Jpn. 2000, 73, 93–104. [Google Scholar] [CrossRef] [Green Version]
  61. Kitchin, R.M. Cognitive maps: What are they and why study them? J. Environ. Psychol. 1994, 14, 1–19. [Google Scholar] [CrossRef]
  62. Devi, S.; Alvares, S.; Lobo, S. GPS tracking system based on setting waypoint using geo-fencing. Asian J. Converg. Technol. 2019. Available online: https://asianssr.org/index.php/ajct/article/view/738 (accessed on 21 January 2021).
  63. Kada, A.T. Unfolding cultural meanings: Wayfinding practices among the San of the Central Kalahari. In Marking the Land; Lovis, W.A., Whallon, R., Eds.; Routledge: London, UK, 2016; pp. 194–214. [Google Scholar]
  64. Cornell, E.H.; Heth, C.D. Route learning and wayfinding. In Cognitive Mapping; Kitchin, R., Freundschuh, S., Eds.; Routledge: London, UK, 2018; pp. 66–83. [Google Scholar]
  65. Spiers, H.J.; Maguire, E.A. The dynamic nature of cognition during wayfinding. J. Environ. Psychol. 2008, 28, 232–249. [Google Scholar] [CrossRef] [Green Version]
  66. Gramann, K. Embodiment of spatial reference frames and individual differences in reference frame proclivity. Spat. Cogn. Comput. 2013, 13, 1–25. [Google Scholar] [CrossRef]
  67. Fox, S. Psychomotor predictive processing. Entropy 2021, 23, 806. [Google Scholar] [CrossRef]
  68. Weisberg, S.M.; Newcombe, N.S. How do (some) people make a cognitive map? Routes, places, and working memory. J. Exp. Psychol. Learn. Mem. Cogn. 2016, 42, 768–785. [Google Scholar] [CrossRef]
  69. Ziemke, T.; Lowe, R. On the role of emotion in embodied cognitive architectures: From organisms to robots. Cogn. Comput. 2009, 1, 104–117. [Google Scholar] [CrossRef]
  70. Ziemke, T. The body of knowledge: On the role of the living body in grounding embodied cognition. Biosystems 2016, 148, 4–11. [Google Scholar] [CrossRef] [Green Version]
  71. Menary, R. Keeping track with things. In Extended Epistemology; Carter, J.A., Clark, A., Kallestrup, J., Palermos, S.O., Pritchard, D., Eds.; Oxford University Press: Oxford, UK, 2018; pp. 305–330. [Google Scholar]
  72. Carlson, L.A.; Hölscher, C.; Shipley, T.F.; Dalton, R.C. Getting lost in buildings. Curr. Dir. Psychol. Sci. 2010, 19, 284–289. [Google Scholar] [CrossRef]
  73. Hirsh, J.B.; Lu, J.G.; Galinsky, A.D. Moral utility theory: Understanding the motivation to behave (un) ethically. Res. Organ. Behav. 2018, 38, 43–59. [Google Scholar] [CrossRef]
  74. Abroms, L.C.; Whittaker, R.; Free, C.; Van Alstyne, J.M.; Schindler-Ruwisch, J.M. Developing and pretesting a text messaging program for health behavior change: Recommended steps. JMIR mHealth uHealth 2015, 3, e4917. [Google Scholar] [CrossRef] [PubMed]
  75. Sahin, C.; Courtney, K.L.; Naylor, P.J.; Rhodes, R. Tailored mobile text messaging interventions targeting type 2 diabetes self-management: A systematic review and a meta-analysis. Digit. Health 2019, 5. [Google Scholar] [CrossRef] [Green Version]
  76. Garbarino, S.; Durando, P.; Guglielmi, O.; Dini, G.; Bersi, F.; Fornarino, S.; Toletone, A.; Chiorri, C.; Magnavita, N. Sleep apnea, sleep debt and daytime sleepiness are independently associated with road accidents. A cross-sectional study on truck drivers. PLoS ONE 2016, 11, e0166262. [Google Scholar] [CrossRef]
  77. Mahajan, K.; Velaga, N.R.; Kumar, A.; Choudhary, A.; Choudhary, P. Effects of driver work-rest patterns, lifestyle and payment incentives on long-haul truck driver sleepiness. Transp. Res. Part F Traffic Psychol. Behav. 2019, 60, 366–382. [Google Scholar] [CrossRef] [Green Version]
  78. Perc, M.; Ozer, M.; Hojnik, J. Social and juristic challenges of artificial intelligence. Palgrave Commun. 2019, 5, 61. [Google Scholar] [CrossRef]
  79. Van der Wiele, T.; Kok, P.; McKenna, R.; Brown, A. A corporate social responsibility audit within a quality management framework. J. Bus. Ethics 2001, 31, 285–297. [Google Scholar] [CrossRef]
  80. Sahota, N.; Ashley, M. When robots replace human managers: Introducing the quantifiable workplace. IEEE Eng. Manag. Rev. 2019, 47, 21–23. [Google Scholar] [CrossRef]
  81. Snoeck, A.; Merchán, D.; Winkenbach, M. Route learning: A machine learning-based approach to infer constrained customers in delivery routes. Transp. Res. Procedia 2020, 46, 229–236. [Google Scholar] [CrossRef]
  82. Barrat, J. Our Final Invention: Artificial Intelligence and the End of the Human Era; St. Martin’s Press: New York, NY, USA, 2013. [Google Scholar]
  83. Cave, S.; Coughlan, K.; Dihal, K. “Scary robots”: Examining public responses to AI. In Proceedings of the AIES 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019. [Google Scholar]
  84. Mathur, M.B.; Reichling, D.B. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition 2016, 146, 22–32. [Google Scholar] [CrossRef] [Green Version]
  85. Vanderelst, D.; Winfield, A. The dark side of ethical robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; pp. 317–322. [Google Scholar]
  86. Klumpp, M. Automation and artificial intelligence in business logistics systems: Human reactions and collaboration requirements. Int. J. Logist. Res. Appl. 2018, 21, 224–242. [Google Scholar] [CrossRef]
  87. Kavouras, M.; Kokla, M.; Liarokapis, F.; Pastra, K.; Tomai, E. Comparative study of the interaction of digital natives with mainstream web mapping services. In Human-Computer Interaction. Design and User Experience Case Studies: Thematic Area, HCI 2021; Kurosu, M., Ed.; Lecture Notes in Computer Science; Springer Nature: New York, NY, USA, 2021; Volume 12764, pp. 337–350. [Google Scholar]
  88. Matthews, G.; Hancock, P.A.; Lin, J.; Panganiban, A.R.; Reinerman-Jones, L.E.; Szalma, J.L.; Wohleber, R.W. Evolution and revolution: Personality research for the coming world of robots, artificial intelligence, and autonomous systems. Personal. Individ. Differ. 2021, 169, 109969. [Google Scholar] [CrossRef]
  89. Landay, K.; Wood, D.; Harms, P.D.; Ferrell, B.; Nambisan, S. Relationships between personality facets and accident involvement among truck drivers. J. Res. Personal. 2020, 84, 103889. [Google Scholar] [CrossRef]
  90. Fox, S. Factors in ontological uncertainty related to ICT innovations. Int. J. Manag. Proj. Bus. 2011, 4, 137–149. [Google Scholar] [CrossRef]
  91. Hwang, Y. Investigating enterprise systems adoption: Uncertainty avoidance, intrinsic motivation, and the technology acceptance model. Eur. J. Inf. Syst. 2005, 14, 150–161. [Google Scholar] [CrossRef]
  92. Liang, H.; Xue, Y. Avoidance of information technology threats: A theoretical perspective. MIS Q. 2009, 33, 71–90. [Google Scholar] [CrossRef] [Green Version]
  93. Perusini, J.N.; Fanselow, M.S. Neurobehavioral perspectives on the distinction between fear and anxiety. Learn. Mem. 2015, 22, 417–425. [Google Scholar] [CrossRef] [Green Version]
  94. Robinson, O.J.; Pike, A.C.; Cornwell, B.; Grillon, C. The translational neural circuitry of anxiety. J. Neurol. Neurosurg. Psychiatry 2019, 90, 1353–1360. [Google Scholar] [CrossRef] [Green Version]
  95. Ruscio, A.M.; Hallion, L.S.; Lim, C.C.W.; Aguilar-Gaxiola, S.; Al-Hamzawi, A.; Alonso, J.; Andrade, L.H.; Borges, G.; Bromet, E.J.; Bunting, B.; et al. Cross-sectional comparison of the epidemiology of DSM-5 generalized anxiety disorder across the globe. JAMA Psychiatry 2017, 74, 465–475. [Google Scholar] [CrossRef] [Green Version]
  96. Peters, A.; McEwen, B.S.; Friston, K. Uncertainty and stress: Why it causes diseases and how it is mastered by the brain. Prog. Neurobiol. 2017, 156, 164–188. [Google Scholar] [CrossRef]
  97. Vyas, A.; Chattarji, S. Modulation of different states of anxiety-like behavior by chronic stress. Behav. Neurosci. 2004, 118, 1450. [Google Scholar] [CrossRef] [PubMed]
  98. Patriquin, M.A.; Mathew, S.J. The neurobiological mechanisms of generalized anxiety disorder and chronic stress. Chronic Stress 2017, 1. [Google Scholar] [CrossRef] [PubMed]
  99. Zurek, W.H. Complexity, Entropy and the Physics of Information; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  100. Valavanis, K.P. The entropy based approach to modeling and evaluating autonomy and intelligence of robotic systems. J. Intell. Robot. Syst. 2018, 91, 7–22. [Google Scholar] [CrossRef]
  101. Wang, Z.L. Entropy theory of distributed energy for internet of things. Nano Energy 2019, 58, 669–672. [Google Scholar] [CrossRef]
  102. Wu, Z.; Sun, L.; Zhan, W.; Yang, C.; Tomizuka, M. Efficient sampling-based maximum entropy inverse reinforcement learning with application to autonomous driving. IEEE Robot. Autom. Lett. 2020, 5, 5355–5362. [Google Scholar] [CrossRef]
  103. Müller, S.; Voigtländer, F. Automated trucks in road freight logistics: The user perspective. In Advances in Production, Logistics and Traffic; Clausen, U., Langkau, S., Kreuz, F., Eds.; ICPLT 2019 Lecture Notes in Logistics; Springer: Cham, Switzerland, 2019; pp. 102–115. [Google Scholar]
  104. Korteling, J.; Van de Boer-Visschedijk, G.; Blankendaal, R.; Boonekamp, R.; Eikelboom, A. Human-versus artificial intelligence. Front. Artif. Intell. 2021, 4, 622364. [Google Scholar] [CrossRef]
  105. Kaiser-Schatzlein, R. How life as a trucker devolved into a dystopian nightmare. The New York Times, 15 March 2022. Available online: https://www.nytimes.com/2022/03/15/opinion/truckers-surveillance.html (accessed on 7 April 2022).
  106. Yuen, K.F.; Wang, X.; Ma, F.; Wong, Y.D. The determinants of customers’ intention to use smart lockers for last-mile deliveries. J. Retail. Consum. Serv. 2019, 49, 316–326. [Google Scholar] [CrossRef]
  107. Sha, L.; Goodenough, J.B.; Pollak, B. Simplex architecture: Meeting the challenges of using COTS in high-reliability systems. Crosstalk 1998, 7–10. [Google Scholar]
  108. Bailey, N.R.; Scerbo, M.W. Automation-induced complacency for monitoring highly reliable systems: The role of task complexity, system experience, and operator trust. Theor. Issues Ergon. Sci. 2007, 8, 321–348. [Google Scholar] [CrossRef]
  109. Stone, J. Functional symptoms in neurology: The bare essentials. Pract. Neurol. 2009, 9, 179–189. [Google Scholar] [CrossRef]
  110. Bass, C.; Halligan, P. Factitious disorders and malingering in relation to functional neurologic disorders. Handb. Clin. Neurol. 2016, 139, 509–520. [Google Scholar] [PubMed]
  111. Jimenez, X.F.; Nkanginieme, N.; Dhand, N.; Karafa, M.; Salerno, K. Clinical, demographic, psychological, and behavioral features of factitious disorder: A retrospective analysis. Gen. Hosp. Psychiatry 2020, 62, 93–95. [Google Scholar] [CrossRef] [PubMed]
  112. Bass, C.; Wade, D.T. Malingering and factitious disorder. Pract. Neurol. 2019, 19, 96–105. [Google Scholar] [CrossRef]
  113. MacDuffie, K.E.; Grubbs, L.; Best, T.; LaRoche, S.; Mildon, B.; Myers, L.; Stafford, E.; Rommelfanger, K.S. Stigma and functional neurological disorder: A research agenda targeting the clinical encounter. CNS Spectr. 2021, 26, 587–592. [Google Scholar] [CrossRef]
  114. Stone, J. Functional neurological disorders: The neurological assessment as treatment. Pract. Neurol. 2016, 16, 7–17. [Google Scholar] [CrossRef]
  115. Collins, R.T.; Gross, R.; Shi, J. Silhouette-based human identification from body shape and gait. In Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 21 May 2002; pp. 366–371. [Google Scholar]
  116. Costilla-Reyes, O.; Vera-Rodriguez, R.; Alharthi, A.S.; Yunas, S.U.; Ozanyan, K.B. Deep learning in gait analysis for security and healthcare. In Deep Learning: Algorithms and Applications; Pedrycz, W., Chen, S.M., Eds.; Springer: Cham, Switzerland, 2020; pp. 299–334. [Google Scholar]
  117. Espay, A.J.; Aybek, S.; Carson, A.; Edwards, M.J.; Goldstein, L.H.; Hallett, M.; LaFaver, K.; LaFrance, W.C., Jr.; Lang, A.E.; Morgante, F. Current concepts in diagnosis and treatment of functional neurological disorders. JAMA Neurol. 2018, 75, 1132–1141. [Google Scholar] [CrossRef]
  118. Allen, D. From boundary concept to boundary object: The practice and politics of care pathway development. Soc. Sci. Med. 2009, 69, 354–361. [Google Scholar] [CrossRef] [PubMed]
  119. Prakash, C.; Kumar, R.; Mittal, N. Recent developments in human gait research: Parameters, approaches, applications, machine learning techniques, datasets and challenges. Artif. Intell. Rev. 2018, 49, 1–40. [Google Scholar] [CrossRef]
  120. Baizabal-Carvallo, J.F.; Alonso-Juarez, M.; Jankovic, J. Functional gait disorders, clinical phenomenology, and classification. Neurol. Sci. 2020, 41, 911–915. [Google Scholar] [CrossRef] [PubMed]
  121. Khera, P.; Kumar, N. Role of machine learning in gait analysis: A review. J. Med. Eng. Technol. 2020, 44, 441–467. [Google Scholar] [CrossRef] [PubMed]
  122. Schniepp, R.; Möhwald, K.; Wuehr, M. Clinical and automated gait analysis in patients with vestibular, cerebellar, and functional gait disorders: Perspectives and limitations. J. Neurol. 2019, 266, 118–122. [Google Scholar] [CrossRef] [PubMed]
  123. Slijepcevic, D.; Zeppelzauer, M.; Gorgas, A.M.; Schwab, C.; Schüller, M.; Baca, A.; Breiteneder, C.; Horsak, B. Automatic classification of functional gait disorders. IEEE J. Biomed. Health Inform. 2017, 22, 1653–1661. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. Pogorelc, B.; Bosnić, Z.; Gams, M. Automatic recognition of gait-related health problems in the elderly using machine learning. Multimed. Tools Appl. 2012, 58, 333–354. [Google Scholar] [CrossRef] [Green Version]
  125. Yang, M.; Zheng, H.; Wang, H.; McClean, S.; Hall, J.; Harris, N. A machine learning approach to assessing gait patterns for complex regional pain syndrome. Med. Eng. Phys. 2012, 34, 740–746. [Google Scholar] [CrossRef]
  126. Hausdorff, J.M.; Peng, C.K.; Goldberger, A.L.; Stoll, A.L. Gait unsteadiness and fall risk in two affective disorders: A preliminary study. BMC Psychiatry 2004, 4, 39. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  127. Popkirov, S.; Hoeritzauer, I.; Colvin, L.; Carson, A.J.; Stone, J. Complex regional pain syndrome and functional neurological disorders–time for reconciliation. J. Neurol. Neurosurg. Psychiatry 2019, 90, 608–614. [Google Scholar] [CrossRef] [Green Version]
  128. Thieme, K.; Turk, D.C.; Flor, H. Comorbid depression and anxiety in fibromyalgia syndrome: Relationship to somatic and psychosocial variables. Psychosom. Med. 2004, 66, 837–844. [Google Scholar] [CrossRef]
  129. Zhao, N.; Zhang, Z.; Wang, Y.; Wang, J.; Li, B.; Zhu, T.; Xiang, Y. See your mental state from your walk: Recognizing anxiety and depression through Kinect-recorded gait data. PLoS ONE 2019, 14, e0216591. [Google Scholar] [CrossRef] [Green Version]
  130. Slijepcevic, D.; Horst, F.; Lapuschkin, S.; Raberger, A.M.; Zeppelzauer, M.; Samek, W.; Breiteneder, C.; Schöllhorn, W.I.; Horsak, B. On the explanation of machine learning predictions in clinical gait analysis. arXiv 2019, arXiv:1912.07737. [Google Scholar]
  131. Zogas, A. “We have no magic bullet”: Diagnostic ideals in veterans’ mild traumatic brain injury evaluations. Patient Educ. Couns. 2021, 105, 654–659. [Google Scholar] [CrossRef]
  132. Dunn, C.E.; Edwards, A.; Carter, B.; Field, J.K.; Brain, K.; Lifford, K.J. The role of screening expectations in modifying short–term psychological responses to low-dose computed tomography lung cancer screening among high-risk individuals. Patient Educ. Couns. 2017, 100, 1572–1579. [Google Scholar] [CrossRef] [PubMed]
  133. Lidstone, S.C.; MacGillivray, L.; Lang, A.E. Integrated therapy for functional movement disorders: Time for a change. Mov. Disord. Clin. Pract. 2020, 7, 169–174. [Google Scholar] [CrossRef]
  134. Gage, W.H.; Sleik, R.J.; Polych, M.A.; McKenzie, N.C.; Brown, L.A. The allocation of attention during locomotion is altered by anxiety. Exp. Brain Res. 2003, 150, 385–394. [Google Scholar] [CrossRef] [PubMed]
  135. Hatherley, J.J. Limits of trust in medical AI. J. Med. Ethics 2020, 46, 478–481. [Google Scholar] [CrossRef] [PubMed]
136. Van der Waa, J.; Nieuwburg, E.; Cremers, A.; Neerincx, M. Evaluating XAI: A comparison of rule-based and example-based explanations. Artif. Intell. 2021, 291, 103404. [Google Scholar] [CrossRef]
  137. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef] [Green Version]
  138. Thellman, S.; Silvervarg, A.; Ziemke, T. Folk-psychological interpretation of human vs. humanoid robot behavior: Exploring the intentional stance toward robots. Front. Psychol. 2017, 8, 1962. [Google Scholar] [CrossRef] [Green Version]
  139. Wiese, E.; Metta, G.; Wykowska, A. Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Front. Psychol. 2017, 8, 1663. [Google Scholar] [CrossRef] [Green Version]
  140. Churchland, P. Epistemology in the age of neuroscience. J. Philos. 1987, 84, 544–555. [Google Scholar] [CrossRef]
  141. Prakash, C.; Fields, C.; Hoffman, D.D.; Prentner, R.; Singh, M. Fact, fiction, and fitness. Entropy 2020, 22, 514. [Google Scholar] [CrossRef]
  142. Björndal, P.; Rissanen, M.J.; Murphy, S. Lessons learned from using personas and scenarios for requirements specification of next-generation industrial robots. In International Conference of Design, User Experience, and Usability; Springer: Berlin/Heidelberg, Germany, 2011; pp. 378–387. [Google Scholar]
  143. Diaper, D. Scenarios and task analysis. Interact. Comput. 2002, 14, 379–395. [Google Scholar] [CrossRef]
  144. Liu, H.C.; Liu, L.; Liu, N. Risk evaluation approaches in failure mode and effects analysis: A literature review. Expert Syst. Appl. 2013, 40, 828–838. [Google Scholar] [CrossRef]
  145. Bogdanovych, A.; Rodríguez-Aguilar, J.A.; Simoff, S.; Cohen, A. Authentic interactive reenactment of cultural heritage with 3D virtual worlds and artificial intelligence. Appl. Artif. Intell. 2010, 24, 617–647. [Google Scholar] [CrossRef]
  146. Dionisio, J.D.N.; Burns, W.G.; Gilbert, R. 3D virtual worlds and the metaverse: Current status and future possibilities. ACM Comput. Surv. 2013, 45, 1–38. [Google Scholar] [CrossRef]
  147. Nevelsteen, K.J. Virtual world, defined from a technological perspective and applied to video games, mixed reality, and the Metaverse. Comput. Animat. Virtual Worlds 2020, 29, e1752. [Google Scholar] [CrossRef] [Green Version]
  148. Lin, H.; Wang, H. Avatar creation in virtual worlds: Behaviors and motivations. Comput. Hum. Behav. 2014, 34, 213–218. [Google Scholar] [CrossRef]
  149. Nagy, P.; Koles, B. The digital transformation of human identity: Towards a conceptual model of virtual identity in virtual worlds. Convergence 2014, 20, 276–292. [Google Scholar] [CrossRef]
  150. Baker, E.W.; Hubona, G.S.; Srite, M. Does “being there” matter? The impact of web-based and virtual world’s shopping experiences on consumer purchase attitudes. Inf. Manag. 2019, 56, 103153. [Google Scholar] [CrossRef]
  151. Papagiannidis, S.; Bourlakis, M.; Li, F. Making real money in virtual worlds: MMORPGs and emerging business opportunities, challenges and ethical implications in metaverses. Technol. Forecast. Soc. Chang. 2008, 75, 610–622. [Google Scholar] [CrossRef]
  152. Kafai, Y.B.; Fields, D.A.; Ellis, E. The ethics of play and participation in a tween virtual world: Continuity and change in cheating practices and perspectives in the Whyville community. Cogn. Dev. 2019, 49, 33–42. [Google Scholar] [CrossRef]
  153. Swilley, E. Moving virtual retail into reality: Examining metaverse and augmented reality in the online shopping experience. In Looking Forward, Looking Back: Drawing on the Past to Shape the Future of Marketing; Campbell, C., Ma, J., Eds.; Springer: Cham, Switzerland, 2016; pp. 675–677. [Google Scholar]
  154. Dusenbery, D.B. Sensory Ecology; W.H. Freeman: New York, NY, USA, 1992. [Google Scholar]
  155. Stevens, M. Sensory Ecology, Behaviour, and Evolution; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  156. Patten, M.A.; Kelly, J.F. Habitat selection and the perceptual trap. Ecol. Appl. 2010, 20, 2148–2156. [Google Scholar] [CrossRef] [PubMed]
  157. Battin, J. When good animals love bad habitats: Ecological traps and the conservation of animal populations. Conserv. Biol. 2004, 18, 1482–1491. [Google Scholar] [CrossRef]
Figure 1. Behavioral ecology analysis of human-AI systems. Iterations of system evolution (phylogeny) and individual adaptations (ontogeny) of system mechanism are needed in order to minimize situated entropy from system function that can cause ethical stress.
Table 1. Function: Interrelationships between function fitness, situated entropy, and ethical stress.
Construct | Component | High Fitness | Low Fitness
Situated entropy | Information uncertainty | Low | High, e.g., due to truck driver having inadequate route information
Situated entropy | Physical disorder | Low | High, e.g., due to driving incorrect routes
Situated entropy | Unproductive energy use | Low | High, e.g., due to driving incorrect routes
Daily stress | Time pressure | Low | High, e.g., no time for work rest breaks that include eating properly
Daily stress | Self-regulatory depletion | Low | High, e.g., from stopping truck at orange traffic lights when late
Chronic stress | Resource loss | Low | High, e.g., due to loss of resources because of erratic employment
Chronic stress | Survival anxiety | Low | High, e.g., due to employment uncertainty that prevents sleeping well
Chronic stress | Energy imbalance | Low | High, e.g., poor diet and lack of sleep cause allostatic overload
Potential for increased ethical stress | — | Low | High, due to interaction with the environment leading to daily stress from high time pressure and high self-regulatory depletion, and to chronic stress from resource loss, survival anxiety, and energy imbalance
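As an illustrative aside, and not part of the original analysis, the information uncertainty component of situated entropy in Table 1 can be read as Shannon entropy over the routes a driver considers plausible. The minimal Python sketch below uses a hypothetical route_information_entropy helper and invented probabilities to show how inadequate route information corresponds to higher entropy.

```python
import math

def route_information_entropy(route_probabilities):
    """Shannon entropy (in bits) over the routes a driver considers plausible.

    Higher entropy corresponds to greater information uncertainty, the first
    situated entropy component listed in Table 1. Probabilities are illustrative.
    """
    return -sum(p * math.log2(p) for p in route_probabilities if p > 0)

# A driver with adequate route information is near-certain of one route (low entropy);
# a driver with inadequate route information spreads belief over four routes (high entropy).
print(route_information_entropy([0.97, 0.01, 0.01, 0.01]))  # approx. 0.24 bits
print(route_information_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
```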
Table 2. Phylogeny: Interrelationships between trait components and trait robustness.
Behavioral Trait Component | Evolution Span | Current Distribution | Trait Robustness
Human navigation skill | Millennia | Widespread but reducing | Vulnerable to lack of use
Road networks | Centuries | Widespread and increasing | Vulnerable to extreme weather
Trucks | Decades | Widespread and increasing | Vulnerable to extreme weather
Cooperative infrastructure | Years | Very limited but can increase | Vulnerable to extreme weather
Table 3. Mechanism: Variables between navigational skill and behavioral ethics.
Mechanism (Navigation Skill | Infrastructure | Internet | Management Policy | Weather) | Behavioral Ethics
High | Cooperative | Reliable | Ethical incentives | Favorable | Low risk
High | Cooperative | Reliable | Ethical incentives | Unfavorable | Low risk
High | Cooperative | Reliable | Productivity incentives | Unfavorable | Medium risk
High | Cooperative | Unreliable | Productivity incentives | Unfavorable | Medium risk
High | Traditional | Unreliable | Productivity incentives | Unfavorable | Medium risk
Low | Cooperative | Reliable | Ethical incentives | Favorable | Low risk
Low | Cooperative | Reliable | Ethical incentives | Unfavorable | Medium risk
Low | Cooperative | Reliable | Productivity incentives | Unfavorable | Medium risk
Low | Cooperative | Unreliable | Productivity incentives | Unfavorable | High risk
Low | Traditional | Unreliable | Productivity incentives | Unfavorable | High risk
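To make the structure of Table 3 explicit, the sketch below encodes the listed scenario rows directly as a lookup from the mechanism variables to the tabulated behavioral-ethics risk level. This is an illustrative encoding only; the names TABLE_3_RISK and behavioral_ethics_risk, and the string values, are assumptions introduced here rather than elements of the paper.

```python
# Direct encoding of the scenario rows in Table 3 as a lookup from
# (navigation skill, infrastructure, internet, management policy, weather)
# to the behavioral-ethics risk level listed in the table.
TABLE_3_RISK = {
    ("High", "Cooperative", "Reliable",   "Ethical incentives",      "Favorable"):   "Low risk",
    ("High", "Cooperative", "Reliable",   "Ethical incentives",      "Unfavorable"): "Low risk",
    ("High", "Cooperative", "Reliable",   "Productivity incentives", "Unfavorable"): "Medium risk",
    ("High", "Cooperative", "Unreliable", "Productivity incentives", "Unfavorable"): "Medium risk",
    ("High", "Traditional", "Unreliable", "Productivity incentives", "Unfavorable"): "Medium risk",
    ("Low",  "Cooperative", "Reliable",   "Ethical incentives",      "Favorable"):   "Low risk",
    ("Low",  "Cooperative", "Reliable",   "Ethical incentives",      "Unfavorable"): "Medium risk",
    ("Low",  "Cooperative", "Reliable",   "Productivity incentives", "Unfavorable"): "Medium risk",
    ("Low",  "Cooperative", "Unreliable", "Productivity incentives", "Unfavorable"): "High risk",
    ("Low",  "Traditional", "Unreliable", "Productivity incentives", "Unfavorable"): "High risk",
}

def behavioral_ethics_risk(skill, infrastructure, internet, policy, weather):
    """Return the tabulated risk level, or None for combinations not listed in Table 3."""
    return TABLE_3_RISK.get((skill, infrastructure, internet, policy, weather))

# Example: a low-skill driver with cooperative infrastructure but unreliable internet,
# productivity incentives, and unfavorable weather faces high behavioral-ethics risk.
print(behavioral_ethics_risk("Low", "Cooperative", "Unreliable",
                             "Productivity incentives", "Unfavorable"))  # "High risk"
```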
Table 4. Ontogeny: Effects of background on stress when interacting with AI.
Truck Driver | Traditional Navigation Experience | Suspicion of Traditional Navigation | AI-Aided Navigation Experience | Suspicion of AI-Aided Navigation | Propensity for Anxiety | Ontogeny
1 | High | None | None | Low | Low | Approaches AI-aided navigation without anxiety
2 | High | None | None | High | High | Avoids AI-aided navigation with potential for chronic anxiety
3 | Low | Low | High | None | Low | Approaches traditional navigation without anxiety
4 | Low | High | High | None | High | Avoids traditional navigation with potential for chronic anxiety
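The pattern in Table 4 can likewise be summarized by a small rule: a driver avoids the navigation mode they are highly suspicious of, and avoidance by a driver with a high propensity for anxiety carries a potential for chronic anxiety. The sketch below is an illustrative reading of the table, not a formal model from the paper; the function name and string values are assumptions.

```python
def ontogeny_outcome(navigation_mode: str, suspicion: str, anxiety_propensity: str) -> str:
    """Illustrative reading of Table 4: high suspicion of a navigation mode leads to
    avoidance, and avoidance combined with a high propensity for anxiety carries a
    potential for chronic anxiety."""
    if suspicion == "High":
        outcome = f"Avoids {navigation_mode}"
        if anxiety_propensity == "High":
            outcome += " with potential for chronic anxiety"
        return outcome
    return f"Approaches {navigation_mode} without anxiety"

# Truck driver 2 in Table 4: high suspicion of AI-aided navigation, high anxiety propensity.
print(ontogeny_outcome("AI-aided navigation", "High", "High"))
# Truck driver 3 in Table 4: low suspicion of traditional navigation, low anxiety propensity.
print(ontogeny_outcome("traditional navigation", "Low", "Low"))
```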
Table 5. Vehicle navigation example of opportunities and challenges for behavioral ethics in human-AI systems.
Construct | Opportunities | Challenges
Function | A human-AI truck navigation system can reduce the stress-inducing situated entropy experienced by truck drivers who have poor navigation skills and could otherwise easily get lost | Continual use of AI-enabled navigation systems can undermine human navigation skills
Phylogeny | Ongoing evolution of technological components has the potential to widen the range of human-AI truck navigation systems | Until there is further evolution of AI components, reduction of stress arising from experience of situated entropy depends upon favorable environmental conditions
Mechanism | The human-AI system can include additional components within a management policy that limits the potential for productivity incentives to lead unintentionally to unethical actions | The inclusion of additional components can increase system complexity, so the system must be designed for high reliability
Ontogeny | Individualized adaptation of the human-AI system to suit individual truck drivers can be possible | Differences in experience and personality mean that interaction with AI can cause unintended ethical stress
Table 6. Diagnostic support example of opportunities and challenges for behavioral ethics in human-AI systems.
Construct | Opportunities | Challenges
Function | Reduced situated entropy about the basis for treatment decisions and about the allocation of healthcare resources | Reduced situated entropy for the patient depends upon the patient agreeing with the diagnosis
Phylogeny | Ongoing evolution of technological components has the potential to improve diagnoses | The human-AI system is robust only when the environment is ideal for gait recording and AI gait analysis is acceptable to the patient
Mechanism | AI-enabled gait analysis has the potential to be seen as providing a diagnosis that is more reliable than that of human healthcare providers alone | AI-enabled gait analysis cannot provide a reliable basis for diagnosis unless many components are combined successfully
Ontogeny | Individualized adaptation of the human-AI system to suit individual patients and healthcare providers can be possible | A patient who is anxious about interacting with the AI-enabled system may unintentionally alter their gait during recording; in addition, a human healthcare provider may not trust AI-enabled diagnoses
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
