Communication

Human–Artificial Intelligence Systems: How Human Survival First Principles Influence Machine Learning World Models

VTT Technical Research Centre of Finland, FI-02150 Espoo, Finland
Systems 2022, 10(6), 260; https://doi.org/10.3390/systems10060260
Submission received: 20 November 2022 / Revised: 13 December 2022 / Accepted: 14 December 2022 / Published: 17 December 2022

Abstract

World models is a construct that is used to represent internal models of the world. It is an important construct for human-artificial intelligence systems, because both natural and artificial agents can have world models. The term natural agents encompasses individual people and human organizations. Many human organizations apply artificial agents that include machine learning. In this paper, it is explained how human survival first principles of interactions between energy and entropy influence organizations' world models, and hence their implementations of machine learning. First, the world models construct is related to human organizations. This is done by tracing the construct from its origins in psychology theory-building during the 1930s, through its applications in systems science during the 1970s, to its recent applications in computational neuroscience. Second, it is explained how human survival first principles of interactions between energy and entropy influence organizational world models. Third, a practical example is provided of how survival first principles lead to opposing organizational world models. Fourth, it is explained how opposing organizational world models can constrain applications of machine learning. Overall, the paper highlights the influence of interactions between energy and entropy on organizations' applications of machine learning. In doing so, profound challenges are revealed for human-artificial intelligence systems.

1. Introduction

Conceptualization of people having internal models of themselves in the world, i.e., world models, can be found in psychology theory-building throughout the middle decades of the twentieth century [1,2,3,4]. By the 1970s, world models were being considered in the context of systems science and control theory [5,6]. More recently, there has been a framing in computational neuroscience of world models that is applicable to natural and artificial agents [7,8]. In computational neuroscience, efforts are ongoing to relate this framing of world models to machine learning [9,10,11]. Although this world models framing describes individuals' interactions with the world in terms of entropy [12,13,14], efforts to relate it to machine learning have not previously considered how human survival first principles of interactions between energy and entropy influence machine learning (ML) implementations that are based on the world models of human organizations. That is, ML implementations that are based on human organizations' documented models of themselves in the world, such as their business models and strategic plans. This is an important research gap, as many machine learning implementations are made by human organizations rather than by individual people.
This important research gap is addressed in the remaining five sections of this paper. Next, in Section 2, the world models construct is related to human organizations. This is done by tracing the construct from its origins in psychology theory-building during the 1930s, through its applications in systems science during the 1970s, to its recent applications in computational neuroscience. Then, in Section 3, it is explained how human survival first principles of interactions between energy and entropy influence organizational world models. In Section 4, a practical example is provided of how survival first principles lead to opposing organizational world models. In Section 5, it is explained how opposing organizational world models can constrain applications of machine learning. In Section 6, principal contributions are stated, and directions for future research are proposed. Overall, the paper highlights the influence of interactions between energy and entropy on organizations' applications of machine learning. In doing so, profound challenges are revealed for human-artificial intelligence systems.

2. World Models

In this section, developments of the world models construct are related to human organizations. In each subsection, notable developments in formalizing individuals' models of themselves in the world are described alongside chronologically corresponding developments in formalizing organizations' models of themselves in the world. Individuals' world models are embodied models of themselves in the world. These include some mental models, but not all mental models need be included within world models. For example, a person may have a mental model of prehistoric art as described to that person during school lessons. Such mental models may not contribute to an individual's internal model of themselves in the world. By contrast, organizations' world models are documented, for example, in the business models and strategic plans that comprise their internal models of themselves in the world.

2.1. Topological Psychology—Organizational Forecasting

Conceptualization of an internal model and an external world can be found in Kurt Lewin's topological psychology of the 1930s [1]. During the same decade, the economist Ronald Coase considered interactions between the inside and outside of organizations: for example, in terms of where companies should define their boundaries [15]. Development of the world models concept took place in the 1940s through the work of the psychologist Kenneth Craik on the nature of explanation [2]. He wrote of small-scale mental models of external reality that utilize knowledge of past events in dealing with the present and future. He opined that small-scale mental models enable trying out alternative possible actions and concluding which of them could be the best [2]. Also during the 1940s, organizations began to develop forecasting models [16].

2.2. Evolutionary Psychology—Strategic Plans

Subsequently, in the 1960s, when considering evolutionary psychology, John Bowlby opined that if an individual is to draw up a plan to achieve a set goal, the individual must have some sort of working model of his environment, and must also have some working knowledge of his own behavioral skills and potential [3]. In terms of human organizations, this corresponds loosely with strategic planning practices required to model an organization’s environment and its own capabilities in relation to the environment. These can include analyses to map macroeconomic factors such as the political, economic, social, and technological (PEST); five-forces analyses to map microeconomic forces (substituted offerings, established rivals, new entrants, power of suppliers, and power of customers); and analyses to determine one’s own internal strengths and weaknesses compared to external opportunities and threats (SWOT) [17]. Such practices can contribute to the development of organizations’ business models, which can provide structured descriptions of how an organization will interact with its environment [18].

2.3. Psycho-Social Transitions—Business Models

Further developments of the world models construct took place in the 1970s in relation to psycho-social transitions. In particular, Colin Murray Parkes [4] opined that people have an assumptive world that comprises not only a model of the world as it is but also models of the world as it might be. He opined that assumptive worlds encompass prejudices, plans, and expectations, which can change due to changes in the life space. The term life space was coined decades earlier by Kurt Lewin, who meant by it the total psychological environment that a person experiences subjectively but not necessarily consciously [19]. Parkes went on to propose that there can be three types of change in world models. One type of change is that a world model may be modified and continue to influence behavior. Another is that a world model may be retained as an occasional determinant of behavior. Alternatively, a world model may be abandoned and cease to influence behavior [20]. In terms of human organizations, this corresponds loosely with issues in business model innovation [21]. In particular, organizational survival can depend on organizations changing their business models. However, business models can generate self-reinforcing feedback loops [22], which can contribute to an organization failing because it persists with an old business model rather than changing its business model as its environment changes [23,24,25]. One organizational behavior perspective, which originated in the 1970s and can be applied to address this issue, is triple-loop learning. This involves three feedback loops. In the first loop, organizations act to keep outcomes aligned with their existing internal models. In the second loop, internal models are revised to better fit the external world. In the third loop, organizations revise how they revise their internal models [26].

2.4. Neuroscience—Systems Models

Development of the world models construct moved towards neuroscience when Parkes wrote about the capacity of the central nervous system to organize the most complex impressions into internal models of the world, which allow us to recognize and understand the world [27]. Similarly, organizational studies began to encompass neurological perspectives, notably in Stafford Beer's book, Brain of the Firm [28]. Moreover, world models were considered in the context of systems science by Jay Forrester, who wrote: "Each of us uses models constantly. Every person in private life and in business instinctively uses models for decision making. The mental images in one's head about one's surroundings are models. One's head does not contain real families, businesses, cities, governments, or countries. One uses selected concepts and relationships to represent real systems" [5]. At the same time, world models were considered in control theory, when it was argued that internal models need to resemble the systems that they are intended to control [6,29]. In the 1990s and 2000s, notable studies by Thomas Metzinger focused on the self in world models. This was reported in his book Subjekt und Selbstmodell [30], which was followed by several papers in the 2000s in journals such as Progress in Brain Research [31]. During this time, the systems scientist Peter Senge argued for continuous adaptation between organizations and environments [32]. Similarly, the organizational theorist Karl Weick's 1990s concept of sensemaking provided a basis for the perspective that organizations need to adapt through continuous learning [33]. The term sensemaking refers to an active process in which actors enact their environment by isolating elements for closer attention, and by probing some activities and seeing what responses they attract in order to deepen their insights. Sensemaking is also retrospective, because the meaning of actions is not known until they become lived experiences [34].

2.5. Active Inference—Quality Management Manuals

More recently, development of the world models construct has led to a framing that is applicable to natural and artificial agents [7,8]. This development follows some half a century after artificial intelligence pioneer John McCarthy drew attention to the importance of representations of the world in problem solving [35]. The recent framing [7,8] describes individuals' interactions with environments in terms of entropy [12,13,14]. This first-principles work provides examples of convergence between neuroscience concerned with world models and organizational studies. In particular, triple-loop learning in organizational studies [26] has some correspondence with homeostasis, allostasis, and metastasis in psychology and neuroscience [36]. If successful, homeostasis regulates essential internal variables at a set point (first loop). If homeostasis is not successful, allostasis can reorganize input-output relations with the environment in order to restore a sustainable regulatory set point (second loop). If allostasis is not successful, there can be an explicit consideration of failing implicit allostasis, and action can be taken to restore a sustainable regulatory set point (third loop). However, if there is no explicit consideration of allostatic failure, metastasis can occur, whereby regulatory processes are replaced by dysregulatory processes (maladaptive third loop).
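To make this correspondence concrete, the following is a minimal Python sketch of the three loops described above. The variable names, thresholds, and dynamics are illustrative assumptions, not constructs taken from the cited literature.

```python
# A minimal sketch of triple-loop regulation: homeostasis, allostasis, and an
# explicit third loop. All names, thresholds, and dynamics are assumptions.

def homeostasis(essential: float, set_point: float, gain: float = 0.5) -> float:
    """First loop: regulate an essential variable toward a set point."""
    return essential + gain * (set_point - essential)

def run(steps: int = 200) -> tuple[float, float]:
    essential, set_point, drift = 10.0, 10.0, 0.8
    for _ in range(steps):
        essential = homeostasis(essential, set_point) + drift  # environment perturbs
        if abs(essential - set_point) > 1.0:
            # Second loop (allostasis): reorganize input-output relations by
            # adopting a new regulatory set point.
            set_point = essential
        if abs(set_point - 10.0) > 5.0:
            # Third loop: explicit consideration that allostasis itself is
            # failing, and action to restore a sustainable set point. Deleting
            # this branch lets set points drift without limit, which is the
            # maladaptive case (metastasis: dysregulation).
            set_point, drift = 10.0, drift * 0.5
    return essential, set_point

print(run())
```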
Furthermore, in the first-principles framing of world models [7,8,9,10,11], Bayesian cycles of perceptual, epistemic, and instrumental inference can exist [37,38,39,40]. Bayesian inference involves assessing the probability of a hypothesis based on prior knowledge of things that might be related to the hypothesis, and updating the hypothesis as new evidence becomes available [41]. Perceptual inference refers to inferring sensory stimuli from predictions based on internal representations built from prior experience. Epistemic inference refers to updating beliefs about how to survive in an environment. Instrumental inference involves inferring action options and consequences in the environment. For brevity, such inference can be described as active inference [40]. This first-principles work, which is led by the neuroscientist Karl Friston, corresponds loosely with what the organizational theorist Karl Weick described in the 1990s as the active process of sensemaking [34]. Moreover, active inference corresponds loosely with the continuous improvement cycles that organizations document in their quality management systems [42].
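As a concrete illustration of the Bayesian updating described above, the following minimal Python sketch revises a prior belief as evidence arrives. The hypothesis and all probabilities are illustrative assumptions.

```python
# A worked instance of Bayesian updating: a prior belief in a hypothesis is
# revised as each piece of evidence arrives. All numbers are assumptions.

def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# prior belief in the hypothesis "customers are leaving for a competitor"
belief = 0.2
# each pair is (P(evidence | hypothesis), P(evidence)) for one observation
for likelihood, marginal in [(0.9, 0.4), (0.8, 0.5)]:
    belief = bayes_update(belief, likelihood, marginal)
    print(f"updated belief: {belief:.2f}")  # 0.45, then 0.72
```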

2.6. Embodied Personal World Models—Documented Organizational World Models

From the 1930s to the 2020s, a fundamental difference between the world models of individual people and those of human organizations is that the world models of individual people are embodied, while the world models of human organizations are documented in, for example, business models, strategic plans, and quality management system manuals. In computational neuroscience, efforts are ongoing to relate the active inference framework of world models to machine learning [9,10,11]. Although this world model framework describes individuals' interactions with the world in terms of entropy [12,13,14], efforts to relate it to machine learning have not previously considered how survival first principles of interactions between energy and entropy influence the machine learning world models of human organizations. That is, machine learning models that are developed and implemented based on, for example, human organizations' documented business models, strategic plans, and quality management practices. This is an important research gap, as many machine learning applications are made by human organizations rather than by individual people.

3. How Survival First Principles Lead to Opposing Organizational World Models

In this section, it is explained how survival first principles of interactions between energy and entropy influence organizational world models. A survival first principle is to maintain a positive energy balance by limiting the amount of energy lost to entropy. This involves resisting the second law of thermodynamics by establishing boundaries between internal states and external states. Establishing constraining boundary conditions enables living things, including human organizations, to differentiate themselves from the environment while being partially open to exchanges of information, matter, and energy with the environment. Maintaining a positive energy balance is inherently tied to having boundaries [43,44]. In particular, living things construct their own constraining boundary conditions so that they are able to do the work needed to survive. Here, work refers to the constrained release of energy within a few degrees of freedom. Release of energy within a few degrees of freedom is necessary to prevent most energy from being dissipated rapidly as entropy. For practical purposes, entropy can be considered as overlapping information uncertainty (information-theoretic entropy), physical disorder (statistical mechanics entropy), and energy being lost in unproductive actions (thermodynamic entropy). For example, a human organization with poorly defined boundaries in its strategic plan, business model, and/or quality management system manual can experience much information uncertainty about customer expectations. Accordingly, that organization can experience much physical disorder in its efforts to meet customer expectations, which entails much energy being lost in unproductive actions. By contrast, constraining the release of energy can enable much more work to be done with the same amount of energy [43,44].
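The information-theoretic reading of entropy above can be illustrated with a short calculation, sketched below in Python. The two distributions over customer expectations are assumptions made for illustration.

```python
# Shannon entropy as a measure of information uncertainty: an organization
# with well-defined boundaries faces a peaked distribution over customer
# expectations (low entropy); one with poorly defined boundaries faces a
# near-uniform distribution (high entropy). Both distributions are assumed.

from math import log2

def shannon_entropy(probabilities) -> float:
    """H(X) = -sum p(x) * log2 p(x), in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

well_bounded = [0.85, 0.05, 0.05, 0.05]    # expectations concentrate on one offering
poorly_bounded = [0.25, 0.25, 0.25, 0.25]  # expectations spread across everything

print(shannon_entropy(well_bounded))    # ~0.85 bits: little uncertainty
print(shannon_entropy(poorly_bounded))  # 2.0 bits: maximal uncertainty for 4 options
```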
Often, human formulation of boundaries can involve establishing borders, which separate areas where energy is accessed more easily than in adjacent areas on the other side of the border [45,46,47]. Human-made boundaries can entail ingroup love versus outgroup hate [48] and ingroup humanization versus outgroup dehumanization [49]. Boundary-based preferences can be deeply embodied in neurology [50,51]. They can entail related preferences for the similar [52,53] and for the familiar [54,55]. Preferences for similar people within familiar situations can become strongly related through homophily whereby, metaphorically, birds of a feather seek to flock together, for example via so called Internet echo chambers [56,57].
The need to balance energy input and energy output in exchanges across boundaries between internal states and external states can manifest in instances of the principle of least action [58], such as the principle of least effort during information seeking [59,60] and the principle of least collaborative effort in information exchanges involving people [61,62]. The principle of least effort and the principle of least collaborative effort can be served by people paying more attention to their established internal models than to changing external states. This can lead to organizations having lock-ins [63] and path dependencies [64]. Paying more attention to internal models than to external states can lead to exactly the same external information being interpreted differently by different people in order to serve explanations that support their preconceptions and confirm their biases: for example, in the opposing motivated social cognition of so-called culture wars [65,66]. Preference for least action to maintain one's own internal models across opposing boundaries is congruent with the argument that the development of technology is driven by desires for one's own ease and for domination of others [67].

4. Example of Opposing Organizational World Models

Examples of opposing world models can be found in global food production, consumption, and prosumption. The word prosumption is a portmanteau term, which summarizes that people survive through a combination of production and consumption [68]. From the everyday point of view of individual prosumers, the external state can be environments that include a wide variety of organizations offering different prosumption preference options designed to target the preferences of particular groups, which those organizations define as market segments. This is done with the aim of making their offerings the prosumption preferences of those particular groups. For example, two segments that have been defined for the convenience food market are “kitchen evaders” and “convenience-seeking grazers” [69]. Convenience food involves little production work, as people perform only minor tasks such as removing packaging. By contrast, preparing meals from home-grown food involves a much higher proportion of production work. Some people will choose to undertake a higher proportion of task work when doing so can keep them inside the boundaries of their preferred socio-cultural group, within which they believe they can best survive [70]. Thus, there can be interplay between a preference for maintaining an immediate positive energy balance during tasks and a preference for maintaining an overall positive energy balance by staying within the boundaries of an ingroup. For brevity, these can be abbreviated to energy-positive and ingroup-positive. These are the most fundamental of human preferred states, and they underlie a multitude of more transitory heterarchical prosumption preferences. The term heterarchical refers to the potential for preferences to be ranked differently in different situations at different times.
As summarized in Figure 1, active inference across triple-loop learning can entail heterarchical preference contests in the interface state between organizations in the external state and individual prosumers' internal states.
In the first loop, organizations can seek to maintain market equilibrium around a set point: for example, high profit from high sales of convenience food. Organizations in the external state can formulate choice architectures [71] to lead prosumers from awareness of one of their products to involvement with their brand. This entails reinforcement teaching to prosumers, with the aim of prosumers' reinforcement learning [72] that serves the goals of the organization, such as high consumption of convenience food.
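As a minimal sketch of this pairing of reinforcement teaching and reinforcement learning, the Python code below has an organization predefine a reward function, and a simple epsilon-greedy learner converges toward the rewarded option. The options, rewards, and parameters are illustrative assumptions.

```python
# Reinforcement teaching/learning sketch: the organization sets the rewards;
# an epsilon-greedy learner's preferences come to mirror those taught rewards.

import random

options = ["convenience food", "home-cooked food"]
reward = {"convenience food": 1.0, "home-cooked food": 0.0}  # set by the organization

values = {o: 0.0 for o in options}  # learner's estimated value of each option
counts = {o: 0 for o in options}

for trial in range(1000):
    if random.random() < 0.1:                # explore occasionally
        choice = random.choice(options)
    else:                                    # otherwise exploit the best estimate
        choice = max(options, key=values.get)
    counts[choice] += 1
    # incremental mean update toward the organization's predefined reward
    values[choice] += (reward[choice] - values[choice]) / counts[choice]

print(values)  # learned preferences mirror the taught reward function
```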
In the second loop, an organization can seek to address homeostatic challenges, such as a high loss of prosumers to competitor organizations, through allostatic change. For example, an organization could introduce a loyalty programme that has step-by-step increases in bonus rates and prosumer status in line with the increased value of purchases. However, there are limits to individual organizations' reinforcement teaching of their predefined reward functions to prosumers. For example, if one organization introduces a loyalty programme, other organizations can quickly do the same through active inference. First, there is perceptual inference that environmental change threatens survival: in particular, customers are leaving for a competitor that has introduced a loyalty programme. Next, there is epistemic inference that survival in the changed environment depends upon offering a rival loyalty programme. Then, there is instrumental inference that survival depends on the new action of offering a loyalty programme. However, when all organizations attempt new reinforcement teaching by introducing loyalty programmes, there may be no survival advantage to any of them in contests for prosumption preferences [73].
At the same time, prosumers can be prone to variety-seeking behavior, which can be moderated by whether or not their prosumption is observed [74]. For example, when a healthy food prosumer has little energy available, active inference may lead that prosumer to get energy-dense food from the nearest possible source. First, there can be perceptual inference that there is energy depletion that could prevent travelling to that evening's healthy food party. Next, there can be epistemic inference that it is acceptable at a time of energy depletion to seek the nearest available source of energy-dense food. Then, there can be instrumental inference that it is time to go to get energy-dense junk food before there are insufficient energy resources to even stand up and move [75]. The nearest source could be a petrol station selling junk food [76]. This source can be energy-positive but ingroup-negative, because this prosumer seeks to survive within the boundaries of a healthy food community. Hence, if the prosumer notices that an ingroup member is unexpectedly close by, for example buying petrol at the station, the prosumer may be impelled to expend energy by walking past the energy-positive but ingroup-negative source in order to get to an ingroup-positive food shop. More broadly, people can simply get bored with sourcing resources to address their needs from already known organizations. Then, occasionally and unpredictably, people can make an impulse purchase instead [77]. Hence, preference contests are heterarchical because different innate needs can have primacy in different situations at different times.
Human organizations in heterarchical preference contests can apply machine learning in their efforts to gain competitive advantage over each other [78,79]. Yet, amidst heterarchical preference contests, general preference options can emerge that can have a determining influence over prosumption preferences. This can happen through culturally bounded rationality, within which heuristic decision-making, due to imperfect information and limited energy, is based upon prevailing socio-cultural norms [80]. This can involve mere-exposure conditioning, whereby repeated exposure to something leads to it becoming part of the familiar background [81]. Rather than there being reinforcement teaching and reinforcement learning of preferences through the targeted predefinition of specific rewards (e.g., an increased loyalty programme bonus rate) and specific punishments (e.g., a lower loyalty programme bonus rate), there can be non-reinforced acquisition of preferences due to mere exposure to the sociomaterial environment, such as a local retail landscape comprising only convenience shops selling junk food. There can be mere-exposure effects and socio-cultural norms from the combined presence of many organizations' offerings of food-like substances, which can lead to sensory ecologies where signals related to salt, sugar, and fat dominate sensory exchanges with the food environment [82,83].
Thus, heterarchical preference contests can take place in ecological traps, where rapid environmental change has led to preference for poor-quality habitats [84]: in particular, where it has become the socio-cultural norm to minimize energy output and maximize energy input through consumption of junk food, even when it is clear that this threatens survival [85,86]. Here, it is important to note that humans have evolved to learn to minimize energy expenditure through the regulation of movement economy. Hence, it can be expected that people will learn, through repetitions of trial and error, the shortest routes to achieving a positive energy balance in their sociomaterial environment [87,88].
Yet, at the same time, organizations with world models that are opposed to junk food can introduce triple-loop learning initiatives to limit metastasis, such as the increasing prevalence of survival threats from overconsumption of salt, sugar, and fat [89,90]. Such initiatives can encompass food preference learning throughout life [91,92]. In preference contests, organizations can develop choice architectures for healthier food alternatives [93]. At the same time, preference contests can include efforts to frame healthier food choices in terms of bounded rationality [94]. In practical terms, this can include initiatives to change the sociomaterial environment from so-called food deserts into so-called food oases. This involves healthier food options becoming available in areas where previously only highly processed foods were available [95].
However, triple-loop learning initiatives for healthy food may not be successful if there is insufficient consideration of the innate preference for maintaining a positive energy balance. For example, food oases can be so-called food mirages when the healthy food options are not affordable and hence highly processed foods remain the only affordable option [96]. In terms of innate preferences, prosumers' positive energy balance is facilitated by healthy food being nearer in a newly set-up local food oasis. Yet, a positive energy balance is not facilitated if prosumers have to expend more energy by working more to earn the money to buy the more expensive healthy food. Also, triple-loop learning initiatives may not be successful if there is insufficient consideration of the innate preference for maintaining an overall positive energy balance by staying inside the boundaries of an ingroup situated within the borders of a particular area. This can happen when establishing a food oasis leads to the gentrification of the area and the local population has to disperse because it cannot afford to pay housing rents. Hence, there can be local opposition to the introduction of local provision of healthy food options [97]. In their efforts to prevail in triple-loop preference contests, human organizations seeking to increase access to healthier food options can apply machine learning [98,99], while the organizations that they oppose are already applying machine learning in efforts to enable their own survival.
In summary, preference options can come to prosumers from individual organizations' reinforcement teaching, which is designed through each organization's active inference. Preference options can also come to prosumers from mere-exposure effects in sociomaterial environments. These can emerge from many organizations designing similar preference options through active inference. The extent of reinforcement learning from reinforcement teaching and of non-reinforced learning from mere-exposure effects depends on prosumers' situated active inference, as determined by, for example, prosumers' physical borders and their ingroup boundaries. Subsequently, prosumers' dynamic selection of preference options is carried out through their active inference, as influenced by, for example, energy depletion and ingroup observation. Throughout, human organizations that are in opposition to each other can be applying machine learning [78,79,98,99]. They apply machine learning, and other technologies, in accordance with their own documented world models, for example their business models, strategic plans, and quality management system manuals, as they compete against each other in efforts to enable their own survival.

5. How Opposing Organizational World Models Constrain Machine Learning

There are ongoing efforts to relate the active inference framing of world models to machine learning (ML) [9,10,11]. Although this world models framing describes individuals' interactions with the world in terms of entropy [12,13,14], efforts to relate it to machine learning have not previously considered how human survival first principles of interactions between energy and entropy influence the machine learning world models of human organizations. Furthermore, these efforts have not addressed the different levels of effects that organizations can seek from applications of ML. In particular, organizations can apply ML in efforts to bring about automational, informational, and/or transformational effects [100]. Automational effects can involve human labour being substituted by ML. Informational effects can emerge from ML providing information to support human decision making. Transformational effects refer to the potential for ML to support radical change. Another shortcoming of efforts to relate the active inference framing of world models to ML is a lack of consideration of the limiting influence of ingroup-outgroup opposition on the potential to bring about automational, informational, and transformational effects. Opportunities and limitations in bringing about the three potential effects are related to triple-loop learning in the following paragraphs.

5.1. Automational Effects

ML can be applied to reduce human labour in data analyses when there are well-defined inputs that are related to well-defined outputs, and when large digital data sets exist, or can be created, containing input-output pairs. Accordingly, ML automational effects could contribute to single-loop learning, which aims to keep regulation of inputs from the environment and outputs to the environment around an existing set point. For example, ML automational effects could be applied to extend and accelerate data analyses related to assessing the efficacy of established food programmes implemented across society [101]. That is, provided those programmes are not the focus of opposing ingroup-outgroup exchanges about goals [102]. This can be a major limitation for the application of ML, because ML works well when goals can be clearly described, and this is difficult when there are opposing beliefs across society about goals [103].
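A minimal sketch of this well-defined input-output setting is given below, using scikit-learn's logistic regression on labelled pairs. The features, labels, and the food-programme framing of the data are illustrative assumptions.

```python
# Supervised learning over well-defined input-output pairs: the classic
# setting in which ML automational effects are feasible. Data are synthetic
# assumptions standing in for programme-efficacy records.

from sklearn.linear_model import LogisticRegression

# each row: (meals served per week, participation rate); label: 1 = target met
X = [[2, 0.3], [3, 0.4], [5, 0.7], [6, 0.8], [4, 0.6], [1, 0.2]]
y = [0, 0, 1, 1, 1, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[5, 0.65]]))  # automated assessment for a new programme
```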

5.2. Informational Effects

ML informational effects could contribute to double-loop learning when there is a need to reorganize input-output relations to enable regulation around a new, more sustainable set point. However, informational effects can be hindered by ongoing argument about the explainability of ML models and their outputs [104]. Moreover, ML can be difficult to implement if there are long, unpredictable chains of causal interactions that do not facilitate automated collection of large sets of perfectly labelled training examples. This could be an intractable difficulty if opposing ingroup-outgroup exchanges limit agreed definitions of causal interactions: for example, those related to food programmes [105,106]. Even if everybody on both sides of boundaries has the same understanding of information provided by ML, that does not ensure that those people will agree on what should be done on the basis of that information. Rather, ingroup versus outgroup motivated cognition can entail intractable ongoing dispute that is informed by exactly the same information [63,64,107,108]. Accordingly, healthy food initiatives are not likely to involve prosumers who assess new food preference options with impartial model-based optimization decision-making that is entirely free from the influence of their existing beliefs: for example, optimization of decision-making based on an impartial decision model that compares the satisfaction level from current prosumption preferences against the costs involved in changing to a new potential preference option [109]. Instead, when new preference options could involve moving outside of the boundaries of the current ingroup, persistence with belief-based preference decisions can have the characteristics of deontology that eschews consequentialism [110].
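The contrast drawn above can be sketched as two decision rules: an impartial rule from the satisfaction-versus-cost comparison, and a belief-weighted rule in which an assumed ingroup penalty discounts options outside current boundaries. All values are illustrative assumptions.

```python
# Impartial versus belief-weighted switching decisions. The ingroup_penalty
# term is an assumed stand-in for belief-based discounting of outgroup options.

def impartial_switch(current_satisfaction, new_satisfaction, switching_cost):
    # switch only when the expected satisfaction gain exceeds the switching cost
    return new_satisfaction - current_satisfaction > switching_cost

def belief_weighted_switch(current, new, cost, ingroup_penalty):
    # the new option lies outside the current ingroup's boundaries, so its
    # value is discounted regardless of the impartial comparison
    return (new - ingroup_penalty) - current > cost

print(impartial_switch(0.5, 0.9, 0.2))             # True: impartial model says switch
print(belief_weighted_switch(0.5, 0.9, 0.2, 0.6))  # False: belief-based veto
```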

5.3. Transformational Effects

ML transformational effects could contribute to triple-loop learning that can establish a new, more sustainable set point around which regulation can be based. For example, Bayesian ML may have some potential to improve analyses of wicked problems. Those are complex problems that are characterized by stakeholder disagreements on the definition and character of the problems and their possible resolution. The problem of how to improve food prosumption to prevent global epidemics of obesity and chronic diseases [89,90] can be characterized as a wicked problem. It has been argued that Bayesian ML can contribute to learning the structures and parameters of wicked problems [111] and, as summarized in Figure 1, Bayesian inference is fundamental to triple-loop heterarchical preference contests. However, it is unlikely that there can be explainability and acceptability of outputs from such applications of ML unless there is diverse human input into a Bayesian network model to which ML can be applied.
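The following is a minimal, hand-rolled sketch of the kind of Bayesian network model referred to above, with inference by enumeration. Its structure and probabilities are illustrative assumptions of the sort that, as argued above, diverse human input would need to supply.

```python
# A two-parent, one-child Bayesian network: affordability and proximity of
# healthy food influence healthy prosumption. Structure and probabilities are
# assumptions that stakeholders would need to supply; ML could then refine them.

P_affordable = 0.4
P_nearby = 0.3
# P(healthy prosumption | affordable, nearby)
P_healthy = {(True, True): 0.8, (True, False): 0.4,
             (False, True): 0.3, (False, False): 0.1}

def joint(a: bool, n: bool, h: bool) -> float:
    p = (P_affordable if a else 1 - P_affordable) * (P_nearby if n else 1 - P_nearby)
    p_h = P_healthy[(a, n)]
    return p * (p_h if h else 1 - p_h)

# inference by enumeration: P(affordable | healthy prosumption observed)
num = sum(joint(True, n, True) for n in (True, False))
den = sum(joint(a, n, True) for a in (True, False) for n in (True, False))
print(round(num / den, 3))  # ~0.684
```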
This is possible, as people who are not computer scientists can be involved in the participatory design of technology deployments that involve automated data collection with sensors [112]. In addition, there can be so-called participatory sensing, when sensing is dependent upon observations being made by people [113]. This can be facilitated through so-called citizen observatories that deploy citizen science methodologies [114]. As well as collecting data, citizen scientists can lead the ideation and implementation of improvement initiatives [115]. However, ML has limited potential to learn the structure and parameters of wicked problems as they change quickly through accumulating non-linear setbacks or small wins. This is because ML is not well-suited to the analysis of phenomena that change rapidly [116]. Small wins are concrete, completed, implemented outcomes of moderate importance. Wicked problems can be resolved through small wins because proposals for small incremental steps are less likely than proposals for large-scale radical change to stir up great antagonisms and paralyzing schisms. Nonetheless, small wins have the potential to accumulate into a series of small wins that may result in transformative change [117].
Although ML has limited potential to keep up with the overall non-linear progress of small wins in resolving wicked problems, ML can contribute to improving the performance of transformational technology implementations that can bring about small wins. For example, mobile retailers can satisfy innate preferences for energy-positive, ingroup-positive acquisition of healthy food. This is because healthy food is brought to where prosumers are, without bringing gentrification to the areas that are visited. Yet, the affordability of healthy food from mobile retailers depends upon optimizing product mixes and route plans [118]. These are well-established types of ML applications [119,120], which can bring ML automational effects and ML informational effects to support transformational effects from mobile retailing that is ideated and implemented by people.
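As a minimal sketch of the route-planning side of such applications, the Python code below applies a nearest-neighbour heuristic over assumed stop coordinates; production systems would use much stronger ML and optimization methods.

```python
# Nearest-neighbour route heuristic for a mobile retailer. Stop names and
# coordinates are assumptions made for illustration.

from math import dist

stops = {"depot": (0, 0), "estate A": (2, 1), "market square": (5, 4),
         "estate B": (1, 6), "school": (4, 7)}

def nearest_neighbour_route(start: str) -> list[str]:
    route, remaining = [start], set(stops) - {start}
    while remaining:
        here = stops[route[-1]]
        nxt = min(remaining, key=lambda s: dist(here, stops[s]))  # closest next stop
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(nearest_neighbour_route("depot"))
```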

6. Conclusions

6.1. Principal Contributions

World models is a construct that is used to represent internal models of the world. It is an important construct for human-artificial intelligence systems, because both natural and artificial agents can have world models. The term natural agents encompasses individual people and human organizations. Many human organizations apply artificial agents that include machine learning. Although the active inference world models framing describes individuals' interactions with the world in terms of entropy [12,13,14], efforts to relate it to machine learning have not previously considered how survival first principles of interactions between energy and entropy influence the machine learning world models of human organizations. Thus, this paper addresses an important research gap, as many machine learning applications are made by human organizations rather than by individual people.
First, the world models construct has been related to human organizations. This has been done by tracing the construct from its origins in psychology theory-building during the 1930s, through its applications in systems science during the 1970s, to its recent applications in computational neuroscience. In doing so, similarities between research related to individual people and research related to human organizations have been revealed. For example, SWOT analysis to determine internal strengths and weaknesses compared to external opportunities and threats was formalized in the 1960s [121]. Similarly, John Bowlby opined in the 1960s that if an individual is to draw up a plan to achieve a set goal, the individual must have some sort of working model of his environment, and must also have some working knowledge of his own behavioral skills and potential [3]. Another notable similarity is between Weick's action-orientated sensemaking and the active inference framing of world models. There is ongoing convergence between them, as narrative and storytelling are important in sensemaking [122,123], and narrative and storytelling are being related to active inference world models [124]. With regard to implementations of ML, the important characteristic of organizations' world models is that they are documented, for example in strategic plans and quality management system manuals. It is such documents that define when, where, and how organizations develop and implement ML.
Second, it has been explained how survival first principles of interactions between energy and entropy influence organizational world models. In particular, survival depends upon maintaining a positive energy balance, and maintaining a positive energy balance is inherently linked with establishing boundaries. Human-made boundaries can entail ingroup love versus outgroup hate [48] and ingroup humanization versus outgroup dehumanization [49]. Moreover, preference for least action to maintain one's own internal models across opposing boundaries is congruent with the argument that the development of technology is driven by desires for one's own ease and for domination of others [67]. Third, a practical example has been provided of how survival first principles lead to opposing organizational world models in global food prosumption. The example illustrates the many opportunities for applying machine learning, such as in customer loyalty programmes and in improving access to healthy food. However, machine learning is applied by opposing organizations as they compete against each other in their efforts to enable their own survival.
Fourth, as summarized in Table 1, it has been explained how opposing organizational world models can constrain applications of machine learning. For example, ML automational effects could contribute to single-loop learning, but this potential can be limited by opposing ingroup-outgroup exchanges about goals. In addition, ML informational effects could contribute to double-loop learning, but this potential can be limited by opposing ingroup-outgroup exchanges that can confound the definition of causal interactions. Moreover, ingroup versus outgroup motivated cognition can entail intractable ongoing dispute that is informed by exactly the same information. Furthermore, ML transformational effects could contribute to triple-loop learning, but such potential is limited by ingroup versus outgroup stakeholder disagreements on the definition and character of wicked problems and their possible resolution. Thus, there are profound challenges for human-artificial intelligence systems that involve machine learning implementations based on organizational world models. Overall, this paper complements recent research that has focused on how the opposing positions of individuals can limit the potential of machine learning [125].

6.2. Directions for Future Research

Future research could involve deliberative integration of organizational studies and computational neuroscience in the development of the world models construct. This can be important for making explicit the alignments and misalignments between organizations' world models and machine learning world models. This could involve concurrent development of organizational triple-loop learning and machine learning transformational effects across otherwise opposing boundaries. Such research can draw upon innovations in machine learning development, which may have better potential to model the non-linear dynamics of wicked problems [126,127] that emerge as human organizations try to survive through active inference in a changing world.
Future research could also consider to what extent existing formulations related to explainable artificial intelligence (XAI) are useful when organizations seek explanations that support their documented world models, such as their business models and strategic plans, which are in opposition to the world models of other organizations. For example, trust in artificial intelligence can depend upon progressing from explainability, through transparency, to interpretability. Explainability involves the development of post-hoc models to explain ML models that would otherwise be opaque “black box” models. Transparency involves the introduction of “glass box” models, which have structures and processes that are visible to humans. Beyond explainability and transparency is interpretability, which involves humans being able to directly interpret ML models and their functioning. However, interpretability is not sufficient to transcend opposing machine learning world models: that is, machine learning development and implementation that is based on human organizations' opposing world models as set out in their business models, strategic plans, etc. Rather, agreeable ML is required that transcends opposing machine learning world models. Yet, this will require more than improving ML. Human organizations will need to recognize the influence of survival first principles that entail forming boundaries in order to resist locally the tendency towards maximum entropy. Then, human organizations will need to generate new alternatives to resisting the tendency towards maximum entropy.
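The explainability step described above can be illustrated with a common post-hoc technique: fitting a shallow “glass box” surrogate to the predictions of a “black box” model. The Python code below is a minimal sketch with synthetic, assumed data; it illustrates post-hoc explainability only, not the agreeable ML called for here.

```python
# Post-hoc global surrogate: a shallow decision tree is fitted to the
# predictions of an opaque model, yielding human-readable rules. The data and
# the hidden rule are synthetic assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((200, 2))                 # two arbitrary features
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # hidden rule the models must learn

black_box = RandomForestClassifier(random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))   # explain the black box, not the data

print(export_text(surrogate, feature_names=["feature_0", "feature_1"]))
```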

Funding

This research was funded by European Commission grant number 952091.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Lewin, K. Psychoanalysis and topological psychology. Bull. Menn. Clin. 1937, 1, 202–212. [Google Scholar]
  2. Craik, K.J.W. The Nature of Explanation; Cambridge University Press: Cambridge, UK, 1943. [Google Scholar]
  3. Bowlby, J. Attachment and Loss; Attachment; Hogarth Press: London, UK, 1969; Volume 1. [Google Scholar]
  4. Parkes, C.M. Psycho-social transitions: A field for study. Soc. Sci. Med. 1971, 5, 101–115. [Google Scholar] [CrossRef] [PubMed]
  5. Forrester, J.W. Counterintuitive behavior of social systems. Technol. Rev. 1971, 2, 109–140. [Google Scholar]
  6. Conant, R.C.; Ashby, W.R. Every good regulator of a system must be a model of that system. Int. J. Syst. Sci. 1970, 1, 89–97. [Google Scholar] [CrossRef] [Green Version]
  7. Linson, A.; Clark, A.; Ramamoorthy, S.; Friston, K. The active inference approach to ecological perception: General information dynamics for natural and artificial embodied cognition. Front. Robot. AI 2018, 5, 21. [Google Scholar] [CrossRef] [Green Version]
  8. Friston, K.; Moran, R.J.; Nagai, Y.; Taniguchi, T.; Gomi, H.; Tenenbaum, J. World model learning and inference. Neural Netw. 2021, 144, 573–590. [Google Scholar] [CrossRef]
  9. Friston, K.J.; Daunizeau, J.; Kiebel, S.J. Reinforcement learning or active inference? PLoS ONE 2009, 4, e6421. [Google Scholar] [CrossRef] [Green Version]
  10. Sajid, N.; Ball, P.J.; Parr, T.; Friston, K.J. Active inference: Demystified and compared. Neural Comput. 2021, 33, 674–712. [Google Scholar] [CrossRef]
  11. Mazzaglia, P.; Verbelen, T.; Çatal, O.; Dhoedt, B. The Free Energy Principle for Perception and Action: A Deep Learning Perspective. Entropy 2022, 24, 301. [Google Scholar] [CrossRef]
  12. Friston, K. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar] [CrossRef]
  13. Sengupta, B.; Stemmler, M.B.; Friston, K.J. Information and efficiency in the nervous system—A synthesis. PLoS Comput. Biol. 2013, 9, e1003157. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Bruineberg, J.; Rietveld, E.; Parr, T.; van Maanen, L.; Friston, K.J. Free-energy minimization in joint agent-environment systems: A niche construction perspective. J. Theor. Biol. 2018, 455, 161–178. [Google Scholar] [CrossRef]
  15. Coase, R.H. The nature of the firm. Economica 1937, 4, 386–405. [Google Scholar] [CrossRef]
  16. Koopmans, T.C. Identification problems in economic model construction. Econometrica 1949, 17, 125–144. [Google Scholar] [CrossRef]
  17. Wu, Y. The Marketing Strategies of IKEA in China Using Tools of PESTEL, Five Forces Model and SWOT Analysis. In Proceedings of the International Academic Conference on Frontiers in Social Sciences and Management Innovation 2020, Beijing, China, 28–29 December 2020; pp. 348–355. [Google Scholar]
  18. Ziyi, M.E.N. SWOT Analysis of the Business Model of Short Video Platform: Take Tik Tok as an Example. In Proceedings of the Management Science Informatization and Economic Innovation Development Conference 2020, Guangzhou, China, 18–20 December 2020; pp. 38–42. [Google Scholar]
  19. Wheeler, L. Kurt Lewin. Soc. Personal. Psychol. Compass 2008, 2, 1638–1650. [Google Scholar] [CrossRef]
  20. Parkes, C.M. What becomes of redundant world models? A contribution to the study of adaptation to change. Br. J. Med. Psychol. 1975, 48, 131–137. [Google Scholar] [CrossRef]
  21. Chesbrough, H. Business model innovation: Opportunities and barriers. Long Range Plan. 2010, 43, 354–363. [Google Scholar] [CrossRef]
  22. Casadesus-Masanell, R.; Ricart, J.E. How to design a winning business model. Harv. Bus. Rev. 2011, 89, 100–107. [Google Scholar]
  23. Teger, A.I. Too Much Invested to Quit; Pergamon Press: New York, NY, USA, 1980. [Google Scholar]
  24. Sydow, J.; Schreyogg, G.; Koch, J. Organizational path dependence: Opening the black box. Acad. Manag. Rev. 2009, 34, 689–709. [Google Scholar]
  25. Anthony, S.D.; Kodak’s Downfall Wasn’t about Technology. Harvard Business Review. 2016. Available online: https://hbr.org/2016/07/kodaks-downfall-wasnt-about-technology (accessed on 11 September 2022).
  26. Tosey, P.; Visser, M.; Saunders, M.N.K. The origins and conceptualizations of ‘triple-loop’ learning: A critical review. Manag. Learn. 2011, 43, 291–307. [Google Scholar] [CrossRef] [Green Version]
  27. Parkes, C.M. Bereavement as a psychosocial transition. Process of adaption to change. J. Soc. Issues 1988, 44, 53–65. [Google Scholar] [CrossRef]
  28. Beer, S. Brain of the Firm, 2nd ed.; John Wiley: London, UK, 1986. [Google Scholar]
  29. Francis, B.A.; Wonham, W.M. The internal model principle of control theory. Automatica 1976, 12, 457–465. [Google Scholar] [CrossRef]
  30. Metzinger, T. Subjekt und Selbstmodell; Schoningh: Paderborn, Germany, 1993. [Google Scholar]
  31. Metzinger, T. Empirical perspectives from the self-model theory of subjectivity: A brief summary with examples. Prog. Brain Res. 2008, 168, 215–245. [Google Scholar]
  32. Senge, P.; Kleiner, A.; Roberts, C.; Ross, R.; Smith, B. The Dance of Change: The Challenges to Sustaining Momentum in Learning Organizations; Doubleday: New York, NY, USA, 1999. [Google Scholar]
  33. Weick, K.E.; Quinn, R. Organizational change and development. Annu. Rev. Psychol. 1999, 50, 361–386. [Google Scholar] [CrossRef] [Green Version]
  34. Weick, K.E. Sensemaking in Organizations; Sage: London, UK, 1995. [Google Scholar]
  35. McCarthy, J.; Hayes, P.J. Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence; Meltzer, B., Michie, D., Eds.; Edinburgh University Press: Edinburgh, UK, 1969; pp. 463–502. [Google Scholar]
  36. Wass, S.V. Allostasis and metastasis: The yin and yang of childhood self-regulation. Dev. Psychopathol. 2021, 1–12. [Google Scholar] [CrossRef]
  37. Summerfield, C.; Koechlin, E. A neural representation of prior information during perceptual inference. Neuron 2008, 59, 336–347. [Google Scholar] [CrossRef] [Green Version]
  38. Aggelopoulos, N.C. Perceptual inference. Neurosci. Biobehav. Rev. 2015, 55, 375–392. [Google Scholar] [CrossRef]
  39. Prakash, C.; Fields CHoffman, D.D.; Prentner, R.; Singh, M. Fact, fiction, and fitness. Entropy 2020, 22, 514. [Google Scholar] [CrossRef]
  40. Mirza, M.B.; Adams, R.A.; Friston, K.; Parr, T. Introducing a Bayesian model of selective attention based on active inference. Sci. Rep. 2019, 9, 13915. [Google Scholar] [CrossRef] [Green Version]
  41. Joyce, J. Bayes’ Theorem. In The Stanford Encyclopedia of Philosophy, (Fall 2021 Edition); Zalta, E.N., Ed.; Center for the Study of Language and Information, Stanford University: Stanford, CA, USA, 2021; Available online: https://plato.stanford.edu/archives/fall2021/entries/bayes-theorem (accessed on 7 July 2022).
  42. Fox, S. Active inference: Applicability to different types of social organization explained through reference to industrial engineering and quality management. Entropy 2021, 23, 198. [Google Scholar] [CrossRef]
  43. Atkins, P. The Second Law. Freeman and Co.: New York, NY, USA, 1984. [Google Scholar]
  44. Montévil, M.; Mateo, M. Biological organization and constraint closure. J. Theor. Biol. 2015, 372, 179–191. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Bombaerts, G.; Jenkins, K.; Sanusi, Y.A.; Guoyu, W. Energy Justice across Borders; Springer Nature: Cham, Switzerland, 2020. [Google Scholar]
  46. Schofield, C.; Storey, I. Energy security and Southeast Asia: The impact on maritime boundary and territorial disputes. Harv. Asia Q. 2005, 9, 36. [Google Scholar]
  47. Nevins, J. The speed of life and death: Migrant fatalities, territorial boundaries, and energy consumption. Mobilities 2018, 13, 29–44. [Google Scholar] [CrossRef]
  48. Brewer, M.B. The psychology of prejudice: Ingroup love and outgroup hate? J. Soc. Issues 1999, 55, 429–444. [Google Scholar] [CrossRef]
  49. Vaes, J.; Leyens, J.P.; Paola Paladino, M.; Pires Miranda, M. We are human, they are not: Driving forces behind outgroup dehumanisation and the humanisation of the ingroup. Eur. Rev. Soc. Psychol. 2012, 23, 64–106. [Google Scholar] [CrossRef]
  50. Mendez, M.F. A neurology of the conservative-liberal dimension of political ideology. J. Neuropsychiatry Clin. Neurosci. 2017, 29, 86–94. [Google Scholar] [CrossRef]
51. Kaplan, J.T.; Gimbel, S.I.; Harris, S. Neural correlates of maintaining one’s political beliefs in the face of counterevidence. Sci. Rep. 2016, 6, 39589.
52. Miralles, A.; Raymond, M.; Lecointre, G. Empathy and compassion toward other species decrease with evolutionary divergence time. Sci. Rep. 2019, 9, 19555.
53. McDermott, R.; Tingley, D.; Hatemi, P.K. Assortative mating on ideology could operate through olfactory cues. Am. J. Political Sci. 2014, 58, 997–1005.
54. Chark, R.; Zhong, S.; Tsang, S.Y.; Khor, C.C.; Ebstein, R.P.; Xue, H.; Chew, S.H. A gene–brain–behavior basis for familiarity bias in source preference. Theory Decis. 2022, 92, 531–567.
55. Lubell, M. Familiarity breeds trust: Collective action in a policy domain. J. Politics 2007, 69, 237–250.
56. D’Onofrio, P.; Norman, L.J.; Sudre, G.; White, T.; Shaw, P. The anatomy of friendship: Neuroanatomic homophily of the social brain among classroom friends. Cereb. Cortex 2022, 32, 3031–3041.
57. Colleoni, E.; Rozza, A.; Arvidsson, A. Echo chamber or public sphere? Predicting political orientation and measuring political homophily in Twitter using big data. J. Commun. 2014, 64, 317–332.
58. Terekhovich, V. Metaphysics of the principle of least action. Stud. Hist. Philos. Sci. Part B Stud. Hist. Philos. Mod. Phys. 2018, 62, 189–201.
59. Zipf, G.K. Human Behavior and the Principle of Least Effort; Addison-Wesley Press: Boston, MA, USA, 1949.
60. Chang, Y.W. Influence of the principle of least effort across disciplines. Scientometrics 2016, 106, 1117–1133.
61. Clark, H.H.; Wilkes-Gibbs, D. Referring as a collaborative process. Cognition 1986, 22, 1–39.
62. Davies, B.L. Least collaborative effort or least individual effort: Examining the evidence. Univ. Leeds Work. Pap. Linguist. Phon. 2007, 12, 1–20.
63. Arthur, W.B. Competing technologies, increasing returns, and lock-in by historical events. Econ. J. 1989, 99, 116–131.
64. Schreyögg, G.; Sydow, J.; Holtmann, P. How history matters in organisations: The case of path dependence. Manag. Organ. Hist. 2011, 6, 81–100.
65. Jost, J.T.; Amodio, D.M. Political ideology as motivated social cognition: Behavioral and neuroscientific evidence. Motiv. Emot. 2012, 36, 55–64.
66. Barker, D.C.; Carman, C.J. Representing Red and Blue: How the Culture Wars Change the Way Citizens Speak and Politicians Listen; Oxford University Press: New York, NY, USA, 2012.
67. Von Hoerner, S. The search for signals from other civilizations. Science 1961, 134, 1839–1843.
68. Ritzer, G.; Jurgenson, N. Production, consumption, prosumption: The nature of capitalism in the age of the digital ‘prosumer’. J. Consum. Cult. 2010, 10, 13–36.
69. Buckley, M.; Cowan, C.; McCarthy, M. The convenience food market in Great Britain: Convenience food lifestyle (CFL) segments. Appetite 2007, 49, 600–617.
70. Fox, S. Mass imagineering, mass customization, mass production: Complementary cultures for creativity, choice and convenience. J. Consum. Cult. 2019, 19, 67–81.
71. Schneider, M.; Deck, C.; Shor, M.; Besedeš, T.; Sarangi, S. Optimizing choice architectures. Decis. Anal. 2019, 16, 2–30.
72. Neftci, E.O.; Averbeck, B.B. Reinforcement learning in artificial and biological systems. Nat. Mach. Intell. 2019, 1, 133–143.
73. Meyer-Waarden, L.; Benavent, C. The impact of loyalty programmes on repeat purchase behaviour. J. Mark. Manag. 2006, 22, 61–88.
74. Ratner, R.K.; Kahn, B.E. The impact of private versus public consumption on variety-seeking behavior. J. Consum. Res. 2002, 29, 246–257.
75. Korn, C.W.; Bach, D.R. Heuristic and optimal policy computations in the human brain during sequential decision-making. Nat. Commun. 2018, 9, 325.
76. Farley, T.A.; Baker, E.T.; Futrell, L.; Rice, J.C. The ubiquity of energy-dense snack foods: A national multicity study. Am. J. Public Health 2010, 100, 306–311.
77. Dal Mas, D.E.; Wittmann, B.C. Avoiding boredom: Caudate and insula activity reflects boredom-elicited purchase bias. Cortex 2017, 92, 57–69.
78. Aluri, A.; Price, B.S.; McIntyre, N.H. Using machine learning to cocreate value through dynamic customer engagement in a brand loyalty program. J. Hosp. Tour. Res. 2019, 43, 78–100.
79. Khodabandehlou, S.; Rahman, M.Z. Comparison of supervised machine learning techniques for customer churn prediction based on analysis of customer behavior. J. Syst. Inf. Technol. 2017, 19, 65–93.
80. Hayakawa, H. Bounded rationality, social and cultural norms, and interdependence via reference groups. J. Econ. Behav. Organ. 2000, 43, 1–34.
81. Cohen, D.A.; Babey, S.H. Contextual influences on eating behaviours: Heuristic processing and dietary choices. Obes. Rev. 2012, 13, 766–779.
82. Dangles, O.; Irschick, D.; Chittka, L.; Casas, J. Variability in sensory ecology: Expanding the bridge between physiology and evolutionary biology. Q. Rev. Biol. 2009, 84, 51–74.
83. Moss, M. Salt Sugar Fat: How the Food Giants Hooked Us; Random House: New York, NY, USA, 2013.
84. Battin, J. When good animals love bad habitats: Ecological traps and the conservation of animal populations. Conserv. Biol. 2004, 18, 1482–1491.
85. Parry, J. Pacific islanders pay heavy price for abandoning traditional diet. Bull. World Health Organ. 2010, 88, 484.
86. Hawley, N.L.; McGarvey, S.T. Obesity and diabetes in Pacific Islanders: The current burden and the need for urgent action. Curr. Diabetes Rep. 2015, 15, 1–10.
87. Sparrow, W.A.; Newell, K.M. Metabolic energy expenditure and the regulation of movement economy. Psychon. Bull. Rev. 1998, 5, 173–196.
88. Finley, J.M.; Bastian, A.J.; Gottschall, J.S. Learning to be economical: The energy cost of walking tracks motor adaptation. J. Physiol. 2013, 591, 1081–1095.
89. Malik, V.S.; Hu, F.B. The role of sugar-sweetened beverages in the global epidemics of obesity and chronic diseases. Nat. Rev. Endocrinol. 2022, 18, 205–218.
90. Monteiro, C.A.; Lawrence, M.; Millett, C.; Nestle, M.; Popkin, B.M.; Scrinis, G.; Swinburn, B. The need to reshape global food processing: A call to the United Nations Food Systems Summit. BMJ Glob. Health 2021, 6, e006885.
91. Anzman-Frasca, S.; Ventura, A.K.; Ehrenberg, S.; Myers, K.P. Promoting healthy food preferences from the start: A narrative review of food preference learning from the prenatal period through early childhood. Obes. Rev. 2018, 19, 576–604.
92. Roberto, C.A.; Baik, J.; Harris, J.L.; Brownell, K.D. Influence of licensed characters on children’s taste and snack preferences. Pediatrics 2010, 126, 88–93.
93. Thorndike, A.N.; Riis, J.; Sonnenberg, L.M.; Levy, D.E. Traffic-light labels and choice architecture: Promoting healthy food choices. Am. J. Prev. Med. 2014, 46, 143–149.
94. Ashraf, M.A. What drives and mediates organic food purchase intention: An analysis using bounded rationality theory. J. Int. Food Agribus. Mark. 2021, 33, 185–216.
95. Howlett, E.; Davis, C.; Burton, S. From food desert to food oasis: The potential influence of food retailers on childhood obesity rates. J. Bus. Ethics 2016, 139, 215–224.
96. Breyer, B.; Voss-Andreae, A. Food mirages: Geographic and economic barriers to healthful food access in Portland, Oregon. Health Place 2013, 24, 131–139.
97. Alkon, A.H.; Cadji, Y.J.; Moore, F. Subverting the new narrative: Food, gentrification and resistance in Oakland, California. Agric. Hum. Values 2019, 36, 793–804.
98. Almalki, A.; Gokaraju, B.; Mehta, N.; Doss, D.A. Geospatial and machine learning regression techniques for analyzing food access impact on health issues in sustainable communities. ISPRS Int. J. Geo-Inf. 2021, 10, 745.
99. Amin, M.D.; Badruddoza, S.; McCluskey, J.J. Predicting access to healthful food retailers with machine learning. Food Policy 2021, 99, 101985.
100. Mooney, J.G.; Gurbaxani, V.; Kraemer, K.L. A process oriented framework for assessing the business value of information technology. ACM SIGMIS Database: Database Adv. Inf. Syst. 1996, 27, 68–81.
101. Ratcliffe, C.; McKernan, S.M.; Zhang, S. How much does the Supplemental Nutrition Assistance Program reduce food insecurity? Am. J. Agric. Econ. 2011, 93, 1082–1098.
102. Gollust, S.E.; Barry, C.L.; Niederdeppe, J. Partisan responses to public health messages: Motivated reasoning and sugary drink taxes. J. Health Politics Policy Law 2017, 42, 1005–1037.
103. Sainsbury, E.; Magnusson, R.; Thow, A.M.; Colagiuri, S. Explaining resistance to regulatory interventions to prevent obesity and improve nutrition: A case-study of a sugar-sweetened beverages tax in Australia. Food Policy 2020, 93, 101904.
104. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215.
105. Kogan, V. Do welfare benefits pay electoral dividends? Evidence from the national food stamp program rollout. J. Politics 2021, 83, 58–70.
106. Seiler, A. Let’s Move: The ideological constraints of liberalism on Michelle Obama’s obesity rhetoric. In The Rhetoric of Food: Discourse, Materiality, and Power; Frye, J., Bruner, M., Eds.; Routledge: New York, NY, USA, 2012; pp. 168–183.
107. Nurse, M.S.; Grant, W.J. I’ll see it when I believe it: Motivated numeracy in perceptions of climate change risk. Environ. Commun. 2020, 14, 184–201.
108. Dunning, D.; Balcetis, E. Wishful seeing: How preferences shape visual perception. Curr. Dir. Psychol. Sci. 2013, 22, 33–37.
109. Carlsson, F.; Kataria, M.; Lampi, E. How much does it take? Willingness to switch to meat substitutes. Ecol. Econ. 2022, 193, 107329.
110. Crockett, M.J. Models of morality. Trends Cogn. Sci. 2013, 17, 363–366.
111. Semwayo, D.T.; Ajoodha, R. A Causal Bayesian Network Model for Resolving Complex Wicked Problems. In Proceedings of the IEEE International IOT, Electronics and Mechatronics Conference 2021, Toronto, ON, Canada, 21–24 April 2021; pp. 1–8.
112. Heitlinger, S.; Bryan-Kinns, N.; Comber, R. Connected seeds and sensors: Co-designing internet of things for sustainable smart cities with urban food-growing communities. In Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial—Volume 2, Hasselt/Genk, Belgium, 20–24 August 2018; pp. 1–5.
113. Ham, Y.; Kim, J. Participatory sensing and digital twin city: Updating virtual city models for enhanced risk-informed decision-making. J. Manag. Eng. 2020, 36, 04020005.
114. Ajates, R.; Hager, G.; Georgiadis, P.; Coulson, S.; Woods, M.; Hemment, D. Local Action with Global Impact: The Case of the GROW Observatory and the Sustainable Development Goals. Sustainability 2020, 12, 10518.
115. Chrisinger, B.W.; Ramos, A.; Shaykis, F.; Martinez, T.; Banchoff, A.W.; Winter, S.J.; King, A.C. Leveraging citizen science for healthier food environments: A pilot study to evaluate corner stores in Camden, New Jersey. Front. Public Health 2018, 6, 89.
116. Brynjolfsson, E.; Mitchell, T. What can machine learning do? Workforce implications. Science 2017, 358, 1530–1534.
117. Termeer, C.J.; Dewulf, A. A small wins framework to overcome the evaluation paradox of governing wicked problems. Policy Soc. 2019, 38, 298–314.
118. Wishon, C.; Villalobos, J.R. Alleviating food disparities with mobile retailers: Dissecting the problem from an OR perspective. Comput. Ind. Eng. 2016, 91, 154–164.
119. Greenstein-Messica, A.; Rokach, L. Machine learning and operation research based method for promotion optimization of products with no price elasticity history. Electron. Commer. Res. Appl. 2020, 40, 100914.
120. Snoeck, A.; Merchán, D.; Winkenbach, M. Route learning: A machine learning-based approach to infer constrained customers in delivery routes. Transp. Res. Procedia 2020, 46, 229–236.
121. Puyt, R.; Lie, F.B.; De Graaf, F.J.; Wilderom, C.P. Origins of SWOT analysis. Acad. Manag. Proc. 2020, 1, 17416.
122. Patriotta, G. Sensemaking on the shop floor: Narratives of knowledge in organizations. J. Manag. Stud. 2003, 40, 349–376.
123. Weick, K.E. Organized sensemaking: A commentary on processes of interpretive work. Hum. Relat. 2012, 65, 141–153.
124. Bouizegarene, N.; Ramstead, M.; Constant, A.; Friston, K.; Kirmayer, L. Narrative as active inference. PsyArXiv 2020, preprint.
125. Namvar, M.; Intezari, A.; Akhlaghpour, S.; Brienza, J.P. Beyond effective use: Integrating wise reasoning in machine learning development. Int. J. Inf. Manag. 2022, 102566.
126. Martin-Maroto, F.; de Polavieja, G.G. Algebraic Machine Learning. arXiv 2018, arXiv:1803.05252.
127. Malov, D. Quantum Algebraic Machine Learning. In Proceedings of the 2020 IEEE 10th International Conference on Intelligent Systems, Varna, Bulgaria, 28–30 August 2020; pp. 426–430.
Figure 1. Triple-loop learning in heterarchical preference competitions. Based on innate preferences for energy-positive options and ingroup-positive options, prosumption preferences are inferred from targeted and general preference options across triple-loop learning.
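To make the triple-loop structure in Figure 1 concrete, the following minimal Python sketch simulates a preference competition between a hypothetical targeted, energy-dense option and a general, healthier option. It is a reading aid only, not an implementation from this paper: the option names, feature values, softmax choice rule, and the update rules assigned to each loop are all assumptions introduced here for illustration.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Convert option scores into choice probabilities."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def score(option, w_energy, w_ingroup, habits):
    """Score an option against the two assumed innate survival priors."""
    return (w_energy * option["energy"]
            + w_ingroup * option["ingroup"]
            + habits.get(option["name"], 0.0))

# Hypothetical prosumption options: a targeted energy-dense offer versus
# a general healthier option (feature values are illustrative only).
options = [
    {"name": "targeted_snack", "energy": 0.9, "ingroup": 0.7},
    {"name": "general_salad", "energy": 0.3, "ingroup": 0.4},
]

habits = {}                      # loop 1 state: habit strength per option
w_energy, w_ingroup = 1.0, 1.0   # loop 2/3 state: preference weightings

for reframing in range(3):          # loop 3: revising how preferences are framed
    for revision in range(5):       # loop 2: revising current preference weightings
        for trial in range(20):     # loop 1: choices under current preferences
            probs = softmax([score(o, w_energy, w_ingroup, habits)
                             for o in options])
            chosen = random.choices(options, weights=probs, k=1)[0]
            # Loop 1 update: each choice reinforces itself as a habit.
            habits[chosen["name"]] = habits.get(chosen["name"], 0.0) + 0.05
        # Loop 2 update: weightings drift toward the innate energy prior,
        # mimicking targeted options winning the preference competition.
        w_energy *= 1.02
    # Loop 3 update: only here is the innate weighting itself questioned,
    # e.g., after reflection on long-term health outcomes.
    w_energy *= 0.8

print(sorted(habits.items(), key=lambda kv: -kv[1]))
```

Under these assumptions, running the sketch shows the pattern the figure depicts: loops 1 and 2 alone let the energy-positive targeted option accumulate habit strength, and only the loop 3 update revisits the innate weighting that drives that outcome.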
Table 1. ML implementation constraints.

Effect Type | Example | Constraint
--- | --- | ---
Automational effects, e.g., some work conducted by ML instead of by people | Data analyses related to an assessment of the efficacy of food programs | ML works well when goals can be clearly described, but this is difficult when there are opposing beliefs about goals
Informational effects, e.g., ML provides information that can support human decision making | Information from comparative analyses of healthy food initiatives | Definition of causal interactions can be confounded by opposing ingroup-outgroup exchanges
Transformational effects, e.g., ML supports radical change in products and/or processes | Addressing wicked problems in global food prosumption | Stakeholder disagreements on the definition and character of wicked problems