Article

A Taxonomy of Embodiment in the AI Era

Thomas Hellström, Niclas Kaiser and Suna Bensch
1 Department of Computing Science, Umeå University, 901 87 Umeå, Sweden
2 Department of Psychology, Umeå University, 901 87 Umeå, Sweden
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2024, 13(22), 4441; https://doi.org/10.3390/electronics13224441
Submission received: 25 September 2024 / Revised: 24 October 2024 / Accepted: 9 November 2024 / Published: 13 November 2024
(This article belongs to the Special Issue Metaverse and Digital Twins, 2nd Edition)

Abstract

This paper presents a taxonomy of agents’ embodiment in physical and virtual environments. It categorizes embodiment based on five aspects: the agent being embodied, the possible mediator of the embodiment, the environment in which sensing and acting take place, the degree of body, and the intertwining of body, mind, and environment. The taxonomy is applied to a wide range of embodiments of humans, artifacts, and programs, including recent technological and scientific innovations related to virtual reality, augmented reality, telepresence, the metaverse, digital twins, and large language models. The presented taxonomy is a powerful tool to analyze, clarify, and compare complex cases of embodiment. For example, it makes the choice between a dualistic and a non-dualistic perspective of an agent’s embodiment explicit and clear. The taxonomy also aided us in formulating the term “embodiment by proxy” to denote how seemingly non-embodied agents may affect the world by using humans as “extended arms”. We also introduce the concept of “off-line embodiment” to describe large language models’ ability to create an illusion of human perception.

1. Introduction

The role and importance of “embodiment” for natural and artificial intelligence have been analyzed and debated for millennia in a range of scientific communities such as philosophy, psychology, linguistics, neuroscience, computer science, and artificial intelligence. The meaning of the term has varied widely over time, yet no consensus has been reached [1], and today it is used to describe physical and virtual systems in new ways that overlap with, and sometimes even contradict, traditional usage of the term. The introduction of the metaverse [2] and associated notions such as VR, AR, and digital twins adds further complexity to the term.
This paper’s primary contribution is a unified taxonomy for embodiment. The taxonomy is based on five aspects: the agent being embodied, the possible mediator of the embodiment, the environment in which the embodiment takes place, the agent’s ability to sense and act, and the intertwining of body, mind, and environment. These aspects were identified by analyzing earlier work on how embodiment is defined and characterized, and on recent technological developments that require extensions of traditional notions. As a second contribution, we apply the taxonomy to analyze, describe, and discuss embodiment for a wide range of agents, including humans, robots, other artifacts, and computer programs. Special attention is given to technological and scientific innovations related to AI, the metaverse, and digital twins.
The presented taxonomy is a powerful tool to analyze, clarify, and compare complex cases of embodiment. For example, it makes the choice between a dualistic and a non-dualistic perspective of an agent’s embodiment explicit and clear. The taxonomy also aided us in formulating the new term “embodiment by proxy” to denote how seemingly non-embodied agents may affect the world by using humans as “extended arms”. We also introduce the concept of “off-line embodiment” to describe large language models’ ability to create an illusion of human perception.
Section 2 investigates how earlier works describe and characterize embodiment. Based on five identified aspects, we present our taxonomy in Section 3. In Section 4, the embodiment of humans, artifacts, and computer programs is discussed and categorized using the taxonomy. Finally, Section 5 contains a discussion of the results, possible extensions to the work, and some final thoughts about the current situation, in which both scientific and commercial focus is shifting toward non-embodied or virtually embodied solutions.

2. What Does It Mean to Be Embodied?

As mentioned above, there is no universally agreed-upon meaning of the term embodiment, and it is today applied to physical and virtual systems in many different ways. To support our aim of creating a unified taxonomy, this section summarizes and analyzes some of the most relevant attempts to define and characterize embodiment.
The concept of embodiment traces back to René Descartes’ influential work in the 17th century [3]. Similar thoughts were certainly expressed much earlier, for example by Plato and Aristotle, but Descartes was arguably one of the first Western philosophers advocating for a clear separation of mind and body. Descartes argued that a human comprises an immaterial spirit inside a mechanical body. The essential attributes of humans, such as thinking, willing, and conceiving, were attributes of the spirit. The role of the physical body was to provide inputs passed from the sensory organs to the immaterial spirit and to receive signals to activate muscles and enable motion. This mind–body dualism remained the major paradigm and model in science and Western medicine for the following three centuries.
Some 300 years after Descartes, philosophers such as Husserl, Heidegger, and Merleau-Ponty [4] started to question the mind–body dualism and investigated how the human mind depends on the body and vice versa. The emerging field of embodied cognition emphasized that the human body is intertwined with the mind. Cognitive processes depend not only on the mind, but also on the physical body, and both sensing and acting are intertwined with the mind, each other, and the environment. A few examples of how this has appeared in research are the following:
  • Lakoff and Johnson [5] argued that the development of language, particularly metaphors, is tightly connected to our bodily experiences.
  • Humans’ fine-motor skills are tightly connected to sensory–motor coordination [6].
  • Perception has been shown to directly affect actions. For example, hearing or reading words associated with light, such as “day” or “lamp”, causes the pupils to constrict, beyond voluntary control [7].
  • The theory of symbol grounding describes how formal symbols or representations must be grounded in non-symbolic perceptions through intertwined sensing and acting to create meaning and understanding.
  • Radical embodied cognitive neuroscience proposes that cognition and emotion are inseparable in the brain and should be studied as a whole brain–body–environment system, fully merging the concepts of body and mind [8].
  • The human central nervous system creates several models connecting sensing and acting, for example, “forward models” computing predicted sensory signals as a result of an executed muscle movement (for an overview, see [9]).
In an influential work by Wilson [10], the following claims are made: cognition is situated, we off-load cognitive work onto the environment, the environment is part of the cognitive system, cognition is for action, and (even) off-line cognition is body-based. This points to the importance of a body for sensing and acting, and of a continuous interplay between a cognitive process, sensing, acting, and the environment. This interplay is sometimes denoted as “structural coupling between agent and environment” or “physical/sensorimotor embodiment” [11,12], and is also mentioned by Maturana and Varela [13,14]. Quick and Dautenhahn [15] defined a system as embodied in an environment if the system can perturb the environment and vice versa. They also suggested that embodiment can be quantified by a complexity measure applied to the perturbation. This perturbation depends on two factors: the available sensors and actuators, and “the dynamical relationship between system and environment over all possible interactions”. Duffy and Joue [16] use the terms “ON-World” and “IN-World” to distinguish between merely placing a controller in a physical environment (ON-World) and having an agent interacting, participating, and adapting in the world (IN-World). For example, a self-driving car can be seen as more IN-World than a telepresence robot, which is more ON-World.
Recent technological developments have extended the embodiment concept in several respects. Most importantly, embodiment is not restricted to the physical world, but may also take place in virtual worlds, for example through virtual reality (VR) [17] or software agents (see Section 4.3) (In this paper, we use the word “environment” interchangeably with “world”).
It should be noted that the term embodiment often has a special meaning in the VR community [18,19,20], where it often refers to an agent’s feeling of presence in a virtual environment. Even if presence is tightly connected to what we here denote as embodiment, it is also fundamentally different since presence is a subjective experience, while we regard embodiment as “an inherent property of an agent” [16].

3. A Taxonomy of Embodiment

The overview and analysis in the previous section enabled us to identify five dimensions that characterize specific cases of embodiment. The dimensions are as follows (definitions and explanations follow afterwards):
  • Agent—the entity being embodied;
  • Mediator—the entity sensing and acting;
  • Environment—where sensing and acting take place;
  • Degree of body—according to Definition 1;
  • Degree of intertwining—according to Definition 2.
These dimensions define the proposed taxonomy and are described in detail below. Along the first dimension, we specify the agent (the embodied entity). Our usage of the term “agent” relates to the common notion of “autonomous agent”, for example as defined by Franklin and Graesser [21]: “An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future”.
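To make the taxonomy concrete, the following minimal Python sketch (our own illustration, not part of the paper’s apparatus; the class and field names, the use of None for non-mediated embodiment, and the scaling of the two degree dimensions to [0, 1] are all assumptions) encodes a case of embodiment as a record along the five dimensions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmbodimentCase:
    """One case of embodiment along the five dimensions of the taxonomy."""
    agent: str                           # dimension 1: the entity being embodied
    environment: str                     # dimension 3: where sensing and acting take place
    mediator: Optional[str] = None       # dimension 2: None for non-mediated embodiment
    degree_of_body: float = 0.0          # dimension 4 (Definition 1), here scaled to [0, 1]
    degree_of_intertwining: float = 0.0  # dimension 5 (Definition 2), here scaled to [0, 1]

    @property
    def non_embodied(self) -> bool:
        # An agent that can neither sense nor affect its world is non-embodied.
        return self.degree_of_body == 0.0
```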
To characterize simultaneous embodiment in different worlds (and multiple embodiments in the same world), we introduce two major categories of embodiments: non-mediated and mediated. In non-mediated embodiment, the agent senses and acts in the environment using its own sensors and actuators. This is often described with expressions such as “humans are embodied” [12] or “embodied robots” [22]. In mediated embodiment, the agent senses and acts in an environment using sensors and actuators belonging to a mediator [23,24]. The mediator may reside in the same environment as the agent; for example, when a human is embodied as a telepresence robot. The mediator may also reside in another environment, such as when a human is embodied as an avatar in a computer game. Mediated embodiment is sometimes referred to as “remote embodiment” [25], or with expressions such as “is embodied as” [26], “is embodied via” [27], or “is embodied through” [28]. In this paper, we use the expression “is embodied as”.
As a second dimension, we specify the mediator (the entity being equipped with sensors and actuators). In many cases, this coincides with the agent, and the mediator is then not explicitly specified. Along the third dimension, we specify in which world sensing and acting take place. As noted in the analysis in the previous section, this may be the physical world or one of several possible virtual worlds. If sensing and acting take place in the physical world, the agent is said to be “physically embodied”. If the agent senses and acts in a virtual world, it is “virtually embodied” in that world (It can be argued that the physical world is simulated and hence rather virtual than physical [29,30]. However, we stick to the common practice of referring to the world we live in as “the physical world”, and all other worlds as “virtual”).
Technological development has introduced techniques like Augmented Reality (AR) [31,32], which extend humans’ perception and action beyond what is possible for a “plain” human. To distinguish between embodiment for agents with varying abilities to perceive and act, we introduce a fourth dimension, denoted degree of body, defined as follows:
Definition 1. 
The degree of body for an agent in a given environment is the proportion of aspects of the environment that the agent can sense and affect.
This dimension is also relevant if we want to distinguish between the embodiment of artifacts and software agents with varying sensing and acting capabilities. For example, a smart speaker that is connected to thermometers as well as heaters in a house may be regarded as having a higher degree of body than a regular chatbot (see Section 4.3 for more details and examples). If the degree of body is zero, i.e., if the agent can neither perceive nor affect the world it is in, we say that the agent is “non-embodied”.
To accommodate levels of coupling between agent and environment, as suggested by, for example, Duffy and Joue [16] and Quick and Dautenhahn [15] (see Section 2), we introduce a fifth dimension, denoted degree of intertwining, defined as follows:
Definition 2. 
The degree of intertwining of body, mind, and environment for an agent is the proportion of aspects of the environment that it senses and affects in an intertwined manner. Intertwining refers to a dependency between sensing, thinking, and acting.
The term intertwining captures to what extent a virtual or physical agent’s sensing, thinking, and acting depend on each other, and therefore on the environment it operates in. The term is commonly used to characterize embodiment, with a history that goes back to both Merleau-Ponty and Husserl [33].
Definitions 1 and 2 should not necessarily be interpreted in a mathematical sense, since “the proportion of aspects” in practice may be hard, or even impossible, to estimate (For example, we cannot possibly know “all aspects” of the physical world with our limited senses and technology). Rather, they should be seen as a way to step away from viewing agents as either “embodied” or “non-embodied”, or either intertwined or not with the environment. As an example, which also illustrates the complementarity of the body and intertwining dimensions, we consider a robot equipped with all sorts of sensors, actuators, and computing power to process sensor data and compute control signals. This robot therefore scores “high” in the body dimension. However, if these capabilities are not fully utilized in an intertwined fashion, the robot scores “low” in the intertwining dimension (by “high” and “low”, we refer to scores close to the maximum and minimum proportions mentioned in Definitions 1 and 2). Hence, agents with identical bodies may very well have varying degrees of intertwining.
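As a toy illustration of this complementarity, the sensor-rich but poorly coupled robot from the paragraph above can be expressed with the record sketched earlier in this section (the numeric scores are made-up placeholders, not measurements):

```python
# A sensor-rich robot whose capabilities are not used in an intertwined fashion:
# "high" along the body dimension, "low" along the intertwining dimension.
under_coupled_robot = EmbodimentCase(
    agent="robot",
    environment="physical world",
    degree_of_body=0.8,          # many aspects can be sensed and affected
    degree_of_intertwining=0.1,  # sensing, thinking, and acting barely depend on each other
)

# An agent with an identical body may still score differently on intertwining.
well_coupled_robot = EmbodimentCase(
    agent="robot",
    environment="physical world",
    degree_of_body=0.8,
    degree_of_intertwining=0.7,
)
```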

4. Applying the Taxonomy

In the following subsections, we apply the proposed taxonomy to three types of embodied agents: humans, physical robots and other artifacts, and computer programs. This overview does not aim to review all possible cases of embodiment. The aim is rather to illustrate how the taxonomy is useful to characterize, compare, and distinguish between quite different cases of embodiment. Furthermore, the analyses of the nature of the embodiment cases often lead to insights that are novel contributions in their own right. In the text, we refer to rows R1–R17 in Figure 1, Figure 2 and Figure 3, with columns specifying the agent, the mediator, and the environment (the world where sensing and acting take place) for several of the described cases. For non-mediated embodiment, i.e., when the agent uses its own sensors and actuators, the mediator column is left blank. For mediated embodiment, the agent uses sensors and actuators belonging to the mediator, specified in the mediator column. The right-most column is a combination of the two dimensions degree of body and degree of intertwining. Since these degrees may vary considerably, even for a given category of embodiment, we indicate them with a bar of varying length, mainly to illustrate relative differences between categories (rows).
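Read this way, each row of the figures is an instance of the record sketched in Section 3. A few of the rows described in the running text could, for example, be encoded as follows (the degree scores are again placeholders, chosen only to suggest relative bar lengths):

```python
rows = [
    EmbodimentCase("human", "physical world",
                   degree_of_body=0.7, degree_of_intertwining=0.9),  # R1
    EmbodimentCase("human", "physical world", mediator="telepresence robot",
                   degree_of_body=0.3, degree_of_intertwining=0.3),  # R3
    EmbodimentCase("human", "virtual world", mediator="avatar",
                   degree_of_body=0.4, degree_of_intertwining=0.4),  # R4
    EmbodimentCase("digital twin program", "physical world", mediator="real twin",
                   degree_of_body=0.5, degree_of_intertwining=0.5),  # R17
]
for row in rows:
    print(row)
```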

4.1. Humans

In this subsection, we provide examples of how humans may be embodied in the physical world as well as in various virtual worlds.
The role and function of human embodiment are of interest to AI research for several reasons. Embodiment is often described as an important ingredient for human intelligence. For example, the embodiment hypothesis [36,37] states that “intelligence emerges in the interaction of an agent with an environment…”. If we aim at creating artificial intelligence, it is therefore worthwhile to study, and possibly mimic, human embodiment. Embodiment is also often described as an important ingredient for user experience in human–robot interaction. For example, the embodiment hypothesis in social robotics states that “physical embodiment has a measurable effect on the perception of social interactions” [38,39]. If we aim at creating AI that interacts well with humans, it is therefore worthwhile to study, and possibly mimic, human embodiment.
Referring to our taxonomy, humans are strongly embodied in the physical world along the intertwining dimension, at the sensory–motor, interaction, and cognitive levels (R1). Humans also score high along the degree of body dimension, even if our Definition 1 allows for even higher scores. Augmented Reality (AR) glasses [31,32] increase a human’s embodiment along the degree of body dimension by displaying additional information about the physical world (R2).
Telepresence [40] offers an additional type of human embodiment in the physical world, mediated by a mobile robot equipped with cameras, other sensors, and actuators (R3). A remotely located human operator may be wirelessly connected via displays, feedback devices, and control interfaces to both “sense” what the robot senses and control the robot’s motion in a way that creates a feeling of “being there” (it is interesting to note that a human operator of a telepresence robot may be viewed as embodied twice in the physical world (R1 and R3), albeit at different locations). The degree of body and degree of intertwining are both limited since the amount of sensory information, feedback, and ability to control is typically limited [41]. As described by Sheridan [42], the sense of presence in telepresence is determined by three factors: (1) the extent of transferred sensory information, (2) the control of the relation of sensors to environment, and (3) the ability to modify the physical environment. It is noteworthy that (1) and (3) correspond well to our body dimension, and (2) corresponds to our intertwining dimension. Video conferencing is a simplified version of telepresence where the robot is replaced with a stationary computer screen, video camera, loudspeaker, and microphone, resulting in lower scores for the degree of body and degree of intertwining.
Several recent technological innovations have introduced human embodiment in virtual worlds. In metaverse platforms such as Second Life (often cited as the first example of the metaverse [43]), Microsoft Teams, and numerous multiplayer online games (for example, Roblox and Fortnite), human users are virtually embodied as avatars—with bodies acting and interacting in a virtual world (R4). Hence, these avatars act as mediators for virtually embodied humans. Virtual Reality (VR) equipment increases the feeling of being immersed in the virtual environment [17], including the sensation that the avatar’s virtual body parts are parts of one’s own body (R5). There is substantial research on how this affects the feeling of first-person embodiment in the virtual world (see [44] for an overview). The scores along the body and intertwining dimensions increase when using such VR equipment compared to watching your avatar on a screen (R4). Spatial Computing, for example implemented as smart glasses, introduces a similar, but more complex, situation by simultaneously supporting a human user’s presence, and hence embodiment, in both the physical and virtual world (R6).

4.2. Physical Robots and Other Artifacts

In this subsection, we provide examples of how physical robots and other artifacts may be embodied in various ways.
Physical robots are, to varying extents, embodied in the physical world. Traditional industrial robots have no or only a few sensors and cannot move around freely (R7). This limits their embodiment along the body dimension, and also along the intertwining dimension, since there are very limited, if any, sensory–motor interactions with the environment. Some modern industrial and field robots are equipped with more advanced sensors, and with control mechanisms that build on sensory–motor interaction. One example is visual servoing, which uses direct image feedback to control a robot gripper toward its goal, instead of depending on unreliable absolute position estimates [45]. Another example is force feedback, which avoids collisions and supports safe interaction with humans, as implemented in, for example, ABB’s YuMi robot [46]. Such robots score higher along both the body and the intertwining dimensions.
A significant amount of the early AI research concerned mobile robots. Shakey was one of the most recognized examples, as the first robot that could perceive and reason about its surroundings [47]. However, the “minds” of these robots were distinctly separated from the world through sensors, actuators, and symbol systems. In our taxonomy, such robots therefore score low along the intertwining dimension, albeit higher than traditional industrial robots (R8).
The early ideas on embodied cognition were picked up by AI researchers such as Brooks [48] at the end of the 1980s (for an overview, see [49]). He argued that robots need to be based on sensory–motor coupling with the environment, and built several robots that did not rely on detailed models of the world. Instead, he claimed that “the world is its own model”, which can simply be accessed with sensors. Other researchers, for example, Pfeifer and Bongard [50], demonstrated how computations can be outsourced to both hardware and the environment; for example, in the design of walking robots with neither computers nor sensors. Such robots score higher along the intertwining dimension. The shortcomings of this “Embodied AI” approach were later recognized and led to approaches such as “enactive artificial intelligence” [51].
The last decades’ development of faster computers and more accurate and robust sensors has enabled the development of self-driving cars. Most often, they follow the GOFAI Sense-Think-Act paradigm, either with separate modules (for object detection, classification, localization, planning, vehicle control, etc.), or with end-to-end driving, where sensor data are processed by a single “thinking” module to produce suitable actions [52]. Neural networks and deep learning are extensively used, which obviously makes the approach less symbolic, but these cars still score low along the intertwining dimension (R9). A recent approach with connected systems aims at distributing basic operations among cars, pedestrians, and infrastructure such as traffic lights and traffic signs [53]. This is clearly in the spirit of embodied cognition and distributed cognitive processes [54], and as such increases self-driving cars’ scores along both the body and intertwining dimensions.
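A minimal sketch of the modular Sense-Think-Act structure described above (the module boundaries and all names are our own stubs, not any particular vehicle stack) illustrates why such architectures score low on intertwining: sensing, thinking, and acting are strictly sequential, separable stages:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class WorldModel:
    objects: List[str]          # detected and classified objects
    pose: Tuple[float, float]   # estimated vehicle position

def sense(camera_frame: bytes, lidar_scan: list) -> WorldModel:
    # Stub perception: a real stack would run detection, classification,
    # and localization modules here.
    return WorldModel(objects=["car", "pedestrian"], pose=(0.0, 0.0))

def think(model: WorldModel, goal: Tuple[float, float]) -> List[str]:
    # Stub planner: derive high-level actions from the world model
    # (a real planner would also use the goal to plan a trajectory).
    return ["brake"] if "pedestrian" in model.objects else ["keep_lane"]

def act(plan: List[str]) -> None:
    # Stub control: would emit steering and throttle commands.
    print("executing:", plan)

# One loop iteration: each stage completes before the next begins, so there
# is no intertwining of sensing, thinking, and acting.
act(think(sense(b"", []), goal=(100.0, 0.0)))
```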
Another recent technological development is smart buildings [55], with automated systems for controlling resources and functions such as electricity, security, air conditioning, and access control. Such buildings are equipped with sensors and actuators, as well as interfaces to the Internet and to the people who use the buildings. Hence, a smart building may be regarded as a physically embodied artifact, with degrees of body and intertwining that depend on the sophistication of the sensing and actuator systems (R10).

4.3. Computer Programs

In this subsection, we provide examples of how different types of computer programs may be embodied in various ways, and in various worlds.
A personal computer’s operating system may be regarded as an embodied agent using the computer as a mediator (R11). However, the computer as such has very limited sensing and acting capabilities. A keyboard may, of course, be seen as a sensor, and the screen as an actuator, but compared to, for example, a robot, a plain computer such as an IBM PC equipped with the operating system MS-DOS cannot perceive much about the world, and cannot affect much of it either. Nevertheless, with our terminology, the MS-DOS software is physically embodied with the IBM PC acting as mediator, albeit with low scores along the body and intertwining dimensions (see Section 5 for an alternative categorization).
One interesting case of embodied software is chatbots, which are programs capable of conducting a conversation with a human. They were introduced by AI researchers as early as the 1960s, of which the most notable example is the program ELIZA by Weizenbaum [56]. Regarding embodiment, basic chatbots are computer programs that act through a mediating computer that produces text on a screen or speech through a loudspeaker. Their sensing is typically limited to understanding written or spoken text. Hence, a plain chatbot could be seen as embodied in the physical world, albeit with very limited abilities to both sense and act. McGregor [57] supports this view and even considers it possible that some chatbots should be regarded as embodied agents.
Some chatbots, for example the digital assistant Siri, appear as animated figures on the computer or phone screen, and several experiments have shown that they trigger human responses in the same way as robots do (for an overview, see [39]). Such chatbots score higher along the body dimension (e.g., due to the animation effects) and have the potential to score higher also along the intertwining dimension, since they may interact in a more complex way with humans (R12). Smart speakers, such as Google Speech, Alexa, and Amazon Echo, are mediators for chatbot programs, with dedicated hardware that, for example, enables them to sense temperature and doorbells, and control thermostats, locks, and lighting in a house (R13). The functionality of this extra hardware, and how it is controlled, affect both the degree of body and the degree of intertwining. A fictitious example of a chatbot that is highly advanced in this respect is the program controlling HAL 9000, the sentient supercomputer in Arthur C. Clarke’s Space Odyssey series and the movie 2001: A Space Odyssey. The HAL computer, had it existed in the physical world, would have a high degree of body, since it controls the spacecraft as well as interacts with the crew (for example, it manages lip reading). It also has a high degree of intertwining, since its actions depend on its percepts in an advanced fashion (for example, it locks pod bay doors when deemed necessary to reach its goal). For an overview of HAL’s appearance in the movie, see https://en.wikipedia.org/wiki/HAL_9000 (accessed on 8 November 2024).
A new generation of chatbots, most notably ChatGPT [58], is driven by large language models (LLMs), which are trained on massive collections of text written by humans. As a result, ChatGPT knows a significant amount about the physical world, and it can be argued that it, albeit indirectly, “senses” the physical world. While the information encoded in ChatGPT is mostly “non-personal”, it can also generate output that expresses personal sensory experiences. For example, when given the prompt “Suppose you first eat an orange, and then drink milk. Answer in one short sentence, how it tastes”, ChatGPT responds “It tastes unpleasant, with a sour, curdled flavor and an off texture”. Obviously, this does not refer to the chatbot’s own experience of drinking milk, but rather to the collective expressed experiences of many humans who have mixed oranges and milk. Nevertheless, the chatbot creates a strong illusion of having had experiences resembling human perception, and we refer to this as “off-line embodiment”. Also, the acting (producing text output) by ChatGPT and the intertwining of sensing and acting (i.e., dialogue management) are more advanced compared to traditional chatbots. The degrees of body and intertwining may become even higher through embodiment by proxy mechanisms (see below). An alternative way to describe the embodiment of ChatGPT is to regard it as living in a virtual “text world”, for which the text-based user interface provides sensing and acting capabilities [59].
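The quoted exchange can be reproduced with, for example, OpenAI’s Python client (a sketch assuming the openai package’s chat-completions interface; the model name is a placeholder, and the exact wording of the reply will vary between runs):

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": ("Suppose you first eat an orange, and then drink milk. "
                    "Answer in one short sentence, how it tastes"),
    }],
)

# The reply draws on collectively expressed human experiences -- "off-line
# embodiment" -- not on any sensory experience of the model's own.
print(response.choices[0].message.content)
```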
A recent phenomenon is artificial avatars [60], also known as fake avatars [61] or AI avatars [62]. These programs are embodied in virtual environments in the same way as the non-artificial avatars described in Section 4.1. However, while a non-artificial avatar is controlled by a human, an artificial avatar is controlled by an overarching computer program (e.g., in the metaverse). Unlike non-artificial avatars, an artificial avatar may be described as a virtually embodied computer program that directly senses and acts in a virtual world without the involvement of a mediator (R14).
Some computer programs are autonomous agents [21] acting in a virtual environment that is not directly accessible by humans, and mainly interacting with other similar agents and programs. Such software agents may, for example, act as spam filters, or transfer emails over the Internet, the virtual world in which they are embodied (R15).
Embodiment is currently receiving increased attention and acceptance in the machine learning community, where the limitations of only using observational data are noted as a hurdle for further progress. However, instead of turning to physically embodied robot solutions, the suggested approach is to simulate the physical world and let a simulated virtual robot move around in this world to collect data from which it learns. Somewhat surprisingly, the approach is denoted “Embodied AI” [34,35]. Given the well-known history of the overloaded terms “embodiment” and “embodied AI” [51], this certainly does not contribute to clarifying the terminology. Nevertheless, several interesting results have been presented; see [63] for a recent overview. With our terminology, we may describe the setup as a software module being embodied as a simulated robot in a virtual world created and maintained by the simulator (R16). The degree of intertwining depends on the sophistication of the simulator.
A recent innovation is digital twins, which also build on simulator technology [64]. A major difference compared to a regular simulator is that a digital twin is a simulation of a specific physical system of components, for example a factory, or a car driving in a city with other cars and pedestrians (denoted the “real twin”). Another difference is that the real twin may be connected to sensors that provide real-time data that feed into the digital twin. Sometimes the digital twin also outputs data that connect to actuators in the real twin [65]. Hence, a digital twin program may be described as embodied in the physical world with the real twin acting as mediator (R17). The degree of body and the degree of intertwining depend on the sophistication of the simulation, the type and extent of real-time data being exchanged, and how data are processed by the digital and real twins.
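The data flow just described can be sketched as a simple synchronization loop (a generic illustration, not any specific digital-twin framework; the state variables and the actuator command are invented placeholders):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DigitalTwin:
    """Simulation kept in sync with one specific physical system (the real twin)."""
    state: Dict[str, float] = field(default_factory=dict)

    def ingest(self, sensor_data: Dict[str, float]) -> None:
        # Real-time sensor data from the real twin keeps the simulation in
        # sync: this is the twin's "sensing" of the physical world.
        self.state.update(sensor_data)

    def step(self) -> List[str]:
        # Advance the simulation and, optionally, derive commands sent back
        # to actuators in the real twin: this is the twin's "acting".
        commands = []
        if self.state.get("temperature", 0.0) > 80.0:
            commands.append("reduce_line_speed")  # placeholder actuator command
        return commands

twin = DigitalTwin()
twin.ingest({"temperature": 85.0})  # data from the real twin's sensors
print(twin.step())                  # ['reduce_line_speed'] back to the real twin
```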

4.4. Embodiment by Proxy

As described by Tegmark [66] (p. 148), a very smart AI program could manipulate a human into granting it more control over the world, eventually leading to a complete takeover by the AI. A less futuristic, but equally dystopian, scenario takes place in the metaverse, with AI-controlled artificial avatars manipulating human-controlled avatars (and their associated humans) in ways that have clear effects in the physical world (such as how people vote, what they buy, and who they trust) [60]. In both scenarios, the AI program exerts what we denote as “embodiment by proxy”, by almost literally using humans as “extended arms”.
It is noteworthy that embodiment by proxy does not require a sentient AI program that tricks or deceives people to reach its own goals. Already today, both individuals and entire companies voluntarily adapt their behavior to fit decisions made by computer programs and artifacts, even though the programs and artifacts were designed to fit requirements defined by the humans. Such man–computer symbiosis has been anticipated since the invention of computers [67], and nowadays even a one-sided handover of company management and leadership to computers is being considered [68]. This means that the computers control the physical world indirectly via the involved humans, through an embodiment-by-proxy mechanism. Furthermore, already today many computer systems are physically embodied beyond the keyboard and screen and have direct control over, for example, salary payments, power grids, communication between people, and distribution of news. Both indirect and direct control of the world is expected to increase as AI systems and metaverse environments become more powerful. Hence, the non-embodied appearance of computer programs is deceptive, since they need neither arms nor legs to affect the physical world.

5. Discussion

The discussion in this section is divided into two parts: one focusing on the suggested taxonomy, and one on the current and future state and status of embodiment, analyzed in light of the taxonomy and the examples of embodiment discussed in Section 4.

5.1. The Taxonomy

We introduced a five-dimensional taxonomy that helps categorize and understand various embodiment cases. The distinction between the embodied agent and the mediator (dimensions 1 and 2, respectively) may appear as a commitment to the Cartesian mind–body dualism. According to this paradigm [3], a human was made up of an immaterial spirit inside a mechanical body. Thinking, willing, and conceiving were attributes of the spirit. The role of the physical body was to provide inputs passed from the sensory organs to the immaterial spirit, and to receive signals to activate muscles and enable motion. Our taxonomy certainly allows for human embodiment to be categorized as such, by regarding the soul as the agent (dimension 1) and the body as the mediator (dimension 2). However, R1 in Figure 1 illustrates the more recent embodied cognition paradigm, wherein the entire human is regarded as an embodied agent. A dualistic approach is taken to describe teleoperation (R3), wherein the human is embodied as a telepresence robot, with the latter acting as a mediator. At R11, the operating system MS-DOS and the IBM PC are also described in a dualistic fashion. However, it would also be possible to regard the computer and operating system as one embodied agent, just as we describe the embodied human at R1. Hence, the taxonomy allows for both paradigms, and the choice is up to the user. We believe that this explicit option helps in understanding and discussing complex cases of embodiment.
Dimensions 4 (degree of body) and 5 (degree of intertwining) may be perceived as controversial. According to Definitions 1 and 2, a blind person would score lower along both dimensions than a seeing person (everything else being equal), and hence be “less embodied”. However, the blind person’s hearing may be more intertwined with both thinking and acting compared to a seeing person’s. The definitions do not specify how these two conditions should be weighted when assigning values to the body and intertwining dimensions. However, as previously mentioned, the dimensions should not be interpreted in a mathematical sense, but rather as a way of expressing the non-binary nature of both body and intertwining. A related example is illustrated at R2 in Figure 1, where we attribute a higher degree of body and intertwining to a person equipped with AR glasses, who may be regarded as “more embodied” than the person at R1.
The proposed taxonomy may be used to classify cases of social embodiment [69,70,71] and socially embodied AI [72], even if no examples are given in this paper. For this, the usual meanings of sensing and acting have to be extended to also cover mechanisms for perceiving and enacting social cues, social signals, and social norms. The intertwining of sensing and acting should also be extended to include social interaction. One example would be a chatbot program embodied as a smart speaker (R13 in Figure 3). Compared to chatbot programs running on plain computers, the smart speaker increases users’ tendency to anthropomorphize [73,74]. This may be described in the taxonomy as a case of social embodiment, where the human expresses its anthropomorphization of the chatbot through social signals, which in turn are perceived and used by the chatbot to shape the continued dialogue.
Altogether, the presented taxonomy proved to be a useful tool to analyze, clarify, and compare complex cases of embodiment. It also aided us in defining the novel concepts embodiment by proxy and off-line embodiment. Overall, we hope and believe that the taxonomy and the accompanying discussion contribute to a better understanding of the notion of embodiment, and of the different ways in which the term has been used, and will be used, not least in relation to recent and future developments in AI and the metaverse.

5.2. The Current and Future State and Status of Embodiment

In Section 2, we gave a brief overview of the history of embodiment. As a complement, we here provide an analysis of the current and future state and status of embodiment in science and technology, with the presented taxonomy as a backdrop.
The success of largely non-embodied and virtually embodied approaches, such as the large language models and metaverse applications mentioned in Section 4.3, indicates a distinct shift of attention in research and development, away from physically embodied solutions. Clearly, a virtual world is easier to deal with, since it circumvents many of the hard problems recognized already in the early years of AI (for example, the noisy, dynamic physical world, inaccurate sensors, and slow computers). Furthermore, virtual solutions “scale” commercially: the extra effort to launch 100 more chatbots or artificial avatars is very low, once the first one is working as it should. This shift is strongly supported by the large Internet companies and their research laboratories. Both Google and OpenAI have either sold off their robotics companies [75] (Google purchased eight robotics companies in less than six months in 2013 [76]) or disbanded their robotics teams to focus on either simulated robots or pure machine learning [77]. At the same time, large automotive companies such as Ford and Volkswagen shut down their programs for the development of self-driving cars [78]. While few details on the reasons for these changes are public, a reasonable guess is that the industry does not see a near future where it can make money on physically embodied robots or cars. This shift away from physical embodiment seems to have influenced funding agencies such as the European Union’s framework programme for research, where recent agendas and call texts involving robotics almost always refer to the combined area “AI, data and robotics” [79], whereas robotics was earlier a separate area.
While physical embodiment seems to be on the decline, a strong development of the metaverse (as well as of involved technologies such as VR) would strengthen the role of virtual embodiment. Furthermore, avatars with abilities to sense and affect the physical world would also increase physical embodiment. This may be achieved directly, by equipping the computers with sensors and actuators, or through embodiment by proxy mechanisms, as described in Section 4.4. Hence, even if the future, as always, is uncertain, embodiment is expected to remain an important concept in both science and technology, not least in relation to AI and the metaverse.

Author Contributions

Conceptualization, T.H., N.K. and S.B.; writing—original draft preparation, T.H., N.K. and S.B.; writing—review and editing, T.H., N.K. and S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by the Swedish Research Council through grant 2022-04674, and by TAIGA—Centre for Transdisciplinary AI at Umeå University.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the writing of the manuscript or in the decision to publish the results.

References

  1. Ziemke, T. The body of knowledge: On the role of the living body in grounding embodied cognition. Biosystems 2016, 148, 4–11. [Google Scholar] [CrossRef]
  2. The Guardian. Enter the Metaverse: The Digital Future Mark Zuckerberg Is Steering Us Toward. 2021. Available online: https://www.theguardian.com/technology/2021/oct/28/enter-the-metaverse-the-digital-future-mark-zuckerberg-is-steering-us-toward (accessed on 8 November 2024).
  3. Descartes, R. Meditations on First Philosophy: With Selections From the Objections and Replies; Cambridge University Press: Cambridge, UK, 1986. [Google Scholar]
  4. Merleau-Ponty, M. Phenomenology of Perception; Routledge: London, UK, 2012. [Google Scholar]
  5. Lakoff, G.; Johnson, M. Metaphors We Live by; University of Chicago Press: Chicago, IL, USA, 1980. [Google Scholar]
  6. Johansson, R.S.; Cole, K.J. Sensory-motor coordination during grasping and manipulative actions. Curr. Opin. Neurobiol. 1992, 2, 815–823. [Google Scholar] [CrossRef]
  7. Mathot, S.; Grainger, J.; Strijkers, K. Pupillary Responses to Words That Convey a Sense of Brightness or Darkness. Psychol. Sci. 2017, 28, 1116–1124. [Google Scholar] [CrossRef]
  8. Kiverstein, J.; Miller, M. The embodied brain: Towards a radical embodied cognitive neuroscience. Front. Hum. Neurosci. 2015, 9, 237. [Google Scholar] [CrossRef]
  9. Wolpert, D.M.; Ghahramani, Z. Computational principles of movement neuroscience. Nat. Neurosci. 2000, 3, 1212–1217. [Google Scholar] [CrossRef]
  10. Wilson, M. Six views of embodied cognition. Psychon. Bull. Rev. 2002, 9, 625–636. [Google Scholar] [CrossRef]
  11. Ziemke, T. Embodiment in Cognitive Science and Robotics. In Cognitive Robotics; MIT Press: Cambridge, MA, USA, 2022; Chapter 11; pp. 213–229. [Google Scholar]
  12. Ziemke, T. On the Role of Robot Simulations in Embodied Cognitive Science. AISB J. 2003, 1, 389–399. [Google Scholar]
  13. Maturana, H.R.; Varela, F.J. Autopoiesis and Cognition: The Realization of the Living; D. Reidel Pub. Co.: Dordrecht, The Netherlands, 1980. [Google Scholar]
  14. Maturana, H.R.; Varela, F.J. The Tree of Knowledge: Biological Roots of Human Understanding; Shambhala: Boulder, CO, USA, 1987. [Google Scholar]
  15. Quick, T.; Dautenhahn, K. Making embodiment measurable. In Proceedings of the 4. Fachtagung der Gesellschaft für Kognitionswissenschaft, Bielefeld, Germany, 28 September–1 October 1999. [Google Scholar]
  16. Duffy, B.R.; Joue, G. Intelligent Robots: The Question of Embodiment. 2000. Available online: https://api.semanticscholar.org/CorpusID:15520603 (accessed on 24 September 2024).
  17. Slater, M. Immersion and the illusion of presence in virtual reality. Br. J. Psychol. 2018, 109, 431–433. [Google Scholar] [CrossRef]
  18. Suk, H.; Laine, T.H. Influence of Avatar Facial Appearance on Users’ Perceived Embodiment and Presence in Immersive Virtual Reality. Electronics 2023, 12, 583. [Google Scholar] [CrossRef]
  19. Blanke, O.; Metzinger, T. Full-body illusions and minimal phenomenal selfhood. Trends Cogn. Sci. 2009, 13, 7–13. [Google Scholar] [CrossRef]
  20. Guy, M.; Normand, J.M.; Jeunet-Kelway, C.; Moreau, G. The sense of embodiment in Virtual Reality and its assessment methods. Front. Virtual Real. 2023, 4, 1141683. [Google Scholar] [CrossRef]
  21. Franklin, S.; Graesser, A. Is It an agent, or just a program?: A taxonomy for autonomous agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 1997; Volume 1193, pp. 21–35. [Google Scholar] [CrossRef]
  22. Brooks, R.A. Artificial Life and Real Robots. In Proceedings of the European Conference on Artificial Life; MIT Press: Cambridge, MA, USA, 1991; pp. 3–10. [Google Scholar]
  23. Aymerich-Franch, L. Mediated Embodiment in New Communication Technologies. In Encyclopedia of Information Science and Technology, 4th ed.; IGI Global: Hershey, PA, USA, 2019; pp. 563–574. [Google Scholar] [CrossRef]
  24. Aymerich-Franch, L. Towards a Common Framework for Mediated Embodiment. Digit. Psychol. 2020, 1, 3–12. [Google Scholar] [CrossRef]
  25. Björnfot, P. Being Connected to the World Through a Robot. Ph.D. Thesis, Umeå University, Umeå, Sweden, 2022. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1657323&dswid=3087 (accessed on 24 September 2024).
  26. Robb, D.A.; Lopes, J.; Ahmad, M.I.; McKenna, P.E.; Liu, X.; Lohan, K.; Hastie, H. Seeing eye to eye: Trustworthy embodiment for task-based conversational agents. Front. Robot. AI 2023, 10, 1234767. [Google Scholar] [CrossRef]
  27. St-Onge, D.; Reeves, N.; Kroos, C.; Hanafi, M.; Herath, D.; Stelarc. The floating head experiment. In Proceedings of the 6th International Conference on Human-Robot Interaction, HRI ’11, Lausanne, Switzerland, 6–9 March 2011; pp. 395–396. [Google Scholar] [CrossRef]
  28. Tsfasman, M.; Saravanan, A.; Viner, D.; Goslinga, D.; de Wolf, S.; Raman, C.; Jonker, C.M.; Oertel, C. Towards a Real-time Measure of the Perception of Anthropomorphism in Human-robot Interaction. In Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI, Virtual Event, 20 October 2021. [Google Scholar] [CrossRef]
  29. Chalmers, D.J. Reality+: Virtual Worlds and the Problems of Philosophy; W. W. Norton: New York, NY, USA, 2022. [Google Scholar]
  30. Bostrom, N. Are you living in a computer simulation? Philos. Q. 2003, 53, 243–255. [Google Scholar] [CrossRef]
  31. Cipresso, P.; Giglioli, I.A.C.; Raya, M.A.; Riva, G. The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature. Front. Psychol. 2018, 9, 2086. [Google Scholar] [CrossRef]
  32. Rosenberg, L.B. The Use of Virtual Fixtures as Perceptual Overlays to Enhance Operator Performance in Remote Environments. SPIE 1993, 2057, 10–21. [Google Scholar]
  33. Moran, D. The Phenomenology of Embodiment: Intertwining and Reflexivity. In The Phenomenology of Embodied Subjectivity; Springer International Publishing: Berlin/Heidelberg, Germany, 2013; pp. 285–303. [Google Scholar] [CrossRef]
  34. Savva, M.; Kadian, A.; Maksymets, O.; Zhao, Y.; Wijmans, E.; Jain, B.; Straub, J.; Liu, J.; Koltun, V.; Malik, J.; et al. Habitat: A Platform for Embodied AI Research. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021. [Google Scholar]
  35. Batra, D.; Chang, A.X.; Chernova, S.; Davison, A.J.; Deng, J.; Koltun, V.; Levine, S.; Malik, J.; Mordatch, I.; Mottaghi, R.; et al. Rearrangement: A Challenge for Embodied AI. arXiv 2020, arXiv:2011.01975. [Google Scholar]
  36. Smith, L.B.; Gasser, M. The Development of Embodied Cognition: Six Lessons from Babies. Artif. Life 2005, 11, 13–29. [Google Scholar] [CrossRef]
  37. Tirado, C.; Khatin-Zadeh, O.; Gastelum, M.; Leigh-Jones, N.; Marmolejo-Ramos, F. The strength of weak embodiment. Int. J. Psychol. Res. 2018, 11, 77–85. [Google Scholar] [CrossRef]
  38. Wainer, J.; Feil-seifer, D.J.; Shell, D.A.; Mataric, M.J. The role of physical embodiment in human-robot interaction. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 117–122. [Google Scholar] [CrossRef]
  39. Deng, E.; Mutlu, B.; Mataric, M.J. Embodiment in Socially Interactive Robots; Now Publishers Inc.: Delft, The Netherlands, 2019. [Google Scholar] [CrossRef]
  40. Minsky, M. Telepresence. Omni Magazine. 1980, pp. 45–51. Available online: https://philpapers.org/rec/MINT (accessed on 24 September 2024).
  41. IJsselsteijn, W. Towards a Neuropsychological Basis of Presence. Annu. Rev. Cyberther. Telemed. Decade VR 2005, 3, 25–30. [Google Scholar]
  42. Sheridan, T.B. Musings on Telepresence and Virtual Presence. Presence Teleoperators Virtual Environ. 1992, 1, 120–126. [Google Scholar] [CrossRef]
  43. Tidy, J. Zuckerberg’s Metaverse: Lessons from Second Life. 2021. Available online: https://www.bbc.com/news/technology-59180273 (accessed on 5 November 2021).
  44. Furlan, M.; Spagnolli, A. Using an Embodiment Technique in Psychological Experiments with Virtual Reality: A Scoping Review of the Embodiment Configurations and their Scientific Purpose. The Open Psychol. J. 2021, 14, 204–212. [Google Scholar] [CrossRef]
  45. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  46. Stolt, A. On Robotic Assembly Using Contact Force Control and Estimation. Ph.D. Thesis, Lund University, Lund, Sweden, 2015. [Google Scholar]
  47. Nilsson, N.J. A Mobile Automaton: An Application of Artificial Intelligence Techniques. In Proceedings of the 1st International Joint Conference on Artificial Intelligence, IJCAI, Washington, DC, USA, 7–9 May 1969; Walker, D.E., Norton, L.M., Eds.; William Kaufmann: Los Altos, CA, USA, 1969; pp. 509–520. [Google Scholar]
  48. Brooks, R.A. Intelligence without reason. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), Sydney, Australia, 24–30 August 1991; pp. 569–595. [Google Scholar]
  49. Anderson, M.L. Embodied Cognition: A field guide. Artif. Intell. 2003, 149, 91–130. [Google Scholar] [CrossRef]
  50. Pfeifer, R.; Bongard, J.C. How the Body Shapes the Way We Think: A New View of Intelligence (Bradford Books); The MIT Press: Cambridge, MA, USA, 2006. [Google Scholar]
  51. Froese, T.; Ziemke, T. Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artif. Intell. 2009, 173, 466–500. [Google Scholar] [CrossRef]
  52. Peng, B.; Sun, Q.; Li, S.E.; Kum, D.; Yin, Y.; Wei, J.; Gu, T. End-to-End Autonomous Driving Through Dueling Double Deep Q-Network. Automot. Innov. 2021, 4, 328–337. [Google Scholar] [CrossRef]
  53. Cheng, H.T.; Shan, H.; Zhuang, W. Infotainment and road safety service support in vehicular networking: From a communication perspective. Mech. Syst. Signal Process. 2011, 25, 2020–2038. [Google Scholar] [CrossRef]
  54. Hutchins, E. Cognition in the Wild; The MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
  55. Buckman, A.; Mayfield, M.; Beck, S.B. What is a Smart Building? Smart Sustain. Built Environ. 2014, 3, 92–109. [Google Scholar] [CrossRef]
  56. Weizenbaum, J. ELIZA—A Computer Program for the Study of Natural Language Communication between Man and Machine. Commun. ACM 1966, 9, 36–45. [Google Scholar] [CrossRef]
  57. McGregor, S. Is ChatGPT Really Disembodied? In Proceedings of the ALIFE2023; MIT Press: Cambridge, MA, USA, 2023. [Google Scholar]
  58. OpenAI. ChatGPT: Language Model. Available online: https://chat.openai.com (accessed on 8 November 2024).
  59. Emmert-Streib, F. Is ChatGPT the way toward artificial general intelligence? Discov. Artif. Intell. 2024, 4, 32. [Google Scholar] [CrossRef]
  60. Hellström, T.; Bensch, S. Apocalypse now: No need for artificial general intelligence. AI Soc. 2022, 39, 811–813. [Google Scholar] [CrossRef]
  61. Gavrilova, M.L.; Yampolskiy, R. Applying Biometric Principles to Avatar Recognition. In Transactions on Computational Science XII: Special Issue on Cyberworlds; Springer: Berlin/Heidelberg, Germany, 2011; pp. 140–158. [Google Scholar]
  62. Wiederhold, B.K. Treading Carefully in the Metaverse: The Evolution of AI Avatars. Cyberpsychology Behav. Soc. Netw. 2023, 26, 321–322. [Google Scholar] [CrossRef] [PubMed]
  63. Duan, J.; Yu, S.; Tan, H.L.; Zhu, H.; Tan, C. A Survey of Embodied AI: From Simulators to Research Tasks. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 230–244. [Google Scholar] [CrossRef]
  64. Saddik, A.E. Digital twins: The convergence of multimedia technologies. IEEE Multimed. 2018, 25, 87–92. [Google Scholar] [CrossRef]
  65. Zhou, J.; Zhang, S.; Gu, M. Revisiting digital twins: Origins, fundamentals, and practices. Front. Eng. Manag. 2022, 9, 668–676. [Google Scholar] [CrossRef]
  66. Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence; Knopf Publishing Group: New York, NY, USA, 2017. [Google Scholar]
  67. Licklider, J.C.R. Man-Computer Symbiosis. IRE Trans. Hum. Factors Electron. 1960, HFE-1, 4–11. [Google Scholar] [CrossRef]
  68. Wesche, J.S.; Sonderegger, A. When computers take the lead: The automation of leadership. Comput. Hum. Behav. 2019, 101, 197–209. [Google Scholar] [CrossRef]
  69. Barsalou, L.W.; Niedenthal, P.M.; Barbey, A.K.; Ruppert, J.A. Social embodiment. In The Psychology of Learning and Motivation: Advances in Research and Theory; Ross, B.H., Ed.; Elsevier Science: Amsterdam, The Netherlands, 2003; Volume 43, pp. 43–92. [Google Scholar]
  70. Barsalou, L.W. Grounded Cognition. Annu. Rev. Psychol. 2008, 59, 617–645. [Google Scholar] [CrossRef]
  71. Lindblom, J.; Ziemke, T. Interacting Socially through Embodied Action. Emerg. Commun. Stud. New Technol. Pract. Commun. 2008, 10, 49–63. [Google Scholar]
  72. Seaborn, K.; Pennefather, P.; Miyake, N.P.; Otake-Matsuura, M. Crossing the Tepper Line: An Emerging Ontology for Describing the Dynamic Sociality of Embodied AI: Crossing the Tepper Line. In Extended Abstracts, Proceedings of the CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event/Yokohama Japan, 8–13 May 2021; Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Eds.; ACM: New York, NY, USA, 2021; pp. 281:1–281:6. [Google Scholar] [CrossRef]
  73. Schneiders, E.; Papachristos, E.; van Berkel, N. The Effect of Embodied Anthropomorphism of Personal Assistants on User Perceptions. In Proceedings of the 33rd Australian Conference on HCI, OzCHI ’21, Melbourne, VIC, Australia, 30 November–2 December 2021; Association for Computing Machinery (ACM): New York, NY, USA, 2022; pp. 231–241. [Google Scholar]
  74. Pradhan, A.; Findlater, L.; Lazar, A. “Phantom Friend” or “Just a Box with Information”: Personification and Ontological Categorization of Smart Speaker-Based Voice Assistants by Older Adults. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–21. [Google Scholar] [CrossRef]
  75. Statt, N. Alphabet Agrees to Sell Boston Dynamics to SoftBank. 2017. Available online: https://www.theverge.com/2017/6/8/15766434/alphabet-google-boston-dynamics-softbank-sale-acquisition-robotics (accessed on 9 June 2017).
  76. Markoff, J. Google Adds to Its Menagerie of Robots. 2013. Available online: https://www.nytimes.com/2013/12/14/technology/google-adds-to-its-menagerie-of-robots.html (accessed on 14 December 2013).
  77. Wiggers, K. OpenAI Disbands Its Robotics Research Team. 2021. Available online: https://venturebeat.com/business/openai-disbands-its-robotics-research-team/ (accessed on 16 July 2021).
  78. McFarland, M. Ford, VW Pull Plug on Robotaxis in Blow to Self-Driving Car Industry. Available online: https://edition.cnn.com/2022/10/26/business/ford-argo-ai-vw-shut-down/index.html (accessed on 26 October 2022).
  79. Zillner, S.; Bisset, D.; Milano, M.; Curry, E.; García Robles, A.; Hahn, T.; Irgens, M.; Lafrenz, R.; Liepert, B.; O’Sullivan, B.; et al. (Eds.) Strategic Research, Innovation and Deployment Agenda: AI, Data and Robotics Partnership, 3rd ed.; Adra Association: London, UK, 2020. [Google Scholar]
Figure 1. Examples of various types of human embodiment categorized by our taxonomy.
Figure 2. Embodiment of different types of robots and other artifacts according to our taxonomy.
Figure 3. Embodiment of different types of computer programs according to our taxonomy [34,35].