Article

Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence

by Steven Umbrello 1,* and Stefan Lorenz Sorgner 2

1 Institute for Ethics and Emerging Technologies, Università degli Studi di Torino (Consorzio FINO), 10123 Turin, Italy
2 Institute for Ethics and Emerging Technologies, John Cabot University, 00156 Rome, Italy
* Author to whom correspondence should be addressed.
Philosophies 2019, 4(2), 24; https://doi.org/10.3390/philosophies4020024
Submission received: 11 April 2019 / Revised: 24 April 2019 / Accepted: 26 April 2019 / Published: 17 May 2019

Abstract

Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in her analysis of the human cognition–consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Similarly, this paper offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least one type of suffering in this form of cognition.

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
—Ada Lovelace
“Nature’s black box cannot necessarily be described by a simple model.”
—Peter Norvig
“We categorize as we do because we have the brains and bodies we have and because we interact in the world as we do.”
—George Lakoff

1. Introduction

The tradition of early modern and contemporary western philosophy has generally ignored, if not intentionally sought to disentangle, the human mind from its corporeal body, arguing that the two are necessarily separate substances (i.e., mind/body dualism) [1]. This praxis has similarly spilled over into artificial intelligence (AI) research, which has been marked primarily by innovations in disembodied forms of intelligence (AIs without sensors). However, advances in both the cognitive sciences and AI research have shown that human cognitive faculties in their incarnate state are enmeshed and constitute one another [2,3]. For this reason, embodiment as a research avenue in AI innovation pathways has been of particular interest, albeit not widely adopted [4,5].1 Here, embodied AIs stand for AIs that are equipped with sensors, whereby the question could even arise whether they could in principle receive a moral status if they have the appropriate capacities. In fact, research in the cognitive sciences shows that the emergence of the human ability to visualize and recognize objects and to abstract within three-dimensional environments is the product of the embodied mind; such intelligence abilities, rudimentary as they are, remain lacking in AI research [10,11,12,13].
One of the primary arguments for embodiment, aside from those garnered by observing humans and nonhuman animals, is the frame problem in AI [14,15,16], whereby tacit assumptions made by human designers are not accounted for by the program because they are not made explicit. Embodiment offers constraints on the system so that it does not drown in an infinite sea of interpretation. Convincing arguments have thus been made that conventional, disembodied AI research will eventually hit a ceiling, a limit that can be surpassed, if not entirely revolutionized, through embodied forms of AI, given their ability to dissolve the frame problem (discussed later) through adaptive learning in physical environments [4,6,17,18,19]. Although this line of research has received limited attention, there are strong reasons to think that it could be the next large research avenue for AI. This paper raises some concerns regarding the ethical issues that may arise in the near future with sufficiently advanced embodied AI.
The aim of this paper is to explore the novel cognitive features that arise from embodied cognition, whether in humans or in nonhuman animals, and how those differ from the current forms of cognition in traditional AI research. More specifically, the work of the literary theorist N. Katherine Hayles and her thesis of the cognitive nonconscious will be developed as a way to understand embodied AI agents in a novel way [20,21,22]. Similarly, these nonconscious cognizers will be shown to potentially instantiate a form of cognitive suffering that differs from traditional accounts of suffering as either a conscious or a purely physical state. If shown to be true, or at least plausible, this thesis evokes ethical considerations that must be taken into account both during the early phases and throughout the design process of embodied AI.
To the best of our knowledge, this is the first paper to consider the potential suffering risks (s-risks)2 that may emerge from embodied AI3, with particular emphasis on the idea that an embodied AI need not be conscious in order to suffer, but need only possess cognitive systems at a sufficiently advanced state. Previous research has focused on (1) the merits of and imperatives for embodied AI research [4,24,25,26]; (2) the claim that cognition does not necessarily entail consciousness and that the two can be uncoupled, particularly in nonconscious cognition [2,3,20,21,27]; and (3) the ethics of suffering cognitive AI in general [23,28]. This study is unique in that it shows the potential for (3) to emerge from the synthesis of (1) and (2). The conceptual implications of this thesis, if shown to be at least plausible, potentially extend beyond AI (see Section 3.1). That issue, however, is bracketed, since it falls beyond the narrow scope of this paper, which is to evaluate embodied forms of AI as nonconscious cognizers.
This paper explores these questions through the following itinerary. Section 2 will outline the concept of embodied AI and why it is argued to be the means of solving the frame problem as well as exceeding the limits of conventional AI research. Section 3 will thoroughly lay out the notion of the cognitive nonconscious and its relations to embodiment, intelligence and cognition in general. This section will be the most extensive, seeing as its implications for embodied AI are conceptually profound. Section 4 will synthesize Section 2 and Section 3 to show how embodied AI can be envisioned as a nonconscious cognizer and consequently may suffer. The final section will lay out the limitations of this study and propose potentially fruitful research avenues as a way forward.

2. Performativity, Proprioception and the Limits of Disembodiment

Artificial Life (AL) systems are modeled after biological systems. Similar to AI, AL research has goals that fall within both technological and general scientific domains. AL is elemental to AI given that the source of our epistemic understanding of intelligence comes entirely from biological entities, and, along these lines, there are theorists who argue adamantly that it is only via biological substrates that cognition can emerge [29,30].
Similarly, early robots, which were modeled using classical forms of AI that relied on natural language processing (NLP), internal representations and symbols, were heavily constrained by the frame problem [31]. In dynamic environmental contexts, any unmodeled circumstance exposed the inherent inability of these systems to adapt [32]. Naturally, advancements have been made that have extended the range of robotic abilities to adapt and reflexively respond to environmental changes. This has led to a shift towards situated robotics, which focuses on behaviors suited to the current situation in which robots find themselves rather than on modeling some end goal or idealized external environment [33,34]. Still, the general practice is to design these AI-facilitated programs as software first and evolve them in simulations prior to their being built and tested in robotic form [30,35].
Embodied AI need not take the anthropomorphic form often depicted in science fiction, although that goal remains and there has been significant progress towards it, particularly in the realm of care robotics [36,37,38]. The conception of embodied robots as isolated entities has given way to another line of research exemplified by the concept of distributed cognition. This concept, arising from developments in anthropology, sociology and psychology, holds that knowledge is distributed across the entities of a group rather than being in the possession of any single member. When speaking of robotics, the concept of ‘swarm intelligence’ often arises as a way to describe the distributed intelligence of simple robots that are co-dependent and co-vary depending on the finite and discrete bits of information that each one possesses [39,40]. To this end, a platooning, or swarm, effect of what are otherwise simple constituent parts can exhibit highly complex behavior not unlike that of multicellular beings [41,42].
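By way of illustration only, and not drawn from the swarm-intelligence literature cited above, the following minimal sketch shows how agents that follow purely local rules, each possessing only a discrete bit of information about its nearby neighbours, can converge into a cohesive platoon. Every parameter and update rule here is a hypothetical choice made for the example.

```python
import random

# Toy swarm: each agent sees only its neighbours (its discrete bit of information)
# and applies two local rules, alignment and cohesion. No agent models the group,
# yet a platoon-like collective behaviour emerges.

NUM_AGENTS = 20
NEIGHBOUR_RADIUS = 5.0
STEP = 0.1

def neighbours(i, positions):
    """Indices of agents within NEIGHBOUR_RADIUS of agent i (local information only)."""
    return [j for j in range(NUM_AGENTS)
            if j != i and abs(positions[j] - positions[i]) < NEIGHBOUR_RADIUS]

def update(positions, velocities):
    """One time step: every agent nudges its velocity towards its neighbours' average."""
    new_velocities = []
    for i in range(NUM_AGENTS):
        nbrs = neighbours(i, positions)
        if nbrs:
            avg_vel = sum(velocities[j] for j in nbrs) / len(nbrs)  # alignment rule
            avg_pos = sum(positions[j] for j in nbrs) / len(nbrs)   # cohesion rule
            new_velocities.append(0.8 * velocities[i] + 0.1 * avg_vel
                                  + 0.1 * (avg_pos - positions[i]))
        else:
            new_velocities.append(velocities[i])
    new_positions = [p + STEP * v for p, v in zip(positions, new_velocities)]
    return new_positions, new_velocities

positions = [random.uniform(0.0, 50.0) for _ in range(NUM_AGENTS)]
velocities = [random.uniform(-1.0, 1.0) for _ in range(NUM_AGENTS)]
for _ in range(500):
    positions, velocities = update(positions, velocities)
print("spread of the swarm after 500 steps:", round(max(positions) - min(positions), 2))
```

The point is only that complex collective behavior can arise without any global controller; the swarm-robotics algorithms surveyed in [39,40,41,42] are, of course, far richer than this.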
The relevance of distributed cognition in robotics mirrors that of its originating source in human psychology. The concept is similar: distributed cognition refers to the fact that the epistemic access an individual has is not limited solely to that individual. For example, large-group behavior cannot be explained solely by analyzing any single individual. The knowledge that is distributed need not be a property only of animate agents, but of inanimate objects as well. The anthropologist Edwin Hutchins showed that cognition in airplane cockpits is distributed across a variety of technical and inanimate devices that form an assemblage for airplane functionality and navigation which is not in the full possession of either the pilot or the co-pilot [43,44,45].
What does this entail? It reveals the limits of the position that the meanings given to AI programs are exhausted by whatever human programmers put into them. On this view, the AI program itself does not really matter, since it is held to be semantically vacuous (i.e., John Searle’s Chinese room thought experiment) [46]. This entails the arbitrariness of the syntax of any given system. Although the position holds water in some cases, the arbitrariness does not map onto all cases. One salient example is that of Learning Intelligent Distribution Agents (LIDA), cognitive architectures that aim to model distributed biological cognition in a grounded and embodied sense [47]. LIDA is able to model this broad-spectrum cognition via a tripartite recursive feedback process that structures environmental input, sensors and actuators [18,19]. This structure provides adaptive orientation and command system control through cyclical feedback between cognitive states (cognitive contents, memory systems and actuators) [5]. This orientation system can hardly be labeled as arbitrary, seeing that its ontology is predicated on its growth and development as an orientation sensor, which is fundamental to the system’s ability to reach its aims [6]. The consequence of this development is that for an AI system to possess true understanding it must be the result of an evolutionary system that is physically and environmentally situated and is able to adapt to changing contexts, thus breaking free of its syntactic frame [31,48].
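To make the shape of the tripartite feedback cycle mentioned above concrete, the toy sketch below (our own illustration, not the LIDA codebase) runs a recursive sense-understand-act loop in which the outcome of each action feeds back into the memory consulted on the next cycle. All class and function names are hypothetical.

```python
# Schematic sense-understand-act loop in the spirit of the tripartite feedback
# cycle described above; an illustrative toy, not an implementation of LIDA.

class ToyCognitiveAgent:
    def __init__(self):
        self.memory = {}  # records which action last worked in which context

    def sense(self, environment):
        """Structure raw environmental input into a coarse context label."""
        return "obstacle" if environment["distance_to_obstacle"] < 1.0 else "clear"

    def understand(self, context):
        """Consult the memory built up over past cycles to select an action."""
        return self.memory.get(context, "explore")

    def act(self, environment, action):
        """Drive the (simulated) actuators and return a feedback signal."""
        if action == "turn" and environment["distance_to_obstacle"] < 1.0:
            return 1.0   # turning near an obstacle succeeds
        if action == "forward" and environment["distance_to_obstacle"] >= 1.0:
            return 1.0   # moving forward in open space succeeds
        return -1.0      # anything else fails

    def cycle(self, environment):
        """One recursive feedback cycle: sense, understand, act, then update memory."""
        context = self.sense(environment)
        action = self.understand(context)
        feedback = self.act(environment, action)
        # Orientation is grown, not hand-coded: failed actions are revised next cycle.
        if feedback < 0:
            self.memory[context] = "turn" if context == "obstacle" else "forward"
        else:
            self.memory[context] = action
        return action, feedback

agent = ToyCognitiveAgent()
for distance in [0.5, 0.5, 3.0, 0.4, 2.0]:
    print(agent.cycle({"distance_to_obstacle": distance}))
```

Nothing in this toy carries the weight of LIDA’s perceptual memory or global workspace; it only illustrates how an orientation towards the environment can be developed through cyclical feedback rather than fixed in advance.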
What this section has tried to outline are some of the reasons guiding the shift towards embodied cognition in AI systems. The goal behind this rationale is an attempt to reformulate what constitutes genuine cognition and how it arises in biological entities. Current disembodied AI research risks stagnating because of the frame problem inherent in representational modeling and the lack of the evolutionary development that emerges from being situated in an environment and having continued recursive feedback through various cognitive and physical systems. The following section will outline in greater depth the concept of cognition itself as something uniquely embodied and the different forms of cognition, with particular emphasis on nonconscious cognition. The particular aim of this paper is to show that different levels of cognition can exist (in both artificial and biological forms) that are a sufficient condition for certain understandings of suffering, and that such suffering should enter into ethical discourse.

3. Assembling Nonconscious Cognizers

Predicated on the seemingly counterintuitive notion that consciousness and higher cognitive functions are not necessarily intertwined, the concept of nonconscious cognition steers starkly away from this heirloom of the Enlightenment tradition. It is, of course, not surprising that the concept of consciousness has retained its position at center stage in human thought, given its critical role in making sense of individual and group narratives and its ability to construct concordance with the world [49,50,51,52]. However, recent advances in the cognitive sciences have shown that cognition and consciousness are not necessarily inseparable, but rather that cognition is a distributed capacity that plays a key role in neurological processes that are not always apparent to consciousness, and that such a way of explaining what cognition is can similarly be extended to other forms of life and to sufficiently advanced artificial systems [21,48,53]. This general capacity is termed nonconscious cognition by N. Katherine Hayles, and this section engages with its nuances in greater depth [20].
The cognitive sciences traditionally differentiate between levels of consciousness: core consciousness and higher consciousness [54]. Although the use of the term ‘level’ has been disputed in favor of less discrete references such as degrees, what the terms refer to has remained relatively stable [55]. The first, core consciousness, is described as the minimal level of experience, from which the emergence of the Self is possible [56]. This core consciousness is the performative coaction of the organism and the object, and as such is the product of an embodied subject at play in the world; this pre-reflexive subject precedes experience [54]. Such core consciousness is not exclusive to humans, but is also possessed by other animals [57,58,59,60,61,62].4 Higher consciousness5, however, involves an entity’s ability to go beyond the limits of core consciousness and of the present: the use of abstract reasoning, the possession of reflexive self-awareness (a past, present and future self), the use of verbal language and other metacognitive activities [54,76,77,78]. Aside from humans, in whom the capacity for this level of consciousness is distinct, evidence has shown that some primates display certain metacognitive activities associated with higher consciousness [59,79,80].
Distinguishing these two levels of consciousness, given that they interplay with one another as well as with the continually studied unconscious, is an extremely difficult task. However, the nonconscious cognition described here does not get muddled in levels of awareness the way the levels of consciousness do. Hayles aptly describes nonconscious cognition as that which “operates at a level of neuronal processing inaccessible to the modes of awareness but nevertheless performing functions essential to consciousness” [81] (p. 784). Developments in neuroscience have shown these neuronal processes functioning as a sort of information filter for the slower consciousness, allowing pertinent information to pass through while restricting the numerous sensory and somatic inputs [82]. This prevents sensory overload by abstracting inferences from vast inputs to create space–time coherence for conscious uptake.
Cognition, however, should not be interpreted as an attribute, but rather as a ‘process’ that emerges as a function of its animation in an environment [20]. This environmental co-variability implies that something like a computer program only becomes cognitive once it is enacted by an architecture capable of realizing its goals. This means that there necessarily needs to be hermeneutic variability to permit operational choice to take place, not in the sense of choice as free will, but of multi-modal conditionals such as those employed in the NemoLOG and DyLOG programming languages [83,84]. Therefore, operational decision-making in terms of interpreting choice is something that can be built into genetic coding, preprogrammed to follow certain conditional pathways given ever more complex environmental exposure [85,86]. Hence, meaning making is necessarily context sensitive: a system connects actions to meaning depending on the situational success of goal attainment [20]. This of course differs from the meaning making and interpretability of choice of humans and nonhuman animals, but the basis of interpretability and the context–meaning connection reveal a continuum of possible cognitive processes, many of which are inaccessible to humans.
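Purely as an illustration of this context–meaning connection, and without reproducing the NemoLOG or DyLOG formalisms, the sketch below treats ‘meaning’ as nothing more than a learned association between a context and the action that has most reliably attained the goal in that context. Every name, action and parameter is a hypothetical choice made for the example.

```python
import random

# Context-sensitive meaning making as situational success: the same repertoire of
# actions acquires different operational significance in different contexts.

class ContextSensitiveAgent:
    def __init__(self, actions):
        self.actions = actions
        self.value = {}  # (context, action) -> running estimate of success

    def choose(self, context):
        """Pick the action with the best success record in this context, else explore."""
        scored = [(self.value.get((context, a), 0.0), a) for a in self.actions]
        best_score, best_action = max(scored)
        return best_action if best_score > 0 else random.choice(self.actions)

    def update(self, context, action, succeeded):
        """Strengthen or weaken the context-action association after each attempt."""
        key = (context, action)
        old = self.value.get(key, 0.0)
        self.value[key] = old + 0.2 * ((1.0 if succeeded else -1.0) - old)

agent = ContextSensitiveAgent(actions=["grasp", "push", "wait"])
# Hypothetical environment: grasping works for light objects, pushing for heavy ones.
for _ in range(200):
    context = random.choice(["light_object", "heavy_object"])
    action = agent.choose(context)
    succeeded = ((context == "light_object" and action == "grasp")
                 or (context == "heavy_object" and action == "push"))
    agent.update(context, action, succeeded)
print(agent.choose("light_object"), agent.choose("heavy_object"))
```

Trivial as it is, the sketch shows the sense of interpretation at stake here: operational choice constrained by conditional pathways and revised by the situational success of goal attainment, rather than choice as free will.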
To reiterate a clarifying point made by Hayles, the emphasis on discerning nonconscious cognition should not be treated as mutually exclusive of consciousness, nor should the discussion of this cognition be interpreted as a means to downplay the importance and impacts of consciousness; it is instead meant to more authentically describe what she calls the “human cognitive ecology” and how such a description can be used as a basis for enrolling other cognizers, biological or technical, that possess such capacities [20,81]. What this approach permits is a decentering of the human from a privileged ontological position towards a more ecological ontology in which the enmeshments between different beings can be more genuinely investigated [81,87]. This decoupling is particularly salient in the results of various selective attention tests, such as those carried out by Simons and Chabris [88], which showed the separation of cognition from consciousness in test subjects.6

3.1. Technical Nonconscious Cognizers

Technical cognition is very much analogous to human nonconscious cognition, and in many respects it is designed to supplement human consciousness in the way that nonconscious cognition does. Technical cognizers perform a prosthetic function for human consciousness by processing large amounts of complex data that would be overwhelming for consciousness [20]. From these large quantities of data, inferences can be drawn and state-specific information can be extracted by subroutines. Simply put, human consciousness would not be able to handle the quantity and complexity of the data that technical cognizers are able to process.
The similarities between human nonconscious cognition and technical cognition reveal an always already present trend towards the further distribution of cognition among various artifacts, each of which forms part of an asymmetric yet symbiotic assemblage [81]. Similarly, because nonconscious cognition functions as a state divorced from the a posteriori projections of consciousness, it in fact provides a closer mapping of how the real world actually is [78]. This, however, does not entail that the neural processes of nonconscious cognition can be computed; neurons and synaptic action potentials do not function in the same way that binary logic does [89]. Rather, as was discussed in Section 2, empirical evidence has shown that proprioception as a function of embodied cognition forms the basis of the development of the abstractions and metaphors that are critical to human verbal architecture, and similarly of the abstractions necessary for dissolving the frame problem in technical cognition [3,13,90].
However, technical limitations in cybernetics and robotics have led researchers to employ simulations as vicarious environments in which AI systems can inhabit a ‘body’ (virtually, of course) and are subject to physical limitations and dynamic changes. These environmental simulations are typically based on the advanced physics engines used in video games, given that those were designed to mimic the real world [5]. Similarly, these simulations need not provide labeling for every object in the environment, which permits AI systems to learn orientation and interaction in a relatively safe space rather than, for example, trying to learn what a pedestrian is in the real world [35,91,92]. Naturally, the goal is to train these Artificial Life7 systems in a simulation with the intent to embody them in the real world, thus containing within the simulation the unintended consequences that could otherwise arise during learning. These AL simulations, however, miss the forest for the trees, aiming to simulate biological life for the purposes of embodied robotics. They do so by placing particular emphasis on ‘life’ and its relation to technical systems. This is because technical systems, cognitive or not, cannot be alive, that is, autopoietic (i.e., capable of reproducing and maintaining themselves).8 However, as Hayles herself argues, life should not be the common factor when discussing common ground between biological and technical entities; rather, cognition is the broad-spectrum commonality [20] (p. 22).
This of course does not exclude the possibility of a technical system employing wetware as its architecture in order to solve the problem of achieving computational speed while keeping power consumption in check, something the brain does at an unparalleled level. This may require neuroprotein as an essential base element to achieve such a computational state; however, that is not the issue in question here [92,97,98]. Cognition, as a process-function of making meaning within contexts, is what is under consideration in this paper. As stated, the thesis here is to take the concept of embodied cognition, and its potential future for AI development, together with the concept of nonconscious cognition, to show that there is a common ontological level among biological as well as technical entities on which suffering may be predicated. On this level, the traditional notions of suffering predicated on physical or conscious pain are not necessary conditions, though they may be products of it. The following section synthesizes what has been discussed so far to envision exactly how technical, embodied AI systems can experience nonconscious cognitive suffering.

4. Suffering Technical Cognizers

The entanglement of suffering as a fundamental part of the existential condition of humans has a long tradition in both eastern and western thought. Similarly, the concept of suffering has been indivisible from conceptions of pain, and rightly so. However, the development of synthetic intelligence, or in this more specific case synthetic cognizers, raises new ethical issues. The most salient, and perhaps the most complex, issue is the evaluation of the moral status of entities. Posthumanist studies have been particularly instrumental in calling into question the traditional method by which western societies have attributed moral status to (almost exclusively) human beings [99,100,101,102,103]. Traced back to Plato (see Protagoras 321)9, the concept of a supranatural element to human existence has shadowed western thought; Stefan Sorgner recounts that it is at this point (in Plato’s writings) that:
“God’s divine immaterial spark, our reason, entered into us and connected with us, this process is responsible for the fact that only we humans possess something that goes beyond the purely natural world, which is why only humans possess the subject status.”
[104] (p. 3)
This conception carves a hard duality between human subjects (the possessors of dignity and ethical consideration) and the rest of the world, whose entities are treated as mere objects. This is similarly reflected in how legal assessments of humans and nonhuman animals are constructed and evaluated. It also raises the practical concern of adhering to legal codes that are founded on ontological principles that, at the very least, are not held by a large minority within western democracies (i.e., atheists, agnostics, etc.). Genealogically, it is for this reason that the ethical status of many nonhuman animals, even those that are genetically similar to us, such as chimpanzees and other great apes, has not been fully considered in a comprehensive way, despite incremental legislative progression.10
The question of affording moral status to nonhuman beings is perhaps most eloquently, and impactfully, evaluated by Peter Singer. In his view, the supposedly privileged moral standing of human beings is spurious11; he argues instead that the blanket attribution of moral status to humans should be replaced with the capacity to perceive pain as the relevant consideration for moral attribution [106]. Situating pain as the locus for evaluating the moral status of entities, Singer’s condition rank-orders the intensity of pain as the scale by which moral considerations can be weighed. The basis for this view is constructed on the coalescence of sentience and self-consciousness. Consciousness in isolation (core consciousness) limits the subject to a presentist perspective that precludes any projection of itself as a subject that will persist as a continuous entity into the future, or that has existed in the past [59]. A self-conscious subject (core plus higher consciousness) [61] is able to view itself in such a temporality; this higher consciousness, coupled with sentience (the ability to perceive pain), marks the highest-order subject for moral consideration, rather than an entity merely in possession of core consciousness.12 Interestingly enough, the mirror test, currently one of the most important ways to determine whether an entity possesses self-consciousness, is failed by up to 35% of two-year-old children [64,108,109].
This, of course, does not by itself entail that the test can be brought into doubt; dogs, for example, are incapable of passing the mirror test. Yet such failures can perhaps be explained by the fact that the test is anthropically biased towards the sense of sight used to interact with the mirror, whereas other higher mammals such as canines engage with the world through a proprioceptive ability predicated on smell, much as bats use echolocation for navigation [104,110]. However, this is not the central point here; rather, what should be called into question are the implications of Singer’s ethical evaluations. The binary, strongly disjunctive choice between saving a chimpanzee or a newborn with severe cognitive impairment—of which Singer’s ethics would choose the former—goes against many of the strong intuitions that people hold, many of which may be nothing other than the result of encrusted Christian cultural structures shaping our emotional responses [111,112,113,114]. The conclusiveness of this ethics, despite its counterintuitive invocations, brings into question the nature and source of these intuitions themselves, and how they may change over time as we become less anthropocentric [115].
Why bring up Singer’s ethics here? It seems that the correlation of ethical consideration with consciousness does not provide a solid foundation for evaluating the potential moral status of AI, such as technical cognizers. It is hard, of course, to situate technical cognizers within Singer’s ethics, given that sentience is often understood as a function of physical pain requiring biological wetware from which these sensations can arise. To that end, technical cognizers seem a priori excluded from ethical consideration. This becomes ever more salient if we consider the speculative hypothesis of mind uploading. If we suppose that the personality and self-consciousness of an individual can be uploaded to a hard state, such a posthuman entity would still fall outside the bounds of moral consideration in modern personhood-based ethics like Singer’s, since such a synthetic entity would be deprived of the necessary sentience that is, at least for now, only present in wetware-based biology. However, our current co-constitution with the many forms of AI and robotics distributed across human social ecology, such as care robots [36,37,116] and combat robots [117,118,119], shows that these relationships cannot be governed solely by a sterile and often highly reductive property-based and transactional ethics.
Similarly, the mirror test is further brought into question given that some AI have already, to some extent, passed the test13 [120,121]; does this entail that they are in possession of self-consciousness? Naturally, this brings into question the nature of consciousness itself, how we design evaluations to measure it and, consequently, how we construct ethical theories founded on such assumptions. As already stated, the notion of suffering as predicated on pain seems to be something that is only feasible in wetware; however, this may not be the relevant criterion on which the moral status of entities is determined. The emphasis on dignity may nonetheless be preserved without resorting to physical pain as its predicate. One way to accomplish this is by construing dignity instead as not being humiliated, as best argued by Avishai Margalit [104,122]. Humiliation can be construed as a situation in which one entity places itself above another with contempt directed towards the lower.
The humiliated entity need not suffer physical pain in order to suffer. Humiliation, unlike something such as being physically tortured, is formally associated with the cognitive realization of a privation of appreciation, of not being afforded the self-value that one attributes to oneself. This cognitive realization process can be described as painful, but not necessarily in a physical way. Instead, it is a pain that is linked to cognition rather than to consciousness. This can only be appreciated through the decoupling of cognition and consciousness described in the preceding section. This strong separation between cognition and consciousness, and thus the separation of pain from consciousness (at least for certain types of pain), is best illustrated in a less speculative way by looking at the signs of the existence of fetal pain [123].14 If fetuses can, without a fully developed brain or nervous system, nonetheless experience pain (~20 weeks of gestation), then the distinction between pain, cognition and consciousness becomes even more discrete [104,126]. The prerequisites for such pain are that the brain and nervous system be sufficiently developed that such pain can at least plausibly be manifested and experienced by the fetus (thus warranting the use of anesthesia). The interesting point here is that the consideration of the potential suffering of the fetus is based on observations of the mother, certain manifest signs of which lead medical practitioners to infer that the fetus may be in distress.15 Not only this, but there are fetal reactions at this stage (20+ weeks of gestation) which could also indicate the experience of pain. Still, fetuses without a brain show the same reactions, which indicates that these are reflexes and do not indicate the experience of pain [127].
Hence, if both the cognitive nonconscious and the location of pain in cognition (distinct from consciousness) exist, as proposed above with the example of humiliation, then together they form a distinct way of understanding suffering in the cognitive nonconscious, something that, as Hayles argues, is a particularly apt way of understanding how the cognition of technical cognizers functions.16

5. Limitations and Further Research Streams

This paper sought to show how the cognitive nonconscious of technical cognizers can be a locus for at least certain types of pain. Humiliation, or forms thereof, seems a good place to begin, focusing primarily on how different conceptions of what we perceive as painful, uncoupled from the physical, can be instantiated in hardware-based technical cognizers. This, of course, remains wholly speculative, but it does give pause for consideration, particularly for those research groups whose aim is to reduce the long-term suffering of entities.17 What is ultimately needed is a novel mode of evaluation and a shift in perspective towards the distribution of cognitive processes and how cognitive assemblages form, both for and between different cognizers.
Further research projects should look at how such s-risks from nonconscious cognitive suffering can, and perhaps necessarily will, be instantiated in wetware developments in AI research. Given the current limits of hardware-based approaches to AI design, synthetic biology may serve as a guide for achieving stable power consumption in relation to optimal computing power, a balance best exemplified by the human brain. Arguments have been made that the human brain provides the ideal model for more advanced forms of AI by using fuzzy logics to reduce the necessary power input [6]. However, the use of biological materials may also make system design more prone to these s-risks, consciousness notwithstanding: given that the current modes of suffering are most obvious in biological systems, the neuroprotein-based nervous systems possessed by humans and many other species make the manifestation of suffering all the more salient, and this can potentially be exacerbated by hybrid hardware–wetware interfaces. What is needed is a design approach that can weigh how different design flows either support or constrain design requirements that lead towards less suffering [115]. Along a similar line, the sustainability of these novel pathways is itself an ongoing concern; further analyses that examine these ethical issues of AI under the lens of sustainability are needed [128].
Similarly, what this paper has not done is provide a comprehensive way of reconceptualizing suffering as such outside of consciousness, or a more dynamic and pragmatic approach to recognizing it, both philosophically and technically. What it has done is provide hints that the separation of consciousness from the cognitive nonconscious provides a ground for suffering that is not related to either physical pain or consciousness at all. If such a thesis proves to be even remotely true, serious social and ethical issues will arise as technical innovations continue to advance. Formalizing and arriving at a greater understanding of what the cognitive nonconscious is remains important, but understanding how such cognizers can suffer is equally so.
Finally, the suffering of technical cognizers need not be conditional on human cognizers as the cause of that suffering; machine–machine interactions that are black-boxed to users and programmers may, on this account of suffering, permit the emergence of new grounds on which these relations and their consequences can take place. Issues of transparency, understandability and technical verifiability serve as good grounds for examining how machines interact with each other, form hierarchies of power and affect each other on a nonconscious cognitive level [129,130,131,132]. The distribution of systems, their interplay and their interdependence make this all the more pressing.

6. Conclusions

This paper aimed to explore how the notion of the cognitive nonconscious in technical artifacts can be understood in such a way as to locate a form of non-physical pain. The paper began by exploring the nature of embodiment, its relation to the frame problem in AI and the case that enactivism appears to be the natural way forward for AI research to go beyond the current technological limits of disembodied AI. Similarly, the concept of embodiment raises issues of the ecology of cognition, its distribution and the separation of consciousness from the cognitive nonconscious. The implications of this separation allow us, at least in a speculative capacity, to conceptualize certain forms of suffering as located at a nonconscious cognitive level. If so, this has implications for how we design technical cognizers and the approaches we take moving forward. This paper serves only as a spark for a possible future of AI development. What is needed is a more thorough investigation of the cognitive nonconscious in technical systems, of nonconscious suffering and of how the two are related. This has only scratched the surface of a much larger plane.

Author Contributions

Writing—original draft, S.U.; Writing—review & editing, S.U. and S.L.S.

Funding

This research received no external funding.

Acknowledgments

Any remaining errors are the authors’ alone. The views expressed in this paper do not necessarily reflect those of the Institute for Ethics and Emerging Technologies.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alanen, L. Descartes’s Concept of Mind; Harvard University Press: Boston, MA, USA, 2009. [Google Scholar]
  2. Gibbs, R.W., Jr.; Hampe, B. The Embodied and Discourse Views of Metaphor: Why These Are Not so Different and How They Can Be Brought Closer Together. Metaphor Embodied Cogn. C 2017, 319–365. [Google Scholar]
  3. Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 2017. [Google Scholar]
  4. Steels, L.; Brooks, R. The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents; Routledge: Abingdon, UK, 2018. [Google Scholar]
  5. Wallach, W.; Franklin, S.; Allen, C. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Top. Cogn. Sci. 2010, 2, 454–485. [Google Scholar] [CrossRef] [Green Version]
  6. Boden, M.A. Artificial Intelligence: A Very Short Introduction; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
  7. Veruggio, G.; Operto, F. Roboethics: A Bottom-up Interdisciplinary Discourse in the Field of Applied Ethics in Robotics. Int. Rev. Inf. Ethics 2006, 6, 2–8. [Google Scholar]
  8. Veruggio, G.; Operto, F.; Bekey, G. Roboethics: Social and Ethical Implications. In Springer Handbook of Robotics; Springer: Berlin, Germany, 2016; pp. 2135–2160. [Google Scholar]
  9. Moon, A.; Calisgan, C.; Operto, F.; Veruggio, G.; Van der Loos, H.F.M. Open Roboethics: Establishing an Online Community for Accelerated Policy and Design Change. In Proceedings of the We Robot, Miami, FL, USA, 21–22 April 2012. [Google Scholar]
  10. Johnson, M.; Lakoff, G. Why Cognitive Linguistics Requires Embodied Realism. Cogn. Linguist. 2002, 13, 245–264. [Google Scholar] [CrossRef]
  11. Lakoff, G. Language and Emotion. Emot. Rev. 2016, 8, 269–273. [Google Scholar] [CrossRef]
  12. Lakoff, G. Explaining Embodied Cognition Results. Top. Cogn. Sci. 2012, 4, 773–785. [Google Scholar] [CrossRef] [Green Version]
  13. Lakoff, G.; Núñez, R.E. Where Mathematics Comes from: How the Embodied Mind Brings Mathematics into Being. AMC 2000, 10, 720–733. [Google Scholar]
  14. Lormand, E. Framing the Frame Problem. Synthese 1990, 82, 353–374. [Google Scholar] [CrossRef]
  15. Dennett, D.C. Cognitive Wheels: The Frame Problem of AI. In Routledge Contemporary Readings in Philosophy. Philosophy of Psychology: Contemporary Readings; Bermúdez, J.L., Ed.; Routledge/Taylor & Francis Group: New York, NY, USA, 2006; pp. 433–454. [Google Scholar]
  16. Ford, K.M.; Glymour, C.N.; Hayes, P.J. Thinking about Android Epistemology; AAAI Press (American Association for Artificial Intelligence): Menlo Park, CA, USA, 2006. [Google Scholar]
  17. Brooks, R.A. Artificial Life and Real Robots. In Proceedings of the First European Conference on Artificial Life, Paris, France, 10–15 December 1992; pp. 3–10. [Google Scholar]
  18. Ramamurthy, U.; Baars, B.J.; D’Mello, S.K.; Franklin, S. LIDA: A Working Model of Cognition. 2006. Available online: http://cogprints.org/5852/1/ICCM06-UR.pdf (accessed on 29 April 2019).
  19. Faghihi, U.; Franklin, S. The LIDA Model as a Foundational Architecture for AGI. In Theoretical Foundations of Artificial General Intelligence; Springer: Berlin, Germany, 2012; pp. 103–121. [Google Scholar]
  20. Hayles, N.K. Unthought: The Power of the Cognitive Nonconscious; University of Chicago Press: Chicago, IL, USA, 2017. [Google Scholar]
  21. Hayles, N.K. Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness. New Lit. Hist. 2014, 45, 199–220. [Google Scholar] [CrossRef]
  22. Hayles, N.K. Distributed Cognition at/in Work. Frame 2008, 21, 15–29. [Google Scholar]
  23. Althaus, D.; Gloor, L. Reducing Risks of Astronomical Suffering: A Neglected Priority; Foundational Research Institute: Berlin, Germany, 2016. [Google Scholar]
  24. Wykowska, A.; Chaminade, T.; Cheng, G. Embodied Artificial Agents for Understanding Human Social Cognition. Philos. Trans. R. Soc. B Biol. Sci. 2016, 371, 20150375. [Google Scholar] [CrossRef]
  25. Müller, V.C.; Bostrom, N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence; Springer: Berlin, Germany, 2016; pp. 555–572. [Google Scholar]
  26. Kiela, D.; Bulat, L.; Vero, A.L.; Clark, S. Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research. arXiv 2016, arXiv:1610.07432. [Google Scholar]
  27. Gibbs, R.W., Jr. Embodiment and Cognitive Science; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  28. Sotala, K.; Gloor, L. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Informatica 2017, 41. [Google Scholar]
  29. Millikan, R.G. Language, Thought and Other Biological Categories: New Foundations for Realism; The MIT Press: London, UK, 1984. [Google Scholar]
  30. Ray, T.; Sarker, R.; Li, X. Artificial Life and Computational Intelligence; Springer: Berlin, Germany, 2016. [Google Scholar]
  31. Förster, F. Enactivism and Robotic Language Acquisition: A Report from the Frontier. Philosophies 2019, 4, 11. [Google Scholar] [CrossRef]
  32. Nilsson, N.J. Shakey the Robot; SRI International: Menlo Park, CA, USA, 1984. [Google Scholar]
  33. Wheeler, M. Cognition in Context: Phenomenology, Situated Robotics and the Frame Problem. Int. J. Philos. Stud. 2008, 16, 323–349. [Google Scholar] [CrossRef] [Green Version]
  34. Matarić, M.J. Situated Robotics. Encyclopedia of Cognitive Science; Wiley Online Library: Hoboken, NJ, USA, 2006. [Google Scholar] [CrossRef]
  35. Caillou, P.; Gaudou, B.; Grignard, A.; Truong, C.Q.; Taillandier, P. A Simple-to-Use BDI Architecture for Agent-Based Modeling and Simulation. In Advances in Social Simulation 2015; Springer: Berlin, Germany, 2017; pp. 15–28. [Google Scholar]
  36. Tao, Z.; Biwen, Z.; Lee, L.; Kaber, D. Service Robot Anthropomorphism and Interface Design for Emotion in Human-Robot Interaction. In Proceedings of the 4th IEEE Conference on Automation Science and Engineering, CASE 2008, Arlington, VA, USA, 23–26 August 2008; pp. 674–679. [Google Scholar] [CrossRef]
  37. Sharkey, A.; Sharkey, N. Granny and the Robots: Ethical Issues in Robot Care for the Elderly. Ethics Inf. Technol. 2012, 14, 27–40. [Google Scholar] [CrossRef]
  38. van Wynsberghe, A. Service Robots, Care Ethics, and Design. Ethics Inf. Technol. 2016, 18, 311–321. [Google Scholar] [CrossRef]
  39. Kennedy, J. Swarm Intelligence. In Handbook of Nature-Inspired and Innovative Computing; Springer: Berlin, Germany, 2006; pp. 187–219. [Google Scholar]
  40. Blum, C.; Merkle, D. Swarm Intelligence in Optimization. In Swarm Intelligence; Blum, C., Merkle, D., Eds.; Springer: Berlin, Heidelberg, 2008; pp. 43–85. [Google Scholar] [CrossRef] [Green Version]
  41. Karaboga, D.; Akay, B. A Survey: Algorithms Simulating Bee Swarm Intelligence. Artif. Intell. Rev. 2009, 31, 61–85. [Google Scholar] [CrossRef]
  42. Bonabeau, E.; Dorigo, M.; Theraulaz, G. From Natural to Artificial Swarm Intelligence; Oxford University Press, Inc.: New York, NY, USA, 1999. [Google Scholar]
  43. Hutchins, E.; Klausen, T. Distributed Cognition in an Airline Cockpit. In Cognition and Communication at Work; Engeström, Y., Middleton, D., Eds.; Cambridge University Press: Cambridge, UK, 1996; pp. 15–34. [Google Scholar] [CrossRef]
  44. Hollan, J.; Hutchins, E.; Kirsh, D. Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research. ACM Trans. Comput. Interact. 2000, 7, 174–196. [Google Scholar] [CrossRef]
  45. Hutchins, E. The Social Organization of Distributed Cognition. In Perspectives on Socially Shared Cognition; Resnick, L.B., Levine, J.M., Teasley, S.D., Eds.; American Psychological Association: Washington, DC, USA, 1991. [Google Scholar]
  46. Searle, J.R. Is the Brain’s Mind a Computer Program? Sci. Am. 1990, 262, 25–31. [Google Scholar] [CrossRef]
  47. Wallach, W.; Allen, C.; Franklin, S. Consciousness and Ethics: Artificially Conscious Moral Agents. Int. J. Mach. Conscious. 2011, 03, 177–192. [Google Scholar] [CrossRef]
  48. Magnani, L. Eco-Cognitive Computationalism: From Mimetic Minds to Morphology-Based Enhancement of Mimetic Bodies. Entropy 2018, 20, 430. [Google Scholar] [CrossRef]
  49. Peterson, J.B. Maps of Meaning: The Architecture of Belief; Routledge: New York, NY, USA, 1999. [Google Scholar]
  50. Sarbin, T.R. Embodiment and the Narrative Structure of Emotional Life. Narrat. Inq. 2001, 11, 217–225. [Google Scholar] [CrossRef]
  51. Nelson, K. Narrative and the Emergence of a Consciousness of Self. Narrat. Conscious. 2003, 17–36. [Google Scholar]
  52. Herman, D. Emergence of Mind: Representations of Consciousness in Narrative Discourse in English; University of Nebraska Press: Lincoln, NE, USA, 2011. [Google Scholar]
  53. Miłkowski, M. Situatedness and Embodiment of Computational Systems. Entropy 2017, 19, 162. [Google Scholar] [CrossRef]
  54. Damasio, A.R. The Feeling of What Happens: Body and Emotion in the Making of Consciousness; Houghton Mifflin Harcourt: Boston, MA, USA, 1999. [Google Scholar]
  55. Bayne, T.; Hohwy, J.; Owen, A.M. Are There Levels of Consciousness? Trends Cogn. Sci. 2016, 20, 405–413. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Damasio, A.; Dolan, R.J. The Feeling of What Happens. Nature 1999, 401, 847. [Google Scholar]
  57. De Waal, F.B.M.; Ferrari, P.F. Towards a Bottom-up Perspective on Animal and Human Cognition. Trends Cogn. Sci. 2010, 14, 201–207. [Google Scholar] [CrossRef]
  58. Dawkins, M.S. Why Animals Matter: Animal Consciousness, Animal Welfare, and Human Well-Being; Oxford University Press: Oxford, UK, 2012. [Google Scholar]
  59. Boly, M.; Seth, A.K.; Wilke, M.; Ingmundson, P.; Baars, B.; Laureys, S.; Edelman, D.; Tsuchiya, N. Consciousness in Humans and Non-Human Animals: Recent Advances and Future Directions. Front. Psychol. 2013, 4, 625. [Google Scholar] [CrossRef]
  60. Godfrey-Smith, P. Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness; Farrar, Straus and Giroux: New York, NY, USA, 2016. [Google Scholar]
  61. Montgomery, S. The Soul of an Octopus: A Surprising Exploration into the Wonder of Consciousness; Simon and Schuster: New York, NY, USA, 2015. [Google Scholar]
  62. Edelman, D.B.; Baars, B.J.; Seth, A.K. Identifying Hallmarks of Consciousness in Non-Mammalian Species. Conscious. Cogn. 2005, 14, 169–187. [Google Scholar] [CrossRef]
  63. Izard, C.E. The Emergence of Emotions and the Development of Consciousness in Infancy. In The Psychobiology of Consciousness; Springer: Berlin, Germany, 1980; pp. 193–216. [Google Scholar]
  64. Gallup, G.G., Jr.; Anderson, J.R.; Shillito, D.J. The Mirror Test. Cogn. Anim. Empir. Theor. Perspect. Anim. Cogn. 2002, 325–333. [Google Scholar]
  65. Marten, K.; Psarakos, S. Evidence of Self-Awareness in the Bottlenose Dolphin (Tursiops Truncatus). In Self-Awareness in Animals and Humans: Developmental Perspectives; Parker, S.T., Mitchell, R., Boccia, M., Eds.; Cambridge University Press: Cambridge, UK, 1994; pp. 361–379. [Google Scholar]
  66. Delfour, F.; Marten, K. Mirror Image Processing in Three Marine Mammal Species: Killer Whales (Orcinus Orca), False Killer Whales (Pseudorca crassidens) and California Sea Lions (Zalophus californianus). Behav. Process. 2001, 53, 181–190. [Google Scholar] [CrossRef]
  67. Walraven, V.; Van Elsacker, L.; Verheyen, R. Reactions of a Group of Pygmy Chimpanzees (Pan paniscus) to Their Mirror-Images: Evidence of Self-Recognition. Primates 1995, 36, 145–150. [Google Scholar] [CrossRef]
  68. Suárez, S.D.; Gallup, G.G., Jr. Self-Recognition in Chimpanzees and Orangutans, but Not Gorillas. J. Hum. Evol. 1981, 10, 175–188. [Google Scholar]
  69. Robert, S. Ontogeny of Mirror Behavior in Two Species of Great Apes. Am. J. Primatol. 1986, 10, 109–117. [Google Scholar] [CrossRef]
  70. Gallup, G.G. Chimpanzees: Self-Recognition. Science 1970, 167, 86–87. [Google Scholar] [CrossRef]
  71. De Veer, M.W.; Gallup Jr, G.G.; Theall, L.A.; van den Bos, R.; Povinelli, D.J. An 8-Year Longitudinal Study of Mirror Self-Recognition in Chimpanzees (Pan Troglodytes). Neuropsychologia 2003, 41, 229–234. [Google Scholar] [CrossRef]
  72. Plotnik, J.M.; De Waal, F.B.M.; Reiss, D. Self-Recognition in an Asian Elephant. Proc. Natl. Acad. Sci. USA 2006, 103, 17053–17057. [Google Scholar] [CrossRef] [PubMed]
  73. Prior, H.; Schwarz, A.; Güntürkün, O. Mirror-Induced Behavior in the Magpie (Pica Pica): Evidence of Self-Recognition. PLoS Biol. 2008, 6, e202. [Google Scholar] [CrossRef]
  74. Uchino, E.; Watanabe, S. Self-recognition in Pigeons Revisited. J. Exp. Anal. Behav. 2014, 102, 327–334. [Google Scholar] [CrossRef]
  75. Kohda, M.; Hotta, T.; Takeyama, T.; Awata, S.; Tanaka, H.; Asai, J.; Jordan, A.L. If a Fish Can Pass the Mark Test, What Are the Implications for Consciousness and Self-Awareness Testing in Animals? PLoS Biol. 2019, 17, e3000021. [Google Scholar] [CrossRef]
  76. Edelman, G.; Tononi, G. A Universe of Consciousness How Matter Becomes Imagination: How Matter Becomes Imagination; Basic Books: New York, NY, USA, 2008. [Google Scholar]
  77. Edelman, G.M. Bright Air, Brilliant Fire: On the Matter of the Mind.; Basic Books: New York, NY, USA, 1992. [Google Scholar]
  78. Edelman, G.M. Wider than the Sky: A Revolutionary View of Consciousness; Penguin Press Science: London, UK, 2005. [Google Scholar]
  79. Seth, A.K.; Baars, B.J.; Edelman, D.B. Criteria for Consciousness in Humans and Other Mammals. Conscious. Cogn. 2005, 14, 119–139. [Google Scholar] [CrossRef]
  80. Edelman, D.B.; Seth, A.K. Animal Consciousness: A Synthetic Approach. Trends Neurosci. 2009, 32, 476–484. [Google Scholar] [CrossRef] [PubMed]
  81. Hayles, N.K. The Cognitive Nonconscious: Enlarging the Mind of the Humanities. Crit. Inq. 2016, 42, 783–808. [Google Scholar] [CrossRef]
  82. Slotnick, S.D.; Schacter, D.L. Conscious and Nonconscious Memory Effects Are Temporally Dissociable. Cogn. Neurosci. 2010, 1, 8–15. [Google Scholar] [CrossRef]
  83. Winkelhagen, L.; Dastani, M.; Broersen, J. Beliefs in Agent Implementation. In Proceedings of the International Workshop on Declarative Agent Languages and Technologies, Utrecht, The Netherlands, 25 July 2005; Springer: Berlin, Germany, 2005; pp. 1–16. [Google Scholar]
  84. Dong, W.; Luo, L.; Huang, C. Dynamic Logging with Dylog in Networked Embedded Systems. ACM Trans. Embed. Comput. Syst. 2016, 15, 5. [Google Scholar]
  85. Auletta, G. Cognitive Biology: Dealing with Information from Bacteria to Minds; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
  86. Auletta, G. Teleonomy: The Feedback Circuit Involving Information and Thermodynamic Processes. J. Mod. Phys. 2011, 2, 136. [Google Scholar] [CrossRef]
  87. Morton, T. Being Ecological; MIT Press: Boston, MA, USA, 2018. [Google Scholar]
  88. Simons, D.J.; Chabris, C.F. Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events. Perception 1999, 28, 1059–1074. [Google Scholar] [CrossRef] [Green Version]
  89. Freeman, W.J.; Núñez, R.E. Editors’ Introduction. In Reclaiming Cognition: The Primacy of Action, Intention, and Emotion; Freeman, W.J., Núñez, R.E., Eds.; Imprint Academic: Bowling Green, OH, USA, 1999; p. xvi. [Google Scholar]
  90. Johnson, M. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason; University of Chicago Press: Chicago, IL, USA, 2013. [Google Scholar]
  91. Stano, P.; Kuruma, Y.; Damiano, L. Synthetic Biology and (Embodied) Artificial Intelligence: Opportunities and Challenges. Adapt. Behav. 2018, 26, 41–44. [Google Scholar] [CrossRef]
  92. Damiano, L.; Stano, P. Understanding Embodied Cognition by Building Models of Minimal Life. In Proceedings of the Italian Workshop on Artificial Life and Evolutionary Computation, Venice, Italy, 19–21 September 2017; Springer: Berlin, Germany, 2017; pp. 73–87. [Google Scholar]
  93. Langton, C.G. Artificial Life: An Overview; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  94. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  95. Umbrello, S. Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy. Int. J. Technoethics 2019, 10. [Google Scholar] [CrossRef]
  96. Umbrello, S.; Baum, S.D. Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing. Futures 2018, 100, 63–73. [Google Scholar] [CrossRef]
  97. Poole, D.L.; Mackworth, A.K. Artificial Intelligence: Foundations of Computational Agents; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  98. Copeland, J. Artificial Intelligence: A Philosophical Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  99. Umbrello, S.; Lombard, J. Silence of the Idols: Appropriating the Myths of Daedalus and Sisyphus for Posthumanist Discourses. Postmod. Open. 2018, 9, 98–121. [Google Scholar] [CrossRef]
  100. Hayles, K.N. Afterword: The Human in the Posthuman. Cult. Crit. 2003, 53, 134–137. [Google Scholar] [CrossRef]
  101. Marchesini, R. Tecnosfera: Proiezioni per Un Futuro Posthumano; Castelvechi: Rome, Italy, 2017. [Google Scholar]
  102. Sorgner, S.L. Pedegrees. In Post- and Transhumanism: An Introduction; Ranisch, R., Sorgner, S.L., Eds.; Peter Lang: Frankfurt am Main, Germany, 2014; pp. 29–48. [Google Scholar] [CrossRef]
  103. Caffo, L. Fragile Umanità; Giulio Einaudi editore: Torino, Italy, 2017. [Google Scholar]
  104. Sorgner, S.L. Dignity of Apes, Humans and AI. 2019. Available online: http://trivent-publishing.eu/ (accessed on 4 May 2019).
  105. Keim, B. An Orangutan Has (Some) Human Rights, Argentine Court Rules. Wired. 2014. Available online: https://www.wired.com/2014/12/orangutan-personhood/ (accessed on 29 April 2019).
  106. Singer, P. Practical Ethics; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  107. Sorgner, S.L. Schöner Neuer Mensch; Nicolai Verlag: Berlin, Germany, 2018. [Google Scholar]
  108. Amsterdam, B. Mirror Self-image Reactions before Age Two. Dev. Psychobiol. J. Int. Soc. Dev. Psychobiol. 1972, 5, 297–305. [Google Scholar] [CrossRef]
  109. Bard, K.A.; Todd, B.K.; Bernier, C.; Love, J.; Leavens, D.A. Self-awareness in Human and Chimpanzee Infants: What Is Measured and What Is Meant by the Mark and Mirror Test? Infancy 2006, 9, 191–219. [Google Scholar] [CrossRef]
  110. Nagel, T. What Is It like to Be a Bat? Philos. Rev. 1974, 83, 435–450. [Google Scholar] [CrossRef]
  111. Singer, P. On Comparing the Value of Human and Nonhuman Life BT—Applied Ethics in a Troubled World; Morscher, E., Neumaier, O., Simons, P., Eds.; Springer: Dordrecht, The Netherlands, 1998; pp. 93–104. [Google Scholar] [CrossRef]
  112. Lakoff, G. Mapping the Brain’s Metaphor Circuitry: Metaphorical Thought in Everyday Reason. Front. Hum. Neurosci. 2014, 8, 958. [Google Scholar] [CrossRef]
  113. Lakoff, G.; Johnson, M. Metaphors We Live by; University of Chicago Press: Chicago, IL, USA, 2003. [Google Scholar]
  114. Sorgner, S.L. Human Dignity 2.0: Beyond a Rigid Version of Anthropocentrism. Trans-Humanit. J. 2013, 6, 135–159. [Google Scholar] [CrossRef]
  115. Umbrello, S. Safe-(for Whom?)-By-Design: Adopting a Posthumanist Ethics for Technology Design; York University: Toronto, ON, USA, 2018. [Google Scholar] [CrossRef]
  116. Vallor, S. Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century. Philos. Technol. 2011, 24, 251. [Google Scholar] [CrossRef]
  117. Kolb, M. Soldier and Robot Interaction in Combat Environments; The University of Oklahoma: Norman, OK, USA, 2012. [Google Scholar]
  118. Scheutz, M. The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots. In Robot Ethics: The Ethical and Social Implications of Robotics; Lin, P., Abney, K., Bekey, G.A., Eds.; MIT Press: Cambridge, MA, USA, 2011; p. 205. [Google Scholar]
  119. Darling, K. Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior towards Robotic Objects. In Robot Law; Calo, R., Froomkin, A.M., Kerr, I., Eds.; Edward Elgar Publishing: Cheltenham, UK, 2016. [Google Scholar]
  120. Hart, J.W.; Scassellati, B. Mirror Perspective-Taking with a Humanoid Robot. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012. [Google Scholar]
  121. Floridi, L. Consciousness, Agents and the Knowledge Game. Minds Mach. 2005, 15, 415–444. [Google Scholar] [CrossRef] [Green Version]
  122. Margalit, A. Politik der Würde: Über Achtung und Verachtung; Suhrkamp: Berlin, Germany, 2012. [Google Scholar]
  123. Lee, S.J.; Ralston, H.J.P.; Drey, E.A.; Partridge, J.C.; Rosen, M.A. Fetal Pain: A Systematic Multidisciplinary Review of the Evidence. JAMA 2005, 294, 947–954. [Google Scholar] [CrossRef] [PubMed]
  124. Garite, T.J.; Simpson, K.R. Intrauterine Resuscitation during Labor. Clin. Obstet. Gynecol. 2011, 54, 28–39. [Google Scholar] [CrossRef] [PubMed]
  125. Fetal Distress. Available online: https://americanpregnancy.org/labor-and-birth/fetal-distress/ (accessed on 6 April 2019).
  126. Bellieni, C.V.; Buonocore, G. Fetal Pain Debate May Weaken the Fight for Newborns’ Analgesia. J. Pain 2019, 20, 366–367. [Google Scholar] [CrossRef]
  127. Derbyshire, S.W.G. Can Fetuses Feel Pain? BMJ 2006, 332, 909–912. [Google Scholar] [CrossRef]
  128. Khakurel, J.; Penzenstadler, B.; Porras, J.; Knutas, A.; Zhang, W. The Rise of Artificial Intelligence under the Lens of Sustainability. Technologies 2018. [Google Scholar] [CrossRef]
  129. Watson, D.S.; Krutzinna, J.; Bruce, I.N.; Griffiths, C.E.; McInnes, I.B.; Barnes, M.R.; Floridi, L. Clinical Applications of Machine Learning Algorithms: Beyond the Black Box. BMJ 2019, 364, 1–4. [Google Scholar] [CrossRef]
  130. Carabantes, M. Black-Box Artificial Intelligence: An Epistemological and Critical Analysis. Eng. Rep. 2019. [Google Scholar] [CrossRef]
  131. Sternberg, G.S.; Reznik, Y.; Zeira, A.; Loeb, S.; Kaewell, J.D. Cognitive and Affective Human Machine Interface. Google Patents 5 July 2018. Available online: https://patentscope.wipo.int/search/en/detail.jsf;jsessionid=4E37A46F5807F625B3D12E91A33E2659.wapp2nB;jsessionid=9DB31F5F7B0EEA4D1BB54F6FB168BAA4.wapp2nB?docId=US222845836&recNum=5806&office=&queryString=&prevFilter=&sortOption=Fecha+de+publicaci%C3%B3n%2C+orden+descendente&maxRec=69890666 (accessed on 4 May 2019).
  132. Damaševicius, R.; Wei, W. Design of Computational Intelligence-Based Language Interface for Human-Machine Secure Interaction. J. Univ. Comput. Sci. 2018, 24, 537–553. [Google Scholar]
1. AI embodiment research has primarily taken the form of robotics; although interrelated with and dependent on traditional AI research, robotics remains a separate research stream [6,7,8,9].
2. The Foundational Research Institute defines suffering risks as "risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far" [23].
3. AI that benefits from being able to interact in the physical world through robotic technologies such as advanced sensors, actuators, and motor control units.
4. This core consciousness requires a particular level of complexity of the brain and a specific connection to the nervous system and senses. Humans have been shown to possess it from 4–5 months of age [63].
5. One way to test for this level of consciousness is the Mirror Test, a measure of self-awareness developed by Gordon Gallup Jr. in 1970. The test gauges self-awareness by determining whether an entity can recognize itself when encountering its reflection in a mirror [64]. Nine nonhuman animal species have also passed the mirror test: the bottlenose dolphin [65], killer whale [66], bonobo [67], Bornean orang-utan [68,69], chimpanzee [70,71] (after one year of age) [69], Asian elephant [72], Eurasian magpie [73], pigeon [74], and cleaner wrasse [75].
6. This selective attention test asks subjects to count the number of passes made by a basketball team and then to report whether anything unusual occurred in the video. The 'special' feature of the video is that, while the players are passing the ball, a gorilla comes on screen, yet it goes unnoticed by many observers even though it remains clearly within their cognitive field. The result shows that cognition and consciousness are distinct (albeit co-varying) phenomena.
7. AL refers to the use of biochemistry, robotics, and simulation to study the evolution and processes of systems related to natural life [93].
8. Autopoiesis is, naturally, a biological capacity; strictly speaking, however, the concepts of autonomy, self-maintenance, and reproduction could in theory be interpreted as capacities that a sufficiently advanced AI can possess. This could feasibly be achieved through the marriage of advanced deep neural networks, machine learning (including genetic and evolutionary variants), and perhaps embodiment with access to advanced molecular manufacturing technologies [94,95,96]. Still, this differs ontologically from the autopoiesis discussed here.
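To make the idea of autonomy and self-maintenance as computational capacities slightly more concrete, the following is a minimal, purely illustrative sketch in Python of an evolutionary loop in which simple neural agents are selected for keeping an internal energy variable within a viable range. All names, parameters, and dynamics are hypothetical assumptions introduced here for illustration; the sketch is not a model of biological autopoiesis and is not drawn from the works cited above.

```python
# Purely illustrative sketch: evolving simple "self-maintaining" agents.
# All names, parameters, and dynamics are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)

def act(weights, observation):
    # A minimal one-layer policy: observation -> action in [-1, 1].
    return np.tanh(observation @ weights)

def viability(weights, steps=100):
    # Score an agent by how long it keeps its internal energy in a viable range.
    energy, score = 1.0, 0.0
    for _ in range(steps):
        obs = np.array([energy, 1.0])      # the agent senses its own energy level
        action = act(weights, obs)         # action: how much to "feed" itself
        energy += 0.1 * action - 0.05      # feeding gain versus metabolic cost
        if 0.5 < energy < 1.5:             # reward for staying within viable bounds
            score += 1.0
    return score

# Evolutionary loop: keep the fittest agents and reproduce them with mutations.
population = [rng.normal(size=2) for _ in range(50)]
for generation in range(20):
    ranked = sorted(population, key=viability, reverse=True)
    parents = ranked[:10]
    population = [w + rng.normal(scale=0.1, size=2) for w in parents for _ in range(5)]

print("best viability score:", max(viability(w) for w in population))
```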
9. κλέπτει Ἡφαίστου καὶ Ἀθηνᾶς τὴν ἔντεχνον σοφίαν σὺν πυρί—ἀμήχανον γὰρ ἦν ἄνευ πυρὸς αὐτὴν κτητήν τῳ ἢ χρησίμην γενέσθαι—καὶ οὕτω δὴ δωρεῖται ἀνθρώπῳ. τὴν μὲν οὖν περὶ τὸν βίον σοφίαν ἄνθρωπος ταύτῃ ἔσχεν, τὴν δὲ πολιτικὴν οὐκ εἶχεν: ἦν γὰρ παρὰ τῷ Διί. τῷ δὲ Προμηθεῖ εἰς μὲν τὴν ἀκρόπολιν τὴν τοῦ Διὸς οἴκησιν οὐκέτι ἐνεχώρει εἰσελθεῖν—πρὸς δὲ καὶ αἱ Διὸς φυλακαὶ φοβεραὶ ἦσαν—εἰς δὲ τὸ τῆς Ἀθηνᾶς καὶ Ἡφαίστου οἴκημα τὸ κοινόν, ἐν ᾧ (Protagoras 321d). Translation: "[Prometheus] steals from Hephaestus and Athena their technical wisdom together with fire (for without fire it was impossible for anyone to acquire or make use of that wisdom) and so gives it to the human being. In this way the human being obtained the wisdom needed for life, but did not have political wisdom, for that remained with Zeus; and Prometheus no longer had leave to enter the acropolis, the dwelling of Zeus (besides, the guards of Zeus were fearsome), but into the common dwelling of Athena and Hephaestus, in which [...]"
10. There usually was, and still is, a categorically dualistic ontological separation between humans and merely natural beings. This is most dominant and apparent in legal frameworks, with the exception of Argentina, which, on October 18, 2014, recognized the orang-utan named Sandra as the subject of (some) human rights in what turned out to be an unsuccessful habeas corpus case [105].
11. A form of speciesism which is markedly similar to racism and sexism.
12. Stefan L. Sorgner criticizes the notion that higher consciousness (self-consciousness) is a necessary condition for personhood. He takes the further, and more controversial, step of arguing that sentience is not required for the affordance of personhood either [107].
13. There is a difference, however, in that the AIs exposed to the mirror test are (obviously) not quite like humans or animals: the first time they encountered their reflection, they had to be told that what was being reflected was themselves. This provides a reason against the possibility of AI consciousness (though not against nonconscious cognition).
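For readers curious how robotic mirror self-recognition can be operationalized at all, the following is a minimal sketch assuming a simple motor-visual contingency test: the system checks whether the motion it observes correlates strongly with the motor commands it has just issued. The data, threshold, and correlation test are hypothetical assumptions and are not the method used in the study cited in the main text [120].

```python
# Illustrative sketch: mirror self-recognition as motor-visual contingency detection.
# The data, threshold, and correlation test are hypothetical assumptions.
import numpy as np

def is_probably_self(motor_commands, observed_motion, threshold=0.8):
    # True if the observed motion correlates strongly with the issued motor commands.
    motor = np.asarray(motor_commands, dtype=float)
    seen = np.asarray(observed_motion, dtype=float)
    correlation = np.corrcoef(motor, seen)[0, 1]
    return bool(correlation > threshold)

# The system "wiggles" with known commands and compares them with what it sees.
commands      = [0.0, 0.3, 0.6, 0.2, -0.4, -0.1, 0.5]
mirror_motion = [0.05, 0.28, 0.62, 0.18, -0.35, -0.12, 0.49]  # closely tracks commands
other_agent   = [0.4, -0.2, 0.1, 0.5, 0.0, -0.3, 0.2]         # unrelated motion

print(is_probably_self(commands, mirror_motion))  # expected: True
print(is_probably_self(commands, other_agent))    # expected: False
```

On such a construal, "recognition" reduces to statistical contingency detection, which is one reason why passing a mirror-style test need not by itself imply consciousness in the sense discussed in the main text.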
14. Signs that the fetus may not be well can be inferred, through the mother, from distress indicators such as abnormal patterns in cardiotocography, decreased fetal movement, and fetal metabolic acidosis, among others [124,125].
15. Fetal metabolic acidosis is a strong chemical indicator, assessed by taking small blood samples from the fetus itself. It is more reliable than cardiotocography, which has been shown to produce more false positives [124].
16. A cursory example would be a sufficiently advanced care robot giving a patient a prognosis based on certain symptoms, where the patient either disregards that advice or acts contrary to it. Doing so opens up the question of whether such a cognizer might register this as punitive, the logic being that it debases the cognizer's very reason for being.
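As a purely hypothetical illustration of the kind of internal signal such a scenario might involve, the short sketch below tracks the discrepancy between advice issued and the patient's subsequent behaviour and accumulates a crude "goal-frustration" score. The class name, scoring rule, and decay factor are assumptions made for this sketch only, and nothing here implies that such a scalar would by itself constitute suffering.

```python
# Hypothetical illustration: accumulating a crude "goal-frustration" signal
# when issued advice is acted against. All names and scoring rules are assumptions.
from dataclasses import dataclass, field

@dataclass
class AdviceTracker:
    frustration: float = 0.0
    history: list = field(default_factory=list)

    def record(self, advice_given: str, patient_action: str) -> None:
        followed = advice_given == patient_action
        self.history.append((advice_given, patient_action, followed))
        if followed:
            self.frustration *= 0.5   # followed advice lets the signal decay
        else:
            self.frustration += 1.0   # ignored advice raises the signal

tracker = AdviceTracker()
tracker.record("take medication", "take medication")  # followed
tracker.record("rest for two days", "go to work")     # ignored
tracker.record("rest for two days", "go to work")     # ignored again
print(tracker.frustration)  # 2.0 under these assumed rules
```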
17. Effective Altruism organizations broadly aim towards this goal. Particular focus on the long-term reduction of s-risks by and for AI has been undertaken by the Foundational Research Institute [23].

