Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence
Abstract
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”—Ada Lovelace
“Nature’s black box cannot necessarily be described by a simple model.”—Peter Norvig
“We categorize as we do because we have the brains and bodies we have and because we interact in the world as we do.”—George Lakoff
1. Introduction
2. Performativity, Proprioception and the Limits of Disembodiment
3. Assembling Nonconscious Cognizers
3.1. Technical Nonconscious Cognizers
4. Suffering Technical Cognizers
“God’s divine immaterial spark, our reason, entered into us and connected with us; this process is responsible for the fact that only we humans possess something that goes beyond the purely natural world, which is why only humans possess subject status.” [104] (p. 3)
5. Limitations and Further Research Streams
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Alanen, L. Descartes’s Concept of Mind; Harvard University Press: Boston, MA, USA, 2009. [Google Scholar]
- Gibbs, R.W., Jr.; Hampe, B. The Embodied and Discourse Views of Metaphor: Why These Are Not so Different and How They Can Be Brought Closer Together. In Metaphor: Embodied Cognition and Discourse; Cambridge University Press: Cambridge, UK, 2017; pp. 319–365. [Google Scholar]
- Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 2017. [Google Scholar]
- Steels, L.; Brooks, R. The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents; Routledge: Abingdon, UK, 2018. [Google Scholar]
- Wallach, W.; Franklin, S.; Allen, C. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Top. Cogn. Sci. 2010, 2, 454–485. [Google Scholar] [CrossRef] [Green Version]
- Boden, M.A. Artificial Intelligence: A Very Short Introduction; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
- Veruggio, G.; Operto, F. Roboethics: A Bottom-up Interdisciplinary Discourse in the Field of Applied Ethics in Robotics. Int. Rev. Inf. Ethics 2006, 6, 2–8. [Google Scholar]
- Veruggio, G.; Operto, F.; Bekey, G. Roboethics: Social and Ethical Implications. In Springer Handbook of Robotics; Springer: Berlin, Germany, 2016; pp. 2135–2160. [Google Scholar]
- Moon, A.; Calisgan, C.; Operto, F.; Veruggio, G.; Van der Loos, H.F.M. Open Roboethics: Establishing an Online Community for Accelerated Policy and Design Change. In Proceedings of the We Robot, Miami, FL, USA, 21–22 April 2012. [Google Scholar]
- Johnson, M.; Lakoff, G. Why Cognitive Linguistics Requires Embodied Realism. Cogn. Linguist. 2002, 13, 245–264. [Google Scholar] [CrossRef]
- Lakoff, G. Language and Emotion. Emot. Rev. 2016, 8, 269–273. [Google Scholar] [CrossRef]
- Lakoff, G. Explaining Embodied Cognition Results. Top. Cogn. Sci. 2012, 4, 773–785. [Google Scholar] [CrossRef] [Green Version]
- Lakoff, G.; Núñez, R.E. Where Mathematics Comes from: How the Embodied Mind Brings Mathematics into Being; Basic Books: New York, NY, USA, 2000. [Google Scholar]
- Lormand, E. Framing the Frame Problem. Synthese 1990, 82, 353–374. [Google Scholar] [CrossRef]
- Dennett, D.C. Cognitive Wheels: The Frame Problem of AI. In Routledge Contemporary Readings in Philosophy. Philosophy of Psychology: Contemporary Readings; Bermúdez, J.L., Ed.; Routledge/Taylor & Francis Group: New York, NY, USA, 2006; pp. 433–454. [Google Scholar]
- Ford, K.M.; Glymour, C.N.; Hayes, P.J. Thinking about Android Epistemology; AAAI Press (American Association for Artificial Intelligence): Menlo Park, CA, USA, 2006. [Google Scholar]
- Brooks, R.A. Artificial Life and Real Robots. In Proceedings of the First European Conference on Artificial Life, Paris, France, 10–15 December 1992; pp. 3–10. [Google Scholar]
- Ramamurthy, U.; Baars, B.J.; D’Mello, S.K.; Franklin, S. LIDA: A Working Model of Cognition. 2006. Available online: http://cogprints.org/5852/1/ICCM06-UR.pdf (accessed on 29 April 2019).
- Faghihi, U.; Franklin, S. The LIDA Model as a Foundational Architecture for AGI. In Theoretical Foundations of Artificial General Intelligence; Springer: Berlin, Germany, 2012; pp. 103–121. [Google Scholar]
- Hayles, N.K. Unthought: The Power of the Cognitive Nonconscious; University of Chicago Press: Chicago, IL, USA, 2017. [Google Scholar]
- Hayles, N.K. Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness. New Lit. Hist. 2014, 45, 199–220. [Google Scholar] [CrossRef]
- Hayles, N.K. Distributed Cognition at/in Work. Frame 2008, 21, 15–29. [Google Scholar]
- Althaus, D.; Gloor, L. Reducing Risks of Astronomical Suffering: A Neglected Priority; Foundational Research Institute: Berlin, Germany, 2016. [Google Scholar]
- Wykowska, A.; Chaminade, T.; Cheng, G. Embodied Artificial Agents for Understanding Human Social Cognition. Philos. Trans. R. Soc. B Biol. Sci. 2016, 371, 20150375. [Google Scholar] [CrossRef]
- Müller, V.C.; Bostrom, N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence; Springer: Berlin, Germany, 2016; pp. 555–572. [Google Scholar]
- Kiela, D.; Bulat, L.; Vero, A.L.; Clark, S. Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research. arXiv 2016, arXiv:1610.07432. [Google Scholar]
- Gibbs, R.W., Jr. Embodiment and Cognitive Science; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
- Sotala, K.; Gloor, L. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Informatica 2017, 41. [Google Scholar]
- Millikan, R.G. Language, Thought and Other Biological Categories: New Foundations for Realism; The MIT Press: London, UK, 1984. [Google Scholar]
- Ray, T.; Sarker, R.; Li, X. Artificial Life and Computational Intelligence; Springer: Berlin, Germany, 2016. [Google Scholar]
- Förster, F. Enactivism and Robotic Language Acquisition: A Report from the Frontier. Philosophies 2019, 4, 11. [Google Scholar] [CrossRef]
- Nilsson, N.J. Shakey the Robot; SRI International: Menlo Park, CA, USA, 1984. [Google Scholar]
- Wheeler, M. Cognition in Context: Phenomenology, Situated Robotics and the Frame Problem. Int. J. Philos. Stud. 2008, 16, 323–349. [Google Scholar] [CrossRef] [Green Version]
- Matarić, M.J. Situated Robotics. Encyclopedia of Cognitive Science; Wiley Online Library: Hoboken, NJ, USA, 2006. [Google Scholar] [CrossRef]
- Caillou, P.; Gaudou, B.; Grignard, A.; Truong, C.Q.; Taillandier, P. A Simple-to-Use BDI Architecture for Agent-Based Modeling and Simulation. In Advances in Social Simulation 2015; Springer: Berlin, Germany, 2017; pp. 15–28. [Google Scholar]
- Tao, Z.; Biwen, Z.; Lee, L.; Kaber, D. Service Robot Anthropomorphism and Interface Design for Emotion in Human-Robot Interaction. In Proceedings of the 4th IEEE Conference on Automation Science and Engineering, CASE 2008, Arlington, VA, USA, 23–26 August 2008; pp. 674–679. [Google Scholar] [CrossRef]
- Sharkey, A.; Sharkey, N. Granny and the Robots: Ethical Issues in Robot Care for the Elderly. Ethics Inf. Technol. 2012, 14, 27–40. [Google Scholar] [CrossRef]
- van Wynsberghe, A. Service Robots, Care Ethics, and Design. Ethics Inf. Technol. 2016, 18, 311–321. [Google Scholar] [CrossRef]
- Kennedy, J. Swarm Intelligence. In Handbook of Nature-Inspired and Innovative Computing; Springer: Berlin, Germany, 2006; pp. 187–219. [Google Scholar]
- Blum, C.; Merkle, D. Swarm Intelligence in Optimization. In Swarm Intelligence; Blum, C., Merkle, D., Eds.; Springer: Berlin, Heidelberg, 2008; pp. 43–85. [Google Scholar] [CrossRef] [Green Version]
- Karaboga, D.; Akay, B. A Survey: Algorithms Simulating Bee Swarm Intelligence. Artif. Intell. Rev. 2009, 31, 61–85. [Google Scholar] [CrossRef]
- Bonabeau, E.; Dorigo, M.; Theraulaz, G. From Natural to Artificial Swarm Intelligence; Oxford University Press, Inc.: New York, NY, USA, 1999. [Google Scholar]
- Hutchins, E.; Klausen, T. Distributed Cognition in an Airline Cockpit. In Cognition and Communication at Work; Engeström, Y., Middleton, D., Eds.; Cambridge University Press: Cambridge, UK, 1996; pp. 15–34. [Google Scholar] [CrossRef]
- Hollan, J.; Hutchins, E.; Kirsh, D. Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research. ACM Trans. Comput. Interact. 2000, 7, 174–196. [Google Scholar] [CrossRef]
- Hutchins, E. The Social Organization of Distributed Cognition. In Perspectives on Socially Shared Cognition; Resnick, L.B., Levine, J.M., Teasley, S.D., Eds.; American Psychological Association: Washington, DC, USA, 1991. [Google Scholar]
- Searle, J.R. Is the Brain’s Mind a Computer Program? Sci. Am. 1990, 262, 25–31. [Google Scholar] [CrossRef]
- Wallach, W.; Allen, C.; Franklin, S. Consciousness and Ethics: Artificially Conscious Moral Agents. Int. J. Mach. Conscious. 2011, 3, 177–192. [Google Scholar] [CrossRef]
- Magnani, L. Eco-Cognitive Computationalism: From Mimetic Minds to Morphology-Based Enhancement of Mimetic Bodies. Entropy 2018, 20, 430. [Google Scholar] [CrossRef]
- Peterson, J.B. Maps of Meaning: The Architecture of Belief; Routledge: New York, NY, USA, 1999. [Google Scholar]
- Sarbin, T.R. Embodiment and the Narrative Structure of Emotional Life. Narrat. Inq. 2001, 11, 217–225. [Google Scholar] [CrossRef]
- Nelson, K. Narrative and the Emergence of a Consciousness of Self. Narrat. Conscious. 2003, 17–36. [Google Scholar]
- Herman, D. Emergence of Mind: Representations of Consciousness in Narrative Discourse in English; University of Nebraska Press: Lincoln, NE, USA, 2011. [Google Scholar]
- Miłkowski, M. Situatedness and Embodiment of Computational Systems. Entropy 2017, 19, 162. [Google Scholar] [CrossRef]
- Damasio, A.R. The Feeling of What Happens: Body and Emotion in the Making of Consciousness; Houghton Mifflin Harcourt: Boston, MA, USA, 1999. [Google Scholar]
- Bayne, T.; Hohwy, J.; Owen, A.M. Are There Levels of Consciousness? Trends Cogn. Sci. 2016, 20, 405–413. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Damasio, A.; Dolan, R.J. The Feeling of What Happens. Nature 1999, 401, 847. [Google Scholar]
- De Waal, F.B.M.; Ferrari, P.F. Towards a Bottom-up Perspective on Animal and Human Cognition. Trends Cogn. Sci. 2010, 14, 201–207. [Google Scholar] [CrossRef]
- Dawkins, M.S. Why Animals Matter: Animal Consciousness, Animal Welfare, and Human Well-Being; Oxford University Press: Oxford, UK, 2012. [Google Scholar]
- Boly, M.; Seth, A.K.; Wilke, M.; Ingmundson, P.; Baars, B.; Laureys, S.; Edelman, D.; Tsuchiya, N. Consciousness in Humans and Non-Human Animals: Recent Advances and Future Directions. Front. Psychol. 2013, 4, 625. [Google Scholar] [CrossRef]
- Godfrey-Smith, P. Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness; Farrar, Straus and Giroux: New York, NY, USA, 2016. [Google Scholar]
- Montgomery, S. The Soul of an Octopus: A Surprising Exploration into the Wonder of Consciousness; Simon and Schuster: New York, NY, USA, 2015. [Google Scholar]
- Edelman, D.B.; Baars, B.J.; Seth, A.K. Identifying Hallmarks of Consciousness in Non-Mammalian Species. Conscious. Cogn. 2005, 14, 169–187. [Google Scholar] [CrossRef]
- Izard, C.E. The Emergence of Emotions and the Development of Consciousness in Infancy. In The Psychobiology of Consciousness; Springer: Berlin, Germany, 1980; pp. 193–216. [Google Scholar]
- Gallup, G.G., Jr.; Anderson, J.R.; Shillito, D.J. The Mirror Test. Cogn. Anim. Empir. Theor. Perspect. Anim. Cogn. 2002, 325–333. [Google Scholar]
- Marten, K.; Psarakos, S. Evidence of Self-Awareness in the Bottlenose Dolphin (Tursiops Truncatus). In Self-Awareness in Animals and Humans: Developmental Perspectives; Parker, S.T., Mitchell, R., Boccia, M., Eds.; Cambridge University Press: Cambridge, UK, 1994; pp. 361–379. [Google Scholar]
- Delfour, F.; Marten, K. Mirror Image Processing in Three Marine Mammal Species: Killer Whales (Orcinus Orca), False Killer Whales (Pseudorca crassidens) and California Sea Lions (Zalophus californianus). Behav. Process. 2001, 53, 181–190. [Google Scholar] [CrossRef]
- Walraven, V.; Van Elsacker, L.; Verheyen, R. Reactions of a Group of Pygmy Chimpanzees (Pan paniscus) to Their Mirror-Images: Evidence of Self-Recognition. Primates 1995, 36, 145–150. [Google Scholar] [CrossRef]
- Suárez, S.D.; Gallup, G.G., Jr. Self-Recognition in Chimpanzees and Orangutans, but Not Gorillas. J. Hum. Evol. 1981, 10, 175–188. [Google Scholar]
- Robert, S. Ontogeny of Mirror Behavior in Two Species of Great Apes. Am. J. Primatol. 1986, 10, 109–117. [Google Scholar] [CrossRef]
- Gallup, G.G. Chimpanzees: Self-Recognition. Science 1970, 167, 86–87. [Google Scholar] [CrossRef]
- De Veer, M.W.; Gallup Jr, G.G.; Theall, L.A.; van den Bos, R.; Povinelli, D.J. An 8-Year Longitudinal Study of Mirror Self-Recognition in Chimpanzees (Pan Troglodytes). Neuropsychologia 2003, 41, 229–234. [Google Scholar] [CrossRef]
- Plotnik, J.M.; De Waal, F.B.M.; Reiss, D. Self-Recognition in an Asian Elephant. Proc. Natl. Acad. Sci. USA 2006, 103, 17053–17057. [Google Scholar] [CrossRef] [PubMed]
- Prior, H.; Schwarz, A.; Güntürkün, O. Mirror-Induced Behavior in the Magpie (Pica Pica): Evidence of Self-Recognition. PLoS Biol. 2008, 6, e202. [Google Scholar] [CrossRef]
- Uchino, E.; Watanabe, S. Self-recognition in Pigeons Revisited. J. Exp. Anal. Behav. 2014, 102, 327–334. [Google Scholar] [CrossRef]
- Kohda, M.; Hotta, T.; Takeyama, T.; Awata, S.; Tanaka, H.; Asai, J.; Jordan, A.L. If a Fish Can Pass the Mark Test, What Are the Implications for Consciousness and Self-Awareness Testing in Animals? PLoS Biol. 2019, 17, e3000021. [Google Scholar] [CrossRef]
- Edelman, G.; Tononi, G. A Universe of Consciousness: How Matter Becomes Imagination; Basic Books: New York, NY, USA, 2008. [Google Scholar]
- Edelman, G.M. Bright Air, Brilliant Fire: On the Matter of the Mind; Basic Books: New York, NY, USA, 1992. [Google Scholar]
- Edelman, G.M. Wider than the Sky: A Revolutionary View of Consciousness; Penguin Press Science: London, UK, 2005. [Google Scholar]
- Seth, A.K.; Baars, B.J.; Edelman, D.B. Criteria for Consciousness in Humans and Other Mammals. Conscious. Cogn. 2005, 14, 119–139. [Google Scholar] [CrossRef]
- Edelman, D.B.; Seth, A.K. Animal Consciousness: A Synthetic Approach. Trends Neurosci. 2009, 32, 476–484. [Google Scholar] [CrossRef] [PubMed]
- Hayles, N.K. The Cognitive Nonconscious: Enlarging the Mind of the Humanities. Crit. Inq. 2016, 42, 783–808. [Google Scholar] [CrossRef]
- Slotnick, S.D.; Schacter, D.L. Conscious and Nonconscious Memory Effects Are Temporally Dissociable. Cogn. Neurosci. 2010, 1, 8–15. [Google Scholar] [CrossRef]
- Winkelhagen, L.; Dastani, M.; Broersen, J. Beliefs in Agent Implementation. In Proceedings of the International Workshop on Declarative Agent Languages and Technologies, Utrecht, The Netherlands, 25 July 2005; Springer: Berlin, Germany, 2005; pp. 1–16. [Google Scholar]
- Dong, W.; Luo, L.; Huang, C. Dynamic Logging with Dylog in Networked Embedded Systems. ACM Trans. Embed. Comput. Syst. 2016, 15, 5. [Google Scholar]
- Auletta, G. Cognitive Biology: Dealing with Information from Bacteria to Minds; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
- Auletta, G. Teleonomy: The Feedback Circuit Involving Information and Thermodynamic Processes. J. Mod. Phys. 2011, 2, 136. [Google Scholar] [CrossRef]
- Morton, T. Being Ecological; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
- Simons, D.J.; Chabris, C.F. Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events. Perception 1999, 28, 1059–1074. [Google Scholar] [CrossRef] [Green Version]
- Freeman, W.J.; Núñez, R.E. Editors’ Introduction. In Reclaiming Cognition: The Primacy of Action, Intention, and Emotion; Freeman, W.J., Núñez, R.E., Eds.; Imprint Academic: Bowling Green, OH, USA, 1999; p. xvi. [Google Scholar]
- Johnson, M. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason; University of Chicago Press: Chicago, IL, USA, 2013. [Google Scholar]
- Stano, P.; Kuruma, Y.; Damiano, L. Synthetic Biology and (Embodied) Artificial Intelligence: Opportunities and Challenges. Adapt. Behav. 2018, 26, 41–44. [Google Scholar] [CrossRef]
- Damiano, L.; Stano, P. Understanding Embodied Cognition by Building Models of Minimal Life. In Proceedings of the Italian Workshop on Artificial Life and Evolutionary Computation, Venice, Italy, 19–21 September 2017; Springer: Berlin, Germany, 2017; pp. 73–87. [Google Scholar]
- Langton, C.G. Artificial Life: An Overview; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
- Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
- Umbrello, S. Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy. Int. J. Technoethics 2019, 10. [Google Scholar] [CrossRef]
- Umbrello, S.; Baum, S.D. Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing. Futures 2018, 100, 63–73. [Google Scholar] [CrossRef]
- Poole, D.L.; Mackworth, A.K. Artificial Intelligence: Foundations of Computational Agents; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
- Copeland, J. Artificial Intelligence: A Philosophical Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
- Umbrello, S.; Lombard, J. Silence of the Idols: Appropriating the Myths of Daedalus and Sisyphus for Posthumanist Discourses. Postmod. Open. 2018, 9, 98–121. [Google Scholar] [CrossRef]
- Hayles, K.N. Afterword: The Human in the Posthuman. Cult. Crit. 2003, 53, 134–137. [Google Scholar] [CrossRef]
- Marchesini, R. Tecnosfera: Proiezioni per Un Futuro Posthumano; Castelvechi: Rome, Italy, 2017. [Google Scholar]
- Sorgner, S.L. Pedegrees. In Post- and Transhumanism: An Introduction; Ranisch, R., Sorgner, S.L., Eds.; Peter Lang: Frankfurt am Main, Germany, 2014; pp. 29–48. [Google Scholar] [CrossRef]
- Caffo, L. Fragile Umanità; Giulio Einaudi editore: Torino, Italy, 2017. [Google Scholar]
- Sorgner, S.L. Dignity of Apes, Humans and AI. 2019. Available online: http://trivent-publishing.eu/ (accessed on 4 May 2019).
- Keim, B. An Orangutan Has (Some) Human Rights, Argentine Court Rules. Wired. 2014. Available online: https://www.wired.com/2014/12/orangutan-personhood/ (accessed on 29 April 2019).
- Singer, P. Practical Ethics; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
- Sorgner, S.L. Schöner Neuer Mensch; Nicolai Verlag: Berlin, Germany, 2018. [Google Scholar]
- Amsterdam, B. Mirror Self-image Reactions before Age Two. Dev. Psychobiol. J. Int. Soc. Dev. Psychobiol. 1972, 5, 297–305. [Google Scholar] [CrossRef]
- Bard, K.A.; Todd, B.K.; Bernier, C.; Love, J.; Leavens, D.A. Self-awareness in Human and Chimpanzee Infants: What Is Measured and What Is Meant by the Mark and Mirror Test? Infancy 2006, 9, 191–219. [Google Scholar] [CrossRef]
- Nagel, T. What Is It like to Be a Bat? Philos. Rev. 1974, 83, 435–450. [Google Scholar] [CrossRef]
- Singer, P. On Comparing the Value of Human and Nonhuman Life BT—Applied Ethics in a Troubled World; Morscher, E., Neumaier, O., Simons, P., Eds.; Springer: Dordrecht, The Netherlands, 1998; pp. 93–104. [Google Scholar] [CrossRef]
- Lakoff, G. Mapping the Brain’s Metaphor Circuitry: Metaphorical Thought in Everyday Reason. Front. Hum. Neurosci. 2014, 8, 958. [Google Scholar] [CrossRef]
- Lakoff, G.; Johnson, M. Metaphors We Live by; University of Chicago Press: Chicago, IL, USA, 2003. [Google Scholar]
- Sorgner, S.L. Human Dignity 2.0: Beyond a Rigid Version of Anthropocentrism. Trans-Humanit. J. 2013, 6, 135–159. [Google Scholar] [CrossRef]
- Umbrello, S. Safe-(for Whom?)-By-Design: Adopting a Posthumanist Ethics for Technology Design; York University: Toronto, ON, USA, 2018. [Google Scholar] [CrossRef]
- Vallor, S. Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century. Philos. Technol. 2011, 24, 251. [Google Scholar] [CrossRef]
- Kolb, M. Soldier and Robot Interaction in Combat Environments; The University of Oklahoma: Norman, OK, USA, 2012. [Google Scholar]
- Scheutz, M. The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots; MIT Press: Cambridge, MA, USA, 2011; p. 205. [Google Scholar]
- Darling, K. Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior towards Robotic Objects; Robot Law, C., Froomkin, K., Eds.; Edward Elgar Publishing: Cheltenham, UK, 2016. [Google Scholar]
- Hart, J.W.; Scassellati, B. Mirror Perspective-Taking with a Humanoid Robot. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012. [Google Scholar]
- Floridi, L. Consciousness, Agents and the Knowledge Game. Minds Mach. 2005, 15, 415–444. [Google Scholar] [CrossRef] [Green Version]
- Margalit, A. Politik der Würde: Über Achtung und Verachtung; Suhrkamp: Berlin, Germany, 2012. [Google Scholar]
- Lee, S.J.; Ralston, H.J.P.; Drey, E.A.; Partridge, J.C.; Rosen, M.A. Fetal Pain: A Systematic Multidisciplinary Review of the Evidence. JAMA 2005, 294, 947–954. [Google Scholar] [CrossRef] [PubMed]
- Garite, T.J.; Simpson, K.R. Intrauterine Resuscitation during Labor. Clin. Obstet. Gynecol. 2011, 54, 28–39. [Google Scholar] [CrossRef] [PubMed]
- Fetal Distress. Available online: https://americanpregnancy.org/labor-and-birth/fetal-distress/ (accessed on 6 April 2019).
- Bellieni, C.V.; Buonocore, G. Fetal Pain Debate May Weaken the Fight for Newborns’ Analgesia. J. Pain 2019, 20, 366–367. [Google Scholar] [CrossRef]
- Derbyshire, S.W.G. Can Fetuses Feel Pain? BMJ 2006, 332, 909–912. [Google Scholar] [CrossRef]
- Khakurel, J.; Penzenstadler, B.; Porras, J.; Knutas, A.; Zhang, W. The Rise of Artificial Intelligence under the Lens of Sustainability. Technologies 2018. [Google Scholar] [CrossRef]
- Watson, D.S.; Krutzinna, J.; Bruce, I.N.; Griffiths, C.E.; McInnes, I.B.; Barnes, M.R.; Floridi, L. Clinical Applications of Machine Learning Algorithms: Beyond the Black Box. BMJ 2019, 364, 1–4. [Google Scholar] [CrossRef]
- Carabantes, M. Black-Box Artificial Intelligence: An Epistemological and Critical Analysis. AI Soc. 2019. [Google Scholar] [CrossRef]
- Sternberg, G.S.; Reznik, Y.; Zeira, A.; Loeb, S.; Kaewell, J.D. Cognitive and Affective Human Machine Interface. Google Patents 5 July 2018. Available online: https://patentscope.wipo.int/search/en/detail.jsf;jsessionid=4E37A46F5807F625B3D12E91A33E2659.wapp2nB;jsessionid=9DB31F5F7B0EEA4D1BB54F6FB168BAA4.wapp2nB?docId=US222845836&recNum=5806&office=&queryString=&prevFilter=&sortOption=Fecha+de+publicaci%C3%B3n%2C+orden+descendente&maxRec=69890666 (accessed on 4 May 2019).
- Damaševicius, R.; Wei, W. Design of Computational Intelligence-Based Language Interface for Human-Machine Secure Interaction. J. Univ. Comput. Sci. 2018, 24, 537–553. [Google Scholar]
2. The Foundational Research Institute defines suffering risks as “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far” [23].
3. AI that benefits from being able to interact with the physical world through robotic technologies such as advanced sensors, actuators and motor-control units.
4. This core consciousness requires a particular level of brain complexity and a specific connection to the nervous system and the senses. Humans have been shown to possess it from 4–5 months of age [63].
5. One way to test for this level of consciousness is the mirror test, a measure of self-awareness developed by Gordon Gallup Jr. in 1970. The test gauges self-awareness by determining whether an entity can recognize itself when encountering its reflection in a mirror [64]. Nine nonhuman animal species have also passed the mirror test: the bottlenose dolphin [65], killer whale [66], bonobo [67], Bornean orang-utan [68,69], chimpanzee [70,71] (after one year of age) [69], Asian elephant [72], Eurasian magpie [73], pigeon [74], and cleaner wrasse [75].
6. This selective-attention test requires subjects to count the number of passes made by a basketball team and then asks whether there was anything special about the video. The ‘special’ feature is that, while the players are passing the ball, a kickboxing gorilla comes on screen, yet it remains unnoticed by many observers even though it is clearly within their cognitive field. The result shows that cognition and consciousness are distinct (albeit co-varying) phenomena.
7. AL refers to the use of biochemistry, robotics and simulations to study the evolution and processes of systems related to natural life [93].
8. Naturally, autopoiesis is a biological capacity; strictly speaking, however, the concepts of autonomy, self-maintenance and reproduction could in theory be interpreted as capacities possessed by a sufficiently advanced AI. This could feasibly be achieved through the marriage of advanced deep neural networks, machine learning (with genetic-evolutionary variants) and, perhaps, embodiment with access to advanced molecular manufacturing technologies [94,95,96]. Still, this differs ontologically from the autopoiesis discussed here.
9. κλέπτει Ἡφαίστου καὶ Ἀθηνᾶς τὴν ἔντεχνον σοφίαν σὺν πυρί—ἀμήχανον γὰρ ἦν ἄνευ πυρὸς αὐτὴν κτητήν τῳ ἢ χρησίμην γενέσθαι—καὶ οὕτω δὴ δωρεῖται ἀνθρώπῳ. τὴν μὲν οὖν περὶ τὸν βίον σοφίαν ἄνθρωπος ταύτῃ ἔσχεν, τὴν δὲ πολιτικὴν οὐκ εἶχεν: ἦν γὰρ παρὰ τῷ Διί. τῷ δὲ Προμηθεῖ εἰς μὲν τὴν ἀκρόπολιν τὴν τοῦ Διὸς οἴκησιν οὐκέτι ἐνεχώρει εἰσελθεῖν—πρὸς δὲ καὶ αἱ Διὸς φυλακαὶ φοβεραὶ ἦσαν—εἰς δὲ τὸ τῆς Ἀθηνᾶς καὶ Ἡφαίστου οἴκημα τὸ κοινόν, ἐν ᾧ (Protagoras 321d)
10. There usually was, and still is, a categorically dualistic ontological separation between humans and solely natural beings. This is most dominant and apparent in legal frameworks, with the exception of Argentina, which on 18 October 2014 recognized the orang-utan named Sandra as the subject of (some) human rights in what turned out to be an unsuccessful habeas corpus case [105].
11. A form of speciesism markedly similar to racism and sexism.
12. Stefan L. Sorgner criticizes the notion that higher consciousness (self-consciousness) is a necessary condition for personhood. He takes the further, and more controversial, step of arguing that sentience is not required for the affordance of personhood either [107].
13. There is a difference, however, in that the AIs exposed to the mirror test are (obviously) not quite like humans or animals. The first time they encountered themselves, they had to be told that what was being reflected was themselves. This provides a reason against the possibility of AI consciousness (though not against nonconscious cognition).
15. Testing for fetal metabolic acidosis, done by taking small blood samples from the fetus itself, is a strong chemical predictor of fetal distress. It is more reliable than cardiotocography, which has been shown to produce more false positives [124].
16. A cursory example would be a sufficiently advanced care robot giving a patient a prognosis based on certain symptoms, where the patient either disregards the advice or acts contrary to it. This raises the question of whether such a cognizer might register the outcome as punitive, since it debases the cognizer’s very reason for being.
17. Effective Altruism organizations broadly aim toward this goal. Particular focus on the long-term reduction of s-risks by and for AI has been undertaken by the Foundational Research Institute [23].
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Umbrello, S.; Sorgner, S.L. Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence. Philosophies 2019, 4, 24. https://doi.org/10.3390/philosophies4020024