Two New Philosophical Problems for Robo-Ethics
Abstract
1. Introduction
2. Kripke’s Argument against Functionalism Extended to Robo-Agents
2.1. Worries about Physical Computer Breakdowns
2.2. The Argument
2.2.1. Physical Breakdown
2.2.2. Epistemic Indistinguishability
2.2.3. Stipulation
2.2.4. No Matters-of-Fact
3. Two New Philosophical Problems that Kripke’s Argument Creates for Robo-Ethics
- (1) Should robo-agents perform risky surgery on human beings?
- (2) Should robo-agents perform risky surgery on human beings given such-and-such stipulations as to what actions they perform?
- (3) Should human surgeons perform risky surgery on human beings?
- (4) Should human surgeons perform risky surgery on human beings given such-and-such stipulations as to what actions they perform?
4. Some Possible Objections
4.1. Moral Values and Computation
4.2. Breakdown Conditions Might Result in Arbitrary Computational States
4.3. How Could a Physical Computer Compute Two Different Functions When There Is Only One Physical Process?
- (A) either operating normally or not—that is, either computing F or in breakdown (and so not computing F).
- (B) either (i) the computer is operating normally in the computation of F and operating under breakdown in the computation of G or (ii) operating under breakdown in the computation of F and operating normally in the computation of G.
4.4. Surely There Is a Diagnostic Tool That Would Show the Physical Computer Is Computing Function F and That It Is Not Computing Function G
4.5. ‘Unreliability’ When Applied to Robo-Ethics Is a ‘Category Mistake’
4.6. There Are No Philosophical Problems for Robo-Ethics When the Software for Robo-Agents Has Been Proven to Be Correct
4.7. The ‘There Is Nothing Special about a Stipulation’ Objection
4.8. So What If a Computer Might Break Down? That Has No Importance in Itself and No Importance for Robo-Ethics
4.9. Kripke’s Argument Is on a Par with Skepticism about Dreaming
5. Which Aspects of Robo-Ethics Are Targeted by Kripke’s Argument?
5.1. McCarty’s ‘Deep’ Conceptual Models
5.2. Tavani on the ‘Moral Consideration Question’ in Robo-Ethics
- (5) Robo-agents satisfy such-and-such conditions. Therefore, they should be accorded moral consideration.
- (6) Robo-agents satisfy such-and-such conditions because a human agent has stipulated that they satisfy those conditions. Therefore, they should be accorded moral consideration.
- (7) Robo-agents socially interacting with human agents can enhance our ability, as humans, to act in the world. Therefore, robo-agents should be accorded moral consideration.
- (8) Robo-agents socially interacting with human agents—where what the robo-agents do is the result of a human stipulation—can enhance our ability, as humans, to act in the world. Therefore, robo-agents should be accorded moral consideration.
5.3. Property versus Relational Views in Robo-Ethics
- (9) When robo-agents engage in such-and-such behavior in social interactions with human beings, those human agents infer that they are conscious.
- (10) When robo-agents engage in such-and-such behavior stipulated by human agents to hold of them in social interactions with human agents, those human agents infer that they are conscious.
6. Who Is Legally and/or Morally Responsible for the Actions of a Robo-Agent?
6.1. Liability Responsibility
The Problem of Lying to Avoid Non-Exculpatory Moral Loss
6.2. Capacity Responsibility
6.3. Causal Responsibility
6.4. Human Agent Responsibility for All the Actions of a Robo-Agent
7. Does Kripke’s Argument Have Practical Import for the Field of Robo-Ethics?
7.1. Wallach and Allen
7.2. Anderson and Anderson
7.3. The Moral Turing Test (MTT)
7.4. Alternative Views of the Field of Robo-Ethics
8. Conclusions
Funding
Acknowledgments
Conflicts of Interest
References
- Levin, J. Functionalism; Stanford Encyclopedia of Philosophy: Palo Alto, CA, USA, 2018.
- Avigad, J.; Blanchette, J.; Klein, G. Introduction to Milestones in Interactive Theorem Proving. J. Autom. Reason. 2018, 61, 1–8.
- Sitaraman, M. Building a Push-Button RESOLVE Verifier: Progress and Challenges. Form. Asp. Comput. 2011, 23, 607–626.
- Avigad, J. Formally Verified Mathematics. Commun. ACM 2014, 57, 66–75.
- Buechner, J. Not Even Computing Machines Can Follow Rules: Kripke’s Critique of Functionalism. In Saul Kripke; Berger, A., Ed.; Cambridge University Press: New York, NY, USA, 2011.
- Buechner, J. Does Kripke’s Argument Against Functionalism Undermine the Standard View of What Computers Are? Minds Mach. 2018, 28, 491–513.
- Buechner, J. Gödel, Putnam, and Functionalism; MIT Press: Cambridge, MA, USA, 2008.
- Putnam, H. Reason, Truth, and History; Cambridge University Press: New York, NY, USA, 1981.
- Ryle, G. The Concept of Mind; University of Chicago Press: Chicago, IL, USA, 1983.
- Kaufman, S.; Rosset, S.; Perlich, C. Leakage in Data Mining: Formulation, Detection, and Avoidance. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD’11, San Diego, CA, USA, 21–24 August 2011.
- Christiano, P.; Leike, J.; Brown, T.; Martic, M.; Legg, S.; Amodei, D. Deep Reinforcement Learning from Human Preferences. arXiv 2017, arXiv:1706.03741v3.
- Popov, I.; Heess, N.; Lillicrap, T.; Hafner, R.; Barth-Maron, G.; Vecerik, M.; Lampe, T.; Tassa, Y.; Erez, T.; Riedmiller, M. Data-Efficient Deep Reinforcement Learning for Dexterous Manipulation. arXiv 2017, arXiv:1704.03073.
- McCarty, L.T. Intelligent Legal Information Systems: Problems and Prospects. Rutgers Comput. Technol. Law J. 1983, 9, 265–294.
- Shortliffe, E. Computer-Based Medical Consultations: MYCIN; Elsevier: Amsterdam, The Netherlands, 1976.
- Weiss, S.; Kulikowski, C.; Amarel, S.; Safir, A. A Model-Based Method for Computer-Aided Medical Decision-Making. Artif. Intell. 1978, 11, 145–172.
- McCarty, L.T. Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning. Harv. Law Rev. 1977, 90, 837–893.
- Anderson, M. GenEth. Available online: http://uhaweb.hartford.edu/anderson/Site/GenEth.html (accessed on 12 June 2018).
- Tavani, H.T. Can Social Robots Qualify for Moral Consideration? Reframing the Question about Robot Rights. Information 2018, 9, 73.
- Jonas, H. The Imperative of Responsibility: In Search of an Ethics for the Technological Age; University of Chicago Press: Chicago, IL, USA, 1984.
- Coeckelbergh, M. Robot Rights? Towards a Social-Relational Justification of Moral Consideration. Ethics Inf. Technol. 2010, 12, 209–221.
- Gunkel, D.J. The Other Question: Can and Should Robots Have Rights? Ethics Inf. Technol. 2017, 19, 1–13.
- Moor, J.H. Four Kinds of Ethical Robots. Philos. Now 2009, 17, 12–14.
- Audi, R. (Ed.) The Cambridge Dictionary of Philosophy, 2nd ed.; Cambridge University Press: New York, NY, USA, 1999.
- Coleman, J. Risks and Wrongs; Cambridge University Press: New York, NY, USA, 1992.
- Kripke, S. Wittgenstein on Rules and Private Language; Harvard University Press: Cambridge, MA, USA, 1982.
- Wallach, W.; Allen, C. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press: New York, NY, USA, 2009.
- Gunkel, D.J. The Machine Question; MIT Press: Cambridge, MA, USA, 2012; p. 75.
- Tavani, H.T. Can We Develop Artificial Agents Capable of Making Good Moral Decisions? Minds Mach. 2011, 21, 465–474.
- Anderson, M.; Anderson, S.L. A Prima Facie Duty Approach to Machine Ethics. In Machine Ethics; Anderson, M., Anderson, S.L., Eds.; Cambridge University Press: New York, NY, USA, 2011; pp. 476–494.
- Anderson, M.; Anderson, S.L. Case-Supported Principle-Based Behavior Paradigm. In A Construction Manual for Robots’ Ethical Systems; Trappl, R., Ed.; Springer: New York, NY, USA, 2015; pp. 155–168.
- Tavani, H.T. Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing, 5th ed.; John Wiley and Sons: Hoboken, NJ, USA, 2016.
- Anderson, S.L. Machine Metaethics. In Machine Ethics; Anderson, M., Anderson, S.L., Eds.; Cambridge University Press: New York, NY, USA, 2011; pp. 21–27.
- Allen, C.; Varner, G.; Zinser, J. Prolegomena to Any Future Artificial Moral Agent. J. Exp. Theor. Artif. Intell. 2000, 12, 251–261.
- Veruggio, G.; Abney, K. Roboethics: The Applied Ethics for a New Science. In Robot Ethics: The Ethical and Social Implications of Robotics; Lin, P., Abney, K., Bekey, G., Eds.; MIT Press: Cambridge, MA, USA, 2012; pp. 347–363.
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).