Editorial

Editorial for the Special Issue on “ROBOETHICS”

by
Spyros G. Tzafestas
School of Electrical and Computer Engineering, National Technical University of Athens, 15773 Athens, Greece
Information 2018, 9(12), 331; https://doi.org/10.3390/info9120331
Submission received: 18 December 2018 / Accepted: 18 December 2018 / Published: 19 December 2018
(This article belongs to the Special Issue ROBOETHICS)

Abstract

Ethical and social issues of robotics have attracted increasing attention from the scientific and technical community over the years. These issues arise particularly in mental and sensitive robotic applications, such as robot-based rehabilitation, social robot (sociorobot) applications, and military robot applications. The purpose of launching this Special Issue was to publish high-quality papers addressing timely and important aspects of roboethics, and to serve as a source for disseminating novel ideas that demonstrate the necessity of roboethics. The papers included in the Special Issue deal with fundamental aspects and address interesting, deep questions in the field of roboethics and robophilosophy.

1. Introduction

In the past, robots were separated from humans by safety cages. This changes as robots enter our homes, drive our cars, and expand into health care, entertainment, and emotionally sensitive areas. Today, ethics, privacy, agency, and liability issues lie at the core of robotics and roboethics research.
In summary, roboethics is the branch of applied ethics concerned with:
  • the identification and analysis of the ethical issues arising in present and future robotic applications. Such issues include loss of privacy, reduction of human–robot interaction distance, collection of data by personal robots or by drones that enter our houses, safety issues connected with the robots’ operation both in the workplace and home environment, liability issues with regard to who is responsible for faults or mistakes made during the operation of the robot, etc. [1].
  • the formulation and establishment of proper principles for facing the above ethical issues. These principles are primarily based on general ethics and technoethics principles, and include principles specific to robot use [2].
  • the ethical/philosophical dimensions of ascribing agency and rights to mental robots [3].
  • the definition and introduction of rules prohibiting bad actions and rules encouraging other desired actions (for example, ethical ramifications of robots used for elder care, related to loss of privacy) [4].
  • the enhancement of human well-being through the respect of the ethical principles and codes which include the human rights principles [5].
  • the improvement of the quality of life and the protection of the physical integrity of people with special needs (PwSN) through proper robotic aids and service robots (autonomous wheelchairs, orthotic and prosthetic devices, etc.) at a cost affordable to the average user [6,7].
  • the study of the ethical aspects and consequences of putting humans “out of the loop” in the firing decisions of war/lethal robots [1,8].

2. Summary of the Special Issue

This Special Issue comprises six research papers and one review paper.
The review paper by Spyros G. Tzafestas, entitled Roboethics: Fundamental Concepts and Future Prospects, provides an introduction to roboethics that starts with the question “What is roboethics?”, followed by a discussion of the methodologies of roboethics and a quick look at its branches in general. It then presents an outline of the major branches of roboethics, namely medical roboethics, assistive roboethics, social robot ethics, war roboethics, driverless car ethics, and cyborg ethics. The paper concludes with a discussion of the future prospects of robotics and roboethics, including major opinions on the philosophical issue of whether future robots should have rights.
The paper by João Silva Sequeira, entitled Can Social Robots Make Societies More Human?, is concerned with the integration of social robots into real social environments, which may dehumanize some of the roles currently played by humans. The author claims, and justifies the claim, that social robots can be used to smooth human behaviors, i.e., to make humans more human, thus preventing dehumanization. Another issue studied in the paper is the “fearing the unknown” question. The author asserts that a lack of knowledge can still generate expectations, e.g., concerning motion, and that a social robot must exhibit its intelligence convincingly, thereby establishing trust that there are no hidden meanings, feelings, or intentions.
The paper by Jeff Buechner, entitled Two Philosophical Problems of Roboethics, indirectly addresses the roboethics question of how human moral reasoning can be computationally modeled in robo-agents, and the question of whether it should be. To this end, the author poses two new philosophical problems raised by Kripke’s argument against functionalism, extended to robot agents. These problems show that the above questions about computational modeling need to be reformulated, and they may also show that established work in roboethics needs to be reformulated. The paper studies various ethical and legal roboethics issues arising from the two philosophical problems, and concludes that if Kripke’s argument is sound, some aspects of roboethics research and development will be challenged.
The paper by Ugo Pagallo, entitled Vital, Sophia, and Co.: The Quest for the Legal Personhood of Robots, investigates the legal status of intelligent robot agents, particularly the issue of confusing the legal agenthood of these artificial agents with the status of legal personhood. The author proposes that policymakers think seriously about establishing new forms of accountability and liability for intelligent robot activities in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility that hinge on multiple accumulated actions of humans and robots and may lead to cases of impunity. The author points out that, given the current state of the art, none of today’s intelligent robots meets the requirements for granting full legal personhood status.
The paper by Sara Lumbreras, entitled Getting Ready for the Next Step: Merging Information Ethics and Roboethics: A Project in the Context of Marketing Ethics, discusses the merging of information ethics and roboethics, relating them to the well-established field of marketing ethics. The author notes that her intention is not to present the entire process of merging these two fields, but rather to discuss the need for this merger and provide a number of initial guidelines. These action guidelines focus on the requirement for transparency and the establishment of limits on vulnerable products and services, aimed at limiting the potentially harmful effects of human–machine interactions.
The paper by Raya A. Jones, entitled Engineering Cheerful Robots: An Ethical Consideration, is concerned with the ethical issues of human–robot symbiosis that might result in the engineering of human agents who, in Mills’ words, “will want to become a cheerful and willing robot”. The paper addresses meta-ethical implications at the intersection of robotics, ethics, psychology, and the social sciences, and discusses a series of questions, namely: “Can robots be agents of cultural transmission? Is a cultural shift an issue for roboethics? Should roboethics be an instrument of political/social engineering? How could biases of the technological imagination be avoided? Does technological determinism compromise the possibility of moral action?” These questions do not have straightforward yes or no answers, and they are related to Mills’ metaphor of the “cheerful robot”.
Finally, the paper by Herman T. Tavani, entitled Can Social Robots Qualify for Moral Consideration?, examines the controversial roboethics question of whether robots should be granted rights. This question is the subject of continuous debate, with strong disagreement on the criterion, or set of criteria, that a robot must satisfy to qualify for some level of moral or legal agency. In this paper, the author aims to show how the present debate about whether to grant rights to robots would benefit from the analysis and clarification of some key concepts and assumptions underlying that question. His central goal is to show why this question should be reframed by asking whether some kinds of social robots qualify for moral consideration as moral patients. The author argues that the answer to this question is yes, drawing insights from Hans Jonas’s work and studying five sub-questions, viz.: (i) the robot question; (ii) the rights question; (iii) the criterion question; (iv) the agency question; and (v) the rationale question.

3. Conclusions

This Special Issue contains a review paper on roboethics and its future prospects, and six research papers addressing the following crucial questions of robot ethics and philosophy: (i) Can sociorobots make societies more human? (ii) How can human reasoning be modeled in robot agents, and should this be done? (iii) What is the legal status of intelligent robots? (iv) How should information ethics and roboethics be merged to limit the harmful implications of human–robot interaction? (v) What are the ethical issues in human–robot symbiosis that might result in the engineering of human agents who want to be cheerful and willing robots? and (vi) Should sociorobots be granted moral rights as moral patients?

Acknowledgments

The guest editor would like to express his sincere thanks to the authors for their high-level contributions and to the reviewers for their effort in providing deep and constructive reviews.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Tzafestas, S.G. Roboethics: A Navigating Overview; Springer: Berlin, Germany; Dordrecht, The Netherlands, 2015.
  2. Tavani, H.T. Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing; John Wiley: Hoboken, NJ, USA, 2016.
  3. Gunkel, D.J. Robot Rights; The MIT Press: Cambridge, MA, USA, 2018.
  4. Calo, M.R.; Froomkin, A.M.; Kerr, I. Robot Law; Edward Elgar Publishing: Northampton, MA, USA, 2016.
  5. Tarshys, D.; Massué, J.P.; Gerin, G. The Human Rights: Ethical and Moral Dimensions of Healthcare; Council of Europe Publishing: Strasbourg, France, 1998.
  6. Alonso, I.G.; Fernández, M.; Maestre, J.M.; Fuente, M.D.P.A.G. Service Robotics within the Digital Home: Applications and Future Prospects; Springer: Berlin, Germany, 2011.
  7. Tzafestas, S.G. Sociorobot World: A Guided Tour for All; Springer: Berlin, Germany, 2016.
  8. Lin, P.; Bekey, G.; Abney, K. Robots in war: Issues of risk and ethics. In Ethics and Robotics; Capurro, R., Nagenborg, M., Eds.; AKA Verlag: Heidelberg, Germany, 2009.
