Article

Getting Ready for the Next Step: Merging Information Ethics and Roboethics—A Project in the Context of Marketing Ethics

Sara Lumbreras
Institute for Research in Technology, Universidad Pontificia Comillas, 28015 Madrid, Spain
Information 2018, 9(8), 195; https://doi.org/10.3390/info9080195
Submission received: 30 June 2018 / Revised: 30 July 2018 / Accepted: 31 July 2018 / Published: 1 August 2018
(This article belongs to the Special Issue ROBOETHICS)

Abstract

This article presents some pressing issues in roboethics, which lie at the frontier between roboethics and information ethics. It relates them to the well-established field of marketing ethics, stressing two main points: first, that human attention and willpower are limited and susceptible to being exploited; second, that the possibility of using consumer profiles considerably increases the possibility of manipulation. It presents interactions with robots as a particularly intense setting, in which the humanlike presence and the possibility of tailoring communications to the profile of the human target can be especially problematic. The paper concludes with some guidelines that could be useful in limiting the potentially harmful effects of human–robot interactions in the context of information ethics. These guidelines focus on the need for transparency, the establishment of limits (especially for sensitive products and services and for vulnerable collectives), and the support of healthy attention and willpower.

1. Introduction

Human beings interact in terms of information with other human beings, as well as with virtual entities such as computer programs and with robots. All humans, robots, and virtual entities can be understood as inforgs interacting in a common infosphere. The physical reality where human beings carry out their activities can be understood as a component contained within the infosphere, and robots interact at this physical level as well. Roboethics studies the ethical problems that arise from robots, such as whether robots pose a threat to humans, whether some uses of robots are problematic (such as in healthcare or war), and how robots should be designed so that they act ‘ethically’ [1]. Roboethics can be understood within the larger sphere of information ethics. Information ethics is “the branch of ethics that focuses on the relationship between the creation, organization, dissemination, and use of information, and the ethical standards and moral codes governing human conduct in society” [2]. It studies information as a resource, a product, or a target, and provides a critical framework for considering moral issues concerning informational privacy, moral agency in artificial agents, or other potential problems regarding information. We refer to the literature [3] for an outline of information ethics.
The number of existing information entities is growing rapidly. In addition, the interactions between humans and machines are becoming more frequent, more complex, and probably more intense. The most paradigmatic examples of this are service, assistive, and social robots. The interaction can be tailored to suit the individual preferences of the human, either expressed explicitly via a given configuration or implicitly, for instance, by assigning the user to a particular cluster. This customization can result in a better service, better suited to the particular needs of the client. However, it can also be used for manipulation techniques aimed at capturing higher sales or generating addictive behaviors. These issues, which are currently arising in the context of online interactions, can be anticipated to be much more intense in interactions with robots.
As an added factor, the amount of information amassed about a particular person is growing as well, and the possibility of trading data among companies creates the possibility of generating increasingly accurate profiles. These profiles can be used as a tool to steer behavior in the desired direction for commercial purposes. This is already an issue in online interactions, which new legislation such as the General Data Protection Regulation (GDPR) is attempting to tackle, albeit in a very limited manner, as will be discussed.
This article presents the need to merge information ethics and roboethics in order to consider these emerging issues in the interaction between humans and machines. It should be noted that the article does not intend to carry out the whole project of merging these two disciplines, but rather to propose the need for this merger and some initial guidelines. The paper is structured as follows. First, the origin of the issues, which can be traced back to marketing ethics, will be discussed. Then, the reasons for increased concern in the medium-term future will be presented. Last, some guidelines for action aimed at limiting the potentially harmful effects of human–machine interactions will be presented.

2. Attention Is Limited and Automatic Customization Is Powerful: A Case for Reviewing Marketing Ethics

The germ of these issues is already present at the core of marketing ethics, a discipline within business ethics with a history of over fifty years [4]. Marketing ethics studies the moral principles behind the operation and regulation of marketing. Some areas of marketing ethics (the ethics of advertising and promotion) overlap with media ethics. The effect of marketing techniques on consumer behavior has long been known. For instance, customers who view online recommendations spend on average twice as much on recommended products as customers who do not see the recommendations [5]. Companies will tend to use the means at their disposal to increase sales. This includes using all available data to know which marketing strategy will work best on a given individual—or, alternatively, selecting which individual has the highest probability of making a purchase and bidding to show an advertisement to that person specifically. Many of these strategies are based on appealing to the most irrational part of our brains, system 1 in Kahneman’s terms, which processes information in a fast, unconscious way. System 1 is energy-efficient, requires little effort, and is quick, but it is prone to biases and errors. By contrast, system 2 is an effortful, slow, and controlled way of thinking [6]. Most marketing strategies take advantage of some of the shortcuts used by system 1, which rely on what are known as cognitive biases—which also overlap with logical fallacies. For instance, showing a celebrity-endorsed product recommendation appeals to the authority bias, whereby a belief displayed by an authority figure is more likely to be perceived as true. Another example is the bandwagon bias, whereby people do something primarily because other people are doing it, regardless of their own beliefs, which they may ignore or override. The bandwagon effect has wide implications, particularly in politics and consumer behavior. An example of this would be the abovementioned consumer recommendations, where products that get the highest recommendations are more likely to be chosen. Marketing science keeps exploring ways of using these biases to its advantage.
In parallel, it seems to be the case that the capacity to switch to system 2 is hindered by the time spent in system 1. The capacity for attention, as for self-regulation (willpower), decreases with each failed attempt to use it. Some recent studies have linked impulsivity as a personality trait to attention deficits. The use of information and communication technologies (ICTs) in particular has been linked to a rise in attention deficit disorder (ADD) [7], although this link has not yet been proven. In this context, the case of television (TV) is particularly interesting, with TV exposure considerably increasing the probability of children suffering from ADD [8] as they grow older. Multitasking, an activity that decreases the opportunity for reflection, has been linked to ADD as well. Although careful scrutiny is needed, ICTs could be responsible for much of the recent rise in ADD diagnoses.
The ability to perform rational decision-making is linked not only to attention but also to self-regulation. Recent evidence has shown that self-regulation (also known as willpower) is a highly adaptive trait that enables humans to override and alter their innate responses. Self-regulation seems to consume a limited resource, which could be understood as energy or strength. Therefore, when self-regulation has been exercised for a given period of time, there is less of the resource to be spent on the following decision. This becomes manifest, for instance, in a time-of-day effect: it seems to be more difficult to maintain attention or exercise self-regulation later in the day than in the morning. Exercises in self-regulation can mitigate this effect, producing broad improvements in the ability to self-regulate. Conversely, a lack of restraint seems to lead to a general depletion of the resources for self-regulation. For a comprehensive review of recent findings, we refer the reader to the literature [9].
The constant exposure to stimuli that deplete attention and self-regulation resources results in an increase in behavioral addictions. Internet addiction has been documented for several years [10]. Users feel a compulsion to be online, which can have pernicious consequences for their professional and personal lives. Cellphone addiction is a similar disorder [11] that affects an increasingly large share of the population. According to the media-analytics company ComScore, the average American spends 2 h 51 min per day on their smartphone, 36% more than they spend eating and drinking. Social media are the apps with the highest use, accounting for around 60% of the total time spent on the internet. Another derivative of online interactions is addiction to online shopping [12]. It is now easy for businesses in search of clients to nudge them toward their shopping carts, even if they are just spending some time on social media: showing a previous search result in the corner of the screen can grab attention and trigger potential cravings. An estimated 5% to 8% of Americans are thought to suffer from shopping addiction, and online activities seem to exacerbate the potential triggers. As reported in the literature [13], when popular shows are released on platforms such as Netflix, a relevant share of viewers indulge in binge-watching, sometimes devouring a whole series in just a day, which is generating discussion on whether binge-watching qualifies as a behavioral addiction. Apart from all these relatively new addictions, the online context has facilitated the development of a large industry around online betting that preys on, and creates, gambling addicts.
Overall, it seems that the number of addictions facilitated by ICTs could be increasing considerably [14] (other factors, such as media focus or changes in diagnosis, might be responsible for this increase). In general, the ability to focus and to exercise self-restraint is hindered by the ubiquity of online interactions and by the possibility of tailoring them to the consumer’s specific tastes.
Attention is a currency: social media, as well as other sites, benefit from the attention of their users, which can be used to generate a commercial benefit. Many current online business models are based on individual spending, either directly or indirectly, with attention as a mediator. The latter case uses measures such as clicks, views, or viewing time—most importantly, on seemingly free entertainment sites—or, in some cases, emotional reactions. The artificial intelligence (AI) deployed by companies is getting particularly efficient at capturing attention and at manipulating emotion. This manipulation of emotions was strikingly demonstrated by Facebook [15]. In 2012, 689,000 Facebook users were unknowingly part of an experiment over a period of one week. According to the report of the company itself, “the experiment manipulated the extent to which people were exposed to emotional expressions in their news feed”. The experiment showed that users who had fewer negative stories in their news feed were less likely to write a negative post, and vice versa. The company wanted to investigate the extent to which emotional contagion (that is, the propensity for an individual to feel emotions similar to those of his or her connections) was real. In addition, it was concerned that a general negative tone in their feeds would lead users to spend less time on the social network. Both points were confirmed by means of the experiment. The Facebook–Cambridge Analytica data scandal, a little later in 2014, involved the collection of information on up to 87 million Facebook users, which was allegedly used to steer public opinion before upcoming elections [16].

3. The Impact of Customization

As demonstrated in the examples above, ICTs open the doors to a greater influence on consumer behavior. The possibility of harvesting large amounts of information about individuals makes it possible to tailor the manipulation strategy to win their attention or to nudge their decisions.
Showing relevant advertisements to a given potential customer is only one way of increasing sales. The possibility of targeting vast numbers of potential clients results in a much larger base, so that products that would not be profitable in a local context become good sellers in a global environment. This makes it possible to offer products that appeal to a very low percentage of the population.
This is exacerbated by the application of customization technologies. One particularly interesting example of the impact of customization is the creation of bespoke products that cater to the precise tastes of the customer. Mass-customization techniques such as 3D printing make it possible to create designs that bring together differing preferences of the user into a product that was previously nonexistent. For instance, an ad could show a garment that combines the most flattering shape, a favorite color, and a pattern based on some previously bought designs. The item will never be manufactured unless it is bought online. Another example is the automatic creation of entertainment content. AI is getting better and better at composing music, and it will soon be possible to create tunes to accompany the specific emotional state of the listener [17]. These compositions will be created in the moment, based on an accumulated profile of the preferences of the listener and dynamic information about his emotional reactions to the compositions. A less sophisticated instance of this is the automatic creation of videos for children on YouTube Kids. Based on the most searched keywords, automatic engines create videos that group these keywords together by mashing up the content of existing videos. Sometimes, however, this process leads to unexpected consequences—for instance, with inappropriate content inadvertently being added by the engine [18].
The possibility of automatically customizing content to the potential customer’s preferences can result in an increased attractiveness of the product. On the one hand, the product is more relevant and useful for the customer. On the other, it might be too attractive to resist, preying on an already weakened self-regulation. It should be noted that not only money is at stake, but also attention and self-regulation, which are the mediators for money. Entertainment sites will, for instance, attempt to hold attention for the longest period possible by concatenating offers that seem to appeal to the viewer. If the strategy works too well, problems of addiction can arise.
All these issues, which could be summarized as the possibility of manipulation and addiction aggravated by technologically enabled customization, will only grow as interactions with robots become widespread.

4. Robots Multiply Concerns

Robots could soon be interacting with humans in a pervasive way. From the full spectrum of robotics applications, the ones with the highest potential for interaction are social robots (which interact and communicate with humans or other autonomous physical agents by following the social behaviors and rules attached to their role, and therefore have these interactions at the root of their functionality), service robots (which are devised to assist human beings, typically by performing a job that is dirty, dull, distant, dangerous, or repetitive, including household chores), and assistive robots (which have the function of helping people in convalescence, rehabilitation, training, and education). The interactions with robots could be more intense than the ones online (it should be mentioned that there is a substantial debate over whether interactions with physically embodied robots are more or less intense than those with virtual agents).
The main reason for this is that, having a physical presence, they can communicate in a more emotionally meaningful way—through voice, gestures, or touch. In addition, these robots will work in situations where the interaction is especially important.
At a first stage, service robots will limit their activities to their basic job (i.e., household chores or serving food at a restaurant). However, very soon, the interaction component of these robots will take on a more important role. For instance, a waiter robot could try to steer the behavior of clients in the direction desired by the restaurant (probably towards a more expensive choice or a faster turnover). Its selling tactics will probably not work in the direction of encouraging healthy decisions or austerity, but rather in the interest of the company. Much like the chocolate bars that nowadays tempt every customer at the checkout counter, robot recommendation engines will haunt clients, waving their particular “weak-spot” treats under their noses.
It should be considered that, in the context of the Internet of Things (IoT), it will be possible to register data on each and every interaction with every customer, at a much more fundamental level than online software can. It will be possible to record each word, scrutinize every movement and gesture, or study the inflections of the voice. This could be used to infer the physical and emotional state of the potential customer; recommendation algorithms will be able to bring chocolate ice cream to the depressed teenage boy or offer a war videogame to a seemingly angry, frustrated client. They could also offer the twentieth identical sweater to a shopping addict, or another chocolate bar to a person struggling with obesity.
In addition, the fact that robots share the physical space with human beings strengthens the possibility of the robot initiating contact. In the same way that salespeople sometimes walk behind customers frantically trying to push them into making a purchase, robots can follow customers. The main difference is that the marketing strategy of robots could be perfectly engineered for maximum success—and tailored to the specific characteristics of the client at hand. In addition, a robot does not suffer from exhaustion. There is no limit to how much a robot salesperson can insist, and it will insist in such a way as to minimize any complaints from the customers—then again, the propensity to complain can be modeled for each client in the same way that the propensity to make a purchase was. This should be viewed as a much more intense context than the online one—and as such, we should be prepared to limit the interactions.
We should also consider that the expected developments in AI will make interactions with robots much more humanlike. Some robots can already identify emotion by looking at the relative positions of features on the faces of the humans around them and can mimic them in order to appear empathic. When the human is smiling, the robot will speak in a cheerful manner, while if the human displays turned-down corners of the mouth, it will use a softer voice [19].
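As a rough, purely illustrative sketch of the kind of mapping described above, the following Python snippet classifies a smile or frown from hypothetical mouth-landmark coordinates and picks a matching speech style. The landmark inputs, expression labels, and style table are assumptions for illustration only, not the method used by the robot in [19].

```python
# Hypothetical sketch: mapping a detected facial expression to a speech style.
# Landmark values, labels, and the style table are illustrative assumptions.

def mouth_corner_expression(left_corner_y: float, right_corner_y: float,
                            mouth_center_y: float) -> str:
    """Classify a smile or frown from the vertical position of the mouth
    corners relative to the mouth center (smaller y = higher in the image)."""
    avg_corner_y = (left_corner_y + right_corner_y) / 2
    if avg_corner_y < mouth_center_y:   # corners raised -> smile
        return "happy"
    if avg_corner_y > mouth_center_y:   # corners turned down -> frown
        return "sad"
    return "neutral"

SPEECH_STYLE = {
    "happy": {"pitch": "high", "tempo": "fast", "tone": "cheerful"},
    "sad": {"pitch": "low", "tempo": "slow", "tone": "soft"},
    "neutral": {"pitch": "medium", "tempo": "medium", "tone": "plain"},
}

expression = mouth_corner_expression(0.48, 0.47, 0.50)
print(expression, SPEECH_STYLE[expression])  # happy {'pitch': 'high', ...}
```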
The robot can use AI to generate messages in an immediate and personalized way. In some cases, these messages are relatively simple, such as spam, but in other cases, such as chatbots (algorithms designed to hold conversations), they are extremely interesting.
The first of these chatbots was probably Eliza, created in the 1960s by Joseph Weizenbaum as support for the diagnosis of mental illness. Later versions of this chatbot [20] are still running. Eliza did not have a physical embodiment; however, its limited interface was enough to be useful for its purpose. Eliza asks the patient questions in a very repetitive way, limiting itself to trying to gather more information. It is surprising to discover how, only through questions and more or less simple repetitions, the programming of the chatbot at least provokes in its human interlocutor the desire to share his or her deepest emotions, which can then be used for the purpose of diagnosis. According to its creator, Weizenbaum, Eliza could get some patients to open up in a way that he could not. He suspected that some of the barriers that held his patients back—for instance, the feeling of being judged—were not as intense with Eliza. This example is particularly interesting, as it was very early proof that robots can sometimes have advantages over their human counterparts in interactions.
Another interesting example is Lolita, a chatbot created by Spanish researchers, which holds conversations with users of social networks with the aim of identifying possible pedophiles [21]. The conversations are realistic enough to convince other users that the chatbot is a real young girl or boy. If a user gives responses that are deemed risky, the chatbot sends a warning to the local police, which can then investigate the suspect. This project shows another key element of nonhuman interactions: it is possible to gain access to a vast number of human targets. Of course, the investigations carried out by Lolita need to study the reactions of large numbers of potential criminals in order to filter out the few who will be monitored more closely. In the same way, robots can interact in the physical space with many different humans and can share information with other entities without a physical presence.
The fact that social interactions with robots can be personalized has also been shown in the context of chatbots. Among many other interesting examples, we can find the portal Chatbot4u [22], which makes available to users (upon payment) chatbots that emulate different characters, among them several celebrities from the world of television and cinema. Users enjoy having a conversation with an algorithm that takes on the personality of the celebrity of choice, or the personality preferred by the client in a conversational partner. In some cases (as sadly seems to be the case for a large part of newly developed technology), the chatbots are set to have a disposition for sexual conversations. This example shows two things. First, that the social component of human–robot interactions can be so important that it justifies the existence of some robots for this purpose alone. Second, that even a very crude approximation to customizing the interaction to the preferences of the user is already attractive, and as such, users are already willing to pay for it.
Although the chats in Chatbot4u are easily identifiable as something other than a human conversation, social robots could soon be almost humanlike. Some authors estimate that in the coming years, chatbots could get to a point where they can pass the Turing test (that is, manage to convince a human interlocutor that he is talking to another person). There are annual competitions in which the programmers’ creations compete, such as the ChatterBox Challenge [23], the Loebner Prize (which has several levels, the highest of which includes the artificial generation of a 3D interlocutor indistinguishable from a human being), or the Kurzweil/Kapor test, in which an extended conversation is maintained for two hours (this last one pits the position of the transhumanist Kurzweil, who holds that machines will soon pass the test, against that of Mitchell Kapor, founder of the Electronic Frontier Foundation, who states otherwise). Although news reports have claimed that the Turing test has already been passed thanks to a chatbot that emulated a teenager, Eugene Goostman, we are still far from it [24].
An interesting example was the surprising advertising campaign for the movie Ex Machina [25], which narrates the process of evaluating the first android that passes the Turing test. The advertising for the film did not look like publicity. Instead, a false social profile was created with photos of the actress who played the leading android, and a chatbot was programmed to engage in conversations with the users of the social network Tinder, whose main objective is to pair up its users on dates. The chatbot was programmed to emulate a young and attractive woman. Once a user expressed his desire to meet the chatbot, he was referred to the website of the film.
The advances in AI that allow the development of increasingly realistic conversations are paralleled by the generation of visuals to support them. It is currently easy to change the features of one actor into another’s, and it will soon be possible to generate a 3D version of them—a real version of the highest Loebner level.
Recently, Google unveiled a digital assistant that was able to perform some chores over the telephone, such as booking a table at a restaurant or cancelling a hairdressing appointment [26]. In the examples of use provided by the company, the digital assistant interacted with humans without revealing that it was not in fact human. In many cases, indeed, humans will not know whether they are interacting with another human or with a robot. These realistic forms of communication should be treated carefully—and much more so when considered together with the possibility of tailoring the features of the interaction to the particular profile of the customer. For instance, particularly empathic humans could be tricked into being compassionate to a seemingly distressed caller.
Assistive robots occupy a particularly special position in this respect. The “personality” exhibited by an assistive robot can have dramatic effects on the human it cares for. In the same way as a human caregiver, it is in a position to encourage behaviors that promote the independence of its client—but also to take the client out of his or her comfort zone. For instance, a robot can encourage a senior who spends most of the time sitting down to walk for some time every day. Depending on how intensely it pushes, the strategy will be more or less effective and have different effects on the wellbeing of the senior. It is desirable that this behavior be adjusted to take into account the particular state and personality of the client so that the robot can be perceived as kind and encouraging, but this tailoring opens the door to an array of possible problems.

5. Some Guidelines for Action

Until now, this article has reviewed the origins of some pressing issues in roboethics, which lie at the frontier between roboethics and information ethics. It has put them in the context of marketing ethics, stressing two main points: first, that human attention and self-regulation are limited and susceptible to being exploited; second, that tailoring the product or service to the particular client increases the opportunities for manipulation. It has then presented interactions with robots as a particularly intense setting, where the humanlike presence and, again, the possibility of tailoring communications to the profile of the human target can be especially problematic. Finally, this section presents some guidelines that could be useful in limiting the potentially harmful effects of human–robot interactions in the context of information ethics.
All these recommendations have, at their core, the idea of defining human beings as vulnerable, and therefore in need of protection from manipulative practices. This idea is not new; it arises, once again, from marketing ethics, where groups such as children or the elderly have long been recognized as especially vulnerable [27], and hence in need of protection. We should consider, for the purpose of our investigation, that there is nothing qualitatively different about adults that distinguishes them from children [28]. On the contrary, and as discussed above, adults have limited resources for attention and self-regulation. Protecting consumers should not be seen as paternalistic, but rather as a necessary step in a world where interactions with robots will become commonplace. What is more, some industries (namely, banking) have already implemented protective policies that have completely changed their activities. In a way, there is more at stake when dealing with interactions with robots—not only financial resources but also emotions are at play.

5.1. Transparency Is Key

A first, essential mandate would be to disclose when an interaction is automatic. If a call is performed by an automatic assistant (à la Google, as mentioned above), it should be clearly stated at the beginning of the call in a way that minimizes any possible misunderstanding. One ramification of this would be banning the common practice of giving names to automatic engines. Instead of introducing itself with a human name, the assistant should only be referred to as “assistant”, to reinforce its nature as a robot. It should be stressed that these algorithms are proprietary, so transparency is difficult to reconcile with their opacity.
The need for transparency has been thoroughly discussed in information ethics. This idea lies at the heart of Article 22 of the 2018 GDPR (General Data Protection Regulation), “Automated individual decision-making, including profiling”.
When a given product or service is selected or customized, the potential customer should be informed of the reasons why this selection or customization was made. Even if the algorithm that generates the offer is so complex that its internal mechanism is obscure, it should still be possible to show the inputs that have the largest impact. At any point, the human should be able to know how he is perceived by the machine in simple terms: “because you did X or are identified as X, we offer you Y”. The GDPR forces some disclosure of this information, but in such a general way that there are serious doubts that it will serve its purpose [29].
The main reasons for a particular offer (or a price for a service) should be disclosed in a meaningful way. That is, every company making use of customization in its activity should, if its algorithms are obscure, develop a parallel way of providing a reasonably accurate but easily comprehensible explanation of its actions. For instance, if a user is identified as insensitive to price (and hence offered a more expensive choice), he should know that he has been classified in that way. This implies that the user will only accept a higher price if he is receiving something else in return.
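As a minimal sketch of what such a parallel explanation layer could look like, the following Python snippet builds a plain-language “because you are identified as X, we offer you Y” statement from the profile factors with the largest weights. The factor names, weights, and wording are hypothetical illustrations, not any company’s actual disclosure mechanism.

```python
# Illustrative sketch only: feature names, weights, and phrasing are
# hypothetical assumptions, not a real recommendation system's output.

def explain_offer(profile_factors: dict[str, float], offer: str, top_n: int = 2) -> str:
    """Return a plain-language explanation built from the profile factors
    that had the largest (absolute) impact on the offer."""
    ranked = sorted(profile_factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = " and ".join(name for name, _ in ranked[:top_n])
    return f"Because you are identified as {reasons}, we offer you {offer}."

factors = {
    "price-insensitive": 0.62,            # inferred from past purchases (hypothetical)
    "a frequent evening shopper": 0.31,
    "interested in running gear": 0.12,
}
print(explain_offer(factors, "the premium running shoes at full price"))
```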
The need for transparency applies not only to products, but also to the algorithms in charge of adjusting the behavior of robots in the service sector, where the personal preferences discovered by the learning algorithms would be particularly interesting.

5.2. A Reset Button

The GDPR mandates the possibility of switching to browsing with a blank slate, without using any of the data previously collected. This option should always be available to humans in their interactions with robots. This right requires transparency to be operative—when the client does not like the way he has been profiled, he must be able to choose to present himself anonymously, “browsing incognito”.
This would also be necessary, in the online context, when checking the news or looking for information. It should be easy and immediate to “unfilter reality”, so that the profile created for a user does not unduly influence her worldview.
In addition, this “reset” experience should sometimes be presented even without being requested, for a limited time, allowing the human to compare the customized experience with the original one and to assess how much it was changed specifically for him.

5.3. Hard Limits

While all humans are vulnerable, some of us are even more so. Casinos do not admit gambling addicts, and bars do not serve alcoholics. Given the increase in addiction problems, hard limits should be imposed when approaching individuals who are already struggling with addiction. Emerging concerns such as online shopping or social media addiction should be treated extremely carefully.
Children are especially exposed in social interactions—they have been shown, for instance, to be unable to distinguish advertising from content even at the age of 11 [27]. The use of social robots is especially problematic in this context, and this alone might be enough to justify a ban on social robots aimed at children.
In addition to establishing hard limits when there are specific health concerns for an individual, the person herself should be able to establish any hard limits she deems appropriate. For instance, she might choose never to be approached while shopping, or never to be offered unhealthy food. These settings should be easily accessible and modifiable by all users.
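A minimal sketch of how such user-defined hard limits could be represented and checked before a robot initiates an offer is given below. The setting names and product categories are hypothetical examples chosen for illustration, not a proposed standard.

```python
# Hypothetical sketch of user-defined hard limits checked before an offer.
from dataclasses import dataclass, field

@dataclass
class HardLimits:
    never_approach_while_shopping: bool = False
    banned_categories: set[str] = field(default_factory=set)

    def allows(self, context: str, product_category: str) -> bool:
        """Return True only if the robot may make this offer."""
        if self.never_approach_while_shopping and context == "shopping":
            return False
        if product_category in self.banned_categories:
            return False
        return True

limits = HardLimits(banned_categories={"unhealthy food", "gambling"})
print(limits.allows("shopping", "clothing"))        # True
print(limits.allows("shopping", "unhealthy food"))  # False (user-banned category)
```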

5.4. Soft Limits: Not All Interactions Are the Same

Not all interactions or products are the same, and no two humans are the same either. Each sector should study the need for establishing thresholds for client vulnerability and product risk. The financial services industry can serve as a leading example for this.
The Markets in Financial Instruments Directive 2004/39/EC (known as “MiFID”) is a European Union law that provides harmonized regulation for investment services. The directive’s main objectives are to increase competition and consumer protection in investment services. As part of its requirements, MiFID requires firms to categorize clients as “eligible counterparties”, professional clients, or retail clients (these have increasing levels of protection). It forces the establishment of clear procedures to assess client suitability for each type of investment product, which is known as an appropriateness test. In addition, when the firm recommends a particular investment, it should make sure that the recommendation is the best possible one for the client, taking her best investment interests into consideration, as well as her protection level.
One of the factors that should be considered key in the assessment of risk is the involvement of emotion. This is particularly relevant in the cases of social robots and assistive robots. There should be limits to the extent to which emotions can be influenced—or even assessed.
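Translated loosely to human–robot interactions, an appropriateness test of this kind could look like the following sketch, which gates an offer on the client’s protection level, the product’s risk, and the degree of emotional involvement. The protection categories echo MiFID’s, but the numeric scores and the threshold rule are purely illustrative assumptions.

```python
# Sketch of a MiFID-style appropriateness check, loosely adapted to
# human-robot interactions. Scores and thresholds are illustrative only.

PROTECTION_LEVEL = {"eligible counterparty": 0, "professional": 1, "retail": 2}

def is_appropriate(client_category: str, product_risk: int,
                   emotional_involvement: int) -> bool:
    """Allow an offer only when product risk and emotional involvement stay
    below a threshold that shrinks as the client's protection level grows."""
    protection = PROTECTION_LEVEL[client_category]
    max_allowed = 3 - protection  # retail clients tolerate the least risk
    return product_risk <= max_allowed and emotional_involvement <= max_allowed

print(is_appropriate("retail", product_risk=2, emotional_involvement=1))        # False
print(is_appropriate("professional", product_risk=2, emotional_involvement=1))  # True
```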

5.5. Help Attention and Self-Regulation

We should be careful with attention and self-regulation, given that they are limited and subject to deterioration. Robots can be of support in ways such as the following:
Suppress distractions. Robots should not generate unnecessary distractions. On the contrary, AI should, in the coming years, work to optimize our communications, filtering and managing information so that the “age of distraction” we are currently experiencing becomes a thing of the past.
Limit immediate decisions. Impulsivity hinders good decision-making. In some cases, it would be interesting to disable the possibility of making impulse decisions. Requesting a confirmation some time afterwards could be equally useful.
Limit exposure. Interactions with robots should be limited in time, especially for social robots and service robots.
Transparency again. When in a position where self-regulation is weakened (for instance, with low blood glucose or late at night [30]), the human should know about his state so that he can take any appropriate measures.

6. Conclusions

Interactions with robots will, in the next few decades, become increasingly frequent and intense. These interactions will have a physical component, but the information component will grow in importance. This illustrates the need for integrating some of the concerns of information ethics into roboethics.
This paper has tried to establish a context for examining some pressing issues by relating them to the well-established field of marketing ethics, stressing two main points. First, that human attention and self-regulation are limited. This creates the potential for manipulation even of healthy adults: their existing self-regulation and capacity to focus can be damaged or temporarily depleted relatively easily. This should lead to the establishment of regulation aimed at protecting the individual.
Second, the possibility of customizing products, services, or marketing strategies, as well as the ubiquity of the interaction, has meant a surge in existing addictions, as well as the development of some new ones (such as cell phone or online shopping addictions). The physical reality of robots makes human–robot interactions particularly problematic and more concerning in this respect. Moreover, communication is becoming increasingly humanlike and tailored to the target client, which is especially important for assistive and social robots.
I conclude with some guidelines that could be useful in limiting the potentially harmful effects of human–robot interactions in the context of information ethics:
A need for transparency to know when an interaction is happening with a robot and the extent to which the interaction is based on a customer profile.
The existence of a “reset button” to avoid this customization at any time.
Imposing hard limits on especially sensitive products and services or vulnerable collectives.
Imposing soft limits based on classifications that, again, reflect the sensitivity of the product or service and the vulnerability of the client. These limits should be concerned specifically with the use of emotion in marketing strategies.
Processes to support a healthy attention and self-regulation should be established.
Robots are evolving quickly. We need to merge roboethics and information ethics if we want to be ready for the next step.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Lin, P.; Abney, K.; Bekey, G.A. Robot Ethics: The Ethical and Social Implications of Robotics; The MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  2. Froehlich, T.J. A Not-So-Brief Account of Current Information Ethics: The Ethics of Ignorance, Missing Information, Misinformation, Disinformation and Other Forms of Deception or Incompetence. BiD-Textos Universitaris de Biblioteconomia i Documentacio 2017, 39. [Google Scholar]
  3. Moor, J.H. What is computer ethics? Metaphilosophy 1985, 16, 266–275. [Google Scholar] [CrossRef]
  4. Schlegelmilch, B.B.; Öberseder, M. Half a century of marketing ethics: Shifting perspectives and emerging trends. J. Bus. Ethics 2010, 93, 1–19. [Google Scholar] [CrossRef]
  5. Senecal, S.; Nantel, J. The influence of online product recommendations on consumers’ online choices. J. Retail. 2004, 80, 159–169. [Google Scholar] [CrossRef]
  6. Kahneman, D.; Patrick, E. Thinking, Fast and Slow; Farrar, Straus and Giroux: New York, NY, USA, 2011. [Google Scholar]
  7. Christakis, D.A. Rethinking attention-deficit/hyperactivity disorder. JAMA Pediatr. 2016, 170, 109–110. [Google Scholar] [CrossRef] [PubMed]
  8. Hallowell, E.M. Overloaded circuits. Harv. Bus. Rev. 2005, 83, 54–62. [Google Scholar] [PubMed]
  9. Baumeister, R.F.; Gailliot, M.; DeWall, C.N.; Oaten, M. Self-regulation and personality: How interventions increase regulatory success, and how depletion moderates the effects of traits on behavior. J. Personal. 2006, 74, 1773–1802. [Google Scholar] [CrossRef] [PubMed]
  10. Mitchell, P. Internet addiction: Genuine diagnosis or not? Lancet 2000, 355, 632. [Google Scholar] [CrossRef]
  11. Jenaro, C.; Flores, N.; Gómez-Vela, M.; González-Gil, F.; Caballo, C. Problematic internet and cell-phone use: Psychological, behavioral, and health correlates. Addict. Res. Theory 2007, 15, 309–320. [Google Scholar] [CrossRef]
  12. Rose, S.; Dhandayudham, A. Towards an understanding of Internet-based problem shopping behaviour: The concept of online shopping addiction and its proposed predictors. J. Behav. Addict. 2014, 3, 83–89. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Matrix, S. The Netflix effect: Teens, binge watching, and on-demand digital media trends. Jeun. Young People Texts Cult. 2014, 6, 119–138. [Google Scholar] [CrossRef]
  14. Carbonell, X.; Guardiola, E.; Beranuy, M.; Bellés, A. A bibliometric analysis of the scientific literature on Internet, video games, and cell phone addiction. J. Med. Libr. Assoc. 2009, 97, 102–107. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Booth, R. Facebook reveals news feed experiment to control emotions. Guardian 2014, 30, 2014. [Google Scholar]
  16. Cadwalladr, C.; Graham-Harrison, E. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian, 17 March 2018. [Google Scholar]
  17. Fernández, J.D.; Vico, F. AI methods in algorithmic composition: A comprehensive survey. J. Artif. Intell. Res. 2013, 48, 513–582. [Google Scholar] [CrossRef] [Green Version]
  18. Barr, A. Google’s YouTube Kids App Criticized for ‘Inappropriate Content’. Wall Street Journal, 19 May 2015. [Google Scholar]
  19. Chella, A.; Pilato, G.; Sorbello, R.; Vassallo, G.; Cinquegrani, F.; Anzalone, S.M. An emphatic humanoid robot with emotional latent semantic behavior. In Proceedings of the International Conference on Simulation, Modeling, and Programming for Autonomous Robots, Venice, Italy, 3–6 November 2008; pp. 234–245. [Google Scholar]
  20. Eliza. Software. Available online: http://nlp-addiction.com/eliza/ (accessed on 1 August 2018).
  21. Laorden, C.; Galán-García, P.; Santos, I.; Sanz, B.; Hidalgo, J.M.G.; Bringas, P.G. Negobot: A conversational agent based on game theory for the detection of paedophile behaviour. In International Joint Conference CISIS’12-ICEUTE’12-SOCO’12 Special Sessions; Springer: Berlin/Heidelberg, Germany, 2013; pp. 261–270. [Google Scholar]
  22. Chatbot4u. Software. Available online: http://www.chatbot4u.com/en/chatbots (accessed on 1 August 2018).
  23. Vallverdú, J.; Shah, H.; Casacuberta, D. Chatterbox challenge as a test-bed for synthetic emotions. In Creating Synthetic Emotions through Technological and Robotic Advancements; IGI Global: Hershey, PA, USA, 2010. [Google Scholar]
  24. Aaronson, S. My Conversation with “Eugene Goostman” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test. Goodreads, 9 June 2014. [Google Scholar]
  25. Ex Machina. Directed by Alex Garland. Universal Pictures. 2015. Available online: https://en.wikipedia.org/wiki/Ex_Machina_(film) (accessed on 1 August 2018).
  26. Saddler, H.J.; Piercy, A.T.; Weinberg, G.L.; Booker, S.L. Intelligent Automated Assistant. U.S. Patent 15/385,606, 29 March 2018. [Google Scholar]
  27. Nairn, A.; Dew, A. Pop-ups, pop-unders, banners and buttons: The ethics of online advertising to primary school children. J. Direct Data Digit. Mark. Pract. 2007, 9, 30–46. [Google Scholar] [CrossRef] [Green Version]
  28. Ambler, T. Who’s messing with whose mind? Debating the Nairn and Fine argument. Int. J. Advert. 2008, 27, 885–895. [Google Scholar] [CrossRef]
  29. Wachter, S.; Mittelstadt, B.; Floridi, L. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int. Data Priv. Law 2017, 7, 76–99. [Google Scholar] [CrossRef]
  30. Gailliot, M.T.; Baumeister, R.F. The physiology of willpower: Linking blood glucose to self-control. In Self-Regulation and Self-Control; Routledge: Abingdon, UK, 2018; pp. 137–180. [Google Scholar]
