2. Attention Is Limited and Automatic Customization Is Powerful: A Case for Reviewing Marketing Ethics
The germ of these issues is already present at the core of marketing ethics, a discipline within business ethics with a history of over fifty years [4]. Marketing ethics studies the moral principles behind the operation and regulation of marketing. Some areas of marketing ethics (the ethics of advertising and promotion) overlap with media ethics. The effect of marketing techniques on consumer behavior has long been known. For instance, customers who view online recommendations spend, on average, twice as much on recommended products as customers who do not see the recommendations [5]. Companies tend to use the means at their disposal to increase sales. This includes using all available data to determine which marketing strategy will work best on a given individual or, alternatively, to select which individual has the highest probability of making a purchase and bid to show an advertisement specifically to that person. Many of these strategies are based on appealing to the most irrational part of our brains, system 1 in Kahneman's terms, which processes information in a fast, unconscious way. System 1 is energy-efficient, requires little effort, and is quick, but it is prone to biases and errors. By contrast, system 2 is an effortful, slow, and controlled way of thinking [6]. Most marketing strategies take advantage of some of the shortcuts used by system 1, which rely on what are known as cognitive biases, which also overlap with logical fallacies. For instance, showing a celebrity-endorsed product recommendation appeals to the authority bias, whereby a belief displayed by an authority figure is more likely to be perceived as true. Another example is the bandwagon bias, whereby people do something primarily because other people are doing it, regardless of their own beliefs, which they may ignore or override. The bandwagon effect has wide implications, particularly in politics and consumer behavior. An example of this would be the abovementioned consumer recommendations, where the products with the highest recommendations are more likely to be chosen. Marketing science keeps exploring ways of using these biases to its advantage.
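To make this targeting logic concrete, here is a minimal Python sketch of how a seller might score individuals by an estimated purchase probability and bid only on the most susceptible ones. Everything in it (the UserProfile fields, the weights, the threshold) is an invented illustration, not a reference to any real ad-platform API.

```python
# Hypothetical sketch: selecting which individuals to bid on, based on an
# estimated probability of purchase. All data and thresholds are invented.

from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    pages_viewed: int          # recent browsing intensity
    past_purchases: int        # loyalty signal
    clicked_recommendation: bool

def estimate_purchase_probability(user: UserProfile) -> float:
    """Toy propensity model: a weighted sum capped at 1.0."""
    score = (0.02 * user.pages_viewed
             + 0.10 * user.past_purchases
             + (0.25 if user.clicked_recommendation else 0.0))
    return min(score, 1.0)

def select_targets(users: list[UserProfile], threshold: float = 0.5) -> list[str]:
    """Bid to show the advertisement only to the most likely buyers."""
    return [u.user_id for u in users
            if estimate_purchase_probability(u) >= threshold]

if __name__ == "__main__":
    users = [
        UserProfile("alice", pages_viewed=30, past_purchases=2, clicked_recommendation=True),
        UserProfile("bob", pages_viewed=3, past_purchases=0, clicked_recommendation=False),
    ]
    print(select_targets(users))  # ['alice']
```

In a real pipeline, the hand-tuned weights would be replaced by a trained propensity model, but the ethical point stands: the selection is optimized for the seller's benefit, not the buyer's.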
In parallel, it seems that the capacity to switch to system 2 is hindered by the time spent in system 1. The capacity for attention, as for self-regulation (willpower), decreases with each failed attempt to use it. Some recent studies have linked impulsivity, as a personality trait, to attention deficits. The use of information and communication technologies (ICTs) in particular has been linked to a rise in attention deficit disorder (ADD) [7], although this link has not yet been proven. In this context, the case of television (TV) is particularly interesting, with TV exposure considerably increasing the probability of children suffering from ADD when they grow older [8]. Multitasking, an activity that decreases the opportunity for reflection, has been linked to ADD as well. Although careful scrutiny is needed, ICTs could be responsible for much of the recent rise in ADD diagnoses.
The ability to perform rational decision-making is linked not only to attention but also to self-regulation. Recent evidence has shown that self-regulation (also known as willpower) is a highly adaptive trait that enables humans to override and alter their innate responses. Self-regulation seems to consume a limited resource, which can be understood as energy or strength. Therefore, when self-regulation has been exercised for a given period of time, there is less of the resource left to spend on the following decision. This becomes manifest, for instance, in a time-of-day effect; it seems to be more difficult to maintain attention or exercise self-regulation later in the day than in the morning. Exercises in self-regulation can mitigate this effect, producing broad improvements in the ability to self-regulate. Conversely, a lack of restraint seems to lead to a general depletion of the resources for self-regulation. For a comprehensive review of recent findings, we refer the reader to the literature [9].
The constant exposure to stimuli that deplete attention and self-regulation resources results in an increase in behavioural addictions. Addiction to internet use has been documented for several years [10]. Users feel a compulsion to be online, which can have pernicious consequences for their professional and personal lives. Cellphone addiction is a similar disorder [11] that affects an increasingly large share of the population. According to the media-analytics company ComScore, the average American spends 2 h 51 min per day on their smartphone, 36% more time than they spend eating and drinking. Social media are the most heavily used apps, accounting for around 60% of the total time spent on the internet. Another derivative of online interactions is the addiction to online shopping [12]. It is now easy for businesses in search of clients to nudge them toward their shopping carts, even if they are just spending some time on social media; a previous search result shown in the corner of the screen can grab their attention and trigger latent cravings. An estimated 5% to 8% of Americans are thought to suffer from shopping addiction, and online activities seem to exacerbate the potential triggers. As reported in the literature [13], when popular shows are released on platforms such as Netflix, a relevant share of viewers indulge in binge-watching, sometimes devouring a whole series in just a day, which has generated some discussion on whether binge-watching qualifies as a behavioural addiction. Apart from all these relatively new addictions, the online context has facilitated the development of a large industry around online betting that preys on, and creates, gambling addicts.
Overall, it seems that the number of addictions facilitated by ICTs could be increasing considerably [14] (other factors, such as media focus or changes in diagnosis, might be responsible for this increase). In general, the ability to focus and to exercise self-restraint is hindered by the ubiquity of online interactions and by the possibility of tailoring them to the consumer's specific tastes.
Attention is a currency: social media, as well as other sites, benefit from the attention of their users, which can be used to generate a commercial benefit. Many of the current online business models are based on individual monetary expenditure, either directly or indirectly, with attention as a mediator. The latter case uses measures such as clicks, views, or viewing time (very importantly, in seemingly free entertainment sites) or, in some cases, emotional reactions. The artificial intelligence (AI) deployed by companies is becoming particularly efficient at capturing attention and at manipulating emotion. This manipulation of emotions was strikingly demonstrated by Facebook [15]. In 2012, 689,000 Facebook users were unknowingly part of an experiment over a period of one week. According to the report of the company itself, “the experiment manipulated the extent to which people were exposed to emotional expressions in their news feed”. The experiment showed that users who had fewer negative stories in their news feed were less likely to write a negative post, and vice versa. The company wanted to investigate the extent to which emotional contagion (that is, the propensity for an individual to feel emotions similar to those of his or her connections) was real. In addition, it was concerned that a general negative tone in their feeds would lead users to spend less time on the social network. The experiment confirmed both hypotheses. The Facebook–Cambridge Analytica data scandal, which began with data collection in 2014, involved the harvesting of information from up to 87 million Facebook users, which was allegedly used to steer public opinion ahead of the upcoming election [16].
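As a rough illustration of the mechanism studied in that experiment, the sketch below skews a hypothetical news feed using a toy sentiment score. The word list and threshold are invented; the actual experiment relied on Facebook's internal systems, not on code like this.

```python
# Illustrative sketch of emotion-skewed feed curation (invented data/scoring).
# A real system would use a trained sentiment classifier; here, a toy word list.

NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful", "lonely"}

def sentiment_score(post: str) -> float:
    """Toy score: fraction of words deemed 'negative' (0 = neutral/positive)."""
    words = post.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / len(words)

def curate_feed(posts: list[str], max_negativity: float = 0.1) -> list[str]:
    """Suppress posts above a negativity threshold, skewing the user's exposure."""
    return [p for p in posts if sentiment_score(p) <= max_negativity]

feed = ["What a terrible, awful day.", "Lovely walk in the park today!"]
print(curate_feed(feed))  # ['Lovely walk in the park today!']
```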
4. Robots Multiply Concerns
Robots could soon be interacting with humans in a pervasive way. From the full spectrum of robotics applications, the ones with the highest potential for interaction are social robots (which interact and communicate with humans or other autonomous physical agents by following social behaviors and the rules attached to their role, and therefore have these interactions at their root functionality), service robots (which are devised to assist human beings, typically by performing a job that is dirty, dull, distant, dangerous, or repetitive, including household chores), and assistive robots (which have the function of helping people in convalescence, rehabilitation, training, and education). The interactions with robots could be more intense than those online (it should be mentioned that there is substantial debate over whether interactions with physically embodied robots are more or less intense than those with virtual agents). The main reason for this is that, because robots have a physical presence, they can communicate in a more emotionally meaningful way: through voice, gestures, or touch. In addition, these robots will work in situations where the interaction is especially important.
At first, service robots will limit their activities to their basic job (i.e., household chores or serving food at a restaurant). However, very soon, the interaction component of these robots will take on a more important role. For instance, a waiter robot would try to steer the behavior of clients in the direction desired by the restaurant (probably towards a more expensive choice or a faster turnover). Its selling tactics will probably not work in the direction of encouraging healthy decisions or austerity, but rather in the interest of the company. Much like the chocolate bars that nowadays tempt every customer at the checkout counter, robot recommendation engines will haunt clients, waving their particular “weak-spot” treats under their noses.
It should be considered that, in the context of the Internet of Things (IoT), it will be possible to record data on each and every interaction with every customer, at a much more fundamental level than online software can. It will be possible to record each word, scrutinize every movement and gesture, or study the inflections of the voice. This could be used to infer the physical and emotional state of the potential customer; recommendation algorithms will be able to bring chocolate ice-cream to the depressed teenage boy or offer a war videogame to a seemingly angry, frustrated client. They could also offer the twentieth identical sweater to a shopping addict, or another chocolate bar to a person fighting obesity.
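A minimal sketch of how such inferences could feed a tailored push, under entirely invented cues, categories, and rules (a real system would use trained affect-recognition models):

```python
# Hypothetical sketch: mapping sensed cues to a tailored, exploitative offer.
# All cue scales, categories, rules, and products are invented for illustration.

def infer_state(voice_pitch: float, speech_rate: float, is_frowning: bool) -> str:
    """Toy affect inference from crude cues (real systems would use ML models)."""
    if is_frowning and speech_rate > 1.2:
        return "angry"
    if voice_pitch < 0.8 and speech_rate < 0.9:
        return "sad"
    return "neutral"

TAILORED_OFFERS = {
    "sad": "chocolate ice-cream",    # comfort purchase
    "angry": "war videogame",        # outlet purchase
    "neutral": "featured product",   # default push
}

def recommend(voice_pitch: float, speech_rate: float, is_frowning: bool) -> str:
    return TAILORED_OFFERS[infer_state(voice_pitch, speech_rate, is_frowning)]

print(recommend(voice_pitch=0.7, speech_rate=0.8, is_frowning=False))
# -> chocolate ice-cream
```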
In addition, the fact that robots share physical space with human beings strengthens the possibility of the robot initiating contact. In the same way that salespeople sometimes walk behind customers, frantically trying to push them into making a purchase, robots can follow customers. The main difference is that the marketing strategy of robots could be perfectly engineered for maximum success, and tailored to the specific characteristics of the client at hand. In addition, a robot does not suffer from exhaustion. There is no limit to how much a robot salesperson can insist, and it will insist in such a way as to minimize any complaints from the customers; then again, the possibility of a complaint can be modelled for each client in the same way that the propensity to make a purchase was. This should be viewed as a much more intense context than the online one, and as such, we should be prepared to limit these interactions.
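The insistence trade-off described above can be sketched as a simple expected-value loop: keep insisting while the modelled gain from a purchase outweighs the modelled cost of a complaint. All probabilities, payoffs, and decay rates below are invented for illustration.

```python
# Hypothetical sketch: a robot salesperson deciding whether to insist again,
# balancing modelled purchase probability against modelled complaint risk.

def should_insist(p_purchase: float, p_complaint: float,
                  purchase_value: float = 20.0,
                  complaint_cost: float = 50.0) -> bool:
    """Insist while expected gain exceeds expected cost (invented payoffs)."""
    return p_purchase * purchase_value > p_complaint * complaint_cost

# Each rebuffed attempt lowers purchase odds and raises complaint odds (toy decay).
p_purchase, p_complaint = 0.30, 0.02
attempt = 1
while should_insist(p_purchase, p_complaint):
    print(f"Attempt {attempt}: insisting (expected value still favourable)")
    p_purchase *= 0.8
    p_complaint *= 1.5
    attempt += 1
print("Backing off: complaint risk now outweighs expected gain.")
```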
We should also consider that the expected developments in AI will make interactions with robots much more humanlike. Some robots can already identify emotion by looking at the relative positions of features on the faces of the humans around them, and can mimic them in order to appear empathetic. When the human has a smiling face, they will speak in a cheerful manner, while if the human displays turned-down corners of the mouth, the robot will use a softer voice [19].
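This mimicry loop can be caricatured in a few lines: classify a smile or frown from crude mouth-corner geometry, then pick a matching speaking tone. The landmark inputs and rules are invented; real robots use trained facial-expression classifiers.

```python
# Hypothetical sketch of expression-matched speech: classify a smile or frown
# from mouth-corner positions, then select a voice style. All values invented.

def classify_expression(mouth_corner_y: float, mouth_center_y: float) -> str:
    """Corners above the mouth center suggest a smile; below, a frown."""
    if mouth_corner_y > mouth_center_y:
        return "smiling"
    if mouth_corner_y < mouth_center_y:
        return "frowning"
    return "neutral"

VOICE_STYLE = {"smiling": "cheerful", "frowning": "soft", "neutral": "plain"}

def respond(mouth_corner_y: float, mouth_center_y: float) -> str:
    style = VOICE_STYLE[classify_expression(mouth_corner_y, mouth_center_y)]
    return f"Speaking in a {style} voice."

print(respond(mouth_corner_y=0.4, mouth_center_y=0.5))
# -> Speaking in a soft voice.
```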
A robot can use AI to generate messages in an immediate and personalized way. In some cases, these messages are relatively simple, such as spam, but in other cases, such as chatbots (algorithms designed to hold conversations), they are extremely interesting.
The first of these chatbots was probably Eliza, created in the 1960s by Joseph Weizenbaum as a support tool for the diagnosis of mental illness. Later versions of this chatbot [20] are still running. Eliza did not have a physical embodiment; however, its limited interface was enough to be useful for its purpose. Eliza asks the patient questions in a very repetitive way, limiting itself to trying to gather more information. It is surprising to discover how, only through questions and more or less simple repetitions, the programming of the chatbot provokes in its human interlocutor the desire to share his or her deepest emotions, which can then be used for the purpose of diagnosis. According to its creator, Weizenbaum, Eliza could get some patients to open up in a way that he could not. He suspected that some of the barriers that held his patients back, for instance, whether they thought he was judging them, were not as intense with Eliza. This example is particularly interesting, as it was a very early proof that robots can sometimes have advantages over their human counterparts in an interaction.
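A minimal Eliza-style exchange can be sketched in a few lines: match a keyword pattern, reflect the user's pronouns, and answer with a question. This is a toy reconstruction of the general technique, not Weizenbaum's original script.

```python
# Toy Eliza-style chatbot: keyword patterns plus pronoun reflection.
# A minimal reconstruction of the technique, not the original program.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default prompt to keep the user talking

print(eliza_reply("I feel nobody listens to my problems"))
# -> Why do you feel nobody listens to your problems?
```

The striking part, as Weizenbaum observed, is how little machinery is needed to keep a person talking.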
Another interesting example is Lolita, a chatbot created by Spanish researchers, which holds conversations with users of social networks with the aim of identifying possible pedophiles [21]. The conversations are realistic enough to convince other users that the chatbot is a real young girl or boy. If a user's responses are deemed risky, the chatbot sends a warning to the local police, which is then followed by an investigation of the suspect. This project shows another key element of nonhuman interactions: it is possible to gain access to a vast number of human targets. Of course, the investigations carried out by Lolita need to study the reactions of large numbers of potential criminals in order to filter out the few who will be monitored more closely. In the same way, robots can interact in the physical space with many different humans and can share information with other entities without a physical presence.
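As a crude illustration of this screening pattern, the following sketch flags conversations containing risky phrases for human review. The phrase list and threshold are invented placeholders; they do not reflect the actual detection logic of the project.

```python
# Hypothetical sketch of conversation screening: flag risky exchanges for
# human review. The phrase list and threshold are invented placeholders.

RISKY_PHRASES = ["home alone", "don't tell your parents", "send a photo"]

def risk_score(messages: list[str]) -> int:
    """Count occurrences of risky phrases across a conversation."""
    text = " ".join(messages).lower()
    return sum(text.count(phrase) for phrase in RISKY_PHRASES)

def flag_for_review(messages: list[str], threshold: int = 2) -> bool:
    """Only conversations above the threshold are escalated to investigators."""
    return risk_score(messages) >= threshold

conversation = ["are you home alone?", "don't tell your parents we talked"]
print(flag_for_review(conversation))  # True
```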
The fact that social interactions with robots can be personalized has also been shown in the context of chatbots. Among many other interesting examples, we can find the portal Chatbot4u [22], in which chatbots that emulate different characters, among them several celebrities from the world of television and cinema, are made available to users upon payment. Users enjoy having a conversation with an algorithm that takes on the personality of the celebrity of their choice, or the personality they prefer in a conversational partner. In some cases (as sadly seems to be the case for a large part of newly developed technology), the chatbots are set up with a disposition for sexual conversations. This example shows two things. First, the social component of human–robot interactions can be so important that it justifies the existence of some robots devoted solely to this purpose. Second, even a very crude approximation to customizing the interaction to the preferences of the user is already attractive, and as such, users are already willing to pay for it.
Although the chats in Chatbot4u are easily identifiable as something other than a human conversation, social robots could soon be almost humanlike. Some authors estimate that in the coming years, chatbots could reach a point where they can pass the Turing test (that is, manage to convince a human interlocutor that they are talking to another person). There are annual competitions in which programmers pit their creations against one another, such as the ChatterBox Challenge [23], the Loebner Prize (which has several levels, the highest of which includes the artificial generation of a 3D interlocutor indistinguishable from a human being), or the Kurzweil/Kapor test, in which an extended conversation is maintained for two hours (this last one pits the position of the transhumanist Ray Kurzweil, who holds that machines will soon pass the test, against that of Mitchell Kapor, co-founder of the Electronic Frontier Foundation, who states otherwise). Although news reports have claimed that the Turing test has already been passed by a chatbot emulating a teenager, Eugene Goostman, we are still far from that point [24].
An interesting example was the surprising advertising campaign for the movie Ex Machina [25], which narrates the process of evaluating the first android to pass the Turing test. The advertising for the film did not look like publicity. Instead, a false social profile was created, including photos of the actress who played the leading android, and a chatbot was programmed to engage in conversations with users of the social network Tinder, whose main objective is to pair its users up on dates. The chatbot was programmed to emulate a young and attractive woman. Once a user expressed the desire to meet her, he was referred to the website of the film.
The advances in AI that allow the development of increasingly realistic conversations are paralleled by the generation of visuals to support them. It is currently easy to swap one actor's features for another's, and it will soon be possible to generate a 3D version of them: a real version of the highest Loebner Prize level.
Recently, Google unveiled a digital assistant able to perform some chores telephonically, such as booking a table at a restaurant or cancelling a hairdressing appointment [26]. In the examples of use provided by the company, the digital assistant interacted with humans without revealing that it was not, in fact, human. Indeed, in many cases humans will not know whether they are interacting with another human or with a robot. These realistic forms of communication should be treated carefully, and much more so when considered together with the possibility of tailoring the features of the interaction to the particular profile of the customer. For instance, particularly empathetic humans could be tricked into being compassionate toward a seemingly distressed caller.
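One simple protective measure implied here is mandatory disclosure. The sketch below shows a hypothetical wrapper that forces a conversational agent to identify itself as automated on its first turn; the class and message are invented, not part of any real assistant.

```python
# Hypothetical sketch: forcing an automated caller to disclose its nature
# before the conversation proceeds. All class and method names are invented.

class DisclosingAgent:
    DISCLOSURE = "This is an automated assistant calling on behalf of a client."

    def __init__(self) -> None:
        self._disclosed = False

    def say(self, utterance: str) -> str:
        """Prefix the first turn with a disclosure, so the human is never misled."""
        if not self._disclosed:
            self._disclosed = True
            return f"{self.DISCLOSURE} {utterance}"
        return utterance

agent = DisclosingAgent()
print(agent.say("I'd like to book a table for two at 7 pm."))
print(agent.say("Thank you, see you then."))
```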
Assistive robots occupy a particularly special position in this respect. The "personality" exhibited by an assistive robot can have dramatic effects on the human it cares for. In the same way as a human caregiver, it is in a position to encourage behaviors that promote the independence of its client, but also to take the client out of their comfort zone. For instance, a robot can encourage a senior who spends most of the time sitting down to walk for some time every day. Depending on how intensely the robot pushes, the strategy will be more or less effective and will have different effects on the wellbeing of the senior. It is desirable for this behavior to be adjusted to the particular state and personality of the client, so that the robot can be perceived as kind and encouraging, but this tailoring opens the door to an array of possible problems.
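As a toy sketch of such tailoring, the function below adjusts how insistently a robot prompts a walk based on the client's reported mood and activity. The scales, thresholds, and messages are invented; a real assistive robot would need clinical guidance, and the same tailoring could just as easily be tuned against the client's interest.

```python
# Hypothetical sketch: an assistive robot tuning how hard it encourages a walk.
# Mood and activity scales, thresholds, and messages are all invented.

def encouragement(mood: float, minutes_walked_today: int) -> str:
    """Pick a prompt intensity: gentler when mood is low, firmer when inactive.

    mood is a toy scale from 0.0 (very low) to 1.0 (very good).
    """
    if minutes_walked_today >= 20:
        return "Well done on today's walk!"
    if mood < 0.3:
        return "Whenever you feel up to it, a short stroll might be nice."
    if mood < 0.7:
        return "How about a ten-minute walk before lunch?"
    return "Let's do your twenty-minute walk now; you've got this!"

print(encouragement(mood=0.5, minutes_walked_today=0))
# -> How about a ten-minute walk before lunch?
```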
5. Some Guidelines for Action
Until now, this article has reviewed the origins of some pressing issues in roboethics, which lie at the frontier between roboethics and information ethics. It first put them in the context of marketing ethics, stressing two main points: first, that human attention and self-regulation are limited and susceptible to being exploited; second, that tailoring the product or service to the particular client increases the opportunities for manipulation. It then presented interactions with robots as a particularly intense setting, where the humanlike presence and, again, the possibility of tailoring communications to the profile of the human target can be especially problematic. Finally, this section presents some guidelines that could be useful in limiting the potentially harmful effects of human–robot interactions in the context of information ethics.
All these recommendations have, at their core, the idea of defining human beings as vulnerable, and therefore in need of protection from manipulative practices. This idea is not new; it arises, once again, from marketing ethics, where groups such as children or the elderly have long been recognized as especially vulnerable [27], and hence deserving of protection. We should consider, for the purpose of our investigation, that there is nothing qualitatively different about adults that distinguishes them from children [28]. On the contrary, and as discussed above, adults have limited resources for attention and self-regulation. Protecting consumers should not be seen as paternalistic, but rather as a necessary step in a world where interactions with robots will become commonplace. What is more, some industries (namely, banking) have already implemented protective policies that have completely changed their activities. In a way, there is more at stake in interactions with robots: not only financial resources but also emotions are at play.