Article

Digitocracy: Ruling and Being Ruled

by
Alfonso Ballesteros
Department of Law, Universidad Miguel Hernández de Elche, 03207 Elche, Spain
Philosophies 2020, 5(2), 9; https://doi.org/10.3390/philosophies5020009
Submission received: 17 April 2020 / Revised: 1 June 2020 / Accepted: 9 June 2020 / Published: 12 June 2020

Abstract

Digitalisation is attracting much scholarly attention at present. However, scholars often take its benefits for granted, overlooking the essential question: “Does digital technology make us better?” This paper aims to help fill this gap by examining digitalisation as a form of government (digitocracy) and the way it shapes a new kind of man: animal digitalis. I argue that the digitalised man is animal-like rather than machine-like. This man does not use efficient and cold machine-like language, but is rather emotionalised through digital technology. If those who are ruled acted like machines, data would not be produced on a mass scale, and machine learning would stop learning. Digital man has animal features and is ruled by his brain’s reward system. We need to abandon this new form of government and the resulting man. To overcome digitalisation, we need a humanism that recovers the proper place of man over animals and artefacts, but maintains respect for the value of nature.

1. Introduction

A significant amount of research in recent years has focused on digitalisation: On its new forms of power, and on how it is shaping society and man. Frank Pasquale has given a detailed analysis of the first two elements (power and society) and of the technologies used to control information and money with algorithms (from Silicon Valley and Wall Street) [1]. Éric Sadin has shown the stages and spirit behind Silicon Valley’s rule [2]. Byung-Chul Han argues that the peculiar efficiency of this power through digital technology resides in the exploitation of supposedly free individual choices. This is so-called psychopolitics, the predominant form of power in post-industrial capitalism [3]. The work of the Center for Humane Technology is also essential to understanding this new form of government and the resulting man. This centre, based in Silicon Valley, is the most important think tank on digitalisation, acting as a kind of conscience of the Valley.
Many scholars have studied digitalisation as a process that will lead us to an inhuman world through strong artificial intelligence (AI). This perspective seems problematic, as it takes digitalisation as an unavoidable necessity. Moreover, it fails to question current digitalisation [4] and how it shapes power, society, and man. In a similar vein, other scholars take for granted that we live in an “AI society” and that digitalisation is something that merely needs to be moralized with principles. This might be misleading [5]. On the one hand, we cannot take anything for granted—we do not live in an AI society. On the other, principles might be used to avoid discussion, because we only formulate principles for those things we already accept. Of course, many documents with principles to regulate AI have appeared, and these are useful and necessary [6]. However, they are not enough. We need further discussion on this topic, both by scholars and by laypersons [4]. Digitalisation is a political matter. For these reasons, more work is needed to describe and evaluate digitalisation.
The purpose of this study is to describe and examine digitalisation as a form of government: Digitocracy. This paper investigates the relationship between those who rule and those who are ruled; in particular, how the latter are shaped by digitalisation. From this ruling emerges a new kind of man: Animal digitalis. This paper is inspired by Hannah Arendt’s political philosophy, whose humanism allows us to show the superiority of humans over animals and artefacts—precisely the distinction that digitalisation is blurring [7] (pp. 44–45).
The main thesis of this paper is that digital technology, as it is today, tends to animalise man. This is the paper’s main novelty. The thesis builds on the philosophy of Hannah Arendt and Byung-Chul Han. The former considered 20th-century man animalised, but described his language as machine-like, as she did not live to see the irruption of digital technology and its emotionalisation of language. The complement to Arendt is Byung-Chul Han, who explicitly argues against Arendt’s thesis of animalisation; however, I argue that his philosophy confirms hers, and also serves to describe the animalisation of 21st-century man by digital technology. As digitalisation expands with the Covid-19 crisis, our understanding of it becomes even more necessary.

2. Does Digital Technology Make Us Better?

Asking the right questions can lead us to what is essential. Questions also show us what we care about and what we take for granted. Even those questions that claim to be value-free and purely scientific are value-driven. If we value science for science’s sake or technology for the sake of technology, we are judging; that is, we are in the realm of values and human goods. Some of those who advocate for digitalisation do not ask questions about technology. However, this is precisely what we need to do. There are specific questions we need to ask, such as “Should we allow the Internet of Things at all?” [2] or “Should we even use weak AI in this domain [or in that one...] at all?” [4]. These seem to be some of the right questions, and they are asked by those who do not take the benefits of digitalisation for granted. In a more general sense, we can pose a humanist inquiry about what we really value, that is, human dignity: “Does digital technology make us better?”
Debate on digital technology is sometimes silenced by asserting that the technology is neutral, so that discussion is reduced to how it is used, or to an ethics of how it should be used. The point of departure of this reflection is that technological neutrality is impossible: Any new tool or technology has its own features and its own influence on human life. It introduces a new scale and new tendencies into human life, as has been pointed out in the classics of media theory [8] (p. 17), [9] (pp. 3–20) and, more recently, in the philosophy of technology [10].
This is an old topic. Plato explored it in the fable of King Thamus in his Phaedrus. The fable concerns the appearance of writing. Writing significantly influences the human being who makes use of it, regardless of the content of what is written. The structure of a technology and the specific use of it are very different things. In the fable, King Thamus is sceptical about writing. Why? Because with writing, memory becomes less relevant. Writing therefore diminishes that human capacity regardless of what the person writes or how the person uses writing. If we assume that every tool strongly influences human life, it can be said that the design of things is a moral activity: The more structuring or persuasive the technology, the more morally significant its design.
If digital technology is not neutral, we should ask the humanist question: “Does digital technology make us better?” or “What kind of man is homo digitalis?”. This does not mean asking about the gains in well-being or task efficiency that digital technology allows, but about man as such. Indeed, digital technology today is determined by an economy whose primary interest is the absolute capture of the individual’s attention; the goal is to optimize screen-time. At the same time, in the hands of the state, digital technology makes it possible to accomplish the dream of a more efficient and organised society with unprecedented government control [11]. However, in this paper, I will focus on the first aspect of digitalisation, which is predominant in Western countries; China represents the second.
According to the above, it might be said that digital technology is not just a tool. It is an efficient device to rule. It allows an unprecedented power to rule, held by those who design these technologies or by those who have access to the data produced by them. This power gives rise to so-called psychopolitics, very much in the Californian spirit of Silicon Valley. Psychopolitics implies that those who are being ruled think they are acting freely, while they are acting the way that those in charge want. This ruling exploits the freedom of the individual and, through a precise control of the psyche, steers him towards the choice most convenient for the ruler. This is the new form of power of post-industrial capitalism, according to the German–Korean philosopher Byung-Chul Han [3].
Digital technology influences the psyche by offering what the user desires, and it is addictive by design, as Natasha Schüll has shown [12,13]. Technologies of search, which are discussed below, are particularly addictive. Digital technology is addictive at a pre-reflective level because it leaves little time for reason to manifest itself. Therefore, I approach digital technology as a kind of ruling, although a smart and soft ruling: A “soft totalitarianism” in the words of Éric Sadin [2], or the “capitalism of like” in Han’s words [3] (p. 30). The first point dealt with here concerns those who rule in the digital realm.

3. Ruling: Global Algorithm Governance

3.1. Homo Faber’s Post-Modern Ruling

We have jumped out of the frying pan and into the fire; that is, from bad to worse: From modern Western technocratic humanism to post-modern and post-humanist thought [10] (pp. 21–40). Technocratic humanism was the predominant tendency in the Modern Age. This thought asserts that man is spirit and reason, the craftsman who dominates nature. This is the mentality of homo faber, a dualist approach to reality that tends to despise nature and the body. It sharply distinguishes man from everything else, and sees everything else in the measure of man: Nature or human-made things. For that reason, homo faber looks upon everything with a means-to-an-end mentality. His reason is “calculating reason”; he is in control and constructs the man-made durable world [14] (p. 11 ff.), [15] (p. 199), [16] (p. 100).
Our world is now strongly controlled by engineers, the newest kind of homo faber. Is the engineer still faithful to these features? It is clear that technocratic humanism is no longer dominant: Man is no longer the measure of reality, and the relationship with nature is no longer simply that of a ruler. At the same time, the seeds of calculating reason have hybridised with Eastern thought.
We are ruled today by Silicon Valley’s engineers [1] (p. 4), who raise their own children tech-free [17]. That is, they make the ruling devices but, knowing their dangers, exclude themselves from their influence, as a tyrant excludes himself from his “ruling devices”, e.g., the statute law he creates. They also follow a means-to-an-end mentality, but of a new kind. They have been smart enough to organise the world according to their interests while claiming, at the same time, to be acting for the good of humanity [2,18].
Regarding the tools homo faber makes, there is a great difference between the modern and the post-modern homo faber. The modern homo faber confirmed his superiority over nature with tools that augmented his strength. Strength—more particularly, violence—uses tools, becomes greater with them, and can more easily overcome the strength of nature. The instruments that post-modern homo faber makes today are no longer tools that multiply strength to the point of replacing human strength [14] (p. 122), nor are they “extensions of man”, in McLuhan’s words [8].
The new homo faber has replaced the tools that kept nature under rule with tools that rule over our minds: Our perception, our free will, and our judgment. In short, as pointed out before, they rule over our psyche. This is captured by the use of the term “smart” for AI artefacts. Smart does not mean simply as smart as a human, which would be wrong and dangerous enough, but smarter than a human. The so-called spirit of Silicon Valley comes with a lack of confidence in human action and in humans in general [2]. This spirit is still technocratic (like the predominant modern thought), but it is no longer humanist: Humans need to be surrounded by tools that are smarter than they are.
Human superiority over artefacts is gone. Technology is increasingly produced to educate human beings on how to properly do whatever they do, like driving a car (“Shouldn’t you take a break?”) or feeding one’s own baby [2]. Artefacts now aim to replace us in some of the most human tasks, like care or sex, with care robots [2] and sexual robots [19]. The growing equalisation of humans and artefacts is highly significant, as artefacts receive ever higher legal recognition, even legal personhood, as with the gynoid Sophia, a Saudi citizen [20].
Calculating reason is a great equalizer: If everything is quantifiable, everything can be compared, and distinctions become purely numerical. Digitalisation implies blurring the distinction between humans, animals, and artefacts [7]1, but human relations with those realities are quite different. The relationship with nature differs depending on whether we think of human beings or of the ecosystem. Concerning human nature, Gnosticism predominates: Human nature is something imperfect or incomplete that has to be enhanced by human selection or machine-hybridisation (e.g., in vitro fertilization, eugenics, or cyborgs) [21]. Regarding the ecosystem, Deep Ecology or Zen Buddhism predominates: Non-human nature has to remain untouched (e.g., vegan practices, animal rights, ecosystem rights).
This post-humanist thought fits well with those elements of modern thought that take calculating reason as a great equalizer very seriously. The awareness of suffering and the desire to overcome it are predominant in the modern age. In particular, Helvétius and Bentham recognise only one good: Pleasure [22] (p. 332 ff.). They are radical anti-metaphysicians—materialists—and with their calculating reason they blur distinctions between different realities: Particularly, between animals and humans. Utilitarianism shares some perspectives with Zen Buddhism, which is popular in Silicon Valley. Zen Buddhism is the most immanent version of Buddhism, and is radically anti-metaphysical [23]. It is a kind of “religion without God”, and its main goal is to avoid suffering [24]. Helvétius’s utilitarianism is not that far from this thought (although it wants to optimise pleasure, not just avoid suffering); Buddhism is a kind of negative utilitarianism. Utilitarianism is one of those modern doctrines that has survived particularly well in post-modern thought, combining a (very modern) mathematical reduction of reality with a (post-modern) blurring of distinctions between nature and human beings.
Chade-Meng Tan, a former Google engineer, earned a reputation with his mindfulness-training course called “Search Inside Yourself”, later a bestselling book. His book teaches attention training and self-mastery [18], precisely some of the qualities most threatened by Silicon Valley’s addictive-by-design technology. The title of Tan’s book shows how smart this new ruling is. Tan hybridizes economic success (“The Unexpected Path to Achieving Success, Happiness”, says the subtitle) with good intentions that promise to change the world completely (“and World Peace”, as the subtitle continues) [18]. Éric Sadin has described these features of Silicon Valley’s spirit very clearly [2].
The next subsection is about ruling technologies; that is, technologies with two clear features: Information control and human perception control [1]. How homo faber gives the user what he has to desire is approached in the next section.

3.2. Technologies of Perception: A Threat against Common Sense?

We might start this section by recalling how phenomenology defines self-perception and world-perception. According to Hannah Arendt (following Merleau-Ponty), our knowledge of the world acquires certainty because it is shared knowledge:
“Our certainty that what we perceive has an existence independent of the act of perceiving depends entirely on the object’s also appearing as such to others and being acknowledged by them. Without this tacit acknowledgment by others we would not even be able to put faith in the way we appear to ourselves” [25] (p. 46).
The above text makes clear that not only the common world, but also our knowledge of who we are, depends in part (I would not say entirely, as Arendt does) on our appearing to others. For Arendt, sensus communis is like a sixth sense that coordinates the other five and ensures that they respond to the perception of the same object. It ensures that the private senses perceive what is in a common world. Since we enjoy five radically different senses that have the same common object, and since all human beings agree on the identity of the object, subjectivity is in some way saved by this community in perception. We each have our own point of view or doxa, but the object is the same. From this affinity arises, says Arendt, the “sense of reality” [25] (p. 50).
It might be said that digitalisation puts the sense of reality at great risk. Digital changes in perception transform how we perceive the world and how we are perceived: They offer a new world and a new self. This double epistemological transformation has its corresponding technologies, according to Frank Pasquale: Technologies of search (which mediate how we perceive) and technologies of reputation (which mediate how we are perceived) [1] (p. 58)2.
Technologies of reputation determine how we are perceived. They belong to the sphere of control and calculating reason. They analyse information and evaluate the individual in any field and for any purpose: Credit, health, work, or even inclusion in a police file. These technologies replace personal history with algorithmically interpreted information. The individual is stripped of his or her identity by the algorithm, while the rules that score him or her are opaque, unregulated, and do not guarantee an unbiased result. They therefore might (and do) give rise to various forms of discrimination—against the poor, the sick, or the dissident. It does not seem that the so-called “digital divide” will be the main form of discrimination, but rather this digital class society based on reputation. When the government enters this field, the surveillance nation emerges, characterized by a public–private partnership in which data are exchanged, achieving an increasingly complete surveillance of the individual, without blind spots [1].
Technologies of search, on the other hand, go far beyond mere search engines: They produce the world that they want to show us. They are Google, Amazon, Apple, Twitter, Facebook, and all the others that algorithmically mediate how we perceive. The distortion of technologies of search does not replace one reality with another, but with multiple personalized realities [26]. The number of realities presented to individuals is thus virtually infinite. This means that we no longer perceive the same virtual world; there is not just one. We each perceive the one that, it has been predicted, will please us [1]. The world as presented by technologies of search is not the cold world of mathematics, but a ludic world of emotion: A web of emotion is built, a web where information is produced and reproduced without limit.
The data obtained with those technologies of search increase the raw material of the algorithms, which is information. Greater volume improves statistical accuracy. For that reason, it has been pointed out that watching and improving the watchers are part of the same movement [1] (pp. 140–188).
It seems that these technologies are a significant threat to our senses and our sense of reality3. At the same time, they allow the rulers to rule over a brand-new version of man: Digital man. I have presented here the spirit of those who rule; this account of the rulers is necessary to show how those who are ruled tend to be shaped by this ruling.
Roger Berkowitz, a leading expert on Hannah Arendt’s thought and the philosophy of science, has said of our relationship with technology: “The real threat is that our lives are increasingly habituated to the thoughtless automatism of drone [in a wide sense, including robots, devices…] behavior, we humans habituate ourselves to acting in mechanical, algorithmic, and logical ways” [27] (p. 169, my emphasis).
It does not seem that this is the case with current digitalisation. Digitalisation requires a constant production of data to maintain the governance of post-industrial capitalists. It uses mechanisms, algorithms, and logic to achieve that, but it does not transform people into machines. Digitalisation seems, rather, to be a kind of animalisation, because it works by exploiting the reward system of the human brain, which we have in common with animals. It is precisely the system that allows animals to learn. The brain’s reward system is linked with how communication takes place through digital technology. Machine language is formal, mathematical language (binary code), but the language of an animal is linked, at least in part, with its conditioned system. Humans, insofar as we are animals, act (and ask for help) when triggered by fear or hunger, due to this reward system. A machine-like man, like the one Berkowitz describes, would be completely efficient; he would use language with complete efficiency and speak only in order to achieve his goals. A machine-like man would not be addicted to screens or to sharing and communication [28]. Calculating reason is out of reach for this homo digitalis; animal digitalis is a better name for this new kind of man. The following section develops this point by distinguishing how a machine-like man communicates from how an animal-like man does.

3.3. The Limits of Formal Language and Information

Behind digital technology works so-called AI. Artificial intelligence is a term that adds another element to the blurring of the divide between humans and artefacts. AI means that machines learn; they are not merely machines anymore, but machines that learn. This learning gives rise to the distinction between weak AI and strong AI [29]. Weak AI is widespread today with digital technology4 (e.g., to optimize screen-time), although strong AI is almost here. What do they have in common? The replacement of human judgment with more or less sophisticated statistics and applied mathematics. For that reason, knowledge becomes certain: There are none of the grey zones that are typical of practical wisdom [2,24].
AI lacks semantics, i.e., an understanding of meaning, but controls the sphere of syntax, i.e., the arrangement of words and phrases (as AI applied to language translation, e.g., DeepL, shows). AI deals well with (and needs) a large amount of information. It is this “formal knowledge” of so-called AI that needs an enormous amount of data to overcome its lack of semantics. According to the most radical defenders of AI, we do not need wisdom or common sense, but a more extensive power of learning and of dealing with data.
The predominance of information, Dataism, aims to make conceptual thinking superfluous [30]. Indeed, digitalisation allows an unprecedented power of prediction that, for some, makes it possible to abandon theory. Heidegger considered the possibility that modern technology and science would push reflective thinking aside as something useless and hence superfluous; a dialogue with the tradition would then be impossible, as, he says, we have lost our roots because of technology [16] (p. 15). This is a good opportunity to point out a threat, but losing theory and reflection is not an unavoidable necessity. Technology tends to uproot us, and technologies of search tend to make reflective thinking superfluous. However, tendencies are not unavoidable necessities.
Like her old professor, Hannah Arendt was also concerned about science and technology. In her most important essay on the philosophy of technology, “The Conquest of Space and the Stature of Man”, she warned against blurring the division between human beings and artefacts. She stated that scientists become dehumanized when they look at the world from an external point of view: From there, they easily confuse their technological inventions with human beings. Scientists blur the distinction between what is given and what is constructed; they abandon humanism. Our human pride is gone, as technology is no longer about extending man’s material powers:
“All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears ‘as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process’” [31] (p. 53).
It seems that she was right about that, but she did not see the new forms of communication (those that are animal-like) when she wrote about “everyday language”:
“Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behaviour even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs” [31] (p. 53).
This is precisely the way Roger Berkowitz describes the influence of AI machines on our lives. It is interesting to realise that Arendt did see the large-scale biological process. However, she did not realise that this biological process would be based on psychosomatic language. The body and its brain’s reward system are used as a way to trigger communication; indeed, they might be used as a never-ending communication tool. The abstraction of mathematics remains in the calculating reason of the ruler, but those who are ruled do not speak in mathematical signs. On this point, it is not the more abstract and (according to the modern age) objective language of formal mathematics that rules, but the less abstract and objective language of the internal rewards of the body. The language of emotions is the “language” that digital technology uses to make people “labour”—posting and sharing. Communication through digital technology exploits the brain’s reward system, which controls focus, pleasure, and addiction. Emotions and game-based appearances allow the unlimited communication sought by engineers and needed for machine learning5.
This is the language of a man tending to be reduced to biology in the digital ecosystem: The animal digitalis. Arendt defined the man of her time as an animal in her treatise The Human Condition (1958) [14]. Byung-Chul Han has criticized this view: According to him, digital man is not animal-like, but like “a thing” [32]. However, Han’s own work stresses the animalisation of digital man6. We approach this in greater detail in the next section.

4. Being Ruled: Animal Digitalis and Endless Communication

4.1. Animal Features and the Process of Animalisation of Digital Technology Man

The main thesis of this paper is that digital technology as it is today tends to animalise man. I emphasize that animalisation is a tendency, not a cause–effect necessity. Smoking is addictive by its chemical composition, as digital technology is addictive by design, yet some people might smoke and not get hooked at all. How do we animalise a person? Not by transforming him into an animal from the outside, as man is already an animal, but by blurring his distinctively human features while emphasizing those he has in common with animals. Addictions are an example.
My thesis requires stating briefly what I understand by being human. Man is the most relational animal and the most dependent, particularly at the beginning of his life: He mimics the language and posture of others to adopt an erect position. The infant human needs other individuals to become himself or herself, to become fully human. Of course, there are animals that live and hunt in packs, have feelings, and recognise each other, but none is so dependent on others, so relational by nature. Man is also the only animal able to hold a moral dialogue with himself in solitude; he is able to reflect upon and resist his tendencies. Man is, moreover, a creature of tools, a creature of the world (not just the environment): Homo faber, builder of the world and of culture. Culture includes the walls of the house that distinguish the private from the public sphere.
It seems that digital man is not in control, and he does not think with a means-to-an-end mentality. He thinks, instead, in terms of well-being. That is, he is not a homo faber; he is ruled by smart technology. His features, if we consider them as a set, are very much those of an animal: Learning by stimulus and reward (through addictive and emotional technology), lacking privacy, dominated by touch, lonely, and reduced to the present.
Digital technology is so efficient because the user willingly agrees to be connected, but then all kinds of chains appear. Technologies of search are persuasive and addictive by design. This technology tries to offer animal laborans the world he desires, according to the data. Such addictive design is considered today typical of the current form of capitalism, which works on the neuronal circuits of pleasure and reward to increase desire and reproduce desires indefinitely [28].
In particular, the specific addictive nature of current digital technology is produced in the image of slot machines and the Las Vegas casino environment. Animal digitalis is also a homo ludens. Gaming is treated as an addictive process, not as the interaction of a human with his environment and with others. Ludic digitalisation is shaped by the “ludic loop”, with these features: Individual solitude, fast feedback, random rewards, and an open-ended mechanic that maintains the curvature of the loop [13] (pp. 9–12). This ludic loop is a very lucrative one, as it extends “time on device” without limits. The economy that exploits this addiction has been called the “attention economy”; that is, the competition to capture and keep attention as a scarce commodity. Capturing attention is the first step; what really matters then is to produce and reproduce digital shadows of every human activity.
Another important element is emotion: Personalised emotions for animal digitalis that keep him hooked on the technology. I understand emotion here as something different from feeling: Emotion is dynamic and situational, more a reaction of the brain than a more-or-less permanent state of mind. Emotion has a very short temporality, completely different from feelings, which can last (guilt or love) and which require time and reflection to develop. Emotions are performative, perfect for digital man, who is hooked rather than passive, as this technology requires both labouring and consumption. The brain’s reward system deals precisely with emotional motivation; it is the instrument that triggers the unending communication loop of animal digitalis. Hyper-communication, data on a mass scale, and excessive information are the result. Everything is abundant. Indeed, abundance, according to Hannah Arendt, is the ideal of animal laborans—not durability, justice, or freedom [33]. Abundance in the context of post-industrial capitalism mainly means abundance of data. The user of digital technology is the wageless labourer who communicates in an endless process that has no particular purpose or relevant content; for that reason, content can be replaced efficiently by images. Addiction and emotion are common to animals and humans: A first element of animalisation.
Another feature of animal digitalis is the lack of privacy. The individual (partly willingly, partly without his knowledge) becomes completely transparent. According to some authors, pornography becomes a kind of model for the rest of reality: Everything, even the most private, should be transparent. Byung-Chul Han describes society as a pornographic society; it is not just bodies that are exposed, but the subconscious [34]. A kind of addiction to exposure is achieved. In life, there are no blind spots left, either for the digital other or for the algorithm. The use of algorithms brings to light information that the individual himself does not know. This total transparency offers post-industrial capitalists, who base their progress on the production of data, an unlimited field of expansion: An “economy of integral life”, as Éric Sadin has said [2]. Yet the distinction between the private and the public is specifically human: The radical distinction between those activities that are kept private and those that appear in public and are seen and heard by all.
It has been said that while things (like fireplaces or musical instruments) evoke practices that require an engagement with reality and with other people, devices evoke disengaged consumption (e.g., a heating installation) [10] (pp. 47–48). It seems that this goes further with digital technology and the predominance of touch, a predominance very much connected with the smartphone, launched in 2007. It is not only the smartphone, however; the Internet of Things also needs repeated touching to work and to extract data, as in the case of blood pressure, for example.
As one of our senses, touch gives us access to reality. It is the most vital and basic sense, common to all animals, the one most related to animal nature and necessity, and strongly attached to pleasure. Traditionally, it has been held that the higher forms of knowledge are achieved through seeing and hearing, through nous or logos. Touch is not one of the higher senses, but the most basic one.

4.2. Loneliness and Reduction to Present

There are two other essential aspects of digital technology: One is the reduction to the present; the other, loneliness. Lewis H. Lapham wrote the introduction to the classic on media, Understanding Media (1964), by Marshall McLuhan. The introduction is called “The Eternal Now”: It is about remaining in the present [8]. Electronic media, as opposed to the printed word, tend to blur the distinction between past, present, and future. Years earlier, the Frankfurt School had published the well-known Dialektik der Aufklärung (1947), pointing out that the paradox of means of communication (like media or cars) is that they isolate people and make them conform through that isolation [35] (pp. 183–184).
These thinkers were on the track of a real problem of media and digital technology. Isolation seems much more important today, and we really need to worry about it. However, isolation is not exactly the problem, as we are in constant communication and in the company of others. The problem is loneliness, which can arise despite constant communication and company; indeed, loneliness is sharper when we are in the company of others [36] (p. 476). Digital man is always connected and, in that sense, with others, not isolated. For that reason, it has been said, very accurately, that technology keeps us “alone together” [37].
A quite different concept is solitude, which requires that a person be alone, and means being by oneself, “talking with oneself”. Arendt calls it the “two-in-one”, whereas loneliness means being deserted by all others [36] (p. 476). Thinking needs solitude; reflection needs solitude. Loneliness allows no thinking, as the “undivided” person cannot reflect. According to Arendt, the lack of the habit of reflecting when alone makes thoughtlessness possible, and lack of thought is what made Adolf Eichmann’s crimes possible; she called it the “banality of evil” [38]. However, solitude is almost forbidden by this technology, which monopolizes every task and captures the undivided attention of the individual7. The individual has no opportunity to be that “two-in-one”. I would say that digital technology (as it is today) cannot replace meaningful and direct relationships or build a common order; rather, it can “organize loneliness” [36] (p. 478).
Therefore, the problem with digital technology is not that the individual is with himself (e.g., reflecting, thinking, remembering, praying), but that he is with no one. Media before digital technology (mainly, before the smartphone, the Internet of Things, and smart houses) were not so present and so invasive. Driving a car might isolate us in a sense, but it allows us to be with ourselves in solitude.
How does loneliness animalise us? Arendt understands loneliness in a spiritual sense: No significant relationship with others or with oneself arises, even if one is in the (physical) company of others. The combination of loneliness (the opposite of a relationship with others and with oneself) with the features above tends to animalise digital man even more. Loneliness without the exploitation of the brain’s reward system and emotionalisation might lead to a very functional, machine-like man rather than an animal-like one; combined with those features, however, it leads to animalisation.
The lack of reflection is directly connected with the reduction of time to the present. Cicero defines human reason precisely by that openness to time, as opposed to animals:
“The beast, just as far as it is moved by the senses and with very little perception of past or future, adapts itself to that alone which is present at the moment; while man—because he is endowed with reason […] draws analogies, and connects and associates the present and the future—easily surveys the course of his whole life and makes the necessary preparations for its conduct” [39] (MCMXIII, I, 4).
Digital technology keeps us stuck in the present. Humans know the world with the heart, a mixture of reason, feelings, and senses; the senses in human experience are therefore open to time. As pointed out above, digital technology aims to replace our senses: It takes us away from the here perceived by the senses and replaces it with a now of total ubiquity [40], depriving us of the environment. At the same time, the present becomes extraordinarily large: An extraordinarily augmented present.
Cicero points out that animals are stuck in the present, in an eternal now. Yet Cicero does not describe animals adequately. Superior animals do have courage (thymós), understood as a feeling that makes it possible to postpone the pleasant present for something pleasant in the future, through a hard path. Thymós is related to memory, experience, and projection into the future, and is superior to desire (epithymía), which is tied to the pleasant present. Digital technology tends to animalise us, as it expands the present as if we were inferior animals without courage, without thymós. Stuck in the augmented present and deprived of the senses and the environment, we are led towards animalhood [29]—specifically, an inferior animalhood8.
According to the previous reflections, fundamental human goods are threatened by current digital technology: Our access to reality, our freedom, and our relationships with others and with ourselves. As a consequence of our detachment from reality, current digital technology also makes it quite difficult to distinguish truth from lies, and degrades the political in different ways.

4.3. A Note on Digital Freedom: Free Speech or Free Reach?

The promise of the Internet for politics was indeed great; its post-national, post-state character and its decentralization sounded deeply attractive. Those promises can still become a reality if things are suitably fixed.
Today, digital technology has disappointed our political expectations. The use of technologies of search and reputation has threatened the political: It has damaged the integrity of elections and allowed the wide-scale spreading of lies, the encouragement of hatred, the polarization of discourse, foreign manipulation, and the silencing of dissent. It is probable that the source of these evils is the downgrading of politics: From the rational to the emotional.
It might be said that post-truth politics is the result of animal digitalis’ disregard of reality or, at least, that it fits very well with it. This tendency makes facts less and less important, while “likes” move to the centre. Precisely for that reason, post-truth has been described as the dominance of “lies we like” [41]. Post-truth might be seen as one of the consequences of the reduction of information and communication to a pleasant experience. However, truth is not always pleasant and likeable. As T. S. Eliot said, “Humankind cannot bear very much reality”; animal digitalis seems to bear very little.
Emotion, a non-political form of expression, dominates social networks and goes hand-in-hand with acceleration. The greater the acceleration, the greater the dominance of emotion. Why? Emotion is dynamic and situational, as opposed to rationality, which is stable and slow [3]. Within this emotionalisation at a pre-reflective level, algorithms give primacy to whatever statistically achieves the greatest attention. The structure of digital technology is non-political or anti-political because it multiplies everything that is emotional or irrational.
Since our animal conditioned system is ruled by digital technology, information is not on an equal footing. Why? Information is valued in terms of its reproduction and of the user’s time on a device. As a consequence, truth and free speech become less relevant. Digital technology makes free speech very unequal: On social networks, algorithms encourage lies and the denial of obvious realities because this content increases users’ screen-time. It is not that social networks want to spread misinformation, but that misinformation spreads well.
Given the above, users who offer this misinformation not only enjoy free speech, but an additional advantage: Free reach. Free reach, or the free multiplication of one’s own content, is granted to those who share content that technically works and multiplies users’ screen time, as Renée DiResta has explained [42] (p. 27 ff.). This is the case, for example, with YouTube recommendations, which take us down the rabbit hole by design [1,43]. This technology and emotionalisation have other dangers, such as manufactured consensus and political manipulation that achieve a terrifying level of perfection.
The algorithm could not have a less political or less humane structure, as it gives total primacy to speed. It is not uncommon, then, for automated bots to be used to “manufacture a truth” by making it trend [42]; bots are much faster than humans at achieving a trending topic. In contrast, political movements such as Occupy Wall Street, despite their huge following and popularity, did not trend, to the surprise of many [1] (p. 76). The problem? Their popularity grew slowly or, in other words, with a more human rationality and temporality.
Thus, the motto “if you make it trend, you make it true” is fulfilled [42]. On all platforms, the extraordinary simplicity of communication allows a rapid spread and an apparently homogeneous consensus that would be impossible around complex ideas. Against this machinery, it is logical that a correction of misinformation never spreads as far as the misinformation itself. Since digital technology is fast and emotional, such corrections are hardly useful: They oppose the very structure of this technology, which is emotional, fast, and ludic.

5. Conclusions: Towards a Postmodern Humanism?

Digitocracy seems to be a new form of government. It is a new way to rule an unprecedented number of people smartly and efficiently, taking advantage of everybody’s free use of digital devices. The rulers are no longer modern technocratic humanists, nor mere rational entrepreneurs seeking to earn money; they are postmodern entrepreneurs. They have been able to hybridise their economic interests with new postmodern ideas, in particular those that blur the distinctions between artefacts and humans, and with a declared pretension to be acting for the good of humanity. These new rulers use technologies that replace human perception. Those technologies rely on black-boxed algorithms that pose a threat to the human sense of reality, a precondition of any community. This gives rise to new forms of discrimination that develop underneath, behind the emotional screen of digital technology, in the realm of those algorithms.
The language of the ruling technologies is mathematical, but those who are ruled speak no mathematics. If those who are ruled acted like machines, they would be efficient; communication would be scarce and cold, data would not be produced on a mass scale, and machine learning would stop learning. On the contrary, hyper-communication is predominant: Sharing and posting are the ways those who are ruled are expected to act. Those who are ruled do not become similar to the technology that surrounds them. I have presented six features of this digital man and his relationship with technology: Addiction, emotionalisation, lack of privacy, predominance of touch, loneliness, and reduction to the present. As the goal of the rulers is to optimize screen-time, the content that prevails is the post-truth, emotional content that happens to spread very quickly and very efficiently. These features are all related to the goal of maximizing time on device and mass-scale communication, and they severely threaten us as humans, in particular our freedom.
Against this form of government and this new kind of man, we need a new humanism; to put it simply, a humanism that emphasizes the superiority of humans over animals and artefacts. We need a humanism that affirms freedom, our relational nature, and our capacity to judge and act morally. Digitalisation needs to correct the way it works today, as it serves not the user, but the Californian technological companies. Digital technology can be a real tool that helps people to live a free and human life. This will be possible only if we abandon data as the main source of money (in financial markets or digital companies) and focus attention on the production of durable things, made to serve and not to extract.

Funding

This research received no external funding.

Acknowledgments

I deeply appreciate the comments on an earlier version of this paper and the bibliography suggestions from Jesús Ballesteros, A.C. Pereira-Menaut, A. Legerén Molina, and Nicole Dewandre, as well as from five anonymous reviewers; any remaining errors are mine.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Pasquale, F. The Black Box Society: The Secret Algorithms That Control Money and Information; Harvard University Press: Cambridge, MA, USA, 2015. [Google Scholar]
  2. Sadin, E. La Siliconisation du Monde: L’irrésistible Expansion du Libéralisme Numérique; L’échappée: Paris, France, 2016. [Google Scholar]
  3. Han, B.-C. Psychopolitics: Neoliberalism and New Technologies of Power; Butler, E., Translator; Verso: London, UK; New York, NY, USA, 2017. [Google Scholar]
  4. Zimmerman, A.; Di Rosa, E.; Kim, H. Technology Can’t Fix Algorithmic Injustice. Available online: http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic (accessed on 13 April 2020).
  5. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, Atomium; European Institute for Science, Media and Democracy: Brussels, Belgium, 2018. [Google Scholar]
  6. Bartosch, U.; Bauberger SJ, S.; von Damm, T.; Engels, R.; Rehbein, M.; Schmiedchen, F.; Stapf-Finé, H.; Sülzen, A. Policy Paper on the Asilomar Principles on Artificial Intelligence; Federation of German Scientists (VDW): Berlin, Germany, 2018. [Google Scholar]
  7. Floridi, L.; Dewandre, N.; Broadbent, S.; Ess, C.; Ganascia, J.-G.; Hildebrant, M.; Laouris, Y.; Lobet-Maris, C.; Oates, S.; Pagallo, U. The Onlife Initiative. Background Document: Rethinking Spaces in the Digital Transition. In The Onlife Manifesto: Being Human in a Hyperconnected Era; Floridi, L., Ed.; Springer Open: Basel, Switzerland, 2015; pp. 41–48. [Google Scholar]
  8. McLuhan, M. Understanding Media: The Extensions of Man; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
  9. Postman, N. Technopoly: The Surrender of Culture to Technology; Vintage Books: New York, NY, USA, 1992. [Google Scholar]
  10. Verbeek, P.-P. Understanding and Designing the Morality of Things; University of Chicago Press: Chicago, IL, USA, 2011. [Google Scholar]
  11. Han, B.-C. La Emergencia Viral y el Mundo de Mañana. Available online: https://elpais.com/ideas/2020-03-21/la-emergencia-viral-y-el-mundo-de-manana-byung-chul-han-el-filosofo-surcoreano-que-piensa-desde-berlin.html (accessed on 13 April 2020).
  12. Schüll, N. Addiction by Design: Machine Gambling in Las Vegas; Princeton University Press: Princeton, NJ, USA, 2014. [Google Scholar]
  13. Harris, T.; Raskin, A. Should’ve Stayed in Vegas. In Your Undivided Attention; Interview with Natasha Schüll; Center for Humane Technology: San Francisco, CA, USA, 2019; pp. 1–17. [Google Scholar]
  14. Arendt, H. The Human Condition, 2nd ed.; The University of Chicago Press: Chicago, IL, USA; London, UK, 1998. [Google Scholar]
  15. Dewandre, N. Rethinking the Human Condition in a Hyperconnected Era: Why Freedom Is Not about Sovereignty but about Beginnings. In The Onlife Manifesto: Being Human in a Hyperconnected Era; Floridi, L., Ed.; Springer Open: Basel, Switzerland, 2015; pp. 195–215. [Google Scholar]
  16. Heidegger, M. The Principle of Reason; Lilly, R., Translator; Indiana University Press: Bloomington, IN, USA, 1991. [Google Scholar]
  17. Stevens, L.B. Why Silicon Valley CEOs Raise Their Kids Tech Free. 2019. Available online: https://wezift.com/parent-portal/blog/why-tech-ceos-raise-their-kids-tech-free (accessed on 13 April 2020).
  18. Tan, C.-M. Search inside Yourself; Harper One: San Francisco, CA, USA, 2014. [Google Scholar]
  19. Sharkey, N.; van Wynsberghe, A.; Robbins, S.; Hancock, E. Our Sexual Future with Robots; A Foundation for Responsible Robotics Consultation Report; Foundation for Responsible Robotics: The Hague, The Netherlands, 2017. [Google Scholar]
  20. Singh Dang, S. Artificial Intelligence in Humanoid Robots. Available online: https://www.forbes.com/sites/cognitiveworld/2019/02/25/artificial-intelligence-in-humanoid-robots/#b8ba8df24c72 (accessed on 13 April 2020).
  21. Ballesteros, J. La constitución de la imagen actual del hombre. Tópicos Rev. Filos. 1998, 15, 9–29. [Google Scholar] [CrossRef]
  22. Taylor, C. Sources of the Self: The Making of the Modern Identity; Harvard University Press: Cambridge, MA, USA, 1989. [Google Scholar]
  23. Kempf, J.A. Silicon Valley Monk: From Metaphysics to Reality on the Buddhist Path; Dharma Gates Publishing: Cazadero, CA, USA, 2014. [Google Scholar]
  24. Han, B.-C. Philosophie des Zen-Buddhismus; Reclam: Leipzig, Germany, 2002. [Google Scholar]
  25. Arendt, H. The Life of the Mind; Harvest Book: San Diego, CA, USA; New York, NY, USA; London, UK, 1978. [Google Scholar]
  26. Pariser, E. The Filter Bubble: What the Internet Is Hiding from You; Penguin: London, UK, 2012. [Google Scholar]
  27. Berkowitz, R. Drones and the Question of ‘The Human’. Ethics Int. Aff. 2014, 28, 159–169. [Google Scholar] [CrossRef]
  28. Pharo, P. Le Capitalisme Addictif; Éditions PUF: Paris, France, 2018. [Google Scholar]
  29. López Moratalla, N. Inteligencia Artificial ¿Conciencia Artificial? Digital Reasons: Madrid, Spain, 2017. [Google Scholar]
  30. Steadman, I. Big Data and the Death of the Theorist. Wired, 25 January 2013. [Google Scholar]
  31. Arendt, H. The Conquest of Space and the Stature of Man. New Atlantis 2017, 18, 43–55. [Google Scholar]
  32. Han, B.-C. The Burnout Society; Stanford Briefs: Stanford, CA, USA, 2015. [Google Scholar]
  33. Ballesteros, A. Hannah Arendt: From Property to Capital… and Back? Archiv Rechts Sozialphilosophie 2018, 2, 184–201. [Google Scholar] [CrossRef]
  34. Han, B.-C. Transparenzgesellschaft; MSB Matthes & Seitz Verlag: Berlin, Germany, 2012. [Google Scholar]
  35. Horkheimer, M.; Adorno, T.W. Dialectic of Enlightenment: Philosophical Fragments; Jephcott, E., Translator; Stanford University Press: Stanford, CA, USA, 2002. [Google Scholar]
  36. Arendt, H. The Origins of Totalitarianism; Harvest: Orlando, FL, USA, 1994. [Google Scholar]
  37. Turkle, S. Alone Together: Why We Expect More from Technology and Less from Each Other; Basic Books: New York, NY, USA, 2011. [Google Scholar]
  38. Arendt, H. Eichmann in Jerusalem: A Report on the Banality of Evil; The Viking Press: New York, NY, USA, 1963. [Google Scholar]
  39. Cicero. De Officiis; Miller, W., Translator; Macmillan: London, UK; New York, NY, USA, 1913. [Google Scholar]
  40. Virilio, P. Cybermonde, la Politique du Pire; Textuel: Paris, France, 1996. [Google Scholar]
  41. Ballesteros, J. La postverdad: O las mentiras que nos gustan. In Para una Nueva Cultura Política; Masferrer, A., Ed.; Catarata: Madrid, Spain, 2019. [Google Scholar]
  42. DiResta, R. Computational Propaganda: If you make it trend, you make it true. Yale Rev. 2018, 106, 12–29. [Google Scholar] [CrossRef]
  43. Harris, T.; Raskin, A. Down the Rabbit Hole by Design. In Your Undivided Attention; Interview with Guillaume Chaslot; Center for Humane Technology: San Francisco, CA, USA, 2019; pp. 1–19. [Google Scholar]
Notes

1. The spirit of Eastern thought and the ever-increasing digital connections blur distinctions between humans, animals, and artefacts in the so-called technological ecosystem.
2. Pasquale discusses a third kind, outside the scope of this paper: Finance technologies.
3. There is at least one objection to this. The selection of information happens in a large number of contexts; e.g., when my wife asks me to pick a restaurant, I select between restaurants. This process might be compared to the Google engine when I search for a restaurant for myself: Google gives me the websites it predicts I will like most, much as I pick the restaurants my wife will like. I would suggest that the main difference is that the Google engine has the purpose of making me more dependent, which leads to asymmetry and a lack of intelligibility of the results. These features are not present in a reasonable wife–husband relationship. I thank a reviewer for this food-for-thought critique.
4. A classic example of the former is AlphaGo, the program that managed to beat Fan Hui, European champion of the game Go. Go is a more complex game than chess; thus, AlphaGo (or its designer, Demis Hassabis) went even further than Deep Blue had twenty years previously, when it beat Kasparov at chess.
5. Will this change the moment AI works without so much information? The moment AI knows how to influence a person from the “look” on his face? Is this digitalisation just the one that machine learning needs today?
6. In the same book, Han points out that digital man, with his multi-tasking, is like a wild animal: He is animal-like. A few pages later, in the chapter “Vita activa”, he rejects the Arendtian description of current man as an animal [32].
7. “Your Undivided Attention”, a podcast on digitalisation from a humanist perspective, by Tristan Harris and Aza Raskin (Center for Humane Technology).
8. One important objection to all of this is: How can something that requires humans (such as AI, e.g., Netflix recommendations) make us less than human? I would say because it treats humanity as something that applied mathematics can convey; it considers humans in species terms. This reveals another element of dehumanisation, namely, forgetting unpredictability. Can AI predict the rejection of AI?
