Article

Human, All Too Human: Do We Lose Free Spirit in the Digital Age?

by Aleksandra Sushchenko 1,* and Olena Yatsenko 2

1 Department of Art and Media, School of Arts, Design and Architecture, Aalto University, Otaniementie 14, 02150 Espoo, Finland

2 Institute for Human-Centered Engineering, School of Engineering and Computer Science, Bern University of Applied Sciences, Quellegasse 21, 2501 Biel, Switzerland

* Author to whom correspondence should be addressed.
Humanities 2025, 14(1), 6; https://doi.org/10.3390/h14010006
Submission received: 7 October 2024 / Revised: 30 December 2024 / Accepted: 3 January 2025 / Published: 9 January 2025
(This article belongs to the Section Philosophy and Classics in the Humanities)

Abstract

This article engages in a philosophical dialogue with Nietzsche’s views on the discourse of power, examining rising concerns about the digitization and algorithmization of society in the context of advances in robotics and AI. It highlights human agency through Nietzsche’s perspective on creative culture as a space in which individuals actively engage in free thought and action, with responsibility as the key foundation of social resilience. By approaching metaphysical systems through the discourse of power, Nietzsche emphasizes that humanity can overcome system-driven delusions through reason, which he understands as a form of critical reflection existing solely in the domain of creative culture. We assert that Nietzsche’s arguments offer alternative perspectives on the ethics of technology, particularly through the dialectics of “weak and strong types of behavior”. This dialectic allows us to explore how resistance—arising within creative culture—can serve as a vital counterbalance to the mechanization of social life. It provides a strong foundation for algorithmic resistance by inspiring ethical frameworks rooted in individuality and emotional depth, challenging the homogenizing tendencies of digitization and algorithmization. It emphasizes the importance of subjective stories, emotions, and compassion, forming human-centered ethical principles that preserve the richness of individual experience and protect against system-driven delusions.

1. Introduction

As discussions about the automation of processes and the algorithmic governance of our lives intensify—driven by the deep integration of technology into nearly all areas of life—Nietzsche’s Human, All Too Human gains fresh significance. It provides a perspective on human–society interaction that emphasizes the primacy of subjective experience and personal responsibility over collective norms and behavioral rules. This serves as a foundation for exploring a personalized approach to digital ethics. This perspective has been studied from various angles. One approach is to examine the dialectics between individual freedom and power relations, as discussed by Nietzsche ([1878] 2006). Similarly, Leiter (2003), drawing on On the Genealogy of Morality, emphasizes the psychological foundations of moral values, particularly the interconnection between the master–slave relationship and its origins in personal choice. In his later work, Moral Psychology with Nietzsche (Leiter 2019), Leiter focuses on the psychology of moral behavior and decision making, emphasizing the role of emotions as the foundation of moral beliefs and actions. In doing so, he highlights the importance of creativity and individuality as central to moral choice. In Nietzsche: Philosopher, Psychologist, Antichrist (Kaufmann 2013), Kaufmann argues that Nietzsche’s philosophical concepts are systematic and profound, challenging the initial perception of them as mere sensationalism. Kaufmann interprets the “will to power” as a universal force for both the creation and destruction of values, emphasizing its connection to personal choice and responsibility.
In a related vein, Clark, in Nietzsche on Truth and Philosophy (Clark 1990), stresses the impact of Nietzsche’s claims, focusing on the verification of truth and value in human practices, as well as the life-affirming meanings that emerge from them. In What is a Free Spirit? Nietzsche on Fanaticism, Reginster (2003) explores the problem of fanaticism, arguing that blind and excessive devotion to an idea or “truth”, coupled with self-sacrificial adherence (“strong behavior”), is fundamentally a lack of intellectual freedom and creativity. In contrast, the “free spirit”, expressed through “weak behavior”, affirms genuine life values, embracing their dynamism and flexibility. Similarly, Moore and Brobjer (2003), in Nietzsche and Science, examine the influence of Nietzsche’s philosophy on scientific thinking and methods. They highlight how metaphors in Nietzsche’s work help articulate complex scientific ideas, making them more accessible for comprehension and analysis. Further, Kroker (2004), in his work The Will to Technology and the Culture of Nihilism: Heidegger, Nietzsche, and Marx, identifies technology as both the cause and consequence of nihilism as the dominant ideology of modern society. Kroker regards Nietzsche’s works as conceptual tools for overcoming nihilism and affirming life’s values through individual effort. Branston (2023), in the work AGI, All Too Human: Nietzsche and Artificial General Intelligence, explains the drive to create AI as stemming from the slave ideology of Christian values. He argues that humanity, to avoid the fullness of agency and responsibility for its own fate, creates AI as a tool, imbuing it with sacred meaning and purpose. Mellamphy and Biswas Mellamphy (2016), in their work The Digital Dionysus: Nietzsche and the Network-Centric Condition, describe interaction in the digital world as a Dionysian bacchanalia, using Nietzschean philosophical metaphors. For them, hyperconnected modern sociality is characterized by depersonalized communicative spaces that situationally employ masks of traditional values to assert and realize their own discourse of power. Further, Grève (2024) argues that the metaphysical foundations and ethical regulation of integrating various automated mechanisms and programs into the social sphere require not only close attention and expert discussion but also philosophical reflection and well-reasoned prognostic analysis. Issembert (2023), in Nietzsche’s Three Metamorphoses and Their Relevance to Artificial Intelligence Development, analyzes the development of AI through Nietzsche’s metaphor of the three metamorphoses: the stages of the “camel”, “lion”, and “child”. The essence of the author’s argument lies in the exploration of various possibilities for goal setting and the application of AI technologies in social life, which are influenced by different understandings of human values and the corresponding logic of behavior and decision making. Kosar (2024) seeks to explore the essence of humanity through the lens of Nietzsche’s philosophy, extending its implications to the contemporary context of technology’s role in society. He contends that interpreting AI, particularly Large Language Models (LLMs), through the analogy of human brain functioning is fundamentally flawed. Drawing on Nietzsche’s critique of grammar, Kosar highlights the importance of addressing the fetishization of AI in current narratives, arguing that such perspectives obstruct a deeper understanding and the productive utilization of these technologies.
These perspectives demonstrate that human–technology interaction is influenced by the ambiguity of human nature and the logocentrism of technical materiality, both of which are embodied in the way technology functions, and this functionality structures the order of social space. Therefore, by raising the question of whether we risk losing the free spirit in the digital age, Nietzsche’s work becomes a compelling framework for rethinking the place of human agency amidst the pervasive influence of algorithms and systems. This occurs for at least two reasons. First, the unprecedented dominance of technology is a phenomenon that has emerged only recently. While technology itself is far from a novel invention, the discourse surrounding whether artificial intelligence already possesses, or will soon acquire, agency introduces an entirely new dimension to our existence (Barrat 2023). This issue becomes particularly pressing as we keep delegating tasks to technology, compensating for human frailties such as fatigue and enhancing qualities like productivity and precision. Second, technology deepens our dependencies by entertaining us, shaping our opinions, and mediating our interactions, thereby exerting a profound influence on our behavior and the habits that define how we live (Bicen and Arnavut 2015).
To understand the differences and commonalities between human and possible technological agencies, we find it useful to turn to the way Nietzsche distinguishes between weak–strong behaviors and weak–strong natures through the lens of power and autonomy. Both terms are used in the context of coexistence within the structures, norms, and rules created by societies. Weak nature, for Nietzsche, reveals itself in a reactive mode of existence—one molded by external forces, marked by conformity, and dependent upon established norms or systems. It reflects a tendency to avoid confrontation with uncertainty or the effort required for self-overcoming. Yet, for societies, such behavior paradoxically signifies strength in individuals and is seen as strong behavior, as it aligns with structures forged and sustained by authoritative power (Nietzsche 2009). Strong nature, conversely, in Nietzsche’s view, is characterized by an active stance—self-determined, willing to confront challenges, and capable of personally understanding and striving to live according to one’s values through critical reflection and inner resilience. In contrast, society generally considers this to be weak behavior, as it often involves deviating from established norms and rules, which are typically valued for maintaining order and stability. By challenging these norms, individuals who embrace such active stances may be perceived as vulnerable or unstable, leading society to label their behavior as weak (Nietzsche and Hollingdale 2020). This distinction plays an important role in the reexamination of the ethics of technology, as it enriches our exploration of human and technological agencies, demonstrating that algorithmic resistance requires not just deviation from the “program” but the strength to reshape the narrative and reclaim autonomy.
The core aim of this paper is to understand how Nietzsche’s ideas can guide the development of ethical frameworks that safeguard human individuality in a world increasingly shaped by algorithms and automation. Undoubtedly, Nietzsche’s original text is far removed from the issues of digital ethics, akin to “the light of the most distant stars”, yet the current sociocultural tensions—or the present moment on the eve of apocalypse (or singularity)—justify seeing his prophecies as fulfilled. Nietzsche’s distinction between strong and weak types provides a lens through which it is possible not only to critique the growing influence of AI in decision making and daily life but also to offer a different perspective through a better understanding of the nature of their dialectical determination. By exploring how AI threatens to impose habitual, deterministic patterns of interaction, this article emphasizes the importance of preserving spaces for human creativity, emotional vulnerability, and spontaneity—phenomena that make us human, or “too human”, and unsuitable for automation.
In this paper, we use philosophical dialogue as our way of engaging with Nietzsche’s work. The dialogue we build embodies the nature of a postmodern allusion, or intertextuality—a free interpretation of concepts circulating within the semiotic field of culture, such as free spirit, strength–weakness, and responsibility—with the aim of discovering a new perspective, approach, and methods for addressing the ethics of technology through assessing the importance of resilience and human agency. To localize the infinite play of interpretations and maintain analytical focus, we engage in polemics with Nietzsche’s discourse on power through the dialectics of weak–strong. The main interest and purpose of this dialogue lie in rethinking the current discourse on the power dynamics between the individual and society in the context of digitalization, delving deeper into the dialectical contradiction between the strong and the weak and its ethical and moral implications. Thus, it becomes an inquiry into what makes us human, where the authenticity of our existence is rooted, and why automation and algorithmization of processes are narrowly functional tools.
The structure of this paper is as follows. Section 2, titled “Human–Machine Interaction Through the Dialectics of Power”, examines the transformative effects of automation and AI on human identity, creativity, and agency. It analyzes how technological advancements, while enhancing efficiency, risk constraining human spontaneity and creativity by imposing algorithmic predictability and mechanistic traits onto human behavior, which echoes Nietzsche’s critique of the dehumanizing aspects of progress. Section 3, “Loneliness and Emotional Engagement in a Digital Age: New Configurations of Herd Mentality”, examines how advancements in AI and robotics affect human loneliness, communication, and individuality. It highlights the paradox of technology offering connection while often diminishing the quality of interactions. Drawing on Nietzsche’s critique of herd mentality, it explores how conformity to technological norms and the productivity paradox undermine personal freedom and self-reflection, emphasizing the need to reassess societal values in the digital age. Section 4, “Moral Rules in Computer Code or Personal Perspective of Responsibility?”, explores the challenges of accountability and ethical responsibility in digital ethics, highlighting the role of human agency in the absence of established moral frameworks. It emphasizes the need for proactive engagement, creative solutions, and cultural shifts to address ethical dilemmas in data technologies and robotics.

2. Human–Machine Interaction Through the Dialectics of Power

The automation of various processes and the digitization of operations create a new reality for human existence. Automation, much like other forms of standardization, risks reducing human existence to algorithmic predictability, thereby threatening to erase the spontaneity and irrationality that Nietzsche saw as essential to human flourishing (Nietzsche 2009). According to the traditional understanding of technology, it can undoubtedly be used to compensate for certain weaknesses in human physical and cognitive abilities. At the same time, interaction with technology not only alters our environment and habitual patterns but also transforms humans themselves. The boundary between human and non-human disappears: we have “become posthuman” (Hayles 2000). Technical means are an integral part of our habitus and of our information and semiotic systems, and they are tightly integrated into the architecture of our thoughts, perception, and life activities.
However, everything has two sides. While the benefits of technological progress are clear, its potential drawbacks should not be overlooked. The logic of technological advancement is dialectical. On the one hand, humans enhance technical tools by projecting their own qualities—rational thinking, purposeful action, linguistic systems for meaning and communication, and the creation of imaginary worlds and characters. As Nietzsche notes, technology is the outcome of man’s artistic nature, and it follows the path of artistic imitation (Nietzsche 1999). Yet, while this projection of human qualities into technology has led to incredible advancements, it also risks objectifying and dehumanizing the very traits that define us, leading to a mechanization of life and thought. Nietzsche cautions against this with his observation that all great progress takes place at the expense of another power (Nietzsche 1929), suggesting that the rise of rational, technological systems may constrain original thinking and emotional expression, potentially diminishing the richness of human creativity and depth of feeling.
In the philosophical interpretation of culture, there exists a notable tension between the dynamic evolution of cultural practices and the algorithmic structures that conform to social norms and technological imperatives (Spengler 1991). This tension reflects the struggle between tradition and innovation, where algorithmic processes can reinforce existing paradigms but may also inhibit original thought and the emergence of new cultural expressions. Both algorithmic logic and social norms emphasize validation and confirmation, promoting established behaviors that favor efficiency and predictability. Consequently, cultural expressions may be constrained by rigid social norms that prioritize effective communication over emotional depth. In contrast, art and creative expression provide a space to explore the possible, offering a way to transcend these norms and engage with experiences beyond conventional logic. This openness leads to a broader perspective on life, allowing for deeper emotional exploration and genuine human connection (Zahira et al. 2023). This broader perspective on human experience, shaped by subjective emotional engagement, can be interpreted in light of Nietzsche’s distinction between strong and weak types of behavior. While Nietzsche did not speak of algorithms, we can understand algorithmic structures as a modern manifestation of the stable, predictable systems that align with strong behaviors—those which conform to established norms and promote order. In contrast, weak behaviors, as Nietzsche describes, involve the courage to deviate from these norms, embracing uncertainty and emotional depth. As Nietzsche suggests, “the strongest natures retain the type, the weaker ones help it to develop” (Nietzsche 2009). In this sense, it is not the rigid adherence to algorithmic logic and social norms that drives cultural evolution and innovation but rather the willingness to break free from these structures and explore new creative possibilities.
Regarding the first type, Nietzsche claimed the following:
“History teaches that a race of people is best preserved where the greater number hold one common spirit in consequence of the similarity of their accustomed and indisputable principles: in consequence, therefore, of their common faith. Thus strength is afforded by good and thorough customs, thus is learnt the subjection of the individual, and strenuousness of character becomes a birth gift and afterwards is fostered as a habit. The danger to these communities founded on individuals of strong and similar character is that gradually increasing stupidity through transmission, which follows all stability like its shadow. It is on the more unrestricted, more uncertain and morally weaker individuals that the intellectual progress of such communities depends; it is they who attempt all that is new and manifold”.
Thus, Nietzsche critiqued societies that adhere rigidly to tradition, warning of the stagnation that can follow stability. He values those “morally weaker” individuals who deviate from norms, as they are essential for innovation and intellectual progress:
“To have to acknowledge for all duration the consequences of anger, of raging vengeance, of enthusiastic devotion—this can incite a bitterness against these feelings all the greater because everywhere, and especially by artists, precisely these feelings are the object of idol worship. Artists cultivate the esteem for the passions, and have always done so; to be sure, they also glorify the frightful satisfactions of passion, in which one indulges, the outbursts of revenge that have death, mutilation, or voluntary banishment as a consequence, and the resignation of the broken heart. In any event, they keep alive curiosity about the passions; it is as if they wished to say: without passions you have experienced nothing at all”.
Nietzsche underscores the value of vulnerability and openness to new experiences in the following claim:
“A people that is crumbling and weak in any one part, but as a whole still strong and healthy, is able to absorb the infection of what is new and incorporate it to its advantage. The task of education in a single individual is this: to plant him so firmly and surely that, as a whole, he can no longer be diverted from his path. Then, however, the educator must wound him, or else make use of the wounds which fate inflicts, and when pain and need have thus arisen, something new and noble can be inoculated into the wounded places”.
In a world shaped by the “death of God” (Nietzsche and Hollingdale 2020), it becomes logical to view humans as the standard and criterion of authenticity and perfection—rational beings capable of creating and altering the surrounding reality. However, according to the dialectical principle, any alteration in reality simultaneously provokes changes in the actor themselves. For example, some argue that the increasing use of gadgets is gradually transforming us into cyborgs (Coeckelbergh 2017). Extending this logic, the ontological status of robots, like that of humans, is perceived as liminal (Prescott 2017), occupying a space that is neither purely mechanical nor entirely alive. While ethical norms suggest that the mechanical nature of robots should remain transparent, the autonomy of their actions leads to assumptions about their independence and thus their unpredictability. But does this really create space for unpredictability and chaos, or is it simply built upon familiar patterns, cycling through repetitive loops? In many ways, modern life mirrors this mechanized, algorithmic repetition, as individuals, too, can adopt these traits of liminality—living as though on autopilot, navigating routines without true emotional engagement or creativity. This links to Proust’s notion of a personal hell (Proust 2013), where one is trapped within a repetitive, familiar reality, endlessly reshuffling its elements without ever escaping its boundaries. The illusion of novelty masks the underlying sameness, offering no genuine departure from the established order. Nietzsche heavily criticizes habitual thinking and repetitive patterns that prevent true creativity and growth. He suggests that humans often get trapped in familiar routines and established structures, which he metaphorically refers to as a form of personal or societal hell. This “hell” is marked by the illusion of change when, in reality, we are merely rearranging pre-existing ideas and experiences, never truly transcending our current state. Over the course of a life journey, human agency is formed through a series of irreversible decisions, where the actual is fixed and cannot be undone, unlike the variability of the potential. The existential experience of being-towards-death, as described by philosophers like Heidegger, acts as a ‘built-in safeguard’ that grounds human beings in the here and now. This awareness of mortality reinforces the stability of human identity, and it helps to distinguish between true roles—those corresponding to one’s values—and imaginary ones—those that are merely hypothetically possible. For AI, all hypothetical possibilities are equally legitimate. Thus, while robots and AI may mimic certain aspects of human autonomy, they do not share the core features of agency—rooted in lived experience and the existential awareness of mortality—that form the substance of human identity. For AI, time is not existential, and the pluralism of possibilities does not imply irreversibility of choice or the necessity of responsibility for the outcomes obtained. In other words, while it is vital for humans to distinguish between the imagined and the real, for AI, different alternatives are equally valid and do not exclude one another. In truth, AI does not differentiate reality from hallucination; humans, by contrast, are capable of playing with reality and of enjoying hallucinations as such.
Through the illusion of autonomy, anthropomorphic robots in the service industry are perceived as more efficient (Lv et al. 2023). This perception arises because such a design simulates a familiar and safe interaction scenario, even though the robots lack true agency. Based on subjective motives, individuals are more positively disposed towards those who are like themselves (the similarity-attraction effect in psychology; Philipp-Muller et al. 2020). However, it is impossible to construct a robot that resembles everyone, that everyone likes, and that evokes only a positive impression. Therefore, alongside the positive perception of anthropomorphic robots, the “uncanny valley” hypothesis (Cheetham 2018) is widely known, according to which people feel a sense of unease or revulsion in such interactions. The reasons named for this phenomenon differ (Kendall 2022); for example, customers may ascribe to the robot malicious intentions, laziness, deliberate politeness, or unacceptable rudeness. It is significant that people often evaluate communication with robots not based on objective criteria but rather through the lens of their own behavioral models, life scenarios, and value systems (Payr 2019). This aligns with Nietzsche’s concept of ressentiment, where individuals, unable to change their circumstances, project their frustrations and values onto external entities. In this case, humans may unconsciously impose their own emotional and existential limitations onto their interactions with AI, reflecting a deeper dissatisfaction with the rigidity and predictability of algorithmic systems. This projection of agency onto AI echoes the same patterns of ressentiment, where humans assign meaning to something external as a way of coping with their own internal limitations (Nietzsche 2023).
A situation arises in which the desire to solve one problem (fear and mistrust of the robot as a mechanical tool) leads to another problem (perceiving the robot as another subject with whom communication needs to be established). In this regard, researchers note that users report safer and more comfortable interactions with robots that look like children or pets (de Visser et al. 2022), or with a so-called “old-aged” robot that looks shabby and long-used, in contrast to the sparkling novelty of a new robot’s armor, which inspires anxiety and awe in inexperienced users (Chirico et al. 2017).
The idea of “pleasure from the ordinary” (Dissanayake 1995), or the pleasure derived from habit and custom, provides a sense of understandable order and significant predictability, which influences the dynamics of agency. In this context, the idea parallels the concept of familiarity (Kamide et al. 2014) in robotics development. Is it justified to draw a direct connection between familiarity, understood as positive emotional and intuitive impressions that reflect individual acceptance of robots, and humanness, defined as similarity to humans in appearance, motion, and internal traits such as mind and will? For example, Kamide distinguishes between different types of familiarity and acceptance—physical, informational, emotional, ecological, and economic—but denies the existence of a direct link between these phenomena. In other words, the humanoid appearance of a robot is no guarantee of human trust, and a trusting relationship can be effectively built in interaction with a robot without anthropological features.
Distinguishing reality from fiction is vital for humanity. Siderits refers to this process as “local utility maximizing” (Siderits 2016), whereby the ability to transform the surrounding environment has been transformed into the ability to transform oneself. Under the influence of robotics and AI, the following transformations of the components of human agency can occur. Self-scrutiny procedures may be distorted by fabricated images and impressions (which explains the popularity of superhero movies), because in our time, it is not enough to be a decent person; one needs to be successful, or even better, outstanding. The functions of self-control as a prerequisite for identity continuity can be significantly improved through working with robots, as it requires following instructions, adhering to safety protocols, concentration, and responsibility, resulting in high speed and efficiency. However, formalized thinking carries the risk of losing initiative, interest, and originality; that is, it leads to human passivity. This conclusion may seem contradictory. However, to adhere to established norms and meet expectations, rebellious inner strength is entirely inappropriate and excessive. Hence, “learned helplessness” and passivity emerge. To act in accordance with personal beliefs and values not dictated by authorities and rules, a break from conventional patterns is required—in other words, inner strength. For humans, these dynamics of self-identification between the strong and weak extremes become much richer with the development of robotics and AI.
Robotics and AI provide an exceptionally broad spectrum of possibilities for humans to shape their forms of agency—and not just in imagination, in a distant future, or exclusively in the digital space (social media), but here and now. For example, a person can identify themselves as having a happy marital relationship with an imaginary character, as Akihiko Kondo did with a hologram (Dooley and Ueno 2022). Of course, such examples may seem strange, but it is relatively safe for a person to experience different versions of themselves: married/divorced, parent/child, aggressor/victim, and so on. The question of the boundary of this safe space for experimenting with self-images is ambiguous and requires further careful study. Undoubtedly, it is an ethical question, because even in the absence of actual interaction with other people, this format of self-identification influences the narrative components of human identity. It is known that human identity is formed in the meaningful space of culture, influenced by people, events, and circumstances that have meaning and value. “We only find and understand ourselves in the gaze of the Other” (Sartre 2021) is a well-known leitmotif of Jean-Paul Sartre’s phenomenology. Technical means do not possess such a gaze; their value lies in their utility. Can increased interaction with robots lead to significant changes in existing narratives of human identity? Yes: first of all, such interaction becomes a priority in the present, here and now, because these technologies provide fast, high-quality results.
Accordingly, individuals project similar expectations onto themselves and others, establishing this mode of interaction as the standard. The issue arises from the understanding that the “human, all too human”, characterized by its imperfections, is what fundamentally defines the uniqueness and value of identity in contrast to the anonymity of the crowd or the templates offered by technological solutions. Thus, unlike the “human-likeness” of robots and AI, a more realistic and perilous scenario emerges in the hybridization of human agency, where individuals risk adopting mechanistic or “machine-like” traits.
This blurring of boundaries, especially when machines demonstrate autonomy and decision-making capabilities, challenges our ability to maintain a clear distinction between human and machine. For example, advanced AI systems like autonomous vehicles can make real-time decisions in complex environments, raising questions about accountability and moral judgment. Similarly, algorithms that personalize user experiences—such as those used in social media or online shopping—can mimic human-like interactions, further complicating our perceptions of agency and choice. These instances highlight how the increasing sophistication of machines leads us to reevaluate our understanding of consciousness, autonomy, and what it means to be human. As we aim to perfect our tools, our tendency to view them as mirrors of ourselves—despite their inherent differences—raises critical questions. This dynamic of seeing robots as having essences similar to our own aligns with the broader contradiction between the imperfect nature of humanity and the desire for perfection in the tools we create.
In our pursuit to create perfect technological beings, we attempt to build flawless versions of ourselves, as if to prove our own strength and superiority. Yet, in this self-satisfying quest, we risk overlooking a crucial aspect of existence: our true potential often lies in our very imperfections. Nietzsche’s philosophy emphasizes that it is through our vulnerabilities and our capacity to adapt to changes that we ensure the survival of our species. Our ability to deviate from strict logic and embrace the chaos of life allows us to discover new pathways for growth and transcend our limitations. This adaptability, rather than the pursuit of perfection, is the fundamental driver of our evolution through enriched experience.
There is an opinion that robotics and AI should be developed by mimicking human thinking and behavior, judgments and reactions, and adaptability and creativity (De Greeff and Belpaeme 2015). It is assumed that in this case, the predictability of possible behaviors and transformations will be higher and more reliable (Van Edwards 2023). Here, we want to turn to Nietzsche’s skepticism about the reliability of scientific conclusions rooted in flawed assumptions, as it reminds us that while robots may excel in efficiency and productivity, they lack the emotional depth and adaptability inherent in human experience. This calls for an examination of our beliefs regarding collaboration with machines. As we integrate robotics into our work, we should acknowledge the qualitative aspects of human existence that cannot be quantified or replicated, ensuring that the rise of automation enriches rather than diminishes the human experience (Dyens 2016).
Nietzsche says the following: “The invention of the laws of numbers was made on the basis of the error, dominant even from the earliest times, that there are identical things (but in fact nothing is identical with anything else); at least that there are things (but there is no ‘thing’). The assumption of plurality always presupposes the existence of something that occurs more than once: but precisely here error already holds sway, here already we are fabricating beings, unities which do not exist. Our sensations of space and time are false, for tested consistently they lead to logical contradictions. The establishment of conclusions in science always unavoidably involves us in calculating with certain false magnitudes: but because these magnitudes are at least constant, as for example are our sensations of time and space, the conclusions of science acquire a complete rigorousness and certainty in their coherence with one another; one can build on them—up to that final stage at which our erroneous basic assumptions, those constant errors, come to be incompatible with our conclusions, for example in the theory of atoms” (Nietzsche 1964, p. 90). According to the Spanish company Alias Robotics, by 2030, the number of working people and robots will be equal (Yaacoub et al. 2022). This makes the security of robotic and AI systems a significant concern. For instance, a hacker attack can completely alter a robot’s algorithm and thus cause significant damage (ibid.), both material and immaterial. The security issues of robot work are numerous and deeply discussed by experts in the fields where they are involved: design and programming (Yaacoub et al. 2022), operation and interaction, quality of work and the emotional state of users, and so on. Based on this, we can conclude that an exclusively instrumental application of robotics and AI technologies is most expedient, particularly within the spectrum of technical solutions as they currently exist. In other words, we assume that the development of these technologies requires not a substantial qualitative variety of functional tasks but rather a more significant quantitative implementation in solving current problems in various areas of social life.
In the context of robotics and AI development, this mimicry and anthropomorphism reflect how we, as humans, project our own perspectives onto what is “new”. Rather than genuinely creating something novel, we impose our cultural structures and traditions—our first loop of restrictions—onto these beings. In the past, such structures, though pervasive, could be more readily challenged or bypassed through art, which, while inherently weak, offered a space for resistance and reinterpretation. However, with the development of technology, these same structures have gained immense controlling power through their physical embodiment in machines and AI. This embodiment amplifies their influence over our bodies and minds, turning this control back onto us. The result is a clear threat: systems that enforce their rules with unprecedented dominance, critically shrinking the space for weak behaviors—those fragile, yet vital acts of freedom and creativity that underpin human agency.

3. Loneliness and Emotional Engagement in a Digital Age: New Configurations of Herd Mentality

The rapid development of technology inherently troubles the human psyche and provokes stress and anxiety (Fekih-Romdhane et al. 2023). With the rise of mass production came a philosophy that alienates humans, subjugates them, and dissolves them in objectified, speculative principles. Standardization and template execution replace originality and craftsmanship. Technology embodies objectified rationality. Adding emotional components to the functionality of AI-powered robots is likewise an attempt to rationalize emotions and utilize them pragmatically (as in affective computing). Therefore, in the behavior of robots, it is fair to speak of an illusion of emotions—their imitation rather than their genuine presence. However, for effective interaction, emotions are as important as rationality. The spectrum of emotional reactions can be extremely wide. Productive interaction does not necessarily require eliciting positive emotions in humans (which can be quite challenging); it is important at least not to provoke negative ones.
The first difference between man and beast, according to Nietzsche, is rational behavior or purposeful activity (Nietzsche 2009). However, the true human essence is revealed in the social dimension, in the coordinates of honor and dignity, the priority of long-term prospects for the common good over current personal benefits. In a certain sense, a person strives to become an “opinion maker” to prescribe the maxim of their own value judgments to other members of the community. Of course, such a communicative experience can be painful, since a clash of different maxims or opinions is inevitable. In dispute, truth is born, and in the dialectic of maxims, there is a reason and condition for sustainable social progress. In this context, a person is an “uncomfortable” communication partner. Can an AI robot become such an ideal conversational partner?
The development of robotics and AI has a significant impact on the content and ways of expressing sociality. On the one hand, companion, service, rehabilitation, and care robots, together with AI applications that facilitate daily routines, reduce the number of social contacts, lead to the isolation of individuals, and minimize the necessity and motivation to communicate with other people. On the other hand, the development of ICT increases the number of relevant and potential communication channels and, accordingly, stimulates communication and interaction skills for both business and leisure.
Automation, which accelerates processes, including communication, affects the specificity of our speech. Compared to artistic texts, business and everyday communication are concise and pragmatic. Language as a semiotic tool, in turn, becomes the object of automation. For example, Large Language Models (LLMs) such as ChatGPT are often used to improve foreign language skills. However, the result of “automating” language as a means of quickly working with texts and large amounts of information is rather superficial. The undeniable advantage of these AI technologies is quick access and a concise and meaningful way of presenting information. But recalling John Searle’s Chinese room thought experiment, we see that AI is currently unable to grasp the symbolic or metaphorical level of language. Therefore, the nature of its information processing corresponds to the principle of “parrot speaking”. Such a format of communication is sufficient for successfully performing specific tasks, but as a conversational partner, ChatGPT is quite strange and predictable.
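The “parrot speaking” point can be made concrete with a deliberately minimal sketch. The toy bigram model below is our illustration (real LLMs are vastly larger and subtler, though likewise statistical in operation): it continues a phrase purely by replaying word transitions observed in its training text, with no representation of meaning, metaphor, or reference.

```python
import random
from collections import defaultdict

# A toy bigram "parrot": it memorizes which word follows which in a
# tiny corpus, then generates text by replaying those statistics.
# Nothing in the model represents what the words mean -- only how
# they happen to co-occur.
corpus = (
    "the spirit is free and the spirit creates "
    "the machine repeats and the machine obeys"
).split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def parrot(seed: str, length: int = 8) -> str:
    """Continue `seed` by sampling observed word transitions."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:  # a dead end: nothing memorized after this word
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(parrot("the"))  # e.g., "the machine repeats and the spirit is free and"
```

Nothing in such a model distinguishes a true continuation from a merely well-formed one; it can only echo the regularities of what it has already ingested.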
Why do people seek interpersonal communication with AI programs like Replika in the first place? We can assume that two factors are at work: the inherent human need for communication and the accessibility of the program, both in terms of its availability (its being “at hand”) and its psychological accessibility (the lack of psycho-emotional barriers in communication). However, such simulated communicative strategies do not add to the quality of human interaction and solidarity; the sense of loneliness in the world population remains high. According to a global survey, 33% of the world’s population feels lonely (Joint Research Centre 2018), and around 30 million European adults frequently feel lonely (ibid.). In Switzerland, in 2017, 38% of the population aged 15 and over said that they felt lonely (Federal Statistical Office 2019). Loneliness has a negative impact on people’s mental and physical health. ICT seemed to be the solution to this problem, but its magnitude only continues to grow in the modern digital era (Schafer et al. 2021; Reedman-Flint et al. 2022): “We live amidst an epidemic of loneliness” (Killeen 1998), “Loneliness is an increasing societal issue worldwide”, and “loneliness is now prevalent among the young” (Pittman and Reich 2016).
The quantity of social interactions and communicative connections does not compensate for the quality of human communication. In other words, “we have a mismatch between the quantity and quality of social relationships that we have, and those that we want” (Perlman and Peplau 1981). Can we hope that robots and AI will help us overcome loneliness? Yes, “robots represent a feasible option to remedy social disconnection” (Hoorn 2018), and not only in the widely discussed HMI context. It has been shown that social robots are effective against situational loneliness rather than chronic loneliness: “chronically lonely people would react more negatively to a social connection opportunity with a social robot than situationally lonely people” (Penner and Eyssel 2022). Therefore, in the context of the issue of human loneliness, robots should be regarded not as communication partners but as tools for facilitating communication with other people. We will illustrate this point with an example of two robots. The first is a robot called Vector, who “is more than a home robot. He’s your buddy. Your companion” (Anki 2024). He is presented to the public with a marketing strategy based on the basic human need for communication and belonging. In this case, the robot is attributed predicates of agency and the ability to establish psycho-emotional connections. However, expressions like “buddy” or “companion” imply a close and positive connection between partners. An example of another strategy is the robot Fribo (Gartenberg 2018), created to enhance communication among young people by sharing information about their leisure activities, tastes, and hobbies. The developers of this robot do not aim to create an alternative to human communication. Their goal is to optimize the quality of communication based on shared interests and preferences. Since automation is necessary to expedite processes, why not apply it to organize quality leisure activities among like-minded individuals or simply interesting people? In order to preserve human agency, at both the personal (individual) and collective (ancestral) levels, a sense of belonging and connectedness is necessary. However, the likely substitution of a real companion with an artificial one is dangerous and can lead to the degradation of situational loneliness into chronic loneliness. In other words, in everyday life, robots and AI should complement rather than replace humans.
Emotions are an “unconditional gift” (Zewe 2022) shared by all mammals (Panksepp 2005). Based on the experience of shared emotions, we can interact with and safely predict the behavior of a collaborative partner. In the process of evaluating, interpreting, and understanding something, various emotional affirmations are used. The existing problem with understanding robot behavior, or rather, learning how to interact with it (Szollosy 2017a), is precisely the lack of a shared emotional background for interaction. Since the quality and quantity of task performance by robots are usually significantly higher than by humans, new concerns arise about the dehumanizing influence of technology (Siderits 2016): the devaluation of the unique in favor of the universal, the emotional in favor of the rational, and the artistic in favor of the pragmatic. For example, an important attribute of a person’s self-identification is “ironic engagements” (Guo et al. 2021), which refers to flexibility in changing social roles (daughter, mother, employee, neighbor, student, teacher, etc.) and variability in the personal acceptance of surrounding events (from fanatical engagement to passive observation). Irony, in this case, signifies the preservation of autonomy, the voluntary and playful nature of activity, or self-determination. Technological processes, on the contrary, are standard and algorithmic, and collaboration with machines often does not require originality and creativity. As Nietzsche argued, a person receives pleasure from nonsense, from non-purposeful action (Nietzsche 2009). On this ground, the problem of alienation becomes more obvious when human everyday practices look like machine operations. Therefore, the alternative between developing assistant robots with humanoid features (Beckerle et al. 2017; Newman et al. 2022) and machines with specific functions is becoming the most prominent tendency in robotics (Stroessner and Benitez 2019).
Nietzsche’s reflections on the herd instinct and the need for belonging reveal a deep tension between individuality and conformity, a theme that resonates in the modern context of technological development. He argues that the human desire to conform to social norms and collective beliefs leads to the surrender of individuality, weakening critical thinking and stifling self-overcoming. This herd mentality, while providing comfort through connection, hinders personal growth and authentic self-expression. Nietzsche contrasts this with the value of independence and solitude, where true personal development occurs. Only by stepping away from the superficiality of social bonds, which are often based on convenience rather than genuine connection, can individuals cultivate a unique perspective and resist being absorbed by the collective. In his broader cultural critique, Nietzsche warns that the human need for belonging, though it stabilizes societies, also perpetuates stagnation. This critique is particularly relevant in an era where technological processes increasingly standardize human behavior, threatening the devaluation of individuality. Following Nietzsche’s train of thought, there is a pressing need to re-evaluate societal values in light of these trends. As technology advances, the challenge of breaking free from the conformist drive it fosters grows even more daunting, making it harder than ever to embrace a truly independent, self-determined existence.
The automation of many production processes should have led to the release of a significant amount of free time for people. However, we can observe that this has not happened; instead, the acceleration of work processes provokes tension in the existential field of humans, whose physical time is often insufficient to properly fulfill all their current obligations. A simple explanation for why the abundance and accessibility of technology do not add but rather take away free time lies in the so-called “productivity paradox” (Kahlon 2020). By automating certain processes, we effectively create new functions and tasks that are necessary to ensure stable processes and expected outcomes. In other words, “we have so much information to take in and so many platforms to manage that we’ve become overwhelmed” (Rhomberg 2020). We should admit that routine mechanical operations are susceptible to automation, whereas actions that require critical thinking, creativity, and improvisation cannot currently be delegated to robots and AI. Therefore, the belief that “... jobs now are more interesting than the repetitive routine jobs that were common in earlier manufacturing companies” (Autor 2015) is justified. Perhaps this is why people often do not notice how much time they spend on fulfilling their work responsibilities, as “the distinction between work and leisure becomes gradually less evident” (Harari 2014), and they perceive their work as a calling, passion, and mission rather than just a contractual arrangement.
Nietzsche’s idea of the herd mentality provides a deeper understanding of these dynamics. The herd mentality, according to Nietzsche, reflects humanity’s tendency to conform to societal norms, surrendering individual freedom in exchange for the security and comfort of collective behavior. This instinct to belong, often at the expense of personal autonomy and critical thinking, can be seen in the way people adapt to the accelerating pace of technological change. In the era of digitalization, this manifests as an increasing dependency on technology to dictate how we spend our time and prioritize our tasks. The automation of production processes, which should ideally have liberated people and granted them more free time, instead creates new demands and expectations. These technological advancements trap individuals in a cycle of productivity, where the need to manage information overload and digital platforms becomes a form of self-imposed enslavement. As Nietzsche argues, the herd mentality weakens the individual capacity for critical thinking and self-overcoming, and this is precisely what happens when people passively accept the pressures of technological acceleration without questioning their own role within this system.
The concept of the productivity paradox aligns with Nietzsche’s critique of the conformity fostered by social and technological pressures. Rather than using the time gained from automation for personal growth or meaningful pursuits, individuals find themselves overwhelmed with new tasks created by technology. This endless cycle of work, which blurs the boundaries between leisure and labor, reflects how easily people fall into routine behaviors dictated by external forces rather than engaging in independent, self-directed activities. Wage labor can be reasonably regarded as time sold from one’s life, during which a person is not free (a slave is never free, while a wage worker is unfree for a certain portion of time). Accordingly, the widespread use of digital technologies increases the amount of this “unfree”, “not one’s own”, or “sold” time. The notion that work has become more interesting and fulfilling, which may obscure the true loss of leisure, also ties into Nietzsche’s observation that the herd often follows what is socially praised or expected, without genuine reflection on whether it aligns with individual needs or desires.
So, as we see, Nietzsche’s concept of freedom is meaningfully connected to the idea of losing free time in the era of digitalization and mechanization. In this modern context, the increasing intrusion of technology into everyday life—through constant connectivity, digital labor, and mechanized routines—can be seen as a new form of societal constraint that limits personal freedom (Nietzsche 2009). Just as Nietzsche warns against the “herd instinct” and the tendency to conform to societal expectations, modern digital and mechanized environments push individuals toward continuous engagement with work and technology, often at the expense of personal reflection and genuine leisure.
In Nietzsche’s view, true freedom requires solitude, intellectual independence, and time for deep self-reflection. However, in today’s digitalized world, the omnipresence of technology fragments our attention, diminishes the quality of leisure, and reduces free time to mere breaks between work-driven tasks. This continuous engagement with technology leaves little room for the introspection and self-overcoming that Nietzsche saw as essential for personal growth and freedom. Digitalization encourages a form of standardization: social media, algorithms, and mechanized labor processes promote uniform behavior, replacing creative and individualistic expression. As we lose control over our free time—whether to digital distractions or to the efficiency-driven demands of mechanization—we risk becoming more like the automated systems we interact with, losing the capacity for this kind of self-determined existence.

4. Moral Rules in Computer Code or Personal Perspective of Responsibility?

Digital ethics is one of the most topical themes for discussion today. This is explained by the growing number of people affected, both at the professional level and at the level of daily use, and by the influence of data technologies on various spheres of life—business, tourism, education, production, healthcare, transportation, mobility, etc. Given the involvement of numerous actors across various stages, including data collection, processing, analysis, and utilization, the “problem of many hands” emerges, complicating the attribution of responsibility for ethical concerns. The question of accountability is therefore highly problematic.
It becomes evident why this issue is increasingly questioned, particularly in light of technology’s influence—or perhaps even governance—over human agency. This question lies at the heart of the ethics of technology. While technology may possess automated decision-making capabilities, it is widely argued that it bears no moral responsibility for the outcomes it produces. But if technology, being created by humans, is absolved of this responsibility, who, then, is accountable? And accountable to whom? What are the distinctions and intersections between human and technological agency? If we approach this inquiry on an ontological level, can it help clarify these differences? Specifically, Nietzsche’s thought contributes to this discussion in two key ways: (1) It highlights the frailties, contradictions, and historical contingencies of human nature. It critiques the fixity of metaphysical systems, idealism, and moral absolutism, emphasizing the historical and psychological origins of human beliefs and values. (2) Nietzsche argues that humanity can overcome its delusions and progress only through reason and critical thinking, which are cultivated within a culture that fosters free spirits capable of rethinking and redirecting epistemic narratives.
Technologies create a lulling hum of civilization, a space of comfort and conformity. The current stage of striving to make this human life-world safe is characterized by an accelerated search for a universal ethical framework for decision making in the use of technologies. The problem lies in the fact that ethical rules—essentially theoretical abstractions that acquired specific content within the horizon of human history—lack such specificity and practical applicability in the modern era. The reason is that digital technologies blur the boundaries between the ideal and the real and thereby change the traditional procedures for verifying actions within the coordinates of good and evil, while also astonishing the imagination with the speed and scope of their impact.
High technologies not only offer a wide range of information but may also limit it, for instance by enclosing users in filter bubbles generated from their tastes and habits (Fourberg et al. 2021), or even distort real circumstances and conditions, as in cases of discriminatory AI bias (European Union Agency for Fundamental Rights 2022). Moreover, human–robot interaction is not yet clearly regulated by law. High technologies also raise problems of intellectual property infringement and data protection: privacy and safety are basic human needs. Today, information about a person (marital status, children, hobbies, tastes, etc.) can be found without her consent, and such data can be used for manipulation or even criminal acts. For example, in the United States, a criminal used an AI-generated child's voice to convince a mother that her child had been kidnapped and demanded a ransom (Reshef 2023). Yet even in such manipulations, the underlying conflict of interest remains human.
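To make the filter-bubble mechanism mentioned above concrete, the following is a deliberately minimal sketch (the topics, function names, and scoring rule are our hypothetical constructions, not any platform's actual algorithm) of engagement-driven recommendation, in which ranking by similarity to past clicks feeds back on itself:

```python
import random

random.seed(1)

TOPICS = ["politics", "sports", "art", "science", "travel"]

def recommend(history: list[str], catalog: list[str], k: int = 3) -> list[str]:
    """Score each topic by how often the user clicked it before, plus tiny noise.

    This is the core of naive personalization: past engagement is the only signal.
    """
    scores = {t: history.count(t) + 0.1 * random.random() for t in catalog}
    return sorted(catalog, key=lambda t: scores[t], reverse=True)[:k]

history = ["politics"]               # a single initial click
for _ in range(20):                  # each round, the user clicks the top item shown
    history.append(recommend(history, TOPICS)[0])

print(set(history))                  # collapses to {'politics'}: the feedback loop
                                     # amplifies the first taste until alternatives vanish
```

No line of this code is malicious; the narrowing of exposure is an emergent property of optimizing for predicted engagement, which is precisely why the resulting limitation of information is so hard to attribute to any single hand.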
After analyzing ethics codes for robotics and AI, we can conclude that they are abstract and inconsistent. For example, Asimov's famous Laws of Robotics "do not address 'real life' and cannot be used in practice" (Szollosy 2017b); a brief sketch after this paragraph illustrates why. The Principles of Robotics (Winfield et al. 2017) emphasize the role of the robot as a tool and place all responsibility and potential guilt on the person who uses or creates this tool. From a logical point of view, this position is justified, but in many practical cases, moral dilemmas are not so straightforward. For example, in a traffic accident involving self-driving cars, the question "who is to blame?" arises, and the spectrum of potential answers includes the driver or passenger, engineer, programmer, infrastructure employee, city authorities, road services, the victims themselves, and so on. Or, in the case of the kidnapping of a companion robot, how should the offense be classified: theft of personal property, pet-napping, or kidnapping? Or, in the context of posthumanism on the one hand and the principles of inclusion and diversity on the other, how should the category of "vulnerable users" (Collins 2017) be delimited? People with physical disabilities? With limited abilities? With little education? Those unable to use modern gadgets and the internet? Those who make no effort to protect their personal data?
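To see concretely why Asimov-style laws resist practical use, consider a deliberately naive sketch (the class, flags, and threshold below are our hypothetical constructions, a thought experiment rather than any real system). Encoding "a robot may not injure a human being" presupposes an operational definition of harm that the code itself cannot supply; every flag and threshold is a human moral judgment smuggled into the program:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    physical_injury: bool       # comparatively easy to encode
    emotional_distress: bool    # already contestable: is distress "injury"?
    long_term_risk: float       # and who sets the acceptable threshold?

def violates_first_law(action: Action, risk_threshold: float = 0.1) -> bool:
    """A naive encoding of Asimov's First Law.

    The predicate covers only harms someone anticipated and quantified in advance;
    the moral judgment lives in the flags and the threshold, not in the 'law'.
    """
    return (action.physical_injury
            or action.emotional_distress
            or action.long_term_risk > risk_threshold)

# A mundane care-robot dilemma: restraining a patient who pulls at an IV line
# causes distress (harm?), while not restraining causes injury (also harm?).
restrain = Action("restrain patient", physical_injury=False,
                  emotional_distress=True, long_term_risk=0.02)
release = Action("let patient remove IV", physical_injury=True,
                 emotional_distress=False, long_term_risk=0.4)

print(violates_first_law(restrain), violates_first_law(release))  # True True
```

Both available actions "violate" the law, so the rule decides nothing; the dilemma is returned, unresolved, to the humans who wrote the predicate, which is exactly the abstractness for which such codes are criticized.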
The British Standard BS 8611, Guide to the ethical design and application of robots and robotic systems (BS 8611 2023), provides detailed guidance on designing and building robots for different areas of application so as to avoid situations that are problematic or dangerous from an ethical and moral perspective. Its creators base it on well-known ethical principles and codes of conduct and propose a constructive approach to the reliable use of robotics under conditions of uncertainty. But how effective and sufficient is such an empirical, descriptive approach? Is it possible to predict all possible use cases of robots and AI? Like any other tool, robotics and AI can be used in various fields, including illegal and criminal ones, such as route planning for drug trafficking, tax evasion, fraud, or industrial espionage. Thousands of devices in homes, offices, and companies worldwide can potentially be embedded with programs that collect data about people's tastes and habits, daily routines, and special incidents, as well as their material, physiological, and psycho-emotional states.
This is not just an intrusion into personal life. Such a situation can become the basis for total control by the state or by individual corporations. It is justifiable to conclude that universal ethical rules for human interaction with robots, as liminal objects, remain unclear yet are necessary to avoid potential risks to people's physical and mental health. A violation of moral norms by robots or AI is a consequence of the imperfection of human nature, both biological and social. Well-known cases of AI programs discriminating against people (European Parliamentary Research Service 2020; Datatron 2024) result from systematic errors in machine learning, and harm caused by robots results from a programmer's imperfect work or a worker's negligence and carelessness, not from the robot's malicious intent. Lacking subjectivity, a robot can cause harm but cannot commit violence: violence implies the presence of emotions, whereas robots and AI are pure rationality, code, and an algorithm for achieving a goal set by humans.
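Returning to the systematic errors mentioned above, the following minimal sketch (synthetic data and hypothetical group labels of our own construction, not a reconstruction of any cited case) shows how a model trained to imitate historically skewed decisions reproduces the skew without any malicious intent:

```python
import random
from collections import defaultdict

random.seed(0)

def past_decision(group: str, qualified: bool) -> bool:
    """Simulated historical human decisions containing discrimination against group B."""
    base = 0.9 if qualified else 0.2
    penalty = 0.3 if group == "B" else 0.0   # the historical bias to be 'learned'
    return random.random() < max(base - penalty, 0.0)

# Generate a training set of past decisions.
data = [(g, q, past_decision(g, q))
        for g in ("A", "B") for q in (True, False) for _ in range(5000)]

# A 'model' that learns the empirical approval rate per (group, qualified) cell:
# the limiting behavior of any classifier trained on these features and labels.
counts = defaultdict(lambda: [0, 0])
for g, q, approved in data:
    counts[(g, q)][0] += approved
    counts[(g, q)][1] += 1

for (g, q), (yes, n) in sorted(counts.items()):
    print(f"group={g} qualified={q}: learned approval rate {yes / n:.2f}")
# Equally qualified applicants in group B receive systematically lower rates:
# the program commits no malice; it faithfully encodes the bias in its data.
```

The disparity in the output originates entirely in the human-made labels, supporting the claim that such harms trace back to human imperfection rather than to machine intent.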
Ethical rules in society perform their regulatory functions on the basis of the moral emotions inherent to humans (guilt, conscience, duty, shame, etc.). If affective computing technologies are developed, such as the incorporation of basic emotions into robotics, can we assume that moral and ethical norms could be implemented in computer code (Hieida and Nagai 2022)? Would such research provide more information about the mechanisms of human moral emotions? And would these become effective regulators of robot use and behavior? Unlike basic emotions, moral emotions are not merely reactive but presuppose a certain level of reflection, the ability to identify oneself and others, and therefore social abilities. What challenges does the development of robotics and AI pose for the ethics of human relationships? An interesting experiment described by Christakis (2019) demonstrates the fragility of the ethical regulations that govern interactions in society when faced with even minor negative influences. In digital communication, participants typically demonstrate friendliness and selflessness; however, if artificial intelligence algorithms interfere in such communication by spreading aggressive and offensive expressions or unjust and selfish evaluations and decisions, the entire community is quickly influenced by these very "worst" examples of communication.
Friendship, trust, and cooperation can easily be disrupted by the selfish (though rational from a short-term perspective of benefit) behavior of a robot with AI. Sociality, the ability to collaborate for the common good, and the conscious limiting of natural selfishness are essentially conditions for the survival of our species. Can robots and AI destroy or significantly modify this centuries-long evolutionary mechanism that forms human agency and, as a result, the entire field of culture? When interacting with another subject, we assume a shared emotional context of interaction, an empathic connection, and a minimum set of moral values and regulations. Is it justified to introduce elements of moral self-awareness and ethical behavior into robot programming algorithms as a variation of "no harm by design"? Bryson (2018) argues against this strategy: morality implies suffering (one's own or others'), and seeking to teach robots compassion may lead to the opposite result, in which the efficiency of robots and AI is evaluated not in terms of increasing benefit and reducing risk and harm but in terms of legal practice. In that case, the ontological status of a robot would no longer be liminal but equal to that of a living being. It is obvious that such argumentation is sophistical and meaningless.
When considering digital ethics, it is important to recognize the inherent subjectivity of human beings and their freedom of self-determination. However, a passive attitude toward data issues, a 'privacy apathy', has been observed: people "specifically believe that privacy violations are inevitable and opting out is not an option" (Hargittai and Marwick 2016). The focus falls on correcting errors and discrepancies rather than on actively preventing them, which can lead to harmful situations. This passivity can be explained through the concept of the "escape from freedom" (Fromm 2014): a tendency to accept the choices offered out of fear of making autonomous decisions and taking responsibility for them.
Let us recall that, according to Nietzsche, weak behavior/strong nature occurs when an individual, breaking established patterns, is guided by his or her own will, measure, or conscience, while strong behavior/weak nature occurs when an individual acts within the framework of social rules. Digital ethics represents a qualitatively new format of interaction, in which 'strong', that is, well-established and tested, rules do not yet exist. Everyone is therefore forced, to some extent, to improvise, to test various formats and cases, and to experiment with different strategies and techniques of 'weak' behavior. The artificial creation and enforcement of 'strong' scenarios, that is, of fixed frameworks in digital ethics, does not work; Luciano Floridi refers to such agency as "ethics washing" (Floridi 2008), and recognizing this fact could transform how ethical values are implemented in real-world processes. Against the backdrop of this dichotomy of strong/weak nature and strong/weak behavior, it becomes clear why widely accepted values of digital ethics, such as transparency and accountability, are so difficult to implement in a highly competitive business environment: implementing the principle of transparency entails revealing the vulnerable aspects of a company and its technologies, products, or services, while accountability requires additional effort and investment in improving the company's organizational culture.
Building on Spinoza's (2002) concept of freedom as "recognized necessity", we suggest that active responsibility requires not only an understanding of the relevance of digital ethics but also a personal conviction of it. Such conviction serves as a strong motivational stimulus for proactive ethical strategies in dealing with digital technologies in professional and private life. We emphasize the need to personalize ethical values through lived engagement and the cathartic experience of art as a means of transcendence, which, at a minimum, involves rejecting harmful masculinity (Sanders et al. 2024). A compelling example is the artist Johanna Burai's 2015 project addressing racial bias in Google Image search results (Velkova and Kaun 2021). By creating a platform offering diverse images of non-white hands and launching a targeted media campaign, Burai successfully disrupted algorithmic biases and elevated underrepresented images in search rankings. This vividly demonstrates the importance of user agency and creative culture in countering the deterministic narratives of algorithmic systems. As technological influence on society creates a situation of uncertainty, preserving creative spaces becomes crucial for humanity's resilience, underscoring art's role in critical engagement and in safeguarding human values. Personal responsibility thus stands at the crossroads of the dialectical relation between sociality and personality (see Figure 1).

5. Discussion

Nietzsche is among the thinkers who shifted the paradigm of Western philosophy away from pure rationalism and logocentrism, emphasizing the uniqueness of human essence. Today, his work holds even greater significance as logocentrism regains influence, deeply embedded in the functioning of technology and its impact on our daily lives. Nietzsche's critique of absolute truths and his exploration of the will to power help us challenge the frameworks imposed by technological systems and demonstrate the importance of reevaluating human values in an increasingly mechanized world. In the digital era, the distinction between strong and weak behaviors that Nietzsche highlighted becomes increasingly blurred, as technology objectifies and amplifies human tendencies. The mechanization of thought and action, guided by rigid algorithms, transforms our internal patterns, our repetitive loops of behavior, thought, and emotion, into external systems that now structure our social framework. In this way, we may become captives of our personal "hells", as our once fluid and spontaneous human qualities are mirrored and reinforced by technology, locking us into predictable and deterministic patterns. This objectification of human behavior leaves little room for what Nietzsche considered "weak" behaviors, which he saw not as inherently negative but as essential aspects of our existence that allow for vulnerability, reflection, and growth through error.
Thus, Nietzsche's question about the survival of "free spirits" has gained new relevance. As humans aim to create perfect technology, they project their own qualities onto machines, a process close to what Nietzsche critiqued as an extension of the "herd instinct": the tendency to conform to social norms and suppress individuality. This drive to standardize human behavior through technology mirrors broader societal pressures toward conformity. In Nietzsche's context, the ethics of technology must be approached with careful attention to individual experiences, embracing the paradox that human errors, often seen as flaws to be eliminated, are in fact crucial for personal growth and ethical development. Traditional ethical frameworks, especially in the realm of technology, tend to focus on minimizing mistakes and maximizing efficiency, which aligns with the mechanized nature of AI and automation. However, Nietzsche's philosophy challenges this utilitarian approach by highlighting the value of human imperfection. It is through our missteps, irrational decisions, and emotional responses that we gain deeper insights into life, challenge established norms, and foster genuine moral progress.
In contrast, technology, driven by strict algorithms, seeks to eliminate these “errors”, promoting a flawless and predictable system that stifles the unpredictability of human existence. Yet, this drive toward perfection risks diminishing the richness of ethical life, which depends on the ability to acknowledge moral ambiguities, make mistakes, and learn from them. Nietzsche warns against the suppression of individual will and creativity, arguing that true ethical engagement arises from the freedom to err and reflect on those errors. Therefore, the ethics of technology should not aim to eradicate human flaws but instead recognize the irreplaceable role they play in shaping moral consciousness and fostering a deeper understanding of what it means to be human. This perspective calls for a technological future that preserves room for human spontaneity, allowing for mistakes that lead to growth, rather than imposing rigid moral codes that reduce human experience to mechanical precision.
In this context, the dominance of “strong” behaviors—characterized by efficiency, control, and conformity—threatens to eradicate the space for mistakes, uncertainties, and non-linear growth, which are necessary for true personal development. Thus, a new ethical approach is required; one that ensures space for these “weak” behaviors, recognizing that they foster creativity, compassion, and humaneness. Instead of seeking to eliminate human errors through technological precision, ethics in the age of digitalization must allow for imperfection and cultivate environments where vulnerability and reflection are valued. By doing so, we can resist the tendency to be absorbed into the cold rationality of objectified technological logic and preserve the richness of human life, which thrives on the dynamic interplay between strength and weakness, rationality and emotion, and error and learning.

Author Contributions

Conceptualization, A.S. and O.Y.; methodology, A.S.; investigation, A.S.; resources, A.S. and O.Y.; writing—original draft preparation, A.S.; writing—review and editing, A.S. and O.Y.; visualization, A.S.; supervision, O.Y.; project administration, O.Y.; funding acquisition, O.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of the project "A future that works: Cobotics, digital skills and the re-humanization of the workplace (CODIMAN)", which is supported by the Swiss National Science Foundation (SNSF) as part of the National Research Program NRP77 Digital Transformation, grant no. 407740_187298.

Data Availability Statement

No data were generated for this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Anki. 2024. Robot Vector. Available online: https://anki.com/en-us/vector.html (accessed on 4 October 2024).
  2. Autor, David H. 2015. Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives 29: 3–30. [Google Scholar] [CrossRef]
  3. Barrat, James. 2023. Our Final Invention: Artificial Intelligence and the End of the Human Era. London: Hachette UK. [Google Scholar]
  4. Beckerle, Philipp, Gionata Salvietti, Ramazan Unal, Domenico Prattichizzo, Simone Rossi, Claudio Castellini, Sandra Hirche, Satoshi Endo, Heni Ben Amor, Matei Ciocarlie, and et al. 2017. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics. Frontiers in Neurorobotics 11: 24. [Google Scholar] [CrossRef]
  5. Bicen, Huseyin, and Ahmet Arnavut. 2015. Determining the effects of technological tool use habits on social lives. Computers in Human Behavior 48: 457–62. [Google Scholar] [CrossRef]
  6. Branston, Tyler. 2023. AGI, All Too Human; Nietzsche and Artificial General Intelligence. Ph.D. dissertation, University of Victoria, Victoria, BC, Canada. [Google Scholar]
  7. Bryson, Joanna J. 2018. Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology 20: 15–26. [Google Scholar] [CrossRef]
  8. BS 8611:2023. 2023. Robots and Robotic Devices. Ethical Design and Application of Robots and Robotic Systems. Guide. Available online: https://www.en-standard.eu/bs-8611-2023-robots-and-robotic-devices-ethical-design-and-application-of-robots-and-robotic-systems-guide/ (accessed on 1 September 2024).
  9. Cheetham, Marcus, ed. 2018. The Uncanny Valley Hypothesis and Beyond. Lausanne: Frontiers Media SA. [Google Scholar]
  10. Chirico, Alice, Pietro Cipresso, David B. Yaden, Federica Biassoni, Giuseppe Riva, and Andrea Gaggioli. 2017. Effectiveness of immersive videos in inducing awe: An experimental study. Scientific Reports 7: 1218. [Google Scholar] [CrossRef] [PubMed]
  11. Christakis, Nicholas A. 2019. How AI Will Rewire Us. The Australian Financial Review. Available online: https://www.afr.com/technology/how-ai-will-rewire-us-20190326-p517ki (accessed on 18 October 2024).
  12. Clark, Maudemarie. 1990. Nietzsche on Truth and Philosophy. Cambridge: Cambridge University Press. [Google Scholar]
  13. Coeckelbergh, Mark. 2017. New Romantic Cyborgs: Romanticism, Information Technology, and the End of the Machine. Cambridge: MIT Press. [Google Scholar]
  14. Collins, Emily C. 2017. Vulnerable users: Deceptive robotics. Connection Science 29: 223–29. [Google Scholar] [CrossRef]
  15. Datatron. 2024. Real-Life Examples of Discriminating Artificial Intelligence. Available online: https://datatron.com/real-life-examples-of-discriminating-artificial-intelligence/ (accessed on 4 October 2024).
  16. De Greeff, Joachim, and Tony Belpaeme. 2015. Why robots should be social: Enhancing machine learning through social human-robot interaction. PLoS ONE 10: e0138061. [Google Scholar] [CrossRef] [PubMed]
  17. de Visser, Ewart J., Yigit Topoglu, Shawn Joshi, Frank Krueger, Elizabeth Phillips, Jonathan Gratch, Chad C. Tossell, and Hasan Ayaz. 2022. Designing man’s new best friend: Enhancing human-robot dog interaction through dog-like framing and appearance. Sensors 22: 1287. [Google Scholar] [CrossRef]
  18. Dissanayake, Ellen. 1995. The pleasure and meaning of making. American Craft 55: 40–45. [Google Scholar]
  19. Dooley, Ben, and Hisako Ueno. 2022. This Man Married a Fictional Character. He’d Like You to Hear Him Out. The New York Times, April 24. Available online: https://www.nytimes.com/2022/04/24/business/akihiko-kondo-fictional-character-relationships.html (accessed on 24 September 2024).
  20. Dyens, Ollivier. 2016. The Human/Machine Humanities: A Proposal. Humanities 5: 17. [Google Scholar]
  21. European Parliamentary Research Service. 2020. The Ethics of Artificial Intelligence: Issues and Initiatives. London: European Parliamentary Research Service. [Google Scholar]
  22. European Union Agency for Fundamental Rights. 2022. Bias in Algorithms—Artificial Intelligence and Discrimination. Vienna: European Union Agency for Fundamental Rights. [Google Scholar]
  23. Federal Statistical Office. 2019. Feeling Loneliness. Last Modified 2019. Available online: https://www.statista.com/statistics/1104187/us-adults-social-media-loneliness/ (accessed on 14 September 2024).
  24. Fekih-Romdhane, Feten, Haitham Jahrami, Rami Away, Khaled Trabelsi, Seithikurippu R. Pandi-Perumal, Mary V. Seeman, Souheil Hallit, and Majda Cheour. 2023. The relationship between technology addictions and schizotypal traits: Mediating roles of depression, anxiety, and stress. BMC Psychiatry 23: 67. [Google Scholar] [CrossRef] [PubMed]
  25. Floridi, Luciano. 2008. Foundations of information ethics. In The Handbook of Information and Computer Ethics. Edited by Kenneth Einar Himma and Herman T. Tavani. Hoboken: Wiley, pp. 1–23. [Google Scholar]
  26. Fourberg, Niklas, Tas Serpil, Lukas Wiewiorra, Ilsa Goldovitch, Alexandre De Streel, Herve Jacquemin, Jordan Hill, Madalina Nunu, Camille Bourguigon, Florian Jacques, and et al. 2021. Online Advertising: The Impact of Targeted Advertising on Advertisers, Market Access and Consumer Choice. Bruxelles: European Parliament. [Google Scholar]
  27. Fromm, Erich. 2014. The escape from freedom. In An Introduction to Theories of Personality. London: Psychology Press, pp. 121–35. [Google Scholar]
  28. Gartenberg, Chaim. 2018. Meet Fribo, a Robot Built for Lonely Young People. The Verge, April 5. Available online: https://www.theverge.com/2018/4/5/17201646/fribo-robot-social-lonely-young-people-home (accessed on 18 October 2024).
  29. Grève, Sebastian Sunday. 2024. Nietzsche and the Machines. The Philosophers’ Magazine. Available online: https://philosophersmag.com/nietzsche-and-the-machines/ (accessed on 14 September 2024).
  30. Guo, Yao, Xiao Gu, and Guang-Zhong Yang. 2021. Human–Robot Interaction for Rehabilitation Robotics. Cham: Springer International Publishing, pp. 269–95. [Google Scholar]
  31. Harari, Yuval Noah. 2014. Sapiens: A Brief History of Humankind. New York: Random House. [Google Scholar]
  32. Hargittai, Eszter, and Alice Marwick. 2016. “What can I really do?” Explaining the privacy paradox with online apathy. International Journal of Communication 10: 3737–57. [Google Scholar]
  33. Hayles, N. Katherine. 2000. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press. [Google Scholar]
  34. Hieida, Chie, and Takayuki Nagai. 2022. Survey and perspective on social emotions in robotics. Advanced Robotics 36: 17–32. [Google Scholar] [CrossRef]
  35. Hoorn, Johan F. 2018. From lonely to resilient through humanoid robots: Building a new framework of resilience. Journal of Robotics 2018: 8232487. [Google Scholar] [CrossRef]
  36. Issembert, Beni Beeri. 2023. Nietzsche’s three metamorphoses and their relevance to artificial intelligence development. Philosophy and Technology 36: 39. [Google Scholar]
  37. Joint Research Centre. 2018. Loneliness—An Unequally Shared Burden in Europe. Brussels: European Commission. Available online: https://knowledge4policy.ec.europa.eu/sites/default/files/fairness_pb2018_loneliness_jrc_i1.pdf (accessed on 14 September 2024).
  38. Kahlon, Param. 2020. Overcoming the Productivity Paradox with RPA. Available online: https://www.uipath.com/blog/rpa/overcoming-productivity-paradox-with-rpa (accessed on 21 September 2024).
  39. Kamide, Hiroko, Koji Kawabe, Satoshi Shigemi, and Tatsuo Arai. 2014. Relationship between familiarity and humanness of robots–quantification of psychological impressions toward humanoid robots. Advanced Robotics 28: 821–32. [Google Scholar] [CrossRef]
  40. Kaufmann, Walter A. 2013. Nietzsche: Philosopher, Psychologist, Antichrist. Princeton: Princeton University Press. [Google Scholar]
  41. Kendall, Emily. 2022. Uncanny Valley. Chicago: Encyclopedia Britannica. [Google Scholar]
  42. Killeen, Colin. 1998. Loneliness: An epidemic in modern society. Journal of Advanced Nursing 28: 762–70. [Google Scholar] [CrossRef]
  43. Kosar, Anthony. 2024. Nietzschean Language Models and Philosophical Chatbots: Outline of a Critique of AI. The Agonist 18: 7–17. [Google Scholar] [CrossRef]
  44. Kroker, Arthur. 2004. The Will to Technology and the Culture of Nihilism: Heidegger, Nietzsche, and Marx. Toronto: University of Toronto Press. [Google Scholar]
  45. Leiter, Brian. 2003. The Routledge Philosophy Guidebook to Nietzsche on Morality. London: Routledge. [Google Scholar]
  46. Leiter, Brian. 2019. Moral Psychology with Nietzsche. New York: Oxford University Press. [Google Scholar]
  47. Lv, Linxiang, Minxue Huang, and Ruyao Huang. 2023. Anthropomorphize service robots: The role of human nature traits. The Service Industries Journal 43: 213–37. [Google Scholar] [CrossRef]
  48. Mellamphy, Dan, and Nandita Biswas Mellamphy. 2016. The Digital Dionysus: Nietzsche and the Network-Centric Condition. Berlin/Heidelberg: Springer. [Google Scholar]
  49. Moore, Gregory, and Thomas H. Brobjer, eds. 2003. Nietzsche and Science. Berlin/Heidelberg: Springer. [Google Scholar]
  50. Newman, Benjamin A., Reuben M. Aronson, Kris Kitani, and Henny Admoni. 2022. Helping people through space and time: Assistance as a perspective on human-robot interaction. Frontiers in Robotics and AI 8: 720319. [Google Scholar] [CrossRef]
  51. Nietzsche, Friedrich Wilhelm. 1929. Beyond Good and Evil. Translated by Helen Zimmern. Bucharest: SC Active Business Development SRL. [Google Scholar]
  52. Nietzsche, Friedrich Wilhelm. 1964. The Will to Power: An Attempted Transvaluation of All Values. Translated by W. Kaufmann, and R. J. Hollingdale. New York: Russell & Russell. [Google Scholar]
  53. Nietzsche, Friedrich Wilhelm. 1999. The Birth of Tragedy and Other Writings. Cambridge: Cambridge University Press. [Google Scholar]
  54. Nietzsche, Friedrich Wilhelm. 2006. Human, All Too Human: A Book for Free Spirits. Translated by R. J. Hollingdale. Cambridge: Cambridge University Press. First published 1878. [Google Scholar]
  55. Nietzsche, Friedrich Wilhelm. 2009. Human, All-Too-Human: Parts One and Two. New York: Great Books in Philosophy. [Google Scholar]
  56. Nietzsche, Friedrich Wilhelm. 2023. On the Genealogy of Morality. Peterborough: Broadview Press. [Google Scholar]
  57. Nietzsche, Friedrich Wilhelm, and Reginald John Hollingdale. 2020. Thus Spoke Zarathustra. In The Routledge Circus Studies Reader. London: Routledge, pp. 461–66. [Google Scholar]
  58. Panksepp, Jaak. 2005. Affective consciousness: Core emotional feelings in animals and humans. Consciousness and Cognition 14: 30–80. [Google Scholar] [CrossRef] [PubMed]
  59. Payr, Sabine. 2019. In search of a narrative for human–robot relationships. Cybernetics and Systems 50: 281–99. [Google Scholar] [CrossRef]
  60. Penner, Angelika, and Friederike Eyssel. 2022. Germ-free robotic friends: Loneliness during the COVID-19 pandemic enhanced the willingness to self-disclose towards robots. Robotics 11: 121. [Google Scholar] [CrossRef]
  61. Perlman, Daniel, and L. Anne Peplau. 1981. Toward a social psychology of loneliness. Personal Relationships 3: 31–56. [Google Scholar]
  62. Philipp-Muller, Aviva, Laura E. Wallace, Vanessa Sawicki, Kathleen M. Patton, and Duane T. Wegener. 2020. Understanding when similarity-induced affective attraction predicts willingness to affiliate: An attitude strength perspective. Frontiers in Psychology 11: 1919. [Google Scholar] [CrossRef] [PubMed]
  63. Pittman, Matthew, and Brandon Reich. 2016. Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words. Computers in Human Behavior 62: 155–67. [Google Scholar] [CrossRef]
  64. Prescott, Tony J. 2017. Robots are not just tools. Connection Science 29: 142–49. [Google Scholar] [CrossRef]
  65. Proust, Marcel. 2013. Swann’s Way: In Search of Lost Time. New Haven: Yale University Press, vol. 1. [Google Scholar]
  66. Reedman-Flint, Dominic, John Harvey, James Goulding, and Gary Priestnall. 2022. I Wandered Lonely in the Cloud: A Review of Loneliness, Social Isolation and Digital Footprint Data. Paper presented at the 6th International Conference on Computer-Human Interaction Research and Applications (CHIRA 2022), Valletta, Malta, October 27–28; pp. 225–35. [Google Scholar]
  67. Reginster, Bernard. 2003. What is a free spirit? Nietzsche on fanaticism. Journal of the History of Philosophy 41: 585–610. [Google Scholar] [CrossRef]
  68. Reshef, Erielle. 2023. Kidnapping Scam Uses Artificial Intelligence to Clone Teen Girl’s Voice, Mother Issues Warning. Available online: https://abc7news.com/ai-voice-generator-artificial-intelligence-kidnapping-scam-detector/13122645/ (accessed on 24 September 2024).
  69. Rhomberg, Charlie. 2020. Why Hasn’t All This Technology Given Us More Leisure Time? Available online: https://www.uipath.com/blog/digital-transformation/what-happened-four-hour-workweek (accessed on 24 September 2024).
  70. Sanders, Steven Michael, Claudia Garcia-Aguilera, Nicholas C. Borgogna, John Richmond T. Sy, Gianna Comoglio, Olivia AM Schultz, and Jacqueline Goldman. 2024. The Toxic Masculinity Scale: Development and Initial Validation. Behavioral Sciences 14: 1096. [Google Scholar] [CrossRef]
  71. Sartre, Jean-Paul. 2021. Nausea. London: Penguin UK. [Google Scholar]
  72. Schafer, Valérie, Gabriele Balbi, Nelson Ribeiro, and Christian Schwarzenegger. 2021. Digital Roots: Historicizing Media and Communication Concepts of the Digital Age. Vienna: De Gruyter, p. 318. [Google Scholar]
  73. Siderits, Mark. 2016. Personal Identity and Buddhist Philosophy: Empty Persons. London: Routledge. [Google Scholar]
  74. Spengler, Oswald. 1991. The Decline of the West. Translated by A. Helps, and C. F. Atkinson. Oxford: Oxford University Press. [Google Scholar]
  75. Spinoza, Baruch. 2002. Spinoza: Complete Works. Indianapolis: Hackett Publishing. [Google Scholar]
  76. Szollosy, Michael. 2017a. EPSRC Principles of Robotics: Defending an obsolete human (ism)? Connection Science 29: 150–59. [Google Scholar] [CrossRef]
  77. Szollosy, Michael. 2017b. Freud, Frankenstein and our fear of robots: Projection in our cultural perception of technology. Ai & Society 32: 433–39. [Google Scholar]
  78. Stroessner, Steven J., and Jonathan Benitez. 2019. The social perception of humanoid and non-humanoid robots: Effects of gendered and machinelike features. International Journal of Social Robotics 11: 305–15. [Google Scholar] [CrossRef]
  79. Van Edwards, Vanessa. 2023. Human Robot Interaction: The Psychology of Working Together. Available online: https://www.scienceofpeople.com/human-robot-interaction/ (accessed on 7 September 2024).
  80. Velkova, Julia, and Anne Kaun. 2021. Algorithmic resistance: Media practices and the politics of repair. Information, Communication & Society 24: 523–40. [Google Scholar]
  81. Winfield, A., Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, and et al. 2017. Principles of robotics: Regulating robots in the real world. Connection Science 29: 124–29. [Google Scholar]
  82. Yaacoub, Jean-Paul A., Hassan N. Noura, Ola Salman, and Ali Chehab. 2022. Robotics cyber security: Vulnerabilities, attacks, countermeasures, and recommendations. International Journal of Information Security 21: 115–58. [Google Scholar] [CrossRef]
  83. Zahira, Syifa Izzati, Fauziah Maharani, and Wily Mohammad. 2023. Exploring emotional bonds: Human-AI interactions and the complexity of relationships. Serena: Journal of Artificial Intelligence Research 1: 1–9. [Google Scholar]
  84. Zewe, Adam. 2022. How to Help Humans Understand Robots. Available online: https://news.mit.edu/2022/humans-understand-robots-psychology-0302 (accessed on 14 September 2024).
Figure 1. Concept of personal responsibility.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
