Article

The Ethics and Cybersecurity of Artificial Intelligence and Robotics in Helping the Elderly to Manage at Home

1 Unit W, Laurea University of Applied Sciences, 02650 Espoo, Finland
2 COJOT, 02270 Espoo, Finland
* Author to whom correspondence should be addressed.
Information 2024, 15(11), 729; https://doi.org/10.3390/info15110729
Submission received: 5 September 2024 / Revised: 4 November 2024 / Accepted: 12 November 2024 / Published: 15 November 2024
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)

Abstract: The aging population, combined with the scarcity of healthcare resources, presents significant challenges for our society. The use of artificial intelligence (AI) and robotics offers a potential solution to these challenges. However, such technologies also raise ethical and cybersecurity concerns related to the preservation of privacy, autonomy, and human contact. In this case study, we examine these ethical challenges and the opportunities brought by AI and robotics in the care of older individuals at home. This article aims to describe the current fragmented state of legislation related to the development and use of AI-based services and robotics and to reflect on their ethics and cybersecurity. The findings indicate that, guided by ethical principles, we can leverage the best aspects of technology while ensuring that older people can maintain a dignified and valued life at home. The careful handling of ethical issues should be viewed as a competitive advantage and an opportunity, rather than a burden.


1. Introduction

The growth of the elderly population and the scarcity of healthcare resources are challenges that our society faces simultaneously. The use of artificial intelligence (AI) and robotics to help older people manage at home offers an opportunity to meet these challenges. However, such technologies also raise ethical questions related to the protection of privacy, autonomy, and the preservation of the human touch. The SHAPES project, funded by the European Union’s Horizon 2020 program, aimed to create an ethical digital service ecosystem to support the well-being of the aging population both at home and outside it [1]. The project involved designing, building, piloting, and implementing a standardized digital system platform that utilizes digital services and solutions to promote independent living and health for older individuals.
Care robots have been widely discussed both in academic circles and in public, especially during the past few decades [2,3]. Expectations are high, and numerous ethical questions and possible risks have been raised in discussions of care robots [4]. Many of these concerns, however, are largely unfounded. The real questions regarding care robots that require ethical reflection concern the margin of error of care robots’ decisions and the data used in machine learning, not assumed future scenarios in which robots replace people in care work. Care robots also suffer from the same cybersecurity problems that have plagued computers for decades.
The ethical use of artificial intelligence and robotics in elderly care focuses on respecting autonomy, protecting privacy, and promoting equality. While these technological applications can assist with daily activities, they must not undermine the decision-making of the elderly. Continuous monitoring raises concerns about data privacy, which in turn necessitates the secure handling of personal information. Additionally, AI and robotics can never replace human interaction; thus, their use should be complementary in nature, providing social support to ensure that the elderly feel safe and valued. Economic disparities may also lead to unequal access to care. Therefore, ethical guidelines are crucial to balance the benefits of technology with the dignity and individual needs of the elderly.
This case study examines the aforementioned ethical challenges and the opportunities that cyber-secure robotics and AI-based services offer in the care of elderly people living at home. The research question is as follows: how are ethics understood when developing AI-based, cyber-secure services that help aging people manage at home? Previous studies address ethics through a single framework, such as biomedical ethics [5], care ethics [6], or cybersecurity ethics, or at most through a combination of two frameworks, as Loi et al. [7] do when examining the relationship between the core tasks of information technology and biomedical ethics. This research combines several different, partially conflicting ethical frameworks in a previously unexplored manner. The structure of this article is as follows: Section 2 provides a review of the research related to the topic of the article. Section 3 describes how the case study methodology has been applied in this article and what research materials have been utilized. Section 4 presents the findings of the study, which are discussed in Section 5. Section 6 concludes the article with future work.

2. Literature Review

In this section, we present a comprehensive review of the literature related to the topic of our paper. We examine several studies that have focused on the ethical issues of artificial intelligence and cybersecurity in healthcare.

2.1. Use of Artificial Intelligence in the Healthcare Supply

One of the major concerns for the future is the aging of the population and the resulting threat to the security of the healthcare supply. The baby boomers are retiring or have already retired, and the younger generations can no longer meet the labor demand created by these retirements [8]. The quality of elderly care is already seen as problematic, and the public has seen these problems manifest in, for example, the constant rush of caregivers and the deterioration of service quality. The problem is multifaceted, and numerous alternatives have been presented by different parties [9].
Artificial intelligence is a very broad concept whose possibilities are almost limitless. AI systems, especially machine learning (ML), are widely used in the healthcare sector, in applications such as the assessment of patients’ treatment needs, the allocation of resources, and diagnostic analysis, for example of X-rays. Mainly, these systems support professionals and do not make independent decisions by themselves. ML algorithms assist the general practitioner in distinguishing the cases that require a specialist appointment. Artificial intelligence is used in healthcare to identify risk groups and symptoms [10]. AI has been taught to identify blood vessel stenosis, retinal diseases, and skin cancers from images. One example of using artificial intelligence is recognizing depression from a person’s voice and speech [11]. AI is used in, for example, pharmacy (drug research), cardiology (e.g., monitoring of heart diseases), sleep disorder diagnosis, lung disease diagnostics, oncology (tumor diagnostics), eye disease diagnosis (e.g., cataract diagnosis), and mental health disorders such as schizophrenia [12]. Artificial intelligence is used to evaluate medication, select necessary treatment procedures, and create forecast models [8]. AI can be used to identify various risk factors in patients, for example, osteoporosis, brain aneurysm, dementia, mental health disorders, and heart diseases [10]. In addition, AI lends itself well to preventive healthcare [10].
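To make the decision-support role concrete, the sketch below trains a simple risk classifier on synthetic data. It is purely illustrative and assumes scikit-learn is available; the features, data, and label rule are invented for the example and are not drawn from the studies cited above. A real clinical model would require validated data, extensive evaluation, and regulatory approval.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features for 200 synthetic patients: age, systolic BP, smoker flag
age = rng.normal(70, 8, 200)
bp = rng.normal(140, 15, 200)
smoker = rng.integers(0, 2, 200)
X = np.column_stack([age, bp, smoker])
# Synthetic "high risk" label, loosely driven by age and blood pressure
y = (0.04 * age + 0.02 * bp + rng.normal(0, 0.5, 200) > 5.6).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
# The model flags a case for a clinician rather than deciding autonomously
risk = model.predict_proba([[78, 155, 1]])[0, 1]
print(f"Estimated probability of high risk: {risk:.2f}")
```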
The use of AI in healthcare and well-being has great potential. Generative AI models that can communicate in natural language and at the same time create new content such as text and images have already been developed [13]. In the future, these generative AI models will also revolutionize health and welfare. It has been predicted that generative artificial intelligence could even prepare medical reports based solely on the conversation between doctor and patient. Today, many chatbots are used in healthcare to answer simple questions. However, generative AI would be able to communicate naturally with the patient and give individualized advice. There are still potential legal and ethical issues in the adoption of generative AI systems [13].
The concept of a digital twin (DT) refers to a digital replica of physical processes involving people, systems, and devices [14]. In healthcare, a DT could involve predictive modeling based on an individual’s health history, offering more accurate and faster services, particularly in elder care [15]. Finland is also considering leveraging DTs to maintain people’s health [16]. Homomorphic encryption allows computations to be performed on encrypted data without decrypting it, preserving privacy. It is being extensively researched, particularly for privacy protection in healthcare. In the future, human digital twins may use homomorphic encryption to address privacy concerns [17].
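To illustrate the idea of computing on encrypted data, the sketch below implements a toy version of the Paillier cryptosystem, which is additively homomorphic: the product of two ciphertexts decrypts to the sum of the plaintexts. This is a minimal sketch for intuition only; the key sizes are far too small for real use, and a production system would rely on a vetted cryptographic library.

```python
import math
import secrets

p, q = 104723, 104729         # toy primes (the 9999th and 10000th primes)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)  # Carmichael's function of n
mu = pow(lam, -1, n)          # modular inverse; valid because g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1           # fresh randomness per message
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Two encrypted readings, e.g., step counts from a wearable device
c1, c2 = encrypt(120), encrypt(80)
c_sum = (c1 * c2) % n2        # addition is performed on ciphertexts only
assert decrypt(c_sum) == 200  # the server never saw 120 or 80
```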

2.2. Care Robots

Care robots are a much-discussed solution to the problems of securing healthcare provision [18]. Highly diverse and complex robotics solutions have been presented to the public as answers to the future shortage of nurses [19]. Various human-like robots have been unveiled, raising expectations and hopes regarding care robots even further. These expectations are accompanied by various concerns, ranging from views of humans becoming less relevant in nursing work to the total replacement of human nurses by robots. Autonomous robots also evoke images of possibly unethical robot behavior and of blurred responsibility as robots become ever more complex and independent actors. Can an autonomous robot be expected to act ethically and responsibly? And who is responsible if the robot’s actions violate data protection?
The ‘Robots and the Future of Welfare Services’ (ROSE) research project investigates how advances in service robotics enable the innovation of products and services and the renewal of welfare services, especially for the needs of the aging population [20]. Robotics for social and health services is still in a development phase. The use of robots in welfare services is limited, but new applications are being introduced as technology develops. Increasing the role of technology in human-centered treatment and care places special demands on robotics, whether its application target is functional or social. Questions remain: what kinds of robots are available for the needs of housing services and day activities now and in the future, how do care services personnel feel about robots, and how should their perspective be taken into account in the robotization of care services [20]?
Some of the robots used in the social work and healthcare sectors provide direct assistance to employees. For example, the logistics robots used in hospitals are practically invisible to customers, but they make employees’ everyday work much easier. Robots used in pharmacies for storing and collecting medicine make operations more efficient and reduce the number of human errors when picking medicine packages. Functional robots of this kind are also familiar from more traditionally robotized industries [21].
Some of the robots available today are closer to the customer’s everyday life. Medicine robots placed in the customer’s home dispense medicine and remind them to take it at the correct time. Information about taking or not taking the medicine is also passed on to the care service provider. This increases patient safety and in part enables communication between the client, their relatives, and their caregiver [20].
Day center operators have also used so-called social humanoid robots. Humanoid robots resemble humans in appearance and are often able to, for example, recognize human faces and expressions and conduct limited, pre-programmed dialogue. In Finland, the Zora robot has been used, among other things, to activate the cognitive, emotional, social, and physical alertness of elderly people. Social robots have also been used to support physiotherapy. Zora can remind customers of daily exercises and demonstrate how to perform an exercise correctly. According to research, it is essential for the exerciser’s motivation that the robot also moves itself and does not just tell them what to do next; this is why a robot works better than a laptop or tablet from which to follow exercise instructions. The robots available for therapy use are diverse. The current situation can best be summarized as follows: the more autonomous the robot, the fewer functions it performs. For example, the therapeutic seal robot Paro gestures in place but does not move from one place to another. It works as an interactive pet robot that reacts to physical stimuli in its environment with its sensors [21].

2.3. Cybersecurity of Care Robots

As stated above, care robots have the potential to facilitate daily life, create a sense of security, and perform various tasks. However, they also suffer from cybersecurity issues similar to those that have plagued computers for decades. Robots are capable of recognizing, processing, and storing information about their surroundings, continuously collecting data [22]. The operation of robots (such as navigation, speech, object recognition, etc.) requires substantial computational power, often enabled by cloud services. As the number of interconnected systems and devices increases, so does the likelihood of vulnerabilities within these systems, thereby escalating the risk of malicious attacks [23].
Cybersecurity in the context of care robots is a complex concept. As previously mentioned, it is not straightforward to define a care robot beyond its intended use or environment. A care robot, like any other robot, is subject to a wide range of security threats. These threats can target the robot itself, the data it contains and collects, and the mental and physical health of the end user [24].

2.3.1. Ambiguity in Regulation

EU legislation lacks clear regulations on the cybersecurity of care robots [23]. Despite care robots being directly connected to vulnerable groups such as children, the elderly, and the disabled, there is no specific regulation addressing their cybersecurity. Furthermore, numerous articles on service and care robots indicate that cybersecurity threats related to these robots have been studied significantly less than those targeting industrial environments [22,23].
The absence of clear definitions leads to a complex situation where the obligations and responsibilities related to the cybersecurity of care robots can be interpreted in various ways. Different product safety regulations do not directly address, but can influence, the cybersecurity of robots. Certain sectors have specific regulations related to the general safety of various products (e.g., radio equipment, medical devices, toys). However, it is often unclear how different robots should be classified, as classification depends on their intended use. This situation is further complicated by the industry’s efforts to create new intermediate product categories, such as “personal care”, to which medical device regulations do not apply, even though the robots are developed for care purposes [25].
European consumer organizations have criticized the existing product legislation’s concept of safety as outdated, as it does not cover security risks related to connectivity and hacking. The regulation of cybersecurity is also complicated by the constantly evolving nature of cyber threats; new devices and their connectivity are developed, existing software becomes outdated, and the operating environment changes [23].

2.3.2. Practical Cybersecurity Concerns

Cybersecurity can be divided into two areas: safety and privacy [22]. People are often concerned about the threats that hacked robots can pose to physical property or personal safety. For care robots used at home, physical safety issues include the potential for robots to damage valuable property or harm small children (e.g., collisions, arson). Hacked robots can also leak private and sensitive information about their users, compromising their privacy. Cybersecurity threats to robots can arise from three sources: natural causes (such as natural disasters), accidents (human error), or external attacks, which can cause physical or virtual damage [22]. Physical damage includes the complete or partial destruction of the device (malfunction) or unexpected behavior of the device. Virtual damage primarily involves software-related issues that affect the device’s normal operation in a virtual manner, such as the data the robot collects, stores or transmits.
Cybersecurity threats can cause financial, material, and psychological harm to end users (people interacting with the robot), business users (companies using the robot for specific tasks), manufacturers and sellers, and developers. These threats affect different stakeholders in various ways. For example, end users of service and care robots are generally more concerned about privacy, while business users are more worried about the company’s reputation. End users are also more concerned about the financial damage that robots might cause to their property, whereas business users are more concerned about potential lawsuits [22].
Building a robot is technically very challenging in many ways, and it would not be surprising if manufacturers focused primarily on developing the functions of robots rather than securing them against all cyberattacks [23]. Competitive pressures in the market are intense and can negatively impact the cybersecurity of devices, as they push manufacturers to bring products to market quickly, often deferring security measures. Enhancing cybersecurity also incurs costs for manufacturers. Because cybersecurity threats do not typically affect manufacturers directly, manufacturers may not prioritize them. Consumers, in turn, often overlook cybersecurity issues and value usability, functionality, and competitive prices more. Consequently, demand does not significantly encourage manufacturers to invest in cybersecurity [23].
In practice, achieving cybersecurity requires foresight and risk identification. Investing in cybersecurity from the design stage onward could promote safer technology, benefiting both users and manufacturers in the long run. Research indicates that consumers are willing to prioritize and pay more for security when purchasing connected products if the security level is communicated clearly. Clear communication is crucial because, in practice, it is not evident that robot buyers can assess the cybersecurity level of robots; without it, distinguishing robots with high cybersecurity from those with suboptimal cybersecurity is nearly impossible. It would therefore benefit end users if the security level were conveyed in an understandable manner, such as through certification [23].

2.3.3. Examples of Cybersecurity Threats to Care Robots

Various robots, such as service robots, social robots, and care robots, raise concerns about privacy and security. This concern is understandable because robots are often equipped with the ability to sense, process, and store information about their surroundings. Because of their built-in cameras, microphones, speakers, and mobile components, all remotely controlled, wireless, and Wi-Fi-connected robots can pose a risk to their users, as external parties can access the devices through the network. This concern is particularly significant for robots that use cameras for user recognition. In a home environment, robots can listen to conversations or photograph personal information, or the users themselves. Robots placed in public spaces can collect data on people’s interests, record conversations, and clone identification methods [22].
Attacks can be particularly insidious because it may be difficult for the end user to notice if a robot has been attacked. An attack might be noticed if the robot does not function normally. However, it is also possible for an attacked robot to operate entirely normally, making it impossible to detect the attack [22,23].
A practical example of an attack on a care robot is an elderly person living alone who has a care robot at home [23]. The care robot’s task is to enable family members to monitor the elderly person remotely and locate them if their health deteriorates. The care robot is connected to the home wireless network and is equipped with a video camera, microphone, and speaker, allowing the family to communicate with the elderly person via video and audio. In the example, a financially motivated attacker infiltrates the home network and takes control of the robot. The attacker can then monitor the elderly person through the camera and microphone, potentially stealing their information, such as credit card details, for personal gain [23].
The aforementioned example is just one of many possible threat scenarios. According to Lera et al., there are several ways to attack different robots, which also apply to care robots [22]:
  • Stealth attack: The attacker manipulates the robot’s sensors, causing, for example, a mobile robot to collide.
  • Replay attack: The attacker intercepts system communications and manipulates data traffic, disrupting sensor operations.
  • False data injection: The attacker modifies the data processed by the robot.
  • Eavesdropping: One of the most common attacks on robots from a privacy perspective.
  • Denial of Service (DoS): The attacker effectively stops the robot’s operation. A DoS attack may not cause direct harm to the device or its user but prevents the robot from providing its service.
  • Remote access: One of the most dangerous attacks, where an external user takes control of the device, potentially causing harm to both privacy and physical health.
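Two of the attacks listed above, replay and false data injection, can be partially mitigated at the protocol level by authenticating every message and rejecting stale ones. The sketch below is a minimal illustration using an HMAC and a message counter; the shared key, message format, and key-provisioning step are assumptions for the example, not a description of any deployed care-robot protocol.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-provisioned-at-manufacture"  # hypothetical key

def sign(counter: int, payload: dict) -> dict:
    """Sender side: bind the payload to a monotonically increasing counter."""
    body = json.dumps({"ctr": counter, "data": payload}, sort_keys=True)
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

last_seen = -1  # receiver state: highest counter accepted so far

def verify(msg: dict):
    """Receiver side: reject tampered (bad tag) or replayed (old counter) messages."""
    global last_seen
    expected = hmac.new(SHARED_KEY, msg["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        return None                      # false data injection: tag mismatch
    body = json.loads(msg["body"])
    if body["ctr"] <= last_seen:
        return None                      # replay: counter is not fresh
    last_seen = body["ctr"]
    return body["data"]

msg = sign(1, {"sensor": "proximity", "range_m": 0.8})
assert verify(msg) is not None           # fresh, authentic message accepted
assert verify(msg) is None               # the same message replayed is rejected
```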
Different robots may be subject to various attacks depending on the environment in which they operate and the tasks they perform. According to a Finnish study [25], traditional computer-based security threats also apply to care robots, which face both conventional security threats and those related to their physical nature. These findings align with the study by Lera et al. [22], indicating that network-connected robots pose risks due to potential external access. As care robots become more complex and reliant on network connections, the likelihood of these threats increases. Additionally, new technologies such as artificial intelligence and cloud services introduce new risks, emphasizing the need for robust cybersecurity measures [25].

2.4. Ethics in the Above

2.4.1. Ethical AI in Healthcare

Artificial intelligence and robotics are advancing at an unprecedented pace, but it is important to remember that the ultimate responsibility always rests with humans. The development of AI should incorporate morality and responsibility on an equal level [26]. AI challenges our healthcare principles such as confidentiality, continuity of care, and avoiding conflicts of interest. Ethical AI in healthcare should meet the principles of biomedical ethics [5], care ethics [6], and guidelines for trustworthy AI [27].
The principles of biomedical ethics include autonomy, justice, beneficence, and nonmaleficence. Autonomy refers to the right of patients to make their own decisions, provided they are capable of making informed choices. Justice emphasizes fairness and equality, ensuring that everyone receives the same treatment regardless of age, gender, ethnicity, or culture. Nonmaleficence obligates medical practitioners to avoid causing harm, including through negligence. Beneficence aims to maximize the benefits to the patient while minimizing disadvantages, with an emphasis on cost-effective service delivery [5].
The ethics of care, based on Carol Gilligan’s 1982 ideas, distinguishes between the ethics of justice and the ethics of care [6]. According to Gilligan, the ethics of care focuses on maintaining interpersonal relationships by addressing the needs of others and preventing harm. This approach views morality through the lens of relational issues and tensions, rather than strictly adhering to legalistic hierarchies of rights or rules. The care discipline values Gilligan’s theory for integrating patient–nurse relationships into the core of care and highlighting the ethical challenges nurses face in medically dominated healthcare environments [28].
The EU is a frontrunner in AI legislation and ethical guidelines. The binding AI regulation (AI Act) came into force on 1 August 2024. As early as 2019, the EU recognized the need to establish ethical guidelines for trustworthy AI. According to the EU, AI systems must be human-centered, meaning they should benefit humanity while enhancing people’s freedom and well-being. The EU suggests that the framework for creating trustworthy AI can be established based on three conditions:
  • AI must comply with the law (the legal basis is the EU’s founding treaties, the EU Charter of Fundamental Rights, and international human rights legislation);
  • AI must be ethical and AI systems must ensure compliance with ethical principles and values;
  • AI must be socially and technically reliable.
In addition to these three prerequisites for reliable AI, the EU has outlined four ethical principles for AI systems:
  • Respect for human autonomy;
  • Prevention of harm;
  • Fairness;
  • Explicability.
AI systems operating in the EU must adhere to the prerequisites and ethical principles throughout all phases of their use and implementation [27]. Aaltonen [26] says in his book ‘Tekoäly’ (Artificial Intelligence) that AI is neither impartial nor neutral; it reflects human decisions and choices. The people who design, develop, and maintain these systems shape them according to their understanding and values. The ongoing challenge in AI development is ensuring that AI performs exactly as intended and respects human rights. Despite extensive use, today’s AI systems are not yet ‘smart’ enough, and their ethical and social impacts often go unnoticed [29]. Privacy is a significant concern in discussions about AI and ethics. AI systems are pervasive, constantly collecting and analyzing data, often without people’s awareness. Individuals may not know that their data are being collected or that information provided in one context might later be transferred to a third party. It is crucial to protect individuals’ privacy when collecting, storing, analyzing, and transferring data [29].
The AI solutions created in the SHAPES project aim to care for the aging population, and in particular to enable elderly people to live at home for as long as possible. In the SHAPES project, the risks related to artificial intelligence are addressed by developing and implementing AI solutions within ethical frameworks. One must make sure that a person’s autonomy is preserved, that the use of artificial intelligence is fair, and that it does not cause harm or negative effects to anyone. It is especially important to consider people in a vulnerable position, for example, elderly people, and to keep the possible risks in mind. For prevention and mitigation, it would be a good idea to create a dashboard or model that can be used to minimize the risks involved [30]. For this purpose, the SHAPES project follows the European Union’s ethical guidelines on trustworthy artificial intelligence and its use. Four ethical principles are highlighted: respect for human autonomy, prevention of harm, fairness, and explicability. When utilizing artificial intelligence in the SHAPES project, these four principles serve as guidelines for operation.
The SHAPES project examined the AI solutions it developed using the ALTAI tool [31]. In 2019, the European Commission formed the High-Level Expert Group on Artificial Intelligence (AI HLEG), which created the Assessment List for Trustworthy AI (ALTAI). With AI solutions becoming increasingly prevalent in healthcare, there is a growing need for an ethical framework for assessing the ethicality of AI implementations. ALTAI serves as an assessment checklist comprising seven requirements, through which companies and service providers can assess the ethics and reliability of their AI-based services. The purpose of this assessment list is to safeguard against potential harm caused by artificial intelligence.
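As a rough illustration of how such a checklist can be handled programmatically, the sketch below encodes the seven ALTAI requirements and flags the weakest areas of a pilot’s self-assessment. The requirement names come from the AI HLEG; the 1–5 scoring scale and the flagging rule are our own simplification for the example, not part of ALTAI itself.

```python
# The seven ALTAI requirements (EU AI HLEG); the scoring scheme is illustrative only
ALTAI_REQUIREMENTS = (
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
)

def needs_recommendations(scores: dict) -> list:
    """Return the requirements scoring below 3 on an assumed 1-5 scale."""
    return [req for req in ALTAI_REQUIREMENTS if scores.get(req, 0) < 3]

pilot_scores = {req: 4 for req in ALTAI_REQUIREMENTS}
pilot_scores["technical robustness and safety"] = 2  # weakest area, cf. Section 4.3
print("Recommendations needed for:", needs_recommendations(pilot_scores))
```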

2.4.2. Cybersecurity Ethics

The term “cybersecurity” inherently reflects the primary ethical objective of ensuring safety from the threats present in cyberspace. Security, in general, is frequently perceived not as an ethical value in itself, but as a means to safeguard other ethical values. Similarly, cybersecurity is often regarded as a collection of technologies and practices designed to defend against cybercrime, including data theft [32].
Cybersecurity ethics is not yet an established ethical framework; rather, it examines our decisions in the cyber environment from the perspectives of information, technology, and social aspects, and how our decisions align with our values. According to Van de Poel [33], four value clusters should be considered when deciding on cybersecurity measures. The first cluster, broad security, is a combination of more specific values such as individual safety, national resilience, and information security. These values protect people and other valuable entities from all kinds of harm and help respond to morally problematic situations such as data breaches, cybercrime, and hybrid influence. The second cluster, privacy, includes values such as data privacy, moral independence, dignity, identity, personality, freedom, anonymity, and confidentiality. We must treat others with dignity, respect people’s moral independence, and not store or share personal data without their informed consent, etc. The third cluster, fairness, consists of values such as equality, accessibility, impartiality, non-discrimination, democracy, and civil liberties. These values ensure that cybersecurity threats or measures to avoid them affect everyone equally and not in a morally unjust manner. Measures to reduce cybersecurity threats must not undermine democracy, civil rights, or individual freedom. The fourth cluster, accountability, includes values such as openness, clarity, and transparency. These are very important in situations where authorities implement cybersecurity measures that restrict citizens’ privacy or rights [33].

3. Materials and Methods

This study’s research question is the following: how are ethics understood when developing AI-based, cyber-secure services that help aging people manage at home? Ethical understanding is examined through several ethical frameworks: biomedical ethics, care ethics, cybersecurity ethics, and AI ethics. The research approach is a case study. A case study suits topics where one is trying to answer the question “Why?” and where the research subject is topical, that is, a phenomenon happening at this moment [34]. A case study can also be an independent part of a larger research effort. This study aims to understand social phenomena and uses a diverse range of research data, so the case study is well suited as a research approach.
The case study is empirical, and its use is contextual: a current phenomenon or case is investigated in a real environment. Describing the context is important, as it helps to interpret the case. In addition to the context, it is important to determine the setting, i.e., where the phenomenon occurs. The setting is, as it were, the stage on which the case takes place; it is part of the context and contributes to determining what or who the actors in the case are. On the other hand, the boundaries between the investigated phenomenon and its context are not always precisely demarcated in a case study [34].
The purpose of this study is to produce an understanding (unit of analysis: ethical understanding) of a phenomenon (phenomenon: cyber-secure AI-based services and care robots) occurring in the present day in a natural operating environment (case: helping aging people survive at home). The objective of this study is not to uncover a single truth or make broad generalizations, but rather to deepen the understanding of the subject and identify practical solutions.

3.1. Research Process

A case study is characterized by applying multiple data collection methods to obtain in-depth information about the phenomenon or process under investigation and to form a comprehensive picture. By gathering results from various sources or employing different methods, the accuracy of the claims presented as research findings can be validated [34]. In this study, triangulation has been applied to the material, as it has been collected using various methods, resulting in a diverse set of data. Table 1 presents the evidence gathered for this study. Multiple sources of evidence are needed because ethics are examined through several ethical frameworks (biomedical ethics, care ethics, cybersecurity ethics, and AI ethics).

3.2. Document Selection and Analysis

Utilizing documents as research material in case studies is a widely accepted and effective approach, enabling researchers to collect detailed and contextualized information. The primary research material in this case study comprises the 11 deliverables from Work Package 8 (WP8), titled “Legal, Ethics, Privacy and Fundamental Rights Protection”, of the SHAPES project. The documents from WP8 of the SHAPES project were selected as research material due to their comprehensive nature and public accessibility. The creation of these documents involved contributions from 7 research, industrial, or end-user organizations, and a total of 72 person-months were dedicated to their production.
The selected documents were analyzed using content analysis, a commonly used method in qualitative nursing research [35]. Content analysis aims to summarize the collected material through various models, conceptual systems, and classifications, to form descriptions, and to grasp different meanings, cause-and-effect relationships, and contents. This study employed theory-driven qualitative analysis, in which the themes that emerged in the literature review guided the analysis. The data were coded according to the framework identified in the literature review by identifying and marking themes and concepts related to the chosen theory. The coded data were then analyzed by identifying connections and relationships between the themes and concepts. Finally, the themes and categories were interpreted against the research question, considering what the data reveal about the research subject and how they answer the research question.
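Purely as an illustration of the coding step, the sketch below tags text segments with framework codes using a keyword lexicon. The lexicon and matching rule are invented for the example; in the study itself, the coding described above was performed by the researchers.

```python
# Made-up codebook mapping each ethical framework to indicator terms
CODEBOOK = {
    "biomedical ethics": ["autonomy", "justice", "beneficence", "nonmaleficence"],
    "care ethics": ["relationship", "caring", "interpersonal"],
    "cybersecurity ethics": ["privacy", "security", "breach", "confidentiality"],
    "AI ethics": ["fairness", "explainability", "transparency", "oversight"],
}

def code_segment(segment: str) -> list:
    """Return every framework whose indicator terms appear in the segment."""
    text = segment.lower()
    return [frame for frame, terms in CODEBOOK.items()
            if any(term in text for term in terms)]

print(code_segment("The pilot must preserve user autonomy and privacy."))
# -> ['biomedical ethics', 'cybersecurity ethics']
```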

3.3. Interviews

The research seeks to explore fresh perspectives on the ethics and responsibilities associated with care robots. To achieve this, interviews can be considered one of the most appropriate data collection methods. According to Hirsjärvi, Remes, and Sajavaara [36], when researching a new subject that is complex and multifaceted, where previous research material is limited and the aim is to uncover new, previously unknown information, a qualitative study can yield the best results. According to Hirsjärvi and Hurme [37], in a thematic interview the interviewer defines the topics and themes of the interview in advance. However, these can be focused individually depending on the interviewee, enabling an understanding of the interviewee’s viewpoint and experiences [37]. Thus, the free-flowing nature of a thematic interview, where the conversation can revolve around the topic unrestricted and the questions are not necessarily precisely defined, is well suited to bringing up the interviewees’ ideas and thoughts, leading to new ideas regarding the topic.
According to Hirsjärvi et al. [36], interviews offer versatility and adaptability, making them suitable for various research purposes. Through direct interaction, interviews facilitate flexible information gathering and uncovering underlying motives. Among interview formats, open interviews are the most unrestricted, resembling casual conversations. They are characterized by their free-flowing, in-depth, and informal nature, aiming to uncover interviewees’ thoughts, opinions, feelings, and perceptions [36]. Unlike structured interviews, open interviews allow flexibility in discussing topics. However, conducting a successful open interview requires interviewer control and skill to maintain coherence [36].
Five individuals from diverse backgrounds were interviewed for this study, as detailed in Table 1. The interview research was performed according to the principles of cooperative development, in which the betterment of the community is front and center. In cooperative development, each actor in a community is considered an equal partner in the betterment of that community, and all are included in the development process [38]. The aim is to benefit all stakeholders involved. In this light, a diverse group of interviewees should be selected to obtain a better understanding of the topic [38]. The five individuals were thus selected based on their close personal relationship with the subject matter. It was deemed important to acquire knowledge from different sides of the topic to obtain a better understanding of the whole. The interviewees therefore represent the developers of care robots and AI-powered healthcare solutions, care workers, a patient’s guardian, and a legal expert. The researcher utilized personal networks and connections to locate individuals who fit one of these categories and were willing to participate in the research. These individuals were approached via messages and emails for an initial inquiry and to propose an interview, and those who first expressed interest were selected. The selected individuals were then interviewed on separate occasions, either in person or virtually via Teams or Zoom. The interviews were conducted between January and March of 2022.
Because each interviewee’s background and relationship with the topic varied, the interview questions varied between interviews. The same questions could not be applied to all interviews, since each interviewee represents a different side of the topic and has different stakes and expertise. Moreover, since the thematic interview was selected as the main method of extracting information, the questions could not be fixed in advance but could vary and change even during the interview as new aspects of the subject were discovered. Broadly, each interview began with a short introduction in which the interviewee established their affiliation with the subject matter and their experience of elderly care and of AI and robotics in healthcare. Depending on the individual’s expertise, each interviewee was then asked about their general understanding and experiences of the current state of elderly care and the ethical and accountability questions related to it today. Next, the interview focused on the prospect of introducing autonomous AI-powered applications and robots, how this could be performed responsibly, and how it would change healthcare. The interviews concentrated on attitudes, expectations, and fears relating to these types of solutions, as well as on the current regulations and legislation surrounding the topic.
Reflecting these varying themes, the interview with the legal expert focused largely on the legal side of healthcare work and care robots. For the elderly care worker, the questions revolved around the ethical and other more ambiguous aspects of current healthcare work, as well as reflecting on these questions in a hypothetical situation where a robot would take over some or all of the tasks currently handled by human healthcare workers. The elderly care worker was also asked about their views on the possibilities of influencing the development of AI solutions, whether they had suspicions related to care robot development, and about the sharing of responsibility and accountability. For the AI and robotics developers, the questions focused on their experiences in developing robotics for care work, the openness of the development, the future of AI and robotics development, and the ethical aspects of the topic, including the ethical frameworks that currently exist in this field. For the guardian of a patient, the questions revolved around the customer experience in elderly care and what qualifies as a responsible elderly care service provider, along with questions about the division of responsibility and attitudes towards utilizing care robots in general.
The interview results were analyzed using inductive content analysis. This was considered the best option since the subject is quite new and the interview research aimed to identify new aspects of the topic.

3.4. Artifact Analysis

In 2023, researchers at Laurea University of Applied Sciences conducted three online workshops targeting SHAPES partners directly involved in the SHAPES pilots [39]. The workshops included researchers, IT developers/technology owners, and healthcare providers. The workshops’ objectives were to present the ALTAI self-assessment, collect data on the pilots’ design and lessons learned, and gather the partners’ perspectives on the ALTAI assessment, particularly their recommendations [40].

4. Results

4.1. Analysis of SHAPES Deliverables

The qualitative analysis of the documents listed in Table 1 is summarized here.

4.1.1. AI’s Role in Healthcare Supply and Elderly Care

New digital healthcare solutions encompass mobile applications, eHealth sensors, wearable devices, Internet of Health Things (IoHT) devices, as well as assistive and care robots, all of which extensively utilize artificial intelligence (AI). During the pandemic, digital tools can be employed to (1) monitor the spread and impact of viruses (such as COVID-19), (2) research and develop diagnostics, treatments, and vaccines, and (3) ensure that Europeans stay connected with friends and family and remain safe online. AI and high-performance computing are used in advanced data analytics to detect patterns underlying the spread of the coronavirus. In healthcare, AI plays a crucial role in robots and other tools used to maintain social activities when direct human interaction must be minimized due to public health concerns.
AI is also used to empower citizens and provide personalized care, enabling the creation of citizen- and patient-centered solutions. AI-driven digital solutions ensure the continuity and availability of services. The AI solutions developed in the SHAPES project aim to care for the aging population, with a particular focus on enabling the elderly to live at home for as long as possible. The digital services offered by the SHAPES project include various online communication tools, IoHT and Big Data platforms, robotics (care robots), conversational assistants and chatbots, various safety solutions, a health and wellness assessment platform, cognitive stimulation and rehabilitation, COVID-19 response tools, and facial and emotion recognition. AI is present in all these functions. These services are tailored individually to meet the needs of each client.

4.1.2. Robotics Offered by SHAPES

To help elderly individuals living alone manage better at home and avoid feelings of loneliness, SHAPES provides assistive robots for their homes. These robots are capable of moving around the residence and interacting with the inhabitants. They can play various games, remind residents of tasks (such as taking medication), perform assessments, and monitor the residents via a camera. The robots use the camera to track facial expressions and learn to recognize the residents and frequent visitors, such as caregivers.
The ARI robot is designed for situations where physical handling of the resident is not required, emphasizing social interaction. Conversely, the TIAGo robot is built for various physical tasks that require handling. The KOMPAÏ-3 robot includes walking supports, enabling it to assist with walking. KOMPAÏ-3 is specifically designed with healthcare and elderly care in mind. It features a screen for playing games and watching news, and it also entertains with music and reads stories to the elderly.

4.1.3. Cybersecurity Challenges of Care Robots

The use of care robots involves the same risks and threats as other IT devices or robots, as care robots are technically very similar to other devices. Reliable cybersecurity systems protect privacy and identity online; increased online activity can attract malicious actors and raise the risk of cyberattacks. The General Data Protection Regulation (GDPR) serves as the basis for addressing the privacy and cybersecurity requirements of care robots.
One of the biggest risks is the hijacking of control over the care robot, which can then be used for espionage and eavesdropping. Hijacking is an attack through which a cybercriminal can use the care robot for virtually all the same activities as the user. In the future, the use of care robots for espionage and eavesdropping is possible and even likely, as similar activities have already been carried out with devices that use cameras and microphones.

4.1.4. Ethical Challenges of AI

Ethical challenges impact the development of AI technology in the health and wellness sector, but they have an even greater influence on the business, governance, and ecosystem models produced in the SHAPES project. These models enable the design and active use of AI solutions to promote the rights and values of healthcare clients. Service providers should view ethical thinking as a resource rather than a source of risk.
SHAPES is a multifaceted project from an ethical perspective. Ethical requirements and their implementation are essential for the sustainability of SHAPES. These requirements are based on both EU fundamental rights and various ethical norms and approaches, as well as different business and technology ethical guidelines.
The SHAPES project considers AI-related risks by developing and implementing AI solutions within an ethical framework. It is crucial to ensure that human autonomy is not compromised and that the use of AI is fair and does not cause harm or negative impacts. Special attention must be given to vulnerable individuals, such as the elderly, and the potential risks of AI must be kept in mind. It would be beneficial to create a governance panel or model to minimize these risks. The SHAPES project adheres to the European Union’s ethical guidelines for trustworthy AI and its use. Four ethical principles are emphasized: respect for human autonomy, prevention of harm, promotion of fairness, and explicability of AI. These four principles guide the use of AI in the SHAPES project.

4.1.5. Ethical Value Conflicts in Cybersecurity

Cybersecurity ethics is an interdisciplinary field that draws from various research areas, including medical ethics, military ethics, legal ethics, and media ethics. As such, it can be regarded as a professional ethic that offers detailed and specific guidance to practitioners with particular characteristics. Cybersecurity professionals should integrate ethics into their practice, not only to prevent harm, illegal activities, or destructive behavior but also to appreciate the ethical importance of their profession. Ethical cybersecurity professionals leverage their skills to create not just superior products or services but also to contribute to a better world.
Conflicts between the fundamental values of cybersecurity and other values, such as those in healthcare, complicate adherence to ethical standards. Excessive investment in cybersecurity can conflict with privacy and freedom, while insufficient investment poses a threat to user security and can undermine trust in the digital society and its services.
Examples of conflicts between the desiderata of information technology in health and the core value clusters of cybersecurity include:
  • Usability vs. Security: Security measures can sometimes hinder usability, and overly simple usability can pose a security risk. Conversely, if a device or service is too complex to use, security can also suffer, because users may inadvertently misuse or damage it. For instance, multi-factor authentication enhances security but can slow down service usage, particularly for services accessed multiple times a day, thereby impairing usability (see the sketch after this list).
  • Confidentiality and Privacy vs. Security: Confidentiality is a key component of information security, alongside availability and integrity. However, balancing these elements can be challenging, as enhancing one aspect may compromise another.
  • Privacy vs. Efficiency and Quality of Services: Privacy often conflicts with the efficiency and quality of services. Improving service quality and efficiency frequently involves sharing information to discover new treatment and prescription solutions, which can compromise privacy.
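The first tradeoff above can be made concrete with a time-based one-time password (TOTP), a common second factor. The sketch below follows the RFC 6238/RFC 4226 construction; the secret and parameters are example values. The security gain is that a stolen password alone no longer suffices; the usability cost is an extra code-entry step, bounded by a 30-second validity window, at every login.

```python
import hashlib
import hmac
import struct
import time

SECRET = b"shared-secret-provisioned-on-enrolment"  # hypothetical shared secret

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 6238."""
    counter = struct.pack(">Q", int(at // step))           # index of 30 s window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print("Second factor to type in:", totp(SECRET, time.time()))
assert totp(SECRET, 990.0) == totp(SECRET, 1019.0)  # same 30 s window, same code
```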

4.2. Interview Findings

Inductive content analysis of the interview results yielded three categories relevant to this study, two of which could be further divided into sub-categories. The first category was the patient’s right to autonomy, divided into legislative and ethical sub-categories. The second category comprised the important moral questions regarding the development of care robotics, divided into two sub-categories: firstly, the quality of the data used for machine learning, and secondly, the acceptable error rate of care robots and AI solutions. The third main category was the current ethical and legal frameworks guiding AI and robotics development and how they affect development work. These three categories were deemed the most relevant to the topic at hand and were thus included in the analysis.
Other categories also came up in the interviews but were excluded from the analysis because they were not relevant to the topic at hand: the current state, development, and usage of AI-powered solutions in healthcare; general ethical and responsibility questions in healthcare; AI algorithm transparency and explainability; and the different types of procurement contracts for developing new robotics and AI solutions. These categories did not add any new aspects to the considerations arising from the literature and document analysis and were therefore excluded.
Regarding the first category, patient autonomy and the legislative framework, discrepancies were observed between the claims about legal technicalities and legal responsibility presented in the documents and the views presented in the interviews. Based on the document analysis, robotics development faces difficulties from the point of view of legislation and regulation. If an AI-based system is used for healthcare purposes, it is subject to the strict Medical Device Regulation (MDR). The documents claim that overly strict legislation can hold back the development of artificial intelligence and of robotics that uses AI. At the same time, overly loose legislation can create risks, for example, for the safety of end users. The fragmentation of regulation and legislation, especially in the EU region, and unclear guidelines were considered factors that hinder development and should be re-examined. The interview results did not support these claims. National healthcare legislation in Finland is strict but does not necessarily limit development. In Finland, the operation of care robots is subject to the same legislation and regulations as the work of caregivers and the production of health services in general. This legislation was made to protect patients’ rights, and at its center is a strong right to self-determination and autonomy, which cannot be restricted except in certain precisely defined situations.
“[In Finland,] the patient liability law is always implemented when using robots to provide any service described in the said law. After that, the question remains in what terms is the robot supplier accountable to the health care provider.”
(Interview of a Deputy Judge, 2022.)
“Any forced health treatment measures are ethically unjustified when the patient is not under guardianship. The patient has autonomy always even if the caretaker might disagree with the patient.”
(Interview of a Nurse, 2022.)
Moreover, the regulatory frameworks create a clear basis for the development of artificial intelligence-based services and robots, and there is little ambiguity regarding them. Thus, based on the interviews, regulations can be seen as a solid foundation on which to create and develop new AI-driven technologies, rather than the hindrance they are sometimes portrayed as.
“We used the EU Ethics guidelines of trustworthy AI to test it, since the development framework used to be very new back then. We used the 20-section checklist… and for example, the parts concerning continuous supervision, training, and management were important. Through those, we could compose an action plan for the upcoming phases… The framework was very useful to us.”
(Interview of a Senior Manager, 2022.)
Regarding the ethical side, the interview results indicate that the topical ethical questions concern the data used for machine learning and the acceptable error rate for AI applications. Bad data can be a liability when an AI-run application is expected to interpret signals; in the healthcare sector, this can lead to disastrous and potentially life-threatening situations.
“In our project, for example, we were advised not to let the algorithm weigh the income or gender of the patients when making decisions. That doesn’t mean, however, that there’s no correlation there. For example, morbidity in women is different from men. Combined with diagnostics, it matters, because for example breast cancer does not manifest in men similarly as it does in women. Likewise, musculoskeletal diseases manifest differently in different income classes. These factors could have improved our decision-making model, but we could not use them.”
(Interview of a Senior Manager, 2022.)
“For example, if we teach a hundred different scenarios to an AI and consider it safe after that, because someone has defined that it is safe, it still doesn’t make it 100% safe. This kind of situation could in principle happen when monitoring heart rate, where the algorithm has been taught that a certain blood pressure is okay, but there are still special cases when it’s not. It all depends on what kind of data has been used to teach the AI… The data hasn’t necessarily included sufficient readiness to react to certain situations. This can lead to a sort of randomness.”
(Interview of a CEO, 2022.)
The same issue applies to acceptable error rates. It is not clear where the line is drawn and what the acceptable rate of, e.g., misdiagnosis is for an AI-driven solution. If an AI misinterprets signals and misdiagnoses, for example, a stroke, there are real-life consequences for the patients. If, on the other hand, the algorithm interprets every detail as a potential stroke, the resulting false alarms can cause users to resent the AI and ignore its warnings, ultimately rendering it useless. When interpreting signals, it is impossible to obtain 100% accurate results from AI-driven solutions; thus, an acceptable error rate should be set.
“[The accuracy] brought in the ethical questions. We had to consider which data was such, which could be used in decision-making, and how we can raise accuracy. We could not define a good enough accuracy in advance. Traditionally, any accuracy increase is a good thing, but now we had to compromise [.]”
(Interview of a Senior Manager, 2022.)
“For example, when considering health care, we can teach an algorithm to make decisions similarly to a doctor, who makes decisions based on their experience and work history. However, some fraction of the decisions are always more complex. There could be diseases that can’t be diagnosed with only a limited set of information. In those cases, thresholds have to be defined for example whether it is a specific disease or not. It is typically safest to determine the threshold as conservatively as possible […] This threshold definition is probably the most ethical question. Too high a threshold can cause the machine to not function correctly.”
(Interview of a CEO, 2022.)
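To make the threshold trade-off discussed in these quotes concrete, the following minimal sketch (a hypothetical illustration; the risk scores, labels, and threshold values are invented and do not come from any SHAPES pilot) shows how moving a decision threshold on a model’s risk score shifts errors between missed events and false alarms.

    # Hypothetical illustration: how a decision threshold trades missed events
    # (false negatives) against false alarms (false positives). Scores, labels,
    # and thresholds are invented; they do not come from any SHAPES pilot.

    def error_rates(scores, labels, threshold):
        """Count missed events and false alarms at a given threshold."""
        missed = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
        false_alarms = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
        return missed, false_alarms

    # Risk scores from some model; label 1 = event (e.g., stroke) occurred.
    scores = [0.05, 0.20, 0.35, 0.40, 0.55, 0.60, 0.70, 0.85, 0.90, 0.95]
    labels = [0, 0, 0, 1, 0, 1, 0, 1, 1, 1]

    for threshold in (0.3, 0.5, 0.7):
        missed, false_alarms = error_rates(scores, labels, threshold)
        print(f"threshold={threshold}: missed events={missed}, false alarms={false_alarms}")

Raising the threshold reduces false alarms but misses more true events; where between these two failure modes the line is drawn is precisely the ethical question the interviewees raise, and it cannot be settled on technical grounds alone.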
Thus, the interviews indicate that regulation and legislation do not necessarily hinder development in the way parts of the previous literature suggest. Instead, they give a strong foundation on which to build new applications, and there are clear reasons why they exist, such as safeguarding patient autonomy. The interviews do suggest, however, that ambiguity remains in the regulation of AI development specifically, and this needs to be addressed: the data used for machine learning and acceptable error rates are topics that require further study and consideration.

4.3. ALTAI Tool

In the self-assessment of SHAPES pilots using the web-based ALTAI tool prototype, the best results were obtained for the requirement of “transparency” and the worst for “technical robustness and safety.” For transparency, two pilots were recommended to regularly ask users about their understanding of the AI system’s decision-making process. Two other pilots were encouraged to inform users that they are interacting with a machine in the case of interactive AI systems. Additionally, one pilot was advised to continuously assess the quality of input data used by their AI systems, explain the decisions made or suggested by the system to end-users, and regularly ask users if they understand these decisions.
Regarding human agency and oversight, three recommendations were given to more than one pilot, aiming to promote the responsible use of AI systems by avoiding excessive reliance on the system, preventing unintended impacts on human autonomy, and providing appropriate training and oversight for those monitoring the system’s decisions.
For the requirement of technical robustness and safety, two pilots did not receive recommendations, while one pilot received five recommendations aimed at identifying and managing risks associated with the use of AI systems, including potential attacks and threats, possible consequences of system failure or malfunction, and continuous monitoring and evaluation of the system’s technical robustness and safety.
Regarding privacy and data protection management, pilots were encouraged to create mechanisms that allow flagging privacy or data protection issues related to the AI system. One pilot received four individual recommendations aimed at ensuring that privacy and data protection are considered throughout the AI system’s lifecycle, from data collection to processing and use, and that appropriate mechanisms are in place to protect individuals’ privacy rights.
Compliance with the requirements of diversity, non-discrimination, and fairness received the most recommendations, 43 in total, of which 17 were given to at least two pilots. These recommendations can be divided into the following subcategories:
  • Data and algorithm design: Recommendations focus on input data and algorithm design, such as avoiding bias and ensuring diversity in the data. Additionally, the use of advanced technical tools is recommended to understand the data and model, as well as to test and monitor biases throughout the AI system’s lifecycle.
  • Awareness and training: Recommendations relate to training AI designers and developers on the potential for bias and discrimination in their work. Additionally, mechanisms are recommended for flagging bias issues and ensuring that information about the AI system is accessible to all users, including those using assistive devices.
  • Defining fairness: Recommendations concern defining fairness and consulting with affected communities to ensure the definition is appropriate and inclusive. Additionally, the creation of quantitative metrics to measure and test fairness is suggested (a minimal illustration follows this list).
  • Risk assessment: Recommendations relate to assessing the potential unfairness of the AI system’s outcomes for end-users or target communities. Additionally, identifying groups that may be disproportionately affected by the system’s outcomes is recommended.
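As one illustration of what such a quantitative fairness metric could look like, the sketch below computes the demographic parity difference, i.e., the gap in positive-decision rates between two groups. The decisions and group labels are invented, and demographic parity is only one of many competing definitions of fairness.

    # Hypothetical illustration of one quantitative fairness metric:
    # the demographic parity difference, i.e., the gap in positive-decision
    # rates between two groups. All data below are invented.

    from collections import defaultdict

    def positive_rate_by_group(decisions, groups):
        """Return the share of positive decisions within each group."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += decision
        return {g: positives[g] / totals[g] for g in totals}

    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = service granted
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    rates = positive_rate_by_group(decisions, groups)
    print(rates)  # {'A': 0.6, 'B': 0.4}
    print(f"demographic parity difference: {abs(rates['A'] - rates['B']):.2f}")

Monitoring such a metric throughout the AI system’s lifecycle, as the recommendations suggest, turns “fairness” from an abstract value into a testable property.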
For the requirement of societal and environmental well-being, 11 recommendations were given; one of them was given to all three pilots and three to two pilots. All three pilots were encouraged to create strategies to reduce the environmental impact of the AI system throughout its lifecycle and to participate in competitions focused on solving this issue.
Regarding accountability, all three pilots were given a recommendation that if AI systems are used in decision-making, it is important to ensure that the impacts of these decisions on people’s lives are fair, value-aligned, and responsible. Therefore, any conflicts or trade-offs between values should be documented and thoroughly explained.
Each partner received tailored recommendations to improve these areas and enhance the performance and compliance of their AI systems.

5. Discussion

According to the study conducted in the context of the SHAPES project’s pilots, the ALTAI tool was considered easy to use when assessing the trustworthiness of AI systems. The study also indicated that using the self-assessment tool would be highly beneficial before bringing an AI solution to market. A major remaining drawback, however, is the lack of global regulation of the reliability and ethics of AI.
One area requiring further research is the fundamental knowledge needed as a basis for machine learning. There is no clear consensus on what constitutes good data for machine learning, and in the absence of such standards, the use of poor data in machine learning-based AI solutions could have serious consequences. Public datasets are available for training AI in the development of autonomous cars, yet there are real-life examples where the safety of autonomously driven cars has been at least questionable. Similar datasets could nevertheless be compiled for the use and training of care robots and other healthcare AI solutions. Such data could also effectively promote health innovations by leveling the playing field for all innovators and preventing the hoarding of data by a few major players. Public machine learning data could likewise serve as a fair compromise on other issues, such as AI transparency, algorithm functionality, and robotics development. It would be worth investigating further whether such datasets could be compiled for healthcare applications.
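Although there is no consensus on what constitutes good machine learning data, some minimal checks are uncontroversial. The sketch below is a hypothetical illustration (the field names and thresholds are invented) of screening a training set for missing values and severe class imbalance before use, the kind of baseline a shared healthcare dataset standard could mandate.

    # Hypothetical illustration: minimal sanity checks on a training set before
    # it is used for machine learning. Field names and thresholds are invented;
    # a real data-quality standard would go much further.

    def check_dataset(records, label_key, max_missing=0.05, min_class_share=0.10):
        """Flag missing values and severely underrepresented classes."""
        issues = []
        n = len(records)
        missing = sum(1 for r in records if any(v is None for v in r.values()))
        if missing / n > max_missing:
            issues.append(f"{missing}/{n} records contain missing values")
        counts = {}
        for r in records:
            counts[r[label_key]] = counts.get(r[label_key], 0) + 1
        for label, count in counts.items():
            if count / n < min_class_share:
                issues.append(f"class {label!r} covers only {count / n:.0%} of the data")
        return issues

    records = [
        {"heart_rate": 72, "systolic_bp": 120, "stroke": 0},
        {"heart_rate": 95, "systolic_bp": None, "stroke": 0},
        {"heart_rate": 110, "systolic_bp": 180, "stroke": 1},
        {"heart_rate": 80, "systolic_bp": 130, "stroke": 0},
    ]
    print(check_dataset(records, label_key="stroke"))
    # ['1/4 records contain missing values']

Checks of this kind do not define good data, but they make one candidate definition explicit and automatically enforceable, which is exactly what a public dataset standard would need.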
Another topic that needs to be researched and discussed is the accuracy of AI-based solutions and acceptable error margins. Both can lead to ethically challenging situations that must be addressed before the widespread implementation of AI-based solutions and robotics in healthcare. There is no consensus on what constitutes an acceptable error margin for AI in the healthcare sector. Since this issue can have life-threatening consequences, it is crucial to address it properly and to consider what regulatory or legislative restrictions need to be implemented.
Based on the research, it can be concluded that scenarios built on the assumed future potential of robots are not realistic in the near term. Care robots and the implementation of AI in caregiving involve real ethical issues, and academic reflection should concentrate on the current issues that remain unclear and are acknowledged even by those developing AI solutions for healthcare and other fields. While technology can assist in many daily tasks, it cannot replace human interaction and touch. The use of AI and robotics must be complemented by social support and human presence to ensure that older people feel safe and valued. It is also important to recognize that the use of AI and robotics in elderly care can create economic and social disparities: not everyone may have access to such technologies, which can lead to inequality in the availability of care.
AI and machine learning can improve the accuracy of human digital twins, offering significant potential for personalized healthcare. Personalization can occur through biological aspects, focusing on precision medicine to provide treatments tailored to individuals based on health data. Alternatively, non-biological aspects relate to respecting the commitments and values of individual patients, giving them the autonomy to choose treatments that align with their values or needs [41].
One of the most significant ethical issues in the use of AI and robotics in elderly care is respect for autonomy. While technology can assist in daily activities such as cooking or taking medication, it is important to ensure that old people can maintain their decision-making ability and autonomy.
The use of AI and robotics in the homes of older individuals may require continuous data collection and monitoring, which raises questions about privacy and data security. Technology developers and users must work together to ensure that personal data is handled appropriately and securely. In the future, more attention should also be paid to studying the motives of cybercrime and the benefits derived from criminal activities, to better determine the likelihood of threats to care robots and other new AI-based home appliances.
Figure 1 summarizes the perspectives that guide how ethics is understood in the context of this research. First, ethical thinking is guided by legislation, such as the human rights enshrined in the Charter of Fundamental Rights of the European Union, EU-level directives and regulations (e.g., the GDPR, MDR, and AI Act), and national healthcare legislation. Second, it is guided by the values of caring activities, namely biomedical ethics and the ethics of care. Third, it is guided by the values of technological development, including the ethics guidelines for trustworthy AI and the core value clusters of cybersecurity.

6. Conclusions

This article examines how ethics is understood when developing AI-based, cyber-secure services that help elderly people manage at home. Ethical thinking is guided by legal requirements such as human rights and data protection; healthcare activities are guided by ethical frameworks such as biomedical ethics and the ethics of care; and technology development is guided by frameworks such as trustworthy AI and cybersecurity ethics. AI and robotics offer many opportunities to improve the care and quality of life of the elderly in their homes. However, the ethical use of these technologies requires careful consideration and discussion to ensure they support the individual needs and values of the elderly. Guided by ethical principles, we can leverage the best aspects of technology while ensuring that the elderly can maintain a dignified and valued life at home. Service developers and providers should view the careful handling of ethical issues as a competitive advantage and opportunity, not a burden.

Author Contributions

Conceptualization, J.R.; methodology, J.R. and J.H.; validation, J.R. and J.H.; formal analysis, J.R. and J.H.; investigation, J.R. and J.H.; resources, J.R. and J.H.; data curation, J.R. and J.H.; writing—original draft preparation, J.R. and J.H.; writing—review and editing, J.R. and J.H.; supervision, J.R.; project administration, J.R.; funding acquisition, J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the SHAPES Project, which has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement number 857159.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Author Jaakko Helin was employed by COJOT. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. SHAPES. 2022. Available online: https://shapes2020.eu/ (accessed on 1 September 2024).
  2. Cresswell, K.; Cunningham-Burley, S.; Sheikh, A. Health Care Robotics: Qualitative Exploration of Key Challenges and Future Directions. J. Med. Internet Res. 2018, 20, e10410. Available online: https://www.jmir.org/2018/7/e10410/ (accessed on 1 September 2024). [CrossRef] [PubMed]
  3. Van Aerschot, L.; Parviainen, J. Robots responding to care needs? A multitasking care robot pursued for 25 years, available products offer simple entertainment and instrumental assistance. Ethics Inf. Technol. 2020, 22, 247–256. [Google Scholar] [CrossRef]
  4. Westerlund, M. An Ethical Framework for Smart Robots. Technol. Innov. Manag. Rev. 2020, 10, 35–44. [Google Scholar] [CrossRef]
  5. Beauchamp, T.; Childress, J. Principles of Biomedical Ethics, 5th ed.; Oxford University Press: New York, NY, USA, 2001. [Google Scholar]
  6. Gilligan, C. In a Different Voice: Psychological Theory and Women’s Development; Harvard University Press: Cambridge, MA, USA, 1982. [Google Scholar]
  7. Loi, M.; Christen, M.; Kleine, N.; Weber, K. Cybersecurity in health—Disentangling value tensions. J. Inf. Commun. Ethics Soc. 2019, 17, 229–245. [Google Scholar] [CrossRef]
  8. van Bavel, J.; Reher, D.S. The baby boom and its causes: What we know and what we need to know. Popul. Dev. Rev. 2013, 39, 257–288. [Google Scholar] [CrossRef]
  9. Zamiela, C.; Hossain, N.U.; Jaradat, R. Enablers of Resilience in the Healthcare Supply Chain: A Case Study of U.S Healthcare Industry during COVID-19 Pandemic. Res. Transp. Econ. 2022, 93, 101174. [Google Scholar] [CrossRef]
  10. Vahteristo, A.; Kinnunen, U.-M. Tekoälyn hyödyntäminen terveydenhuollossa terveysriskien ja riskitekijöiden tunnistamiseksi ja ennustamiseksi. Finn. J. Ehealth Ewelfare 2019, 11. [Google Scholar] [CrossRef]
  11. Koi, P.; Heimo, O. Koneoppimisalgoritmit mahdollistavat jo ihmisen parantelun. In Tekoäly, Ihminen ja Yhteiskunta; Raatikainen, P., Ed.; Gaudeamus: Tallinna, Estonia, 2021; pp. 217–233. [Google Scholar]
  12. Vähäkainu, P.; Neittaanmäki, P. Tekoäly Terveydenhuollossa. Jyväskylän Yliopisto. Informaatioteknologian Julkaisuja No. 45/2018. 2018. Available online: https://jyx.jyu.fi/handle/123456789/57682 (accessed on 1 September 2024).
  13. Heinäsenaho, M.; Äyräs-Blumberg, O.; Lähesmaa, J. Tekoäly Mullistaa Terveydenhuoltoa—Mahdollisuudet Hyödynnettävä Viipymättä. Valtioneuvosto. 14 April 2023. Available online: https://valtioneuvosto.fi/-/1271139/tekoaly-mullistaa-terveydenhuoltoa-mahdollisuudet-hyodynnettava-viipymatta (accessed on 1 September 2024).
  14. Grieves, M.; Vickers, J. Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. In Transdisciplinary Perspectives on Complex Systems; Springer: Berlin/Heidelberg, Germany, 2017; pp. 85–113. [Google Scholar]
  15. Liu, Y.; Zhang, L.; Yang, Y.; Zhou, L.; Ren, L.; Wang, F.; Liu, R.; Pang, Z.; Deen, M.J. A Novel Cloud-Based Framework for the Elderly Healthcare Services Using Digital Twin. IEEE Access 2019, 7, 49088–49101. [Google Scholar] [CrossRef]
  16. Kettunen, P.; Hahto, A.; Kopponen, A.; Mikkonen, T. Predictive “maintenance” of citizens with digital twins. In Proceedings of the 26th Finnish National Conference on Telemedicine and eHealth, Oulu, Finland, 7–8 October 2021. [Google Scholar]
  17. Kocabas, O.; Soyata, T. Towards Privacy-Preserving Medical Cloud Computing Using Homomorphic Encryption. In Virtual and Mobile Healthcare: Breakthroughs in Research and Practice; Information Resources Management Association, Ed.; IGI Global: Hershey, PA, USA, 2020; pp. 93–125. [Google Scholar]
  18. Kyrarini, M.; Lygerakis, F. A Survey of Robots in Healthcare. Technologies 2020, 9, 8. [Google Scholar] [CrossRef]
  19. Soriano, G.P.; Yasuhara, Y.; Ito, H.; Matsumoto, K.; Osaka, K.; Kai, Y.; Locsin, R.; Schoenhofer, S.; Tanioka, T. Robots and Robotics in Nursing. Healthcare 2022, 10, 1571. [Google Scholar] [CrossRef] [PubMed]
  20. Turja, T.; Saurio, R.; Katila, J.; Hennala, L.; Pekkarinen, S.; Melkas, H. Intention to Use Exoskeletons in Geriatric Care Work: Need for Ergonomic and Social Design. Ergon. Des. 2020, 30, 13–16. [Google Scholar] [CrossRef]
  21. Pirhonen, J.; Melkas, H.; Laitinen, A.; Pekkarinen, S. Could robots strengthen the sense of autonomy of older people residing in assisted living facilities?—A future-oriented study. Ethics Inf. Technol. 2020, 22, 151–162. [Google Scholar] [CrossRef]
  22. Lera, F.J.R.; Llamas, C.F.; Guerrero, Á.M.; Olivera, V.M. Cybersecurity of robotics and autonomous systems: Privacy and safety. In Robotics-Legal, Ethical and Socioeconomic Impacts; INTECH: Vienna, Austria, 2017. [Google Scholar]
  23. Fosch-Villaronga, E.; Mahler, T. Cybersecurity, safety and robots: Strengthening the link between cybersecurity and safety in the context of care robots. Comput. Law Secur. Rev. 2021, 41, 105528. [Google Scholar] [CrossRef]
  24. Giansanti, D.; Gulino, R. The Cybersecurity and the Care Robots: A Viewpoint on the Open Problems and the Perspectives. Healthcare 2021, 9, 1653. [Google Scholar] [CrossRef] [PubMed]
  25. Rajamäki, J.; Järvinen, M. Exploring care robots’ cybersecurity threats from care robotics specialists’ point of view. In Proceedings of the 21st European Conference on Cyber Warfare and Security (ECCWS 2022), Chester, UK, 16–17 June 2022. [Google Scholar]
  26. Aaltonen, M. Tekoäly; Alma Talent: Helsinki, Finland, 2019. [Google Scholar]
  27. European Commission. Ethics Guidelines for Trustworthy AI. 8 August 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 1 September 2024).
  28. Juujärvi, S.; Ronkainen, K.; Silvennoinen, P. The ethics of care and justice in primary nursing of older patients. Clin. Ethics 2019, 14, 187–194. [Google Scholar] [CrossRef]
  29. Coeckelbergh, M. Tekoälyn Etiikka; Libris/Painoliber Oy: Helsinki, Finland, 2021. [Google Scholar]
  30. Sarlio-Siintola, S. SHAPES. Ethical Framework Final Version. 30 April 2021. Available online: https://shapes2020.eu/deliverables/ (accessed on 1 September 2024).
  31. Rajamäki, J.; Rocha, P.; Perenius, M.; Gioulekas, F. SHAPES Project Pilots’ Self-assessment for Trustworthy AI. In Proceedings of the 12th International Conference on Dependable Systems, Services and Technologies (DESSERT), Athens, Greece, 9–11 December 2022; pp. 1–7. [Google Scholar]
  32. Christen, M.; Gordijn, B.; Loi, M. The Ethics of Cybersecurity; Springer Nature: Dordrecht, The Netherlands, 2020. [Google Scholar]
  33. van de Poel, I. Core values and value conflicts in cybersecurity: Beyond privacy versus security. In The Ethics of Cybersecurity; Springer Nature: Dordrecht, The Netherlands, 2020; pp. 45–71. [Google Scholar]
  34. Yin, R. Case Study Research: Design and Methods, 4th ed.; Sage: Thousand Oaks, CA, USA, 2009. [Google Scholar]
  35. Kankkunen, P.; Vehviläinen-Julkunen, K. Tutkimus Hoitotieteessä, 3rd revised ed.; Sanoma Pro: Helsinki, Finland, 2013. [Google Scholar]
  36. Hirsjärvi, S.; Remes, P.; Sajavaara, P. Tutki ja Kirjoita; Otava: Keuruu, Finland, 2007. [Google Scholar]
  37. Hirsjärvi, S.; Hurme, H. Tutkimushaastattelu—Teemahaastattelun Teoria ja Käytäntö; Gaudeamus: Helsinki, Finland, 2014. [Google Scholar]
  38. Hirvikoski, T.; Äyväri, A.; Hagman, K.; Wollstén, P. Yhteiskehittämisen Käsikirja; Laurea: Espoo, Finland, 2018; ISBN 978-951-857-776-1. [Google Scholar]
  39. SHAPES. SHAPES Pilots. 2023. Available online: https://shapes2020.eu/about-shapes/pilots/ (accessed on 1 September 2024).
  40. Rajamäki, J.; Gioulekas, F.; Rocha, P.; Garcia, X.; Ofem, P. ALTAI Tool for Assessing AI-Based Technologies: Lessons Learned and Recommendations from SHAPES Pilots. Healthcare 2023, 11, 1454. [Google Scholar] [CrossRef] [PubMed]
  41. Huang, P.; Kim, K.; Schermer, M. Ethical Issues of Digital Twins for Personalized Health Care Service: Preliminary Mapping Study. J. Med. Internet Res. 2022, 24, e33081. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Perspectives of ethical thinking in this article.
Table 1. Sources of evidence.

Source category: Documents — deliverables produced in WP8 “SHAPES Legal, Ethics, Privacy and Fundamental Rights Protection” of the SHAPES project. Number: 14 documents. Description:
  D8.1 Set-up Ethical Advisory Board
  D8.2 Baseline for SHAPES Project Ethics
  D8.3 Assessing the Regulatory Frameworks Facilitating Pan-European Smart Healthy Aging
  D8.4 SHAPES Ethical Framework V1
  D8.5 First Periodic Ethical Report
  D8.6 Second Periodic Ethical Reports
  D8.7 Third Periodic Ethical Report
  D8.10 Privacy and Ethical Risk Assessment
  D8.11 Privacy and Data Protection Legislation in SHAPES
  D8.13 SHAPES Data Management Plan
  D8.14 SHAPES Ethical Framework

Source category: Interviews. Number: 5 individuals. Description:
  (1) Master of Laws with court training
  (2) Practical Nurse
  (3) CEO of a Finnish robotic systems development company
  (4) A relative of an older person
  (5) Senior Manager of a global information technology services and consulting company

Source category: Artifact “ALTAI Tool” — lessons learned from the tool in SHAPES pilots. Number: 7 pilots. Description:
  (1) Smart living environment for healthy aging at home
  (2) Improving in-home and community-based care
  (3) Medicine control and optimisation
  (4) Psycho-social and cognitive stimulation promoting wellbeing
  (5) Caring for older individuals with neurodegenerative diseases
  (6) Physical rehabilitation at home
  (7) Cross-border health data exchange