The Ethics and Cybersecurity of Artificial Intelligence and Robotics in Helping The Elderly to Manage at Home
Abstract
1. Introduction
2. Literature Review
2.1. Use of Artificial Intelligence in the Healthcare Supply
2.2. Care Robots
2.3. Cybersecurity of Care Robots
2.3.1. Ambiguity in Regulation
2.3.2. Practical Cybersecurity Concerns
2.3.3. Examples of Cybersecurity Threats to Care Robots
- Stealth attack: The attacker manipulates the robot’s sensors, causing, for example, a mobile robot to collide.
- Replay attack: The attacker intercepts system communications and manipulates data traffic, disrupting sensor operations.
- False data injection: The attacker modifies the data processed by the robot.
- Eavesdropping: One of the most common attacks on robots from a privacy perspective.
- Denial of Service (DoS): The attacker effectively stops the robot’s operation. A DoS attack may not cause direct harm to the device or its user but prevents the robot from providing its service.
- Remote access: One of the most dangerous attacks, where an external user takes control of the device, potentially causing harm to both privacy and physical health.
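Several of these attacks (replay, false data injection) exploit communication links that lack message authentication and freshness checks. As a minimal illustrative sketch only (Python, with a hypothetical pre-shared key and JSON-encoded sensor packets, not any specific care-robot protocol), a receiver can reject both tampered and replayed packets by verifying an HMAC tag and a monotonically increasing sequence number:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"shared-device-key"  # hypothetical pre-shared key


def sign(message: dict, seq: int) -> dict:
    """Attach a sequence number and an HMAC-SHA256 tag to a sensor message."""
    payload = json.dumps({"msg": message, "seq": seq}, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"msg": message, "seq": seq, "tag": tag}


class Receiver:
    """Rejects replayed (stale sequence) or tampered (bad MAC) packets."""

    def __init__(self):
        self.last_seq = -1

    def accept(self, packet: dict) -> bool:
        payload = json.dumps({"msg": packet["msg"], "seq": packet["seq"]},
                             sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, packet["tag"]):
            return False  # false data injection: MAC does not match payload
        if packet["seq"] <= self.last_seq:
            return False  # replay: sequence number is not fresh
        self.last_seq = packet["seq"]
        return True


rx = Receiver()
fresh = sign({"heart_rate": 72}, seq=1)
assert rx.accept(fresh)      # genuine packet accepted
assert not rx.accept(fresh)  # identical packet replayed -> rejected
```

The sequence-number check stops replays of intercepted traffic, while the MAC check stops modification of the data in transit; neither, of course, addresses eavesdropping or denial of service, which need encryption and availability measures respectively.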
2.4. Ethics in the Above
2.4.1. Ethical AI in Healthcare
- AI must comply with the law (the legal basis is the EU’s founding treaties, the EU Charter of Fundamental Rights, and international human rights legislation);
- AI must be ethical, adhering to ethical principles and values;
- AI must be socially and technically robust.
- Respect for people’s right to self-determination;
- Prevention of harm;
- Fairness;
- Explainability.
2.4.2. Cybersecurity Ethics
3. Materials and Methods
3.1. Research Process
3.2. Document Selection and Analysis
3.3. Interviews
3.4. Artifact Analysis
4. Results
4.1. Analysis of SHAPES Deliverables
4.1.1. AI’s Role in Healthcare Supply and Elderly Care
4.1.2. Robotics Offered by SHAPES
4.1.3. Cybersecurity Challenges of Care Robots
4.1.4. Ethical Challenges of AI
4.1.5. Ethical Value Conflicts in Cybersecurity
- Usability vs. Security: Security measures can hinder usability, while designs that prioritize ease of use can weaken security. Usability that is too complex is itself a security risk, because users may inadvertently misuse or damage a device or service. For instance, multi-factor authentication enhances security but slows down each use of a service, which impairs usability for services accessed many times a day.
- Confidentiality and Privacy vs. Security: Confidentiality is a key component of information security, alongside availability and integrity. However, balancing these elements can be challenging, as enhancing one aspect may compromise another.
- Privacy vs. Efficiency and Quality of Services: Privacy often conflicts with the efficiency and quality of services. Improving service quality and efficiency frequently involves sharing information to discover new treatment and prescription solutions, which can compromise privacy.
4.2. Interview Findings
“[In Finland,] the patient liability law is always implemented when using robots to provide any service described in the said law. After that, the question remains in what terms is the robot supplier accountable to the health care provider.” (Interview of a Deputy Judge, 2022.)
“Any forced health treatment measures are ethically unjustified when the patient is not under guardianship. The patient has autonomy always even if the caretaker might disagree with the patient.” (Interview of a Nurse, 2022.)
“We used the EU Ethics guidelines of trustworthy AI to test it, since the development framework used to be very new back then. We used the 20-section checklist… and for example, the parts concerning continuous supervision, training, and management were important. Through those, we could compose an action plan for the upcoming phases… The framework was very useful to us.” (Interview of a Senior Manager, 2022.)
“In our project, for example, we were advised not to let the algorithm weigh the income or gender of the patients when making decisions. That doesn’t mean, however, that there’s no correlation there. For example, morbidity in women is different from men. Combined with diagnostics, it matters, because for example breast cancer does not manifest in men similarly as it does in women. Likewise, musculoskeletal diseases manifest differently in different income classes. These factors could have improved our decision-making model, but we could not use them.” (Interview of a Senior Manager, 2022.)
“For example, if we teach a hundred different scenarios to an AI and consider it safe after that, because someone has defined that it is safe, it still doesn’t make it 100% safe. This kind of situation could in principle happen when monitoring heart rate, where the algorithm has been taught that a certain blood pressure is okay, but there are still special cases when it’s not. It all depends on what kind of data has been used to teach the AI… The data hasn’t necessarily included sufficient readiness to react to certain situations. This can lead to a sort of randomness.” (Interview of a CEO, 2022.)
“[The accuracy] brought in the ethical questions. We had to consider which data was such, which could be used in decision-making, and how we can raise accuracy. We could not define a good enough accuracy in advance. Traditionally, any accuracy increase is a good thing, but now we had to compromise [.]” (Interview of a Senior Manager, 2022.)
“For example, when considering health care, we can teach an algorithm to make decisions similarly to a doctor, who makes decisions based on their experience and work history. However, some fraction of the decisions are always more complex. There could be diseases that can’t be diagnosed with only a limited set of information. In those cases, thresholds have to be defined for example whether it is a specific disease or not. It is typically safest to determine the threshold as conservatively as possible […] This threshold definition is probably the most ethical question. Too high a threshold can cause the machine to not function correctly.” (Interview of a CEO, 2022.)
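The threshold trade-off the last interviewee describes can be shown with a toy sketch (Python; the risk scores and the 0.5/0.3 cut-offs below are hypothetical illustration values, not figures from the project). For a screening alarm, the conservative choice is a lower threshold: more cases are flagged for human review, trading extra false alarms for fewer missed diagnoses.

```python
def flag_for_review(risk_scores, threshold):
    """Flag each case whose model risk score reaches the decision threshold."""
    return [score >= threshold for score in risk_scores]


# Hypothetical model outputs (estimated probability of disease) for five patients.
scores = [0.10, 0.35, 0.55, 0.80, 0.95]

neutral = flag_for_review(scores, threshold=0.5)       # flags 3 of 5 cases
conservative = flag_for_review(scores, threshold=0.3)  # flags 4 of 5 cases

# The conservative (lower) threshold also catches the 0.35 borderline case,
# accepting more false alarms in exchange for fewer missed diseases.
assert sum(conservative) > sum(neutral)
```

Where exactly that threshold sits is, as the quote notes, an ethical decision as much as a technical one: the cost of a false alarm and the cost of a missed disease fall on different parties.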
4.3. ALTAI Tool
- Data and algorithm design: Recommendations focus on input data and algorithm design, such as avoiding bias and ensuring diversity in the data. Additionally, the use of advanced technical tools is recommended to understand the data and model, as well as to test and monitor biases throughout the AI system’s lifecycle.
- Awareness and training: Recommendations relate to training AI designers and developers on the potential for bias and discrimination in their work. Additionally, mechanisms are recommended for flagging bias issues and ensuring that information about the AI system is accessible to all users, including those using assistive devices.
- Defining fairness: Recommendations concern defining fairness and consulting with affected communities to ensure the definition is appropriate and inclusive. Additionally, the creation of quantitative metrics to measure and test fairness is suggested.
- Risk assessment: Recommendations relate to assessing the potential unfairness of the AI system’s outcomes for end-users or target communities. Additionally, identifying groups that may be disproportionately affected by the system’s outcomes is recommended.
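The “quantitative metrics” recommendation above can be made concrete with one widely used example, the demographic parity difference. This is a sketch under assumptions: the decision data and group labels are hypothetical, and ALTAI itself does not prescribe this particular metric.

```python
from collections import defaultdict


def demographic_parity_difference(outcomes, groups):
    """Gap in positive-outcome rate between the best- and worst-treated
    groups; 0.0 means parity on this (one) fairness metric."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


# Hypothetical screening decisions (1 = offered follow-up care).
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(decisions, group)  # 3/4 - 1/4 = 0.5
```

A single number like this cannot settle whether a system is fair, but it gives the consultation and risk-assessment steps above something measurable to monitor over the AI system’s lifecycle.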
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- SHAPES. 2022. Available online: https://shapes2020.eu/ (accessed on 1 September 2024).
- Cresswell, K.; Cunningham-Burley, S.; Sheikh, A. Health Care Robotics: Qualitative Exploration of Key Challenges and Future Directions. J. Med. Internet Res. 2018, 20, e10410. Available online: https://www.jmir.org/2018/7/e10410/ (accessed on 1 September 2024).
- Van Aerschot, L.; Parviainen, J. Robots responding to care needs? A multitasking care robot pursued for 25 years, available products offer simple entertainment and instrumental assistance. Ethics Inf. Technol. 2020, 22, 247–256.
- Westerlund, M. An Ethical Framework for Smart Robots. Technol. Innov. Manag. Rev. 2020, 10, 35–44.
- Beauchamp, T.; Childress, J. Principles of Biomedical Ethics, 5th ed.; Oxford University Press: New York, NY, USA, 2001.
- Gilligan, C. In a Different Voice: Psychological Theory and Women’s Development; Harvard University Press: Cambridge, MA, USA, 1982.
- Loi, M.; Christen, M.; Kleine, N.; Weber, K. Cybersecurity in health—Disentangling value tensions. J. Inf. Commun. Ethics Soc. 2019, 17, 229–245.
- van Bavel, J.; Reher, D.S. The baby boom and its causes: What we know and what we need to know. Popul. Dev. Rev. 2013, 39, 257–288.
- Zamiela, C.; Hossain, N.U.; Jaradat, R. Enablers of Resilience in the Healthcare Supply Chain: A Case Study of U.S. Healthcare Industry during COVID-19 Pandemic. Res. Transp. Econ. 2022, 93, 101174.
- Vahteristo, A.; Kinnunen, U.-M. Tekoälyn hyödyntäminen terveydenhuollossa terveysriskien ja riskitekijöiden tunnistamiseksi ja ennustamiseksi. Finn. J. eHealth eWelfare 2019, 11.
- Koi, P.; Heimo, O. Koneoppimisalgoritmit mahdollistavat jo ihmisen parantelun. In Tekoäly, Ihminen ja Yhteiskunta; Raatikainen, P., Ed.; Gaudeamus: Tallinna, Estonia, 2021; pp. 217–233.
- Vähäkainu, P.; Neittaanmäki, P. Tekoäly Terveydenhuollossa; Informaatioteknologian Julkaisuja No. 45/2018; Jyväskylän Yliopisto, 2018. Available online: https://jyx.jyu.fi/handle/123456789/57682 (accessed on 1 September 2024).
- Heinäsenaho, M.; Äyräs-Blumberg, O.; Lähesmaa, J. Tekoäly Mullistaa Terveydenhuoltoa—Mahdollisuudet Hyödynnettävä Viipymättä. Valtioneuvosto, 14 April 2023. Available online: https://valtioneuvosto.fi/-/1271139/tekoaly-mullistaa-terveydenhuoltoa-mahdollisuudet-hyodynnettava-viipymatta (accessed on 1 September 2024).
- Grieves, M.; Vickers, J. Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. In Transdisciplinary Perspectives on Complex Systems; Springer: Berlin/Heidelberg, Germany, 2017; pp. 85–113.
- Liu, Y.; Zhang, L.; Yang, Y.; Zhou, L.; Ren, L.; Wang, F.; Liu, R.; Pang, Z.; Deen, M.J. A Novel Cloud-Based Framework for the Elderly Healthcare Services Using Digital Twin. IEEE Access 2019, 7, 49088–49101.
- Kettunen, P.; Hahto, A.; Kopponen, A.; Mikkonen, T. Predictive “maintenance” of citizens with digital twins. In Proceedings of the 26th Finnish National Conference on Telemedicine and eHealth, Oulu, Finland, 7–8 October 2021.
- Kocabas, O.; Soyata, T. Towards Privacy-Preserving Medical Cloud Computing Using Homomorphic Encryption. In Virtual and Mobile Healthcare: Breakthroughs in Research and Practice; IGI Global: Hershey, PA, USA, 2020; pp. 93–125.
- Kyrarini, M.; Lygerakis, F. A Survey of Robots in Healthcare. Technologies 2020, 9, 8.
- Soriano, G.P.; Yasuhara, Y.; Ito, H.; Matsumoto, K.; Osaka, K.; Kai, Y.; Locsin, R.; Schoenhofer, S.; Tanioka, T. Robots and Robotics in Nursing. Healthcare 2022, 10, 1571.
- Turja, T.; Saurio, R.; Katila, J.; Hennala, L.; Pekkarinen, S.; Melkas, H. Intention to Use Exoskeletons in Geriatric Care Work: Need for Ergonomic and Social Design. Ergon. Des. 2020, 30, 13–16.
- Pirhonen, J.; Melkas, H.; Laitinen, A.; Pekkarinen, S. Could robots strengthen the sense of autonomy of older people residing in assisted living facilities?—A future-oriented study. Ethics Inf. Technol. 2020, 22, 151–162.
- Lera, F.J.R.; Llamas, C.F.; Guerrero, Á.M.; Olivera, V.M. Cybersecurity of robotics and autonomous systems: Privacy and safety. In Robotics-Legal, Ethical and Socioeconomic Impacts; INTECH: Vienna, Austria, 2017.
- Fosch-Villaronga, E.; Mahler, T. Cybersecurity, safety and robots: Strengthening the link between cybersecurity and safety in the context of care robots. Comput. Law Secur. Rev. 2021, 41, 105528.
- Giansanti, D.; Gulino, R. The Cybersecurity and the Care Robots: A Viewpoint on the Open Problems and the Perspectives. Healthcare 2021, 9, 1653.
- Rajamäki, J.; Järvinen, M. Exploring care robots’ cybersecurity threats from care robotics specialists’ point of view. In Proceedings of the 21st European Conference on Cyber Warfare and Security (ECCWS 2022), Chester, UK, 16–17 June 2022.
- Aaltonen, M. Tekoäly; Alma Talent: Helsinki, Finland, 2019.
- European Commission. Ethics Guidelines for Trustworthy AI. 8 August 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 1 September 2024).
- Juujärvi, S.; Ronkainen, K.; Silvennoinen, P. The ethics of care and justice in primary nursing of older patients. Clin. Ethics 2019, 14, 187–194.
- Coeckelbergh, M. Tekoälyn Etiikka; Libris/Painoliber Oy: Helsinki, Finland, 2021.
- Sarlio-Siintola, S. SHAPES Ethical Framework Final Version. 30 April 2021. Available online: https://shapes2020.eu/deliverables/ (accessed on 1 September 2024).
- Rajamäki, J.; Rocha, P.; Perenius, M.; Gioulekas, F. SHAPES Project Pilots’ Self-assessment for Trustworthy AI. In Proceedings of the 12th International Conference on Dependable Systems, Services and Technologies (DESSERT), Athens, Greece, 9–11 December 2022; pp. 1–7.
- Christen, M.; Gordijn, B.; Loi, M. The Ethics of Cybersecurity; Springer Nature: Dordrecht, The Netherlands, 2020.
- van de Poel, I. Core values and value conflicts in cybersecurity: Beyond privacy versus security. In The Ethics of Cybersecurity; Springer Nature: Dordrecht, The Netherlands, 2020; pp. 45–71.
- Yin, R. Case Study Research: Design and Methods, 4th ed.; Sage: Thousand Oaks, CA, USA, 2009.
- Kankkunen, P.; Vehviläinen-Julkunen, K. Tutkimus Hoitotieteessä, 3rd rev. ed.; Sanoma Pro: Helsinki, Finland, 2013.
- Hirsjärvi, S.; Remes, P.; Sajavaara, P. Tutki ja Kirjoita; Otava: Keuruu, Finland, 2007.
- Hirsjärvi, S.; Hurme, H. Tutkimushaastattelu—Teemahaastattelun Teoria ja Käytäntö; Gaudeamus: Helsinki, Finland, 2014.
- Hirvikoski, T.; Äyväri, A.; Hagman, K.; Wollstén, P. Yhteiskehittämisen Käsikirja; Laurea: Espoo, Finland, 2018; ISBN 978-951-857-776-1.
- SHAPES. SHAPES Pilots. 2023. Available online: https://shapes2020.eu/about-shapes/pilots/ (accessed on 1 September 2024).
- Rajamäki, J.; Gioulekas, F.; Rocha, P.; Garcia, X.; Ofem, P. ALTAI Tool for Assessing AI-Based Technologies: Lessons Learned and Recommendations from SHAPES Pilots. Healthcare 2023, 11, 1454.
- Huang, P.; Kim, K.; Schermer, M. Ethical Issues of Digital Twins for Personalized Health Care Service: Preliminary Mapping Study. J. Med. Internet Res. 2022, 24, e33081.
Source Categories | Number | Description
---|---|---
Documents: deliverables produced in WP8 “SHAPES Legal, Ethics, Privacy and Fundamental Rights Protection” of the SHAPES project | 14 documents | D8.1 Set-up Ethical Advisory Board; D8.2 Baseline for SHAPES Project Ethics; D8.3 Assessing the Regulatory Frameworks Facilitating Pan-European Smart Healthy Aging; D8.4 SHAPES Ethical Framework V1; D8.5 First Periodic Ethical Report; D8.6 Second Periodic Ethical Report; D8.7 Third Periodic Ethical Report; D8.10 Privacy and Ethical Risk Assessment; D8.11 Privacy and Data Protection Legislation in SHAPES; D8.13 SHAPES Data Management Plan; D8.14 SHAPES Ethical Framework
Interviews | 5 individuals | (1) Master of Laws with court training; (2) Practical Nurse; (3) CEO of a Finnish robotic systems development company; (4) a relative of an older person; (5) Senior Manager of a global information technology services and consulting company
Artifact “ALTAI Tool”: lessons learned from the tool in SHAPES pilots | 7 pilots | (1) Smart living environment for healthy aging at home; (2) improving in-home and community-based care; (3) medicine control and optimisation; (4) psycho-social and cognitive stimulation promoting wellbeing; (5) caring for older individuals with neurodegenerative diseases; (6) physical rehabilitation at home; (7) cross-border health data exchange
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rajamäki, J.; Helin, J. The Ethics and Cybersecurity of Artificial Intelligence and Robotics in Helping The Elderly to Manage at Home. Information 2024, 15, 729. https://doi.org/10.3390/info15110729