Algorethics in Healthcare: Balancing Innovation and Integrity in AI Development
Abstract
1. Introduction
1.1. Applications of Artificial Intelligence Algorithms in Healthcare: Innovation and Perspectives
1.2. The Role of Algorethics in Modern Healthcare, Starting Point, and Purpose of the Paper
- Evaluate Contributions: Categorize and analyze how algorethics is contributing to the ethical development and deployment of AI technologies, highlighting the emerging themes that illustrate its impact.
- Explore Opportunities and Challenges: Identify the opportunities that algorethics presents for improving AI ethics, as well as the key challenges that still need to be addressed.
- Provide Recommendations: Offer recommendations for enhancing algorethics in AI, aiming to better guide the ethical use and governance of AI systems.
2. Materials and Methods
2.1. Search Strategies
2.2. Assessment Criteria for the Inclusion
2.3. Assessment Process
2.4. Managing Bias in the Narrative Review
2.5. Selected Studies
2.6. Further Analysis Insights
3. Results
- 3.3.1 Assessing the Impact of Algorithmic Ethics: Analyzes how algorithmic ethics (algorethics) contributes to the ethical development and deployment of AI technologies. This section categorizes the contributions and highlights emerging themes that demonstrate the impact of algorethics on AI practices.
- 3.3.2 Identifying Opportunities and Overcoming Challenges in Algorithmic Ethics: Identifies the opportunities that algorethics presents for advancing AI ethics and outlines the major challenges that need to be addressed. This section explores both the potential benefits and the obstacles facing the integration of ethical principles into AI systems.
- 3.3.3 Strategic Recommendations for Enhancing Algorithmic Ethics: Offers actionable recommendations for improving algorethics in AI. This section aims to guide the ethical use and governance of AI systems, proposing strategies for enhancing ethical practices and addressing identified challenges.
3.1. Synoptic Diagram of Results
3.2. Trends
3.3. In-Depth Review of Algorithmic Ethics: Evaluating Impact, Opportunities, Challenges, and Strategic Recommendations
3.3.1. Assessing the Impact of Algorithmic Ethics
- 1. AI Validation and Generalizability:
- Overview: Effective AI systems must be rigorously validated to ensure their reliability and applicability across diverse scenarios. Getzmann et al. [31] emphasized the need for thorough validation processes in musculoskeletal ultrasound to confirm the generalizability of AI algorithms. Similarly, Kim et al. [40] highlighted the importance of ongoing evaluation in digital pathology to address validation and interpretability issues. An illustrative validation sketch follows this list.
- Ethical Implications: Independent and comprehensive validation processes are crucial for ensuring that AI algorithms are unbiased and reliable. This emphasis on validation helps mitigate potential biases and ensures that AI systems perform consistently across varied real-world conditions, thereby addressing ethical concerns related to algorithmic fairness and reliability.
- 2. Ethical Implications of Data Use:
- Overview: The ethical use of data is a critical aspect of AI development, involving concerns about data privacy, potential biases, and the necessity for diverse datasets. Daher et al. [32] examined data privacy issues and the risk of biased outcomes in pancreatic cancer detection, while Veritti et al. [37] discussed the implications of data privacy and healthcare inequality. Wang et al. [38] highlighted the importance of addressing biases in medical AI to promote fairness.
- Ethical Implications: Addressing data privacy and ensuring the representativeness of datasets are essential for developing fair AI systems. These studies underscore the importance of robust ethical guidelines to protect personal information and ensure equitable outcomes in AI applications.
- 3. Algorithmic Bias and Fairness:
- Overview: Tackling algorithmic bias and ensuring fairness in AI applications are critical to preventing the exacerbation of existing disparities. Grzybowski et al. [33] and Maroufi et al. [34] discussed the risks of biased outcomes resulting from skewed training data and emphasized the need for transparent AI systems. Singh et al. [36] also addressed the importance of fairness in pharmacological research.
- Ethical Implications: Ensuring fairness involves identifying and mitigating biases in AI algorithms, which requires the creation of standardized practices and robust evaluation methods. These efforts are vital for maintaining equity and trust in AI technologies, ensuring that all demographic groups are treated fairly. A brief subgroup-metric sketch follows the focus-area table below.
- 4. Transparency and Explainability:
- Overview: Transparency and explainability are essential for fostering trust in AI systems. Vo et al. [35] and Saw et al. [42] stressed the need for AI systems to be transparent and for their decision-making processes to be explainable. Maroufi et al. [34] also highlighted the need for clear ethical guidelines to ensure responsible AI use.
- Ethical Implications: Transparent and explainable AI systems are fundamental to ethical AI development. By making AI processes understandable, these studies advocate for practices that enhance user trust and align with ethical standards, ensuring that AI technologies are used responsibly and effectively. An illustrative explainability sketch follows the table of studies and AI techniques below.
- 5. Interdisciplinary Collaboration and Ethical Frameworks:
- Overview: Developing comprehensive ethical frameworks for AI requires interdisciplinary collaboration. Kontiainen et al. [39] proposed using access to justice as a framework for AI ethics, while Akgun et al. [43] and Kazim et al. [41] highlighted the need for integrating various perspectives to address ethical and regulatory challenges.
- Ethical Implications: Combining insights from multiple disciplines helps in crafting holistic ethical guidelines that address the complexities of AI. This collaborative approach is essential for developing robust frameworks that ensure responsible AI development and implementation.
- 6. Impact on Stakeholders and Society:
- Overview: AI technologies carry broad implications for stakeholders and for society at large. Bonnefon [44] cautioned against misleading analogies between AI and human cognition, while Jalal et al. [45] examined the integration of AI into emergency radiology and its consequences for care delivery and oversight.
- Ethical Implications: Considering the broader societal impact of AI is crucial for ethical development. These studies emphasize the need for responsible AI practices that account for stakeholder impacts and societal outcomes, ensuring that AI technologies contribute positively to society.
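The distinction between internal and external validation raised by Getzmann et al. [31] and Kim et al. [40] can be made concrete with a minimal sketch. The code below is illustrative only and is not drawn from the reviewed studies; the synthetic cohorts, the random-forest model, and the AUC metric are assumptions chosen for brevity.

```python
# Minimal sketch (illustrative, not from the reviewed studies): contrasting
# internal cross-validation with validation on an external dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical "internal" development cohort and "external" cohort from a
# different site; in practice these would be real, curated clinical data.
X_internal = rng.normal(size=(500, 10))
y_internal = (X_internal[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_external = rng.normal(loc=0.3, size=(200, 10))          # shifted distribution
y_external = (X_external[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# Internal validation: K-fold cross-validation on the development cohort.
internal_auc = cross_val_score(model, X_internal, y_internal,
                               cv=5, scoring="roc_auc").mean()

# External validation: train on the full internal cohort, then test on the
# unseen external cohort to probe generalizability.
model.fit(X_internal, y_internal)
external_auc = roc_auc_score(y_external,
                             model.predict_proba(X_external)[:, 1])

print(f"Internal 5-fold AUC: {internal_auc:.3f}")
print(f"External-cohort AUC: {external_auc:.3f}")  # often lower under distribution shift
```

In practice, the external cohort would come from a different institution or population, and a marked drop in performance would signal limited generalizability.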
Focus Area | Mini-Summary | Focus | References |
---|---|---|---|
AI Validation and Generalizability | Emphasizes the importance of rigorous validation methods to ensure that AI algorithms are reliable and applicable across different scenarios. AI systems must undergo thorough testing to demonstrate their effectiveness and robustness in diverse real-world conditions. | The focus is on the need for independent and comprehensive validation processes to avoid biases and ensure that AI algorithms perform reliably and fairly when deployed in real-world settings. This includes using external datasets for validation. | Getzmann et al. [31], Kim et al. [40] |
Ethical Implications of Data Use | Discusses the critical ethical concerns related to the use of data in AI systems, including issues of privacy, potential biases in data, and the need for diverse and representative datasets. Ensuring data security and ethical handling of personal information is paramount. | The focus is on addressing concerns about data privacy, algorithmic bias, and the necessity of using diverse and representative datasets to enhance fairness and accuracy in AI applications. | Daher et al. [32], Veritti et al. [37], Wang et al. [38] |
Algorithmic Bias and Fairness | Highlights the risks associated with algorithmic bias, which can lead to unfair and unequal outcomes across different population groups. Calls for AI systems to be developed with fairness in mind to avoid exacerbating existing disparities. | The focus is on identifying and mitigating biases in AI algorithms to promote fairness and equity. This involves creating standardized practices for AI development and performance evaluation to ensure that all populations are treated equitably. | Grzybowski et al. [33], Maroufi et al. [34], Singh et al. [36] |
Transparency and Explainability | Stresses the need for AI systems to be transparent and their decision-making processes to be explainable. This helps build trust among users and stakeholders by making it clear how and why decisions are made by AI systems. | The focus is on developing AI systems that are transparent in their operations and that provide clear explanations of their decision-making processes. This enhances trust and ensures that AI technologies are used ethically. | Vo et al. [35], Saw et al. [42], Maroufi et al. [34] |
Interdisciplinary Collaboration and Ethical Frameworks | Advocates for a collaborative approach to developing ethical guidelines for AI through interdisciplinary research. This includes integrating perspectives from various fields to create comprehensive and robust ethical frameworks. | The focus is on fostering interdisciplinary collaboration to develop holistic ethical guidelines and regulatory frameworks that address the complexities of AI ethics. This approach aims to ensure that AI systems are designed and implemented responsibly. | Kontiainen et al. [39], Akgun et al. [43], Kazim et al. [41] |
Impact on Stakeholders and Society | Examines the broader societal implications of AI technologies, including their effects on various stakeholders and potential changes in societal dynamics. This includes considerations of how AI can influence healthcare access, quality, and equity. | The focus is on assessing how AI technologies impact different stakeholder groups and societal structures. It emphasizes the importance of ethical considerations in the development and deployment of AI to ensure positive societal outcomes. | Bonnefon [44], Jalal et al. [45] |
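As a companion to the fairness-focused rows above, the following minimal sketch shows one common way to surface algorithmic bias: comparing selection rates and sensitivity across subgroups. It is illustrative only; the synthetic cohort, the binary group attribute, and the logistic-regression model are assumptions, not methods taken from the cited studies.

```python
# Minimal sketch (illustrative only): comparing model behaviour across
# demographic subgroups as a basic algorithmic-bias check.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

# Hypothetical cohort with a binary sensitive attribute (e.g., two sites or
# demographic groups); sizes and effect sizes are assumptions for illustration.
n = 1000
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + group[:, None] * 0.4     # group-dependent shift
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

for g in (0, 1):
    mask = group == g
    selection_rate = pred[mask].mean()                   # demographic-parity check
    sensitivity = recall_score(y[mask], pred[mask])      # equal-opportunity check
    print(f"group {g}: selection rate {selection_rate:.2f}, "
          f"sensitivity {sensitivity:.2f}")
# Large gaps between groups flag potential bias that would need mitigation.
```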
Study | AI Techniques/Focus | Ethical Considerations |
---|---|---|
Getzmann et al. [31] | Deep Learning (DL): Convolutional neural networks (CNNs) for analyzing musculoskeletal ultrasound (MSK US) images. Machine learning (ML): Some studies utilized conventional ML techniques such as support vector machines (SVMs) and random forests. | Validation Gaps: The study revealed that while internal cross-validation techniques (like K-fold) were prevalent, none of the studies included external clinical validation. This raises concerns about the generalizability of AI algorithms across diverse clinical settings and populations, potentially leading to algorithmic biases and inaccuracies that could impact patient outcomes. |
Daher et al. [32] | Advanced AI Techniques: Deep learning models and ensemble methods for processing imaging and biomarker data related to pancreatic cancer detection and management. Support vector machines (SVMs) and random forests were also noted in some applications. | Data Scarcity and Privacy: Issues include the lack of comprehensive and diverse datasets, which can introduce biases in AI models and affect diagnostic accuracy. Data privacy and security are critical due to the sensitive nature of medical information. There is a need for robust ethical frameworks to address these concerns and ensure fairness and confidentiality in AI systems. |
Grzybowski et al. [33] | Machine Learning (ML): Techniques such as decision trees, neural networks, and logistic regression for various medical applications, including dermatology. | Bias and Transparency: Ethical challenges include potential biases arising from skewed training data and lack of transparency in AI decision-making processes. The “black-box” nature of many AI systems complicates understanding and trust. Ethical considerations also involve ensuring informed consent and protecting patient data from misuse. Improving data quality and making AI systems more explainable are key to addressing these issues. |
Maroufi et al. [34] | Machine Learning (ML): Diverse algorithms, including logistic regression and decision trees, applied to preoperative planning and surgical decision-making for pituitary adenoma surgery. | Standardization and Fairness: The study highlighted the diversity of AI/ML algorithms and raised questions about the standardization and fairness of these technologies. Ensuring that AI models are rigorously tested for reliability and fairness, and addressing any potential biases, is essential for equitable patient outcomes and trust in AI-assisted surgical decision-making. |
Vo et al. [35] | General AI Methods: Including deep learning and ensemble learning techniques applied to various aspects of healthcare. | Privacy and Equity: Ethical concerns include data privacy issues, particularly in relation to third parties like insurance companies, and the risk of perpetuating existing healthcare disparities due to biased AI systems. Transparency in AI systems and the development of clear regulations are necessary to address these concerns and ensure responsible AI implementation. |
Singh et al. [36] | AI in Pharmacological Research: Techniques such as deep learning for drug discovery, target identification, and toxicity prediction. Reinforcement learning was also noted for optimizing drug efficacy. | Privacy and Bias: Ethical challenges include ensuring data privacy and security, addressing algorithmic biases that may skew drug efficacy predictions, and maintaining transparency in AI-driven research. It is crucial to implement robust ethical frameworks and maintain human oversight to mitigate these issues and ensure the responsible use of AI in pharmacology. |
Veritti et al. [37] | Various AI Approaches: Machine learning and deep learning techniques used in ophthalmology for diagnostics and treatment planning. | Bias and Transparency: Key ethical issues include biases in AI algorithms leading to unequal healthcare outcomes, the “black-box” problem complicating accountability, and data security concerns. Ensuring that AI models are explainable, improving data quality, and ensuring equitable access are crucial to addressing these challenges and preventing increased healthcare inequality. |
Wang et al. [38] | Medical AI Techniques: Incorporating deep learning and statistical models for analyzing and improving fairness in medical AI applications. | Fairness and Bias: Ethical issues include ensuring fairness by addressing data quality and algorithmic biases. The review underscores the importance of interdisciplinary discussions to bridge gaps in understanding and implementing practical measures for fairness in medical AI. Legal, ethical, and technological measures are necessary to promote equitable outcomes. |
Kontiainen et al. [39] | Interdisciplinary AI Frameworks: Combining legal, social, and technological perspectives to address systemic challenges in AI ethics and governance. | Systemic Fairness and Justice: The study proposes using access to justice as a framework for understanding and addressing algorithmic biases and ensuring fair AI governance. Integrating multiple perspectives helps develop comprehensive solutions to the ethical and regulatory challenges posed by AI, fostering justice and fairness. |
Kim et al. [40] | AI in Digital Pathology: Convolutional neural networks (CNNs) and other image-based AI tools for diagnostic purposes. | Validation and Interpretability: Ethical challenges include ensuring the accuracy and interpretability of AI systems in pathology. Transparency in AI decision-making processes is crucial for trust and effective integration into diagnostic workflows. Ongoing development and evaluation of AI tools are needed to address these challenges and ensure reliable performance. |
Kazim et al. [41] | AI as a Digital Asset: Focus on the ontological nature of algorithms and their role in representing and capturing value. | Ontological and Ethical Implications: The study emphasized the need to understand how digital technologies and AI represent value and align with societal ethical standards. Addressing these foundational shifts is crucial for aligning AI technologies with broader ethical and societal norms. |
Saw et al. [42] | AI Techniques in Medical Imaging: Deep learning and advanced image processing methods for analyzing medical images. | Algorithm Reliability and Equity: Key challenges include ensuring the creation of reliable and fair AI algorithms, establishing best practices for data governance, and developing regulatory frameworks that support innovation while protecting patient privacy. Addressing transparency and equitable access to AI technologies is essential for ethical development. |
Akgun et al. [43] | AI in Education: Adaptive learning systems and personalized learning engines for K-12 education settings. | Privacy and Bias: Ethical concerns include privacy issues, algorithmic bias, and the need for transparency in educational AI systems. Integrating ethical considerations into AI applications in education and providing resources for educators and students to understand these aspects are essential for responsible AI use. |
Bonnefon [44] | AI Cognitive Analogies: AI systems emulating human cognitive processes, such as fast and slow thinking models. | Misleading Analogies and Design: Ethical considerations involve the potential for misunderstandings or misuse of AI due to misleading analogies to human cognition. Clear and responsible design, along with accurate communication about AI capabilities and limitations, is crucial for ethical AI development. |
Jalal et al. [45] | AI in Emergency Radiology: Automated image analysis and diagnostic algorithms for handling increased imaging volumes. | Integration and Oversight: Challenges include ensuring AI systems are accurate, fair, and transparent while maintaining necessary human oversight in emergency care. Developing frameworks to balance the benefits of AI with ethical standards is crucial for improving care quality and patient safety. |
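Several of the entries above point to the “black-box” problem. The short sketch below illustrates one simple, model-agnostic route to explainability, permutation importance; it is a generic example under assumed feature names and a synthetic dataset, not an implementation from any of the reviewed studies.

```python
# Minimal sketch (illustrative only): permutation importance as one simple way
# to make a "black-box" model's behaviour more interpretable for clinicians.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
feature_names = ["age", "biomarker_a", "biomarker_b", "noise_1", "noise_2"]  # hypothetical

X = rng.normal(size=(600, len(feature_names)))
y = (1.5 * X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda t: -t[1]):
    print(f"{name:12s} {importance:+.3f}")
```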
Study | Application Type | Justification for Application Type |
---|---|---|
Getzmann et al. [31] | Practical | Examines the use of deep learning (DL) and machine learning (ML) in medical imaging, focusing on real-world challenges such as validation and generalizability in clinical settings. |
Daher et al. [32] | Practical | Investigates the application of advanced AI techniques in cancer detection, addressing practical issues like data privacy and the accuracy of diagnostic algorithms. |
Grzybowski et al. [33] | Practical | Applies ML techniques to dermatology, highlighting practical concerns such as biases in training data and the need for transparency in AI decision-making processes. |
Maroufi et al. [34] | Practical | Focuses on using ML algorithms in surgical planning, emphasizing practical needs for standardization and fairness in AI models used in clinical decision-making. |
Vo et al. [35] | Practical | Explores the implementation of AI methods in various healthcare applications, with a focus on practical issues including data privacy, equity, and transparency of AI systems. |
Singh et al. [36] | Practical | Analyzes the use of AI in pharmacological research, addressing practical concerns such as data privacy, algorithmic biases, and maintaining transparency in drug discovery processes. |
Veritti et al. [37] | Practical | Investigates AI applications in ophthalmology, focusing on practical challenges related to bias, data security, and the transparency of AI models. |
Wang et al. [38] | Practical | Examines AI techniques for improving fairness in medical applications, with a practical focus on addressing biases and ensuring equitable outcomes in healthcare settings. |
Kontiainen et al. [39] | Theoretical | Proposes theoretical frameworks for AI ethics, integrating perspectives from legal, social, and technological fields to address systemic challenges in AI governance. |
Kim et al. [40] | Practical | Studies the application of AI in digital pathology, focusing on ensuring the accuracy and interpretability of AI systems in diagnostic processes. |
Kazim et al. [41] | Theoretical | Explores the ontological aspects of AI, examining how AI technologies represent and capture value and their alignment with broader societal ethical norms. |
Saw et al. [42] | Practical | Addresses practical issues in medical imaging with AI, focusing on ensuring algorithm reliability, data governance, and the development of regulatory frameworks. |
Akgun et al. [43] | Practical | Examines the use of AI in educational settings, with a focus on practical concerns such as privacy, algorithmic bias, and the need for transparency in educational tools. |
Bonnefon [44] | Theoretical | Explores the theoretical implications of AI systems mimicking human cognitive processes, discussing the ethical considerations and potential for misunderstanding or misuse. |
Jalal et al. [45] | Practical | Investigates the application of AI in emergency radiology, focusing on practical challenges such as ensuring accuracy, fairness, and maintaining necessary human oversight. |
3.3.2. Identifying Opportunities and Overcoming Challenges in Algorithmic Ethics
Study | Opportunities |
---|---|
Getzmann et al. [31] | Enhanced AI Validation: There is a significant opportunity to advance AI validation techniques by incorporating external datasets. This approach will enhance the generalizability and reliability of AI algorithms across diverse clinical environments. Effective external validation can ensure that AI models are not limited to specific datasets and can perform reliably in varied real-world settings, thus improving trust in their clinical application. |
Daher et al. [32] | Diverse Dataset Utilization: Building and utilizing comprehensive datasets that encompass a broad spectrum of demographic and clinical variations represents a key opportunity. By addressing the scarcity of diverse data, AI models can be trained to detect and manage conditions like pancreatic cancer more accurately and fairly. This helps in creating more equitable diagnostic and treatment tools that are effective across different populations. |
Grzybowski et al. [33] | Increased Transparency: Developing methods and tools to enhance transparency in AI systems can significantly improve stakeholder trust. Creating explainable AI models that clarify decision-making processes can demystify AI operations and build confidence among users, including healthcare professionals and patients. This opportunity focuses on fostering understanding and accountability in AI applications. |
Maroufi et al. [34] | Standardization of AI Practices: Establishing standardized practices and benchmarks for AI algorithms, especially in surgical contexts, can ensure consistency and fairness in AI-assisted decision-making processes. By setting clear standards for evaluating and implementing AI tools, the medical field can achieve more reliable and equitable outcomes in surgical procedures. |
Vo et al. [35] | Regulatory Framework Development: Designing and implementing robust regulatory frameworks to address data privacy, equity, and transparency in AI systems presents an opportunity for ethical deployment. Effective regulations can protect patient privacy, ensure fair treatment, and promote transparent AI practices, ultimately fostering a more responsible integration of AI technologies in healthcare. |
Singh et al. [36] | Ethical Frameworks for Drug Discovery: Creating comprehensive ethical frameworks for AI applications in pharmacology can address bias mitigation, data privacy, and transparency issues. This opportunity involves developing guidelines and standards to ensure that AI-driven drug discovery processes are fair, secure, and transparent, leading to more ethical and effective pharmacological research. |
Veritti et al. [37] | Improved Data Security and Access: Enhancing data security protocols and ensuring equitable access to AI technologies across patient demographics represent critical opportunities. Improving data protection measures can safeguard sensitive patient information, while ensuring broad access to AI tools can help reduce healthcare disparities and promote fair treatment. |
Wang et al. [38] | Interdisciplinary Collaboration: Fostering collaboration between different disciplines to address fairness in medical AI offers a chance for more holistic solutions. By integrating perspectives from computer science, medical science, social science, and other fields, researchers can develop comprehensive strategies to address algorithmic biases and improve fairness in AI applications. |
Kontiainen et al. [39] | Integrated Ethical Frameworks: Utilizing interdisciplinary perspectives to develop comprehensive frameworks for algorithmic fairness and governance can address systemic challenges in AI ethics. This opportunity involves creating integrated ethical guidelines that consider legal, social, and technological aspects, promoting justice and ensuring responsible AI development and use. |
Kim et al. [40] | Continuous AI Development: Focusing on the ongoing development and refinement of AI tools in digital pathology presents an opportunity to enhance their transparency and integration into diagnostic workflows. Continuous improvement and evaluation of AI systems can ensure their accuracy and reliability, making them valuable tools for pathologists and improving patient outcomes. |
Kazim et al. [41] | Exploration of Ontological Implications: Delving into the ontological aspects of AI technologies and their value representations can help align AI systems with societal and ethical standards. This opportunity involves examining how AI captures and expresses value, ensuring that its applications are consistent with broader ethical norms and societal expectations. |
Saw et al. [42] | Best Practices for Data Governance: Developing and implementing best practices for data governance in medical imaging can enhance innovation while safeguarding patient privacy and ensuring transparency. This opportunity involves creating frameworks that balance data protection with the advancement of AI technologies, fostering ethical development and deployment in medical imaging. |
Akgun et al. [43] | Ethical Education Integration: Integrating ethical considerations into educational resources for AI applications in educational settings represents an opportunity to promote awareness and understanding. By developing instructional materials that address privacy, bias, and transparency, educators and students can navigate the ethical dimensions of AI more effectively. |
Bonnefon [44] | Clear Design Communication: Ensuring that AI systems are designed and communicated clearly can prevent misunderstandings and misuse. This opportunity involves creating accurate representations of AI capabilities and limitations, promoting responsible design practices and enhancing user understanding of AI technologies. |
Jalal et al. [45] | Balanced AI Integration: Creating frameworks that balance the benefits of AI with necessary human oversight in emergency radiology can enhance care quality and patient safety. This opportunity focuses on developing guidelines that ensure AI systems complement human expertise while maintaining high standards of care and ethical oversight. |
Study | Areas Needing Further Research/Challenges |
---|---|
Getzmann et al. [31] | External Validation: There is a critical need for research on methods to incorporate external validation for AI algorithms in healthcare. Current studies often rely on internal datasets, which may not reflect the variability encountered in real-world clinical settings. To ensure AI models’ generalizability and reliability, research should focus on creating methodologies for effective external validation across diverse patient populations and clinical environments. |
Daher et al. [32] | Dataset Diversity and Privacy: Addressing the challenges of dataset diversity and data privacy is essential for the ethical deployment of AI. There is a need to develop comprehensive datasets that are representative of different demographic and clinical variations. Additionally, research should focus on enhancing privacy measures to protect sensitive patient data from breaches and on creating ethical guidelines to prevent biases in AI models that could affect diagnostic accuracy. |
Grzybowski et al. [33] | Bias and Transparency: There is a need for research to identify and mitigate biases in AI algorithms used in medical applications. This includes developing methods to improve transparency in AI decision-making processes to ensure that users can understand and trust AI systems. Investigating ways to make AI systems more explainable and addressing the ethical implications of algorithmic biases are crucial for maintaining fairness and trust in healthcare. |
Maroufi et al. [34] | Algorithm Selection and Performance: More research is required to determine the best practices for selecting and evaluating AI algorithms in complex scenarios like surgical decision-making. This includes developing standardized criteria for algorithm performance, addressing the diversity of algorithms used, and ensuring that they are tested rigorously for reliability and fairness. Research should also focus on overcoming challenges related to data heterogeneity and algorithmic bias in surgical contexts. |
Vo et al. [35] | Regulatory Challenges: There is a significant need for research to develop and implement comprehensive regulatory frameworks for AI in healthcare. This includes creating guidelines that address ethical concerns such as data privacy, equity, and transparency. Research should focus on how to balance innovation with ethical considerations and establish clear regulations that ensure AI technologies are used responsibly and effectively in healthcare settings. |
Singh et al. [36] | Bias and Transparency in Pharmacology: Research should address biases and improve transparency in AI applications within pharmacology. This involves developing methods to identify and mitigate biases in drug discovery and efficacy predictions as well as enhancing transparency in AI-driven research processes. Ensuring data privacy and obtaining informed consent for data use are also critical areas needing further exploration to uphold ethical standards in pharmacological research. |
Veritti et al. [37] | Data Security and Accessibility: Research should focus on improving data security measures and ensuring equitable access to AI technologies in ophthalmology and healthcare. This includes developing advanced strategies for protecting sensitive health data and addressing the risk of increased healthcare inequality due to biased AI algorithms. Ensuring that AI systems are accessible to diverse patient populations and improving the transparency and security of AI processes are crucial for ethical implementation. |
Wang et al. [38] | Fairness and Interdisciplinary Approaches: There is a need for more research on interdisciplinary approaches to ensure fairness in medical AI. This involves exploring how various disciplines can collaborate to develop comprehensive solutions for addressing algorithmic biases and promoting fairness. Research should focus on integrating insights from computer science, medical science, and the social sciences to create effective measures for achieving equitable outcomes in AI applications. |
Kontiainen et al. [39] | Interdisciplinary Ethical Frameworks: Developing integrated ethical frameworks that combine legal, social, and technological perspectives is essential for addressing systemic challenges in AI ethics and governance. Research should focus on creating comprehensive guidelines that address algorithmic biases and ensure justice and fairness in AI systems. Adopting interdisciplinary approaches can help develop more holistic and actionable solutions to the ethical and regulatory challenges posed by AI. |
Kim et al. [40] | AI Integration in Pathology: Research should focus on the integration of AI tools into digital pathology workflows, emphasizing the need for accuracy, reliability, and transparency in AI systems. Investigating methods to ensure the effective integration of AI into diagnostic processes and continuously developing and refining AI tools to enhance their utility and performance in pathology is crucial for improving diagnostic accuracy and patient outcomes. |
Kazim et al. [41] | Ontological and Ethical Implications: There is a need to explore the ontological aspects of AI technologies and their ethical implications. Research should investigate how digital technologies represent and process value and ensure that these representations align with broader societal and ethical norms. Understanding the foundational shifts introduced by AI can help ensure that its applications adhere to ethical standards and societal values. |
Saw et al. [42] | Best Practices for Data Governance: Developing best practices for data governance in medical imaging is essential for ensuring ethical AI development. Research should focus on creating frameworks that balance innovation with privacy protection, transparency, and equitable access. Establishing guidelines for secure data management and transparent AI operations is crucial for fostering ethical development and deployment of AI in healthcare. |
Akgun et al. [43] | Ethical Education Integration: There is a need to integrate ethical considerations into educational resources for AI applications. This includes developing instructional materials that address privacy, bias, and transparency issues in AI technologies. Promoting awareness and understanding among educators and students about these ethical challenges is essential for preparing future professionals to navigate the ethical implications of AI. |
Bonnefon [44] | Misleading Analogies and Design: Research should focus on addressing the potential for misunderstandings or misuse of AI due to misleading analogies to human cognition. This involves ensuring that AI systems are designed and communicated in ways that accurately reflect their capabilities and limitations. Clear and responsible design, along with accurate communication about AI systems, is crucial for preventing ethical issues related to cognitive analogies. |
Jalal et al. [45] | AI in Emergency Radiology: Research should explore the integration of AI in emergency radiology, focusing on challenges such as ensuring accuracy, fairness, and transparency. Developing frameworks that balance the benefits of AI with necessary human oversight and ethical standards is crucial for improving care quality and patient safety in emergency situations. Research should also address how to effectively train and validate AI systems in this high-stakes field. |
3.3.3. Strategic Recommendations for Enhancing Algorithmic Ethics
- R-1. Implement Rigorous Validation Practices: The need for comprehensive validation was highlighted by Getzmann et al. [31] and Maroufi et al. [34]. External validation is essential for ensuring that AI algorithms are effective not only in controlled environments but also in real-world scenarios, reducing the risk of biases and inaccuracies.
- R-2. Develop Comprehensive Data Governance Frameworks: Daher et al. [32] and Singh et al. [36] emphasized the importance of diverse and representative datasets to avoid introducing biases into AI systems. Implementing robust data governance frameworks will help safeguard patient privacy and enhance the fairness and effectiveness of AI technologies.
- R-3. Enhance Transparency and Explainability: As noted by Grzybowski et al. [33], Kim et al. [40], and Jalal et al. [45], transparency and explainability are critical for fostering trust and accountability in AI systems. Making AI decision-making processes more understandable will help address concerns about the “black-box” nature of many AI applications.
- R-4. Promote Fairness through Interdisciplinary Collaboration: Wang et al. [38] and Kontiainen et al. [39] stressed the value of interdisciplinary approaches to address fairness in AI. Collaborative efforts across different fields can lead to more equitable solutions and a better understanding of how fairness can be practically implemented.
- R-5. Address Data Security and Privacy Concerns: Data security and privacy are paramount, especially in sensitive areas like healthcare. Vo et al. [35] and Veritti et al. [37] advocated for the development of clear regulations and guidelines to protect patient information and ensure that AI systems do not perpetuate existing disparities. A minimal pseudonymization sketch follows the recommendations table below.
- R-6. Integrate Ethical Considerations in Educational AI: Akgun et al. [43] highlighted the need for ethical considerations in AI applications within education. Addressing privacy, bias, and transparency issues will be crucial for ensuring that AI technologies contribute positively to educational outcomes.
- R-7. Advance Research on Ontological Implications: Kazim et al. [41] called for a deeper exploration of the ontological implications of AI technologies. Understanding how AI represents and captures value can help align these technologies with broader societal and ethical standards, ensuring their responsible use.
Recommendation | Description | References |
---|---|---|
1. Implement Rigorous Validation Practices | Ensure that AI algorithms undergo both internal and external validation to enhance their generalizability and real-world applicability. This includes testing AI systems in diverse clinical settings and populations to verify their reliability and fairness. | Getzmann et al. [31]; Maroufi et al. [34] |
2. Develop Comprehensive Data Governance Frameworks | Create and enforce robust ethical frameworks for data collection, usage, and privacy to prevent biases and protect sensitive information. This includes ensuring that datasets are diverse and representative of the populations served. | Daher et al. [32]; Singh et al. [36]; Saw et al. [42] |
3. Enhance Transparency and Explainability | Focus on developing AI systems that are transparent and provide explanations for their decisions. This will help build trust among users and stakeholders and facilitate accountability. | Grzybowski et al. [33]; Kim et al. [40]; Jalal et al. [45] |
4. Promote Fairness through Interdisciplinary Collaboration | Foster interdisciplinary collaborations to address fairness in AI. This includes integrating insights from computer science, medical science, and social science to develop equitable AI systems. | Wang et al. [38]; Kontiainen et al. [39] |
5. Address Data Security and Privacy Concerns | Implement robust measures to safeguard data security and address privacy concerns, particularly in sensitive areas like healthcare and pharmacology. This includes developing clear regulations and guidelines for data use. | Vo et al. [35]; Veritti et al. [37] |
6. Integrate Ethical Considerations into Educational AI | Ensure that AI applications in education are developed with a focus on ethical considerations such as privacy, bias, and transparency. Provide resources to help educators and students navigate these issues. | Akgun et al. [43] |
7. Advance Research on Ontological Implications | Explore the ontological nature of AI and its role in representing and capturing value. This research should align AI technologies with broader societal and ethical standards. | Kazim et al. [41] |
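To make Recommendations 2 and 5 slightly more concrete, the sketch below shows keyed pseudonymization of a patient identifier before records leave a clinical system for AI development. It is a minimal, illustrative example; the secret key, field names, and truncation length are assumptions, and real deployments would follow institutional de-identification and key-management policies.

```python
# Minimal sketch (illustrative only): keyed pseudonymization of patient
# identifiers, one basic building block of data-governance practice.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-managed-key"  # assumption: managed secret

def pseudonymize(patient_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a patient identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "MRN-0012345", "age": 67, "diagnosis": "C25.9"}
shared_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(shared_record)   # direct identifier replaced; clinical fields retained
```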
4. Discussion
4.1. Synoptic Diagram of Discussion
4.2. Discussion on the Added Value of the Review of Reviews and Alignment with the Purpose
4.3. Discussion: Key Areas for Improvement and Suggested Actions
4.4. Discussion: Emerging Trends and Contributions in Algorethics from Recent Literature
4.4.1. Insights from Cross-Disciplinary Studies Affecting Health Domain Ethics
Analysis
Impact on the Health Domain
Reference | Focus on Algorethics and AI |
---|---|
Mantini, A. [47] | “Technological Sustainability and Artificial Intelligence Algor-ethics”: Emphasizes integrating sustainability principles into AI ethics to ensure that AI technologies contribute positively to societal and environmental goals. |
Benanti, P. [48] | “The Urgency of an Algorethics”: Highlights the need for a robust ethical framework for AI due to the rapid advancements in technology and the necessity to address ethical issues proactively. |
Benanti, P. [49] | “Algor-éthique: Intelligence Artificielle et Réflexion Éthique”: Discusses the integration of ethical standards in AI development, addressing philosophical and practical dimensions of AI ethics. |
Montomoli et al. [50] | “Algor-ethics: Charting the Ethical Path for AI in Critical Care”: Explores the ethical challenges specific to critical care settings, emphasizing the need for tailored ethical guidelines. |
Brady, A.P., Neri, E. [51] | “Artificial Intelligence in Radiology—Ethical Considerations”: Reviews ethical issues in AI applications in radiology, including data privacy, algorithmic bias, and the impact of AI on clinical decision-making. |
Anyanwu, U.S. [52] | “Towards a Human-Centered Innovation in Digital Technologies and Artificial Intelligence: The Contributions of the Pontificate of Pope Francis”: Examines how human-centered approaches influence AI ethics, ensuring alignment with human values and dignity. |
Ayinla et al. [53] | “Ethical AI in Practice: Balancing Technological Advancements with Human Values”: Discusses the balance between technological progress and ethical considerations in AI practice. |
Di Tria, F. [54] | “Measurement of Ethical Issues in Software Products”: Proposes methodologies for assessing the ethical quality of software, including AI systems, to ensure responsible use. |
Amato, S. [55] | “Artificial Intelligence and Constitutional Values”: Explores how AI technologies must align with constitutional values and fundamental legal and ethical principles. |
Casà, C. et al. [56] | “COVID-19 and Digital Competencies Among Young Physicians”: Evaluates the digital skills of young physicians in Italy, focusing on their readiness to integrate AI technologies into healthcare. |
Arokiaswamy, G. [57] | “Artificial Intelligence within the Context of Economy, Employment and Social Justice”: Examines AI’s impact on economy, employment, and social justice, discussing ethical considerations to address these impacts. |
4.4.2. IEEE Perspectives on AI Ethics: Key Findings and Implications for Healthcare
Analysis
Impact on the Health Domain
Topic | Details | Reference |
---|---|---|
The Rise and Risks of Algorithms | Algorithms influence numerous sectors but also pose risks like financial loss and reputational damage. The growing concern over algorithmic failures is driving the development of algorithm auditing to ensure functionality, safety, and ethical considerations. | [58] |
Ethical Challenges in AI Design | AI systems face ethical issues related to fairness, privacy, and accountability. The GDPR’s “privacy by design” emphasizes data protection and privacy in AI, ensuring that technologies are used responsibly while safeguarding individual rights. | [59] |
Understanding and Addressing Ethical Issues | Ethical dilemmas in AI include privacy invasion, discrimination, and job displacement. Ongoing research aims to identify and mitigate these risks, with a focus on evaluating adherence to ethical standards. | [60] |
Role of Standards and Frameworks | Standards like the IEEE P7003 provide frameworks to address biases and promote fairness in algorithm development. The IEEE CertifAIEd Ontological Specification for Algorithmic Bias offers an ontological approach for certifying algorithms, focusing on transparency and fairness. | [62,65] |
Moral Decision-Making in AI Projects | Integrating moral decision-making into AI project management is crucial. The responsibility for ethical decisions often lies with developers and project managers, highlighting the need for clear guidelines and best practices. | [63] |
Educating Future Technologists | Educational methods such as role-play case studies are employed to improve students’ understanding of algorithmic ethics. These approaches help students engage with ethical dilemmas from various perspectives, preparing them to tackle ethical challenges in technology. | [64] |
The Path Forward | The evolving nature of AI necessitates robust ethical frameworks, standards, and educational programs. Ensuring AI systems respect human rights and societal values is essential for building trust and realizing the full potential of these technologies. | [58,59,60,61,62,63,64,65] |
4.4.3. National and International Frameworks on Algorethics
Analysis
- World Health Organization (WHO) [66]: The WHO has released guidelines emphasizing the importance of ethical considerations in the use of AI for global health. These guidelines are part of a broader effort to ensure that AI technologies are developed and applied in ways that respect human rights and promote equity in healthcare. The WHO’s role as an international health authority positions it uniquely to influence global standards and practices in AI ethics.
- European Union (EU) [67]: The EU’s comprehensive AI Act represents a significant regulatory effort to address ethical concerns related to AI. The act aims to establish a legal framework that ensures AI systems are used responsibly and transparently within the EU. By setting standards for AI risk management and accountability, the EU seeks to balance innovation with ethical responsibility across its member states.
- FDA (Food and Drug Administration) [68,69]: The FDA has issued guidelines focused on the ethical use of AI in medical research. These guidelines stress the importance of transparency, accountability, and the protection of public health. The FDA’s regulatory oversight ensures that AI technologies in the medical field adhere to high ethical standards, promoting safe and effective use.
- NHS AI Ethics Initiative [70]: The UK’s NHS AI Ethics Initiative supports the ethical integration of AI in healthcare settings. This initiative provides ethical assurance and manages risks associated with AI technologies, ensuring that healthcare applications of AI maintain a high standard of ethical practice.
- Public Health Agency of Canada [71]: This document outlines an ethical framework for AI applications in public health. It emphasizes the importance of responsible AI practices and the safeguarding of personal data, reflecting Canada’s commitment to ethical standards in technology deployment.
- Georgetown University’s Center for Security and Emerging Technology (CSET) [72]: The CSET document reports ethical norms for AI use in China. The norms cover areas such as the use and protection of personal information, human control over and responsibility for AI, and the avoidance of AI-related monopolies [72]. Table 10 summarizes the national and international documents on algorethics.
Reference | Focus on Algorethics | Organization and Role | Expanded Focus |
---|---|---|---|
[66] | Global AI Ethics Guidelines | World Health Organization (WHO): A leading international public health authority dedicated to addressing global health challenges, including ethical standards for AI technologies, to ensure responsible and equitable development and use. | Focuses on global health equity, human rights in AI applications, and responsible AI use across health sectors. |
[67] | Regulatory Framework for AI | European Union (EU): Political and economic union working on the AI Act to set standards for AI use, emphasizing ethical practices, risk management, and accountability across member states. | Establishes comprehensive legal and regulatory standards for AI, balancing innovation with ethical considerations and risk management. |
[68] | Responsible AI Use in Medical Research | FDA (Food and Drug Administration): Key regulatory body in the United States issuing guidelines for ethical and responsible AI use in medical research, focusing on transparency and public health and safety. | Emphasizes transparency, accountability, and safety in AI technologies used in medical research. |
[69] | Ethical Use of AI in Medical Research | FDA (Food and Drug Administration): Similar focus as the previous reference, with additional stress on ethical standards in medical research applications. | Highlights the importance of ethical guidelines for AI in medical research, ensuring responsible use and public health protection. |
[70] | AI Ethics in Healthcare | NHS (National Health Service) AI Ethics Initiative: Initiative within the UK NHS supporting ethical AI integration in healthcare, providing assurance and managing associated risks. | Focuses on ethical integration and risk management of AI in healthcare settings, promoting high standards of ethical practice. |
[71] | Ethical Framework for AI Applications | Public Health Agency of Canada: Outlines ethical guidelines for AI in public health, emphasizing responsible practices and data protection. | Provides a framework for ethical AI use in public health, focusing on responsible practices and personal data protection. |
[72] | Ethical Norms for AI in China | Georgetown University’s Center for Security and Emerging Technology (CSET): Research center providing ethical norms for AI in China. The norms cover areas such as the use and protection of personal information, human control over and responsibility for AI, and the avoidance of AI-related monopolies. | Focuses on personal information protection and the prevention of monopolistic practices, with limited guidance on enforcement mechanisms. |
Impact on the Health Domain
4.5. Limitations
5. Final Reflections: Broadening Ethical Considerations in New AI Applications beyond Algorithm Development
6. Conclusions and Future Research Directions
6.1. Conclusions
6.2. Future Research Directions
Supplementary Materials
Author Contributions
Funding
Conflicts of Interest
References
- Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [PubMed]
- Li, M.; Jiang, Y.; Zhang, Y.; Zhu, H. Medical image analysis using deep learning algorithms. Front. Public Health 2023, 11, 1273253. [Google Scholar] [CrossRef] [PubMed]
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef] [PubMed]
- Visvikis, D.; Lambin, P.; Beuschau Mauridsen, K.; Hustinx, R.; Lassmann, M.; Rischpler, C.; Shi, K.; Pruim, J. Application of artificial intelligence in nuclear medicine and molecular imaging: A review of current status and future perspectives for clinical translation. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 4452–4463. [Google Scholar] [CrossRef] [PubMed]
- Zhang, H.; Qie, Y. Applying Deep Learning to Medical Imaging: A Review. Appl. Sci. 2023, 13, 10521. [Google Scholar] [CrossRef]
- Morozov, A.; Taratkin, M.; Bazarkin, A.; Rivas, J.G.; Puliatti, S.; Checcucci, E.; Belenchon, I.R.; Kowalewski, K.F.; Shpikina, A.; Singla, N.; et al. Working Group in Uro-technology of the European Association of Urology. A systematic review and meta-analysis of artificial intelligence diagnostic accuracy in prostate cancer histology identification and grading. Prostate Cancer Prostatic Dis. 2023, 26, 681–692. [Google Scholar] [CrossRef]
- Allahqoli, L.; Laganà, A.S.; Mazidimoradi, A.; Salehiniya, H.; Günther, V.; Chiantera, V.; Karimi Goghari, S.; Ghiasvand, M.M.; Rahmani, A.; Momenimovahed, Z.; et al. Diagnosis of Cervical Cancer and Pre-Cancerous Lesions by Artificial Intelligence: A Systematic Review. Diagnostics 2022, 12, 2771. [Google Scholar] [CrossRef]
- Vasdev, N.; Gupta, T.; Pawar, B.; Bain, A.; Tekade, R.K. Navigating the future of health care with AI-driven digital therapeutics. Drug Discov. Today 2024, 29, 104110. [Google Scholar] [CrossRef]
- Rezayi, S.; Niakan Kalhori, S.R.; Saeedi, S. Effectiveness of Artificial Intelligence for Personalized Medicine in Neoplasms: A Systematic Review. BioMed Res. Int. 2022, 2022, 7842566. [Google Scholar] [CrossRef]
- Yang, C.C. Explainable Artificial Intelligence for Predictive Modeling in Healthcare. J. Healthc. Inform. Res. 2022, 6, 228–239. [Google Scholar] [CrossRef]
- Batko, K.; Ślęzak, A. The use of Big Data Analytics in healthcare. J. Big Data 2022, 9, 3. [Google Scholar] [CrossRef] [PubMed]
- Kurniawan, M.H.; Handiyani, H.; Nuraini, T.; Hariyati, R.T.S.; Sutrisno, S. A systematic review of artificial intelligence-powered (AI-powered) chatbot intervention for managing chronic illness. Ann. Med. 2024, 56, 2302980. [Google Scholar] [CrossRef] [PubMed]
- Hossain, E.; Rana, R.; Higgins, N.; Soar, J.; Barua, P.D.; Pisani, A.R.; Turner, K. Natural Language Processing in Electronic Health Records in relation to healthcare decision-making: A systematic review. Comput. Biol. Med. 2023, 155, 106649. [Google Scholar] [CrossRef] [PubMed]
- Vora, L.K.; Gholap, A.D.; Jetha, K.; Thakur, R.R.S.; Solanki, H.K.; Chavda, V.P. Artificial Intelligence in Pharmaceutical Technology and Drug Delivery Design. Pharmaceutics 2023, 15, 1916. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Chen, W.; Liu, X.; Zhang, S.; Chen, S. Artificial intelligence for drug discovery: Resources, methods, and applications. Mol. Ther. Nucleic Acids 2023, 31, 691–702. [Google Scholar] [CrossRef] [PubMed]
- Kuziemsky, C.E.; Chrimes, D.; Minshall, S.; Mannerow, M.; Lau, F. AI Quality Standards in Health Care: Rapid Umbrella Review. J. Med. Internet Res. 2024, 26, e54705. [Google Scholar] [CrossRef]
- Klimova, B.; Pikhart, M.; Kacetl, J. Ethical issues of the use of AI-driven mobile apps for education. Front. Public Health 2023, 10, 1118116. [Google Scholar] [CrossRef]
- Goisauf, M.; Cano Abadía, M. Ethics of AI in Radiology: A Review of Ethical and Societal Implications. Front. Big Data 2022, 5, 850383. [Google Scholar] [CrossRef]
- How Ethical AI Transforms Society. Available online: https://algorethics.ai/ (accessed on 20 July 2024).
- Oxholm, C.; Christensen, A.S.; Nielsen, A.S. The Ethics of Algorithms in Healthcare. Camb. Q. Healthc. Ethics 2022, 31, 119–130. [Google Scholar] [CrossRef]
- Benanti, P. Oracoli. Tra Algoretica e Algocrazia; Luca Sossella Editore: Roma, Italy, 2018. [Google Scholar]
- Available online: https://accademiadellacrusca.it/it/parole-nuove/algoretica/18479#:~:text=Parola%20macedonia%20formata%20da%20algor,un%20individuo%20o%20di%20un (accessed on 20 July 2024).
- Available online: https://www.romecall.org/algorethics-at-the-un/ (accessed on 20 July 2024).
- Available online: https://think.nd.edu/algorethics-potentiality-and-challenges-in-the-age-of-ai/ (accessed on 20 July 2024).
- Available online: https://paulwagle.com/what-is-algorethics/ (accessed on 20 July 2024).
- Available online: https://www.nupi.no/en/events/2023/algorethics-responsible-governance-of-artificial-intelligence (accessed on 20 July 2024).
- Available online: https://www.paolobenanti.com/post/algorethics-oxford (accessed on 20 July 2024).
- Available online: https://www.romecall.org/ (accessed on 20 July 2024).
- Available online: https://medium.com/@harriet.gaywood/algorethics-who-should-govern-ai-ab1962681078 (accessed on 20 July 2024).
- Available online: https://legacyfileshare.elsevier.com/promis_misc/ANDJ%20Narrative%20Review%20Checklist.pdf (accessed on 20 July 2024).
- Getzmann, J.M.; Zantonelli, G.; Messina, C.; Albano, D.; Serpi, F.; Gitto, S.; Sconfienza, L.M. The use of artificial intelligence in musculoskeletal ultrasound: A systematic review of the literature. La Radiol. Medica 2024, 129, 1405–1411. [Google Scholar] [CrossRef] [PubMed]
- Daher, H.; Punchayil, S.A.; Ismail, A.A.E.; Fernandes, R.R.; Jacob, J.; Algazzar, M.H.; Mansour, M. Advancements in pancreatic cancer detection: Integrating biomarkers, imaging technologies, and machine learning for early diagnosis. Cureus 2024, 16, e56583. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Grzybowski, A.; Jin, K.; Wu, H. Challenges of artificial intelligence in medicine and dermatology. Clin. Dermatol. 2024, 42, 210–215. [Google Scholar] [CrossRef] [PubMed]
- Maroufi, S.F.; Doğruel, Y.; Pour-Rashidi, A.; Kohli, G.S.; Parker, C.T.; Uchida, T.; Asfour, M.Z.; Martin, C.; Nizzola, M.; De Bonis, A.; et al. Current status of artificial intelligence technologies in pituitary adenoma surgery: A scoping review. Pituitary 2024, 27, 91–128. [Google Scholar] [CrossRef] [PubMed]
- Vo, V.; Chen, G.; Aquino, Y.S.J.; Carter, S.M.; Do, Q.N.; Woode, M.E. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc. Sci. Med. 2023, 338, 116357. [Google Scholar] [CrossRef] [PubMed]
- Singh, S.; Kumar, R.; Payra, S.; Singh, S.K. Artificial intelligence and machine learning in pharmacological research: Bridging the gap between data and drug discovery. Cureus 2023, 15, e44359. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Veritti, D.; Rubinato, L.; Sarao, V.; De Nardin, A.; Foresti, G.L.; Lanzetta, P. Behind the mask: A critical perspective on the ethical, moral, and legal implications of AI in ophthalmology. Graefe Arch. Clin. Exp. Ophthalmol. 2024, 262, 975–982. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Wang, Y.; Song, Y.; Ma, Z.; Han, X. Multidisciplinary considerations of fairness in medical AI: A scoping review. Int. J. Med. Inform. 2023, 178, 105175. [Google Scholar] [CrossRef] [PubMed]
- Kontiainen, L.; Koulu, R.; Sankari, S. Research agenda for algorithmic fairness studies: Access to justice lessons for interdisciplinary research. Front. Artif. Intell. 2022, 5, 882134. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Kim, I.; Kang, K.; Song, Y.; Kim, T.J. Application of artificial intelligence in pathology: Trends and challenges. Diagnostics 2022, 12, 2794. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Kazim, E.; Fenoglio, E.; Hilliard, A.; Koshiyama, A.; Mulligan, C.; Trengove, M.; Gilbert, A.; Gwagwa, A.; Almeida, D.; Godsiff, P.; et al. On the sui generis value capture of new digital technologies: The case of AI. Patterns 2022, 3, 100526. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Saw, S.N.; Ng, K.H. Current challenges of implementing artificial intelligence in medical imaging. Phys. Med. 2022, 100, 12–17. [Google Scholar] [CrossRef] [PubMed]
- Akgun, S.; Greenhow, C. Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI Ethics 2022, 2, 431–440. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Bonnefon, J.F.; Rahwan, I. Machine Thinking, Fast and Slow. Trends Cogn. Sci. 2020, 24, 1019–1027. [Google Scholar] [CrossRef] [PubMed]
- Jalal, S.; Parker, W.; Ferguson, D.; Nicolaou, S. Exploring the role of artificial intelligence in an emergency and trauma radiology department. Can. Assoc. Radiol. J. 2021, 72, 167–174. [Google Scholar] [CrossRef] [PubMed]
- Pubmed Search with (Algorethics OR Algor Ethics). Available online: https://pubmed.ncbi.nlm.nih.gov/38573370/ (accessed on 20 July 2024).
- Mantini, A. Technological Sustainability and Artificial Intelligence Algor-ethics. Sustainability 2022, 14, 3215. [Google Scholar] [CrossRef]
- Benanti, P. The urgency of an algorethics. Discov. Artif. Intell. 2023, 3, 11. [Google Scholar] [CrossRef]
- Benanti, P. Algor-éthique: Intelligence artificielle et réflexion éthique. Rev. D’éthique Théologie Morale 2020, 307, 93–110. Available online: https://www.cairn.info/revue-d-ethique-et-de-theologie-morale-2020-3-page-93.htm (accessed on 20 July 2024). [CrossRef]
- Montomoli, J.; Bitondo, M.M.; Cascella, M.; Rezoagli, E.; Romeo, L.; Bellini, V.; Semeraro, F.; Gamberini, E.; Frontoni, E.; Agnoletti, V.; et al. Algor-ethics: Charting the ethical path for AI in critical care. J. Clin. Monit. Comput. 2024, 38, 931–939. [Google Scholar] [CrossRef]
- Brady, A.P.; Neri, E. Artificial Intelligence in Radiology—Ethical Considerations. Diagnostics 2020, 10, 231. [Google Scholar] [CrossRef]
- Anyanwu, U.S. Towards a Human-Centered Innovation in Digital Technologies and Artificial Intelligence: The Contributions of the Pontificate of Pope Francis. Theol. Sci. 2024, 22, 595–613. [Google Scholar] [CrossRef]
- Ayinla, B.S.; Amoo, O.O.; Atadoga, A.; Abrahams, T.O.; Osasona, F.; Farayola, O.A. Ethical AI in practice: Balancing technological advancements with human values. Int. J. Sci. Res. Arch. 2024, 11, 1311–1326. [Google Scholar] [CrossRef]
- Di Tria, F. Measurement of Ethical Issues in Software Products. Comput. Sci. 2020, preprint. [Google Scholar] [CrossRef]
- Amato, S. Artificial Intelligence and Constitutional Values. In Encyclopedia of Contemporary Constitutionalism; Cremades, J., Hermida, C., Eds.; Springer: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
- Casà, C.; Marotta, C.; Di Pumpo, M.; Cozzolino, A.; D’Aviero, A.; Frisicale, E.M.; Silenzi, A.; Gabbrielli, F.; Bertinato, L.; Brusaferro, S. COVID-19 and digital competencies among young physicians: Are we (really) ready for the new era? Ann. Ist. Super. Sanità 2021, 57, 1–6. [Google Scholar] [CrossRef] [PubMed]
- Arokiaswamy, G. Artificial Intelligence within the Context of Economy, Employment and Social Justice. Asian Horiz. 2020, 14, 628–643. Available online: https://dvkjournals.in/index.php/ah/article/view/3208 (accessed on 20 July 2024).
- Koshiyama, A.; Kazim, E.; Treleaven, P. Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms. Computer 2022, 55, 40–50. [Google Scholar] [CrossRef]
- Milossi, M.; Alexandropoulou-Egyptiadou, E.; Psannis, K.E. AI Ethics: Algorithmic Determinism or Self-Determination? The GPDR Approach. IEEE Access 2021, 9, 58455–58466. [Google Scholar] [CrossRef]
- Huang, C.; Zhang, Z.; Mao, B.; Yao, X. An Overview of Artificial Intelligence Ethics. IEEE Trans. Artif. Intell. 2023, 4, 799–819. [Google Scholar] [CrossRef]
- Jameel, T.; Ali, R.; Toheed, I. Ethics of Artificial Intelligence: Research Challenges and Potential Solutions. In Proceedings of the 3rd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Lahore, Pakistan, 4–5 March 2020. [Google Scholar]
- Koene, A.; Dowthwaite, L.; Seth, S. IEEE P7003TM Standard for Algorithmic Bias Considerations. In Proceedings of the International Workshop on Software Fairness, Kraków, Poland, 12 October 2018. [Google Scholar]
- Miller, G.J. Artificial Intelligence Project Success Factors: Moral Decision-Making with Algorithms. In Proceedings of the 16th Conference on Computer Science and Intelligence Systems (FedCSIS), Gdańsk, Poland, 8–11 September 2021. [Google Scholar]
- Hingle, A.; Rangwala, H.; Johri, A.; Monea, A. Using Role-Plays to Improve Ethical Understanding of Algorithms Among Computing Students. In Proceedings of the IEEE Frontiers in Education Conference (FIE), Uppsala, Sweden, 13–16 October 2021. [Google Scholar]
- IEEE. CertifAIEd™—Ontological Specification for Ethical Algorithmic Bias. Available online: https://engagestandards.ieee.org/rs/211-FYL-955/images/IEEE%20CertifAIEd%20Ontological%20Spec-Algorithmic%20Bias-2022%20%5BI1.3%5D.pdf (accessed on 20 July 2024).
- Available online: https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models (accessed on 20 July 2024).
- Available online: https://www.modulos.ai/eu-ai-act/?utm_term=ai%20act%20european%20union&utm_campaign=EU+AI+Act+(December+2023)&utm_source=adwords&utm_medium=ppc&hsa_acc=9558976660&hsa_cam=20858946124&hsa_grp=159677877987&hsa_ad=705319461314&hsa_src=g&hsa_tgt=kwd-2178244031979&hsa_kw=ai%20act%20european%20union&hsa_mt=p&hsa_net=adwords&hsa_ver=3&gad_source=1&gclid=CjwKCAjw5Ky1BhAgEiwA5jGujik2Y5RZXOVwXSvUjE-1RARfMpPgen5q2S7-8FnFFLLIiF052SYAwxoC2oEQAvD_BwE (accessed on 20 July 2024).
- Available online: https://www.dermatologytimes.com/view/fda-organizations-issue-joint-paper-on-responsible-and-ethical-use-of-artificial-intelligence-in-medical-research (accessed on 20 July 2024).
- Available online: https://www.pharmacytimes.com/view/fda-issues-paper-on-the-responsible-use-of-artificial-intelligence-in-medical-research (accessed on 20 July 2024).
- Available online: https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/ethics/#:~:text=The%20AI%20Ethics%20Initiative%20supports,risk%20and%20providing%20ethical%20assurance (accessed on 20 July 2024).
- Available online: https://www.canada.ca/en/public-health/services/reports-publications/canada-communicable-disease-report-ccdr/monthly-issue/2020-46/issue-6-june-4-2020/ethical-framework-artificial-intelligence-applications.html (accessed on 20 July 2024).
- Available online: https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released (accessed on 20 July 2024).
- Mirzakhani, F.; Sadoughi, F.; Hatami, M.; Amirabadizadeh, A. Which model is superior in predicting ICU survival: Artificial intelligence versus conventional approaches. BMC Med. Inform. Decis. Mak. 2022, 22, 167. [Google Scholar] [CrossRef]
- Wang, C.; Xu, Y.; Lin, Y.; Zhou, Y.; Mao, F.; Zhang, X.; Shen, S.; Zhang, Y.; Sun, Q. Comparison of CTS5 risk model and 21-gene recurrence score assay in large-scale breast cancer population and combination of CTS5 and recurrence score to develop a novel nomogram for prognosis prediction. Breast 2022, 63, 61–70. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Lenharo, M. The testing of AI in medicine is a mess. Here’s how it should be done. Nature 2024, 632, 722–724. [Google Scholar] [CrossRef] [PubMed]
- Sridhar, G.R.; Lakshmi, G. Ethical Issues of Artificial Intelligence in Diabetes Mellitus. Med. Res. Arch. 2023, 11. [Google Scholar] [CrossRef]
- Fritzsche, M.-C.; Akyüz, K.; Abadía, M.C.; McLennan, S.; Marttinen, P.; Mayrhofer, M.T.; Buyx, A.M. Ethical layering in AI-driven polygenic risk scores—New complexities, new challenges. Front. Genet. 2023, 14, 1098439. [Google Scholar] [CrossRef] [PubMed]
- Goldberg, C.B.; Adams, L.; Blumenthal, D.; Brennan, P.F.; Brown, N.; Butte, A.J.; Cheatham, M.; DeBronkart, D.; Dixon, J.; Drazen, J.; et al. To Do No Harm—And the Most Good—With AI in Health Care. NEJM AI 2024, 1, AIp2400036. [Google Scholar] [CrossRef]
- Ratwani, R.M.; Sutton, K.; Galarraga, J.E. Addressing AI Algorithmic Bias in Health Care. JAMA 2024. [Google Scholar] [CrossRef]
- Li, J.; Dada, A.; Puladi, B.; Kleesiek, J.; Egger, J. ChatGPT in healthcare: A taxonomy and systematic review. Comput. Methods Programs Biomed. 2024, 245, 108013. [Google Scholar] [CrossRef] [PubMed]
- Tian, S.; Jin, Q.; Yeganova, L.; Lai, P.-T.; Zhu, Q.; Chen, X.; Yang, Y.; Chen, Q.; Kim, W.; Comeau, D.C.; et al. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Briefings Bioinform. 2023, 25, bbad493. [Google Scholar] [CrossRef] [PubMed]
- Giansanti, D. The Chatbots Are Invading Us: A Map Point on the Evolution, Applications, Opportunities, and Emerging Problems in the Health Domain. Life 2023, 13, 1130. [Google Scholar] [CrossRef]
- Available online: https://apps.apple.com/ch/app/replika-virtual-ai-friend/id1158555867?l=it (accessed on 15 April 2023).
- Available online: https://www.cnet.com/culture/hereafter-ai-lets-you-talk-with-your-dead-loved-ones-through-a-chatbot/ (accessed on 15 April 2023).
- Available online: https://www.prega.org/ (accessed on 15 April 2023).
- Lastrucci, A.; Wandael, Y.; Barra, A.; Ricci, R.; Maccioni, G.; Pirrera, A.; Giansanti, D. Exploring Augmented Reality Integration in Diagnostic Imaging: Myth or Reality? Diagnostics 2024, 14, 1333. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Lastrucci, A.; Wandael, Y.; Ricci, R.; Maccioni, G.; Giansanti, D. The Integration of Deep Learning in Radiotherapy: Exploring Challenges, Opportunities, and Future Directions through an Umbrella Review. Diagnostics 2024, 14, 939. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Giansanti, D. An Umbrella Review of the Fusion of fMRI and AI in Autism. Diagnostics 2023, 13, 3552. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
Keyword Focus | Keywords | Search Query |
---|---|---|
Algorithmic Ethics and Frameworks | Algorithmic Ethics, Ethical AI, Responsible AI, AI Ethics Frameworks, Ethical Design Principles, Algorithmic Governance, Algorithmic Justice, Algorethics, Algor-Ethics | (“Algorithmic Ethics” OR “Ethical AI” OR “Responsible AI” OR “AI Ethics Frameworks” OR “Ethical Design Principles” OR “Algorithmic Governance” OR “Algorithmic Justice” OR “Algorethics” OR “Algor-Ethics”) |
Fairness and Bias in AI | Fairness in Algorithms, Bias in AI, Algorithmic Bias Mitigation, Ethical Machine Learning | (“Fairness in Algorithms” OR “Bias in AI” OR “Algorithmic Bias Mitigation” OR “Ethical Machine Learning”) |
Transparency and Accountability | AI Transparency, Algorithmic Accountability, Algorithmic Transparency | (“AI Transparency” OR “Algorithmic Accountability” OR “Algorithmic Transparency”) |
Governance and Regulation | AI Governance, AI Regulation, Ethics of Automation, Data Privacy, Data Ethics | (“AI Governance” OR “AI Regulation” OR “Ethics of Automation” OR “Data Privacy” OR “Data Ethics”) |
Ethical Decision-Making and Safety | Ethical Decision-Making in AI, AI Safety, AI Impact Assessment | (“Ethical Decision-Making in AI” OR “AI Safety” OR “AI Impact Assessment”) |
Learning Methods | Supervised Learning, Unsupervised Learning, Machine Learning | (“Supervised Learning” OR “Unsupervised Learning” OR “Machine Learning”) |
Moral and Automated Systems | Moral Algorithms, Automated Decision Systems | (“Moral Algorithms” OR “Automated Decision Systems”) |
Comprehensive Search Query | All keywords | (“Algorithmic Ethics” OR “Ethical AI” OR “Responsible AI” OR “AI Ethics Frameworks” OR “Ethical Design Principles” OR “Algorithmic Governance” OR “Algorithmic Justice” OR “Algorethics” OR “Algor-Ethics”) AND (“Fairness in Algorithms” OR “Bias in AI” OR “Algorithmic Bias Mitigation” OR “Ethical Machine Learning”) AND (“AI Transparency” OR “Algorithmic Accountability” OR “Algorithmic Transparency”) AND (“AI Governance” OR “AI Regulation” OR “Ethics of Automation” OR “Data Privacy” OR “Data Ethics”) AND (“Ethical Decision-Making in AI” OR “AI Safety” OR “AI Impact Assessment”) AND (“Supervised Learning” OR “Unsupervised Learning” OR “Machine Learning”) AND (“Moral Algorithms” OR “Automated Decision Systems”) |
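The comprehensive query in the last row is obtained by OR-joining the terms within each keyword group and then AND-joining the seven resulting blocks. As a minimal illustrative sketch (not part of the review's actual search tooling; the group labels and variable names below are ours), the following Python snippet reproduces that construction from the keyword groups listed above:

```python
# Illustrative only: rebuild the comprehensive search query by OR-joining the
# terms inside each keyword group and AND-joining the resulting blocks,
# mirroring the last row of the table above.
keyword_groups = {
    "Algorithmic Ethics and Frameworks": [
        "Algorithmic Ethics", "Ethical AI", "Responsible AI",
        "AI Ethics Frameworks", "Ethical Design Principles",
        "Algorithmic Governance", "Algorithmic Justice",
        "Algorethics", "Algor-Ethics",
    ],
    "Fairness and Bias in AI": [
        "Fairness in Algorithms", "Bias in AI",
        "Algorithmic Bias Mitigation", "Ethical Machine Learning",
    ],
    "Transparency and Accountability": [
        "AI Transparency", "Algorithmic Accountability", "Algorithmic Transparency",
    ],
    "Governance and Regulation": [
        "AI Governance", "AI Regulation", "Ethics of Automation",
        "Data Privacy", "Data Ethics",
    ],
    "Ethical Decision-Making and Safety": [
        "Ethical Decision-Making in AI", "AI Safety", "AI Impact Assessment",
    ],
    "Learning Methods": [
        "Supervised Learning", "Unsupervised Learning", "Machine Learning",
    ],
    "Moral and Automated Systems": [
        "Moral Algorithms", "Automated Decision Systems",
    ],
}

def or_block(terms):
    """OR-join quoted terms into one parenthesized block, e.g. ("A" OR "B")."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND-join the per-group OR-blocks to form the composite query string.
comprehensive_query = " AND ".join(or_block(terms) for terms in keyword_groups.values())
print(comprehensive_query)
```

Pasting the printed string into a bibliographic database search field (e.g., PubMed or Scopus) reproduces the comprehensive query reported in the table.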
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).