Article

Medical Applications of Artificial Intelligence (Legal Aspects and Future Prospects)

by Vasiliy Andreevich Laptev, Inna Vladimirovna Ershova and Daria Rinatovna Feyzrakhmanova *
Department of Entrepreneurial and Corporate Law, Kutafin Moscow State Law University, 109180 Moscow, Russia
* Author to whom correspondence should be addressed.
Submission received: 20 September 2021 / Revised: 13 December 2021 / Accepted: 24 December 2021 / Published: 29 December 2021

Abstract:
Background: Cutting-edge digital technologies are being actively introduced into healthcare. The recent successes of artificial intelligence in diagnosing, predicting and studying diseases, as well as in surgical assistance, demonstrate its high efficiency. The ability of AI to promptly make decisions and learn independently has motivated large corporations to focus on its development and gradual introduction into everyday life. Legal aspects of medical activities are of particular importance, yet the legal regulation of AI’s performance in healthcare is still in its infancy. The state is to a considerable extent responsible for forming a legal regime that meets the needs of modern (digital) society. Objective: This study aims to determine the possible modes of AI’s functioning, to identify the participants in medical-legal relations, to define the legal personality of AI and to circumscribe the scope of its competencies. Of particular importance is determining the grounds for imposing legal liability on persons responsible for the performance of an AI system. Results: The present study identifies the prospects for a legal assessment of AI applications in medicine. The article reviews the sources of legal regulation of AI, including the unique sources of law sanctioned by the state. Particular focus is placed on medical-legal customs and medical practices. Conclusions: The presented analysis has allowed us to formulate approaches to the legal regulation of AI in healthcare.

1. Introduction

The history of the application of digital technologies and artificial intelligence (AI) in healthcare spans several decades. In the 1970s, Stanford University introduced MYCIN, an expert system that allowed doctors to identify bacterial infections, such as bacteremia and meningitis, and to suggest an appropriate treatment. MYCIN was never applied in practice and served only as an experimental model demonstrating the capabilities of AI. In 1986, Massachusetts General Hospital developed DXplain, a decision support system that used a patient’s symptoms to produce a list of potential diagnoses for the physician’s reference. Subsequently, Washington University in St. Louis implemented the Germwatcher expert system for the detection of infections in patients (Kahn et al. 1993, 1996). Since the beginning of the 21st century, the development of AI-based medical applications has become a priority concern in the IT industry.
According to the expert opinion of IBM, the developer of the Watson supercomputer, AI technologies can be effectively used in healthcare for the purposes of structuring medical data (e.g., processing natural language by transforming it into clinical text), analyzing patient data (e.g., abstracting treatment logs into a summary of a patient’s medical history), identifying clinical diagnostic similarities (in order to determine the appropriate course of patient treatment based on similar cases), as well as advancing medical thought (by verifying medical hypotheses) (What is Artificial Intelligence in Medicine? 2021).
AI represents a set of technological solutions that simulate human cognitive functions (including the abilities of independent learning and decision making in the absence of a predetermined algorithm) and that, when applied to certain tasks, can produce results comparable to, or better than, those achieved by human intellectual activity. This set of technological solutions includes information and communication infrastructure, software (including applications that employ machine learning methods), and data processing and decision-making services and tools.
International organizations in the field of AI, in particular the European Association for Artificial Intelligence (Brussels, Belgium) and the Association for the Advancement of Artificial Intelligence (CA, USA), proceed from the history of the development of AI and cybernetics in general (7 Types of Artificial Intelligence 2019) and define AI as a technology for managing information and data.
There have been proposals to divide AI into distinct types. In particular, Naveen Joshi distinguishes seven: Reactive Machines, Limited Memory, Theory of Mind, Self-aware AI, Artificial Narrow Intelligence, Artificial General Intelligence and Artificial Superintelligence (Akkus et al. 2021).
Researchers are actively working on advancing the capabilities of AI-based technologies, including both general aspects of their virtual and material application (Amisha et al. 2019; Barh 2020; Xing et al. 2020) and specific instances of their application, such as automatic detection of cardiac arrhythmias in an electrocardiogram (Zhang et al. 2020), outpatient neurological diagnosing on the basis of electrophysiological activity of the brain (Calhas et al. 2020), etc. The results of such work are presented at scientific conferences (Haque et al. 2017; Pusiol et al. 2016). The merits and demerits of AI application in different fields of medicine, including cardiology, echocardiography (Marlicz et al. 2020), pulmonary medicine, endocrinology, nephrology, gastroenterology (Hamamoto et al. 2020; Ishimura et al. 2021), neurology, oncology (Simopoulos and Tsaroucha 2021), computational diagnosis of cancer in histopathology, colorectal cancer screening, diagnosis and treatment (Gurung et al. 2021), medical imaging and validation of AI-based medical technologies (Briganti and Moine 2020), are being thoroughly examined in specialized literature.
According to a recent forecast by Deloitte, investments in healthcare IT and AI-based technologies will amount to $57.6 billion by 2021 (Machine Learning: Things Are Getting Intense 2017; Artificial Intelligence Act 2021). Microsoft plans to invest $40 million in AI applications for general healthcare (with the aim to promote research activities, development of analytic tools, and accessibility to medicine) (Microsoft will Invest $40 Million into AI for Healthcare 2020) and to spend an additional $20 million on AI solutions for combatting COVID-19 (including data analytics, treatment and diagnosis, resource allocation, dissemination of reliable information and scientific research) (Microsoft will Invest $20 Million into Combatting COVID-19 with the Help of AI 2020).
The rapid development of digital technologies, in particular AI, has created the problem of legally regulating the concept of end-to-end digital AI technology, the conditions and features of its development and functioning, its areas of application, its integration into other systems and the control over its use. Each country solves this problem individually, taking into account the peculiarities of its local legal system. By 2020, different countries had accumulated their own experience in the legislative regulation of relations arising in connection with the development of AI, from national AI strategies to principles for the application of AI in specific areas. Within the framework of this study, the authors propose to analyze the system of sources of legal regulation of the use of AI in medicine, as well as to consider certain legal problems associated with the use of this technology.
The remainder of the paper is organized as follows. Section 3.1 describes the directions for the use of AI technology in healthcare (drug development and validation; disease diagnosis and off-site AI applications; treatment). The legal aspects of medical applications of AI (sources of legal regulation of the use of AI technology in healthcare, legal liability for the work of AI, protection of the personal data of patients) are described in Section 3.2. Section 4 summarizes the results of the research.

2. Materials and Methods

The research information base consists of 76 sources, including various regulatory legal acts on AI, medical and legal research, thematic publications in the media and Internet sources.
The present study relies on the core empirical methods of scientific inquiry, including observation (of the development of AI-based technologies and their application in medicine), comparison (of the efficiency of AI and human medical personnel) and material modeling (of the application of AI to healthcare). The employed theoretical methods include analysis (of the merits and demerits of AI application) and theoretical modeling (of the prospects for employing AI in different fields of medicine and the extent of its liability).
The current study focuses on the legal aspects of employing AI in healthcare and the legal approaches to addressing the following issues:
  • the rules of medical law that should regulate the application of AI: statutory law vs. medical-legal customs and practices;
  • optimal implementation of AI in healthcare;
  • functioning of AI in space, over time and at a place of its deployment, including issues pertaining to its transboundary nature;
  • legal liability for the performance of AI: liable persons and ways of ensuring their accountability;
  • protection of personal data of patients when using AI technology.

3. Discussion

3.1. Directions for the Use of AI in Health Care

The most promising applications of AI technology in healthcare are drug development and application; medical imaging and diagnostics; physician decision support; forecasting and risk analysis; lifestyle management and monitoring; information processing and analysis from wearable devices; monitoring of chronic conditions; virtual assistants; emergency care and surgery. Within the framework of this study, it is proposed to analyze in detail the following areas of using AI technology in medicine.
The authors examine the application of the following AI-based systems:
  • a cyborg-AI-doctor–a human individual with an intelligent AI chip implanted in their brain (a cybernetic organism);
  • an AI-robot–an autonomous cyberphysical system (machine) that can independently navigate through the hospital or visit outpatients in their homes;
  • an AI-hospital, or an AI-medical organization–AI implemented within a perimeter of a given medical organization (on-site);
  • an AI-cloud-doctor–an AI-based software platform, whose information and communication infrastructure, data processing and decision-making tools are hosted in a cloud storage service (off-site).
The analysis reveals a palette of possible forms of AI in medical practice, taking into account the territorial, technological, ethnic and other factors that affect the choice of the form of AI in the provision of medical services.

3.1.1. Drug Development and Validation

In the process of creating new drugs, AI has first of all made it possible to accelerate work with big data. The main objective of AI in this case is to predict the interactions between future drug molecules and human cell proteins and, therefore, the effectiveness of the future drug. In addition, AI can be used to study and understand disease mechanisms and to search for biomarkers.
In 2015, during the epidemic of Ebola virus disease in West Africa, Atomwise partnered with the University of Toronto and IBM to quickly develop a cure for Ebola virus infections, providing the core AI technology for the drug research (New Ebola Treatment Using Artificial Intelligence 2015).
In 2020, for example, AI technology made it possible to analyze the activity of thousands of drugs in relation to their ability to block an enzyme without which the SARS-CoV-2 virus cannot multiply in human cells.
In the context of the COVID-19 pandemic, the use of AI technology has improved the organization of clinical trials, optimized the development of new vaccines and the analysis of trial results, as well as the comparison and systematization of data from different groups of patients (Kaushal et al. 2020; Pires 2021). For example, Moderna Therapeutics is using AI to discover potentially effective drugs for coronavirus and to develop an appropriate vaccine (The Role of AI in the Race for a Coronavirus Vaccine 2020).

3.1.2. Disease Diagnosis and Off-Site AI Applications

The algorithmic thinking of medical AI has numerous advantages over human cognition, including its ability to work 24/7 without interruptions, as well as the lack of susceptibility to fatigue or emotional bias. These advantages become especially critical in the event of a disease outbreak (epidemics and pandemics), treatment of severe forms of diseases, as well as the emergence of new diseases that were previously unknown to medicine. The ability of AI to accurately diagnose multiple conditions is attested to by physicians. To give only a few examples, AI has been demonstrated to successfully identify colon polyps during a colonoscopy (Azer 2019), to detect coagulopathy and inflammation in trauma (Thorn et al. 2019), etc.
AI is capable of not only assessing the current health status of a patient but also of immediately putting it in the context of their entire medical history. Such functionality is implemented, for example, in Botkin.AI, a Russian software platform for diagnosing cancer published on the Microsoft Azure Marketplace (It will detect cancer and provide explanations 2019), the American Google Health (Meet David Feinberg, Head of Google Health 2019) and the Israeli Zebra Medical Vision (Nanox AI 2021).
AI is also able to provide an on-the-fly evaluation of the external factors relevant for the diagnosis, treatment and future prevention of the disease in a given individual, such as:
  • weather and climatic conditions (temperature, atmospheric pressure and humidity level),
  • sanitary and epidemiological situation,
  • genetic predisposition of patients to certain infections,
  • economic factors (household income, living conditions and working capacity),
  • degree of social well-being (access to healthcare, availability of medicines),
  • legislative norms, in particular, those regulating healthcare.
All forms of AI (a cyborg-AI-doctor, an AI-robot, an AI-hospital and an AI-cloud-doctor) can be effectively used to address the above issues. AI can play a role in managing health conditions, preventing diseases and monitoring the risks of disease spread (Tian et al. 2019).
AI can be used off-site in the following areas of healthcare:
  • genetic analysis of predisposition to and progression of diseases (in oncology, gastroenterology, orthopedics, ophthalmology, endocrinology, gynecology, etc.),
  • remote medical examination of a patient based on their symptoms and medical history,
  • assessment of the need for hospitalization (e.g., based on the results of an ECG, coronary angiogram or ultrasound examination).
Obviously, one cannot currently have an X-ray, MRI, or CAT scan at home (off-site). Physical contact with a patient is still required in many cases. The available portable equipment allows only limited medical testing to be done at a patient’s home. Thus, cyborg-AI-doctors, AI-robots and AI-medical organizations will need to be deployed in healthcare facilities. An AI-cloud-doctor is not yet capable of providing a comprehensive patient examination remotely, but some promising results have already been achieved in this area (Chan et al. 2009); examples include Ada, an AI-powered doctor app and telemedicine service (Our Global Health Initiative 2021), and MedWhat, an intelligent virtual assistant that can answer complex medical questions posed in natural language (MedWhat 2021).
Every doctor, like any other specialist, is susceptible to doubt and fear of admitting an error. This leads to potential disagreements among physicians with regard to diagnosis, prediction of the disease progression and choice of treatment. A cyborg-AI-doctor, AI-robot, AI-hospital and AI-cloud-doctor, on the other hand, will be able to take the most appropriate course of action without hesitation, by relying on the algorithms programmed into them and the available medical databases. The application of machine learning algorithms to large sets of patient data can make the work of medical AI more objective compared to human physicians (Qi and Lyu 2018; Wang and Summers 2012).

3.1.3. Treatment: Novel AI-Powered Solutions

Medical practice has accumulated a large body of approaches to disease treatment. It appears that the high investment appeal of medical technologies may prompt hospitals to make decisions that run counter to the physician’s Hippocratic oath. This may hamper the widespread application of AI-cloud-doctors and favor the predominant use of cyborg-AI-doctors, AI-robots and AI-medical organizations in the foreseeable future.
It goes without saying that the correct diagnosis is a necessary requirement for choosing an appropriate treatment. The ability to select effective medicines and adequate treatment protocols on the basis of genomic data was recently demonstrated by the AI-robot Sophia (Sophia AI Reaches Key Milestone by Helping to Better Diagnose 200,000 Patients Worldwide 2018).
All forms of AI (cyborg-AI-doctors, AI-robots, AI-hospitals and AI-cloud-doctors) can be effectively employed in the following areas:
  • medical interventions (surgery),
  • manufacture and prescription of medicines (pharmaceutics and pharmacology),
  • treatment with the use of immunologic agents (immunotherapy) and plants (herbal medicine),
  • prevention of epidemics (epidemiology), etc.
AI can operate in one of three modes:
  • a cyborg-AI-doctor,
  • an AI-robot/AI-hospital/AI-cloud-doctor assisting a human physician,
  • an autonomous and remotely controlled AI-robot/AI-hospital.
The prospects for interaction with these forms of AI look promising. For example, a human surgeon performing an intervention with a remote-controlled machine equipped with surgical instruments may benefit from the help of an AI-robot assistant. During both surgeries and diagnostic procedures, AI can promptly access the medical history of a patient and evaluate the factors that may affect the choice of treatment (climatic conditions, epidemiological situation, the patient’s genetic predisposition to infections, etc.).
The future of medicine lies with robotic surgery and pharmaceutics, which will reduce the costs of staffing and round-the-clock patient care. Even suboptimal or inadequate AI-powered treatment has the potential to be beneficial by evoking the placebo effect in a patient. Moreover, on a subconscious level, patients will be more likely to deem the work of AI error-free, owing to the fact that its algorithms are designed to minimize errors.

3.2. Legal Aspects of Medical Applications of AI

3.2.1. The Place of Medical AI in the Digital Space of Trust

Scientific and technological progress and digital technology have enabled the creation of a unified digital space of trust. Participants in this space are expected to trust the information derived from it, and their identification and authentication proceed automatically.
The conversation about a space of trust first arose with respect to the recognition of electronic signatures. To address this issue, the European Union adopted Directive 1999/93/EC of the European Parliament and of the Council of 13 December 1999 on a Community Framework for Electronic Signatures (Official Journal of the European Union 2000), which was subsequently replaced by Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on Electronic Identification and Trust Services for Electronic Transactions in the Internal Market and Repealing Directive 1999/93/EC (Official Journal of the European Union 2014). A similar approach was adopted by the Eurasian Economic Union (see the Treaty on the Eurasian Economic Union of 29 May 2014) (Treaty on the Eurasian Economic Union 2014) and other international associations. The Internet has made it possible to transmit information remotely in electronic form. The existence of a digital space of trust with regard to electronic signatures provides assurance that the received information is reliable and trustworthy through the identification and authentication of the participants in the information exchange.
Broadly speaking, any digital information is deciphered by means of converting binary code (ones and zeroes) into human-readable text. It appears reasonable to use the e-signature space of trust as a model when developing a digital space of trust in the field of AI, in which the identification and authorization of medical AI would proceed uniformly.
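The binary-to-text conversion mentioned above can be illustrated with a minimal sketch (a generic illustration only; the byte values and the UTF-8 encoding below are assumptions for the example, not part of any medical AI or e-signature system):

```python
# Digital information is stored as binary code (ones and zeroes) and
# becomes human-readable only after decoding under an agreed convention,
# here UTF-8. The bit string below is an arbitrary example.
binary = "01000001 01001001"  # two bytes written out as ones and zeroes
decoded = bytes(int(b, 2) for b in binary.split()).decode("utf-8")
print(decoded)  # -> AI
```

The same principle underlies identification and authentication in a space of trust: all participants must share the conventions under which the exchanged binary data are interpreted.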
It is expected that the establishment of the unified digital space of trust in AI will be hampered by different practices adopted in medical schools and medical traditions across the globe (the subjective factor). Nevertheless, all doctors are united by a common goal–to cure the patient or prevent them from becoming ill, regardless of their nationality, religion, gender and race, or the location of the hospital. Thus, it is likely that international organizations and alliances of doctors (the World Health Organization, the International Federation of Red Cross and Red Crescent Societies, Médecins Sans Frontières, etc.) will play a key role in the formation of the digital space of trust in AI.
The core principle behind the establishment of the unified digital space of trust in medical AI will be that doctors, patients, the government, the state and civil society as a whole will acknowledge the accuracy of the exchanged information related to AI.

3.2.2. Sources of Legal Regulation of the Use of AI in Health Care

One of the main objectives of modern “digital” legal regulation of the use of AI technology in healthcare is to limit the emergence of risks to public health, safety or the environment, as well as to respect the confidentiality of patients’ personal data. However, not all of the risks associated with the use of AI technology in this sphere are currently known, so legislators, in developing regulations, also face the task of anticipating and neutralizing such risks.
Two approaches to the legal regulation of the development and application of AI technologies stand out in the scientific literature:
-
the formal (legal) approach, under which one first considers whether AI and robotics fall within the scope of existing legislation. From the point of view of the adherents of this more conservative, legalistic approach, it would be correct to assign responsibility for the actions of a robot to the person who launched it;
-
the technological approach, the essence of which is first to determine whether the use of AI creates new problems and then to assess the legal need for special regulation of those problems (Chung 2017). Proponents of the technological approach insist on the secondary nature of law and consider it sufficient to insure the liability of robots for their actions: a percentage of the economic benefit derived from the use of a robot would be deducted to a special fund, from which any damage caused by the robot would be covered.
However, the use of the formal (legal) approach to the legal regulation of the development and application of AI technologies would slow the development of robotics, resulting in a serious economic lag behind countries adhering to the technological approach.
In order to ensure a balance of interests, we consider it necessary to highlight a third, compromise approach to the legal regulation of AI technologies, according to which legal regulation would concern only the ethical aspects of the use of AI technology.
Medical activities that involve AI technologies are not only subject to legal regulation, but also raise psychological, ethical and moral issues related to patient treatment. The latter concern gave rise to bioethics, which originated from the Hippocratic oath (Rudnev 1994) and was later modernized by the Declaration of Geneva (1948) adopted by the General Assembly of The World Medical Association. This attests to the fact that medical activities are regulated not only by the rule of law but also by non-legal instruments, including rules of morality, ethics, psychology, sociology, etc.
Traditionally, the universally recognized principles and rules of international law, international treaties, and domestic acts (laws and by-laws), customs are viewed as the sources of all branches of law.
The importance of international legal regulation in the field of AI application lies in:
-
the possibility of establishing uniform “rules of the game” for all participants in the global market;
-
creating a benchmark for the development of individual provisions of laws in the national law of individual states.
Currently, there are no multilateral international treaties (conventions) that enshrine general provisions on the use of AI technology, let alone on its use in healthcare. Only a few documents have been adopted that contribute to the formation of the foundations of international legal regulation in the field of AI, and these are of a recommendatory nature. Among such documents are the following:
-
Okinawa Charter on Global Information Society (G8 Kyushu-Okinawa Summit Meeting 2000, Kyushu-Okinawa Japan) (G8 2000), which proclaimed the need for a regulatory framework that promotes cooperation to optimize global networks and reduce the digital divide;
-
OECD Council Recommendation on Artificial Intelligence (adopted by the Council at Ministerial Level on 22 May 2019) (OECD Council Recommendation on Artificial Intelligence 2019) as the first intergovernmental standard on artificial intelligence. This document contains general principles for the use of AI, as well as recommendations for national governments on the development of AI;
-
G20 Ministerial Statement on Trade and Digital Economy (2019, Japan) (G20 Ministerial Statement on Trade and Digital Economy 2019), in which the principles of the development of AI were approved on behalf of the member states of the G20.
Within the framework of international legal regulation, it is also necessary to highlight international technical standards. The creation of international standards is the result of the activities of international standardization organizations that develop and publish standards, guidelines, recommendations and technical reports. Such organizations include the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU).
In 2017, a subcommittee on artificial intelligence (ISO/IEC JTC 1/SC 42 Artificial intelligence) was created within the joint ISO/IEC technical committee JTC 1, which has developed and approved the following standards:
-
ISO/IEC 20546:2019 Information technology – Big data – Overview and vocabulary (the document contains a set of terms and definitions necessary to improve communication and understanding of this area) (ISO/IEC 20546: 2019);
-
ISO/IEC TR 20547-2:2018 Information technology – Big data reference architecture – Part 2: Use cases and derived requirements (ISO/IEC TR 20547-2: 2018);
-
ISO/IEC 20547-3:2020 Information technology – Big data reference architecture – Part 3: Reference architecture (ISO/IEC 20547-3: 2020);
-
ISO/IEC TR 20547-5:2018 Information technology – Big data reference architecture – Part 5: Standards roadmap (the document includes a roadmap for standards) (ISO/IEC TR 20547-5: 2018).
At the level of the European Union (EU), the following sources of legal regulation of the use of AI technology can be distinguished.
In February 2017, the EU Parliament adopted Resolution 2015/2103(INL) Civil Law Rules on Robotics (European Parliament Resolution 2017). European legislation on robotics and AI is based on Isaac Asimov’s laws: (1) a robot must not, by act or omission, cause harm to humans; (2) a robot must obey human commands unless they contradict the first law; (3) a robot must take care of its own safety to the extent that this does not contradict the first or second law. Primarily, the provisions of the resolution apply to robotics, but it can be assumed that they can also be applied to AI technology by analogy. The resolution includes approaches to defining liability for damage caused by robotics, proposes a European registration system for “smart” robots, and suggests that AI robots be given the status of electronic persons.
Subsequently, in April 2018, representatives of 25 European countries, including those that are not members of the European Union, signed a Declaration of cooperation on Artificial Intelligence (EU Declaration of Cooperation on Artificial Intelligence Signed at Digital Day 2018), according to the provisions of which the participating States pledged to work on an integrated European approach to the development of artificial intelligence, pursuing coherent national policies to enhance the competitiveness of the European Union, and creating digital innovation hubs at a pan-European level.
Also in 2018, the European Commission developed the Coordinated Plan for Artificial Intelligence of 7 December 2018 (Coordinated Plan on Artificial Intelligence 2021 Review. COM (2021) 205 Final 2021), which provides a European strategy for the development of robotics and AI. The overall goal of the participating States working together is to ensure that Europe becomes the world’s leading region for the development and application of “advanced, ethical and safe AI”. Thus, as part of the implementation of this plan, the European Commission has increased investment in AI technology under the Horizon 2020 research and innovation framework program to €1.5 billion between 2018 and 2020, an increase of 70% compared to the 2014–2017 period.
The regulation of ethical issues in the use of AI has not been ignored either. In 2019, the European Commission approved the Ethics Guidelines for Trustworthy AI (Ethics Guidelines for Trustworthy AI. High-Level Expert Group on Artificial Intelligence 2019), whose main purpose is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, from both a technical and a social perspective since, even with good intentions, AI systems can cause unintentional harm. According to the provisions of this document, the main ethical principles for the use of AI are respect for human autonomy, prevention of harm, fairness and explicability.
In addition, the EU has adopted the Digital Europe Programme as a systemic component of the EU financial framework for 2021–2027, intended to digitalize European society and maintain competitiveness in innovative technologies, including robotics and AI (The Digital Europe Programme for the Period 2021–2027. COM (2018) 434 2018). The main objective of this programme is to stimulate digital transformation by funding the implementation of emerging technologies in the most important areas, each with its own budget: high-performance computing, AI, cybersecurity and trust, advanced digital skills, and the deployment and optimal use of digital capacities.
In 2019, the European Committee of the Regions issued an opinion on “Artificial Intelligence for Europe” (Opinion of the European Committee of the Regions on ‘Artificial Intelligence for Europe’ (2019/C 168/03) 2019), which highlighted the importance of all levels of public authority, including local and regional authorities, working closely together to create an enabling environment for investment in AI technologies, as well as the need to strengthen inter-regional cooperation based on smart specialization strategies.
At the end of April 2021, the European Commission published a draft of the world’s first law to regulate in detail the development and use of AI-based systems: the proposal for a regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. The main objectives of this bill are:
(1)
to establish a legal framework guaranteeing the safety and compliance with European law of AI systems entering the EU market;
(2)
to provide a legal environment for investment and innovation in the field of AI;
(3)
to establish an enforcement mechanism in this area.
In the same bill, the EU proposes a risk-based approach to the regulation of AI technology by dividing AI systems into the following groups: prohibited systems posing unacceptable risk; high-risk systems; and limited- or minimal-risk systems. Regulation will differ according to the classification of the AI system.
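As an illustration only, the risk-based tiering described above can be sketched as a simple lookup. The tier names follow the draft Act’s classification; the obligation strings are simplified paraphrases for illustration, not statutory text.

```python
# Illustrative sketch of the draft EU AI Act's risk-based approach.
# Tier names follow the proposal; the consequences are simplified
# paraphrases, not quotations from the legal text.
RISK_TIERS = {
    "unacceptable": "prohibited from being placed on the EU market",
    "high": "subject to conformity assessment, documentation and human oversight",
    "limited": "subject to transparency obligations",
    "minimal": "no additional obligations beyond existing law",
}

def regulatory_consequence(tier: str) -> str:
    """Return the simplified regulatory consequence for a given risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

print(regulatory_consequence("high"))
```

The point of the risk-based design is that obligations attach to the tier, not to the underlying technology: the same model may fall into different tiers depending on its intended use (e.g., in diagnostics versus administrative scheduling).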
As we noted earlier, Deloitte predicts that investment in IT healthcare and AI technologies will reach $57.6 billion by 2021, so countries are actively developing and adopting national laws and policies to address this area, both to remain competitive and to ensure information security. In addition, the European Commission is urging all EU member states to develop a national AI strategy. To date, more than thirty countries, including China (Notice of the State Council Issuing the New Generation of Artificial Intelligence Development Plan 2017), Saudi Arabia (Saudi Arabia’s National Strategy for Data and AI 2020), Canada (Pan-Canadian Artificial Intelligence Strategy 2020), the US, the UK, France (AI for Humanity. French Strategy for Artificial Intelligence 2018), Germany (Artificial Intelligence Strategy of the German Federal Government 2020) and Russia (from 2019), have adopted national strategies in one form or another.
Thus, the contents of strategic development documents on the use of AI technology generally include the following components:
(1)
the current level of development of AI technologies in the world, the key sectors of their implementation;
(2)
expectations for the development of technology in the short, medium and long term;
(3)
key stages, tasks and objectives of development of AI technologies in a particular country;
(4)
main problems and challenges of AI technology development;
(5)
plan of main activities aimed at technology development in general;
(6)
financing of AI technology development;
(7)
other related matters.
In Russia, on 10 October 2019, Presidential Decree No. 490 “On the Development of Artificial Intelligence in the Russian Federation” approved the National Artificial Intelligence Development Strategy for the period until 2030 (the Strategy) (On the Development of Artificial Intelligence in the Russian Federation 2019), one of the main goals of which is to create a comprehensive system for regulating social relations arising in connection with the development and use of AI technologies, which includes the following aspects:
-
ensuring favorable legal conditions (including through the creation of an experimental legal regime) for access to predominantly anonymized data, including data collected by public authorities and medical organizations;
-
provision of special conditions (regimes) for access to data, including personal data, for scientific research, creation of AI technologies and development of technological solutions based on them;
-
elimination of administrative barriers to the export of civil products (works, services) created on the basis of AI;
-
creation of unified systems for standardization and conformity assessment of technological solutions developed on the basis of AI, development of international cooperation of the Russian Federation on standardization issues and ensuring the possibility of certification of products (works, services) created on the basis of AI;
-
development of ethical rules for human interaction with AI and other aspects.
The Strategy also stipulates principles for the development and use of AI technologies whose observance is mandatory: protection of human rights and freedoms; safety of AI use; transparency of AI use; technological sovereignty; integrity of the innovation cycle; and support for competition. These principles are expected to shape the legal regulation of AI technology implemented as part of the Strategy.
Since 2020, Russia has been developing the first editions of national standards on AI in healthcare. The work is coordinated by the Technical Committee for Standardization “Artificial Intelligence” (TC 164) (Technical Committee 164 “Artificial Intelligence” 2019), established on the basis of the Russian Venture Company. According to the committee’s work plan, about 50 standards on the use of AI technology in healthcare are to be developed by 2027, covering, among other areas, general requirements and classification of AI systems in clinical medicine, radiology and functional diagnostics, remote monitoring systems, histology, medical decision support systems, image reconstruction in diagnostics and treatment, big data in healthcare, medical analytics and forecasting systems, and educational programs.
Along with the traditional sources of law, a unique form of law (rules of conduct) can be distinguished: medical custom.
According to the comparative law scholars René David and Camille Jauffret-Spinosi, legal customs require legitimization within the law; however, this does not prevent them from being considered an independent, objective and fair source of law (David and Jauffret-Spinosi 2002). Academics generally agree that legal customs are recognized as sources of law on a par with legislative acts (see, e.g., the works of Panagiotis Zepos) (Zepos 1962). The contribution of legal customs to the system of sources of law in different legal traditions is thoroughly described in the works of Raymond Legeais (Legeais 2004). Following this approach, it can be argued that medical legal customs sanctioned by the state or other public institutions (e.g., the World Health Organization) can be viewed as sources of regulation of AI.
Yet, the issue is not void of controversy. Given that medical customs are developed by the medical community over the course of many years of work, how do we incorporate them into the system of legal regulation of AI technologies (cyborg-AI-doctor, AI-robot, AI-medical organization and AI-cloud-doctor)? Can the collaboration between doctors and AI systems result in the emergence of such legal customs in the future? Providing answers to these questions is difficult because the history of AI spans only a few decades. It is likely that a key role in these matters will be attributed to the state, whose competent authorities will have to determine which medical legal customs (written or undocumented) may regulate AI technologies in healthcare. Importantly, it is necessary to distinguish between legal and non-legal medical customs; the latter are not sanctioned by the state and do not constitute a source of law.
Medical legal customs need to be systematized and made available to AI. This is necessary to ensure the quality and objectivity of AI. The guidelines and principles for the application of AI technologies (such as an AI-hospital, AI-robot and cyborg-AI-doctor) will largely depend on the legal tradition of a given country. Subsequently, we will need to create an international database of medical legal customs adopted by all countries participating in the integration program, which will provide the basis for international AI (i.e., for an AI-cloud-doctor).
Currently, the legal regulation of AI technology is at a nascent stage: states are primarily seeking to develop and adopt general (strategic) legal acts on the use of AI technology as such. Since the legal regulation of AI technology in health care has not yet taken shape, it seems appropriate to be guided by the following principles in the development, implementation and use of AI technology:
(1)
control over health-related decision-making should remain in the hands of the individual;
(2)
protection of patients’ privacy and confidential information;
(3)
compliance by developers of AI technology with the safety, accuracy and effectiveness requirements for the use of AI in health care;
(4)
non-discriminatory and equitable use of AI technology;
(5)
implementation of professional training for healthcare professionals in the use of AI technology;
(6)
transparency in the use of AI technology.

3.2.3. Can AI Take Part in Forming Medical Practice as a Source of Medical Law?

In legal science, only case law (in European legal systems) and judicial precedent (in Anglo-Saxon legal systems), both formed by courts (as a rule, state courts) in the course of applying legislative norms, are considered de jure sources of law.
Can medical practices based on the application of laws and state-sanctioned legal medical customs be regarded as sources of law? De facto, medical practice influences decisions made by physicians, and practical knowledge accumulated over the years is a major factor in determining the appropriate course of medical or prophylactic treatment. In the words of the Russian poet Alexander Pushkin, “Chance is the god of invention”. In medical practice, a single case may unexpectedly lead to the development of an effective way to treat or diagnose a disease. Yet, in cases of disputes and medical conflicts, including those settled through the courts, the adopted medical practice is not always regarded as an undeniable indicator of reasonable and good faith behavior of doctors from a legal standpoint, owing to biological differences across individuals.
It is necessary to establish the grounds for recognizing medical practice as a source of regulation of medical legal relations. To this end, it is important to determine whether the work carried out by AI can be considered a genuine part of the developing medical practice and legal customs on a par with the work of human physicians. It is also necessary to determine the limits of AI’s autonomy and capacity for self-learning. Can an AI system be viewed as a rule-making entity?
At first glance, the behavior of AI algorithms should always be predictable to humans, since it is humans who designed them. In reality, the capacity for self-learning, combined with access to Big Data on global medical practices (Liu et al. 2018), may result in highly unexpected behavior of the cyberphysical system, which can be either beneficial (a positive effect) or harmful (a negative effect). Beneficial AI practices may therefore come to form part of general medical practice. At the same time, if we let AI participate in the formation of medical practice, we will need to be able to monitor its work, both in real time and in follow-up.

3.2.4. Legal Liability for the Work of AI

The problem of legal liability for the actions of AI is one of the most important in the legal regulation of AI relations. Its urgency is illustrated by the case in which the IBM Watson supercomputer prescribed incorrect methods of cancer treatment, leading to a worsening of the patient’s condition (AI Oncologist IBM Watson Caught in Medical Errors 2018).
The scientific literature offers different approaches to liability for the work of artificial intelligence: liability of the person who programmed the robot (Filipova 2020); liability of the person using the robot as a tool (Vasiliev and Shpoper 2018); and liability of the intelligent robot itself (Morhat 2017). The unpredictability of an AI robot’s future behavior is inherent in its digital algorithm, as is clearly shown in work on the safety problems of artificial intelligence (Amodei et al. 2016). When a poorly designed AI system is endowed with self-organization, undesired machine learning cannot be ruled out, including learning caused by hacking of the program.
The time perspective of legal regulation of liability for the activities of AI can be conditionally divided into three periods: short-term (the next few decades); medium-term (from the middle to the end of the 21st century); and long-term (from the end of the 21st century onward) (Laptev 2019).
Short-term period. In the near future, AI robots will be considered exclusively as an object of law (AI-robot, AI-hospital and AI-cloud-doctor).
Liability for activities related to the use of AI rests with those who use this intelligence as an object of increased danger. At the same time, a source of increased danger is understood as any activity, the implementation of which creates an increased likelihood of harm due to the impossibility of complete control over it by a person.
Obviously, intelligent robots, as well as intelligent computer software products, belong to such objects of law. Consequently, Russian legislation already contains the legal norms necessary to govern liability for the activities of AI (Article 1079 of the Civil Code of the Russian Federation).
The existing legal framework assumes that liability lies with the person who owns the AI (for example, a hospital) or manages it (a doctor, operator, or another person who sets the parameters of its work or determines its behavior, as in the case of a cyborg-AI-doctor), in particular where the AI is used to support production and economic activities.
Additionally, the creators (manufacturers) of the AI-robot or AI software complex should also be recognized as responsible persons, since the owners and users of AI are not always technically capable of influencing the work of artificial intelligence or predicting its behavior. Here, an analogy can be drawn with liability for low-quality products, in this case the artificial intelligence itself.
Thus, liability for the activities of AI is borne by the owner of the AI, the person managing the AI, and the developer (creator) of the AI.
This rationale for treating AI as an object of law, or as part of the structure of an object of law, does not require a significant change in legal doctrine.
Medium-term period. The next stage in the development of robotics will make it possible to speak of robots possessing the properties of subjects of law. Recognizing robots as subjects of law will inevitably make it possible to hold them accountable.
Here, the role of the creators (producers) of the AI-robot, AI-hospital and AI-cloud-doctor must be assessed. The question of bringing them to legal responsibility will be very delicate. Two principal approaches are proposed: AI producers are liable only where the intelligence was purposefully created in order to commit an offense; or liability requires proof of the creator’s direct fault in the onset of legal consequences.
An important issue will be the limits of legal liability of the creator of an AI-robot, AI-hospital or AI-cloud-doctor. It is worth recalling the construct of subsidiary liability: the liability of a subsidiarily obliged person (here, the creator) in addition to that of the main debtor (here, the AI-robot, AI-hospital or AI-cloud-doctor) where the main debtor refuses or is unable to satisfy its obligations to the creditor. The creator of an AI-robot, AI-hospital or AI-cloud-doctor is not released from liability if its creation, though recognized as a subject of law, is incapable of being a proper and bona fide participant in legal relations.
One reservation applies: the creator should not be liable where the behavior of the AI-robot, AI-hospital or AI-cloud-doctor went beyond its creator’s reasonable foresight. When designing an objective function for an AI system, the developer sets a goal rather than the specific steps to achieve it; granting AI more autonomy increases the chance of error (Sodhani 2018). Thus, the creator should not be responsible for actions of the AI-robot, AI-hospital or AI-cloud-doctor, even those that caused harm to human life and health, if it is established as a legal fact that the AI exceeded the bounds of its design.
The proposed approach allows us to assert that the liability for the activities of AI will be borne by: (1) AI-robot; (2) AI developer (creator); (3) AI owner.
Long-term period. In the future, in the 22nd century, it may become possible to recognize AI-robots, AI-hospitals and AI-cloud-doctors, capable of performing digital actions and decisions (whether or not materialized in the real world), as subjects of law.
The previous (medium-term) stage differs from this future one primarily in that recognition of the legal personality of the AI-robot, AI-hospital and AI-cloud-doctor is preceded by the materialization of the robot’s behavior in the real world through actions that generate legal facts (for example, medical activity). In the medium term, therefore, the processor of an AI-robot, AI-hospital or AI-cloud-doctor performs only a digital computational function: like the human brain, it acts as a constituent element of the robot and is considered an object of law.
In the future, the AI-robot, AI-hospital and AI-cloud-doctor will acquire legal personality in cyberphysical space and be recognized as participants in cyberphysical relations in the digital space, even though an AI system remains tied to a material medium (a computing processor).
Constructing legal models for resolving the issue of legal liability of AI in the virtual world is, for obvious reasons, difficult at present. Programmers able to reliably trace the path of decisions and other computational actions performed by AI will help clarify this problem.
Of course, as in the second stage, the developer of the AI-robot, AI-hospital or AI-cloud-doctor is not exempt from responsibility, since the initial algorithms laid down at the creation of the AI predetermine its development and self-organization. At the same time, an unintentional, spontaneous deviation of the AI from its originally set goal, including through third-party interference in its work, should not automatically impose legal liability on its creator. In each case, a technical and legal assessment should be given to the nature and consequences of the actions of the AI-robot, AI-hospital or AI-cloud-doctor.
It is important to note that legal cyber liability primarily has regulatory and protective functions, which ensure the normal organization of relations in cyberspace and the stability of cyberphysical relations. Functions of legal liability such as education and prevention are of no significance for an AI-robot, AI-hospital or AI-cloud-doctor.
The approach proposed above for the third stage of AI development allows us to define the following circle of persons responsible for the activities of artificial intelligence: (1) the AI itself; (2) the AI developer (creator).

3.2.5. Protection of Personal Data of Patients

Of importance is the issue of granting AI access to patients’ personal medical data at the level of a given hospital, region or nation versus at the global level (an extraterritorial approach). Medical ethical standards and biological differences between patient populations (children, the elderly, etc.) must be taken into account when considering this issue. An objective picture can be obtained only if the medical databank is large enough. The availability of a representative patient sample is especially crucial for rare diseases (Tietze’s syndrome, Duplay’s disease, Thiemann-Fleischner’s disease, Whipple’s disease, Fields’ disease, progeria, congenital analgesia, etc.). The most populous countries—China, India and the United States—have an obvious advantage in this regard. It appears beneficial to grant the medical community global access to such information, with the stipulation that personal patient data would be securely protected from unauthorized access.
Currently, the Convention for the Protection of Individuals with regard to Automated Processing of Personal Data (1981, Strasbourg) is in force at the international level, which establishes the basic concepts, principles and conditions for the processing of personal data, and also defines approaches to the protection and processing of personal data.
In 2005, Russia adopted Federal Law No. 160-FZ of 19 December 2005 “On Ratification of the Council of Europe Convention for the Protection of Individuals with regard to Automated Processing of Personal Data”. The ratification procedure was completed on 15 May 2013, and the Convention entered into force with respect to Russia on 1 September 2013. Following ratification, Federal Law No. 152-FZ of 27 July 2006 “On Personal Data” was adopted, which, like the Convention, established general principles and rules for processing personal data, as well as approaches to their protection.
Given the task of increasing the availability and quality of data, legislation must be adapted to ensure favorable legal conditions for safe and responsible access by developers of AI systems and robotics to data, and for the safe exchange of various types of data, including data collected by government agencies and medical organizations. Additionally, special conditions (regimes) must be provided for access to data, including personal data (provided that measures are taken to protect the interests of personal data subjects, including depersonalization), for conducting scientific research, teaching AI and developing technological solutions based on it. Legal conditions must also be created for organizing identification using AI and robotics technologies (subject to the human right to privacy).
It is advisable to clarify the rules for obtaining consent to the processing of personal data where such processing occurs during scientific research in the field of AI (for example, in health, ecology, sociology, etc.) or AI training, and to require the use of AI systems that provide the necessary level of personal data protection. Given the special sensitivity of this sphere, and in order to guarantee the rights of personal data subjects, such exemptions should imply heightened protection of personal data.
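As a purely illustrative sketch, the depersonalization step mentioned above might look as follows: direct identifiers are dropped and the patient ID is replaced by a salted hash, so that records can still be linked within a research project without exposing identity. The field names and procedure here are assumptions for illustration; actual de-identification regimes (e.g., under Federal Law No. 152-FZ or the GDPR) impose far stricter requirements.

```python
# Minimal depersonalization sketch (illustrative assumption, not a
# description of any statutory procedure).
import hashlib

# Fields treated as direct identifiers in this example (an assumption).
DIRECT_IDENTIFIERS = {"name", "address", "phone", "passport"}

def depersonalize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and pseudonymize the patient ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    pid = str(record["patient_id"])
    cleaned["patient_id"] = hashlib.sha256((salt + pid).encode()).hexdigest()
    return cleaned

record = {"patient_id": 42, "name": "Ivan Ivanov", "diagnosis": "M54.5"}
safe = depersonalize(record, salt="research-project-1")
print(safe)
```

Note that salted hashing is pseudonymization rather than full anonymization: whoever holds the salt can re-link records, which is precisely why the text above calls for special access regimes rather than unrestricted sharing.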
Thus, with regard to the legal protection of patients’ personal data in the use of AI technology, it seems appropriate and justified to make corresponding amendments to the current personal data legislation of various states.

4. Results

Based on the results of the study, the authors formulated the following conclusions and proposals.
(1)
The authors have identified the following forms of AI in medicine:
-
a cyborg-AI-doctor–a human individual with an intelligent AI chip implanted in their brain (a cybernetic organism);
-
an AI-robot–an autonomous cyberphysical system (machine) that can independently navigate through the hospital or visit outpatients in their homes;
-
an AI-hospital, or an AI-medical organization–AI implemented within a perimeter of a given medical organization (on-site);
-
an AI-cloud-doctor–an AI-based software platform, whose information and communication infrastructure, data processing and decision-making tools are hosted in a cloud storage service (off-site).
(2)
Analyzing the approaches to the legal regulation of AI technology, the authors concluded that a third (compromise) approach should be highlighted, under which legal regulation concerns only the ethical aspects of the use of AI.
(3)
The authors concluded that it is necessary to supplement the traditional sources of legal regulation with a special unique form of law—medical custom.
(4)
It is noted that AI can potentially take part in the formation of medical practice.
(5)
Considering the issues of legal liability of AI, the authors identified three time horizons and the corresponding approaches to imposing legal liability.
Short-term prospects (the next few decades). An AI-robot, AI-hospital and AI-cloud-doctor will be viewed as objects of law; a cyborg-AI-doctor will be considered a subject of law. Legal liability for the work of an AI-robot, AI-hospital, or AI-cloud-doctor will lie with its operator (the physician controlling it) or another person who sets its parameters (an AI-robot, AI-hospital, or AI-cloud-doctor) and determines its behavior (a cyborg-AI-doctor, AI-robot, AI-hospital). The developer (manufacturer) of the AI system (such as an AI-robot, AI-hospital, or AI-cloud-doctor) will also be subject to subsidiary legal liability in case of detection of technical defects in AI.
Medium-term prospects (until the end of the 21st century). An AI-robot, AI-hospital and AI-cloud-doctor will acquire legal personality, become participants in legal relations, and will be held personally liable for their actions. The developer of an AI-robot will bear subsidiary liability, together with the robot, but only if proven at fault for the legal consequences that arose. Legal regulation of the work of an AI-robot will be based on the principle of autonomy of its will, which will, however, still be constrained by its fundamental purpose—namely, to serve for the benefit of humankind. AI-robots, AI-hospitals and AI-cloud-doctors will work autonomously.
Long-term prospects (from the 22nd century onward). The legal personality of AI-robots, AI-hospitals and AI-cloud-doctors will exist in the virtual (digital) space dissociated from the material world. The consequences of the work of cyborg-AI-doctors will be viewed in the context of a unified cognitive system of human and machine intelligence. Cyberphysical legal liability, which includes liability for actions carried out in the cyberphysical space, will primarily perform the regulatory and protective functions (e.g., liability for wrong prescriptions or inappropriate treatment), whereas the educational and preventive functions will play a secondary role.
(6) The authors concluded that it is necessary and advisable to amend the current legislation on the protection of patients’ personal data when AI technologies are used. Such amendments should establish special conditions (regimes) of access to data, including personal data (subject to measures protecting the interests of personal data subjects, including depersonalization), for conducting scientific research, teaching AI and developing technological solutions based on it, and should clarify the rules for obtaining consent to the processing of personal data where such processing occurs during scientific research in the field of AI.

5. Conclusions

Based on the analysis performed, the following advantages of AI in medicine can be distinguished:
-
improving the quality of patient care: AI can provide better patient care by detecting diseases earlier and suggesting more effective treatments;
-
data-driven decision-making: using machine learning algorithms, AI can document and surface more information about a patient’s status and help clinicians make better data-driven decisions by providing a fuller picture;
-
saving time and money on administrative tasks: AI can perform administrative tasks such as registering patients, entering patient data and scheduling doctors’ appointments.
However, there are currently the following limitations to the use of AI in medicine:
  • patient privacy: for example, data sharing between a number of companies is not allowed in many jurisdictions unless the patient requests it. These rules may slow down the adoption of AI in the healthcare industry;
  • complex and rigorous AI testing procedures: AI testing is long and expensive and can take years; the use of AI in healthcare is impossible without the approval of the relevant government agency.
Legal regulation of the work of AI in the field of healthcare (including cyborg-AI-doctors, AI-robots, AI-hospitals and AI-cloud-doctors) must pursue the following objectives:
  • creation of a unified digital space of trust in AI in its different forms,
  • unification and harmonization of national and international legal regimes and approaches to the regulation of AI’s work,
  • enabling non-discriminatory access to medical AI,
  • ensuring legal liability of the developers, administrators and operators of AI for its performance.
At the initial stage, this will require the adoption of a codified normative legal act in every state, which will be followed by the adoption of an international normative act (agreement, convention) that would affirm the legal status of cyborg-AI-doctors, AI-robots, AI-hospitals and AI-cloud-doctors as subjects of law and would define cyberphysical relations, juridical facts and mechanisms for enforcing legal liability. The choice of a specific approach will be largely determined by each country individually, with due regard for the opinion of the medical community.

Author Contributions

Conceptualization, V.A.L. and I.V.E.; methodology, D.R.F.; writing—original draft preparation, D.R.F.; writing—review and editing, V.A.L. and I.V.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. AI for Humanity. 2018. French Strategy for Artificial Intelligence. Available online: https://www.aiforhumanity.fr/en/ (accessed on 15 August 2021).
  2. AI Oncologist IBM Watson Caught in Medical Errors. 2018. Available online: https://hightech.plus/2018/07/27/ii-onkologa-ibm-watson-ulichili-vo-vrachebnih-oshibkah (accessed on 10 December 2021). (In Russian).
  3. Akkus, Zeynettin, Yousof H. Aly, Itzhak Z. Attia, Francisco Lopez-Jimenez, Adelaide M. Arruda-Olson, Patricia A. Pellikka, Sorin V. Pislaru, Garvan C. Kane, Paul A. Friedman, and Jae K. 2021. Artificial Intelligence (AI)-Empowered Echocardiography Interpretation: A State-of-the-Art Review. Journal of Clinical Medicine 10: 1391. [Google Scholar] [CrossRef]
  4. Amisha, Paras Malik, Monika Pathania, and Vyas Kumar Rathaur. 2019. Overview of artificial intelligence in medicine. Journal of Family Medicine and Primary Care 8: 2328–31. Available online: http://www.jfmpc.com/text.asp?2019/8/7/2328/263820 (accessed on 15 October 2021).
  5. Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mane. 2016. Concrete Problems in AI Safety. arXiv arXiv:1606.06565v1. Available online: https://arxiv.org/pdf/1606.06565v1.pdf (accessed on 10 December 2021).
  6. Artificial Intelligence Strategy of the German Federal Government. 2020. Available online: https://www.ki-strategie-deutschland.de/files/downloads/Fortschreibung_KI-Strategie_engl.pdf (accessed on 15 October 2021).
  7. Azer, Sami A. 2019. Challenges facing the detection of colonic polyps: What can deep learning do? Medicina Journal 55: 473. [Google Scholar] [CrossRef] [Green Version]
  8. Barh, Debmalya, ed. 2020. Artificial Intelligence in Precision Health: From Concept to Applications. Cambridge: Academic Press. [Google Scholar] [CrossRef]
  9. Briganti, Giovanni, and Olivier Le Moine. 2020. Artificial intelligence in medicine: Today and tomorrow. Frontiers in Medicine 7: 27. [Google Scholar] [CrossRef]
  10. Calhas, David, Enrique Romero, and Rui Henriques. 2020. On the use of pairwise distance learning for brain signal classification with limited observations. Artificial Intelligence in Medicine 105: 101852. [Google Scholar] [CrossRef] [PubMed]
  11. Chan, Marie, Eric Campo, Daniel Esteve, and Jean-Yves Fourniols. 2009. Smart homes—Current features and future perspectives. Maturitas 64: 90–97. [Google Scholar] [CrossRef]
  12. Chung, Jason. 2017. Hey Watson, Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine. pp. 1–22. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3076576 (accessed on 22 October 2021).
  13. Coordinated Plan on Artificial Intelligence 2021 Review. COM (2021) 205 Final. 2021. Available online: https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review (accessed on 15 October 2021).
  14. David, Rene, and Camille Jauffret-Spinosi. 2002. Les Grands Systèmes de Droit Contemporains [The Major Contemporary Legal Systems]. Paris: Éditions Dalloz. (In French) [Google Scholar]
  15. Decree of the President of the Russian Federation No. 490 of 10 October 2019 “On the Development of Artificial Intelligence in the Russian Federation”. 2019. Available online: http://www.kremlin.ru/acts/bank/44731 (accessed on 22 September 2021).
  16. Ethics Guidelines for Trustworthy AI. High-Level Expert Group on Artificial Intelligence. 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 2 December 2021).
  17. EU Declaration of Cooperation on Artificial Intelligence Signed at Digital Day on 10th April 2018. Available online: https://ec.europa.eu/digital-single-market/en/events/digital-day-2018 (accessed on 2 December 2021).
  18. European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). Available online: http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf (accessed on 2 December 2021).
  19. Filipova, Irina A. 2020. Legal Regulation of Artificial Intelligence: Tutorial. Nizhny Novgorod: Nizhny Novgorod State University, 90p. (In Russian) [Google Scholar]
  20. G20 Ministerial Statement on Trade and Digital Economy. 2019. Available online: https://trade.ec.europa.eu/doclib/docs/2019/june/tradoc_157920.pdf (accessed on 1 December 2021).
  21. G8. 2000. Okinawa Charter on Global Information Society. Paper presented at the G8 Kyushu-Okinawa Summit Meeting 2000, Kyushu-Okinawa, Japan, July 21–23. [Google Scholar]
  22. Gurung, Arun Bahadur, Mohammad Ali Ali, Joongku Lee, Mohammad Abul Farah, and Khalid Mashay Al-Anazi. 2021. An Updated Review of Computer-Aided Drug Design and Its Application to COVID-19. BioMed Research International 2021: 8853056. [Google Scholar] [CrossRef] [PubMed]
  23. Hamamoto, Ryuji, Kruthi Suvarna, Masayoshi Yamada, Kazuma Kobayashi, Norio Shinkai, Mototaka Miyake, Masamichi Takahashi, Shunichi Jinnai, Ryo Shimoyama, Akira Sakai, and et al. 2020. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers 12: 3532. [Google Scholar] [CrossRef] [PubMed]
  24. Haque, Albert, Michelle Guo, Alexandre Alahi, Serena Yeung, Zelun Luo, Alisha Rege, Jeffrey Jopling, Lance Downing, William Beninati, Amit Singh, and et al. 2017. Towards vision-based smart hospitals: A system for tracking and monitoring hand hygiene compliance. Paper presented at the Machine Learning in Healthcare Conference (MLHC), Boston, MA, USA, August 18–19; Available online: http://proceedings.mlr.press/v68/haque17a.html (accessed on 21 October 2021).
  25. Ishimura, Norihisa, Akihiko Oka, and Shunji Ishihara. 2021. A New Dawn for the Use of Artificial Intelligence in Gastroenterology, Hepatology and Pancreatology. Diagnostics 11: 1719. [Google Scholar] [CrossRef]
  26. ISO/IEC 20546: 2019 Information Technology-Big data-Overview and Vocabulary. ISO/IEC JTC 1/SC 42 Artificial Intelligence. Available online: https://www.iso.org/standard/68305.html (accessed on 2 December 2021).
  27. ISO/IEC 20547-3: 2020 Information Technology-Big Data Reference Architecture-Part 3: Reference Architecture. ISO/IEC JTC 1/SC 42 Artificial Intelligence. Available online: https://www.iso.org/ru/standard/71277.html (accessed on 2 December 2021).
  28. ISO/IEC TR 20547-2: 2018 Information Technology-Big Data Reference Architecture-Part 2: Use Cases and Derived Requirements. ISO/IEC JTC 1/SC 42 Artificial Intelligence. 2018. Available online: https://www.iso.org/standard/71276.html (accessed on 2 December 2021).
  29. ISO/IEC TR 20547-5: 2018 Information Technology-Big Data Reference Architecture-Part 5: Standards Roadmap (the Document Includes a Roadmap for Standards). 2018. ISO/IEC JTC 1/SC 42 Artificial Intelligence. Available online: https://www.iso.org/ru/standard/72826.html (accessed on 2 December 2021).
  30. Najdet rak i ob”jasnit: Kak platforma Botkin.AI analiziruet cifrovye snimki i ishhet na nih onkologiju [It Will Detect Cancer and Provide Explanations: How the Botkin.AI Platform Analyzes Digital Images and Screens Them for Oncological Disorders]. 2019. Available online: https://hightech.fm/2019/07/02/botkin-ai (accessed on 15 May 2021). (In Russian).
  31. Kahn, Michael G., Sherry A. Steib, Victoria J. Fraser, and William C. Dunagan. 1993. An expert system for culture-based infection control surveillance. Proceedings of the Annual Symposium on Computer Applications in Medical Care 1: 171–75. [Google Scholar]
  32. Kahn, Michael G., Sherry A. Steib, William Claiborne Dunagan, and Victoria J. Fraser. 1996. Monitoring expert system performance using continuous user feedback. Journal of the American Medical Informatics Association 3: 216–23. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Kaushal, Karanvir, Phulan Sarma, S. V. Rana, Bikash Medhi, and Manisha Naithani. 2020. Emerging role of artificial intelligence in therapeutics for COVID-19: A systematic review. Journal of Biomolecular Structure and Dynamics 10: 1–16. [Google Scholar] [CrossRef]
  34. Laptev, Vasiliy A. 2019. The concept of artificial intelligence and liability for its work. Pravo. Zhurnal Vysshey Shkoly Ekonomiki 2: 79–102. (In Russian) [Google Scholar] [CrossRef]
  35. Legeais, Raymond. 2004. Les Grands Systèmes de Droit Contemporains: Une Approche Comparative [The Major Contemporary Legal Systems: A Comparative Approach]. Paris: Litec. (In French) [Google Scholar]
  36. Liu, B.H., K.L. He, and G. Zhi. 2018. The impact of big data and artificial intelligence on the future medical model. Journal of Life and Environmental Sciences (PeerJ) 39: 1–4. (In Japanese). [Google Scholar]
  37. Machine Learning: Things Are Getting Intense. 2017. Available online: https://www2.deloitte.com/content/dam/Deloitte/global/Images/infographics/technologymediatelecommunications/gx-deloitte-tmt-2018-intense-machine-learning-report.pdf (accessed on 15 May 2021).
  38. Marlicz, Wojciech, George Koulaouzidis, and Anastasios Koulaouzidis. 2020. Artificial Intelligence in Gastroenterology—Walking into the Room of Little Miracles. Journal of Clinical Medicine 9: 3675. [Google Scholar] [CrossRef]
  39. MedWhat. 2021. Available online: http://www.medwhat.com/about-us/index.html (accessed on 22 May 2021).
  40. Meet David Feinberg, Head of Google Health. 2019. Available online: https://www.blog.google/technology/health/david-feinberg-google-health/ (accessed on 15 May 2021).
  41. Microsoft will Invest $20 Million into Combatting COVID-19 with the Help of AI. 2020. Available online: https://news.microsoft.com/ru-ru/ai-for-health-covid-19/ (accessed on 15 May 2021).
  42. Microsoft will Invest $40 Million into AI for Healthcare. 2020. Available online: https://news.microsoft.com/ru-ru/microsoft-ai-for-health-40m/ (accessed on 15 May 2021). (In Russian).
  43. Morhat, Peter. 2017. Artificial Intelligence: Legal View: Scientific Monograph. Moscow: Buki vedi, 257p. (In Russian) [Google Scholar]
  44. Nanox AI. 2021. Available online: https://www.zebra-med.com/solutions (accessed on 15 May 2021).
  45. New Ebola Treatment Using Artificial Intelligence. 2015. Available online: https://www.atomwise.com/2015/03/24/new-ebola-treatment-using-artificial-intelligence/ (accessed on 22 October 2021).
  46. Notice of the State Council Issuing the New Generation of Artificial Intelligence Development Plan. State Council Document. No. 35. 2017. Available online: https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-Artificial-Intelligence-Development-Plan-1.pdf (accessed on 1 October 2021).
  47. OECD Council Recommendation on Artificial Intelligence. 2019. Adopted by the Council at Ministerial Level on 22 May 2019. Available online: https://one.oecd.org/document/C/MIN(2019)3/FINAL/en/pdf (accessed on 1 December 2021).
  48. Official Journal of the European Union. 2000. vol. L 13, pp. 12–22. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2000:013:FULL&from=EN (accessed on 22 September 2021).
  49. Official Journal of the European Union. 2014. vol. L 257, pp. 73–114. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2014:257:FULL&from=EN (accessed on 22 September 2021).
  50. Opinion of the European Committee of the Regions on ‘Artificial Intelligence for Europe’. (2019/C 168/03). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018IR3953&rid=10 (accessed on 15 September 2021).
  51. Our Global Health Initiative. 2021. Available online: https://ada.com/global-health-initiative/ (accessed on 22 May 2021).
  52. Pan-Canadian Artificial Intelligence Strategy. CIFAR. 2020. Available online: https://cifar.ca/wp-content/uploads/2020/11/AICan-2020-CIFAR-Pan-Canadian-AI-Strategy-Impact-Report.pdf (accessed on 15 September 2021).
  53. Pires, Carla. 2021. A Systematic Review on the Contribution of Artificial Intelligence in the Development of Medicines for COVID-2019. Journal of Personalized Medicine 11: 926. [Google Scholar] [CrossRef]
  54. Pusiol, Guido, Andre Esteva, Scott S. Hall, Michael Frank, Arnold Milstein, and Li Fei-Fei. 2016. Vision-based classification of developmental disorders using eye-movements. Paper presented at the International Conference on Medical Imaging Computing and Computer-Assisted Intervention (MICCAI), Athens, Greece, October 17–21; Available online: http://vision.stanford.edu/pdf/pusiol2016miccai.pdf (accessed on 30 September 2021).
  55. Qi, R. J., and W. T. Lyu. 2018. The role and challenges of artificial intelligence-assisted diagnostic technology in the medical field. China Medical Device Information 24: 27–28. [Google Scholar]
  56. Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM/2021/206 Final. 2021. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206 (accessed on 29 October 2021).
  57. Rudnev, Vladimir I. 1994. Hippocrates. Selected Works. Moscow: Svarog. (In Russian) [Google Scholar]
  58. Saudi Arabia’s National Strategy for Data and AI. 2020. Available online: https://www.accesspartnership.com/introducing-saudi-arabias-national-strategy-for-data-and-ai/ (accessed on 15 August 2021).
  59. Simopoulos, Constantinos, and Alexandra K. Tsaroucha. 2021. Artificial Intelligence in Colorectal Cancer Screening, Diagnosis and Treatment. A New Era. Current Oncology 28: 1581–607. [Google Scholar] [CrossRef]
  60. Sodhani, Shagun. 2018. A Summary of Concrete Problems in AI Safety. Available online: https://futureoflife.org/2018/06/26/a-summary-of-concrete-problems-in-ai-safety/ (accessed on 10 December 2021).
  61. Sophia AI Reaches Key Milestone by Helping to Better Diagnose 200,000 Patients Worldwide. 2018. Available online: https://www.prnewswire.com/news-releases/sophia-ai-reaches-key-milestone-by-helping-to-better-diagnose-200000-patients-worldwide-680907791.html (accessed on 22 May 2021).
  62. Technical Committee 164 “Artificial Intelligence”. 2019. Available online: https://www.rvc.ru/eco/expertise/tc164/ (accessed on 15 August 2021). (In Russian).
  63. The Digital Europe Programme for the Period 2021–2027. COM (2018) 434. 2018. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52018PC0434 (accessed on 15 September 2021).
  64. The Role of AI in the Race for a Coronavirus Vaccine. 2020. Available online: https://www.informationweek.com/ai-or-machine-learning/the-role-of-ai-in-the-race-for-a-coronavirus-vaccine (accessed on 22 October 2021).
  65. Thorn, Sophie, Helge Güting, Marc Maegele, Russell L. Gruen, and Biswadev Mitra. 2019. Early identification of acute traumatic coagulopathy using clinical prediction tools: A systematic review. Medicina 55: 653. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Tian, Shuo, Wenbo Yang, Jehane Michael Le Grange, Peng Wang, Wei Huang, and Zhewei Ye. 2019. Smart healthcare: Making medical care more intelligent. Global Health Journal 3: 62–65. [Google Scholar] [CrossRef]
  67. Treaty on the Eurasian Economic Union. 2014. Available online: https://www.un.org/en/ga/sixth/70/docs/treaty_on_eeu.pdf (accessed on 20 October 2021).
  68. 7 Types of Artificial Intelligence. 2019. Available online: https://www.forbes.com/sites/cognitiveworld/2019/06/19/7-types-of-artificial-intelligence/?sh=619e7829233e (accessed on 29 October 2021).
  69. Vasiliev, Anton A., and Dar Shpoper. 2018. Artificial intelligence: Legal aspects. Izvestia AltSU. Legal Sciences 6: 23–26. (In Russian). [Google Scholar]
  70. Wang, Shijun J., and Ronald M. Summers. 2012. Machine learning and radiology. Medical Image Analysis 16: 933–51. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. What is Artificial Intelligence in Medicine? 2021. Available online: https://www.ibm.com/watson-health/learn/artificial-intelligence-medicine (accessed on 15 May 2021).
  72. Xing, Lei, Maryellen Giger, and James Min, eds. 2020. Artificial Intelligence in Medicine: Technical Basis and Clinical Applications. Cambridge: Academic Press. [Google Scholar] [CrossRef]
  73. Zepos, Pan Z. 1962. Quinze années d’application du Code civil hellénique (1946–1961) [Fifteen years of application of the Greek Civil Code (1946–1961)]. Revue Internationale de Droit Comparé 14: 281–308. (In French) [Google Scholar] [CrossRef]
  74. Zhang, Jing, Xun Chen, Aiping Liu, Xiang Chen, Xu Zhang, and Min Gao. 2020. ECG-based multi-class arrhythmia detection using spatio-temporal attention-based convolutional recurrent neural network. Artificial Intelligence in Medicine 106: 101856. [Google Scholar] [CrossRef] [PubMed]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Laptev, V.A.; Ershova, I.V.; Feyzrakhmanova, D.R. Medical Applications of Artificial Intelligence (Legal Aspects and Future Prospects). Laws 2022, 11, 3. https://doi.org/10.3390/laws11010003