1. Introduction
The contemporary world follows the path of technological development and fosters the creation of appropriate tools for digital transformation. A large number of proposed technological solutions have already put the development of artificial intelligence (AI) technologies and their status in legal relations on the agenda. This, in turn, highlights fundamental legal approaches to the regulation of AI and gives rise to discussions of a philosophical and ethical nature, as well as to purely practical, legal, and applied issues.
At the same time, in most cases, scientists are interested in the very legal nature of AI and its perception and definitions [1,2,3]. However, the issue of working out mechanisms for public legal relations is treated with particular care in view of the potential risks of this technology for society.
It should be noted that the issues of regulatory approaches to AI and its implementation are being worked out by various supranational associations and international organizations, such as the EU Committee on AI, OECD, UNESCO, the World Bank, and the International Red Cross (in the context of the military application of AI); these are presented in Table 1.
It should be noted that, in a separate section, the AI Committee and UNESCO highlight an additional principle of ensuring an appropriate level of AI safety for the environment and society as a whole, while the World Bank focuses on developing additional requirements and approaches for the professional community engaged in the development of AI technologies. At the same time, it should be noted that, where the rule-of-law principle is absent, the essential assumption is the prevention of harm from AI.
In addition, we consider it relevant to underline that the basic principle of ensuring technical reliability, as well as the interrelated principles of harm prevention, etc., do not provide for a “red button”. Such a mechanism should be available not only to the developer but also to the supervisory authorities in order to instantly disable AI in the event of potential risks of global actions that would entail the prosecution of developers and other persons within administrative and criminal proceedings.
In this regard, the most pragmatic and applied vision of AI is presented by the ICRC, which focuses on the need to maintain human control over AI (in the military sphere), since its complete autonomy under uncertain scenarios can cause significant damage to the planet and the world as a whole.
Within the framework of our previous studies, we have explored a set of issues of legal regulation of AI in the public sphere of countries of various legal families (Anglo-Saxon, Romano-Germanic, religious, socialist, traditional) [9].
The mentioned analysis considered statutory definitions of AI, the AI-focused responsible authority in a country, specific targets, and strategic plans for AI implementation in the public sphere. Our analysis of the strategic documents of various countries has revealed that most countries within different legal families prefer not to directly fix, at the level of strategic documents, specific goal-setting in relation to AI, the expected results for particular periods (i.e., short-, medium-, and long-term milestones), or the respective measures, as regards public legal relations and administrative law in general.
This finding is in line with other researchers who explore the public sphere for AI through the experience of selected countries and underline the need for some international vision of AI implementation [10,11,12].
Therefore, the comparative study of applied cases of national policies and practices across countries is relevant, as it allows those involved to accumulate and sort out the respective data.
The data in Table 1 reveal the principles that either intersect or stand apart. Thus, harm prevention is absent from the OECD and the World Bank, confidentiality is absent from the Red Cross and the OECD, and the rule of law is considered by UNESCO and the World Bank only.
Further, the data reveal additional unique principles, such as an appropriate level of AI safety for the environment (UNESCO), the development of requirements for developers (World Bank), etc.
Considering these divergent and additional principles, it is possible to suggest that separate additional tools need to be developed to implement them.
Furthermore, it is necessary to bear in mind that there might be varied starting points. In our previous research, we used the type of legal system (Anglo-Saxon, Roman, religious, etc.) and the Global Talent Competitiveness Index (GTCI) ranking of country development. However, we understand that a different approach based on region/continent specifics is also possible.
Therefore, at the current stage of world research and practice, it seems timely to move to a comprehensive consideration of the issue under study.
While setting forth this statement, we consider it necessary to underline that the paper relies on the definition of the public sector as provided by regulative international bodies [13], the research community [14,15], and dictionaries [16]. Following the mentioned sources, we consider the public sector as defined in the SNA (Chapter 19) [13]: the national, regional, and local governments and those institutional units, namely businesses and industries, that are owned or controlled by the government.
The present study sets forth the hypothesis that there might be some global trends regarding the AI phenomenon within international institutional vision, research, and national authorities.
To check the above hypothesis, we consider it relevant to specify the following research questions:
- RQ 1. What are the current research dimensions regarding public functions implementation by AI?
- RQ 2. What are the current practices of public functions implementation by AI across countries of different continents?
- RQ 3. Are there any common/global trends regarding the AI phenomenon within international institutional vision, research, and national authorities?
Considering the above questions, the paper’s objectives are to suggest common measures for countries to ensure the implementation of public functions by AI and to consider these measures within a period framework, thereby providing public authorities with trajectories to regulate AI in terms of its implementation of public functions across countries of different regions. Further, the paper aims to formulate generalized key critical points for introducing AI into the public sphere in different countries.
The paper contributes to the existing body of knowledge in a number of ways.
First, the comparative data regarding the major research dimensions and current practices of public functions implementation by AI across countries of different continents enhances the bank of applied knowledge on the topic under study.
Second, the comparative study enhances the list of issues, and the awareness thereof, regarding the common/global problems of the AI phenomenon within international institutional vision, research, and national authorities.
Third, the author suggests common measures for countries to ensure the public functions implementation by AI at different stages of AI regulation (short-, medium-, and long-term periods) and formulates key critical points for introducing AI into the public sphere in countries of different regions and varied public areas.
The above material contributes to public authorities’ vision and policies regarding the relevant trajectories to regulate the implementation by AI of public functions in countries of different regions.
2. Materials and Methods
The research material integrates analytical reports under the umbrella of international organizations, national governments and authorized agencies, academic research data, legislation, and administrative regulations of national and international status.
The academic sources were collected through the Google Scholar database. The search was conducted with the keywords AI implementation measures, AI use in society, AI implementation plan, and AI implementation strategy. This search initially returned about 2,770,000 results (0.06 s). Nonetheless, we considered it relevant to limit the search to the 2022–2023 period, as AI development is skyrocketing and the data becomes obsolete quickly. This limitation narrowed the result to 16,600 items (0.19 s). Next, the Google Scholar list of titles of academic sources and their brief descriptions was organized as a text corpus to be submitted for computer-facilitated processing. The QDA Miner Lite tool (https://qda-miner-lite.software.informer.com/1.2/ (accessed on 1 March 2023)) was used to identify the most frequent word combinations as thematic codes. The country and region cluster filters were also activated. The digital processing of information revealed duplication of data; therefore, 1318 unique publications were subject to the analysis.
The country and region clustering identified a limited number of nations where research on AI and AI use in society is conducted; this list practically mirrored the OECD list of top states where AI policies and practices are well underway (URL: https://oecd.ai/en/ (accessed on 1 March 2023)). Bearing in mind the hypothesis about the periodization of national strategies and common universal measures regarding AI policies and their implementation into the public sphere, the above-mentioned 1318 sources were subject to additional automated text annotation with the keywords “period” (with mutually replaced and combined details: short-term, mid-term, long-term) and “common measures” (with mutually replaced and combined details: supranational, regional, international, continental). No region- or continent-based clustering was provided by the system. Meanwhile, the clustering covers common spots in terms of areas of AI use (medicine, social services, social security, commerce, law, courts, etc.). The present paper cites over 70 sources that support the author’s statements and conclusions. The selection of sources for inclusion in the reference list of the present article is based on their top-down ranking in the QDA Miner Lite tool by the frequency of the thematic codes present in the respective texts.
Regarding the selection of countries to explore national policies and practices in the field of AI implementation in the public sphere, we have already mentioned earlier in this section that we took into account the OECD Artificial Intelligence Policy Observatory (https://oecd.ai/en/ (accessed on 1 March 2023)), which is known as the top world source on AI data. While exploring the national practices of AI implementation in the public sphere, we also relied on the Global Talent Competitiveness Index (GTCI), which ranks states according to their economic and technological development, including AI (URL: https://www.insead.edu/ (accessed on 1 March 2023)). The countries with the highest GTCI [17] were considered primarily relevant for the analysis. The data regarding their practices in the field of AI implementation in the public sphere were taken from the official sites of the respective governmental bodies and agencies. The data were verified by the end of December 2022, when the comparative study was finished.
Similar or duplicate documents having been excluded, the final scope of sources for consideration resulted in 412 items; about 40 of them are mentioned in the reference list to support the evidence on the issues under study. The selection of sources for inclusion in the reference list of the present article is based on their top-down ranking in the QDA Miner Lite tool by the frequency of the thematic codes present in the respective documents.
The unique country-affiliated documents on AI implementation into the public sphere were structured into a text corpus. The corpus was processed with the QDA Miner Lite tool (https://qda-miner-lite.software.informer.com/1.2/ (accessed on 1 March 2023)). The filters marked the country and region. The thematic codes were introduced in line with the research hypothesis on common measures and the periods of their implementation, and the codes for the areas of AI implementation in the public sphere were also used (medicine, social services, social security, commerce, law, courts, etc.). The author initially used predetermined codes in line with the above-mentioned topics and entered them into the system. The parameters of word frequency ranking were also activated in the system. The combination of these tools was used as an instrument for classifying the text corpus in terms of the major semantic markers of the text parts. Thus, each item of the text corpus describing real cases, procedures, academic research topics, etc. was marked by a particular topic with reference to the country/institution/public sector field, etc. Such a data structure allowed the author to identify country-specific practices, organization-affiliated research trends, public sector field-specific issues/biases, and challenges.
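For illustration only, the sketch below approximates the thematic coding step in Python: predetermined codes are matched against each country-affiliated document and then ranked by frequency. The code list, marker words, and sample documents are hypothetical and do not reproduce the actual QDA Miner Lite setup.

```python
# Illustrative sketch of the thematic coding step: predetermined codes (public-sphere areas)
# are matched against each country-affiliated document and ranked by frequency.
# The codes, marker words, and sample documents are assumptions for illustration.
from collections import Counter, defaultdict

THEMATIC_CODES = {
    "medicine": ["health", "medical", "hospital", "patient"],
    "social services": ["social service", "welfare", "benefit", "subsidy"],
    "social security": ["social security", "pension", "unemployment"],
    "commerce": ["commerce", "trade", "market", "tax"],
    "law and courts": ["court", "justice", "judicial", "law enforcement"],
}

def code_document(text):
    """Return the set of thematic codes whose marker words occur in the text."""
    text = text.lower()
    return {code for code, markers in THEMATIC_CODES.items()
            if any(m in text for m in markers)}

def rank_codes(documents):
    """documents: iterable of (country, text); returns per-country code frequencies."""
    by_country = defaultdict(Counter)
    for country, text in documents:
        by_country[country].update(code_document(text))
    return by_country

sample = [("Colombia", "AI scoring of subsidy recipients and tax compliance"),
          ("Estonia", "Satellite monitoring of subsidy compliance for agricultural land")]
for country, counts in rank_codes(sample).items():
    print(country, counts.most_common())
```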
The research methodology stands within the qualitative research that has been traditionally accepted for legal administrative studies [18]. Scholars agree on the benefits and relevance of qualitative analysis within legal research as it explores “things in their natural settings, understand and interpret their social realities, and provide inputs on various aspects of social life” [19].
The research methods were specified in line with the goal and hypotheses of the study. The study was conducted on the grounds of the general scientific dialectical method, which made it possible to consider AI implementation in the public sphere from the angle of its regulation variability and the trends in the subsequent development of the regulation of this technology.
The analysis was carried out within a comparative legal paradigm to identify current practices of public functions implementation by artificial intelligence in different countries. The study also used formal logic tools, including description, comparison, analysis, and synthesis, thanks to which measures were identified to ensure an integrated approach to the regulation of AI in the public law field.
The testing of the hypothesis was conducted in the course of the coding procedure, and the results of clustering activities are presented in tables across the text of the paper, as supported by the elaboration of the results in the discussion section.
3. Results
This section follows the research questions and respective tasks: it considers current research dimensions regarding public functions implementation by AI, explores current practices of public functions implementation by AI across countries of different continents, and summarizes common/global trends regarding the AI phenomenon within international institutional vision, research, and national authorities.
3.1. Current Research Dimensions
The review of current research dimensions reveals that they integrate analytical expert reports and academic publications of doctrinal legal studies. They both confirm that there are a large number of separate attempts to cover the issues of regulation of AI within the scientific and applied framework. This section incorporates data on analytical reports and academic research.
3.1.1. Review of International Analytical Expert Reports
The results of the analysis are specified in Table 2, which outlines the topics, their universal or region-dominated character, and the relative percentage of mentions in the research text corpus.
The author considers it important to provide comments on the table data.
First, there are comparative studies at the international level, which are conducted to rank countries according to the level of technical and legal readiness to integrate AI into the field of public relations [20,21].
Next, there are separate reviews prepared by specialized law firms on applied issues of AI regulation in the context of each specific country where they provide their legal services (i.e., Singapore [22,23], the UK [24], etc.).
Furthermore, some reports focus on a particular issue. One notable example is a report prepared by a group of specialists from the United States on the basic regulation of AI in various countries of the world; it is of a pinpoint nature, highlighting a series of definitions for various countries regarding the regulation of military AI and unmanned vehicles [25].
These reports, as well as other specialized contributions, are of a general nature, aimed at understanding local regulation or at identifying leaders in individual methodological approaches.
Additionally, we have to take into account the current attempts to identify regional features of AI regulation by highlighting common and differentiating aspects of AI regulation and implementation in the sphere of public relations with reference to particular regions.
Thus, the regional overview of Latin America is introduced in the report prepared by a group of experts from the Inter-American Development Bank. This report examines the phenomenon of AI in 12 key countries in Latin America in the context of public services, academic research, the functioning of the ecosystems involved in AI, and the readiness of the community for this technology. As part of this report, the national strategies of these states, the legal status of AI, and, in particular, the existing AI services for public needs and their regulation were studied. This report has paved the way for the basic conclusion that the conditions for technology success will depend on:
Development of a shared vision with which to align the efforts and actors of the AI ecosystem;
Delivery of digital infrastructure facilitated by governments in association with the private sector;
Development of local talent and research on relevant issues;
Adoption of AI by civil society to advance its goals;
Decision-making that places humans at the center of every AI-related conversation and activity;
Strengthening of the entrepreneurship ecosystem;
Respect for the ethical framework and guidelines for developing and adopting AI.
However, the study has revealed a significantly uneven readiness for AI technologies on the part of the state, both at the level of public services and at the level of ecosystems and academic research. Furthermore, this study does not contain a specific action plan for the region in terms of the levels of technology readiness and the timing of their implementation [26].
In the context of the regional overview of the EU countries, it is relevant to consider the scientific report prepared by a special unit of the European Commission. The report stands on the study of 230 initiatives of AI use in EU Member States and an analysis of their key features in terms of technology and value drivers.
The applied review of the use of AI technologies in the EU and their public services also contains the analysis of academic and scientific literature on AI integration into the public sphere.
Further, the report provides an in-depth classification of AI services used in the EU public sphere, the procedure for their legal registration, and the law enforcement practice in relation to developers and operating organizations (public persons); social and political aspects were analyzed as well.
As part of this study, the experts came to the conclusion that the practice of integrating AI into the EU public sphere is very varied. The report authors also underlined the need to actively introduce additional tools for analyzing the effectiveness of these technologies, the importance of analyzing certain industry specifics of using AI for public purposes (chatbots), the necessity of providing human-oriented services based on AI, the relevance of using public procurement to fund innovation and ensure reliable AI, and the obligation to protect fundamental rights in AI-powered public services and defend social infrastructure.
It is noteworthy that this study also does not contain a specific action plan for the specified region in terms of technology readiness levels and the timing of their implementation, and it also notes the significant competitive advantage of the United States and China [27].
In the context of the Asian region, the issues of AI integration into the sphere of public relations are not on the current agenda, and experts consider certain issues of regulating legal relations in the field of copyright protection [28] and the commercial potential of the technology in general [29].
When turning to the African region, we should take into account the AI4D and APRI reports.
AI4D specialists, having conducted a comparative study of the current regulatory policy, identify a high degree of technological readiness for the integration of AI into the field of public relations in Kenya, Nigeria, and South Africa, in the complete absence of special AI policy tools on the African continent [30].
APRI experts note similar problems of the lack of a regulatory framework for AI technology and its implementation in the field of public relations, as well as potential abuse by commercial companies within the specified region [31].
Table 2. Major topics of international analytical expert reports.
| Topic | National Strategies and Analytical Reports (Examples) | Percentage | Note |
|---|---|---|---|
| AI regulation (general legislation and specified laws) | Analytical reports of Singapore [22,23], the UK [24], Africa [30,31] | 35% | Universal topic for all countries |
| Ethical framework and guidelines for developing and adopting AI | Inter-American Development Bank [26] | 24% | Primarily EU, US, and Asia |
| Plan of integrating AI into the public sphere | Analytical reports [27,28,29] | 21% | Universal topic for all countries |
| Technical and legal readiness to integrate AI | AI Index [20], Stanford University [21] | 10% | Universal topic for all countries |
| Definitions regarding the regulation of military AI | ICRC [8] | ~5% | Primarily EU, US, and Asia |
| Others | (labor market, IP, TRL, etc.) | <5% | Universal topic for all countries |
Regarding the Middle East and North America, it is not possible to single out comprehensive regional reports on the public use of AI and regulatory approaches, since most reports either rely on global analysis (as indicated at the beginning of this block) or specialize at the level of national strategies.
Thus, the current approaches to and regulation of AI at the global, regional, and national levels are extremely varied and heterogeneous, while, at the level of studies and reviews, the issues of predicting the further integration of AI into the sphere of public legal relations, of regulating these legal relations, and of developing specific rules and approaches that take into account the different degrees of technological readiness of the technology (narrow, general, and super AI) remain open.
3.1.2. Review of Doctrinal Legal Studies
The results of the analysis are summarized in Table 3, which outlines the topics and the relative percentage of their presence in the doctrinal legal research mentioned in the research text corpus.
The author considers it important to provide comments on the table data.
Our analysis reveals that, at the level of theoretical doctrinal research, isolated studies of the experience of specific countries in terms of the use of AI are currently quite widely represented. Moreover, today’s academic studies have shaped a number of research trends; namely, the use of AI for the purposes of law enforcement, antimonopoly regulation, personal data protection, and administrative law should be singled out as established trends.
It should be mentioned that the theoretical issues of using AI to solve law enforcement problems began to be considered more than two decades ago [32].
At present, in relation to this area, it seems relevant to mention the research of Rademacher, who notes the need to answer three basic challenges associated with this technology: the formation of regulatory requirements in terms of accountability for employees using AI; the use of AI to overcome discriminatory practices of police officers; and the need for a balanced combination of the rule of law with freedoms and personal human rights in the application of AI [33].
The issues of improving the predictive and risk-oriented activities of bodies in the framework of criminal justice are also explored by a number of other researchers [34,35].
The use of AI in the antimonopoly sphere is mainly examined through specific cases. Thus, Bonin and Malhi examine in detail the phenomenon of abuse of the dominant position by Google Corporation and the administrative actions of the European Commission. The scholars noted the usefulness of AI in processing large volumes of data and in pattern recognition, especially against the backdrop of a rapidly digitizing European economy, which opens significant opportunities for the future of competition law enforcement in Europe [36].
In terms of personal data protection and AI technologies, it is important to take into account the work of those scientists who explore how an integral part of AI (ML models) can be recognized as personal data in accordance with European data protection law. Scholars argue that many socio-technical problems related to AI are not fully addressed through regulations such as the GDPR, which is the result of the slow evolution of definitions and issues [37].
The implementation of AI in the field of administrative law and the positioning of AI as a public entity are covered by a large number of studies that analyze mainly the practices of specific countries.
Corvalán (Argentina) recognizes the influence of the quality of ICT technologies on the implementation of regulatory policy, notes the need to improve Argentina’s ICT regulatory policy on the grounds of best international practices, and argues for the need for balanced financing and regulation of AI in the context of different regions of the country [38].
In relation to Canada, scientists investigate AI decision-making problems from the standpoint of observing the principles of administrative justice, eliminating discrimination in these procedures, and developing universal legislative and law enforcement principles regarding AI for the purposes of administrative law [39].
Canadian scholars also specify the need to improve legislation in the field of personal data protection (GDPR) by increasing penalties, the importance of increasing the transparency of AI activities through the mandatory publication, in the public domain, of where and how AI is used by the state, and the demand to fix the mandatory right to have an AI decision reviewed by an employee of the department on whose behalf the program acts [40].
U.S. researchers specifically identify the need to introduce additional requirements for AI developers who are involved in the creation of software for the purposes of public authorities. As noted in the respective studies, judicial and administrative practice in relation to AI involved in public law requires the formation of additional principles of accountability for participants in the process and the maintenance of manual control of AI until it matures into a fully autonomous, transparent, and accountable system, both for the purposes of administrative law and for the administration of justice in court [41].
Standing on judicial practice, as well as using theoretical approaches to the formation of predictive justice, French researchers formulate proposals for the purposes of AI-facilitated administrative justice and predictive justice in France: every AI that operates in the legal field should have identified developers so that the results of its activity can be integrated into the adversarial process in the same way as other types of evidence; the nature of data processing and the calculation of AI indicators should be made public; there is also a strong requirement for the publication of the sources, nature, and architecture of the data used to train the algorithms; and the publication of the areas of application of AI and of the limiting contour of the impact of said technology on the litigation should be a must as well [42].
Among the studies of the integration of AI in the field of public relations in Germany, it seems relevant to note the following trends: the regulation of fully autonomous decision-making for the purposes of administrative law and the quality of machine learning technology.
Thus, Finck (2020) sets forth a number of basic principles of AI regulation for the purposes of public law in Germany, namely the formation of a single team of AI developers and relevant tools for interaction within all branches of government in Germany, the regulation of additional procedures for the technical audit of AI, and the observance of ethical principles at the stage of AI development [43].
Hermstrüwer identified the imperfection of machine learning technology for law enforcement purposes in Germany from the point of view of administrative law. Among the basic problems are the sampling of administrative cases for AI and the need to form an implicit sample of cases in order to minimize information noise, the formation of unregulated zones due to the behavioral adaptation of offenders, as well as other technical traps for AI in the analysis of administrative cases [44].
Buscema and Tastle underline that, despite the wide scope of AI applications and the already established practice of AI use, several issues still need to be addressed: the transparency of AI decisions, the need to form compensatory legislative and law enforcement approaches to AI to ensure its technological security, and the formation of legal institutions that prevent AI from encroaching on constitutional human rights and ensure respect for the rule of law [45].
Table 3. Major topics of doctrinal legal studies.
| Topic | Doctrinal Research in AI Integration (Examples) | Percentage |
|---|---|---|
| Bias, discrimination, human rights | Universal scientists (public law, civil law, criminal law, and so on) [36,39,45] | 31% |
| GDPR | Scientists (public law, civil law) [37,40,45] | 23% |
| Predictive law | Scientists (public law, criminal law) [34,35,42,44] | 15% |
| Additional requirements for AI developers | Scientists (public law, civil law) [41,43] | 11% |
| Law enforcement | Scientists from police and other fields [32,33,44] | 7% |
| Technical regulation | Scientists (public law) [43] | 6% |
| Others | (legal personality, IP, insurance, etc.) | <7% |
In general, the review of doctrinal legal studies reveals that a comprehensive and specific set of tools for the implementation of AI in the sphere of public legal relations is not considered with reference to possible periods of planning the strategic activities of the state. Doctrinal legal research follows the tradition of generalized studies based on the current level of AI (highly specialized AI or machine learning). No groundwork has been laid so far to control the development of the technology itself, taking into account the emergence of general AI (the established terminology) and beyond it (super AI).
3.2. Current Practices of Public Functions Implementation by AI across Countries of Different Continents
The research results reveal that the current practices of public functions implementation by AI across countries and world regions vary regarding particular sectors, regulations, and facilities/services/software systems. This section starts with the descriptive materials, which are further summarized in Section 3.3 in table format.
3.2.1. Current Practices in Latin American Countries
Colombia actively uses a system for identifying and classifying potential recipients of social subsidies [46], which is formed on the basis of primary data and produces a socio-economic rating of citizens. The system uses a Quantile Gradient Boosting machine learning model to identify potential recipients of social assistance by assessing the “well-being” of a person on a scale from 0 to 100. Further, based on this analysis, representatives of a state organization make the final decision on the opportunity for a particular person to receive financial support. An additional interesting development in the service of the Colombian authorities is the KBoot program, whose creation was prompted by the increasing number of online sales without the corresponding income being declared to the tax authorities.
Initially, the Treasury of Medellin handled this problem “manually”, but exponential growth proved the need to algorithmize and digitize this work of civil servants with the help of AI. The robot collected data aggregated on the Instagram platform [47], namely keywords, names, phone numbers, users subscribed to pages, the number of messages, etc. As a result of cross-checking and related inquiries to the city’s telephone operators, over 2.6 thousand people were identified as conducting trade through advertising on Instagram, while only 453 of them were registered with the Treasury. Subsequently, these persons were included in the state program to support small businesses in order to legalize their business activities [48].
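Returning to the welfare-scoring system mentioned above, the following sketch shows, under stated assumptions, how a quantile gradient boosting model could produce a 0–100 score with scikit-learn; the features and synthetic data are hypothetical and do not reproduce the Colombian system.

```python
# Illustrative sketch only: a quantile gradient boosting regressor producing a 0-100
# "well-being" score, in the spirit of the Colombian system described above. The feature
# names and synthetic data are hypothetical; this is not the actual government model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical household features: income, household size, dwelling quality index
X = rng.normal(size=(500, 3))
y = np.clip(50 + 20 * X[:, 0] - 5 * X[:, 1] + 10 * X[:, 2] + rng.normal(scale=5, size=500), 0, 100)

# loss="quantile" with alpha=0.5 fits the conditional median of the score
model = GradientBoostingRegressor(loss="quantile", alpha=0.5, n_estimators=200)
model.fit(X, y)

scores = np.clip(model.predict(X[:5]), 0, 100)
print(np.round(scores, 1))  # a human case worker would still take the final decision
```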
In order to optimize the activities of the Department of Industry and Trade, which is responsible, among other things, for regulating industrial property issues, AI has been introduced to analyze patent applications and issue recommendations on technology classification [49].
Argentina has a special Laura system that has taken over a number of tasks of the Ministry of Finance civil servants responsible for checking pension contributions. Laura automatically connects to the ANSES database and reconciles the data of a potential recipient of pension contributions with his/her salary and other information that affects the assessment of the pension payment amount. In addition, further reconciliation is carried out to consider potential benefit opportunities for the recipient and to identify fraud or errors affecting pension payments [26].
Brazil, the regional leader in economic terms [50], operates a similar AI, also named Laura. However, it performs a different function [51]. This AI is aimed at solving medical issues, including those associated with sepsis. Through remote monitoring, the AI automatically determines the condition of patients and, in the event of a fundamental deterioration, issues an imperative prescription for an on-site team of doctors to visit the patient. The effectiveness of this AI activity is undeniable, since it saves 12 lives every day. In the context of financial control, the Brazilian authorities, represented by the Administrative Council for Economic Defense [52], use AI to analyze competition in critical market areas [53]. Through the use of advanced market and price analysis mechanisms, additional cartel practices have been identified that affect gas prices in the country [54].
3.2.2. Current Practices in North American Countries
As far as the approaches to the regulation of AI in public relations in the United States are concerned, it should be noted that AI is planned for use in various areas of public relations, such as the analysis of credit reporting [55], analysis in the field of labor relations [56], the military-industrial complex of the country, etc. [57].
In terms of legislative initiatives, the Washington state legislature, represented by Senator Bob Hasegawa, has introduced a bill [58] that establishes new rules for government departments regarding the use of automated decision-making systems. If passed, government agencies in Washington state would be prohibited from using automated decision-making systems that discriminate against various groups or make final decisions that affect the constitutional or legal rights of Washington residents.
In New York City, a law has been passed banning AI recruitment systems that fail annual audits for discrimination based on race or gender. The legislation imposes fines on employers or employment agencies of up to $1500 for each violation [59].
Particular attention is paid to AI technology for processing the personal data of citizens (in particular, face recognition). In the state of Alabama, a direct ban on the use of this technology by authorities in the framework of criminal prosecution has been introduced [60]. There is a similar practice in the states of Colorado [61], Virginia [62], etc. [63]; at the federal level, a similar attempt is being worked out by a relevant committee, but so far the final version of the document has not been submitted for consideration.
In the state of Illinois, when conducting an interview with an applicant with the use of AI technology, companies must report on the work of the AI mechanism and the decisions made to the State Department of Commerce and Economics [64].
The Michigan Unemployment Insurance Agency designed, built, and implemented MiDAS to automatically detect unemployment insurance fraud. The specified AI determined the possibility that fraud had been committed and, on behalf of the agency, sent a corresponding letter to the applicant, with the subsequent recovery of benefits, without an appeal or other tools to challenge the decision of the AI [65].
The result of the litigation over this technology was that the developers were held responsible for the AI’s mistakes and flaws [66].
A similar approach has been formulated in Canada, where, in accordance with the Financial Management Act, the Canadian Treasury Board issued the Automated Decision-Making Directive, which came into force on November 26, 2018 [67]. This directive defines the key responsibilities of federal agencies that use AI-based decision-making systems and specifies the key requirements for AI transparency, its accountability to a government authority, etc.
Considering the experience of algorithmic decision-making for the purposes of public administration in Canada, the case of Ewert v Canada [68] should be highlighted. In this case, the Supreme Court of Canada ruled on the use of actuarial risk assessment tools in the context of correctional facilities. This phenomenon is part of the counterpart practice used in the U.S. (COMPAS, Traverse City, MI, USA [69]). The defendant challenged the use of algorithmic risk assessment tools to make decisions about his prison needs and risk of recidivism. As a legal position in the framework of the administrative appeal regarding the illegality of using the specified algorithm, the defendant emphasized that the program had been trained on non-indigenous populations and that there were no studies confirming its applicability to indigenous peoples (paragraph 12 of the case). He subsequently filed in federal court, alleging that the tests violated his rights to equality and due process under the Canadian Charter of Rights and Freedoms [70] and the Corrections and Conditional Release Act [71], in the context of non-compliance with the provisions of Article 24.1, which provides for the obligation of the authorities to use all reasonable measures to obtain the most accurate, up-to-date, and complete information about the defendant.
The case does not explicitly mention algorithmic decision-making, but it addresses the issue of the data used to develop and train an algorithm, or the assumptions encoded in it, creating biases that can lead to inaccurate predictions about people. The court noted that, since the authority was aware of the concerns about psychological and algorithmic tools demonstrating prejudice (paragraph 49), it had the duty to study how the tools affect cultural groups and to verify their validity. As a consequence, the court partially granted the defendant’s claims (paragraph 90).
3.2.3. Current Practices in EU Countries
It should be noted that it is already possible to conduct a centralized analysis of the services used by the EU member states thanks to the intelligent system AI-X [72].
In the Estonian Department of Agriculture, AI is used to determine whether agricultural land has been mowed or not, using intelligent image processing and geomonitoring principles. This SATIKAS system [73] uses deep learning methods and high-precision neural network approaches to analyze satellite data coming from the European COPERNICUS program. These data are analyzed together with the reference data of the Estonian Meteorological Service. Such interest in this technology in the context of public authority is due, first, to the fact that mowing or grazing is one of the key and frequently applied requirements in the framework of subsidy granting and, second, to the extremely high rates of non-compliance with this requirement. This program replaces manual verification and recording of subsidy compliance status, thereby minimizing the risk of negligence (remarkably, only 5% of the total sample referred to on-site subsidy compliance checks) [20].
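For illustration, the sketch below outlines, under stated assumptions, a small convolutional classifier of the kind that satellite-based systems such as SATIKAS could use to label parcels as mowed or not mowed; the architecture, band count, and patch size are hypothetical and are not taken from the Estonian system.

```python
# Illustrative sketch only: a small convolutional classifier for "mowed vs. not mowed"
# satellite image patches. Architecture, band count, and patch size are assumptions.
import torch
import torch.nn as nn

class MowingClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 4 bands, e.g. RGB + near-infrared
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # classes: mowed / not mowed

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MowingClassifier()
patch = torch.randn(1, 4, 64, 64)        # one hypothetical 64x64 satellite patch
probs = torch.softmax(model(patch), dim=1)
print(probs)  # low-confidence parcels would still be reviewed on site
```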
Regarding the Netherlands’ experience, it should be noted that they used the SyRi system to detect social security fraud [74]. This system was developed in 2014 in order to ensure the exchange of data between different institutions. However, this type of AI faced serious resistance from society. The UN Human Rights Rapporteur also expressed concern about the use of SyRi, as it could pose a serious threat to human rights [75]. The final position regarding this AI was expressed by the Dutch court at the beginning of 2020, when it ruled that the use of SyRi by civil servants did not comply with Article 8 of the European Convention on Human Rights [76]. The reasoning of the court was based on the fact that the AI algorithm was insufficiently transparent and verifiable; in addition, the collective economic interests in the field of combating fraud did not sufficiently outweigh the social interests in the field of confidentiality.
The Ministry of Labor and Social Policy of Poland faced a similar negative experience. As a result of a reform, the Ministry was supposed to provide an automated solution that would simplify the analysis of the labor market without increasing the number of employees in the department (and without increasing the budget) [77]. As a result of this initiative, three categories of the unemployed were identified, taking into account individual characteristics. As part of the intellectual processing of the data (initial interview, testing, etc.), citizens were categorized to determine the kind of support they could receive from the state (employment, professional retraining, benefits, etc.). At the same time, in a number of cases, a binary decision was made on whether to provide state support in full or not [78]. The specified program showed that in almost 100% of cases, the responsible representative of the state body agreed with the program’s recommendations; however, this situation could hide a negligent attitude, reflect the objective overload of employees, or indicate their belief in the accuracy of the AI result (a delegation of responsibility to it).
This approach caused a negative reaction from society. Finally, following the monitoring measures of the Supreme Control Chamber of Poland and the Commissioner for Human Rights, the Constitutional Court of Poland recognized this AI product as unconstitutional [79].
In order to curb the growing abusive practice of false police reports in Spain [80], the Spanish National Police implemented the VeriPol artificial intelligence system to detect false police reports. This AI product was developed in collaboration with Cardiff University and Charles III University of Madrid. As part of this work, 1122 reports were provided to the scientists for AI training, including 534 correct and 588 false reports [81].
This AI uses natural language processing and machine learning to identify the basic patterns that provide grounds for classifying a report as false. Within the pilot testing, the AI identified that, for the most part, the false reports contained short statements that described a stolen object and did not provide any details of the crime, the suspected criminal, or the attack itself.
In January 2019, the use of VeriPol by the Spanish police revealed 64 false reports in just one week; in 80% of these cases, the follow-up actions of the police officers confirmed the inconsistency of the reports.
In addition, an anonymous survey of employees showed that the VeriPol system was useful and easy to use, but should include more features for detecting other forms of crime within police units (similar to internal security).
Additional add-ons for the system could include a polygraph, as well as other ways to obtain data on how people falsify reports or lie.
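For illustration only, the following sketch shows a text classifier in the spirit of VeriPol, trained to flag reports whose wording resembles known false reports; the toy examples and pipeline are illustrative assumptions, not the actual VeriPol model or its training data.

```python
# Illustrative sketch only: a text classifier in the spirit of VeriPol. The toy training
# examples are invented; the real system was trained on 1122 labelled police reports
# and uses its own feature set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "My phone was stolen.",                                           # short, no details
    "Bag snatched somewhere downtown, no idea when.",                 # short, no details
    "At 18:30 on Calle Mayor a man in a red jacket grabbed my bag and ran towards the metro.",
    "Two men on a scooter pulled my phone from my hand at the bus stop on Tuesday evening.",
]
labels = [1, 1, 0, 0]  # 1 = false report, 0 = truthful report (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)

new_report = "Wallet stolen, I do not remember anything else."
print(clf.predict_proba([new_report])[0, 1])  # probability the report is false; an officer still verifies
```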
3.2.4. Current Practices in African Countries
While considering AI use in the public sphere of the African continent, it should be noted that no country has specific legislation on AI [82], although Mauritius has partial legislation on AI [83]. At the same time, only 30 countries address data protection related to automated decision-making systems, and four countries have a national AI strategy [84]. It should be taken into account that many AI services, including those in the context of public functions, are provided by large corporations or start-ups [85] on the basis of concession agreements. However, this approach raises a number of fundamental questions at the level of judicial practice in these countries [86].
This approach also creates significant risks for the governments of these countries, especially in terms of data collection and processing. The above 30 countries have actively integrated legislation on the protection of personal data into their practice only in the last few years (including Nigeria, Kenya, Rwanda, and Uganda [87]).
At the same time, the very introduction of AI into the state apparatus faces fundamental difficulties of a human (rather than legal) nature: thus, the intelligent recruitment model, which could potentially optimize the process of hiring for the civil service in South Africa, is currently developing slowly because it leaves no opportunity to select the specific candidates in whom the representatives of the employer are really interested [88].
In the context of the provision of medical services, all conclusions formulated by AI must be accepted by the attending physician [89], the patient must sign a document confirming consent to the provision of the specified AI-facilitated medical service [90], and the service provider (developer) must provide all information on how and under what conditions the person will be provided with AI-facilitated medical services. This approach, as well as the outdated regulatory framework, offsets all the benefits of AI both in South Africa [91] and in the region as a whole.
3.2.5. Current Practices in Arab Countries
As part of strengthening the digitalization of state functions in the UAE, a number of separate regulatory legal acts have been issued, which fix the special role of the city of Dubai [92]. In order to form a regulatory sandbox on the territory of Dubai, a mechanism has been launched for licensing companies that carry out developments in the field of AI, which provides special conditions for obtaining visas by company employees [93].
Regarding AI regulation in the field of public relations, two examples can be distinguished in the field of healthcare. The Dubai Health Authority has selected four companies that offer telemedicine services and other mechanisms for the predictive analysis of a patient’s condition [94], while all technologies used within the framework of this initiative must have emergency notification mechanisms in case of failures, comply with the conditions of confidentiality and transparency, and undergo mandatory certification [95].
Similarly, the Dubai Roads and Transport Authority has signed a partnership agreement with SWIM.AI [96]. The specified service specializes in the use of digital twins and artificial intelligence to optimize data and, thereby, supply chains within the city. These technologies do not explicitly represent an automated function of civil servants but perform a narrow task set for commercial organizations within narrowly specified parameters.
As far as the Kingdom of Saudi Arabia is concerned, “Vision 2030” can be defined as the first government document related to the formation of a strategy for digital transformation and the prerequisites for the development of AI. This document forms a long-term plan of economic reforms to stimulate new industries, diversify the economy, simplify public–private business models, and, ultimately, reduce the country’s dependence on oil revenues [97]. Furthermore, a periodic report is published on the achieved qualitative and quantitative indicators in the areas enshrined in the mentioned document [98].
As part of the implementation of the tasks to maximize the potential of the Kingdom of Saudi Arabia, the Saudi Arabian Data and Artificial Intelligence Authority (SDAIA) has been established. SDAIA’s primary mission is to support and advance the Kingdom’s data and artificial intelligence program, and its vision is to position the state as a world leader in the elite league of data-driven economies. SDAIA includes several subsidiary bodies [99].
Regarding future prospects, the mentioned authority, in cooperation with the Ministry of Communications and Information Technology, has prepared a national strategy in the field of information and AI [100]. This document is remarkable in that it has both a general strategic nature for assessing the effectiveness of AI and specific indicators that the Data and Artificial Intelligence Authority is responsible for achieving by 2030:
40% of the total number of employees trained in basic skills of working with data and artificial intelligence;
15,000 local data and artificial intelligence specialists;
5000 data and artificial intelligence experts;
a top-10 position among countries in the open data index;
a top-20 position among countries in peer-reviewed KSA publications on data and artificial intelligence;
a high degree of elaboration of legislative aspects, etc. (issues of investment in AI).
Meanwhile, at the time of the preparation of this study, there was no legally fixed definition of AI in the Kingdom of Saudi Arabia.
Currently, the only evidence of the recognition of the rights of robots with artificial intelligence can be traced to a 2017 statement at the Future Investment Initiative event, when the humanoid robot Sophia, developed by Hanson Robotics, received citizenship from the Kingdom of Saudi Arabia [101]. The development of the robot was carried out with the aim of training and adapting it to human behavior for subsequent interaction with people. Robot Sophia is the first robot in world history that has citizenship and is endowed with the status of a subject of law equal to a person. Detailed information regarding the consequences of this case has not been disclosed. Many experts have criticized the decision of the government of the country, arguing that “it is wrong to grant citizenship to a robot in a situation where human rights are violated” [102].
3.2.6. Current Practices in China
Considering the practice of AI use in public relations, it is necessary to note the experience of Shanghai (China). The Shanghai Artificial Intelligence Traffic Authority (SAITA) is capable of not only setting traffic rules but also putting the legal regulations into practice.
Programmers have created a system capable of changing traffic rules, speed limits, signage, and lane configurations, i.e., virtually every element of traffic regulation in a city [103]. When traffic violations are detected, the AI automatically cuts the violator off from the traffic flow by placing barriers at intersections, imposes administrative fines, or initiates criminal liability for serious offenses. At the same time, in the course of its activities, the AI has both made mistakes and progressed in self-training, introducing a progressive scale of fines that takes into account the social danger of the offense.
In the case of especially malicious violators who bypassed the traffic flow on the side of the road, the AI applied a “pursuit” mechanism, which eventually resulted in 90 days in prison for the offender. At the same time, the experience of introducing AI in traffic regulation found a positive response and began to be actively applied in other regions of China, showing a significant improvement in the traffic situation [104,105]. This approach shows that an evolving AI system that combines the administrative functions of various departments, given appropriate technical capabilities, can show better results than a person. In addition, granting it the function of local lawmaking in terms of traffic rules has proved viable.
At the same time, in the context of regulation itself, the approach of China’s public authorities was formulated back in 2018, when the Ministry of Public Security and the Ministry of Transport jointly issued a set of rules for testing autonomous vehicles in China [106]. These rules contain requirements both for the cars themselves and for test drivers, while directly fixing the driver’s obligation to sit in the driver’s seat during the test and be ready to take control of the car at any time (Articles 6.7 and 13.18 of these rules). For these vehicles, appropriate temporary licenses and corresponding temporary license plates are issued [107].
3.2.7. Current Practices in Russia
In the Russian Federation, the National Center for the Development of Artificial Intelligence is the curator of AI implementation projects in the public sector [108]. It is assumed that the new structure will accompany the development of the national portal in the field of AI (ai.gov.ru (accessed on 20 May 2023)). It will be used to select AI solutions for business, science, and the state; it will monitor the development of AI and examine documents on the regulation of this area. In addition, the center will be responsible for supporting the implementation of AI in industries and the public sector, as well as for compiling an index of readiness for the implementation of this technology, and it will act as the main platform bringing together authorities, business, and science to effectively solve the existing problems of developing artificial intelligence in Russia. The introduction of AI into the system of government is planned for 2024.
The overall analysis of current practices regarding AI implementation into the public sphere in countries across continents reveals that common trends can be identified regarding the areas of AI-related policies and societal activities. However, no common measures with reference to planning and its periods can be identified so far.
3.3. Identifying Key Global Trends
Summarizing the above data, we can conclude that the countries of the world are looking for universal approaches, and this trend can be traced at the level of supranational organizations (UN, etc.) and associations (OECD), whose materials we have considered in the Introduction (see Table 1).
It is also important to mention that the present article does not aim to explore the particular varied software systems across countries. Thus, the results do not mention, for instance, PrometEA [109] (which predicts court decisions and prepares sets of documents for individual cases in Argentina), a similar tool in China (System 206) [110], the AI that is used in Mexico to assess the possibility of receiving social benefits (EXPERTIUS) [111], the software for the unification and preparation of court documents in Brazil (Victor) [112], as well as many other national AI tools that are not considered within the paper’s data. The detailed consideration of all existing software systems should be the subject of special research with a focus on technical issues.
The results of the research can be summarized in
Table 4.
Summarizing the data of the Results section, including the findings structured in
Table 2,
Table 3 and
Table 4, we can conclude that there are some global trends regarding the AI phenomenon within the international institutional vision, research, and national authorities.
The generalized profile of these trends includes the following points:
– ethics of AI use;
– technical regulation of the said technology;
– controversial issues of singling out a separate legal personality (including through the construction of a legal entity);
– the issue of a separate law regulating the complex of legal relations associated with this technology (both at the stage of development and application of regulation and at the stage of law enforcement, with the distribution of responsibility among the participants in the process);
– separate issues of data security and the procedure for their processing.
The current technological level of AI around the world is mainly represented by highly specialized AI, while the formation of general AI and super AI is still in progress.
Many countries note this specificity in their strategies, but current legislative initiatives do not directly take into account the possibility of further technological progress.
The author proposes to take this feature into account when developing regulatory approaches, drawing on the best practices and serious blunders of law enforcement in different countries. Furthermore, it is relevant to recall the suggestions regarding particular additional tools that might be appropriate for a given period; this point was specified earlier in the Introduction with reference to
Table 1 data.
4. Discussion
It should be noted that the theoretical and law enforcement aspects of the integration of AI into the sphere of public relations are considered by both scientists and practitioners along the same lines as
Table 1,
Table 2 and
Table 3 reveal. The basic problems of ensuring the transparency of AI decision making and of addressing its bias in decision making form the groundwork for further research. The most frequently raised issue is the technical readiness of public authorities to integrate AI. However, at the doctrinal and practical levels, no specific proposals have been made to determine the level of technical readiness of AI or to fix indicative approaches to this technology at the level of legislative initiatives.
Within the above context, it is relevant to mention the position of Wilson, who, in a study of 16 national AI strategies, notes the abstractness of their fixed provisions, which leads to weak involvement of end users (citizens) [
113].
Further on, Nordström, conducting research on the positioning of AI in the field of public administration, comes to the general conclusion that the considerable uncertainty regarding the use of AI at different time intervals must be taken into account. However, the scholar does not specify how exactly public policy should be implemented at the level of specific tools and examples [
114].
Meanwhile, Wilson and van der Velden set out a framework that is, in the author’s opinion, an interesting one: they define sustainable AI for the purposes of application in public administration, outline cross-border aspects in its definition, and specify common vectors for further law enforcement activities based on their integrated model, which sets out a general approach to the regulation of AI. Such an approach is definitely promising. However, the scholars do not go into a detailed legal analysis of the practical integration of AI into the public sector [
115].
In addition, legal scholars argue for the need to ensure the protection of personal data while information about the activities of AI is published, which gives rise to mutually exclusive approaches at the level of legislative regulation.
Among the applied studies confirming the relevance of this angle, it seems necessary to note the article by Cheng et al., which underlines, based on the results of a survey and analysis, the negative aspects of the use of AI, i.e., its uncertainty, opacity, and potential interference with privacy and access to personal data [
116].
According to the author, it is first of all necessary to grade public legal relations according to their risk-oriented component for the purposes of public law and, depending on the segment of regulation and the scale of risk, to fix the admission of AI to the public functions of government bodies. This approach will also make it possible to manage the risk of AI bias based on the data provided by the administrative authority and to promptly suppress illegal AI actions.
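To make the proposed grading more tangible, the sketch below is a purely illustrative encoding of a risk-oriented admission scheme; the tier names, the example functions in the comments, and the three admission modes are hypothetical assumptions made for this example rather than elements of any existing legal act.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical risk grades of public legal relations."""
    MINIMAL = 1        # e.g., routine information services
    MODERATE = 2       # e.g., pre-screening applications for benefits
    HIGH = 3           # e.g., traffic enforcement, credit scoring
    UNACCEPTABLE = 4   # e.g., fully autonomous use of force

class AdmissionMode(Enum):
    """Degree to which AI is admitted to a public function."""
    FULL_AUTOMATION = "AI may act without case-by-case review"
    HUMAN_IN_THE_LOOP = "AI prepares a draft; a human official decides"
    HUMAN_ONLY = "AI may not perform this public function"

def admission_mode(tier: RiskTier) -> AdmissionMode:
    """Map a risk tier to the permitted degree of AI participation."""
    if tier is RiskTier.MINIMAL:
        return AdmissionMode.FULL_AUTOMATION
    if tier in (RiskTier.MODERATE, RiskTier.HIGH):
        return AdmissionMode.HUMAN_IN_THE_LOOP
    return AdmissionMode.HUMAN_ONLY

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.name:>12}: {admission_mode(tier).value}")
```

Such an encoding is only a way of making the grading explicit; the substantive question of which public legal relations fall into which tier remains a matter for the legislator.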
Taking into account AI implementation practice and theoretical research in this area, the author shares the position that manual control over the technology, as a subject of public legal relations, must be maintained at all stages of its life cycle. Such control should be balanced and aimed at eradicating fundamental problems arising in administrative law rather than at correcting individual law enforcement acts. This approach is justified by the fact that the data samples used by AI are variable and not always representative.
In this regard, it is worth turning to Fink’s study, which offers five practical tools for solving the above problems (a ban on the separate development of AI elements for the state segment, expansion of the databases used by AI, etc.) for the purposes of administrative law and automated decision making [
43].
Furthermore, there are a number of recent papers that aim to specify stages (periods) with reference to the technology under study.
Thus, scholars advocate a human-centric perspective on AI, considering the challenges and limitations at the design stage and further with reference to the market-driven approach to AI regulation [
117], to the stages of the Internet of Things development [
118], specifying a time dimension, and arguing for research that observes different stages of organizational AI maturity while keeping the focus on fairness and accountability as well as on the implications of AI technology [
119]. There is also research on periods for the development of AI ethics and the normative steps of global AI governance within the concepts of democracy and human rights [
120].
The analysis of the practice of different countries in the use of AI in public legal relations allows the author to make a number of suggestions.
The research data confirm that AI implementation in the public sphere can vary from country to country in the scope of public areas and services covered. However, the results of the comparative research suggest identifying short-term (1–3 years), medium-term (3–5 years), and long-term (5–10 years) measures and approaches to the regulation of AI in various branches of law that might be considered common to different countries and therefore viewed as global in character. It should be taken into account that legislative revision is required in each period across all interrelated segments of the law, the specialized areas of AI application, the speed of development of the AI technology itself, and the readiness of the authorities for the implementation of public functions by AI.
The present section further elaborates on the suggested periodization with reference to those publications that set forth similar topics.
The approach of public authorities to the regulation of AI in the short-term period includes consideration and implementation of the following aspects of the performance of public functions by AI. The key issue requiring regulation when AI acts in the public sphere as a representative of authority is AI transparency and confidentiality. International experience reveals that current AI services, in the absence of transparency, cannot be used within the administrative functions of government bodies, as they violate basic human rights. Regarding the doctrinal research, we consider it timely to mention the practice-oriented works of Raso et al., which explore the current imperfection of AI technologies in certain cases of their application (criminal justice, access to the financial system including credit scores, etc.). These scholars highlight that the basic principle of respect for human rights laid down in the Universal Declaration of Human Rights allows legal proceedings to correct illegal AI decisions made in the performance of public functions [
121]. In addition, the performance by AI of public authority functions (as well as the use of AI in litigation) requires a specialized approach to the analysis of the respective situations. It is recommended to assign such functions to the scientific community rather than to commercial organizations, given the potentially specific ultimate goals of the latter. At this stage, we also consider it relevant to suggest developing a gradation of AI access to personal data or commercial secrets that it may use in its analysis to perform public functions within the competence of the authority. Such an approach will allow both targeted support to the relevant segments of the population or organizations and a response proportionate to the degree of impact on society, such as an administrative or other offense committed by an individual or legal entity.
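A minimal sketch of such a gradation is given below; the access levels, the public functions in the mapping, and the ceilings assigned to them are hypothetical assumptions made for illustration only and are not drawn from any existing regulation.

```python
from enum import IntEnum

class DataAccessLevel(IntEnum):
    """Hypothetical levels of data an AI service may process."""
    OPEN_DATA_ONLY = 1      # publicly available registers and statistics
    PSEUDONYMIZED = 2       # personal data with identifiers removed
    PERSONAL_DATA = 3       # full personal data within the authority's competence
    COMMERCIAL_SECRETS = 4  # protected business information

# Illustrative mapping: the maximum access level an AI service may be
# granted for each public function (function names are hypothetical).
FUNCTION_ACCESS_CEILING = {
    "traffic_analytics": DataAccessLevel.PSEUDONYMIZED,
    "targeted_social_support": DataAccessLevel.PERSONAL_DATA,
    "tax_audit_support": DataAccessLevel.COMMERCIAL_SECRETS,
}

def may_access(function: str, requested: DataAccessLevel) -> bool:
    """Return True if the requested level stays within the ceiling for the
    given function; unknown functions default to open data only."""
    ceiling = FUNCTION_ACCESS_CEILING.get(function, DataAccessLevel.OPEN_DATA_ONLY)
    return requested <= ceiling

if __name__ == "__main__":
    print(may_access("traffic_analytics", DataAccessLevel.PERSONAL_DATA))        # False
    print(may_access("targeted_social_support", DataAccessLevel.PERSONAL_DATA))  # True
```

The point of the sketch is simply that the permitted depth of data processing can be tied to the specific public function, rather than granted to an AI service wholesale.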
The approach of public authorities to the regulation of AI in the medium-term period includes consideration and implementation of the following aspects.
In the context of AI’s legal personality and its participation in civil, administrative, and other legal relations, a promising prototype is the electronic person (a special legal form), which additionally incorporates the functions of a legal entity. Within this approach, it is recommended to make part of the data open to citizens and legal entities, measuring not only the potential risk and danger of AI but also the merits it brings to society (the equivalent of a social rating).
It should be understood that the position regarding the assignment of a separate legal personality to AI is debatable and covers a wide range of views held by both supporters and opponents of this position. Among the clear opponents of such a concept of legal personality, one can single out Loiseau and Bensamoun, who believe that granting artificial intelligence an independent legal status is groundless and premature [
122].
We also consider it relevant to mention Diamantis, Bryson, and Grant, who emphasize that assigning the status of electronic persons to artificial intelligence units can lead to a weakening of the legal protection of people in comparison with such units [
123].
The author shares the stance of those who support the legal personality of AI. Among such scholars, one can single out the position of Lawrence Solum, who back in 1992 identified the possibility of conferring legal personality through the construct of a trust [
124]. This approach has also been developed by Florian Möslein, Bayern et al., and Vladeck [
125,
126,
127] and further developed by Yastrebov [
128].
Bearing in mind the data and discussion provided in this article, the author also considers it timely to suggest the following measures for the medium term:
– first, to introduce common approaches for the supranational regulation of these AI technologies;
– second, to initiate the creation of a single AI registry in order to prevent the duplication of technologies, especially those that have been tested in a number of countries and recognized as dangerous to society (a minimal sketch of such a registry record is provided after this list);
– third, to determine the main “hub” of AI in order to promptly turn it off or to determine the source of compensation for damage.
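As announced in the second item above, the sketch below illustrates what a record of such a single AI registry might contain; every field name and the duplicate-check logic are hypothetical assumptions made for illustration and do not describe any existing registry.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIRegistryEntry:
    """One record of a hypothetical single AI registry."""
    registry_id: str                    # unique identifier assigned on registration
    developer: str                      # responsible developer or operator
    purpose: str                        # declared public function or application area
    risk_tier: str                      # outcome of the risk-oriented grading
    recognized_dangerous: bool          # flagged as dangerous in any jurisdiction
    hub_endpoint: Optional[str] = None  # contact point of the main "hub" for shutdown
    registered_on: date = field(default_factory=date.today)

def dangerous_duplicates(registry: List[AIRegistryEntry],
                         candidate: AIRegistryEntry) -> List[AIRegistryEntry]:
    """Return already registered entries with the same declared purpose that were
    recognized as dangerous, so the technology is not re-introduced under a new name."""
    return [entry for entry in registry
            if entry.purpose == candidate.purpose and entry.recognized_dangerous]
```

The registry record also carries the contact point of the "hub" mentioned in the third item, so that a shutdown order or a claim for compensation can be routed without delay.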
Regarding legal personality issues in the exercise of the functions of a public authority, the author considers it relevant to establish a separate specialized agency that coordinates and implements policy regarding public services delivered with the participation of AI, in order to accumulate a single database. At the same time, the “manual calibration” of law enforcement practice remains with a human in all cases. This should be reflected in the documents regulating the activities of AI and of the authorities.
The approach of public authorities to the regulation of AI in the long-term period includes consideration and implementation of the following aspects of the performance of public functions by AI. Regarding AI legal personality in the exercise of the functions of a public authority, we should understand that much will depend on the level of readiness of the technology and its application in society. Depending on this, the replacement of most social and public functions (health care, the fiscal function, etc.) by AI is clearly possible. Further, the performance by AI of predictive law enforcement functions (preventing potential offenses before they are committed or granting rights for certain types of actions before they are officially requested from the relevant body) is also possible. Nevertheless, we maintain that certain segments of regulation should remain exclusively with humans (e.g., the defense segment).
We should also mention that the length of the periods and the scope of activities within each period might vary depending on the current situation in a particular country regarding the field of study.
5. Concluding Remarks
The author bears in mind that the suggested periodization of measures for the implementation of public services by AI is tentative, as each country has its own national vision of the activities under study. Nevertheless, the results of the analysis allow the author to formulate key critical points for introducing AI into the public sphere across countries of different regions and varied public areas.
First, the introduction of AI into the public sphere of various countries requires the consolidation of control by public authorities over AI. This control can be implemented through the activities of scientific institutions in order to minimize the risks of outside influence on the authorities through the unfair behavior of developers.
Second, a flexible system is needed to monitor the level of technological readiness and the branching of AI technology so that the state can regulate the AI market.
Third, international experience confirms the need to develop common approaches to ensuring transparency and confidentiality for the purposes of using AI in the public sphere.
Moreover, the data show that it is relevant to develop a graded approach to access to processed information, taking into account the ways in which it is obtained and its potential use by authorities applying AI in line with their goals.
Next, considering the thesis of AI’s public participation in legal relations with individuals, legal entities, and public persons, it should be noted that AI mainly performs an agent function.
Finally, in order to form the regulation of AI systematically, it is necessary to take into account the stages of technology development and its level of readiness in terms of technology readiness levels (TRL).
The present research faces some limitations. First, the data collection refers to a particular period: AI develops at a skyrocketing pace, and every six months there are novel facts and developments at both the international and national levels. Second, a wider selection of countries could refine the conclusions and suggestions. The present paper has limited the list of countries on the grounds of the GTCI criteria.
Regarding further research issues, we consider it relevant to deepen the analysis within major areas and institutions of the public sector.