Review

Democratizing Artificial Intelligence for Social Good: A Bibliometric–Systematic Review Through a Social Science Lens

1 Department of Social Work, Hong Kong Baptist University, 15 Baptist University Road, Kowloon Tong, KLN, Hong Kong
2 Institute of Information Management, National Cheng Kung University, No.1, University Road, Tainan City 701401, Taiwan
* Author to whom correspondence should be addressed.
Soc. Sci. 2025, 14(1), 30; https://doi.org/10.3390/socsci14010030
Submission received: 24 October 2024 / Revised: 5 December 2024 / Accepted: 12 December 2024 / Published: 10 January 2025
(This article belongs to the Special Issue Digital Intervention for Advancing Social Work and Welfare Education)

Abstract:
This study provides a comprehensive analysis of the opportunities for democratizing artificial intelligence (AI) for social good using a bibliometric–systematic literature review method. It combines the quantitative analysis of bibliometric methods with the qualitative synthesis of systematic reviews. This approach helps identify patterns, trends, and gaps in the literature, advancing theoretical insights and mapping future research directions. Design/methodology/approach: Scopus, PubMed, and Web of Science, as prominent scientific databases, were utilized to examine publications between 2014 and 2024. The article selection followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The bibliometric analysis was conducted using CiteSpace software. Findings: The bibliometric analysis identified the most influential articles, journals, countries, authors, and key themes. The systematic thematic analysis identified established modes of using AI for social good. Moreover, future research directions are suggested and discussed in this article. Practical implications: The findings give future research directions and guidance to academics, practitioners, and policymakers for real-world applications.

1. Background

Social good refers to actions, services, or interventions that aim to improve society’s overall well-being, particularly by addressing critical social issues, including poverty, healthcare access, education, and environmental sustainability. Social good encompasses both direct services provided to vulnerable populations and broader social interventions designed to address systemic inequalities (Mor Barak 2020). In the current technological era, integrating advanced technologies into social good initiatives can amplify their impact, increase efficiency, and extend their reach to a broader population, thereby fostering greater societal benefits. Among many technology options, artificial intelligence (AI) stands out as particularly promising due to its capacity to address social challenges, making it a pivotal tool for advancing social good (Chui et al. 2018). AI creates systems that perform human intelligence tasks, such as reasoning, learning, and visual perception, by using technologies like machine learning and neural networks to process data and make decisions autonomously (Sheikh et al. 2023). As AI continues to evolve, its applications for social good are expanding across various fields, including healthcare, education, social work, and governance (Floridi et al. 2020; Følstad et al. 2021). These advancements offer significant opportunities to enhance social good initiatives and promote the well-being of marginalized communities.
However, the application of AI for social good has sparked various debates and highlighted challenges. While AI may enhance the effectiveness of social services and interventions, concerns about equity, inclusivity, and ethical governance are equally critical. Some observations show that while AI can solve complex social problems, it can also exacerbate inequalities if not managed carefully. For instance, AI systems often rely on large datasets, which may introduce biases if not properly curated. Furthermore, the rapid advancement of AI raises questions about privacy, data security, and the digital divide. These concerns emphasize the need for a more nuanced and well-structured approach to understanding and regulating AI in the social good domain (Floridi et al. 2020).
Against this background, democratizing AI for social good has become a significant goal: it refers to the effort to make AI tools widely accessible and to involve diverse voices in AI development and regulation. Democratizing AI can empower interdisciplinary experts, practitioners, and even non-technical people across various fields to harness AI’s potential for solving problems, innovating solutions, and addressing societal challenges. For example, social entrepreneurs can utilize user-friendly AI tools without needing deep technical expertise. This trend encourages interdisciplinary collaboration, bringing diverse perspectives into AI development and its applications. Cupać et al. (2024) have categorized the democratization of AI for social good into three main areas: AI developed or used for social good, AI deployed in various contexts, and the regulation of AI. Seger et al. (2023) identified four major components of AI democratization. The first is the democratization of AI use: making AI tools usable by as many people as possible. The second is the democratization of AI development: engaging local communities and diverse social groups in the design of AI systems so that societies benefit from them. The third is the democratization of AI profits: distributing the economic gains generated by AI equitably. The fourth is the democratization of AI governance: ensuring that decisions about AI use, development, and profits reflect the preferences and needs of the people who will be affected. Thus, the current and potential roles of AI democratization deserve greater attention, as they hold significant potential for enhancing the practical application of AI. Existing discussions note that social good and democratization are inherently linked.
In other words, efforts to democratize AI should focus on achieving social good, and social good initiatives should be democratized to ensure broad accessibility, equitable benefits, and inclusive participation from all segments of society. This study adopted an inclusive operational definition of “democratizing AI for social good”: ensuring equitable access to, participation in, and benefits from AI technologies for social good.
Given the complexity and multifaceted nature of work on democratizing AI for social good, research efforts in this area are inherently interdisciplinary (Floridi et al. 2020; Følstad et al. 2021). While interdisciplinary research benefits from integrating various perspectives, discipline-based lenses remain crucial for providing depth, rigor, and clarity (Leavy 2019). These lenses ground research in established knowledge and methodologies, identify specific gaps, and facilitate effective communication and collaboration among experts. Understanding AI from a social science perspective is particularly valuable as it offers insights into societal implications, ethical considerations, and human interactions with AI technologies.
Social sciences are inherently multidisciplinary yet coherent (Barthel and Seidl 2017), integrating various disciplinary perspectives to address complex human and societal issues. Fields like psychology, sociology, economics, political science, and anthropology offer unique methodologies and theories, contributing to a comprehensive understanding of social phenomena. Despite diverse approaches, these disciplines share goals of exploring human interactions, social institutions, and cultural norms. Interdisciplinary collaboration and shared methods ensure coherence, enabling social sciences to generate nuanced insights and effective solutions to societal challenges.
Adopting a technology-informed social science lens to interpret AI-related studies means understanding the technology from an angle underpinned by social science theories, methodologies, and perspectives (Z. Liu 2021; Miller 2019). This approach bridges the gap between technological advancements and social science insights, ensuring that insights into social dynamics, ethical considerations, and human needs inform our understanding of AI. By doing so, researchers can generate comprehensive and actionable insights that advance academic knowledge and contribute to the development of AI systems that are socially beneficial and ethically sound.
As such, we determined that it was worthwhile conducting a review to identify potential future directions for research into democratizing AI for social good, focusing on how social science can play a contributive role in such democratization. The research questions were as follows:
  • Which articles, journals, countries, and authors have the most significant influence in the field?
  • What are the dominant modes of AI applications in this research domain?
  • What research trends in this domain have been explored in the past decade?
  • What are the potential future directions for research into democratizing AI for social good, and what are the implications for social science researchers?

2. Method: Bibliometric–Systematic Review

This study provides a comprehensive analysis of the opportunities for democratizing AI for social good using a bibliometric–systematic review method (Brignardello-Petersen et al. 2024; Marzi et al. 2024). It combines the quantitative analysis of bibliometric methods with the qualitative synthesis of systematic reviews. This approach helps identify patterns, trends, and gaps in the literature, advancing theoretical insights and mapping future research directions. Bibliometric analysis is suitable for this study because it quantitatively evaluates the academic literature, providing insights into research trends, influential works, and key contributors on AI for social good. By analyzing citation patterns, co-authorship networks, and keyword occurrences, bibliometric analysis helps identify the most significant research themes and emerging areas of interest (Öztürk et al. 2024). In addition, a systematic thematic analysis of reviews can uncover underlying themes, theoretical frameworks, and methodological approaches prevalent in the literature. This mixed-methods approach thoroughly examines the breadth and depth of existing research, ensuring that the analysis captures the bibliometric profiles and topics in research on AI for social good.

2.1. The Bibliometric Approach: Selection, Analysis, and Visualization

The article selection process followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines recommended for systematic literature reviews (Page et al. 2021). Figure 1 explains the methodological process. The first stage involved identifying the search terms within the Scopus, PubMed, and Web of Science (WOS) databases. We selected terms encompassing the broad range of AI technologies, such as “chatbot”, “deep learning”, “machine learning”, “computer vision”, “natural language”, and “image recognition”, for use in the databases’ search engines. Thus, the search terms used were:
(“artificial intelligence” OR “AI” OR “chatbot” OR “deep learning” OR “machine learning” OR “computer vision” OR “natural language” OR “image recognition”) AND (“democra*” OR “empower*”) AND (“social”)
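For replication, the boolean string above can be assembled programmatically before submitting it to each database’s search interface. This is a minimal Python sketch; the variable names are ours, and real database APIs may require field prefixes (e.g., TITLE-ABS-KEY in Scopus) not shown here.

```python
# Term groups reported above; "*" is the databases' truncation wildcard.
AI_TERMS = ["artificial intelligence", "AI", "chatbot", "deep learning",
            "machine learning", "computer vision", "natural language",
            "image recognition"]
DEMOCRATIZATION_TERMS = ["democra*", "empower*"]
CONTEXT_TERMS = ["social"]

def or_group(terms):
    """Quote each term and join the group with OR, inside parentheses."""
    return "(" + " OR ".join(f'"{term}"' for term in terms) + ")"

# The three OR groups are combined with AND, mirroring the query above.
query = " AND ".join(or_group(group)
                     for group in (AI_TERMS, DEMOCRATIZATION_TERMS, CONTEXT_TERMS))
print(query)
```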
The search results from the PRISMA process are presented in Table 1. The screening process was guided by inclusion and exclusion criteria. The research scope was limited to studies published within a specific period (2014–2024) and only included articles written in English. Certain document types, such as books, book chapters, conference proceedings, reports, and review articles, were excluded from the selection. The remaining publications underwent a thorough eligibility review, which involved carefully reading each paper’s title and abstract to ensure relevance to AI for social good.
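The inclusion and exclusion criteria described above can be expressed as a simple screening filter. The sketch below is illustrative only: the record structure and field names (`year`, `language`, `doc_type`) are hypothetical, not those of any particular database export.

```python
# Document types excluded from the review, per the criteria above.
EXCLUDED_TYPES = {"book", "book chapter", "conference proceeding",
                  "report", "review"}

def passes_screening(record):
    """Apply the period, language, and document-type criteria to one record."""
    return (2014 <= record["year"] <= 2024
            and record["language"].lower() == "english"
            and record["doc_type"].lower() not in EXCLUDED_TYPES)

# Invented records for illustration.
records = [
    {"year": 2020, "language": "English", "doc_type": "Article"},
    {"year": 2013, "language": "English", "doc_type": "Article"},  # outside period
    {"year": 2022, "language": "English", "doc_type": "Review"},   # excluded type
]
eligible = [r for r in records if passes_screening(r)]
print(len(eligible))
```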
Bibliographic data analysis was conducted using CiteSpace for data visualization. This bibliometric analysis provided a detailed overview by examining various aspects of the research, including authorship, geographic origins, keyword co-occurrences, clustering of related topics, and thematic maps. This approach allows for a deeper understanding of the research landscape.
We used bibliometric analysis on the eligible articles because it provides a robust, quantitative method for assessing the impact and relevance of academic publications. Examining metrics such as citation counts helps identify the most influential works within a given field. In this context, only the most relevant of the 181 eligible articles were selected, based on their citations and practical significance. This selection process ensures that the subsequent systematic thematic analysis focuses on the most impactful research, which is crucial for uncovering key debates, identifying research gaps, and highlighting areas for future study. As such, by leveraging bibliometric analysis, the study can more effectively distill critical insights and guide future research directions.
Despite the advantage of the bibliometric approach, this method also has limitations such as dependency on search terms, citation biases, and inconsistencies in metadata (Belter 2015; Holden et al. 2005). Additionally, interdisciplinary research may be misclassified, and overlaps between databases can create duplicate records, complicating data analysis. These factors may limit the comprehensiveness and balance of the findings. Notwithstanding these known limitations, it is still useful to adopt a bibliometric approach because it provides a quantitative overview of research trends and identifies influential works, authors, and collaborations. This makes it a powerful tool for understanding and navigating complex research landscapes.

2.2. Systematic Thematic Analysis Through a Social Science Lens

Thematic analysis seeks to identify patterns within the narrative data (Fereday and Muir-Cochrane 2006). This analysis is not merely deductive (applying pre-existing concepts) or inductive (conceptualizing raw data); rather, it employs a hermeneutical approach, where the analysis is informed by both the researcher’s initial concepts and the raw data (Cole and Avison 2007; Fereday and Muir-Cochrane 2006).
As noted at the beginning of this article, adopting a technology-informed social science lens prioritizes understanding AI in terms of human behavior, societal impacts, and ethical considerations. This approach helps identify gaps in the existing literature and facilitates effective interdisciplinary communication. The goal is to achieve a thorough analysis that social science researchers can easily understand.
In this study, the researcher refined, conceptualized, and theorized around these initial ideas as the interpretation progressed. The thematization process was cyclic and hermeneutic, continuously refining themes based on initial themes and the actual narrative data. This analysis approach allows for the exploration of individual participant experiences while ensuring that key topics are addressed (Gubrium et al. 2012).

3. Results

3.1. Results of the Bibliometric Analysis

3.1.1. Most Globally Cited Articles

Table 2 shows the 10 most influential articles on AI for social good, selected from the 181 eligible articles, based on their citation counts. These articles are notable for their significant impact, as demonstrated by their number of citations. The paper titled “AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings”, authored by Kuziemski and Misuraca, published in Telecommunications Policy (Kuziemski and Misuraca 2020), stands as the most frequently cited work in the field.

3.1.2. Most “Important” Articles in the Eligible Article Set

Another important measure of an article’s influence is betweenness centrality, which quantifies how often a node (article) appears on the shortest paths between pairs of nodes in a network (Cheng et al. 2018). For example, in a citation network where Article A cites Article B and Article B cites Article C, Article B acts as a bridge between A and C and therefore lies on the shortest path from A to C. In the context of this study, betweenness centrality reflects an article’s centrality among the eligible articles and all references cited by them: a higher centrality indicates that the article is a key “connector” within the network and, consequently, more influential.
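The bridge intuition can be made concrete with a small, self-contained sketch (a textbook computation, not CiteSpace’s implementation): enumerate all shortest paths in a toy citation network and credit each intermediate node with its share of the paths it sits on.

```python
from collections import deque
from itertools import permutations

def shortest_paths(graph, source, target):
    """Enumerate every shortest directed path from source to target via BFS."""
    dist, preds = {source: 0}, {source: []}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:                 # first time v is reached
                dist[v], preds[v] = dist[u] + 1, [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:      # another equally short route
                preds[v].append(u)
    if target not in dist:
        return []
    paths = []
    def backtrack(node, suffix):
        if node == source:
            paths.append([source] + suffix)
            return
        for p in preds[node]:
            backtrack(p, [node] + suffix)
    backtrack(target, [])
    return paths

def betweenness(graph):
    """Sum, over all node pairs, the fraction of shortest paths through each node."""
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    scores = {n: 0.0 for n in nodes}
    for s, t in permutations(nodes, 2):
        paths = shortest_paths(graph, s, t)
        for path in paths:
            for middle in path[1:-1]:         # endpoints earn no credit
                scores[middle] += 1 / len(paths)
    return scores

# Toy citation network from the text: A cites B, B cites C.
citations = {"A": ["B"], "B": ["C"]}
scores = betweenness(citations)   # B bridges the only A-to-C shortest path
```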
Table 3 lists the top ten most important articles. The one by Floridi et al. is ranked first. That article, whose lead author works on philosophy and the ethics of information, is considered a key reference in AI and social good research. Additionally, most of the authors in the top ten list are not from pure engineering or computing science backgrounds but from diverse fields such as ethics, business, literature, and economics. This highlights this research domain’s interdisciplinary nature and its humanities and social science orientation.

3.1.3. Number of Articles by Subject Categories

Table 4 presents the distribution of articles across different subject categories. The left column shows the number of articles in each category, while the right column lists the corresponding subject areas. The table shows that “Computer Science” has the highest number of articles (48), followed by “Business and Economics” with 22 articles. It should be noted that articles may belong to multiple categories (e.g., a single article could be classified under both “Computer Science” and “History and Philosophy of Science”). Therefore, these breakdowns do not imply that most articles are exclusively related to computer science, but rather highlight the overlapping and interconnected nature of research across various disciplines.

3.1.4. Top Journals

Table 5 offers important findings regarding the most productive journals covering research on AI democratization for social good. The leading journal is Science, with 29 articles among the 181 eligible articles. Notably, journals outside the pure sciences (e.g., AI & Society, Big Data & Society) also feature comparable numbers of publications, highlighting the interdisciplinary and social nature of this research domain.

3.1.5. Most Productive Institutions

Table 6 highlights the most productive institutions based on the 181 eligible articles analyzed, listing the number of articles they published along with their respective publication years. This analysis is based on the institutional affiliations of all contributing authors in an article. Although the University of Oxford and Harvard University stand out as the most productive, each contributing six articles since 2020, the single-digit publication counts indicate that research output is widely dispersed across institutions, with no single institution emerging as a dominant contributor.

3.1.6. Most Productive Countries

Table 7 illustrates the top countries that contributed to the research area. This analysis considers the country affiliations of all contributing authors in an article, and these distributions reflect the significant involvement of researchers from these nations. Notably, the USA demonstrates the highest research productivity, with a total of 31 articles. The United Kingdom and Germany emerge as the second and third most prominent contributors, respectively, each accounting for 16 articles. Additionally, countries such as Canada, Australia, and China also show considerable contributions, highlighting the global nature of the research efforts in this field.

3.1.7. Emerging Trend

A keyword serves as a binding and indicative term that reflects specific research topics. It defines the field in detail and establishes a common vocabulary, providing a tool to study research evolution (Barki et al. 1993). Advanced analytical tools such as CiteSpace facilitate a more systematic analysis of keywords in the literature. This enables researchers to draw insights about the current state of research in a specific field and to identify future directions.
Keyword analysis was used to observe the emerging studies in research related to AI and social good. A total of 212 keywords from 2015 to 2024 were identified. A visualization of a co-occurring keyword network was obtained and is shown in Figure 2. Each label represents a keyword. The co-occurring frequency of the keywords determines the label size. The results show that “artificial intelligence”, “machine learning”, “human”, and “social work” are the most frequent keywords to appear in AI and social good studies. The “artificial intelligence” and “machine learning” keywords generate a high betweenness centrality value, as shown in Table 8, indicating that those keywords highly influence AI and social good studies. Figure 3 visualizes the clusters of keywords with similar terms, revealing eight distinct groups.
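As a minimal illustration of how such a co-occurrence network is derived (a simplified sketch, not CiteSpace’s actual algorithm), one can count, for each keyword pair, how many articles list both keywords; node frequency determines label size and edge weight determines link strength. The article records below are invented for the example.

```python
from collections import Counter
from itertools import combinations

def keyword_network(article_keywords):
    """Build node frequencies and edge weights for a co-occurrence network.

    Node frequency = number of articles using the keyword; edge weight =
    number of articles using both keywords of a pair.
    """
    freq, edges = Counter(), Counter()
    for keywords in article_keywords:
        unique = sorted(set(keywords))       # ignore within-article repeats
        freq.update(unique)
        edges.update(combinations(unique, 2))
    return freq, edges

# Invented keyword lists echoing the most frequent keywords reported above.
articles = [
    ["artificial intelligence", "machine learning", "ethics"],
    ["artificial intelligence", "social work", "human"],
    ["machine learning", "artificial intelligence"],
]
freq, edges = keyword_network(articles)
print(freq.most_common(1))
```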
Eight clusters were formed, each containing more than five keywords, illustrating the typical dominance of AI. The clusters’ names were synthesized from the titles of articles using a log-likelihood method. Each cluster’s interpretation is as follows:
  • Cluster #0 Ethical Artificial Intelligence. This cluster, comprising 50 members, occupies a central position. The most cited keywords are “artificial intelligence”, “decision making”, “social good”, “ethics”, and “generative artificial intelligence”. Most of these keywords were cited in the studies related to ethics in artificial intelligence (covered in Astobiza et al. 2021; Keller and Drake 2021; Moore 2019; Nasir et al. 2023; Sreenivasan and Suresh 2024).
  • Cluster #1 Social Science. This is the second-largest cluster, which consists of 27 representative keywords, with the top 5 cited keywords being “social work” “human”, “article”, “natural language processing”, and “human experiment”. Topics in this cluster cover studies related to the application of the AI method in behavioral and social science studies (Robila and Robila 2020). In addition, issues related to the application of AI in mediated communication technology (Goldenthal et al. 2021), social work education (Hodgson et al. 2022), social services (James and Whelan 2022), and marketing for social good (Hermann 2022) are addressed.
  • Cluster #2 Automated Deep Learning. In this cluster, the most cited keywords are “deep learning”, “big data”, “neural networks”, “bias”, and “algorithms”. Most of these keywords were found in an article titled Automated deep learning: Neural architecture search is not the end authored by Dong et al. (2024). Seven of the twenty-four keywords from this cluster occur in Dong’s article.
  • Cluster #3 Hard Choice. This cluster consists of 22 keyword members, with the most cited keywords as “artificial intelligence (AI)”, “AI ethics”, “technology”, “AI governance”, and “inclusive design”. Four keywords from this cluster occur in articles authored by Dobbe et al. (2021), along with those titled Hard choices in artificial intelligence, Ethical artificial intelligence (AI): Confronting bias and discrimination in the library and information industry (Saeidnia 2023), Exploring the effects of AI literacy in teacher learning: An empirical study (Du et al. 2024), and Democratization in the age of artificial intelligence: Introduction to the special issue (Cupać et al. 2024).
  • Cluster #4 Machine Learning. This cluster consists of 20 keywords, with the most cited keywords being “machine learning”, “automation”, “visual analytics”, “open data”, and “data cleaning”. The core member is the study by Serrano et al. (2019) on proactive social inclusion powered by machine learning, which includes five keywords from this cluster.
  • Cluster #6 Info-Communication Expert. This cluster consists of ten keywords, with the most cited keywords being “generative AI”, “political economy”, “large language models”, “society”, and “conversational AI”. A study by Feher et al. (2024) titled Modeling AI trust for 2050: Perspectives from media and info-communication experts includes four keywords from this cluster. In addition, the most cited article in this cluster is Generative AI in education and research: Opportunities, concerns, and solutions, authored by Alasadi and Baiz (2023).
  • Cluster #7 Human-Centric AI-Thinking Approach. This consists of a total of eight keywords, with the most cited keywords being “AI for good”, “quality”, “optimization”, “Bayesian analysis”, and “predictive modeling”. Artificial intelligence-enhanced decision support for informing global sustainable development: A human-centric AI-thinking approach authored by M.-L. How et al. (2020b) covers seven keywords from this cluster.
  • Cluster #9 Capabilities. This cluster consists of only six keywords, with the most cited keywords being “challenges”, “competence”, “artifacts”, “business model innovation”, and “creation”. How AI capabilities enable business model innovation: Scaling AI through co-evolutionary processes and feedback loops authored by Sjödin et al. (2021) is the article that best represents this topic, covering four of the keywords.
The clusters reflect the sub-fields of AI related to social studies, with each cluster consisting of keywords representing its topic. Figure 4 visualizes the evolution from deep learning to artificial intelligence, generative AI, and ethics, highlighting keywords reflected in the literature over the past decade. In 2024, keywords such as “technology”, “management”, “smart city”, “user acceptance”, “social media”, and “conversational AI” emerged. To further explore recent research trends and frontiers in AI for social good, we present the burst intensity of keywords, along with their start and end times, in Table 9. The focus within the AI for social good domain has shifted to neural networks and ethics, with neural networks garnering attention earlier than ethics. These two topics represent the emerging trends currently shaping the field of AI for social good.

3.2. Results of the Systematic Thematic Analysis

Based on the results of the bibliometric analysis, only the most pertinent papers—selected for their citation impact and practical significance—were included in the subsequent systematic thematic analysis. This analysis aimed to uncover key debates, identify research gaps, and highlight areas for future research.

3.2.1. Benefits and Hurdles of AI Democratization

AI holds significant potential to address societal challenges and enhance overall well-being. The democratization of AI must therefore prioritize inclusivity, ensuring that these technologies are accessible and tailored to meet the diverse needs of society (Prabhakar Rao and Siva Prasad 2021). It is crucial to foster an environment where AI contributes to creating an equitable society by facilitating the widespread availability of information for all individuals.
AI can be tailored to solve critical issues and improve quality of life, especially if used with a focus on inclusivity and accessibility. As Tzouganatou (2022) has pointed out, democratizing AI involves making emerging technologies and automation more inclusive through a human-centered approach. This approach also embodies the diversity of the digital cultural heritage and helps achieve different degrees of openness, all vital to increasing access. In the same way, Hermann (2022) has explained that democratizing AI benefits vulnerable groups, especially those lacking digital literacy or financial literacy, by improving the services’ availability, the customer experience, and insights into how decisions are made about the services offered. Foffano et al. (2023) also argue for ensuring a commitment to fostering AI development for positive social change. Inclusive AI is particularly important in addressing the needs of underrepresented groups, such as aging populations, who have often been overlooked in the digital and AI revolutions (Stypinska 2023). The development of AI technologies like deep learning and the proliferation of AI in society underscore the urgency of addressing ageism in AI. Frameworks that emphasize humanistic principles, such as AI for People, hold significant potential for incorporating the perspectives of vulnerable and underrepresented groups when designing and applying AI systems.
If developments in AI are to contribute effectively to a just society, AI will need to be more inclusive, making access to information easier for everyone (Prabhakar Rao and Siva Prasad 2021). AI facilitates access to information, helps citizens play a more active part in decision-making processes, further democratizes information sharing, and promotes social justice. For example, AI-driven speech and language technologies can improve access to information for marginalized communities through processes such as automatic translation into regional or local languages. This capacity breaks down language barriers and opens up content to a broader range of cultural contexts and social groups. Moreover, AI makes information more accessible for persons with disabilities by automatically providing alternative formats, such as text-to-audio or summaries. Additionally, AI itself helps spread verifiable information by automating fact-checking processes. Noorman and Swierstra (2023) further argue that the latest technological shift has accelerated the democratization of AI, enabling it to be applied to some of humanity’s greatest problems, including climate change and chronic disease. When AI is decentralized, it becomes much easier to build a collaborative infrastructure that allows different AI systems to coordinate for the greater good of humanity. As Montes and Goertzel (2019) have noted, this underlines AI’s wide-ranging potential to contribute to social good across many dimensions, from inclusivity and accessibility to global issues.

3.2.2. Against AI Democratization

While AI can democratize content generation and make information more accessible, the same technology can spread misinformation at scale. AI provides personalized learning in education but can also widen the digital divide, leaving sections of society behind. Other remaining concerns include unfair competition within the market, misuse of data, data manipulation, and changing relations in human–machine interactions, all of which require careful consideration and further research. Moreover, there is the risk of deepening existing inequalities in AI access and development, especially for under-resourced communities.
Despite AI systems’ rapid and impressive development, several untrustworthy aspects have also been revealed. H. Liu et al. (2022) highlight that AI can unintentionally cause harm by making unreliable decisions in critical situations or by undermining fairness through unintended discrimination against certain groups. These vulnerabilities can render AI systems unusable and lead to severe adverse economic and security consequences. The lack of trust in AI has become a significant barrier to its wider adoption and its potential economic value. Himmelreich (2023) argues that the democratization of AI is built on weak foundations, as it fails to address the need for legitimate governance and often overlaps with existing structures. He suggests that rather than broadening participation, democratization efforts should focus on improving the democratic quality of decision-making processes within administrative and executive frameworks.
Additional concerns exist around new AI technologies, such as generative AI chatbots and advanced machine learning techniques. These tools may enable companies to collect excessive amounts of personal data, potentially exploiting consumer biases or vulnerabilities through price discrimination or privacy violations, contributing to what Capraro et al. (2024) term “surveillance capitalism”. Ramaul et al. (2024) note that the quality of responses generated by AI systems depends heavily on the clarity and specificity of the user’s input, meaning that ambiguous queries can lead to inaccurate or irrelevant results.
Moon (2023) also raises concerns about the risks associated with the inclusive use of AI, such as job losses due to automation, privacy violations, the spread of “deepfakes”, and algorithmic biases caused by poor-quality data. Sundberg and Holmström (2023) add that the democratization of AI, especially in no-code AI development, cannot solve issues related to biased or inadequate data. Regulation and intervention are urgently needed to mitigate these risks, which limit the spread of AI democratization (Capraro et al. 2024). This highlights the importance of establishing appropriate regulations and ethical guidelines to ensure that AI advances benefit society while safeguarding justice and protecting individual rights (Rakowski and Kowaliková 2024). Norms and standards should be established both at a high level and for specific decisions in AI development, such as the type of data used to train models. Future research on AI for social good will therefore require further advances in balancing AI’s specific social, economic, and ethical advantages and challenges.

3.2.3. Deploying AI

Implementing AI is complex, requiring deep knowledge of business goals and the unique challenges an organization is trying to solve. Applying no-code AI to such challenges helps democratize AI for more users (Bagrow 2020; Panda et al. 2024; Sundberg and Holmström 2023). No-code AI enables even nontechnical users to create and deploy an AI model with minimal coding, bridging the gap between business and technical experts. This can drive faster problem-solving, better machine learning operations, and easier infrastructure management. Strong interoperability and sound data governance and protection are key for no-code AI to be integrated effectively into organizations (Sundberg and Holmström 2023). With no-code AI, companies can construct ambidextrous machine learning systems that are sensitive to local conditions while maintaining effective communication with global AI ecosystems. Sundberg and Holmström (2023) also propose overcoming the challenges of no-code AI adoption along three dimensions: embedding no-code tools within AI strategies, creating collaboration between managers and technical experts that aligns business objectives with data and model development, and leveraging no-code AI for responsible AI.
In developing a democratic approach to AI, Dobbe et al. (2021) advanced a framework that calls on AI practitioners and designers to adopt a “split identity” that balances the craft of design with reflection on and critique of AI systems. This approach helps ensure better and more ethical AI systems. According to Victor et al. (2023), the democratization of AI, though associated with great benefits in terms of broader access and more equal opportunities to contribute, faces several pitfalls concerning potential biases, data privacy, and the danger of misinformation in AI development. These challenges indicate the urgent need for careful oversight as access to AI widens. Further, AI learning itself needs democratization if AI development is to have an equitable impact (Luchs et al. 2023). A rapid rise in AI’s capabilities does not guarantee its effective utilization. While AI tools have the potential to expand human capabilities, their actual implementation depends on people being able to understand when, where, and how to use them properly, or, in other words, using them in the right context (Gursoy and Kakadiaris 2023; Luchs et al. 2023; Robertson et al. 2024; Sjödin et al. 2021).

3.2.4. Regulating AI

Given the calls for AI democratization, there may be more examples of AI systems causing harm to society than of systems meeting this potential (H. Liu et al. 2022). Such harm has prompted several guidelines for ethical AI. In this space, protection from AI’s impact in a social context should be ensured by considering AI governance and the principles that support democratization.
AI systems must adhere to key principles, including clear justifications, promotion of equality and human rights, participatory design processes, technical safeguards, and openness to validation (Züger and Asghari 2023). Moreover, it is essential to embed ethical values within these technologies to ensure responsible and equitable outcomes, especially since government oversight in the AI era has been both scarce and neglected (Cremer and Whittlestone 2021; Nzobonimpa and Savard 2023; Züger and Asghari 2023). If AI is to benefit society broadly, there is an urgent need to establish democratic control over its development, particularly for those who will be directly impacted. Consequently, guidelines are necessary to protect citizens from harmful decision-making and the abusive use of AI by governments or private businesses, which could infringe on individual freedom and dignity (Fukuda-Parr and Gibbons 2021).
AI governance presents a multifaceted challenge due to the wide array of actors involved, the rapid pace of technological advancement, and the perceived inevitability of the technology itself. In this context, the report by Kuziemski and Misuraca (2020) outlines seven critical requirements for ethical AI development: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. Additionally, as public opinion significantly influences the development and adoption of modern technologies, it is crucial to consider how ethical issues surrounding AI are portrayed in the media. Related to this, previous findings from Ouchchy et al. (2020) show that the media presents a balanced and neutral view of AI ethics, offering a practical foundation for future discussions and decisions.

3.2.5. Modes of AI for Social Good

Democratization of AI for social good can lead to socially desirable outcomes that were once unattainable, unaffordable, or less efficient and effective to achieve. It brings unprecedented opportunities on multiple fronts: AI presents vast opportunities across sectors and is particularly valuable in addressing increasingly global, complex, and interconnected challenges. For instance, it can play a critical role in improving health outcomes and reducing environmental risks (Cowls et al. 2021; Floridi et al. 2020).
Here, based on the findings from earlier work, we have divided the context into several categories based on their practical implications. These categories include information/news generation, marketing and customer engagement, disease diagnoses, clinical care and digital health assistants, disability accessibility, energy monitoring and measurement, intelligence mentoring/assisting, simulation-based education with different AI approaches, and mediated communication, as shown in Table 10.

Information/News Generator

Democratization of generative AI in the information sphere may democratize both access to and the creation of content (Capraro et al. 2024). In business, generative AI can also help uncover relevant and timely consumer preferences through sentiment analysis, for example of trending topics on platforms such as vlogs and blogs (Cui et al. 2024). However, it might also multiply the misinformation being produced and disseminated. A worry increasingly shared is that improved generative AI, combined with sophisticated machine learning, will allow companies to leverage excessive amounts of personal data.
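To make the sentiment-analysis idea concrete, the following is a deliberately minimal, lexicon-based sketch in Python; the word lists and sample posts are invented for illustration and do not come from any study cited here, and production systems would use far richer models.

```python
# Minimal lexicon-based sentiment scoring (illustrative only).
# The POSITIVE/NEGATIVE word sets and the sample posts are hypothetical.
POSITIVE = {"love", "helpful", "easy", "great"}
NEGATIVE = {"hate", "confusing", "slow", "bad"}

def sentiment_score(text: str) -> int:
    """Return (#positive words - #negative words) in a text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "love this new assistant, so helpful and easy",
    "the interface is confusing and slow",
]
scores = [sentiment_score(p) for p in posts]
print(scores)  # → [3, -2]: the first post scores positive, the second negative
```

Aggregating such scores over many posts is one simple way a business might track how consumer sentiment toward a topic trends over time.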

Marketing and Customer Engagement

According to the World Economic Forum’s State of Social Enterprise 2024, social enterprises are a major economic and social force, and providing micro-funding helps marginalized groups start small businesses (The State of Social Enterprise: A Review of Global Data 2013–2023 2024). AI can help small–medium enterprises (SMEs) with limited access to capital by offering cost-effective solutions like personalized marketing strategies and automated customer service through virtual assistants (Cui et al. 2024). In the social good context, democratization of AI in business can bridge commercial objectives with social good by fostering inclusivity, accessibility, and equity in consumer interactions. Prior research by Hermann (2022) focuses on the design of AI technologies aimed at enhancing service accessibility, optimizing customer experiences and interactive journeys, and dynamically supporting consumer decision-making, particularly for vulnerable populations. Furthermore, the advancement of AI technology enables SMEs to harness its potential for streamlining work processes and fostering innovation, thereby enhancing their product offerings and bolstering their long-term competitiveness (Rajaram and Tinguely 2024).

Disease Diagnoses and Digital Health Assistants

Democratization of AI in healthcare has concentrated on clinical care, while the availability of medical images has been pivotal in extending research on how AI can help diagnose diseases or provide clinical treatment and prediction (Hoff 2023; Holzmeyer 2021). In that context, the most common application is disease diagnosis. Diseases have also been diagnosed, monitored, and treated by AI-powered smartphone apps for patients with different health issues. However, the quality of the training data is of key importance. Regarding equity in precision medicine, several critiques have highlighted that biased datasets can produce inequitable outcomes, especially when resources are diverted away from structural and social determinants of health (Capasso and Umbrello 2022; Tomašev et al. 2020). Designing and deploying AI-driven digital personal assistants in healthcare should therefore be guided by responsible innovation principles that avoid harm and actively contribute to positive outcomes.

Disability Accessibility

In seeking to make AI truly accessible, Chemnad and Othman (2024) conducted a bibliometric analysis and a literature review that underlined the importance of readily accessible AI in avoiding exclusion and discrimination. Thus, there is a need for an overarching digital accessibility strategy that covers disabilities of different natures: AI for the visually impaired, persons with speech and hearing impairments, autism spectrum disorder, neurological disorders, and motor impairments. The reviewed literature emphasizes the potential for some AI applications to increase mobility, safety, and quality of life. Their contribution to users can involve many facets; object detection, for instance, may help people with navigation, education, and access to social media. For speech and hearing impairments, AI technologies are categorized based on the kind of benefit conferred. These developments underpin AI’s more general role in improving accessibility and enhancing quality of life for people with various kinds of disabilities.

Energy Monitoring and Measurement

Cowls et al. (2021) give promising initial evidence that AI is actively being used to fight climate change and associated challenges, focusing on democratizing and further developing AI for this important purpose. Meanwhile, M. L. How et al. (2020a) present how AI can be democratized using a user-friendly, human-centered probabilistic reasoning approach, especially in the analysis of Environmental Performance Index (EPI) data associated with sustainability. Such a human-centered approach provides the cognitive scaffolding that predisposes analysts toward AI thinking and informs policy decisions related to sustainability.

Intelligence Mentor/Assistant

Democratization of AI-based technologies, although promising personalized learning experiences, calls into question the legitimacy of online assessments and academic integrity. AI is often considered a tool that can replicate and even surpass the roles of teachers, tutors, mentors, or educational administrators (Kazimzade et al. 2019; Schiff 2021). In this regard, it is vital to weigh the broader consequences of AI for education, from pedagogy and curriculum to teacher roles, automation, international development, ownership of educational choices, and behavioral manipulation. Kazimzade et al. (2019) emphasize that the design of educational AI assistants should consider such diverse aspects as personality, preference, ability/disability, cultural and demographic background, and learners’ interactions during learning. Merging accessible user-centered systems with assistive technologies might be a meaningful step toward inclusion in education (Ifelebuegu 2023).

Simulation-Based Education

Simulation-based education has been recognized as an innovative approach to addressing evolving needs in field education (Asakura et al. 2020). The limitations of practicum sites, where students typically receive hands-on training through direct observation and supervision that integrates theory with practice, are well documented. Many social work programs now incorporate simulation as a teaching method to enhance student readiness, helping students build foundational skills before entering their practicum.

AI-Mediated Communication

Given the prospect that AI technologies and tools might offer many benefits, a study by Goldenthal et al. (2021) focuses on the gap between the availability and the accessibility of AI-mediated communication tools. AI-mediated communication tools are defined as those in which intelligent agents support the communicator in interpersonal communication. The researchers distinguish six types of AI functionality: voice-assisted communication, language correction, predictive text suggestion, transcription, translation, and personalized language learning.

4. Discussion

4.1. Trends in AI for Social Good

The current study contributes to existing knowledge on AI democratization for social good by examining 66 articles extracted from Scopus, WoS, and PubMed. A bibliometric analysis was conducted to investigate the most influential articles, journals, countries, and authors (RQ1). Notably, Science emerged as the most influential journal in the global context. In addition, Luciano Floridi, whose interests include AI ethics, digital ethics, and information ethics, was identified as a prominent author in the field of AI for social good. Moreover, the analysis reveals that the USA leads research on AI for social good: the USA is a major player globally, and there has been a notable surge in academic and industrial research within the USA focusing on AI for social good. Furthermore, analyzing authors’ keywords in the bibliometric analysis revealed prominent modes of AI application, such as information/news generation, marketing and customer engagement, disease diagnoses, clinical care and digital health assistants, disability accessibility, energy monitoring and measurement, intelligence mentoring/assisting, simulation-based education with different AI approaches, and mediated communication (RQ2).
The key themes in democratization of AI for social good (RQ3) were effectively identified through the examination of keyword co-occurrence in a bibliometric analysis and systematic literature review. According to the time and intensity of keyword emergence, ethics, generative AI, and technology are the research hotspots. The need for ethics arises from the urgent requirement to establish appropriate regulations and ethical guidelines to manage the spread of AI democratization, particularly to minimize risk (Gianni et al. 2022; Hermann 2022; Ouchchy et al. 2020; Rakowski and Kowaliková 2024; Saeidnia 2023). Existing research also explores generative AI as the most popular research trend in the field of democratizing AI for social good, with the ability to create new and original information output (Rajaram and Tinguely 2024; Robertson et al. 2024; Victor et al. 2023).
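The keyword co-occurrence technique underlying this theme identification can be sketched very simply: count how often pairs of author keywords appear in the same article, and treat frequent pairs as candidate hotspots. The Python sketch below is illustrative only; the keyword lists are invented, and tools such as CiteSpace perform a far richer network and burst analysis than this.

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists for a handful of articles; a real
# analysis would parse these from Scopus/WoS/PubMed export files.
articles = [
    ["artificial intelligence", "ethics", "governance"],
    ["artificial intelligence", "ethics", "generative ai"],
    ["education", "ethics", "generative ai"],
    ["artificial intelligence", "governance", "social good"],
]

# Count how often each pair of keywords co-occurs within an article.
pair_counts = Counter()
for keywords in articles:
    for pair in combinations(sorted(set(keywords)), 2):
        pair_counts[pair] += 1

# The most frequent pairs suggest candidate research hotspots.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```

On this toy input, pairs such as ("artificial intelligence", "ethics") surface at the top, mirroring how ethics emerges as a hotspot in the full analysis.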

4.2. Implications for Future Research in Social Sciences

Given the complexity and multifaceted nature of work on democratizing AI for social good, research efforts in this area must be inherently interdisciplinary. While integrating various perspectives is beneficial, discipline-specific lenses remain crucial for providing depth, rigor, and clarity. These lenses ensure that research is grounded in established knowledge and methodologies, help identify specific gaps, and facilitate effective communication and collaboration among experts. Adopting an interdisciplinary framework with a solid social science foundation is essential for enabling social science researchers and practitioners to move beyond the roles of informed commentators or critical readers. It empowers them to understand and address the implications of AI technologies more deeply. This approach ensures that their contributions are knowledge-based, leading to meaningful participation in research and development and fostering socially responsible and equitable innovations.
Inspired by the findings from bibliometric and thematic analyses of core texts, we have identified five key areas where social sciences can make significant contributions to the field of AI for social good (RQ4). This discussion outlines these areas and explores how social sciences can play a pivotal role in democratizing AI to ensure its benefits are equitably distributed across society.

4.2.1. Addressing Real-World Social Challenges

AI’s potential to tackle global challenges presents a significant research opportunity. With their nuanced understanding of various social challenges, social scientists can focus on how AI can be tailored to address specific societal issues rather than merely demonstrating novel AI applications. By integrating insights from social science, researchers can ensure that AI developments are aligned with pressing social needs, such as mental health, resource distribution, and environmental sustainability, thus ensuring that technological advances benefit society.
The research trend reflected in the bibliometric analysis (Section 3.1.7) reveals that much of the influential research on AI for social good is concentrated in these critical areas, demonstrating how AI can be a powerful tool for addressing real-world social challenges. The thematic analysis (Section 3.2.1) highlights that AI must prioritize inclusivity, ensuring that emerging technologies are accessible and tailored to meet the diverse needs of society. AI-driven technologies, such as language translation tools, are recognized for their potential to break down barriers and improve access to information for disadvantaged groups, such as marginalized communities. This is crucial for addressing societal challenges like health equity and environmental sustainability, aligning AI developments with pressing social needs (Hermann 2022; Prabhakar Rao and Siva Prasad 2021). Social science researchers are essential for understanding the societal impacts of AI and designing systems that bridge rather than widen societal gaps; their contributions help ensure that AI serves as a tool for promoting fairness and justice (Stypinska 2023) and play a crucial role in connecting the affordances of AI with social needs.

4.2.2. Shaping Ethical AI Development

As AI becomes increasingly pervasive, the development of robust ethical frameworks is paramount. Future research needs to prioritize the creation of ethical guidelines that prevent data misuse, ensure privacy, and promote transparency. Social science researchers are uniquely positioned to explore how these frameworks can be adapted to diverse cultural and regional contexts, making AI governance more globally relevant and equitable. This also includes investigating the impact of biased AI systems and identifying strategies for mitigating these biases, ensuring that AI systems do not perpetuate existing inequalities.
The research trend reflected in the bibliometric analysis shows that ethical considerations, particularly around AI governance and AI ethics, are already a major focus within the research community. Additionally, the thematic analysis discusses the importance of developing robust ethical frameworks for AI systems to mitigate risks of bias and unintended discrimination (Section 3.2.2). These frameworks are essential to prevent the misuse of data and protect vulnerable populations from exploitation. Ethical AI development should be governed by principles that promote fairness, privacy, and accountability, and these frameworks must be adaptable to various cultural and regional contexts (H. Liu et al. 2022; Moon 2023). Researchers should also ensure that these frameworks make AI governance inclusive and responsible, addressing issues of transparency and fairness (Capraro et al. 2024; Ramaul et al. 2024).
By incorporating perspectives on human behavior, societal structures, and ethics, social science researchers can help ensure that AI systems are designed to serve all segments of society. Their work in creating ethical frameworks is crucial for preventing the exploitation of vulnerable groups and ensuring that AI technologies are deployed responsibly. This involves not only addressing issues of bias and fairness but also promoting inclusivity and accountability in AI development.

4.2.3. Facilitating Public Participation in AI Governance

AI governance is a critical research area that requires a focus on promoting fairness, accountability, and transparency. Social science researchers can play a pivotal role by investigating governance models that incorporate public participation, ensuring that those most affected by AI systems have a voice in their development. This approach aligns AI technologies with public needs and enhances societal trust in AI systems.
The need for such inclusive governance is supported by bibliometric analysis, which points to the growing body of research on AI governance. Influential works in this area have focused on promoting public participation in governance, with a particular emphasis on ensuring that AI systems reflect the needs and values of society. The prominence of AI governance in the trend analysis (Section 3.1.7) underscores the importance of developing governance frameworks that are participatory and transparent. The thematic analysis reveals the concerns about public participation in AI governance (Section 3.2.3). Public consultations, citizen assemblies, and participatory design workshops are noted as critical tools for involving affected communities in decision-making processes. This participatory approach ensures that AI governance models are transparent and reflect societal needs, promoting fairness and accountability (Cupać et al. 2024). Similarly, the concerns about the importance of including diverse voices in AI governance to build societal trust (Section 3.2.4) are paramount. This ensures that AI governance is democratic and responsive to the needs of underrepresented groups, fostering greater transparency (Hodgson et al. 2022).
The governance of AI systems will be a crucial area of focus for social science researchers. They can examine how different regions develop AI regulations and propose best practices promoting transparency, fairness, and accountability. This involves a comparative analysis of existing governance frameworks and identifying successful models that can be adapted and implemented in diverse contexts. Public participation in AI governance can take various forms, including public consultations, participatory design workshops, and citizen assemblies. Social scientists can design and evaluate these participatory methods to ensure they are effective and inclusive. By doing so, they can help create governance structures that are more democratic and responsive to the needs of all societal groups, particularly those who are often marginalized in technological decision-making. Furthermore, social scientists can contribute to the development of AI by engaging with stakeholders from various sectors, including government, industry, and civil society.

4.2.4. Developing AI Literacy, Accessibility, and Social Inclusivity

For AI to truly benefit society, it must be accessible and understood by a diverse demographic, including those from underserved populations. AI can democratize access to critical services such as healthcare and education, but this requires a concerted effort to develop AI literacy that empowers individuals to engage meaningfully with AI technologies.
Research indicates a growing focus on AI literacy and accessibility, particularly in the context of underserved populations. The bibliometric analysis reveals an increase in research contributions from the fields of social sciences and information sciences (Section 3.1.2), which reflects the importance of developing inclusive AI literacy frameworks. These frameworks should be accessible to a broad audience, considering factors like educational background and cultural context. The thematic analysis also reveals the importance of the discussion about developing AI literacy, particularly for underserved populations (Section 3.2.1). It stresses that AI literacy programs should not only focus on technical skills but also address the societal implications of AI, helping individuals understand how AI works and its potential impacts on society (Noorman and Swierstra 2023; Prabhakar Rao and Siva Prasad 2021). It also emphasizes the need to make AI literacy programs accessible to a broad audience, considering cultural and educational backgrounds. This ensures that AI benefits are distributed equitably, fostering a more informed and critically engaged public (Tzouganatou 2022).
AI literacy should encompass critical thinking and skills to effectively counter the issue of fake news and misinformation (see the worries noted in the section on Information/News Generator). Additionally, understanding the limitations imposed by the hegemony behind so-called no-code AI is also crucial (see Section 3.2.2). While no-code AI platforms democratize access to AI technology, they often obscure the complexities and biases inherent in their algorithms, which can perpetuate existing power structures and stifle genuine innovation. Therefore, it is imperative to cultivate a form of literacy that not only celebrates the user-friendliness of AI but also counters misinformation and critically assesses the broader implications of these technologies.
In terms of gender inclusiveness, a noteworthy observation is warranted. While male writers dominate the computing sciences, prominent female authors can be identified in the AI for social good domain. For example, Virginia Eubanks, through her work “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor”, sheds light on the intersection of technology and social justice (Eubanks 2018). Similarly, Ouchchy et al. (2020) revealed the portrayal of ethical issues of AI in the media, and the second author of that article is female. This does not imply that gender inclusiveness has been achieved, but it may suggest that this domain has the potential for increased gender diversity, revealing a key area that warrants further attention.
Finally, it is reasonable to conclude that developing AI literacy is not merely a top-down initiative akin to working from a technical manual; instead, it requires a deep understanding of human cognition, behaviors, and learning theories. This involves designing educational materials and programs that are engaging and relevant to diverse audiences. AI literacy programs should emphasize critical thinking, ethical considerations, and social impact. Social scientists can investigate effective ways to teach AI literacy, considering factors such as cultural differences, educational backgrounds, and cognitive abilities.

4.2.5. Promoting Interdisciplinary Collaboration in AI Research

AI for social good is an inherently interdisciplinary field, requiring inputs from technologists, ethicists, policymakers, and social scientists. Future research should focus on fostering collaboration across these disciplines to ensure that AI systems are developed with a comprehensive understanding of their social impact.
The bibliometric analysis highlights the interdisciplinary nature of research on AI for social good, with contributions from diverse fields such as ethics, business, and the social sciences, published in diverse journals by authors from diverse disciplines (Section 3.1.1, Section 3.1.2, Section 3.1.3, Section 3.1.4, Section 3.1.5, Section 3.1.6 and Section 3.1.7). The thematic analysis also underscores the interdisciplinary nature of AI research for social good (Section 3.2.5). It stresses the need for collaboration across disciplines: social scientists, technologists, and policymakers must work together to develop AI applications with a comprehensive understanding of their impacts (Asakura et al. 2020; Hermann 2022; Kazimzade et al. 2019).
Social scientists can take the initiative to develop communication platforms that facilitate interdisciplinary dialogue and collaboration. These platforms can serve as hubs for democratizing AI for social good, bringing together technologists, policymakers, and ethicists to ensure that AI systems are designed and deployed in ways that are both technically sound and socially responsible. By promoting such interdisciplinary research, social scientists can help balance technological innovations with social responsibility.

5. Conclusions

In conclusion, the democratization of AI for social good presents a transformative opportunity that goes beyond technological advancements, driven by interdisciplinary collaborations and ethical considerations. Social scientists play a crucial role in shaping AI’s future, offering insights that ensure AI technologies are inclusive, equitable, and aligned with societal needs. As AI continues to evolve, ongoing dialogue between technologists, policymakers, and social scientists is essential to develop socially responsible innovations. This collaborative effort will guide AI toward addressing global challenges, empowering communities, and fostering a more just and equitable future for all.

Author Contributions

Conceptualization, C.C.; methodology, C.C.; software, A.N.; validation, A.N.; formal analysis, A.N.; investigation, C.C. and A.N.; resources, C.C.; data curation, A.N.; writing—original draft preparation, C.C. and A.N.; writing—review and editing, C.C.; visualization, A.N.; supervision, C.C.; project administration, C.C.; funding acquisition, C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by an internal grant from Hong Kong Baptist University; project number RC-FNRA-IG/23-24/SOSC/02.

Institutional Review Board Statement

Not applicable (this study did not involve humans or animals).

Informed Consent Statement

Not applicable (this study did not involve humans or animals).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

Grammar checks and language expression suggestions were partially supported by Grammarly and ChatGPT-4o.

Conflicts of Interest

The authors declare no conflicts of interest. The preparation of the article was partially funded by a small internal grant from Hong Kong Baptist University. The funder had no involvement in the study’s design, data collection, analysis, or interpretation, nor in the writing of the manuscript or the decision to publish the findings.

References

  1. Alasadi, Elman A., and Carlos R. Baiz. 2023. Generative AI in Education and Research: Opportunities, Concerns, and Solutions. Journal of Chemical Education 100: 2965–71. [Google Scholar] [CrossRef]
  2. Asakura, Kenta, Katherine Occhiuto, Sarah Todd, Cedar Leithead, and Robert Clapperton. 2020. A Call to Action on Artificial Intelligence and Social Work Education: Lessons Learned from A Simulation Project Using Natural Language Processing. Journal of Teaching in Social Work 40: 501–18. [Google Scholar] [CrossRef]
  3. Astobiza, Aníbal Monasterio, Mario Toboso, Manuel Aparicio, and Daniel Lopez. 2021. AI Ethics for Sustainable Development Goals. IEEE Technology and Society Magazine 40: 66–71. [Google Scholar] [CrossRef]
  4. Ayanwale, Musa Adekunle, Ismaila Temitayo Sanusi, Owolabi Paul Adelana, Kehinde D. Aruleba, and Solomon Sunday Oyelere. 2022. Teachers’ readiness and intention to teach artificial intelligence in schools. Computers and Education: Artificial Intelligence 3: 100099. [Google Scholar] [CrossRef]
  5. Bagrow, James P. 2020. Democratizing AI: Non-expert design of prediction tasks. PeerJ Computer Science 6: e296. [Google Scholar] [CrossRef]
  6. Barki, Henri, Suzanne Rivard, and Jean Talbot. 1993. A Keyword Classification Scheme for IS Research Literature: An Update. MIS Quarterly 17: 209. [Google Scholar] [CrossRef]
  7. Barthel, Roland, and Roman Seidl. 2017. Interdisciplinary Collaboration between Natural and Social Sciences—Status and Trends Exemplified in Groundwater Research. PLoS ONE 12: e0170754. [Google Scholar] [CrossRef] [PubMed]
  8. Belter, Christopher W. 2015. Bibliometric indicators: Opportunities and limits. Journal of the Medical Library Association: JMLA 103: 219–21. [Google Scholar] [CrossRef] [PubMed]
  9. Berberich, Nicholas, Toyoaki Nishida, and Shoko Suzuki. 2020. Harmonizing Artificial Intelligence for Social Good. Philosophy and Technology 33: 613–38. [Google Scholar] [CrossRef]
  10. Bones, Helen, Susan Ford, Rachel Hendery, Kate Richards, and Teresa Swist. 2021. In the Frame: The Language of AI. Philosophy and Technology 34: 23–44. [Google Scholar] [CrossRef]
  11. Brignardello-Petersen, Romina, Nancy Santesso, and Gordon H. Guyatt. 2024. Systematic reviews of the literature: An introduction to current methods. American Journal of Epidemiology, kwae232. [Google Scholar] [CrossRef]
  12. Capasso, Marianna, and Steven Umbrello. 2022. Responsible nudging for social good: New healthcare skills for AI-driven digital personal assistants. Medicine, Health Care and Philosophy 25: 11–22. [Google Scholar] [CrossRef] [PubMed]
  13. Capraro, Valerio, Austin Lentsch, Daron Acemoglu, Selin Akgun, Aisel Akhmedova, Ennio Bilancini, Jean François Bonnefon, Pablo Brañas-Garza, Luigi Butera, Karen M. Douglas, and et al. 2024. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus 3: 191. [Google Scholar] [CrossRef] [PubMed]
  14. Chemnad, Khansa, and Achraf Othman. 2024. Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review. In Frontiers in Artificial Intelligence. Lausanne: Frontiers Media SA, vol. 7. [Google Scholar] [CrossRef]
  15. Cheng, Fei-Fei, Yu-Wen Huang, Hsin-Chun Yu, and Chin-San Wu. 2018. Mapping knowledge structure by keyword co-occurrence and social network analysis. Library Hi Tech 36: 636–50. [Google Scholar] [CrossRef]
  16. Chui, Michael, Martin Harryson, James Manyika, Roger Roberts, Rita Chung, A. van Heteren, and Pieter Nel. 2018. Notes from the AI Frontier: Applying AI for Social Good. New York: McKinsey Global Institute. [Google Scholar]
  17. Cole, Melissa, and David Avison. 2007. The potential of hermeneutics in information systems research. European Journal of Information Systems 16: 820–33. [Google Scholar] [CrossRef]
  18. Cowls, Josh, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi. 2021. A definition, benchmark and database of AI for social good initiatives. In Nature Machine Intelligence. Berlin: Nature Research, vol. 3, pp. 111–15. [Google Scholar] [CrossRef]
  19. Cremer, Carla Zoe, and Jess Whittlestone. 2021. Artificial canaries: Early warning signs for anticipatory and democratic governance of AI. International Journal of Interactive Multimedia and Artificial Intelligence 6: 100–9. [Google Scholar] [CrossRef]
  20. Cui, Yuanyuan (Gina), Patrick van Esch, and Steven Phelan. 2024. How to build a competitive advantage for your brand using generative AI. Business Horizons 67: 583–94. [Google Scholar] [CrossRef]
  21. Cupać, Jelena, Hendrik Schopmans, and İrem Tuncer-Ebetürk. 2024. Democratization in the age of artificial intelligence: Introduction to the special issue. Democratization 31: 899–921. [Google Scholar] [CrossRef]
  22. De Andrade, Ivan Martins, and Cleonir Tumelero. 2022. Increasing customer service efficiency through artificial intelligence chatbot. Revista de Gestão 29: 238–51. [Google Scholar] [CrossRef]
  23. de Fine Licht, Karl, and Jenny de Fine Licht. 2020. Artificial intelligence, transparency, and public decision-making. AI and Society 35: 917–26. [Google Scholar] [CrossRef]
  24. Dobbe, Roel, Thomas Krendl Gilbert, and Yonatan Mintz. 2021. Hard choices in artificial intelligence. Artificial Intelligence 300: 103555. [Google Scholar] [CrossRef]
  25. Dong, Xuanyi, David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys. 2024. Automated Deep Learning: Neural Architecture Search Is Not the End. Foundations and Trends® in Machine Learning 17: 767–920. [Google Scholar] [CrossRef]
  26. Du, Hua, Yanchao Sun, Haozhe Jiang, A. Y. M. Atiquil Islam, and Xiaoqing Gu. 2024. Exploring the effects of AI literacy in teacher learning: An empirical study. Humanities and Social Sciences Communications 11: 559. [Google Scholar] [CrossRef]
  27. Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Picador, St Martin’s Press. [Google Scholar]
  28. Feher, Katalin, Lilla Vicsek, and Mark Deuze. 2024. Modeling AI Trust for 2050: Perspectives from media and info-communication experts. AI & Society 39: 2933–46. [Google Scholar] [CrossRef]
  29. Fereday, Jennifer, and Eimear Muir-Cochrane. 2006. Demonstrating Rigor Using Thematic Analysis: A Hybrid Approach of Inductive and Deductive Coding and Theme Development. International Journal of Qualitative Methods 5: 80–92. [Google Scholar] [CrossRef]
  30. Floridi, Luciano, Josh Cowls, Thomas C. King, and Mariarosaria Taddeo. 2020. How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics 26: 1771–96. [Google Scholar] [CrossRef] [PubMed]
  31. Foffano, Francesca, Teresa Scantamburlo, and Atia Cortés. 2023. Investing in AI for social good: An analysis of European national strategies. AI and Society 38: 479–500. [Google Scholar] [CrossRef]
  32. Følstad, Asbjørn, Theo Araujo, Effie Lai-Chong Law, Petter Bae Brandtzaeg, Symeon Papadopoulos, Lea Reis, Marcos Baez, Guy Laban, Patrick McAllister, Carolin Ischen, and et al. 2021. Future directions for chatbot research: An interdisciplinary research agenda. Computing 103: 2915–42. [Google Scholar] [CrossRef]
  33. Frey, William R., Desmond U. Patton, Michael B. Gaskell, and Kyle A. McGregor. 2020. Artificial intelligence and inclusion: Formerly gang-involved youth as domain experts for analyzing unstructured twitter data. Social Science Computer Review 38: 42–56. [Google Scholar] [CrossRef]
  34. Fukuda-Parr, Sakiko, and Elizabeth Gibbons. 2021. Emerging Consensus on ‘Ethical AI’: Human Rights Critique of Stakeholder Guidelines. Global Policy 12: 32–44. [Google Scholar] [CrossRef]
  35. Gianni, Robert, Santtu Lehtinen, and Mika Nieminen. 2022. Governance of Responsible AI: From Ethical Guidelines to Cooperative Policies. Frontiers in Computer Science 4: 873437. [Google Scholar] [CrossRef]
  36. Goldenthal, Emma, Jennifer Park, Sunny X. Liu, Hannah Mieczkowski, and Jeffrey T. Hancock. 2021. Not All AI are Equal: Exploring the Accessibility of AI-Mediated Communication Technology. Computers in Human Behavior 125: 106975. [Google Scholar] [CrossRef]
  37. Gubrium, Jaber, James Holstein, Amir Marvasti, and Karyn McKinney. 2012. The SAGE Handbook of Interview Research: The Complexity of the Craft. Thousand Oaks: SAGE Publications, Inc. [Google Scholar] [CrossRef]
  38. Gursoy, Furkan, and Ioannis A. Kakadiaris. 2023. Artificial intelligence research strategy of the United States: Critical assessment and policy recommendations. Frontiers in Big Data 6: 1206139. [Google Scholar] [CrossRef] [PubMed]
  39. Hermann, Erik. 2022. Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. Journal of Business Ethics 179: 43–61. [Google Scholar] [CrossRef] [PubMed]
  40. Hermann, Erik, G. Y. Williams, and S. Puntoni. 2023. Deploying artificial intelligence in services to AID vulnerable consumers. Journal of the Academy of Marketing Science 52: 1431–51. [Google Scholar] [CrossRef]
  41. Himmelreich, Johannes. 2023. Against “Democratizing AI”. AI & Society 38: 1333–46. [Google Scholar]
  42. Hodgson, David, Sophie Goldingay, Jennifer Boddy, Sharlene Nipperess, and Lynelle Watts. 2022. Problematising Artificial Intelligence in Social Work Education: Challenges, Issues and Possibilities. The British Journal of Social Work 52: 1878–95. [Google Scholar] [CrossRef]
  43. Hoff, Jan Luuk. 2023. Unavoidable futures? How governments articulate sociotechnical imaginaries of AI and healthcare services. Futures 148: 103131. [Google Scholar] [CrossRef]
  44. Holden, G., G. Rosenberg, and K. Barker. 2005. Tracing Thought Through Time and Space. Social Work in Health Care 41: 1–34. [Google Scholar] [CrossRef] [PubMed]
  45. Holzmeyer, Cheryl. 2021. Beyond ‘AI for Social Good’ (AI4SG): Social transformations—Not tech-fixes—For health equity. Interdisciplinary Science Reviews 46: 94–125. [Google Scholar] [CrossRef]
  46. How, Meng-Leong, Sin-Mei Cheah, Aik Cheow Khor, and Yong-Jiet Chan. 2020a. Artificial intelligence-enhanced predictive insights for advancing financial inclusion: A human-centric ai-thinking approach. Big Data and Cognitive Computing 4: 8. [Google Scholar] [CrossRef]
  47. How, Meng-Leong, Sin-Mei Cheah, Yong-Jiet Chan, Aik Cheow Khor, and Eunice Mei Ping Say. 2020b. Artificial Intelligence-Enhanced Decision Support for Informing Global Sustainable Development: A Human-Centric AI-Thinking Approach. Information 11: 39. [Google Scholar] [CrossRef]
  48. Ifelebuegu, A. O. 2023. Rethinking online assessment strategies: Authenticity versus AI chatbot intervention. Journal of Applied Learning and Teaching 6: 385–92. [Google Scholar] [CrossRef]
  49. James, Alexandra, and Andrew Whelan. 2022. ‘Ethical’ artificial intelligence in the welfare state: Discourse and discrepancy in Australian social services. Critical Social Policy 42: 22–42. [Google Scholar] [CrossRef]
  50. Kazimzade, Gunay, Yasmin Patzer, and Niels Pinkwart. 2019. Artificial Intelligence in Education Meets Inclusive Educational Technology—The Technical State-of-the-Art and Possible Directions. Berlin/Heidelberg: Springer, pp. 61–73. [Google Scholar] [CrossRef]
  51. Keller, Perry, and Archie Drake. 2021. Exclusivity and paternalism in the public governance of explainable AI. Computer Law & Security Review 40: 105490. [Google Scholar] [CrossRef]
  52. Kuziemski, Maciej, and Gianluca Misuraca. 2020. AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecommunications Policy 44: 101976. [Google Scholar] [CrossRef]
  53. Leavy, Patricia. 2019. The 21st-Century Academic Landscape. In The Oxford Handbook of Methods for Public Scholarship. Edited by P. Leavy. Oxford: Oxford University Press, pp. 16–35. [Google Scholar] [CrossRef]
  54. Lee, Hye-Kyung. 2022. Rethinking creativity: Creative industries, AI and everyday creativity. Media, Culture & Society 44: 601–12. [Google Scholar] [CrossRef]
  55. Liu, Haochen, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil Jain, and Jiliang Tang. 2022. Trustworthy AI: A Computational Perspective. ACM Transactions on Intelligent Systems and Technology 14: 1–59. [Google Scholar] [CrossRef]
  56. Liu, Zheng. 2021. Sociological perspectives on artificial intelligence: A typological reading. Sociology Compass 15: e12851. [Google Scholar] [CrossRef]
  57. Luchs, Inga, Clemens Apprich, and Marcel Broersma. 2023. Learning machine learning: On the political economy of big tech’s online AI courses. Big Data and Society 10: 20539517231153806. [Google Scholar] [CrossRef]
  58. Marzi, Giacomo, Marco Balzano, Andrea Caputo, and Massimiliano M. Pellegrini. 2024. Guidelines for Bibliometric-Systematic Literature Reviews: 10 steps to combine analysis, synthesis and theory development. International Journal of Management Reviews 27: 81–103. [Google Scholar] [CrossRef]
  59. Mateos-Sanchez, Montserrat, Amparo Casado Melo, Laura Sánchez Blanco, and Ana M. Fermoso García. 2022. Chatbot, as Educational and Inclusive Tool for People with Intellectual Disabilities. Sustainability 14: 1520. [Google Scholar] [CrossRef]
  60. Miller, Tim. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267: 1–38. [Google Scholar] [CrossRef]
  61. Montes, Gabriel Axel, and Ben Goertzel. 2019. Distributed, decentralized, and democratized artificial intelligence. In Technological Forecasting and Social Change. Amsterdam: Elsevier Inc., vol. 141, pp. 354–58. [Google Scholar] [CrossRef]
  62. Moon, M. Jae. 2023. Searching for inclusive artificial intelligence for social good: Participatory governance and policy recommendations for making AI more inclusive and benign for society. Public Administration Review 83: 1496–505. [Google Scholar] [CrossRef]
  63. Moore, Jared. 2019. AI for Not Bad. Frontiers in Big Data 2: 32. [Google Scholar] [CrossRef]
  64. Mor Barak, Michàlle E. 2020. The Practice and Science of Social Good: Emerging Paths to Positive Social Impact. Research on Social Work Practice 30: 139–50. [Google Scholar] [CrossRef]
  65. Nasir, Osama, Rana Tallal Javed, Shivam Gupta, Ricardo Vinuesa, and Junaid Qadir. 2023. Artificial intelligence and sustainable development goals nexus via four vantage points. Technology in Society 72: 102171. [Google Scholar] [CrossRef]
  66. Noorman, Merel, and Tsjalling Swierstra. 2023. Democratizing AI from a Sociotechnical Perspective. Minds and Machines 33: 563–86. [Google Scholar] [CrossRef]
  67. Nzobonimpa, Stany, and Jean-François Savard. 2023. Ready but irresponsible? Analysis of the Government Artificial Intelligence Readiness Index. Policy and Internet 15: 397–414. [Google Scholar] [CrossRef]
  68. Ouchchy, Leila, Allen Coin, and Veljko Dubljević. 2020. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI and Society 35: 927–36. [Google Scholar] [CrossRef]
  69. Öztürk, Oğuzhan, Rıdvan Kocaman, and Dominik K. Kanbach. 2024. How to design bibliometric research: An overview and a framework proposal. Review of Managerial Science 18: 3333–61. [Google Scholar] [CrossRef]
  70. Page, Matthew J., Joanne E. McKenzie, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, Larissa Shamseer, Jennifer M. Tetzlaff, Elie A. Akl, Sue E. Brennan, and et al. 2021. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 372: n71. [Google Scholar] [CrossRef]
  71. Panda, Dhalabaleswar K., Vipin Chaudhary, Eric Fosler-Lussier, Raghu Machiraju, Amit Majumdar, Beth Plale, Rajiv Ramnath, Ponnuswamy Sadayappan, Neelima Savardekar, and Karen Tomko. 2024. Creating intelligent cyberinfrastructure for democratizing AI. AI Magazine 45: 22–28. [Google Scholar] [CrossRef]
  72. Prabhakar Rao, Jandhyala, and Rambhatla Siva Prasad. 2021. Tangible and Intangible Impact of AI Usage: AI for Information Accessibility. The International Review of Information Ethics 29: 1–7. [Google Scholar] [CrossRef]
  73. Rajaram, Kumaran, and Patrick Nicholas Tinguely. 2024. Generative artificial intelligence in small and medium enterprises: Navigating its promises and challenges. Business Horizons 67: 629–48. [Google Scholar] [CrossRef]
  74. Rakowski, Roman, and Petra Kowaliková. 2024. The political and social contradictions of the human and online environment in the context of artificial intelligence applications. Humanities and Social Sciences Communications 11: 289. [Google Scholar] [CrossRef]
  75. Ramaul, Laavanya, Paavo Ritala, and Mika Ruokonen. 2024. Creational and conversational AI affordances: How the new breed of chatbots is revolutionizing knowledge industries. Business Horizons 67: 615–27. [Google Scholar] [CrossRef]
  76. Robertson, Jeandri, Caitlin Ferreira, Elsamari Botha, and Kim Oosthuizen. 2024. Game changers: A generative AI prompt protocol to enhance human-AI knowledge co-construction. Business Horizons 67: 499–510. [Google Scholar] [CrossRef]
  77. Robila, Mihaela, and Stefan A. Robila. 2020. Applications of Artificial Intelligence Methodologies to Behavioral and Social Sciences. Journal of Child and Family Studies 29: 2954–66. [Google Scholar] [CrossRef]
  78. Saeidnia, Hamid Reza. 2023. Ethical artificial intelligence (AI): Confronting bias and discrimination in the library and information industry. Library Hi Tech News. [Google Scholar] [CrossRef]
  79. Schiff, Daniel. 2021. Out of the laboratory and into the classroom: The future of artificial intelligence in education. AI and Society 36: 331–48. [Google Scholar] [CrossRef] [PubMed]
  80. Seger, Elizabeth, Aviv Ovadya, Divya Siddarth, Ben Garfinkel, and Allan Dafoe. 2023. Democratising AI: Multiple Meanings, Goals, and Methods. Paper presented at AIES 2023—Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, August 8–10; pp. 715–22. [Google Scholar] [CrossRef]
  81. Serrano, Emilio, Mari Carmen Suárez-Figueroa, Jacinto González-Pachón, and Asunción Gómez-Pérez. 2019. Toward proactive social inclusion powered by machine learning. Knowledge and Information Systems 58: 651–67. [Google Scholar] [CrossRef]
  82. Sheikh, Haroon, Corien Prins, and Erik Schrijvers. 2023. Artificial Intelligence: Definition and Background. Berlin/Heidelberg: Springer, pp. 15–41. [Google Scholar] [CrossRef]
  83. Sit, Muhammed, and Ibrahim Demir. 2023. Democratizing Deep Learning Applications in Earth and Climate Sciences on the Web: EarthAIHub. Applied Sciences 13: 3185. [Google Scholar] [CrossRef]
  84. Sjödin, David, Vinit Parida, Maximilian Palmié, and Joakim Wincent. 2021. How AI capabilities enable business model innovation: Scaling AI through co-evolutionary processes and feedback loops. Journal of Business Research 134: 574–87. [Google Scholar] [CrossRef]
  85. Sreenivasan, Aswathy, and M. Suresh. 2024. Design thinking and artificial intelligence: A systematic literature review exploring synergies. International Journal of Innovation Studies 8: 297–312. [Google Scholar] [CrossRef]
  86. Stypinska, Justyna. 2023. AI ageism: A critical roadmap for studying age discrimination and exclusion in digitalized societies. AI and Society 38: 665–77. [Google Scholar] [CrossRef]
  87. Sundberg, Leif, and Jonny Holmström. 2023. Democratizing artificial intelligence: How no-code AI can leverage machine learning operations. Business Horizons 66: 777–88. [Google Scholar] [CrossRef]
  88. The State of Social Enterprise: A Review of Global Data 2013–2023. 2024. Cologny: World Economic Forum.
  89. Tomašev, Nenad, Julien Cornebise, Frank Hutter, Shakir Mohamed, Angela Picciariello, Bec Connelly, Danielle C. M. Belgrave, Daphne Ezer, Fanny Cachat van der Haert, Frank Mugisha, and et al. 2020. AI for social good: Unlocking the opportunity for positive impact. Nature Communications 11: 2468. [Google Scholar] [CrossRef] [PubMed]
  90. Tzouganatou, Angeliki. 2022. Openness and privacy in born-digital archives: Reflecting the role of AI development. AI and Society 37: 991–99. [Google Scholar] [CrossRef]
  91. Victor, Bryan G., Rebeccah L. Sokol, Lauri Goldkind, and Brian E. Perron. 2023. Recommendations for Social Work Researchers and Journal Editors on the Use of Generative AI and Large Language Models. Journal of the Society for Social Work and Research 14: 563–77. [Google Scholar] [CrossRef]
  92. Züger, Theresa, and Hadi Asghari. 2023. AI for the public. How public interest theory shifts the discourse on AI. AI and Society 38: 815–28. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram.
Figure 2. Keyword co-occurrence visualization.
Figure 3. Keyword cluster.
Figure 4. The evolution of keywords over different time periods.
Table 1. PRISMA process results.
Database | Total | Eligible for Bibliometric Analysis | Included in the Systematic-Thematic Analysis
PubMed | 22 | 1 | 1
WOS | 623 | 103 | 44
SCOPUS | 995 | 77 | 21
TOTAL | 1640 | 181 | 66
Table 2. The 10 most globally cited articles.
No | Citations | Authors | Title | Publish Year | Journal Publication
1 | 537 | (Kuziemski and Misuraca 2020) | AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings | 2020 | Telecommunications Policy
2 | 389 | (Floridi et al. 2020) | How to Design AI for Social Good: Seven Essential Factors | 2020 | Science and Engineering Ethics
3 | 356 | (Sjödin et al. 2021) | How AI capabilities enable business model innovation: Scaling AI through co-evolutionary processes and feedback loops | 2021 | Journal of Business Research
4 | 326 | (Schiff 2021) | Out of the laboratory and into the classroom: The future of artificial intelligence in education | 2021 | AI & Society
5 | 291 | (Tomašev et al. 2020) | AI for social good: Unlocking the opportunity for positive impact | 2020 | Nature Communications
6 | 272 | (Hermann 2022) | Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective | 2022 | Journal of Business Ethics
7 | 256 | (Ouchchy et al. 2020) | AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media | 2020 | AI & Society
8 | 250 | (Liu et al. 2022) | Trustworthy AI: A Computational Perspective | 2022 | ACM Transactions on Intelligent Systems and Technology
9 | 240 | (Alasadi and Baiz 2023) | Generative AI in Education and Research: Opportunities, Concerns, and Solutions | 2023 | Journal of Chemical Education
10 | 235 | (de Fine Licht and de Fine Licht 2020) | Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy | 2020 | AI & Society
Table 3. The 10 most impactful articles in terms of centrality.
No. | Centrality 1 | Freq. 2 | Degree 3 | Authors | Article Title | Publish Year
1 | 0.03 | 10 | 62 | (Floridi et al. 2020) | How to Design AI for Social Good: Seven Essential Factors | 2020
2 | 0.02 | 9 | 64 | (Tomašev et al. 2020) | AI for social good: Unlocking the opportunity for positive impact | 2020
3 | 0.02 | 9 | 44 | (Cowls et al. 2021) | A definition, benchmark and database of AI for social good initiatives | 2021
4 | 0.01 | 7 | 41 | (Lee 2022) | Rethinking creativity: Creative industries, AI and everyday creativity | 2022
5 | 0.01 | 7 | 33 | (Sundberg and Holmström 2023) | Democratizing artificial intelligence: How no-code AI can leverage machine learning operations | 2023
6 | 0.01 | 9 | 31 | (Berberich et al. 2020) | Harmonizing Artificial Intelligence for Social Good | 2020
7 | 0.01 | 6 | 30 | (De Andrade and Tumelero 2022) | Increasing customer service efficiency through artificial intelligence chatbot | 2022
8 | 0.00 | 6 | 30 | (Liu et al. 2022) | Trustworthy AI: A Computational Perspective | 2022
9 | 0.00 | 3 | 28 | (Frey et al. 2020) | Artificial Intelligence and Inclusion: Formerly Gang-Involved Youth as Domain Experts for Analyzing Unstructured Twitter Data | 2020
10 | 0.00 | 3 | 25 | (Moore 2019) | AI for Not Bad | 2019
Note: 1 Betweenness centrality of an article (node), which reflects how often the node is included in the shortest paths of any pair of nodes in a given network (i.e., the eligible articles and all the references cited by them). The larger the number of betweenness centrality, the higher the influence of that node. 2 Frequency of an article indicates the total times cited by other articles in the network. 3 Degree is the number of connections an article has with other articles in the network, including both citing other articles and being cited by other articles in the network.
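The betweenness metric defined in the note can be made concrete with a small sketch. The snippet below (illustrative only; the article's analysis was performed in CiteSpace, and the node names here are hypothetical) computes normalized betweenness centrality and degree for a toy citation network in pure Python:

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(adj, s, t):
    """Enumerate every shortest path from s to t via breadth-first search."""
    dist, parents, queue = {s: 0}, {s: []}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:                    # first time v is reached
                dist[v], parents[v] = dist[u] + 1, [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:         # another equally short route
                parents[v].append(u)

    def build(v):                                # walk parent links back to s
        if v == s:
            return [[s]]
        return [path + [v] for u in parents[v] for path in build(u)]

    return build(t) if t in dist else []

def betweenness(adj):
    """Normalized share of shortest paths that pass through each node."""
    nodes, bc = list(adj), {v: 0.0 for v in adj}
    for s, t in combinations(nodes, 2):
        paths = all_shortest_paths(adj, s, t)
        for v in nodes:
            if paths and v not in (s, t):
                bc[v] += sum(v in p for p in paths) / len(paths)
    norm = (len(nodes) - 1) * (len(nodes) - 2) / 2   # node pairs excluding v
    return {v: round(score / norm, 3) for v, score in bc.items()}

# Hypothetical undirected citation network: an edge links two articles
# connected by citation within the analyzed corpus.
adj = {
    "Floridi2020": ["Tomasev2020", "Cowls2021"],
    "Tomasev2020": ["Floridi2020", "Cowls2021"],
    "Cowls2021":   ["Floridi2020", "Tomasev2020", "Lee2022"],
    "Lee2022":     ["Cowls2021"],
}
centrality = betweenness(adj)
degree = {v: len(neighbors) for v, neighbors in adj.items()}
print(centrality["Cowls2021"], degree["Cowls2021"])
```

Here the bridging node scores highest on both metrics, mirroring how Table 3 ranks articles that connect otherwise separate parts of the citation network.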
Table 4. The 10 most common subject categories.
Articles | Subject Category
48 | Computer Science
22 | Business and Economics
16 | Social Sciences—Other Topics
11 | Science and Technology—Other Topics
9 | Engineering
7 | Government and Law
6 | Telecommunications
6 | Information Science and Library Science
4 | History and Philosophy of Science
3 | Arts and Humanities—Other Topics
Table 5. The 10 most productive journals.
No. of Articles | Journal
29 | Science
26 | Nature
23 | Science and Engineering Ethics
22 | AI & Society
20 | Artificial Intelligence
20 | Big Data & Society
19 | arXiv
18 | Nature Machine Intelligence
18 | Nature Communications
17 | Computers in Human Behavior
Table 6. The 10 most productive institutions.
No. of Articles | Publishing Year | Institution
6 | 2020 | University of Oxford
6 | 2020 | Harvard University
4 | 2020 | Nanyang Technological University
3 | 2020 | China University of Petroleum (East China)
2 | 2023 | Tarbiat Modares University
2 | 2023 | Tsinghua University
2 | 2023 | University of Oslo
2 | 2022 | Deakin University
2 | 2022 | Stockholm University
2 | 2021 | Georgia Institute of Technology
Table 7. The 10 most productive countries.
Country | Number of Articles
USA | 31
England (UK) | 16
Germany | 16
Netherlands | 14
China | 12
Canada | 12
Spain | 11
Australia | 11
Italy | 9
Sweden | 8
Table 8. Top keywords based on centrality value.
Centrality | Freq. | Keyword | Year of Occurrence
0.99 | 93 | artificial intelligence | 2019
0.18 | 18 | machine learning | 2019
0.02 | 9 | social work | 2020
0.01 | 7 | human | 2020
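For illustration, the keyword frequency and co-occurrence counting that underlies Table 8 and Figure 2 can be sketched in a few lines of Python (the keyword lists below are hypothetical; the actual analysis was performed with CiteSpace):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one list per article (illustrative only).
article_keywords = [
    ["artificial intelligence", "machine learning", "social work"],
    ["artificial intelligence", "social good"],
    ["artificial intelligence", "machine learning", "human"],
    ["machine learning", "social work"],
]

# Keyword frequency: in how many articles each keyword appears.
freq = Counter(k for keywords in article_keywords for k in set(keywords))

# Co-occurrence: count each unordered keyword pair listed by the same article.
# These pair counts are the edge weights of a co-occurrence network.
cooccurrence = Counter(
    pair
    for keywords in article_keywords
    for pair in combinations(sorted(set(keywords)), 2)
)

print(freq["artificial intelligence"],
      cooccurrence[("artificial intelligence", "machine learning")])  # -> 3 2
```

In a full pipeline, the resulting pair counts would be loaded into a network tool to produce the kind of co-occurrence map shown in Figure 2.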
Table 9. Top 20 keywords with citation burst strength.
Keyword | Strength | Begin | End | 2015–2024
social good | 1.71 | 2019 | 2021 | ▂▂▂▂▃▃▃▂▂▂
human | 1.51 | 2020 | 2022 | ▂▂▂▂▂▃▃▃▂▂
natural language processing | 1.31 | 2020 | 2020 | ▂▂▂▂▂▃▂▂▂▂
article | 1.29 | 2020 | 2022 | ▂▂▂▂▂▃▃▃▂▂
Bayesian analysis | 1.13 | 2020 | 2020 | ▂▂▂▂▂▃▂▂▂▂
optimization | 1.13 | 2020 | 2020 | ▂▂▂▂▂▃▂▂▂▂
quality | 1.13 | 2020 | 2020 | ▂▂▂▂▂▃▂▂▂▂
predictive modeling | 1.13 | 2020 | 2020 | ▂▂▂▂▂▃▂▂▂▂
AI for good | 1.00 | 2020 | 2020 | ▂▂▂▂▂▃▂▂▂▂
AI governance | 2.03 | 2021 | 2021 | ▂▂▂▂▂▂▃▂▂▂
challenges | 1.04 | 2021 | 2022 | ▂▂▂▂▂▂▃▃▂▂
quantitative analysis | 1.01 | 2021 | 2021 | ▂▂▂▂▂▂▃▂▂▂
anticipatory governance | 1.01 | 2021 | 2021 | ▂▂▂▂▂▂▃▂▂▂
participatory technology assessments | 1.01 | 2021 | 2021 | ▂▂▂▂▂▂▃▂▂▂
neural networks | 1.09 | 2022 | 2024 | ▂▂▂▂▂▂▂▃▃▃
data cleaning | 1.08 | 2022 | 2022 | ▂▂▂▂▂▂▂▃▂▂
drawing | 1.08 | 2022 | 2022 | ▂▂▂▂▂▂▂▃▂▂
information management | 1.08 | 2022 | 2022 | ▂▂▂▂▂▂▂▃▂▂
open data | 1.08 | 2022 | 2022 | ▂▂▂▂▂▂▂▃▂▂
ethics | 1.16 | 2023 | 2024 | ▂▂▂▂▂▂▂▂▃▃
Table 10. Different types of AI for social good and representative examples from the literature (spanning generative AI/chatbots, machine/deep learning, NLP, computer vision, and predictive modeling).
Information/news generator: (Capraro et al. 2024; Cui et al. 2024; Victor et al. 2023)
Marketing and customer engagement: (Cui et al. 2024; Rajaram and Tinguely 2024; Hermann 2022; Hermann et al. 2023; Sjödin et al. 2021)
Disease diagnosis and digital health assistants: (Capasso and Umbrello 2022; Hoff 2023; Holzmeyer 2021)
Disability accessibility: (Mateos-Sanchez et al. 2022; Chemnad and Othman 2024)
Energy monitoring and measurement: (Cowls et al. 2021; Sit and Demir 2023; How et al. 2020a)
Intelligent mentors/assistants: (Alasadi and Baiz 2023; Ayanwale et al. 2022; Kazimzade et al. 2019; Schiff 2021)
Simulation-based education: (Ayanwale et al. 2022; Asakura et al. 2020)
AI-mediated communication: (Berberich et al. 2020; Bones et al. 2021; Goldenthal et al. 2021)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chan, C.; Nurrosyidah, A. Democratizing Artificial Intelligence for Social Good: A Bibliometric–Systematic Review Through a Social Science Lens. Soc. Sci. 2025, 14, 30. https://doi.org/10.3390/socsci14010030
