1. Background
Social good refers to actions, services, or interventions that aim to improve society’s overall well-being, particularly by addressing critical social issues, including poverty, healthcare access, education, and environmental sustainability. Social good encompasses both direct services provided to vulnerable populations and broader social interventions designed to address systemic inequalities (
Mor Barak 2020). In the current technological era, integrating advanced technologies into social good initiatives has the potential to amplify their impact, increase efficiency, and extend their reach to broader populations, thereby fostering greater societal benefits. Among the many technological options, artificial intelligence (AI) stands out as particularly promising because of its capacity to address complex social challenges, making it a pivotal tool for advancing social good (
Chui et al. 2018). AI refers to systems that perform tasks associated with human intelligence, such as reasoning, learning, and visual perception, using technologies like machine learning and neural networks to process data and make decisions autonomously (
Sheikh et al. 2023). As AI continues to evolve, its applications for social good are expanding across various fields, including healthcare, education, social work, and governance (
Floridi et al. 2020;
Følstad et al. 2021). These advancements offer significant opportunities to enhance social good initiatives and promote the well-being of marginalized communities.
However, the application of AI for social good has sparked debate and highlighted significant challenges. While AI may enhance the effectiveness of social services and interventions, concerns about equity, inclusivity, and ethical governance are equally critical. Observers note that while AI can help solve complex social problems, it can also exacerbate inequalities if not managed carefully. For instance, AI systems often rely on large datasets, which may introduce biases if not properly curated. Furthermore, the rapid advancement of AI raises questions about privacy, data security, and the digital divide. These concerns underscore the need for a more nuanced and well-structured approach to understanding and regulating AI in the social good domain (
Floridi et al. 2020).
Against this background, democratizing AI for social good has become a significant goal, which refers to the effort to make AI tools widely accessible and involve diverse voices in AI development and regulation. Democratizing AI can empower interdisciplinary experts, practitioners, and even non-technical people across various fields to harness AI’s potential for solving problems, innovating solutions, and addressing societal challenges. For example, social entrepreneurs can utilize user-friendly AI tools without needing deep technical expertise. This trend encourages interdisciplinary collaboration, bringing diverse perspectives into AI development and its applications.
Cupać et al. (
2024) have categorized the democratization of AI for social good into three main areas: AI developed or used for social good, AI deployed in various contexts, and the regulation of AI.
Seger et al. (
2023) identified four major components of AI democratization. The first is the democratization of AI use, which aims to make AI tools usable by as many people as possible. Second, for societies to benefit from AI, greater effort should be put into engaging local communities and diverse social groups in the design of AI systems. Third, the economic gains generated by AI should be distributed equitably. Lastly, the democratization of AI governance aims to ensure that decisions about AI use, development, and profits reflect the preferences and requirements of the people who will be affected. Thus, the current and potential roles of AI democratization deserve greater attention, given their significance for the practical application of AI. Existing discussions note that social good and democratization are inherently linked: efforts to democratize AI should be directed toward achieving social good, and social good initiatives should themselves be democratized to ensure broad accessibility, equitable benefits, and inclusive participation from all segments of society. This study adopted an inclusive operational definition of “democratizing AI for social good”: ensuring equitable access to, participation in, and benefits from AI technologies for social good.
Given the complexity and multifaceted nature of work on democratizing AI for social good, research efforts in this area are inherently interdisciplinary (
Floridi et al. 2020;
Følstad et al. 2021). While interdisciplinary research benefits from integrating various perspectives, discipline-based lenses remain crucial for providing depth, rigor, and clarity (
Leavy 2019). These lenses ground research in established knowledge and methodologies, identify specific gaps, and facilitate effective communication and collaboration among experts. Understanding AI from a social science perspective is particularly valuable as it offers insights into societal implications, ethical considerations, and human interactions with AI technologies.
Social sciences are inherently multidisciplinary yet coherent (
Barthel and Seidl 2017), integrating various disciplinary perspectives to address complex human and societal issues. Fields like psychology, sociology, economics, political science, and anthropology offer unique methodologies and theories, contributing to a comprehensive understanding of social phenomena. Despite diverse approaches, these disciplines share goals of exploring human interactions, social institutions, and cultural norms. Interdisciplinary collaboration and shared methods ensure coherence, enabling social sciences to generate nuanced insights and effective solutions to societal challenges.
Adopting a technology-informed social science lens to interpret AI-related studies means understanding the technology from an angle underpinned by social science theories, methodologies, and perspectives (
Z. Liu 2021;
Miller 2019). This approach bridges the gap between technological advancement and social science insight, ensuring that knowledge of social dynamics, ethical considerations, and human needs informs our understanding of AI. By doing so, researchers can generate comprehensive and actionable insights that advance academic knowledge and contribute to the development of AI systems that are socially beneficial and ethically sound.
As such, we determined that it was worthwhile conducting a review to identify potential future directions for research into democratizing AI for social good, focusing on how social science can play a contributive role in such democratization. The research questions were as follows:
Which articles, journals, countries, and authors have the most significant influence in the field?
What are the dominant modes of AI applications in this research domain?
What research trends in this domain have been explored in the past decade?
What are the potential future directions for research into democratizing AI for social good, and what are the implications for social science researchers?
2. Method: Bibliometric–Systematic Review
This study provides a comprehensive analysis of the opportunities for democratizing AI for social good using a bibliometric–systematic review method (
Brignardello-Petersen et al. 2024;
Marzi et al. 2024). It combines the quantitative analysis of bibliometric methods with the qualitative synthesis of systematic reviews. This approach helps identify patterns, trends, and gaps in the literature, advancing theoretical insights and mapping future research directions. Bibliometric analysis is suitable for this study because it quantitatively evaluates the academic literature, providing insights into research trends, influential works, and key contributors on AI for social good. By analyzing citation patterns, co-authorship networks, and keyword occurrences, bibliometric analysis helps identify the most significant research themes and emerging areas of interest (
Öztürk et al. 2024). In addition, a systematic thematic analysis can uncover the underlying themes, theoretical frameworks, and methodological approaches prevalent in the literature. This mixed-methods approach thoroughly examines the breadth and depth of existing research, ensuring that the analysis captures both the bibliometric profiles and the topics of research on AI for social good.
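To make the keyword co-occurrence step concrete, the following minimal Python sketch shows how pairwise co-occurrence counts can be computed from author keywords; the example records are invented for illustration, and tools like CiteSpace perform a far richer version of this analysis.

```python
from itertools import combinations
from collections import Counter

# Hypothetical author-keyword lists, one per article record.
records = [
    ["artificial intelligence", "ethics", "social good"],
    ["machine learning", "ethics", "governance"],
    ["artificial intelligence", "governance", "social good"],
]

# Count how often each pair of keywords appears in the same article.
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# The most frequent pairs hint at dominant research themes.
for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```

In a real analysis, these pair counts would form the weighted edges of the keyword co-occurrence network that clustering and thematic mapping operate on.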
2.1. The Bibliometric Approach: Selection, Analysis, and Visualization
The article selection process followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines recommended for systematic literature reviews (
Page et al. 2021).
Figure 1 illustrates the methodological process. The first stage involved identifying the search terms for the Scopus, PubMed, and Web of Science (WOS) databases. We selected terms encompassing the broad range of AI approaches, such as “chatbot”, “deep learning”, “machine learning”, “computer vision”, “natural language”, and “image recognition”. Thus, the search string used was:
(“artificial intelligence” OR “AI” OR “chatbot” OR “deep learning” OR “machine learning” OR “computer vision” OR “natural language” OR “image recognition”) AND (“democra*” OR “empower*”) AND (“social”)
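As a rough illustration of the Boolean logic of this search string (not the databases' actual query engines), the hypothetical `matches_query` helper below approximates it in Python:

```python
import re

# Terms from the first clause of the search string above.
AI_TERMS = [
    "artificial intelligence", "AI", "chatbot", "deep learning",
    "machine learning", "computer vision", "natural language",
    "image recognition",
]

def matches_query(text: str) -> bool:
    """True if the text contains an AI term AND a democra*/empower*
    stem AND the word 'social', mirroring the Boolean search string."""
    t = text.lower()
    has_ai = any(
        re.search(r"\b" + re.escape(term.lower()) + r"\b", t)
        for term in AI_TERMS
    )
    has_stem = re.search(r"\b(democra|empower)\w*", t) is not None
    return has_ai and has_stem and "social" in t

print(matches_query("Democratizing machine learning for social work"))
```

Database search engines apply this logic over indexed fields (titles, abstracts, keywords) with their own stemming rules, so this sketch only conveys the structure of the query.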
The search results from the PRISMA process are presented in
Table 1. The screening process was guided by inclusion and exclusion criteria. The research scope was limited to studies published within a specific period (2014–2024) and only included articles written in English. Certain document types, such as books, book chapters, conference proceedings, reports, and review articles, were excluded from the selection. The remaining publications underwent a thorough eligibility review. This involved carefully reading each paper’s title and abstract to ensure they were relevant to AI for social good.
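The screening criteria above can be sketched as a simple filter; the record fields and example entries below are hypothetical, and the real screening also involved manually reading titles and abstracts for relevance.

```python
# Document types excluded by the screening criteria described above.
EXCLUDED_TYPES = {"book", "book chapter", "conference paper", "report", "review"}

def passes_screening(record: dict) -> bool:
    """Apply the period, language, and document-type criteria."""
    return (
        2014 <= record["year"] <= 2024
        and record["language"] == "English"
        and record["doc_type"] not in EXCLUDED_TYPES
    )

# Hypothetical records illustrating each criterion.
records = [
    {"year": 2020, "language": "English", "doc_type": "article"},
    {"year": 2013, "language": "English", "doc_type": "article"},  # outside period
    {"year": 2021, "language": "English", "doc_type": "review"},   # excluded type
]

screened = [r for r in records if passes_screening(r)]
print(len(screened))  # 1
```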
Bibliographic data analysis was conducted using CiteSpace for data visualization. This bibliometric analysis provided a detailed overview by examining various aspects of the research, including authorship, geographic origins, keyword co-occurrences, clustering of related topics, and thematic maps. This approach allows for a deeper understanding of the research landscape.
We used bibliometric analysis to examine the eligible articles because it provides a robust, quantitative method for assessing the impact and relevance of academic publications. Examining metrics such as citation counts helps identify the most influential works within a field. In this study, only the most relevant of the 181 eligible articles were selected, based on their citations and practical significance. This selection process ensures that the subsequent systematic thematic analysis focuses on the most impactful research, which is crucial for uncovering key debates, identifying research gaps, and highlighting areas for future study. Leveraging bibliometric analysis in this way allows the study to distill critical insights and guide future research directions more effectively.
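The citation-based selection step amounts to ranking eligible articles and keeping the top entries; the sketch below uses invented titles and counts, and in the actual study practical significance was also weighed alongside citations.

```python
# Hypothetical eligible articles with their citation counts.
articles = [
    {"title": "A", "citations": 120},
    {"title": "B", "citations": 15},
    {"title": "C", "citations": 64},
]

# Keep the N most-cited articles as core texts for thematic analysis.
TOP_N = 2
core_texts = sorted(articles, key=lambda a: a["citations"], reverse=True)[:TOP_N]
print([a["title"] for a in core_texts])  # ['A', 'C']
```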
Despite its advantages, the bibliometric approach has limitations, such as dependency on search terms, citation biases, and inconsistencies in metadata (
Belter 2015;
Holden et al. 2005). Additionally, interdisciplinary research may be misclassified, and overlaps between databases can create duplicate records, complicating data analysis. These factors may limit the comprehensiveness and balance of the findings. Notwithstanding these known limitations, it is still useful to adopt a bibliometric approach because it provides a quantitative overview of research trends and identifies influential works, authors, and collaborations. This makes it a powerful tool for understanding and navigating complex research landscapes.
2.2. Systematic Thematic Analysis Through a Social Science Lens
Thematic analysis seeks to identify patterns within the narrative data (
Fereday and Muir-Cochrane 2006). This analysis is not merely deductive (applying pre-existing concepts) or inductive (conceptualizing raw data); rather, it employs a hermeneutical approach, where the analysis is informed by both the researcher’s initial concepts and the raw data (
Cole and Avison 2007;
Fereday and Muir-Cochrane 2006).
As noted at the beginning of this article, adopting a technology-informed social science lens prioritizes understanding AI in terms of human behavior, societal impacts, and ethical considerations. This approach helps identify gaps in the existing literature and facilitates effective interdisciplinary communication. The goal is to achieve a thorough analysis that social science researchers can easily understand.
In this study, the researcher refined, conceptualized, and theorized around these initial ideas as the interpretation progressed. The thematization process was cyclic and hermeneutic, with themes iteratively refined against the narrative data. This approach allows for the exploration of individual participant experiences while ensuring that key topics are addressed (
Gubrium et al. 2012).
4. Discussion
4.1. Trends in AI for Social Good
The current study contributes to the existing knowledge on AI democratization for social good by examining 66 articles extracted from Scopus, WoS, and PubMed. Bibliometric analysis was conducted to identify the most influential articles, journals, countries, and authors (RQ1). Notably, Science emerged as the most influential journal in the global context. In addition, Luciano Floridi, whose interests span AI ethics, digital ethics, and information ethics, was identified as a prominent author in the field of AI for social good. The analysis also revealed that the USA leads research on AI for social good: the country is a major player globally, and there has been a notable surge in academic and industrial research within the USA focusing on AI for social good. Furthermore, the analysis of authors’ keywords revealed several prominent modes of AI application, including information/news generation, marketing and customer engagement, disease diagnosis, clinical care and digital health assistants, disability accessibility, energy monitoring and measurement, intelligent mentoring/assisting, simulation-based education with different AI approaches, and mediated communication (RQ2).
The key themes in the democratization of AI for social good (RQ3) were identified through the examination of keyword co-occurrence in the bibliometric analysis and the systematic literature review. Judging by the timing and intensity of keyword emergence, ethics, generative AI, and technology are the research hotspots. The need for ethics arises from the urgent requirement to establish appropriate regulations and ethical guidelines to manage the spread of AI democratization, particularly to minimize risk (
Gianni et al. 2022;
Hermann 2022;
Ouchchy et al. 2020;
Rakowski and Kowaliková 2024;
Saeidnia 2023). Existing research also identifies generative AI, with its ability to create new and original content, as the most popular research trend in the field of democratizing AI for social good (
Rajaram and Tinguely 2024;
Robertson et al. 2024;
Victor et al. 2023).
4.2. Implications for Future Research in Social Sciences
Given the complexity and multifaceted nature of work on democratizing AI for social good, research efforts in this area must be inherently interdisciplinary. While integrating various perspectives is beneficial, discipline-specific lenses remain crucial for providing depth, rigor, and clarity. These lenses ensure that research is grounded in established knowledge and methodologies, help identify specific gaps, and facilitate effective communication and collaboration among experts. Adopting an interdisciplinary framework with a solid social science foundation is essential for enabling social science researchers and practitioners to move beyond the roles of informed commentators or critical readers. It empowers them to understand and address the implications of AI technologies more deeply. This approach ensures that their contributions are knowledge-based, leading to meaningful participation in research and development and fostering socially responsible and equitable innovations.
Inspired by the findings from bibliometric and thematic analyses of core texts, we have identified five key areas where social sciences can make significant contributions to the field of AI for social good (RQ4). This discussion outlines these areas and explores how social sciences can play a pivotal role in democratizing AI to ensure its benefits are equitably distributed across society.
4.2.1. Addressing Real-World Social Challenges
AI’s potential to tackle global challenges presents a significant research opportunity. With their nuanced understanding of various social challenges, social scientists can focus on how AI can be tailored to address specific societal issues rather than merely demonstrating novel AI applications. By integrating insights from social science, researchers can ensure that AI developments are aligned with pressing social needs, such as mental health, resource distribution, and environmental sustainability, thus ensuring that technological advances benefit society.
The research trend reflected by the bibliometric analysis (
Section 3.1.7) reveals that much of the influential research on AI for social good is concentrated on these critical areas, demonstrating how AI can be a powerful tool in addressing real-world social challenges. The thematic analysis (
Section 3.2.1) highlights that AI must prioritize inclusivity, ensuring that emerging technologies are accessible and tailored to meet the diverse needs of society. AI-driven technologies, such as language translation tools, are recognized for their potential to break down barriers and improve access to information for disadvantaged groups, such as marginalized communities. This is crucial in addressing societal challenges like health equity and environmental sustainability, aligning AI developments with pressing social needs (
Hermann 2022;
Prabhakar Rao and Siva Prasad 2021). Social science researchers are essential for understanding the societal impacts of AI and designing systems that bridge rather than widen societal gaps. Their contributions help ensure that AI serves as a tool for promoting fairness and justice (
Stypinska 2023). They also play a crucial role in identifying the affordances of AI and connecting them with social needs.
4.2.2. Shaping Ethical AI Development
As AI becomes increasingly pervasive, the development of robust ethical frameworks is paramount. Future research needs to prioritize the creation of ethical guidelines that prevent data misuse, ensure privacy, and promote transparency. Social science researchers are uniquely positioned to explore how these frameworks can be adapted to diverse cultural and regional contexts, making AI governance more globally relevant and equitable. This also includes investigating the impact of biased AI systems and identifying strategies for mitigating these biases, ensuring that AI systems do not perpetuate existing inequalities.
The research trend reflected by the bibliometric analysis shows that ethical considerations, particularly around AI governance and AI ethics, are already a major focus within the research community. Additionally, the thematic analysis discusses the importance of developing robust ethical frameworks for AI systems to mitigate risks of bias and unintended discrimination (
Section 3.2.2). These frameworks are essential to prevent the misuse of data and protect vulnerable populations from exploitation. This section stresses that ethical AI development should be governed by principles that promote fairness, privacy, and accountability, and that these frameworks must be adaptable to various cultural and regional contexts (
H. Liu et al. 2022;
Moon 2023). Additionally, researchers should ensure that these frameworks make AI governance inclusive and responsible, addressing issues of transparency and fairness (
Capraro et al. 2024;
Ramaul et al. 2024).
By incorporating perspectives on human behavior, societal structures, and ethics, social science researchers can help ensure that AI systems are designed to serve all segments of society. Their work in creating ethical frameworks is crucial for preventing the exploitation of vulnerable groups and ensuring that AI technologies are deployed responsibly. This involves not only addressing issues of bias and fairness but also promoting inclusivity and accountability in AI development.
4.2.3. Facilitating Public Participation in AI Governance
AI governance is a critical research area that requires a focus on promoting fairness, accountability, and transparency. Social science researchers can play a pivotal role by investigating governance models that incorporate public participation, ensuring that those most affected by AI systems have a voice in their development. This approach aligns AI technologies with public needs and enhances societal trust in AI systems.
The need for such inclusive governance is supported by bibliometric analysis, which points to the growing body of research on AI governance. Influential works in this area have focused on promoting public participation in governance, with a particular emphasis on ensuring that AI systems reflect the needs and values of society. The prominence of AI governance in the trend analysis (
Section 3.1.7) underscores the importance of developing governance frameworks that are participatory and transparent. The thematic analysis reveals the concerns about public participation in AI governance (
Section 3.2.3). Public consultations, citizen assemblies, and participatory design workshops are noted as critical tools for involving affected communities in decision-making processes. This participatory approach ensures that AI governance models are transparent and reflect societal needs, promoting fairness and accountability (
Cupać et al. 2024). Similarly, the concerns about the importance of including diverse voices in AI governance to build societal trust (
Section 3.2.4) are paramount. This ensures that AI governance is democratic and responsive to the needs of underrepresented groups, fostering greater transparency (
Hodgson et al. 2022).
The governance of AI systems will be a crucial area of focus for social science researchers. They can examine how different regions develop AI regulations and propose best practices promoting transparency, fairness, and accountability. This involves a comparative analysis of existing governance frameworks and identifying successful models that can be adapted and implemented in diverse contexts. Public participation in AI governance can take various forms, including public consultations, participatory design workshops, and citizen assemblies. Social scientists can design and evaluate these participatory methods to ensure they are effective and inclusive. By doing so, they can help create governance structures that are more democratic and responsive to the needs of all societal groups, particularly those who are often marginalized in technological decision-making. Furthermore, social scientists can contribute to the development of AI by engaging with stakeholders from various sectors, including government, industry, and civil society.
4.2.4. Developing AI Literacy, Accessibility, and Social Inclusivity
For AI to truly benefit society, it must be accessible and understood by a diverse demographic, including those from underserved populations. AI can democratize access to critical services such as healthcare and education, but this requires a concerted effort to develop AI literacy that empowers individuals to engage meaningfully with AI technologies.
Research indicates a growing focus on AI literacy and accessibility, particularly in the context of underserved populations. The bibliometric analysis reveals an increase in research contributions from the fields of social sciences and information sciences (
Section 3.1.2), which reflects the importance of developing inclusive AI literacy frameworks. These frameworks should be accessible to a broad audience, considering factors like educational background and cultural context. The thematic analysis also reveals the importance of the discussion about developing AI literacy, particularly for underserved populations (
Section 3.2.1). It stresses that AI literacy programs should not only focus on technical skills but also address the societal implications of AI, helping individuals understand how AI works and its potential impacts on society (
Noorman and Swierstra 2023;
Prabhakar Rao and Siva Prasad 2021). It also emphasizes the need to make AI literacy programs accessible to a broad audience, considering cultural and educational backgrounds. This ensures that AI benefits are distributed equitably, fostering a more informed and critically engaged public (
Tzouganatou 2022).
AI literacy should encompass critical thinking and skills to effectively counter the issue of fake news and misinformation (see the worries noted in the section on Information/News Generator). Additionally, understanding the limitations imposed by the hegemony behind so-called no-code AI is also crucial (see
Section 3.2.2). While no-code AI platforms democratize access to AI technology, they often obscure the complexities and biases inherent in their algorithms, which can perpetuate existing power structures and stifle genuine innovation. Therefore, it is imperative to cultivate a form of literacy that not only celebrates the user-friendliness of AI but also counters misinformation and critically assesses the broader implications of these technologies.
In terms of gender inclusiveness, a noteworthy observation is warranted. While male authors dominate the computing sciences, prominent female authors can be identified in the AI for social good domain. For example, Virginia Eubanks, through her work “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor”, sheds light on the intersection of technology and social justice (
Eubanks 2018). Similarly,
Ouchchy et al. (
2020) revealed how the ethical issues of AI are portrayed in the media, and the second author of that article is female. This does not imply that gender inclusiveness has been achieved, but it suggests that this domain has the potential for increased gender diversity, revealing a key area that warrants further attention.
Finally, it is reasonable to conclude that developing AI literacy is not merely a top-down initiative akin to working from a technical manual; instead, it requires a deep understanding of human cognition, behaviors, and learning theories. This involves designing educational materials and programs that are engaging and relevant to diverse audiences. AI literacy programs should emphasize critical thinking, ethical considerations, and social impact. Social scientists can investigate effective ways to teach AI literacy, considering factors such as cultural differences, educational backgrounds, and cognitive abilities.
4.2.5. Promoting Interdisciplinary Collaboration in AI Research
AI for social good is an inherently interdisciplinary field, requiring inputs from technologists, ethicists, policymakers, and social scientists. Future research should focus on fostering collaboration across these disciplines to ensure that AI systems are developed with a comprehensive understanding of their social impact.
The bibliometric analysis highlights the interdisciplinary nature of research on AI for social good, with contributions coming from diverse fields such as ethics, business, and social sciences, from diverse journals, and authors from diverse disciplines (
Section 3.1.1,
Section 3.1.2,
Section 3.1.3,
Section 3.1.4,
Section 3.1.5,
Section 3.1.6 and
Section 3.1.7). The thematic analysis also underscores the interdisciplinary nature of AI research for social good (
Section 3.2.5). It stresses the need for collaboration across disciplines; social scientists, technologists, and policymakers must work together to develop AI applications with a comprehensive understanding of their impacts (
Asakura et al. 2020;
Hermann 2022;
Kazimzade et al. 2019).
Social scientists can take the initiative to develop communication platforms that facilitate interdisciplinary dialogue and collaboration. These platforms can serve as hubs for democratizing AI for social good, bringing together technologists, policymakers, and ethicists to ensure that AI systems are designed and deployed in ways that are both technically sound and socially responsible. By promoting such interdisciplinary research, social scientists can help balance technological innovations with social responsibility.